2008
STEFANO CARRINO http://home.hefr.ch/carrinos/ PhD Student 2008-2011
Technologies Evaluation & State of the Art
This document details technologies for gesture interpretation and analysis and proposes some parameters for their classification.
Introduction
In the following sections we illustrate the state of the art in technologies for the acquisition of data for gesture recognition. We then introduce some parameters for the evaluation of these approaches, motivating the weight of each parameter according to our vision. In the last section we present the conclusions of this survey of the state of the art in the field.
Our vision, in brief¹
The AVATAR system will be composed of two elements:
- The Smart Portable Device (SPD).
- The Smart Environmental Device (SED).
The SPD has to provide gesture interpretation for all applications whose data acquisition is environment independent (i.e. causes and effects, inputs, computation and outputs are all handled inside the SPD itself).
The SED offers gesture recognition where the SPD does not perform well. In addition, it could offer a layer for connecting multiple SPDs, and the possibility of faster processing by contributing its computing power.
In this first step of our work we will focus on the SPD, while keeping future developments in mind.
Technologies Study
The choice of input technologies for gesture interpretation is very important in order to achieve good gesture recognition results. In recent years the evolution of technology and materials has pushed forward the feasibility and robustness of this kind of system; more complex algorithms are also now ready for this kind of application (increased processing speed, in mobile devices too, makes the real-time approach a reality).
State of the Art: Papers
What follows is a simple list of the articles we have read; each title and citation is followed by a short description.
Gesture Recognition by Computer Vision

Arm-pointing Gesture Interface Using Surrounded Stereo Cameras System
Yamamoto, Y.; Yoda, I.; Sakaue, K.; Arm-pointing gesture interface using surrounded stereo cameras system, Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, Volume 4, 23-26 Aug. 2004, Page(s): 965-970.
- 2004
- Surrounding Stereo Cameras (four stereo cameras in four corners of the ceiling)
- Arm pointing
- Setting: 12 frames/s
¹ For a detailed description see the document AVATAR - Scenarios.
- Recognition rate: 97.4% (standing posture)
- Recognition rate: 94% (sitting posture)
- The lighting environment had a slight influence
Improving Continuous Gesture Recognition with Spoken Prosody
Kettebekov, S.; Yeasin, M.; Sharma, R.; Improving continuous gesture recognition with spoken prosody, Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, Volume 1, 18-20 June 2003, Page(s): I-565 - I-570.
- 2003
- Cameras and microphone
- HMM - Bayesian Network
- Gesture and Speech Synchronization
- 72.4% of 1876 gestures were classified correctly
Pointing Gesture Recognition Based on 3D Tracking of Face, Hands and Head Orientation
Kai Nickel, Rainer Stiefelhagen, Pointing gesture recognition based on 3D-tracking of face, hands and head orientation, Proceedings of the 5th International Conference on Multimodal Interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada.
- 2003
- Stereo Camera (1)
- HMM
- 65% / 83% (without / with head orientation)
- 90% after user specific training
Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning
Rajko, S.; Gang Qian; Ingalls, T.; James, J.; Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning, Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, 17-22 June 2007, Page(s): 1-8.
- 2007
- (SNM) HMMs modified for reduced training requirement
- Viterbi inference
- Optical, pressure, mouse/pen
- Result: ???
Recognition of Arm Gestures Using Multiple Orientation Sensors: Gesture Classification
Lementec, J.-C.; Bajcsy, P.; Recognition of arm gestures using multiple orientation sensors: gesture classification, Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on, 3-6 Oct. 2004, Page(s): 965-970.
- 2004
- IS-300 Pro Precision Motion Tracker by InterSense
- Results
Vision-Based Interfaces for Mobility
Kolsch, M.; Turk, M.; Hollerer, T.; Vision-based interfaces for mobility, Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004. The First Annual International Conference on, 22-26 Aug. 2004, Page(s): 86-94.
- 2004
- Head-worn camera
- AdaBoost
- Detection (for hands larger than 30x20 pixels) runs at 10 frames per second on a 640x480 video stream on a 3 GHz desktop computer.
- Interesting references
- 93.76% postures were classified correctly
GestureVR: Vision-Based 3D Hand Interface for Spatial Interaction
Jakub Segen, Senthil Kumar, Gesture VR: vision-based 3D hand interface for spatial interaction, Proceedings of the Sixth ACM International Conference on Multimedia, p. 455-464, September 13-16, 1998, Bristol, United Kingdom.
- 1998
- Two cameras at 60 Hz, tracking in 3D space
- 3 gestures
- Finite-state classification
Gesture Recognition by Accelerometers

Accelerometer Based Gesture Recognition for Real Time Applications
- Input: Accelerometer Bluetooth
- HMM
- Gesture Recognized Correctly 96%
- Reaction Time: 300ms
Accelerometer Based Real-Time Gesture Recognition
Beedkar, K.; Shah, D.; Accelerometer Based Gesture Recognition for Real Time Applications, Real Time Systems, project description; MS CS, Georgia Institute of Technology.
- Input: Sony-Ericsson W910i (3 axial accel.)
- 97.4% and 96% accuracy on a personalized gesture set
- HMM & SVM (Support Vector Machine)
- HMM (quoting the paper: "My algorithm was based on a recent Nokia Research Center paper with some modifications. I have used the freely available JAHMM library for implementation.")
- Runtime was tested on a new generation MacBook computer with a dual core 2 GHz processor and 1 GB memory.
- Recognition time was independent from the number of teaching examples and averaged at 3.7ms for HMM and 0.4ms for SVM.
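As an illustration of the HMM-based scheme these accelerometer papers describe, here is a minimal sketch of per-gesture HMM classification: one Gaussian HMM is trained per gesture class, and an incoming accelerometer sequence is assigned to the class with the highest log-likelihood. This sketch is in Python with the hmmlearn library (the cited work used the Java JAHMM library); all names and parameter values are illustrative assumptions, not the papers' actual code.

# Sketch: per-class HMM gesture classifier (assumption: Python + hmmlearn,
# not the JAHMM/Java setup used in the cited paper).
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data, n_states=5):
    """training_data: dict mapping gesture name -> list of (T_i, 3) arrays
    of 3-axis accelerometer samples. Returns one GaussianHMM per gesture."""
    models = {}
    for name, sequences in training_data.items():
        X = np.vstack(sequences)               # stack all training sequences
        lengths = [len(s) for s in sequences]  # per-sequence lengths for fit()
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, sequence):
    """Return the gesture whose HMM assigns the highest log-likelihood
    to the observed (T, 3) accelerometer sequence."""
    return max(models, key=lambda name: models[name].score(sequence))

Recognition with this scheme is cheap at run time (one forward pass per model), which is consistent with the millisecond-scale recognition times reported above.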
Self-Defined Gesture Recognition on Keyless Handheld Devices Using MEMS 3D Accelerometer
Zhang, Shiqi; Yuan, Chun; Zhang, Yan; Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer, Natural Computation, 2008. ICNC '08. Fourth International Conference on, Volume 4, 18-20 Oct. 2008, Page(s): 237-241.
- 2008
- Input: Three-dimensional MEMS accelerometer and a Single Chip Microcontroller
- 94% Arabic number recognition
Gesture-recognition with Non-referenced Tracking
Keir, P.; Payne, J.; Elgoyhen, J.; Horner, M.; Naef, M.; Anderson, P.; Gesture-recognition with Non-referenced Tracking, 3D User Interfaces, 2006. 3DUI 2006. IEEE Symposium on, 25-29 March 2006, Page(s): 151-158.
- 2005-2006 (?)
- Accelerometer Bluetooth (MEMS) + gyroscopes
- 3motion™
- Particular algorithm for gesture recognition
- No numerical results
Real Time Gesture Recognition Using Continuous Time Recurrent Neural Networks
G. Bailador, D. Roggen, G. Tröster, and G. Triviño; Real time gesture recognition using Continuous Time Recurrent Neural Networks, in 2nd Int. Conf. on Body Area Networks (BodyNets), 2007.
- 2007
- Accelerometers
- Continuous Time Recurrent Neural Networks (CTRNN)
- Neuro-fuzzy system (in a previous project)
- Isolated gestures: 98% on the training set and 94% on the testing set
- Realistic environment: 80.5% and 63.6%
- The neuro-fuzzy system cannot cope with dynamic (realistic) situations
- G. Bailador, G. Trivino, and S. Guadarrama. Gesture recognition using a neuro-fuzzy predictor. In International Conference of Artificial Intelligence and Soft Computing. Acta press, 2006.
ADL Classification Using Triaxial Accelerometers and RFID
Im, Saemi; Kim, Ig-Jae; Ahn, Sang Chul; Kim, Hyoung-Gon; Automatic ADL classification using 3-axial accelerometers and RFID sensor, Multisensor Fusion and Integration for Intelligent Systems, 2008. MFI 2008. IEEE International Conference on, 20-22 Aug. 2008, Page(s): 697-702.
- 2008
- ADL = Activities of Daily Living
- Two wireless (homemade ZigBee) accelerometers for five body states
- Glove type RFID reader
- 90% over 12 ADLs
Technology
The input devices used in recent years are:
. Accelerometers
o Wireless
o Non wireless
. Camera (see http://en.wikipedia.org/wiki/Gesture_recognition):
o Depth-aware cameras. Using specialized cameras one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of what is being seen. These can be effective for the detection of hand gestures thanks to their short-range capabilities.
o Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. This method uses more traditional cameras, and thus does not suffer the same distance issues as current depth-aware cameras. To get the cameras' relations, one can use a positioning reference such as a lexian-stripe (?) or infrared emitters. (A minimal depth-from-disparity sketch follows this list.)
o Single camera. A normal camera can be used for gesture recognition where the resources/environment would not be suitable for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, using a single camera allows greater accessibility to a wider audience.
. Angle shape sensors (see http://www.5dt.com/ and the attached documentation):
o Exploiting the reflection of light inside optical fibres, a 3D model of the hand(s) can be rebuilt.
o Also available in a wireless version (Bluetooth); otherwise the present solutions (gloves) have to be connected with a cable (USB or serial).
. Infrared technology.
. Ultrasound / UWB (Ultra WideBand)
. RFID
. Gyroscopes (two angular-velocity sensors)
. Controller-based gestures. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand, as is the Wii Remote, which can study changes in acceleration over time to represent gestures.
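To make the stereo-camera principle concrete, here is a minimal depth-from-disparity sketch under the usual rectified pin-hole assumptions: a point seen at horizontal pixel positions x_left and x_right in the two images lies at depth Z = f * B / (x_left - x_right), where f is the focal length in pixels and B the baseline between the cameras. The function name and the numeric values are illustrative assumptions only.

# Sketch: depth from disparity for a rectified stereo pair (pin-hole model).
# Assumptions: calibrated, rectified cameras; focal length given in pixels.

def depth_from_disparity(x_left, x_right, focal_px=700.0, baseline_m=0.12):
    """Depth Z = f * B / d, where d = x_left - x_right is the disparity
    (in pixels) of the same physical point in the two images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Example: a fingertip seen at x=420 px (left) and x=380 px (right) with a
# 12 cm baseline lies about 2.1 m from the cameras.
print(depth_from_disparity(420, 380))  # -> 2.1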
Technology Evaluation
Evaluation Criteria
Below is a list of the parameters used to evaluate the technologies presented in the previous section.
. Resolution: in relative amounts, resolution describes the degree to which a change can be detected. It is expressed as a fraction of an amount to which you can easily relate. For example, printer manufacturers often describe resolution as dots per inch, which is easier to relate to than dots per page.
. Accuracy: accuracy describes the amount of uncertainty that exists in a measurement with respect to the relevant absolute standard. It can be defined in several different ways and is dependent on the specification philosophy of the supplier as well as product design. Most accuracy specifications include a gain and an offset parameter.
. Latency: the waiting time until the system first responds.
. Range of motion.
. User Comfort.
. Cost: in economic terms.
Technology Comparison
Parameters’ Weight
In this section we show how the weights in the comparison table below were chosen; they characterize “my personal choice”.
First) Cost: we are in a research context, so it is not so important to evaluate the cost of our system from a marketing standpoint. But I agree with the idea put forward by H. Ford: "True progress is made only when the advantages of a new technology are within reach of everyone." For this reason cost too appears as a parameter in the table: a concept with no possible future practical application is useless (gloves for hand modelling costing $5,000 or more are quite unlikely to become cheap in the future).
Second) User comfort: a technology completely invisible to the user would be ideal. From this perspective it is not easy to deal with the challenge of how to interface the user with the system. For example, implementing gesture recognition without any burden on the final user (gloves, cameras, sensors…) is not a dream; on the other hand, the output and the feedback still have to be presented to the user. From this viewpoint a head-mounted display (we are considering applications in the context of augmented reality) looks like the first natural solution. At that point, adding a camera to this device does not make the situation worse, and brings a huge advantage (and future possibilities): possible uncoupling from the environment (if enough computational power is provided to the user), since all the technology is on the user².
a. In any case, if we need it, we can establish a network with other systems to gain more information and enrich our system.
b. We are able to enter the domain of wearable/mobile systems. It is a challenge, but it makes our system more valuable and richer.
Third) Range of motion: this is a direct consequence of the previous point. With a wearable technology we can get rid of this problem: the range of motion is strictly related to the context and does not depend on our system. With other choices (e.g. cameras and sensors in the environment) the system works only in a specific environment and can lose generality.
Fourth) Latency: dealing with this problem at this stage is rather premature. Latency depends on the technology used and on the algorithms applied for gesture recognition and tracking, but potentially also on other parameters such as the distance between the input system, the processing system and the output/feedback system. (For example, if the information is carried by sound, the time of flight may not be negligible in a real-time system.)
² According to the perspective declared in the introduction about the SPD (Smart Portable Device).
Fifth) Accuracy & resolution: first of all the system has to be reliable, so these parameters are really meaningful in our application. We would like a tracking system able to correctly discern a small vocabulary of gestures and to enable realistic interactions with three-dimensional virtual objects in a three-dimensional mixed world.
Comparison
Analyzing the input approaches, we have noticed two things:
- Some of the devices presented here are direct evolutions of earlier ones;
- Nowadays some technologies are (at least in this domain) evidently inferior compared with others.
Based on the first point, we discard wired accelerometers from further analysis; they have no advantages compared with the equivalent wireless solution.
Based on the second, we can exclude RFID in favour of UWB.
In the previous section we listed gyroscopes as a possible technology; this is not completely correct, since in reality they are really applicable only when integrated with accelerometers or other sensors.
Technologies\Parameters         Resolution-Accuracy   Latency   Range of motion   User Comfort   Cost       RESULTS
Accelerometers - wireless       3                     4         5                 2              5          55
Camera - single camera          2                     4         5                 4              4          53
Camera - stereo cameras         3                     2         ?                 3 (?)          3          26+3*?
Camera - depth-aware cameras    4                     4 (?)     5                 3              3          60
Angle shape sensor (gloves)     4                     4         5                 2              1 (-100)   54
Infrared technology             4                     4         5                 4              4          63
Ultrasound                      2                     ?         ?                 ?              ?          10+X
Weight                          5                     4         3                 2              1
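The RESULTS column is the weighted sum of each row's scores, using the weights in the last row (5 for resolution-accuracy down to 1 for cost). A minimal sketch of that computation, with illustrative names:

# Sketch: the weighted-sum scoring behind the RESULTS column.
# Weights follow the table: resolution-accuracy=5, latency=4,
# range of motion=3, user comfort=2, cost=1.
WEIGHTS = [5, 4, 3, 2, 1]

def weighted_score(scores):
    """scores: [res_acc, latency, range_of_motion, comfort, cost],
    each on a 1-5 scale. Returns the weighted sum."""
    return sum(w * s for w, s in zip(WEIGHTS, scores))

print(weighted_score([3, 4, 5, 2, 5]))  # wireless accelerometers -> 55
print(weighted_score([4, 4, 5, 4, 4]))  # infrared technology -> 63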
From this table we have evaluated two approaches as most interesting:
- The infrared technology
- The depth-aware camera.
In reality these two technologies are not uncorrelated. Indeed, depth-aware cameras are often equipped with infrared emitters and receivers to calculate the position in space of the objects in the camera's field of view (see http://www.3dvsystems.com/ and the attached documentation).
Conclusions and Remarks
Choosing a technology on which to base our future work was not easy at all! Above all, the validity of a technology is strictly linked to its use: for example, the results obtained using a camera for gesture interpretation are strictly connected with the algorithms used to recognise the gestures. So it is impracticable to say THIS IS THE technology to use. Moreover, there are other factors (such as technological evolution) that we have to take into account.
Computer vision offers the user a less cumbersome interface, requiring only that they remain within the field of view of the camera or cameras; gesture and posture recognition is performed by deducing features and movement in real time from the captured images. However, computer vision typically requires good lighting conditions, and the occlusion issue makes this solution application dependent.
In general there are two principal ways to tackle gesture recognition:
- Computer Vision;
- Accelerometers (often coupled with gyroscopes or other sensors).
Each approach has advantages and disadvantages. In general, published results show gesture recognition rates above 80% (often above 90%) within a restricted vocabulary. The evolution of new technology keeps pushing these results toward higher levels.
Accelerometers, gloves and cameras… The scenarios we envision are in the context of augmented reality; a head-mounted display is therefore a natural assumption, and adding a lightweight camera to it will not drastically change the user's comfort.
Wireless technology provides fairly unobtrusive sensors, but their integration on the human body is still somewhat intrusive.
Gloves are another simple and (in my opinion) not overly intrusive device, but reliable mapping in 3D space nowadays comes at a non-negligible cost (see http://www.5dt.com/ and the attached documentation).
However, considering generalized scenarios and the widest variety of gesture types (body, arms, hands…), we do not discard the idea of bringing together several kinds of sensors.
Proposition
What we propose for the next step is to think about scientific problems such as user identification and multi-user management, context dependence (tracking), definition of a model/language of gesture, and gesture recognition (acquisition and analysis).
All this while fixing two goals for the future applications:
Usability.
That is:
. Robustness;
. Reliability.
That does not (at this moment) mean:
. Easy to wear (weight).
Augmented / virtual reality applicability:
. Mobility;
. 3D gesture recognition capability;
. Dynamic (and static?) gesture recognition.
As next steps I will define the following:
Work environment;
Definition of a framework for gesture modelling (???);
Acquisition technology selection;
Delve into state of the art for what concerns:
o Gesture vocabulary definition
o Action theory
o Framework for gesture modelling
The choice of the kind of gesture model will be made with the following step in mind: extending gesture interpretation to the environment. In this perspective we will also need a strategy for adding a tracking system to determine the user's position together with the head's position and orientation. This will be necessary if we want to be independent from visual markers or similar solutions.

Miscellaneous

Observation (from G. Bailador, D. Roggen, G. Tröster, and G. Triviño; Real time gesture recognition using Continuous Time Recurrent Neural Networks, 2nd Int. Conf. on Body Area Networks (BodyNets), 2007): Hidden Markov models, dynamic programming and neural networks have been investigated for gesture recognition, with hidden Markov models nowadays being one of the predominant approaches to classifying sporadic gestures (e.g. classification of intentional gestures). Expert fuzzy systems have also been investigated for gesture recognition, based on analyzing complex features of the signal such as the Doppler spectrum. The disadvantage of these methods is that classification is based on the separability of the features; therefore two different gestures with similar values for these features may be difficult to classify.
Some common features for gesture recognition by image analysis [6]:
. Image moments.
. Skin tone Blobs.
. Coloured Markers.
. Geometric Features.
. Multiscale shape characterization.
. Motion History Images and Motion Energy Images.
. Shape Signatures.
. Polygonal approximation-based Shape Descriptor.
. Shape descriptors based upon regions and graphs.
Gesture recognition or classification methods
From Hafiz Adnan Habib, Gesture Recognition Based Intelligent Algorithms for Virtual Keyboard Development, PhD thesis [16]. The following is a list of the gesture recognition or classification methods proposed in the literature so far (a minimal DTW sketch follows the list):
. Hidden Markov Model (HMM).
. Time Delay Neural Network (TDNN).
. Elman Network.
. Dynamic Time Warping (DTW).
. Dynamic Programming.
. Bayesian Classifier.
. Multi-layer Perceptrons.
. Genetic Algorithm.
. Fuzzy Inference Engine.
. Template Matching.
. Condensation Algorithm.
. Radial Basis Functions.
. Self-Organizing Map.
. Binary Associative Machines.
. Syntactic Pattern Recognition.
. Decision Tree.
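As one concrete instance from this list, here is a minimal sketch of Dynamic Time Warping (DTW) used as a nearest-template gesture classifier: an input sequence is compared against one stored template per gesture and the closest template wins. The function names and the toy data are illustrative assumptions, not code from the cited thesis.

# Sketch: DTW distance between two 1-D sequences, used for
# nearest-template gesture classification.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def classify(templates, sequence):
    """templates: dict mapping gesture name -> reference sequence."""
    return min(templates, key=lambda g: dtw_distance(templates[g], sequence))

# Toy example: a slightly time-warped query still matches "circle".
templates = {"circle": [0, 1, 2, 1, 0], "swipe": [0, 2, 4, 6, 8]}
print(classify(templates, [0, 1, 1, 2, 1, 0]))  # -> "circle"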
"Gorilla arm" "Gorilla arm"http://en.wikipedia.org/wiki/Touchscreen was a side-effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s. Designers of touch-menu systems failed to notice that humans aren't designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized -- the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale to human-factors designers; "Remember the gorilla arm!" is shorthand for "How is this going to fly in real use?" Gorilla arm is not a problem for specialist short-term-use uses, since they only involve brief interactions which do not last long enough to cause gorilla arm. References [1] Yamamoto, Y.; Yoda, I.; Sakaue, K.; Arm-pointing gesture interface using surrounded stereo cameras system, Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on Volume 4, 23-26 Aug. 2004 Page(s):965 - 970 Vol.4 [2] Kettebekov, S.; Yeasin, M.; Sharma, R.; Improving continuous gesture recognition with spoken prosody, Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on Volume 1, 18-20 June 2003 Page(s):I-565 - I-570 vol.1 [3] Kai Nickel , Rainer Stiefelhagen, Pointing gesture recognition based on 3D-tracking of face, hands and head orientation , Proceedings of the 5th international conference on Multimodal interfaces, November 05- 07, 2003, Vancouver, British Columbia, Canada
13
2008 [4] Rajko, S.; Gang Qian; Ingalls, T.; James, J.; Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning, Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on 17-22 June 2007 Page(s):1 - 8 [5] Lementec, J.-C.; Bajcsy, P.; Recognition of arm gestures using multiple orientation sensors: gesture classification, Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on 3-6 Oct. 2004 Page(s):965 - 970 [6] Kolsch, M.; Turk, M.; Hollerer, T.; Vision-based interfaces for mobility, Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004. The First Annual International Conference on 22- 26 Aug. 2004 Page(s):86 - 94 [7] Jakub Segen , Senthil Kumar, Gesture VR: vision-based 3D hand interface for spatial interaction, Proceedings of the sixth ACM international conference on Multimedia, p.455-464, September 13-16, 1998, Bristol, United Kingdom [8] Beedkar ,K.; Shah, D.; Accelerometer Based Gesture Recognition for Real Time Applications, Real Time Systems, Project description; MS CS Georgia Institute of Technology [9] Zoltán Prekopcsák, Péter Halácsy, and Csaba Gáspár-Papanek; Design and development of an everyday hand gesture interface in MobileHCI '08: Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. Amsterdam, the Netherlands, September 2008. [10] Zoltán Prekopcsák (2008) Accelerometer Based Real-Time Gesture Recognition in POSTER 2008: Proceedings of the 12th International Student Conference on Electrical Engineering. Prague, Czech Republic, May 2008. [11] Zhang, Shiqi; Yuan, Chun; Zhang, Yan; Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer, Natural Computation, 2008. ICNC '08. Fourth International Conference on Volume 4, 18-20 Oct. 2008 Page(s):237 - 241 [12] Keir, P.; Payne, J.; Elgoyhen, J.; Horner, M.; Naef, M.; Anderson, P.; Gesture-recognition with Non- referenced Tracking, 3D User Interfaces, 2006. 3DUI 2006. IEEE Symposium on 25-29 March 2006 Page(s):151 - 158 [13] G. Bailador, D. Roggen, G. Tröster, and G. Triviño. Real time gesture recognition using Continuous Time Recurrent Neural Networks. In 2nd Int. Conf. on Body Area Networks (BodyNets), 2007. [14] Im, Saemi; Kim, Ig-Jae; Ahn, Sang Chul; Kim, Hyoung-Gon; Automatic ADL classification using 3-axial accelerometers and RFID sensor; Multisensor Fusion and Integration for Intelligent Systems, 2008. MFI 2008. IEEE International Conference on 20-22 Aug. 2008 Page(s):697 - 702 [15] S. Mitra, T. Acharya; Gesture Recognition- A Survey, Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 2007 [16] Hafiz Adnan Habib. Gesture Recognition Based intelligent Algorithms for Virtual keyboard Development. A thesis submitted in partial fulfilment for the degree of Doctor of Philosophy. [17] http://en.wikipedia.org/wiki/Gesture_recognition [18] http://www.5dt.com/ see the attached documentation. [19] http://www.3dvsystems.com/ see the attached documentation. [20] http://en.wikipedia.org/wiki/Touchscreen
14
Attached: 5DT Data Glove 5 Ultra
Product Description
The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern motion capture and animation professionals. It offers comfort, ease of use, a small form factor and multiple application drivers. The high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation. The 5DT Data Glove 5 Ultra measures finger flexure (1 sensor per finger) of the user's hand. The system interfaces with the computer via a USB cable. A serial port option (RS 232, platform independent) is available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth technology (up to 20 m distance) for high-speed connectivity for up to 8 hours on a single battery. Right- and left-handed models are available. One size fits many (stretch lycra).
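The glove exposes raw 8-bit flexure readings that the driver auto-calibrates into scaled values. As a rough illustration of how such auto-calibration can work (tracking each sensor's running minimum and maximum and normalizing into [0, 1]), here is a hypothetical sketch; it is not the actual 5DT SDK code.

# Sketch: auto-calibration of raw 8-bit flexure readings to [0, 1].
# Hypothetical stand-in for the "scaled (auto-calibrated) sensor values"
# the 5DT driver exposes; not the vendor's implementation.
class FlexureCalibrator:
    def __init__(self, n_sensors=5):
        self.lo = [255] * n_sensors  # running minimum per sensor
        self.hi = [0] * n_sensors    # running maximum per sensor

    def scale(self, raw):
        """raw: list of n_sensors values in 0..255. Returns values in [0, 1],
        widening each sensor's min/max window as new extremes are seen."""
        out = []
        for i, v in enumerate(raw):
            self.lo[i] = min(self.lo[i], v)
            self.hi[i] = max(self.hi[i], v)
            span = self.hi[i] - self.lo[i]
            out.append((v - self.lo[i]) / span if span else 0.0)
        return out

cal = FlexureCalibrator()
print(cal.scale([10, 40, 200, 128, 60]))  # first sample: no span yet, all 0.0
print(cal.scale([200, 60, 90, 100, 80]))  # later samples fall inside [0, 1]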
Features
. Advanced sensor technology
. Wide application support
. Affordable quality
. Extreme comfort
. One size fits many
. Automatic calibration - minimum 8-bit flexure resolution
. Platform independent - USB or serial interface (RS 232)
. Cross-platform SDK
. Bundled software
. High update rate
. On-board processor
. Low crosstalk between fingers
. Wireless version available (5DT Ultra Wireless Kit)
. Quick "hot release" connection
Related Products
. 5DT Data Glove 14 Ultra
. 5DT Data Glove 5 MRI (for Magnetic Resonance Imaging applications)
. 5DT Data Glove 16 MRI (for Magnetic Resonance Imaging applications)
. 5DT Wireless Kit Ultra
. 5DT Serial Interface Kit
Data Sheets
Data sheets must be viewed with a PDF viewer. If you do not have a PDF viewer, you can download Adobe Acrobat Reader from Adobe's site at http://www.adobe.com/products/acrobat/readstep.html.
5DT Data Glove Series Data Sheet: 5DTDataGloveUltraDatasheet.pdf (124 KB)
Manuals
Manuals must be viewed with a PDF viewer. If you do not have a PDF viewer, you can download Adobe Acrobat Reader from Adobe's site at http://www.adobe.com/products/acrobat/readstep.html.
5DT Data Glove 5 Manual: 5DT Data Glove Ultra - Manual.pdf (2,168 KB)
Glove SDK
Windows and Linux SDK (free): The current version of the Windows SDK is 2.0; the Linux version is 1.04a. The driver works for all versions of the 5DT Data Glove Series. Please refer to the driver manual for instructions on how to install and use it. Windows users will need a program that can open ZIP files, such as WinZip (www.winzip.com). For Linux, use the "unzip" command.
Windows 95/98/NT/2000 SDK: GloveSDK_2.0.zip (212 KB)
Linux SDK: 5DTDataGloveDriver1_04a.zip (89.0 KB)
The following files contain all the SDKs, manuals, glove software and data sheets for the 5DT Data Glove Series:
Windows 95/98/NT/2000: GloveSetup_Win2.2.exe (13.4 MB)
Linux: 5DTDataGloveSeriesLinux1_02.zip (1.21 MB)
Unix driver: The 5DT Data Glove Ultra Driver for Unix provides access to the 5DT range of data gloves at an intermediate level. The driver functionality includes multiple instances, easy initialization and shutdown, basic (raw) sensor values, scaled (auto-calibrated) sensor values, calibration functions, basic gesture recognition and a cross-platform Application Programming Interface (API). The driver utilizes POSIX threads. Pricing for this driver is shown below. Go to our Downloads page for more drivers, data sheets, software and manuals.

Pricing
PRODUCT NAME - PRODUCT DESCRIPTION
5DT Glove 5 Ultra Right-handed - 5-sensor data glove, right-handed
5DT Glove 5 Ultra Left-handed - 5-sensor data glove, left-handed
Accessories
5DT Ultra Wireless Kit - kit allows for 2 gloves in one compact package
5DT Data Glove Serial Kit - serial interface kit
Drivers & Software
Alias | Kaydara MOCAP Driver
3D Studio Max 6.0 Driver
Maya Driver
SoftImage XSI Driver
UNIX SDK (* please note: serial only, no USB drivers)
ZCam™
3D video cameras by 3DV
Since it was established, 3DV Systems has developed 4 generations of depth cameras. Its primary focus in developing new products throughout the years has been to reduce their cost and size, so that its unique state-of-the-art technology will be affordable and meet the needs of consumers as well as those of multiple industries. In recent years 3DV has been developing DeepC™, a chipset that embodies the company's core depth-sensing technology. This chipset can be fitted to work in any camera for any application, so that partners (e.g. OEMs) can use their own know-how, market reach and supply chain in the design and manufacturing of the overall camera. The chipset will be available for sale soon.
The new ZCam™ (previously Z-Sense), 3DV's most recently completed prototype camera, is based on DeepC™ and is the company's smallest and most cost-effective 3D camera. At the size of a standard webcam and at an affordable cost, it provides very accurate depth information at high speed (60 frames per second) and high depth resolution (1-2 cm). At the same time, it provides synchronized and synthesized quality colour (RGB) video (at 1.3 megapixels). With these specifications, the new ZCam™ is ideal for PC-based gaming and for background replacement in web-conferencing. Game developers, web-conferencing service providers and gaming enthusiasts interested in the new ZCam™ are invited to contact us. As previously mentioned, the new ZCam™ and DeepC™ are the latest achievements backed by a tradition of providing high-quality depth-sensing products. Z-Cam™, the first depth video camera, was released in 2000 and was targeted primarily at broadcasting organizations. Z-Mini™ and DMC-100™ followed, each representing another leap forward in reducing cost and size.