The smallest gesture can hide a world of meaning.
A particular flick of a baton and a beseeching gesture can transform the key moment of a concert from mundane to ethereal. Alas, computers are seriously handicapped in understanding human gestural language, in both software and hardware. In particular, finding a method for describing gestures presented to a computer as input data for further processing has proven a difficult problem. In response, Microchip Technology has developed the world’s first 3D gesture recognition chip that senses a gesture without contact, through its effect on electric fields.
Why are normal human gestures so difficult to translate into a form suitable for computers? The meaning of a gesture is not a simple hand position or path of motion, but a gestalt – an approximate summary of the entire gesture. Deciphering the intended meaning of a gesture from data acquired by tracking the movement of every portion of each finger and joint of the hand and wrist is a task that nearly defies description. This is one of the main reasons that, despite at least 30 years of effort, artistry has consistently eluded any computer-based orchestra controlled by a human conductor. Despite this, gesture control remains an active area of development because of the enormous market that awaits a practical system.
Microchip Technology has recently unveiled its GestIC technology as implemented in the soon-to-be-available MGC3130 chip, an outgrowth of an earlier technology. When used as a 3D digitizer, the MGC3130 resolves position within a 15 cm (6 in) cube at a remarkable resolution of 150 dpi. (Yes, that’s vertical resolution as well as in the plane, meaning that roughly a billion voxels (3D pixels) can be distinguished within the scanning volume.) The sampling rate is 200 measurements per second, allowing the GestIC technology to follow quick adjustments of hand and finger positions, velocities, and accelerations.
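The quoted figures are easy to sanity-check with back-of-envelope arithmetic. A short sketch, using only the numbers stated above (15 cm ≈ 6 in cube, 150 dpi on each axis, 200 samples per second):

```python
# Back-of-envelope check of the MGC3130 figures quoted above.
CUBE_SIDE_IN = 6        # 15 cm sensing cube is roughly 6 inches on a side
RESOLUTION_DPI = 150    # resolvable points per inch, on every axis

points_per_axis = CUBE_SIDE_IN * RESOLUTION_DPI   # 900 points along each axis
voxels = points_per_axis ** 3                     # 729,000,000 -- roughly a billion

SAMPLE_RATE_HZ = 200
sample_interval_ms = 1000 / SAMPLE_RATE_HZ        # one position fix every 5 ms

print(f"{voxels:,} voxels, one sample every {sample_interval_ms:.0f} ms")
```

So "roughly a billion voxels" follows directly from 900³ ≈ 7.3 × 10⁸, and the 200 Hz rate means a new position fix every 5 milliseconds, fast enough to track rapid finger motion.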
The MGC3130 enables a new approach to the problem of human-machine interfacing (HMI), recognizing gestures by measuring the changes in an electric field as the gesture is made. When gestures are sensed via their effect on electric fields, the step of precisely measuring hundreds of positions for each millisecond of a gesture and converting that data into a concise description of a gesture is no longer needed. Instead, a vastly simpler procedure can be adopted. The output of an electric field-based gesture sensor is itself something of a gestalt of a gesture, which has the potential to greatly simplify the interpretation of gestures.
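The "vastly simpler" claim is essentially about raw data volume. A rough illustration, using hypothetical figures (the joint count, tracker rate, and electrode count below are assumptions for the sake of comparison, not specifications from the article):

```python
# Illustrative data-rate comparison: full skeletal hand tracking versus
# a few-channel electric-field sensor. All figures below are assumed
# for illustration, except the chip's stated 200 Hz sample rate.

# Skeletal tracking: assume 22 hand joints x 3 coordinates at 1000 Hz.
tracking_values_per_sec = 22 * 3 * 1000    # 66,000 raw values per second

# Field sensing: assume 5 receive electrodes sampled at 200 Hz.
field_values_per_sec = 5 * 200             # 1,000 raw values per second

ratio = tracking_values_per_sec / field_values_per_sec
print(f"Field sensing yields ~{ratio:.0f}x fewer raw values to interpret")
```

Under these assumptions the field sensor produces nearly two orders of magnitude less raw data, and each channel already summarizes the whole hand's influence on the field, which is what makes its output gestalt-like rather than a pile of joint coordinates.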
via Gizmag – Brian Dodson