Mask-bot takes a new approach to giving robots a human face

A 3D image of a human face projected onto the back of a plastic mask
While great strides have been made in the development of humanoid robots, such as Honda's ASIMO, giving robots a human face with natural expressions and movement has proven a difficult task. Some researchers aim to create lifelike faces with motors under artificial skin that replicate the function of facial muscles. German and Japanese researchers have joined forces on a different solution, called Mask-bot, which projects a 3D image of a human face onto the back of a plastic mask.

The Mask-bot displays realistic three-dimensional heads using a projector positioned behind a transparent plastic mask. The projector beams a human face onto the back of the mask to create realistic features that can not only be seen from various angles, including the side, but can also be changed on demand.

Dr. Takaaki Kuratate compares the Mask-bot approach to the one used to project faces onto sculptures in Disneyland's Haunted Mansion ride. However, those images are projected from the front, whereas Mask-bot uses rear-projection. As a result, there is only a 12 cm (4.7 in) gap between the face mask and the projection optics: a high-compression, 0.25x fish-eye lens fitted with a macro adapter.

To ensure the projected image is also bright enough to be viewed in daylight, the team used a small but powerful projector and coated the inside of the plastic mask with luminous paint.

In developing Mask-bot, the team also faced the challenge of projecting a moving image onto the mask, rather than just a static photo, without requiring video footage of the person speaking. To achieve this, they use a program that converts a normal two-dimensional photo into a correctly proportioned projection for the three-dimensional mask. Additional algorithms then supply the facial expressions and voice.
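The article doesn't describe how this conversion program works internally, but the general idea, pre-distorting a flat photo so that it appears correctly proportioned once projected through a fisheye lens onto a known curved surface, can be sketched. The Python below is a hypothetical illustration only: the cylindrical mask model, the equidistant fisheye assumption, the image dimensions, and the file names are all assumptions, with the 12 cm lens-to-mask gap the only figure taken from the article.

```python
import numpy as np
from PIL import Image

# Hypothetical geometry -- the real Mask-bot parameters aren't published here.
MASK_RADIUS = 0.09             # cylinder radius of the mask surface, metres
PROJ_DIST   = 0.12             # 12 cm lens-to-mask gap cited in the article
FOV         = np.radians(120)  # assumed field of view of the 0.25x fish-eye lens

def prewarp(photo: np.ndarray, out_w: int = 320, out_h: int = 240) -> np.ndarray:
    """Pre-distort a frontal face photo so that, once projected from behind
    onto a curved mask, it looks correctly proportioned from the front.
    Uses a simple cylindrical mask model (an assumption)."""
    src_h, src_w = photo.shape[:2]
    out = np.zeros((out_h, out_w, 3), dtype=photo.dtype)
    for v in range(out_h):
        for u in range(out_w):
            # Projector pixel -> horizontal ray angle (equidistant fisheye model)
            theta = (u / (out_w - 1) - 0.5) * FOV
            # Where that ray lands on the cylinder, as seen head-on from the front
            x_front = MASK_RADIUS * np.sin(theta)
            # The surface curves away from the lens, so the ray travels farther
            # and the image stretches vertically toward the edges of the mask
            depth = PROJ_DIST + MASK_RADIUS * (1 - np.cos(theta))
            y_front = (v / (out_h - 1) - 0.5) * depth / PROJ_DIST
            # Sample the source photo at the matching frontal-view coordinate
            sx = int((x_front / MASK_RADIUS * 0.5 + 0.5) * (src_w - 1))
            sy = int((y_front + 0.5) * (src_h - 1))
            if 0 <= sx < src_w and 0 <= sy < src_h:
                out[v, u] = photo[sy, sx]
    return out

if __name__ == "__main__":
    face = np.asarray(Image.open("face.jpg").convert("RGB"))  # any frontal photo
    Image.fromarray(prewarp(face)).save("prewarped.png")
```

Working backwards from projector pixels to source pixels, as here, avoids holes in the output; the real system would also need per-mask calibration rather than an idealised cylinder.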

The talking head animation engine developed by Takaaki Kuratate to replicate facial expressions filters an extensive set of face motion data, collected from people using a motion capture system, and selects the facial expressions that best match the specific sound, or phoneme, as it is being spoken. The computer extracts a set of facial coordinates from each of the selected expressions, which it can then assign to any new face. Emotion synthesis software is responsible for delivering the visible emotional nuances that indicate when someone is happy or sad, for example.
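The select-and-retarget pipeline described here is easy to sketch. In the hypothetical Python below, the database format, the phoneme labels, the 68-marker layout, and the retargeting-by-neutral-offset scheme are all assumptions standing in for whatever Kuratate's engine actually uses; random arrays take the place of real motion-capture frames.

```python
import numpy as np

# Stand-ins for real motion-capture data: each entry pairs a phoneme label
# with an (N, 3) array of facial marker coordinates (68 markers assumed).
CAPTURE_DB = [
    ("M",  np.random.rand(68, 3)),
    ("AA", np.random.rand(68, 3)),
    ("IY", np.random.rand(68, 3)),
]
NEUTRAL_CAPTURE = np.random.rand(68, 3)  # neutral pose of the captured subject
NEUTRAL_TARGET  = np.random.rand(68, 3)  # neutral pose of the new face

def best_expression(phoneme: str) -> np.ndarray:
    """Select the captured expression that best matches the phoneme being
    spoken; here 'best' is simply an exact label match, with a neutral
    fallback for sounds missing from the database."""
    for label, coords in CAPTURE_DB:
        if label == phoneme:
            return coords
    return NEUTRAL_CAPTURE

def retarget(coords: np.ndarray) -> np.ndarray:
    """Assign the expression to a new face by transferring the offset from
    the captured subject's neutral pose onto the target's neutral pose."""
    return NEUTRAL_TARGET + (coords - NEUTRAL_CAPTURE)

# Drive the projected face with one frame per phoneme, e.g. for "mommy"
for phoneme in ["M", "AA", "M", "IY"]:
    frame = retarget(best_expression(phoneme))  # (68, 3) coordinates to render
```

Offset-based retargeting keeps the sketch simple; a production engine would also interpolate between frames so the mouth moves smoothly through the phoneme stream.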
