Scientists have developed a robotic head that matches the expressions of nearby humans in real time, thanks to its pliable blue face.
Called Eva, the autonomous bot uses deep learning, a form of artificial intelligence (AI), to ‘read’ and then mirror the expressions on human faces, via a camera.
Eva can express the six basic emotions – anger, disgust, fear, joy, sadness and surprise – as well as an ‘array of more nuanced’ reactions.
Artificial ‘muscles’ – cables and motors to be precise – pull on specific points on Eva’s face, replicating muscles beneath our skin.
The scientists, at Columbia University in New York, say human-like facial expressions on the faces of robots could build trust between humans and their robotic co-workers and care-givers.
Most robots today are developed to replicate human capabilities such as grasping, lifting and moving from one place to another.
One detail that therefore tends to be lacking is a human-like facial expression. The researchers point out that robots tend to ‘sport the blank and static visage of a professional poker player’.
Eva’s bright blue face was inspired by the Blue Man Group – an American performance art company featuring three mute, blue-faced performers.
‘The idea for EVA took shape a few years ago, when my students and I began to notice that the robots in our lab were staring back at us through plastic, googly eyes,’ said Hod Lipson, director of Columbia University’s Creative Machines Lab.
Lipson observed a similar trend in a grocery store, where he encountered restocking robots wearing name badges, and in one case, a bot decked out in a hand-knitted cap.
‘People seemed to be humanising their robotic colleagues by giving them eyes, an identity or a name,’ he said.
‘This made us wonder, if eyes and clothing work, why not make a robot that has a super-expressive and responsive human face?’
The first phase of the project began in Lipson’s lab several years ago when the team constructed Eva’s disembodied bust, with different muscle points controlled by a computer.
The team used 3D printing to manufacture parts with complex shapes that integrated seamlessly with Eva’s skull.
Researchers then used a multi-stage training process to enable Eva to read and replicate the faces of nearby humans in real time.
Firstly, they had to teach Eva what her own face looked like. To do this, the team filmed hours of footage of her making a series of faces at random.
Then, like a human watching herself on Zoom, Eva’s internal neural networks learned to pair the different faces in the video footage with the muscle movements required to make them.
In other words, Eva became able to see herself making a particular face (such as a happy face) and learn how to replicate it.
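The self-modelling stage above can be sketched in miniature. The toy code below is purely illustrative and is not the team's implementation: it stands in for Eva's neural network with a simple nearest-neighbour lookup, and the `forward_model` 'physics', motor count and all names are invented for the example. The idea it demonstrates is the same, though – pair randomly generated motor commands with the faces they produce, then invert that dataset to recover the commands for a target face.

```python
import random

# Toy stand-in for Eva's 'babbling' phase: issue random motor commands,
# observe the resulting face, and store (face, motors) pairs.
# A nearest-neighbour table plays the role of the real neural network.

random.seed(0)
N_MOTORS = 4  # hypothetical number of face actuators


def forward_model(motors):
    """Pretend physics: map motor positions to a 'face landmark' vector."""
    return [2.0 * m - 0.5 for m in motors]


# Phase 1: self-observation - random faces paired with the motors that made them.
dataset = []
for _ in range(500):
    motors = [random.random() for _ in range(N_MOTORS)]
    dataset.append((forward_model(motors), motors))


def inverse_model(target_face):
    """Recover motor commands for a target face from the stored pairs."""
    closest = min(
        dataset,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], target_face)),
    )
    return closest[1]


# Eva 'sees' a face and recovers motor commands that approximately reproduce it.
goal = forward_model([0.2, 0.8, 0.5, 0.1])
cmd = inverse_model(goal)
```

With enough stored pairs, the recovered commands reproduce the target face closely – the same self-supervised logic the real system learns with a neural network from hours of video.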
The final stage employed a second neural network: the face in the pipeline was swapped from Eva’s own to that of a human captured on camera, so the muscle movements she had learned could be driven by someone else’s expression.
After several refinements and iterations, Eva acquired the ability to read human face gestures from a camera, and to respond by mirroring that human’s facial expression.
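The mirroring loop that results can be sketched as follows. Again this is a hypothetical outline, not the published system: the expression labels, motor presets and function names are all invented, and the `perceive` step is a placeholder for the camera-side network, since real expression recognition is beyond a short example.

```python
# One tick of a hypothetical mirroring loop: classify the human's
# expression from a camera frame, then look up actuator targets.
# The lookup table stands in for the second neural network.

EXPRESSION_TO_MOTORS = {  # illustrative presets, not real calibration data
    "joy":      [0.9, 0.1, 0.8, 0.2],
    "sadness":  [0.1, 0.9, 0.2, 0.3],
    "surprise": [0.5, 0.5, 1.0, 0.9],
    "neutral":  [0.5, 0.5, 0.5, 0.5],
}


def perceive(frame):
    """Placeholder for expression recognition: here the 'frame' already
    carries its label, since real perception is out of scope."""
    return frame.get("expression", "neutral")


def mirror(frame):
    """Read the human's face in this frame and return motor commands."""
    return EXPRESSION_TO_MOTORS[perceive(frame)]


commands = mirror({"expression": "joy"})
```

In the real system both steps are learned networks running frame by frame, which is what lets Eva respond in real time rather than from a fixed menu of faces.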
After weeks of tugging cables to make Eva smile, frown or look upset, the team noticed that her blue, disembodied face could elicit emotional responses from their lab mates.
The human makes the different facial expressions at a camera, which relays the image in real time to a small screen directed at the robot’s face.
‘I was minding my own business one day when Eva suddenly gave me a big, friendly smile,’ Lipson said. ‘I knew it was purely mechanical, but I found myself reflexively smiling back.’
While Eva is still a laboratory experiment, such technologies could one day have beneficial, real-world applications.
For example, robots capable of responding to a wide variety of human body language would be useful in workplaces, hospitals, schools and homes.
‘There is a limit to how much we humans can engage emotionally with cloud-based chatbots or disembodied smart-home speakers,’ said Lipson.
‘Our brains seem to respond well to robots that have some kind of recognisable physical presence.’