Scientists develop a robot that matches the expressions of humans 

    Scientists have developed a robotic head that matches the expressions of nearby humans in real time, thanks to its pliable blue face. 

    Called Eva, the autonomous bot uses deep learning, a form of artificial intelligence (AI), to ‘read’ the expressions on nearby human faces via a camera and then mirror them.

    Eva can express the six basic emotions – anger, disgust, fear, joy, sadness and surprise – as well as an ‘array of more nuanced’ reactions.

    Artificial ‘muscles’ – cables and motors, to be precise – pull on specific points on Eva’s face, replicating the muscles beneath our skin.

    The scientists, at Columbia University in New York, say human-like facial expressions on the faces of robots could build trust between humans and their robotic co-workers and care-givers.  

    HOW DOES EVA WORK? 

    The human makes facial expressions at a camera, which relays the image in real time to a small screen directed at the robot’s face.

    Eva then draws on a library of facial emotions to replicate the particular facial expression.

    This is achieved by artificial ‘muscles’ – computerised cables and motors – beneath its blue skin. 

    Researchers say Eva can mimic the movements of the more than 42 tiny muscles attached at various points to the skin and bones of human faces.  
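
    In outline, that pipeline is simple: capture a frame, classify the expression, then drive the motors. Below is a minimal Python sketch of such a mirror loop – the motor count, the MOTOR_LIBRARY table and the classify_expression stub are illustrative stand-ins, not details of the Columbia team’s actual system.

    import numpy as np

    EMOTIONS = ['anger', 'disgust', 'fear', 'joy', 'sadness', 'surprise']

    # Assumed 'library of facial emotions': motor targets (cable tensions,
    # 0..1) for each basic emotion. Twelve motors is an invented figure.
    MOTOR_LIBRARY = {e: np.random.rand(12) for e in EMOTIONS}

    def classify_expression(frame):
        """Stand-in for the deep-learning model that 'reads' a face."""
        return EMOTIONS[int(frame.mean() * 255) % len(EMOTIONS)]

    def mirror_step(frame, set_motors):
        """One pass of the loop: read the human's face, pull matching cables."""
        emotion = classify_expression(frame)
        set_motors(MOTOR_LIBRARY[emotion])

    # Stub hardware: print the motor commands instead of moving cables.
    mirror_step(np.random.rand(64, 64), set_motors=print)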

    Most robots today are developed to replicate human capabilities such as grasping, lifting and moving from one place to another.

    A detail that therefore tends to be lacking is human-like facial expression. The researchers point out that robots tend to ‘sport the blank and static visage of a professional poker player’.

    Eva’s bright blue face was inspired by the Blue Man Group – an American performance art company featuring three mute, blue-faced performers.

    ‘The idea for EVA took shape a few years ago, when my students and I began to notice that the robots in our lab were staring back at us through plastic, googly eyes,’ said Hod Lipson, director of Columbia University’s Creative Machines Lab.

    Lipson observed a similar trend in a grocery store, where he encountered restocking robots wearing name badges, and in one case, a bot decked out in a hand-knitted cap.

    ‘People seemed to be humanising their robotic colleagues by giving them eyes, an identity or a name,’ he said. 

    ‘This made us wonder, if eyes and clothing work, why not make a robot that has a super-expressive and responsive human face?’ 

    The first phase of the project began in Lipson’s lab several years ago when the team constructed Eva’s disembodied bust, with different muscle points controlled by a computer. 

    The team used 3D printing to manufacture parts with complex shapes that integrated seamlessly with Eva’s skull. 

    Researchers then used a multi-stage training process to enable Eva to read and replicate the faces of nearby humans in real time. 

    Firstly, they had to teach Eva what her own face looked like. To do this, the team filmed hours of footage of her making a series of faces at random.

    Eva is seen here during the training phase – making facial expressions at random while recording them on a camera 

    Then, like a human watching herself on Zoom, Eva’s internal neural networks learned to pair the different faces in the video footage with the muscle movements required to make them. 

    In other words, Eva became able to see herself making a particular face (such as a happy face) and learn how to replicate it. 
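
    That ‘watching herself’ step amounts to learning what is often called an inverse model: given an image of Eva’s face, predict the motor commands that produced it. Here is a loose sketch of the idea, assuming PyTorch and synthetic stand-in data – the real training used hours of video of Eva’s randomly generated faces, and the motor count below is invented.

    import torch
    import torch.nn as nn

    N_MOTORS = 12  # illustrative motor count, not the robot's real spec

    # Inverse model: a frame of Eva's own face in, the motor commands
    # that produced that face out.
    inverse_model = nn.Sequential(
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 13 * 13, N_MOTORS),  # 13x13: a 64px frame after two convs
        nn.Sigmoid(),                       # cable tensions scaled to [0, 1]
    )

    opt = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)

    for step in range(100):
        # Stand-ins for (frame of Eva's face, motor command that made it).
        frames = torch.rand(8, 3, 64, 64)
        motors = torch.rand(8, N_MOTORS)
        opt.zero_grad()
        loss = nn.functional.mse_loss(inverse_model(frames), motors)
        loss.backward()
        opt.step()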

    In the final stage, a second neural network was employed to essentially replace Eva’s own face in the process with a human face captured on camera.

    After several refinements and iterations, Eva acquired the ability to read human face gestures from a camera, and to respond by mirroring that human’s facial expression. 
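
    A second, equally loose sketch of that final stage, under the same PyTorch assumptions: a network that takes the human’s camera frame rather than Eva’s own and outputs motor commands directly, so that mirroring becomes a single forward pass per frame.

    import torch
    import torch.nn as nn

    N_MOTORS = 12  # same illustrative motor count as before

    # Maps a *human* face, not Eva's own, straight into Eva's motor space.
    human_to_motors = nn.Sequential(
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 30 * 30, N_MOTORS),  # 30x30: a 64px frame after one conv
        nn.Sigmoid(),
    )

    # Once trained, mirroring is one forward pass per camera frame.
    with torch.no_grad():
        frame = torch.rand(1, 3, 64, 64)   # stand-in human camera frame
        commands = human_to_motors(frame)  # cable tensions to mirror it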

    After weeks of tugging cables to make Eva smile, frown, or look upset, the team noticed that Eva’s blue, disembodied face could elicit emotional responses from their lab mates.

    ‘I was minding my own business one day when Eva suddenly gave me a big, friendly smile,’ Lipson said. ‘I knew it was purely mechanical, but I found myself reflexively smiling back.’ 

    While Eva is still a laboratory experiment, such technologies could one day have beneficial, real-world applications.

    For example, robots capable of responding to a wide variety of human body language would be useful in workplaces, hospitals, schools and homes.

    ‘There is a limit to how much we humans can engage emotionally with cloud-based chatbots or disembodied smart-home speakers,’ said Lipson.

    ‘Our brains seem to respond well to robots that have some kind of recognisable physical presence.’  

    WHAT IS DEEP LEARNING?

    Deep learning is a form of machine learning built around algorithms with a wide range of applications.

    The field was inspired by the human brain and focuses on building artificial neural networks.

    It originally grew out of brain simulations, with the aim of making learning algorithms more capable and easier to use.

    Processing vast amounts of complex data then becomes much easier, and researchers can trust the algorithms to draw accurate conclusions within the parameters they have set.

    Task-specific algorithms remain better for narrow tasks and goals, but deep learning allows a far wider scope of data to be drawn on.
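
    To make the box concrete, here is a minimal example of deep learning in Python, assuming PyTorch: a small neural network learns the XOR pattern, a mapping no single linear rule can capture – the extra layer is what lets it succeed.

    import torch
    import torch.nn as nn

    # Four examples of XOR: output is 1 only when the two inputs differ.
    X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = torch.tensor([[0.], [1.], [1.], [0.]])

    # Two layers of learned weights: a tiny 'deep' neural network.
    net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
    opt = torch.optim.Adam(net.parameters(), lr=0.05)

    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(net(X), y)
        loss.backward()
        opt.step()

    print(net(X).round())  # approx [[0], [1], [1], [0]]: XOR learned from data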
