Researchers create humanoid robot so advanced it can predict when you are going to smile

Non-verbal communication is what robots are lacking.
  • Researchers at Columbia University have developed a robotic head called Emo
  • He’s been trained to read non-verbal cues, like eye contact and smiling, to improve robot-human interactions
  • Emo is so good at reading subtle changes in a human face that he can anticipate when a human is about to smile and respond before they do

Published on Apr 9, 2024 at 8:13PM (UTC+4)

Last updated on Apr 10, 2024 at 3:11PM (UTC+4)

Edited by Amelia Jean Hershman-Jones
[Image] Meet Emo, the robot who copies facial expressions. Credit: Columbia University

Columbia University has developed a robot called Emo that specializes in facial expressions.

He’s been designed to bridge the gap between humans and robots through non-verbal communication like smiling.

Emo is currently just a head, but the team plans to incorporate language model systems.

Ultimately they hope this project will help humans and robots have better and more meaningful interactions.

Thanks to models like ChatGPT, the communication skills of AI are at an all-time high.

There’s the humanoid robot who can perform an uncanny impression of Elon Musk, and Ameca, the robot who recently had an exclusive chat with Supercar Blondie.

But there is a huge element of human-robot interaction that is largely still lacking: non-verbal communication.

Researchers at Columbia University are on a mission to change that, building trust between humans and robots one smile at a time.

They’ve created a robotic human head complete with eyes and skin, and called it ‘Emo’.

Emo has 26 separate actuators, which allow him to express the micromovements required to produce realistic facial expressions.

He is also equipped with high-resolution cameras embedded in his eyes and a layer of silicone ‘skin’ that conceals his mechanical components.

The eyes help him to maintain eye contact during conversation, a crucial part of human interaction.
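As a rough illustration of how cameras embedded in the eyes could keep a gaze locked on a face, here’s a short OpenCV sketch. The `set_eye_angles` motor hook is a made-up stand-in, and a real system would likely use a far stronger detector than a Haar cascade.

```python
import cv2

# Haar cascade ships with OpenCV; used here purely as a simple stand-in detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def set_eye_angles(pan, tilt):           # hypothetical motor hook
    print(f"pan={pan:+.2f} tilt={tilt:+.2f}")

cap = cv2.VideoCapture(0)                # camera embedded in the eye
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces):
        x, y, w, h = faces[0]
        fh, fw = gray.shape
        # Steer the eyes toward the face center, normalized to [-1, 1].
        set_eye_angles((x + w / 2) / fw * 2 - 1, (y + h / 2) / fh * 2 - 1)
```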

The team faced two challenges: designing a robot that could produce a wide range of facial expressions, and teaching it to read those expressions in humans.

Led by Yuhang Hu, a Columbia Engineering PhD student, they built two AI models to work together.

One reads a human’s face and predicts what expression they’re about to make by analyzing subtle changes in it.

The other immediately issues motor commands to Emo’s face, replicating or responding to that expression.
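To make that division of labor concrete, here’s a minimal sketch of how such a two-model pipeline could be wired together in Python with PyTorch. The class names, the landmark input size, and the network shapes are illustrative assumptions, not the Columbia team’s actual architecture; only the 26-actuator figure comes from the article.

```python
import torch
import torch.nn as nn

N_LANDMARKS = 136   # e.g. 68 (x, y) facial landmarks per frame (assumption)
N_FRAMES = 8        # short window of recent frames the predictor sees (assumption)
N_ACTUATORS = 26    # Emo's face is driven by 26 actuators (from the article)

class ExpressionPredictor(nn.Module):
    """Model 1: watch recent facial landmarks, predict the upcoming expression."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_FRAMES * N_LANDMARKS, 256),
            nn.ReLU(),
            nn.Linear(256, N_LANDMARKS),  # predicted landmarks a moment ahead
        )

    def forward(self, frames):            # frames: (batch, N_FRAMES, N_LANDMARKS)
        return self.net(frames)

class InverseFaceModel(nn.Module):
    """Model 2: map a target expression to motor commands for the robot's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LANDMARKS, 128),
            nn.ReLU(),
            nn.Linear(128, N_ACTUATORS),
            nn.Sigmoid(),                  # normalized actuator positions in [0, 1]
        )

    def forward(self, target_expression):
        return self.net(target_expression)

# Pipeline: predict what the human is about to do, then move the face to match.
predictor, inverse_model = ExpressionPredictor(), InverseFaceModel()
recent_frames = torch.randn(1, N_FRAMES, N_LANDMARKS)  # stand-in camera data
predicted_expression = predictor(recent_frames)
motor_commands = inverse_model(predicted_expression)   # 26 actuator targets
```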

First, they placed Emo in front of a mirror and instructed him to spend hours practicing various facial expressions.
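This mirror phase is a form of self-modeling: by watching its own face, the robot can learn which expression each combination of motor commands produces. A rough sketch of that idea follows; the `send_motor_command` and `observe_own_landmarks` hooks are made-up stand-ins for the real camera and motor interfaces.

```python
import torch
import torch.nn as nn

N_ACTUATORS, N_LANDMARKS = 26, 136

def send_motor_command(cmd):             # hypothetical motor hook; stubbed here
    pass

def observe_own_landmarks():             # what the mirror camera sees; stubbed
    return torch.rand(N_LANDMARKS)

# Learn the inverse mapping: expression seen -> command that produced it.
inverse_model = nn.Sequential(nn.Linear(N_LANDMARKS, 128), nn.ReLU(),
                              nn.Linear(128, N_ACTUATORS), nn.Sigmoid())
optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(10_000):               # "hours practicing in front of a mirror"
    command = torch.rand(N_ACTUATORS)    # try a random facial pose
    send_motor_command(command)
    landmarks = observe_own_landmarks()  # see what that pose looks like
    loss = loss_fn(inverse_model(landmarks), command)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```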

Next, Emo learned human facial expressions by studying videos of people frame by frame.

Emo assimilated this information so effectively that he can now anticipate when a human is about to smile and respond before they do.
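The key trick is acting on the predicted expression rather than the current one. A toy version of that streaming loop is below; the threshold value and the single-output "smile imminent" classifier are illustrative assumptions, not the team’s published method.

```python
import torch
import torch.nn as nn

N_FRAMES, N_LANDMARKS = 8, 136
SMILE_THRESHOLD = 0.8                    # illustrative cutoff, not from the paper

# Toy classifier: probability that a smile is about to appear (assumption).
predictor = nn.Sequential(nn.Flatten(),
                          nn.Linear(N_FRAMES * N_LANDMARKS, 256), nn.ReLU(),
                          nn.Linear(256, 1), nn.Sigmoid())

def trigger_smile():                     # stand-in for the motor pipeline above
    print("smiling back")

def on_new_frame(frame_buffer):
    """frame_buffer: (1, N_FRAMES, N_LANDMARKS) of the latest camera frames."""
    p_smile_soon = predictor(frame_buffer).item()
    if p_smile_soon > SMILE_THRESHOLD:
        # Start moving the actuators *before* the human's smile fully forms.
        trigger_smile()

on_new_frame(torch.randn(1, N_FRAMES, N_LANDMARKS))
```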

Hu believes that this will improve the interaction quality between robots and humans.

“Traditionally robots have not been designed to consider humans’ facial expressions during interactions,” Hu says.

“Now the robot can integrate human facial expressions as feedback.”

Going forward, the team will be integrating a language model system like ChatGPT to take Emo’s conversation skills to the next level.

For now, Emo works only with smiles while the team figures out how to incorporate trickier expressions like pouting or frowning.
