A wearable that can track facial expressions even through face masks
The technology, C-Face, uses two tiny cameras and a deep learning algorithm to continuously observe facial contours and reconstruct expressions.
Researchers at Cornell University have developed an ear-mounted wearable sensing technology that can track facial expressions, even when someone is wearing a face mask.
C-Face pairs two tiny cameras with a deep learning algorithm to continuously observe facial contours and reconstruct expressions. The researchers have built the technology into prototype devices such as earphones and headphones.
“In earlier wearable technology aiming to recognise facial expressions, most solutions needed to attach sensors on the face,” said Cheng Zhang, Director of Cornell’s SciFi Lab. “And even with so much instrumentation, they could only recognise a limited set of discrete facial expressions.”
“Because it works by detecting muscle movement, C-Face can capture expressions even when users are wearing masks,” Zhang explained.
The prototype devices are fitted with RGB cameras, positioned below each ear, that capture changes in facial contour as the underlying muscles move. Using computer vision and a deep learning model, the captured images are then translated into reconstructed facial expressions.
During this process, the model simplifies the images of the cheeks into 42 facial feature points depicting the shapes and positions of the mouth, eyes and eyebrows, as these are the features most affected by changes in expression, a Cornell release noted.
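The image-to-landmark step described above can be sketched in code. The snippet below is a minimal, hypothetical stand-in: it uses a simple linear mapping in place of C-Face's actual (unpublished here) deep learning model, and the image size, weights and function names are illustrative assumptions. It only shows the shape of the pipeline — a cheek image in, 42 (x, y) feature points out.

```python
import numpy as np

NUM_LANDMARKS = 42  # mouth, eye and eyebrow feature points, per the release

def reconstruct_landmarks(cheek_image, weights, bias):
    """Map a flattened cheek image to 42 (x, y) facial feature points.

    A toy linear stand-in for the deep learning model the article
    describes; the real C-Face network is far more complex. This only
    illustrates the image -> landmark mapping.
    """
    features = cheek_image.reshape(-1).astype(float)
    coords = weights @ features + bias   # length 84 = 42 points * 2 coords
    return coords.reshape(NUM_LANDMARKS, 2)

# Demo with random stand-in weights and a dummy 32x32 grayscale cheek image.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
W = rng.normal(size=(NUM_LANDMARKS * 2, 32 * 32))
b = np.zeros(NUM_LANDMARKS * 2)
points = reconstruct_landmarks(image, W, b)
print(points.shape)  # (42, 2)
```

In the real system, those 42 points would then drive whatever downstream application consumes the expression, such as an avatar or emoji renderer.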
According to the researchers, the technology could be used as a communication tool in virtual reality (VR), to translate expressions into emojis, or for silent speech commands.
“What is very exciting is that it gives you the opportunity to wear a VR set and still translate your emotions directly to others,” Francois Guimbretière, co-author of the C-Face paper, said.