Advancing Emotion-Aware Robotics through Incremental Learning

September 24, 2025

The integration of emotional intelligence into robotics has long been a goal for researchers aiming to create machines that can interact naturally with humans. Traditional facial expression recognition (FER) models often struggle to generalize across diverse real-world scenarios, as they are typically trained on a single dataset and may not adapt well to new environments or individual variations.

Dr. Rahul Singh Maharjan and his team at the University of Manchester are addressing this challenge by developing a novel approach that enables robots to learn and adapt to human emotions incrementally. Their method allows AI systems to build upon previous knowledge while incorporating new emotional data, enhancing their ability to respond appropriately in dynamic human-robot interactions.

Maharjan, R. S., Bonicelli, L., Romeo, M., Calderara, S., Cangelosi, A., & Cucchiara, R. (2025). Continual Facial Features Transfer for Facial Expression Recognition. IEEE Transactions on Affective Computing, 16(3), 2352–2364. https://doi.org/10.1109/TAFFC.2025.3561139

In their recent study, Dr. Maharjan and collaborators introduced Continual Facial Features Transfer (CFFT), a model that continually transfers attention to salient facial features from pre-trained models in order to improve performance across multiple datasets. The approach was validated on split versions of in-the-wild datasets, in which data is presented to the model incrementally rather than all at once. The model demonstrated improved facial expression recognition across different domains, showcasing its adaptability and robustness in real-world settings.
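To make the "split" setting concrete, the sketch below shows one way such an incremental protocol can be set up. This is a minimal illustration only, not the authors' code: the `make_task_splits` helper, the choice of PyTorch, and the two-classes-per-task split are assumptions made purely for the example.

```python
import torch
from torch.utils.data import DataLoader, Subset

def make_task_splits(dataset, labels, classes_per_task=2):
    """Split a FER dataset into a sequence of tasks by expression class.

    Each task exposes only a subset of classes, so a model trained on the
    tasks in order sees the data incrementally rather than all at once.
    """
    labels = torch.as_tensor(labels)
    classes = torch.unique(labels).tolist()
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = torch.tensor(classes[start:start + classes_per_task])
        idx = torch.where(torch.isin(labels, task_classes))[0]
        tasks.append(Subset(dataset, idx.tolist()))
    return tasks

# Usage sketch (fer_dataset, fer_labels, and train_one_task are placeholders):
# for task_id, task_data in enumerate(make_task_splits(fer_dataset, fer_labels)):
#     loader = DataLoader(task_data, batch_size=64, shuffle=True)
#     train_one_task(model, loader)  # the model never revisits earlier tasks' data
```

Under this protocol, any gains on later tasks only count if performance on the earlier tasks is preserved, which is exactly what the continual setting is designed to test.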

The CFFT model addresses the issue of catastrophic forgetting, a common challenge in machine learning where models tend to forget previously learned information when exposed to new data. By incorporating mechanisms that allow the model to retain and build upon prior knowledge, CFFT enhances the ability of robots to recognize and respond to a wide range of human emotions.
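One widely used family of mechanisms for retaining prior knowledge is rehearsal, in which a small memory buffer of past examples is replayed alongside new data. The sketch below illustrates that general idea; it is not the specific mechanism used in CFFT, and the buffer size and reservoir-sampling policy are arbitrary choices made for the example.

```python
import random
import torch

class ReplayBuffer:
    """Small reservoir-sampled memory of past (image tensor, int label) pairs.

    Replaying a few stored examples alongside each new batch is one common
    way to keep a model from overwriting what it learned earlier.
    """

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # stored (x, y) tuples
        self.seen = 0    # total examples offered to the buffer

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: every example seen so far has an equal
            # chance of remaining in the fixed-size buffer.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.as_tensor(ys)

# During training on a new dataset, a batch drawn from the buffer would be
# concatenated with the current batch so the loss covers both old and new data.
```

The appeal of this kind of mechanism is that the memory cost stays fixed no matter how many datasets the model encounters, which matters for robots that must keep learning over long deployments.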

The ability of robots to understand and respond to human emotions is crucial for effective human-robot interaction (HRI). Emotional awareness enables robots to provide more personalized and contextually appropriate responses, improving their utility in various applications such as healthcare, education, and customer service.

Dr. Maharjan, from the University of Manchester, stated:

“For technology to truly integrate into our lives, it must understand our emotions. My goal is to help build AI that doesn’t just compute, but connects with us.”

The development of emotion-aware robots also raises important ethical considerations. As robots become more adept at recognizing and responding to human emotions, questions about privacy, consent, and the potential for emotional manipulation become increasingly pertinent. Researchers and policymakers must work together to establish guidelines and frameworks that ensure the responsible development and deployment of emotionally intelligent robots.

The work being conducted at the University of Manchester represents a significant step forward in the field of affective computing.
