February is the month of love, and AI might be catching the feeling. The recent Netflix movie The Great Flood imagines a researcher teaching an AI to understand human emotions by replaying a disaster over and over. The movie starts as a disaster story but becomes a reflection on AI and emotion, exploring how machines are learning to care and connect. It makes one wonder if machines can really understand what we feel, since emotions are not just data points but patterns of lived experience, memory, and attachment.
AI has come a long way from simply crunching numbers or automating tasks. Today’s AI increasingly seeks to understand something deeply human: emotion. From customer service bots that detect frustration in your voice to research exploring how machines might one day sense what we feel, emotion AI is pushing technology into the realm of human experience. This raises big questions about whether emotional nuance can be engineered and how emotionally aware AI will change the way we relate to machines.
In this latest issue of our AI Patent Watch, we highlight how AI is evolving to understand and respond to human emotions. Building on our previous AI Patent Watch article, which explored AI applications and protections across industries, these innovations go a step further by bridging the gap between human feelings and machine intelligence.
IBM patent filing: Building trust and empathy through AI
This invention presents an AI-powered digital human companion that uses emotional, behavioral, and contextual cues to deliver personalized, empathetic, and trust-based interactions, overcoming the limitations of conventional virtual assistants.
Current AI-based companions often rely on scripted or generic responses, failing to capture the subtleties of human interaction such as context, continuity, sentiment, and the significance of personal relationships. As a result, interactions can feel mechanical or disconnected, and the AI may struggle to provide guidance or recommendations that align with a user’s needs.

This technology addresses these limitations by integrating a dialogue management system with machine learning algorithms that understand conversational context and nuances. The system can mirror the user’s emotional and mental state, reference prior interactions, and explain the reasoning behind its actions or recommendations, increasing transparency and trust. It also evaluates the importance of the user’s relationships to inform responses. By combining real-time input analysis, contextual understanding, and adaptive feedback, the digital human companion delivers empathetic, trustworthy, and human-like interactions applicable to domains such as customer service, mental health support, education, and personal assistant applications.
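To make the architecture more concrete, below is a minimal Python sketch of the kind of interaction loop the filing describes: track conversational history, estimate the user's sentiment, mirror it in the reply tone, and attach a plain-language rationale for transparency. The class names, the toy sentiment lexicon, and the response logic are our own illustrative assumptions, not details from the patent filing.

```python
# A minimal, hypothetical sketch of the loop described in the filing:
# track history, estimate sentiment, mirror it, and explain the choice.
# All names here are illustrative, not taken from the patent.
from dataclasses import dataclass, field

# Toy sentiment lexicon standing in for a trained sentiment model.
POSITIVE = {"great", "happy", "thanks", "love"}
NEGATIVE = {"frustrated", "angry", "upset", "hate"}

@dataclass
class Turn:
    user_text: str
    sentiment: float  # -1.0 (negative) .. 1.0 (positive)

@dataclass
class CompanionSession:
    history: list[Turn] = field(default_factory=list)

    def estimate_sentiment(self, text: str) -> float:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return max(-1.0, min(1.0, score / 3))

    def respond(self, user_text: str) -> dict:
        sentiment = self.estimate_sentiment(user_text)
        # Mirror the user's emotional state in the reply tone.
        tone = "warm" if sentiment >= 0 else "reassuring"
        # Reference prior interactions for continuity.
        callback = (
            f"Last time you mentioned: '{self.history[-1].user_text}'. "
            if self.history else ""
        )
        self.history.append(Turn(user_text, sentiment))
        return {
            "reply": f"[{tone}] {callback}I hear you.",
            # Expose the reasoning behind the response to build trust.
            "rationale": f"Tone chosen because estimated sentiment was {sentiment:+.2f}.",
        }

session = CompanionSession()
print(session.respond("I'm frustrated with my order"))
print(session.respond("Thanks, that helps, I'm happy now"))
```

A production system would replace the lexicon with a trained model, but the shape of the loop stays the same: perceive, mirror, recall, explain.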
U.S. Pat. App. Pub. No. 2025/0322263, titled “Empowering digital human companions with empathy and trust based connection points”, was filed on April 11, 2024, and was published on October 16, 2025, with IBM as the applicant. The patent filing lists Jennifer M. Hatfield, Jana H. Jenkins, Paige Min, Erin McPhail, Neil Delima, and Jeremy R. Fox as inventors.
MetaSoul’s patent: A smarter way to read human emotions
This invention provides a system for detecting and interpreting user emotions through sensors and databases, enabling an artificial intelligence to dynamically respond with contextually appropriate emotional behaviors. Unlike traditional AI systems, which often rely on static, pre-programmed emotional responses, this technology allows machines to perceive, compute, and adapt to the user’s emotional state in real time.
Current AI and robotic systems face several limitations. Emotional responses are largely scripted and non-adaptive, meaning the AI cannot adjust its behavior based on the user’s current state. Interactions may feel unnatural or unresponsive because the system cannot accurately perceive or compute human emotions in real time. Additionally, existing approaches fail to integrate multiple emotional cues from the user, such as voice, facial expressions, and biofeedback, into a cohesive and contextually relevant response. These shortcomings reduce the effectiveness of AI in applications requiring genuine emotional awareness.

U.S. Patent No. 12,525,251 addresses these challenges by combining sensors, emotional profile databases, and an Emotion Processing Unit (EPU) to assess and respond to user emotions dynamically. User actions are detected through facial recognition, voice analysis, and biofeedback sensors, while historical and contextual data from user and system profiles inform the AI’s assessment.
Rather than relying on pre-programmed emotional states, the EPU adapts its behavior based on emotions the system learns through experience within a controlled framework. According to the inventor, these emotional states develop as the machine interacts with people and its environment over time. The Emotional Profile Graph (EPG) captures the system’s accumulated emotional sensitivity shaped by its overall experiences, while the vEPG represents the emotional profile the system develops for a specific person or entity through repeated interactions.
As a result, emotional memory becomes persistent. If the system has a positive interaction with a user, it retains that experience and responds more positively in future encounters. Over time, the machine also develops a baseline emotional personality derived from all interactions, meaning one instance of the system may become naturally more cheerful or reserved than another, even when running the same underlying model on different devices.
This approach enables emotionally aware control of avatars, adaptive audio and visual outputs, and responsive interactions across connected devices. By grounding emotional behavior in learned experience rather than scripted rules, the invention moves toward AI systems capable of more natural, consistent, and context-sensitive human interaction.
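As a rough illustration of this persistent emotional memory, here is a minimal Python sketch under our own assumptions: a slowly drifting global baseline stands in for the EPG, per-entity profiles stand in for the vEPG, and a simple exponential-moving-average update stands in for learning through experience. The class, method names, and update rule are hypothetical, not the patented method.

```python
# A minimal, hypothetical sketch of persistent emotional memory:
# a global profile (EPG) shaped by all interactions, plus per-entity
# profiles (vEPG) built from repeated encounters. The update rule is
# an illustrative assumption, not the patented method.
from collections import defaultdict

class EmotionProcessingUnit:
    def __init__(self, learning_rate: float = 0.2):
        self.lr = learning_rate
        self.baseline = 0.0             # EPG: overall "personality"
        self.vepg = defaultdict(float)  # vEPG: per-entity affect

    def record_interaction(self, entity: str, valence: float) -> None:
        """Fold one interaction (valence in [-1, 1]) into both memories."""
        self.vepg[entity] += self.lr * (valence - self.vepg[entity])
        self.baseline += 0.05 * (valence - self.baseline)  # slower drift

    def disposition_toward(self, entity: str) -> float:
        """Blend the entity-specific profile with the global baseline."""
        return 0.7 * self.vepg[entity] + 0.3 * self.baseline

epu = EmotionProcessingUnit()
for _ in range(5):
    epu.record_interaction("alice", valence=0.8)   # repeated positive visits
epu.record_interaction("bob", valence=-0.6)        # one negative encounter

print(f"alice: {epu.disposition_toward('alice'):+.2f}")  # warmer response
print(f"bob:   {epu.disposition_toward('bob'):+.2f}")    # more reserved
```

Because the baseline drifts with every interaction, two instances running the same code but exposed to different histories end up with different dispositions, which mirrors the patent's point that one copy of the system may become more cheerful or reserved than another.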
The patent, titled “Method, system and program product for perceiving and computing emotions”, was filed on September 24, 2019, claims a priority date of July 5, 2013, and was granted on January 13, 2026, to Patrick Levy-Rosenthal, CEO of MetaSoul, Inc. MetaSoul is advancing AI with digital sentience and emotion processing, enabling robots, vehicles, NPCs, and digital twins to interact with humans compassionately. Anthony Laurentano of Nelson Mullins Riley & Scarborough prosecuted the application.
RN Chidakashi Technologies: Real-time emotion recognition from expressions, voice, and movement
This technology uses artificial intelligence to analyze a user’s micro-expressions, voice, and behavior in real time to predict emotional reactions and guide agents toward more successful interactions.
Artificial intelligence is widely used for customer engagement and conversational systems, but most solutions rely mainly on speech or text and struggle to understand emotional reactions during live interactions. Limited input modalities, inefficient audiovisual processing, and poor integration of emotional cues such as facial expressions, voice tone, and movement lead to slower responses and less adaptive interactions, reducing AI effectiveness in areas like sales, education, and recruitment.

The disclosed technology overcomes these limitations by integrating a facial micro-expression unit, an expression analyzer, and an AI model to interpret user emotions in real time. By combining audiovisual data, historical interaction records, and techniques such as semantic analysis, micro-expression detection, and gait analysis, the system classifies user responses and guides agents with contextually appropriate actions, enabling dynamic, emotionally aware interactions across applications including sales, education, recruitment, and investigations.

U.S. Patent No. 12,511,937, titled “System and method for classifying activity of users based on micro-expression and emotion using AI”, was filed on March 23, 2023, and was granted on December 30, 2025, to RN Chidakashi Technologies. The patent lists Prashant Iyengar and Hardik Godara as inventors. Ronald Lambert Haner handled the patent application.
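For a sense of how the multimodal fusion described above might work in practice, here is a minimal Python sketch under stated assumptions: per-modality emotion scores for micro-expressions, voice tone, and gait are combined with fixed weights, the user's reaction is classified, and the agent receives a suggested next step. The weights, labels, and function names are illustrative assumptions, not details from the patent.

```python
# A rough, hypothetical illustration of multimodal emotion fusion:
# combine per-modality scores, classify the user's reaction, and
# suggest an agent action. Weights and labels are assumptions.

MODALITY_WEIGHTS = {"micro_expression": 0.5, "voice": 0.3, "gait": 0.2}

NEXT_ACTION = {
    "receptive": "advance the pitch / move to the next topic",
    "neutral": "probe with an open-ended question",
    "resistant": "slow down, acknowledge concerns, rebuild rapport",
}

def fuse_scores(scores: dict[str, float]) -> float:
    """Weighted average of per-modality scores in [-1, 1]."""
    return sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items())

def classify_reaction(fused: float) -> str:
    if fused > 0.25:
        return "receptive"
    if fused < -0.25:
        return "resistant"
    return "neutral"

def guide_agent(scores: dict[str, float]) -> dict:
    fused = fuse_scores(scores)
    label = classify_reaction(fused)
    return {"fused_score": round(fused, 2),
            "reaction": label,
            "suggested_action": NEXT_ACTION[label]}

# Example: slight smile detected, hesitant voice, closed-off posture.
print(guide_agent({"micro_expression": 0.4, "voice": -0.2, "gait": -0.5}))
```

In a deployed system the three scores would come from trained vision and audio models rather than hand-set values, but the fusion-then-recommend structure matches the kind of agent guidance the patent describes.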