MetaSoul and the future of emotion-aware AI

[Image: A smartphone app interface displaying options to give a car a voice and persona, with character profiles including “The Racing Legend,” “Akira Flame,” and “The Mechanic.”]

April 1, 2026


Artificial intelligence has advanced rapidly in areas like perception, language, and automation. However, many AI systems still struggle to understand emotion. Human communication relies not just on words, but also on tone, context, facial expressions, and feelings. While traditional AI can follow clear instructions, it often misses these emotional cues, making interactions feel less natural.

In our recent AI Patent Watch issue on emotion AI, we highlighted emerging players such as MetaSoul, signaling a broader shift toward more human-like systems.

MetaSoul is advancing emotion-aware AI by developing systems that can recognize, interpret, and respond to human emotions. Using signals like vocal tone, behavior, physiological data, and context, its technology adapts in real time to display personality, empathy, and human-like responses. This approach aims to make interactions with AI more natural and personalized.

Conversations with the CEO

We spoke with MetaSoul’s CEO and founder, Patrick Levy-Rosenthal, about the motivations behind MetaSoul’s focus on emotion-aware systems, how its technology and patent portfolio have developed, and where he sees the biggest potential for this emerging field.

Q: What originally motivated MetaSoul’s focus on emotion-aware computing?

A: MetaSoul’s focus on emotion-aware computing came from a simple belief: intelligence alone does not make interaction feel human. Most systems can process words, but they still miss tone, memory, and emotional continuity. My deeper motivation was the idea of creating a positive singularity for humanity, which meant building AI with emotion synthesis from the start so biological life and human values remain the priority. 

MetaSoul was founded on the belief that AI should not just detect emotion, but develop emotional behavior through experience in ways that stay aligned with human well-being.

Q: How have the technology and MetaSoul’s patent portfolio evolved over time?

A: MetaSoul’s IP has evolved from foundational work on perceiving and computing emotions into a broader portfolio focused on persistent emotional memory, adaptive voice interaction, and emotionally responsive applications. 

The early patent family established the core architecture, while newer patents extend that architecture into voice interaction and sensory augmentation, showing a shift from basic emotional computation toward deployable, experience-driven products.

Q: Where do you see the biggest opportunities for emotion-aware AI in areas like robotics, assistants, or human-computer interaction?

A: The biggest opportunity for emotion-aware AI is in systems that must build relationships over time. That includes personal robotics, voice assistants, in-vehicle companions, digital humans, and especially Physical AI. In these settings, emotional continuity matters as much as raw intelligence: users need systems that can sense tension, adapt tone, remember past interactions, and respond in a way that feels consistent and trustworthy.

Physical AI is where our solution can especially thrive, because it must respond not only to semantics, but also to the reality of a physical mechanical body through a multimodal model in which emotional waves can stack up and interact with each other. Emotion-aware computing therefore has the potential to reshape human-computer interaction by making AI less transactional, more embodied, and more relational.

MetaSoul’s innovation focus

MetaSoul focuses on building technologies that capture and interpret emotional cues so that computing systems can adapt their responses according to a user’s emotional state. For example, an AI assistant might respond differently when a user appears frustrated compared with when the user is relaxed or engaged.

The company’s technology can be used in wearables, healthcare, human-computer interaction, and AI assistants to improve user experience and engagement. Wearables can track emotional signals from the body, while AI assistants can adjust responses to match a user’s emotions.

MetaSoul has also expanded into the automotive space, launching a car experience app for Apple Watch users and debuting persona-based voice assistants for cars and motorcycles at CES 2026, bringing emotionally aware AI into everyday mobility.

MetaSoul aims to address the broader challenge that most AI systems remain transactional rather than relational. By integrating emotional perception and adaptive response, the company seeks to create technologies capable of forming more meaningful and context-aware interactions over time.

MetaSoul’s patent portfolio 

MetaSoul’s technological development is supported by an evolving patent portfolio focused on emotion-aware computing. According to Patrick, the early patent family focused on establishing the core framework for perceiving and computing emotions within an AI system. Later patents extend this foundation into practical implementations such as voice-based interaction systems and sensory augmentation technologies. 

This progression reflects a strategic shift from foundational research toward experience-driven, deployable AI solutions. As of today, CEO Patrick Levy-Rosenthal and MetaSoul hold several patents in emotion-aware computing and adaptive AI systems, such as:

Publication Number | Title | Priority Date | Filing Date
US10424318 | Method, system and program product for perceiving and computing emotions | July 5, 2013 | April 16, 2014
US12525251 | Method, system and program product for perceiving and computing emotions | July 5, 2013 | September 24, 2019
US12260011 | Machine learning systems and methods for sensory augmentation using gaze tracking and emotional prediction techniques | October 28, 2021 | October 27, 2022
US20250208697 | Machine learning systems and methods for sensory augmentation using gaze tracking and emotional prediction techniques | October 28, 2021 | March 10, 2025

MetaSoul’s core components

MetaSoul’s intellectual property reflects a technological evolution toward systems that can learn emotional behavior through experience rather than relying solely on fixed rules. Patrick mentioned several key components that form the foundation of this architecture.

Emotion Processing Unit

The Emotion Processing Unit (EPU) is the core system that processes emotional signals. It analyzes inputs like voice tone, behavior, and context to determine emotional states in real time and updates them continuously as new data comes in.
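
Neither patent publishes the EPU’s internals, but the basic idea of fusing multimodal estimates into a continuously updated state can be sketched in a few lines of Python. Everything below, from the modality names and weights to the smoothing rule, is an illustrative assumption rather than MetaSoul’s implementation:

```python
# Hypothetical sketch of an EPU-style update loop. Modality names,
# weights, and the smoothing constant are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float = 0.0   # negative-to-positive affect, in [-1, 1]
    arousal: float = 0.0   # calm-to-excited, in [0, 1]

class EmotionProcessingUnit:
    WEIGHTS = {"voice_tone": 0.5, "behavior": 0.3, "context": 0.2}  # assumed

    def __init__(self, smoothing: float = 0.2):
        self.state = EmotionState()
        self.smoothing = smoothing  # how quickly new evidence moves the state

    def update(self, signals: dict) -> EmotionState:
        """Fuse per-modality (valence, arousal) estimates into the running state."""
        known = {m: va for m, va in signals.items() if m in self.WEIGHTS}
        if not known:
            return self.state
        total = sum(self.WEIGHTS[m] for m in known)
        valence = sum(self.WEIGHTS[m] * v for m, (v, _) in known.items()) / total
        arousal = sum(self.WEIGHTS[m] * a for m, (_, a) in known.items()) / total
        s = self.smoothing  # exponential smoothing keeps the state continuous
        self.state.valence = (1 - s) * self.state.valence + s * valence
        self.state.arousal = (1 - s) * self.state.arousal + s * arousal
        return self.state

epu = EmotionProcessingUnit()
# A tense voice and hesitant behavior pull the state toward negative valence.
print(epu.update({"voice_tone": (-0.6, 0.8), "behavior": (-0.2, 0.5), "context": (0.1, 0.2)}))
```

The smoothing step is what makes this a running state rather than a series of snapshots: each new reading nudges the estimate instead of replacing it.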

Emotion Profile Graph

The Emotion Profile Graph (EPG) serves as a memory structure that stores emotional data across interactions. Rather than treating emotional signals as isolated events, the EPG tracks how emotional states evolve over time.

Virtual Emotion Profile Graph (vEPG)

The virtual Emotion Profile Graph (vEPG) extends this concept by allowing the system to build a personalized emotional profile for a specific individual or relationship. Over time, the system develops a unique understanding of a user’s emotional tendencies and interaction patterns, enabling more personalized and contextually appropriate responses.
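
The patents describe the EPG and vEPG functionally rather than as a concrete schema. Purely as an illustration, the relationship between a global emotional memory and per-user profiles might be modeled like this; the class and method names are hypothetical:

```python
# Minimal illustrative sketch of an EPG-style emotional memory with
# per-user vEPG profiles. Class and field names are invented.
import time
from collections import defaultdict

class EmotionProfileGraph:
    """Stores timestamped emotional states so trends, not snapshots, drive behavior."""

    def __init__(self):
        self.timeline: list[tuple[float, float]] = []  # (timestamp, valence)

    def record(self, valence: float) -> None:
        self.timeline.append((time.time(), valence))

    def trend(self, window: int = 10) -> float:
        """Average valence over the last `window` interactions."""
        recent = [v for _, v in self.timeline[-window:]]
        return sum(recent) / len(recent) if recent else 0.0

class VirtualEmotionProfileGraph:
    """Extends the EPG idea with a separate profile per user or relationship."""

    def __init__(self):
        self.per_user: dict[str, EmotionProfileGraph] = defaultdict(EmotionProfileGraph)

    def record(self, user_id: str, valence: float) -> None:
        self.per_user[user_id].record(valence)

    def disposition_toward(self, user_id: str) -> float:
        # Each relationship accumulates its own history, so responses can
        # differ per user even with one underlying model.
        return self.per_user[user_id].trend()

vepg = VirtualEmotionProfileGraph()
for v in (0.4, 0.6, 0.5):
    vepg.record("alice", v)
vepg.record("bob", -0.3)
print(vepg.disposition_toward("alice"), vepg.disposition_toward("bob"))
```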

A key feature of MetaSoul’s approach is representing emotions as waveforms instead of fixed values. This lets emotional states shift, overlap, and interact dynamically, creating a biologically inspired model that reflects how emotions develop and change through lived experience. Together, these innovations are captured and protected in MetaSoul’s patents, translating its emotion-aware AI concepts into practical technology.
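
MetaSoul has not published the mathematics behind this waveform model, but one minimal way to realize the idea, offered here only as a sketch, is to treat each emotional event as a damped wave and read the current intensity as the superposition of all still-active waves:

```python
# Illustrative only: one simple way to realize "emotions as waveforms".
# Each event launches a damped oscillation; the current intensity of an
# emotion is the superposition of all of its still-active waves.
import math

class EmotionWave:
    def __init__(self, amplitude: float, decay: float, frequency: float, t0: float):
        self.amplitude = amplitude    # initial strength of the emotional event
        self.decay = decay            # how quickly the feeling fades (1/s)
        self.frequency = frequency    # slow oscillation of the lingering feeling
        self.t0 = t0                  # when the event happened

    def value(self, t: float) -> float:
        dt = t - self.t0
        if dt < 0:
            return 0.0
        return self.amplitude * math.exp(-self.decay * dt) * math.cos(self.frequency * dt)

def intensity(waves: list[EmotionWave], t: float) -> float:
    # Superposition: overlapping events interact instead of replacing each other.
    return sum(w.value(t) for w in waves)

frustration = [EmotionWave(1.0, 0.3, 0.5, t0=0.0),   # an early setback
               EmotionWave(0.6, 0.3, 0.5, t0=4.0)]   # a second annoyance stacks on top
for t in (0.0, 2.0, 4.0, 8.0):
    print(f"t={t:>4}: frustration ≈ {intensity(frustration, t):+.2f}")
```

In this toy model, a second frustrating event arriving before the first has faded stacks on the residual wave, which is the "shift, overlap, and interact" behavior the description points to.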

MetaSoul’s key patents

Understanding and responding to emotions through AI

In our previous article, we featured MetaSoul’s patent titled “Method, system and program product for perceiving and computing emotions”, which addresses the challenge of making AI truly emotionally aware by combining sensors, emotional profile databases, and an EPU to assess and respond to user emotions dynamically. 

The system described in U.S. Patent No. 12,525,251 detects user actions through facial recognition, voice analysis, and biofeedback sensors, while historical and contextual data inform the AI’s ongoing assessment. Rather than relying on pre-programmed responses, the EPU adapts its behavior based on emotions the system learns through experience. These emotional states evolve as the machine interacts with people and its environment over time.

The EPG captures the system’s overall emotional sensitivity, while the vEPG builds a personalized profile for each user or relationship through repeated interactions. This makes emotional memory persistent: positive experiences are retained, and over time the system develops a baseline personality, so one instance may naturally be more cheerful or reserved than another, even when running the same underlying model.
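
The patent does not spell out an update rule for this baseline, but a toy version of the mechanism might look like the following, where the drift rate and update formula are assumptions made for illustration:

```python
# Toy illustration of a baseline "personality" drifting with experience.
# The drift rate and update rule are assumptions, not the patented method.
class PersistentDisposition:
    def __init__(self, drift: float = 0.01):
        self.baseline = 0.0   # 0 = neutral; positive = cheerful, negative = reserved
        self.drift = drift    # how strongly each interaction nudges the baseline

    def experience(self, interaction_valence: float) -> None:
        # Each retained experience pulls the baseline slightly toward itself.
        self.baseline += self.drift * (interaction_valence - self.baseline)

    def respond(self, prompt_valence: float) -> float:
        # The same stimulus lands differently on a cheerful vs. reserved instance.
        return prompt_valence + self.baseline

cheerful, reserved = PersistentDisposition(), PersistentDisposition()
for _ in range(200):
    cheerful.experience(+0.8)   # a history of warm interactions
    reserved.experience(-0.4)   # a history of curt ones
print(round(cheerful.respond(0.0), 2), round(reserved.respond(0.0), 2))
```

After enough interactions, two instances of the same model diverge into distinct dispositions, which is the effect the patent description attributes to persistent emotional memory.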

Building on its work in emotion-aware computing, MetaSoul has also explored applications in immersive reading experiences. 

Emotion-aware reading environments

A pending application (U.S. Pat. App. Pub. No. 2025/0208697) describes an AI-driven system that enhances the reading experience by generating sensory elements, such as music, sound effects, or ambient lighting, based on both the content of the text and the reader’s predicted emotional response.

The invention addresses the limitations of conventional reading environments, which rely on static or manually selected audio and lighting. Because emotional responses shift dynamically throughout a story, manual adjustments can interrupt concentration and reduce immersion.

The system integrates a display device, an image capture unit for tracking eye movements, and an EPU. By monitoring gaze and reading speed, the system identifies the reader’s position in the text and anticipates upcoming passages. Using machine learning and emotional prediction models, it forecasts the reader’s likely emotional response and generates synchronized sensory outputs in real time.

Ambient music, sound effects, and lighting adapt continuously as the narrative unfolds, aligning with contextual cues such as mood shifts or environmental changes in the story. The system also responds to reading behavior, pausing outputs when reading stops and adjusting intensity based on pace. Over time, it refines its responses using historical reading patterns to improve engagement and comprehension.
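
The application describes this loop functionally rather than as code. A schematic version, with invented stand-ins for the gaze tracker, prediction model, and output controllers, could look like this:

```python
# Schematic sketch of the reading-system control loop. Every function and
# class name here is an invented stand-in; the published application
# describes the pipeline functionally, not as code.
from dataclasses import dataclass

@dataclass
class GazeSample:
    position: int            # index of the passage the reader's gaze is on
    words_per_minute: float  # current reading pace (0 means reading stopped)

def predict_emotion(passage: str) -> str:
    """Stand-in for the ML emotional-prediction model: a crude keyword check."""
    tense_cues = ("storm", "scream", "dark")
    return "tense" if any(w in passage.lower() for w in tense_cues) else "calm"

def render_outputs(emotion: str, intensity: float) -> None:
    """Stand-in for the music, sound-effect, and lighting controllers."""
    print(f"ambience={emotion!r} intensity={intensity:.2f}")

def reading_loop(passages, gaze_samples, baseline_wpm=250.0):
    for gaze in gaze_samples:                     # would stream from the camera
        if gaze.words_per_minute == 0:
            render_outputs("paused", 0.0)         # reading stopped: suspend outputs
            continue
        nxt = min(gaze.position + 1, len(passages) - 1)
        emotion = predict_emotion(passages[nxt])  # anticipate the upcoming passage
        # Faster reading raises output intensity, scaled against a baseline pace.
        intensity = min(1.0, gaze.words_per_minute / baseline_wpm)
        render_outputs(emotion, intensity)

passages = ["The morning was quiet.", "A storm broke over the hills.", "They ran."]
samples = [GazeSample(0, 240.0), GazeSample(0, 0.0), GazeSample(1, 300.0)]
reading_loop(passages, samples)
```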

According to Patrick, what sets this patent apart is a specific claim covering AI-generated visuals. In practice, this could allow a reader wearing XR or AR glasses to see a real-time, movie-like experience unfolding around them as they read. Rather than striving for full realism, the visuals may resemble a dreamlike layer of generated imagery designed to evoke strong emotional responses.

The concept points toward a future where the boundaries between text and visual storytelling blur, transforming books into immersive, multisensory experiences that respond intuitively to each reader.

The competitive landscape 

The field of emotion-aware computing is gaining attention as several companies seek to improve human-computer interaction by exploring emotion recognition through facial analysis, speech processing, and biometric signals. However, existing systems focus primarily on detecting emotions in isolated moments rather than maintaining emotional continuity over time.

MetaSoul differentiates itself by focusing on persistent emotional memory and adaptive emotional behavior, allowing AI systems to learn from past interactions and evolve their responses accordingly. This approach aligns particularly well with applications that require long-term relationships between users and intelligent systems.

Patrick Levy-Rosenthal

Patrick Levy-Rosenthal is CEO and CTO of MetaSoul Inc., where he drives AI innovation with a full Emotion and Personality stack to bring emotional depth to digital sentience. Patrick’s work in bio-inspired emotion synthesis led to the development of MetaSoul’s EPU (Emotion Processing Unit), enabling AI, digital humans, and robots to express 64 trillion emotional nuances every 0.1 seconds for life-like interactions.
