Patent Snapshot: EnCharge AI and its ‘first-of-its-kind’ on-device AI accelerator

March 2, 2026


Highlights:
  • EnCharge AI raised over $100 million and launched its first commercial AI accelerator, EN100, advancing analog in-memory computing from research into deployment.
  • The EN100 targets client and edge devices, delivering more than 200 TOPS within low power envelopes for local AI inference.
  • Its compute-in-memory architecture reduces data movement, improving efficiency versus conventional AI accelerators.

EnCharge AI is moving from academic research into commercial deployment as it seeks to address the growing power, cost, and scalability constraints of conventional AI hardware. Launched in May 2025, the company’s analog in-memory computing platform, the EN100, targets inefficiencies in traditional digital accelerators, which have become more pronounced as AI workloads expand beyond data centers into client and edge devices.

The Santa Clara–based semiconductor startup, spun out of Princeton University, is developing analog memory chips designed to accelerate AI inference while sharply reducing energy consumption. The company is targeting client and edge platforms such as laptops, workstations, and embedded systems, where power and thermal constraints increasingly limit the use of advanced AI workloads.

First-of-its-kind AI accelerator

Alongside the funding announcement, EnCharge introduced the EN100, which it describes as the industry’s first AI accelerator built on precise and scalable analog in-memory computing. The company claims that the EN100 is designed to deliver more than 200 tera-operations per second (TOPS) of AI compute within the power envelopes of client and edge devices.

EN100 enables AI inference to be performed locally rather than in centralized cloud infrastructure, addressing latency, cost, privacy, and security challenges. EnCharge said its accelerators can deliver up to 20 times better performance per watt than competing solutions across a range of AI workloads.

The EN100 comes in two form factors. An M.2 module for laptops delivers more than 200 TOPS within an 8.25-watt power envelope, enabling high-performance local AI inference on client devices. The second is a PCIe card for workstations that combines four neural processing units (NPUs) to deliver roughly one peta-operation per second of compute, targeting professional AI workloads that demand near-GPU-class capability with significantly lower power draw and cost.
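The efficiency implied by these figures can be checked with simple arithmetic. The TOPS and wattage numbers below come from EnCharge’s announcement as reported above; the derived ratios are back-of-the-envelope calculations, not vendor-published metrics:

```python
# Derived efficiency figures for the two EN100 form factors.
# Spec numbers (200 TOPS, 8.25 W, 4 NPUs) are from the announcement;
# the ratios computed here are our own arithmetic.

m2_tops = 200                       # M.2 module: >200 TOPS
m2_watts = 8.25                     # within an 8.25 W power envelope
m2_tops_per_watt = m2_tops / m2_watts
print(f"M.2 module: ~{m2_tops_per_watt:.1f} TOPS/W")    # ~24.2 TOPS/W

npus_per_card = 4                   # PCIe card combines four NPUs
card_pops = npus_per_card * m2_tops / 1000
# Four NPUs at the 200 TOPS baseline give at least 0.8 POPS, so the
# "roughly one peta-op/s" claim implies each NPU exceeds that baseline.
print(f"PCIe card: >= {card_pops:.1f} POPS aggregate")
```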

Software stack and early adoption

EN100 is supported by a full-stack software platform designed to optimize AI workloads across popular frameworks such as PyTorch and TensorFlow. The platform includes compilation tools, optimization software, and development resources intended to support a wide range of AI models, including generative language systems and real-time computer vision.

With support for up to 128 GB of LPDDR memory and up to 272 GB per second of bandwidth, EN100 is capable of running workloads traditionally reserved for specialized data center hardware. Its programmable architecture is designed to evolve alongside advancing AI models, helping ensure long-term flexibility as requirements change.

Funding reflects investor and policy interest

EnCharge AI has raised more than $100 million in Series B funding and unveiled its first commercial AI accelerator, marking a significant step in the transition of analog in-memory computing from academic research to market deployment. The funding round was led by Tiger Global.

The Series B round includes participation from financial and strategic investors across semiconductors, defense, industrial technology, and infrastructure. Participants include Maverick Silicon, Capital TEN, SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University, and Morgan Creek Digital, alongside returning investors RTX Ventures, Anzu Partners, Scout Ventures, AlleyCorp, ACVC, and S5V. Corporate investors include Samsung Ventures and HH-CTBC, a joint venture between Hon Hai Technology Group and CTBC VC.

The company has also received backing from In-Q-Tel and VentureTech Alliance, as well as grant funding from U.S. government organizations including DARPA and the Department of Defense. The timing of the raise aligns with broader U.S. efforts to strengthen domestic semiconductor capabilities and AI infrastructure.

From research to execution

EnCharge spent nearly a decade developing its core technology at Princeton University before formally launching in 2022 to pursue commercial partnerships and venture financing. Having secured more than $100 million in capital and launched its first commercial product, the company is now entering a stage in which market adoption and operational execution will determine whether analog in-memory computing can gain traction as a scalable, energy-efficient alternative for AI inference at the edge.

EnCharge AI: Patenting Activity

As of this writing, EnCharge AI has filed multiple international patent applications centered on its in-memory computing architecture. 

All seven applications claim a priority date of December 4, 2023, and were published on June 12, 2025:

  • WO2025122584 – Systems and methods for high-throughput data operations in in-memory computing arrays
  • WO2025122550 – Systems and methods for input reference generation technique for in-memory computing array
  • WO2025122556 – Systems and methods for row or column redundancy in in-memory computing arrays
  • WO2025122573 – Systems and methods for power and noise configurable analog to digital converters
  • WO2025122596 – Systems and methods for high-resolution, high-speed, analog voltage delivery to in-memory computing arrays
  • WO2025122564 – Configurable power management techniques for in-memory compute arrays
  • WO2025122588 – Systems and methods for power and noise configurable analog to digital converters

These applications cover the core subsystems required for scalable analog in-memory computing, including data operations, reference generation, redundancy, ADC design, voltage delivery, and power management. They establish a vertically integrated IP position around key performance and reliability constraints.

High-throughput compute-in-memory architectures for AI acceleration

One of the biggest challenges in modern artificial intelligence is the cost of moving data. In most computer systems, data must travel back and forth between memory and processors, which slows performance and consumes large amounts of power. As AI models grow larger and more complex, this constant data movement has become a major barrier to speed, efficiency, and scalability.

WO2025122584 addresses this problem by moving computation into the memory itself. Instead of sending data to a separate processor, the system performs calculations directly within memory arrays using specialized computing cells. These arrays are organized into multiple banks that can work in parallel, allowing large volumes of data to be processed at once while reducing delays caused by data transfers.

This approach delivers faster performance and lower energy use, particularly for AI workloads such as neural networks, where operations like matrix multiplication are repeated millions of times. By keeping data close to where computations happen, the architecture improves efficiency, supports flexible AI dataflows, and scales more effectively across different applications, from data centers to edge devices.
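The data-movement saving described above can be illustrated with a simplified software model. This is a conceptual sketch, not the patented circuit: the bank layout and function names are our own illustration of the general compute-in-memory idea, in which weights stay resident in per-bank arrays and only the small input vector and per-bank partial outputs move:

```python
import numpy as np

def banked_matvec(weight_banks, x):
    """Each bank holds a horizontal slice of the weight matrix and
    produces its own output rows in place; the full result is just
    the concatenation of the per-bank outputs, so the large weight
    matrix never moves."""
    return np.concatenate([bank @ x for bank in weight_banks])

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # weight matrix, resident "in memory"
x = rng.standard_normal(4)        # small input vector, the only data moved in

banks = np.split(W, 4)            # four banks, two rows each, working in parallel
y = banked_matvec(banks, x)

assert np.allclose(y, W @ x)      # identical result to a monolithic multiply
```

In hardware, each bank’s multiply-accumulate happens inside the memory array itself (here emulated with `@`), which is what eliminates the processor-to-memory round trips for the weights.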

The patent, titled “Systems and methods for high-throughput data operations in in-memory computing arrays,” was filed on December 4, 2024 and published on June 12, 2025. The patent lists Echere Iroaga and Naveen Verma as inventors. 

PatentRoundup

Sign up for our weekly newsletter for patent news, emerging innovations, and investment trends shaping the patent landscape.
