DEEPX AI Chip Line-up
Powering the Future of Edge AI

DX-M1

DX-M Series
DX-M1 delivers ultra-efficient AI performance, achieving 25 TOPS (200 eTOPS) under 5 W and significantly outperforming competitors in performance-to-power ratio.
Type:
AI Accelerator
AI Performance:
25 TOPS
Power Consumption:
3~5 W
Key Features
Exceptional Power Efficiency

Delivering breakthrough performance while consuming significantly less power than competitors, enabling longer battery life for mobile devices and reduced energy costs for data centers.

Dedicated DRAM Design

Purpose-built dedicated DRAM enables simultaneous multi-model processing, allowing your applications to run multiple AI models concurrently without performance degradation.

Universal CPU Compatibility

Engineered for seamless integration with any host CPU architecture—zero compatibility issues means faster deployment, simplified development, and protection for your existing infrastructure investments.

Leading Cost Efficiency

Our innovative design minimizes expensive on-chip SRAM requirements, delivering substantial cost advantages without compromising performance—making advanced AI accessible at scale.

AI Performance

| AI Model  | Input Resolution | Accuracy (mAP), FP32 GPU | Accuracy (mAP), INT8 DX-M1* | DX-M1 FPS | DX-M1 NPU Power (W) | DX-M1 FPS/W |
|-----------|------------------|--------------------------|-----------------------------|-----------|---------------------|-------------|
| YOLOv5m6  | 640×640          | 45.08                    | 45.07                       | 40.42     | 2.20                | 18.37       |
| YOLOv7-e6 | 640×640          | 55.58                    | 55.45                       | 19.49     | 2.50                | 7.81        |
| YOLOv8x   | 640×640          | 53.63                    | 53.12                       | 49.03     | 3.17                | 15.48       |
| YOLOv8l   | 640×640          | 52.57                    | 52.01                       | 91.13     | 3.60                | 25.32       |
| YOLOv8m   | 640×640          | 50.11                    | 49.56                       | 133.60    | 2.76                | 48.32       |
| YOLOv9c   | 640×640          | 52.86                    | 52.36                       | 47.89     | 2.83                | 16.95       |
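The efficiency column is simply throughput divided by NPU power (FPS/W). As a sanity check, this relationship can be recomputed from the published numbers; the snippet below is an illustrative sketch (the dictionary and helper name are ours, the figures are copied from the table):

```python
# Recompute the DX-M1 efficiency column (FPS/W) from the benchmark table.
# Model names, FPS, and power figures are as published above.
benchmarks = {
    # model: (FPS, NPU power in watts)
    "YOLOv5m6": (40.42, 2.20),
    "YOLOv7-e6": (19.49, 2.50),
    "YOLOv8x": (49.03, 3.17),
    "YOLOv8l": (91.13, 3.60),
    "YOLOv8m": (133.60, 2.76),
    "YOLOv9c": (47.89, 2.83),
}

def fps_per_watt(fps: float, watts: float) -> float:
    """Frames processed per second per watt of NPU power."""
    return fps / watts

for model, (fps, watts) in benchmarks.items():
    print(f"{model}: {fps_per_watt(fps, watts):.2f} FPS/W")
```

The recomputed values match the table's FPS/W column to within rounding of the published FPS and power figures.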
DX-M1M

DX-M Series
Ultra-Compact AI Processing with Integrated Memory. The DX-M1M builds on the foundation of our flagship DX-M1 AI accelerator by integrating high-performance LPDDR memory directly onto the same die. This design delivers the same industry-leading AI performance while dramatically reducing the overall footprint required for deployment.
Type:
AI Accelerator
AI Performance:

TBU

Power Consumption:

TBU

DX-V3

DX-V Series
DX-V3 is a powerful AI vision processor offering exceptional computational efficiency and versatile capabilities. This all-in-one solution integrates a CPU, ISP, DSP, video codec, and high-performance NPU, delivering 13 TOPS of AI power with industry-leading efficiency. Optimized for next-generation vision systems, DX-V3 processes imagery, video streams, and 3D point-cloud data with ease.
Type:
AI Vision SoC
AI Performance:
13 TOPS
Power Consumption:
TBU

DX-M2

DX-M Series
DX-M2 is a specialized semiconductor designed specifically for efficient processing of GenAI and language models with up to 20 billion parameters. Delivering 25–30 TPS (tokens per second), this processor enables deployment of sophisticated language models in a variety of edge and enterprise applications without sacrificing response time or accuracy.
Type:
GenAI Accelerator
AI Performance:
TBU
Power Consumption:
TBU

Frequently Asked Questions

Q: What is an AI Accelerator?
A: An AI Accelerator, also known as an NPU (Neural Processing Unit) or AI Chip, is an integrated chip specifically designed to run artificial intelligence software by executing machine learning models.

Q: What is the difference between an AI Accelerator and an AI Vision SoC?
A: An AI Accelerator is a general-purpose AI processor that accelerates AI/ML tasks across various domains. In contrast, an AI Vision SoC is specialized for computer vision, integrating both AI processing and image-related hardware.

Q: How can I evaluate DEEPX products?
A: You can purchase the Evaluation Kit from our website (Buy Now) and proceed with testing.
Any Questions?
For inquiries about our products and services, please contact us.