ExSLerate is the AI accelerator IP family that powers the Krsna SoC — and is licensable as standalone IP to silicon vendors, OEMs, and chip startups building their own AI hardware. Four configurations today (Lite to Apex). Two patented engines inside V2: Dynamic Neural Compression and the Infinite Series Engine. Two generations ahead on the roadmap. ARM-style licensing model for AI silicon.
Most AI silicon companies sell you one thing. We sell you two — because the buyers are different and the economics are different. Krsna SoC is the finished chip. ExSLerate is the IP behind it. Pick the model that matches your business.
For OEMs and device-makers buying a finished, production-ready AI chip. Per-chip pricing. Reference designs included. Lite for wearables, Apex for data center.
For silicon vendors and chip startups designing their own SoCs. Licensable IP — RTL, compiler stack, integration support. ARM-style commercial model: license fee + per-unit royalty.
V1 was the 2019 microprocessor challenge winner. V2 ships today across the Krsna SoC variants. V3 climbs to NVIDIA L40S-class inference workloads. V4 reaches A100-class — training and large-model inference.
Foundational IP. Won India's Microprocessor Challenge, ranked #1 of 30 finalists. Established the architectural patterns that V2 builds on.
4-configuration family — Lite (M64, always-on wearables) → Apex (M4096, robotics + automotive). Native INT4 / FP8 precision. Ships in the Krsna SoC. Two engines inside: Dynamic Neural Compression (DNC Gen 1) and the Infinite Series Engine. 28% lossless weight compression, 50% memory traffic reduction at peak context.
DNC Gen 2 — 60% weight compression + 50% KV-cache compression. Targets local 27B-class inference for enterprise RAG on a single 24 GB GDDR6 card. Half the RAM and a third of the memory bus width vs the standard requirement (48 GB GDDR6 / 384-bit).
DNC Gen 3 — 70% compression target. A100-class throughput envelope. Built for full-rack deployment in sovereign and private clouds. The endgame for the chip program: silicon-to-runtime co-design at hyperscaler scale.
ExSLerate V2 ships across four Krsna SoC configurations — Lite (M64) for always-on wearables, Pulse (M256) for smartwatches and smart speakers, Surge (M1024) for drones and aerial platforms, and Apex (M4096) for robotics and automotive. Same IP family, scaled across the full edge-to-robotics envelope.
| Variant | IP config | MAC count | Precision | Target |
|---|---|---|---|---|
| Krsna Apex | M4096 | 4,096 | INT4 · FP8 | Robotics · Automotive · Industrial |
| Krsna Surge | M1024 | 1,024 | INT4 · FP8 | Drones · Aerial · Light edge |
| Krsna Pulse | M256 | 256 | INT4 · FP8 | Smartwatch · Smart speaker |
| Krsna Lite | M64 | 64 | INT4 | Always-on wearables · Hearables |
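The MAC counts in the table translate directly into peak throughput. A minimal back-of-envelope sketch, assuming an illustrative 1 GHz clock (the clock frequency is an assumption for this example, not a published spec — real per-configuration numbers are NDA material):

```python
# Back-of-envelope peak-throughput sketch from the MAC counts above.
# ASSUMED_CLOCK_HZ is an illustrative assumption, not a product spec.

ASSUMED_CLOCK_HZ = 1.0e9  # 1 GHz -- assumed for illustration only

VARIANTS = {              # variant name: MAC count (from the table)
    "Krsna Apex":  4096,
    "Krsna Surge": 1024,
    "Krsna Pulse":  256,
    "Krsna Lite":    64,
}

def peak_tops(macs: int, clock_hz: float = ASSUMED_CLOCK_HZ) -> float:
    """Peak tera-ops/s: each MAC contributes 1 multiply + 1 add per cycle."""
    return 2 * macs * clock_hz / 1e12

for name, macs in VARIANTS.items():
    print(f"{name:12s} {peak_tops(macs):6.3f} TOPS @ 1 GHz (assumed)")
```

The 64× spread in MAC count between Lite and Apex is what lets one IP family span always-on wearables and automotive workloads.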
V2 is shipping today across endpoint and robotics. V3 climbs to SOHO-server class — making 27B-class inference fit on a single 24 GB GDDR6 card with DNC Gen 2. V4 enters the data center with DNC Gen 3 and an A100-class envelope. Each generation pushes the compression ratio further; each unlocks a higher tier of model size on commodity hardware.
DNC Gen 2 pushes weight compression to 60% and adds 50% KV-cache compression. Targets enterprise RAG on a single 24 GB GDDR6 card — half the RAM and a third of the bus width of the standard requirement.
DNC Gen 3 targets 70% compression. A100-class throughput envelope. Built for full-rack deployment in sovereign and private clouds — the silicon endgame of the program.
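The Gen 2 memory arithmetic can be sketched in a few lines. This is a hedged illustration, not the DNC algorithm: it assumes 1 byte per weight (FP8/INT8 storage) for a 27B-parameter model, and the KV-cache footprint is a placeholder chosen for the example:

```python
# Hedged sketch of the DNC Gen 2 memory budget for 27B-class inference.
# BYTES_PER_WEIGHT and KV_CACHE_GB are illustrative assumptions.

PARAMS = 27e9            # 27B-class model
BYTES_PER_WEIGHT = 1.0   # FP8/INT8 storage -- assumption for illustration
KV_CACHE_GB = 8.0        # placeholder long-context KV-cache footprint
CARD_GB = 24.0           # single 24 GB GDDR6 card

weights_gb = PARAMS * BYTES_PER_WEIGHT / 1e9       # 27.0 GB uncompressed
weights_compressed = weights_gb * (1 - 0.60)       # 60% weight compression
kv_compressed = KV_CACHE_GB * (1 - 0.50)           # 50% KV-cache compression
total_gb = weights_compressed + kv_compressed

print(f"weights:  {weights_gb:.1f} GB -> {weights_compressed:.1f} GB")
print(f"kv-cache: {KV_CACHE_GB:.1f} GB -> {kv_compressed:.1f} GB")
print(f"total:    {total_gb:.1f} GB (fits {CARD_GB:.0f} GB card: {total_gb <= CARD_GB})")
```

Under these assumptions the compressed working set lands well inside 24 GB, whereas the uncompressed weights alone would not — which is the gap the 48 GB / 384-bit baseline exists to cover.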
Detailed throughput, latency, power, and per-configuration benchmarks are released under NDA on an engagement basis.
A typical AI IP license hands you RTL and leaves you to figure out the rest. ExSLerate licensees get the IP plus the runtime that's already optimized to run on it — because we built both layers together.
Synthesizable Verilog RTL for the selected variant. Verification suite. Integration documentation.
CORE compiler pre-tuned for the licensed variant. Quantization, kernel scheduling, op fusion — all included.
EdgeFlow inference engine that runs out-of-the-box on your silicon. 193 model architectures pre-supported. Built on IREE / MLIR — open frontends, no vendor lock.
Engineering team available for SoC integration, customization, and tape-out support. Not a hands-off license.
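To make the quantization deliverable concrete, here is a minimal sketch of symmetric INT4 weight quantization — the general class of transform a compiler pass performs for native INT4 hardware. This is an illustrative reference implementation, not CORE's actual algorithm:

```python
# Minimal symmetric INT4 quantization sketch (illustrative, not CORE's
# actual pass). Floats map to signed INT4 [-8, 7] via a per-tensor scale.

def quantize_int4(weights):
    """Quantize float weights to signed INT4 with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # guard all-zero input
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT4 codes."""
    return [v * scale for v in q]

w = [0.12, -0.53, 0.91, -0.07]
q, s = quantize_int4(w)
approx = dequantize(q, s)   # close to w, at 4 bits per weight
```

A real pipeline would use per-channel or per-group scales and calibration data, but the core idea — trading 8× storage for a bounded rounding error — is the same.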
2019 — ExSLerate V1 ranked #1 of 30 finalists in MeitY's India Microprocessor Challenge. Foundational silicon recognition that seeded the IP family.
2023 — Aegis Graham Bell Award for the chip program. Selected into MeitY C2S — 1 of 13 companies in India's flagship semiconductor program.
2024 — Selected into Qualcomm QSMP as 1 of 2 cohort companies — industry-partner validation from the chip leader.
2025 — Co-development partnership with Brandworks Technologies announced. First wave of co-developed AI hardware planned for 2026.