// SILICON IP

License the IP.
Build your own silicon.

ExSLerate is the AI accelerator IP family that powers the Krsna SoC — and is licensable as standalone IP to silicon vendors, OEMs, and chip startups building their own AI hardware. Four configurations today (Lite to Apex). Two patented engines inside V2: Dynamic Neural Compression and the Infinite Series Engine. Two generations ahead on the roadmap. ARM-style licensing model for AI silicon.

Current generation
V2
Variants shipping
4
Generations on roadmap
4
Foundational year
2019

Buy the chip. Or license the IP.

Most AI silicon companies sell you one thing. We sell you two — because the buyers are different and the economics are different. Krsna SoC is the finished chip. ExSLerate is the IP behind it. Pick the model that matches your business.

// FINISHED CHIP

Krsna SoC →

For OEMs and device-makers buying a finished, production-ready AI chip. Per-chip pricing. Reference designs included. Lite for wearables, Apex for data center.

Buyer: OEMs, device-makers · Sale: per-chip · Margin: hardware
// LICENSABLE IP

ExSLerate IP

For silicon vendors and chip startups designing their own SoCs. Licensable IP — RTL, compiler stack, integration support. ARM-style commercial model: license fee + per-unit royalty.

Buyer: Silicon vendors · Sale: license + royalty · Margin: IP

Four generations. 2019 to 2028.

V1 was the 2019 microprocessor challenge winner. V2 ships today across the Krsna SoC variants. V3 climbs to NVIDIA L40S-class inference workloads. V4 reaches A100-class, covering training and large-model inference.

2019
V1 · Embedded
2024–2026
V2 · Endpoint + robotics
2027
V3 · SOHO server · 27B-class inference
2028
V4 · Data center · A100-class envelope
Each generation targets a higher compute tier. V3 and V4 are co-designed with EdgeFlow — so when the runtime absorbs new model architectures, the silicon is already ready for them.
V1
2019
SHIPPED

Embedded

Foundational IP. Won India's Microprocessor Challenge, ranked #1 of 30 finalists. Established the architectural patterns that V2 builds on.

MeitY MPC ranking
#1
Target
Embedded edge
V2
2024–2026
CURRENT

Endpoint + robotics

4-configuration family — Lite (M64, always-on wearables) → Apex (M4096, robotics + automotive). Native INT4 / FP8 precision. Ships in the Krsna SoC. Two engines inside: Dynamic Neural Compression (DNC Gen 1) and the Infinite Series Engine. 28% lossless weight compression, 50% memory traffic reduction at peak context.

Configurations
4
MAC range
M64 → M4096
Compression
28% lossless
V3
2027
ROADMAP

SOHO server · 27B-class inference

DNC Gen 2 — 60% weight compression + 50% KV-cache compression. Targets local 27B-class inference for enterprise RAG on a single 24 GB GDDR6 card. Half the RAM and a third of the memory bus width vs the standard requirement (48 GB GDDR6 / 384-bit).

DNC generation
Gen 2
Weight compression
60%
Target
27B on 24 GB
V4
2028
ROADMAP

Data center · A100-class envelope

DNC Gen 3 — 70% compression target. A100-class throughput envelope. Built for full-rack deployment in sovereign and private clouds. The endgame for the chip program: silicon-to-runtime co-design at hyperscaler scale.

DNC generation
Gen 3
Compression target
70%
Deployment
Sovereign full-rack
// V2 · CURRENT GENERATION

Four configurations. One IP family.

ExSLerate V2 ships across four Krsna SoC configurations — Lite (M64) for always-on wearables, Pulse (M256) for smartwatches and smart speakers, Surge (M1024) for drones and aerial platforms, and Apex (M4096) for robotics and automotive. Same IP family, scaled across the full edge-to-robotics envelope.

Variant       IP      MAC count   Precision    Target
Krsna Apex    M4096   4,096       INT4 · FP8   Robotics · Automotive · Industrial
Krsna Surge   M1024   1,024       INT4 · FP8   Drones · Aerial · Light edge
Krsna Pulse   M256    256         INT4 · FP8   Smartwatch · Smart speaker
Krsna Lite    M64     64          INT4         Always-on wearables · Hearables
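
For intuition on how MAC count maps to raw throughput, here is a back-of-envelope sketch. The 1 GHz clock and the usual 2-ops-per-MAC convention are assumptions for illustration only; actual per-configuration figures are released under NDA, as noted below.

```python
# Back-of-envelope peak-throughput arithmetic for the four V2 configurations.
# ASSUMPTIONS (not published on this page): a nominal 1.0 GHz clock, and the
# common convention of 2 ops per MAC per cycle (multiply + accumulate).

CLOCK_HZ = 1.0e9          # assumed nominal clock, not a published spec
OPS_PER_MAC = 2           # multiply + accumulate

configs = {
    "Krsna Lite  (M64)":   64,
    "Krsna Pulse (M256)":  256,
    "Krsna Surge (M1024)": 1024,
    "Krsna Apex  (M4096)": 4096,
}

for name, macs in configs.items():
    tops = macs * OPS_PER_MAC * CLOCK_HZ / 1e12
    print(f"{name}: {tops:.2f} peak INT4 TOPS at the assumed clock")
```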

Climbing the NVIDIA stack.

V2 is shipping today across endpoint and robotics. V3 climbs to SOHO-server class — making 27B-class inference fit on a single 24 GB GDDR6 card with DNC Gen 2. V4 enters the data center with DNC Gen 3 and an A100-class envelope. Each generation pushes the compression ratio further; each unlocks a higher tier of model size on commodity hardware.

V3
2027 · Roadmap

SOHO server · 27B-class inference

DNC Gen 2 pushes weight compression to 60% and adds 50% KV-cache compression. Targets enterprise RAG on a single 24 GB GDDR6 card — half the RAM and a third of the bus width of the standard requirement.

DNC generation
Gen 2
Weight compression
60%
Target model size
27B-class
Card footprint
24 GB GDDR6
V4
2028 · Roadmap

Data center · A100-class envelope

DNC Gen 3 targets 70% compression. A100-class throughput envelope. Built for full-rack deployment in sovereign and private clouds — the silicon endgame of the program.

DNC generation
Gen 3
Compression target
70%
Compute class
A100 tier
Deployment
Sovereign rack
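
The 27B-on-24 GB target can be sanity-checked with simple arithmetic. A minimal sketch follows, assuming FP8 weights (1 byte per parameter) and an illustrative 8 GB uncompressed KV-cache budget; neither assumption is a published spec, and real budgets depend on model, context length, and batch.

```python
# Back-of-envelope memory arithmetic behind the V3 "27B on 24 GB" target.
# ASSUMPTIONS (illustrative only): FP8 weights at 1 byte/param, the DNC Gen 2
# ratios quoted above, and an assumed 8 GB uncompressed KV-cache budget.

PARAMS = 27e9                  # 27B-class model
BYTES_PER_PARAM = 1.0          # FP8, an assumption for this sketch

raw_weights_gb = PARAMS * BYTES_PER_PARAM / 1e9    # 27.0 GB raw
gen2_weights_gb = raw_weights_gb * (1 - 0.60)      # DNC Gen 2: 60% weight compression
print(f"weights: {raw_weights_gb:.1f} GB raw -> {gen2_weights_gb:.1f} GB after DNC Gen 2")

kv_cache_gb = 8.0                                  # assumed KV budget for the scenario
kv_after_gb = kv_cache_gb * (1 - 0.50)             # 50% KV-cache compression
total_gb = gen2_weights_gb + kv_after_gb
print(f"KV cache: {kv_cache_gb:.1f} GB -> {kv_after_gb:.1f} GB; total ~{total_gb:.1f} GB on a 24 GB card")
```

At these assumptions the weights-plus-cache footprint lands near 15 GB, leaving headroom on a 24 GB GDDR6 card; tighter contexts or INT4 weights pull it lower still.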

Detailed throughput, latency, power, and per-configuration benchmarks are released under NDA on an engagement basis.

Not just RTL. The full stack.

A typical AI IP license hands you RTL and tells you to figure out the rest. ExSLerate licensees get the IP plus the runtime that's already optimized to run on it, because we built both layers together.

// LAYER 01

RTL deliverables

Synthesizable Verilog RTL for the selected variant. Verification suite. Integration documentation.

// LAYER 02

Compiler stack

CORE compiler pre-tuned for the licensed variant. Quantization, kernel scheduling, op fusion — all included.
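
CORE's internal passes aren't documented on this page, but the idea behind op fusion is easy to show. A generic NumPy sketch, not CORE's actual implementation: a matmul, bias-add, and ReLU collapsed into one kernel, so the intermediate tensors never round-trip through device memory.

```python
# Generic illustration of op fusion (NumPy stand-in, not CORE's actual pass).
import numpy as np

def matmul(x, w):            # three separate "kernels": each one writes a
    return x @ w             # full intermediate tensor back to memory

def bias_add(y, b):
    return y + b

def relu(y):
    return np.maximum(y, 0.0)

def fused_matmul_bias_relu(x, w, b):
    # One fused kernel: the intermediates stay in registers / local memory.
    return np.maximum(x @ w + b, 0.0)

x = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 16).astype(np.float32)
b = np.zeros(16, dtype=np.float32)

unfused = relu(bias_add(matmul(x, w), b))
fused = fused_matmul_bias_relu(x, w, b)
assert np.allclose(unfused, fused)   # same math, fewer memory round-trips
```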

// LAYER 03

Runtime

EdgeFlow inference engine that runs out-of-the-box on your silicon. 193 model architectures pre-supported. Built on IREE / MLIR — open frontends, no vendor lock.
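
Because EdgeFlow is built on IREE / MLIR, the compile flow follows standard IREE tooling. A sketch using the stock `iree-compile` CLI: the flag shown is real IREE usage, while a licensed ExSLerate variant would substitute its own backend plugin, whose name is not published here, so the example targets IREE's stock llvm-cpu backend instead.

```python
# Sketch of the IREE toolchain EdgeFlow builds on. `iree-compile` and the
# --iree-hal-target-backends flag are standard IREE tooling; llvm-cpu is a
# stock backend you can run today. A licensed variant would swap in its own
# backend plugin (name not published here, so none is shown).
import subprocess

subprocess.run(
    [
        "iree-compile",
        "model.mlir",                           # module from an open MLIR frontend
        "--iree-hal-target-backends=llvm-cpu",  # swap in the vendor backend here
        "-o", "model.vmfb",                     # IREE's deployable module format
    ],
    check=True,
)
```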

// LAYER 04

Integration support

Engineering team available for SoC integration, customization, and tape-out support. Not a hands-off license.

// PROOF

ExSLerate has been jury-validated since 2019.

2019 — ExSLerate V1 ranked #1 of 30 finalists in MeitY's India Microprocessor Challenge. Foundational silicon recognition that seeded the IP family.

2023 — Aegis Graham Bell Award for the chip program. Selected into MeitY C2S — 1 of 13 companies in India's flagship semiconductor program.

2024 — Selected into Qualcomm QSMP as 1 of 2 cohort companies — industry-partner validation from the chip leader.

2025 — Co-development partnership with Brandworks Technologies announced. First wave of co-developed AI hardware planned for 2026.

// LET'S BUILD

License ExSLerate. Build your silicon.