// FOUNDER'S DESK

Notes from the founder's desk.

Long-form writing, field notes, and benchmarks from Kamalakar Devaki — on edge AI, sovereign intelligence, the cloud-tax problem, and what it actually takes to build a full-stack AI company from the transistor up.

Pieces published
49
Topics
8+
Author
K. Devaki
Cadence
Weekly

Filter by topic.

49 pieces across edge AI, model architecture, silicon, runtime, and strategy. Use the filters to narrow by topic.

April 2026
Strategy

Enterprise AI Has a Token Leakage Problem

Enterprise AI bills aren't high because models are expensive — they're high because tokens leak. Hallucinations, poor orchestration, context overload. A full-stack approach (model strategy + runtime + monitoring) cuts token consumption 30–40%.

LinkedIn article · Read on LinkedIn ↗
April 2026
Strategy

Why the Next 100× AI Returns May Not Be in Models

The inference inflection point has arrived. Competitive advantage is shifting from model size to deployment efficiency — cost per inference, energy, latency.

LinkedIn article · Read on LinkedIn ↗
Q1 2026
Shakti

Shakti Architecture: Designing Language & Vision Models for the Real World

How the Shakti family is designed for real-world enterprise constraints — architecture choices that make compact models punch above their weight.

LinkedIn article · Read on LinkedIn ↗
Q1 2026
Shakti

Shakti-4B — A Production-Grade Vision-Language Model

Shakti-4B as a production-ready VLM — engineered for the document-intelligence workloads enterprises actually run, not for leaderboard hill-climbing.

LinkedIn article · Read on LinkedIn ↗
Q1 2026
Strategy

The Edge AI Systems Problem — Here's How We're Solving It

The edge isn't a smaller cloud — it's a different problem. SandLogic's approach to silicon, runtime, and models as a co-designed system.

LinkedIn article · Read on LinkedIn ↗
Q4 2025
Shakti

Shakti-4B + OCR — Beating DeepSeek

Shakti-4B benchmarked against DeepSeek on OCR — quick field note on where the comparison lands.

LinkedIn post · Read on LinkedIn ↗
Q4 2025
Krsna · ExSLerate

Raising the Bar on On-Device AI — ExSLerate v2 Benchmarks

New benchmarks for ExSLerate v2 — what the numbers mean for on-device AI deployment and how they compare to incumbent options.

LinkedIn article · Read on LinkedIn ↗
Q4 2025
EdgeMatrix

Benchmarking LMCache vs EdgeMatrix — Why Caching Alone Isn't Enough

Prefix caching is necessary but not sufficient. Why hybrid KV-cache reuse beats prefix-only approaches in multi-tenant inference workloads.

LinkedIn article · Read on LinkedIn ↗
Q4 2025
Krsna · ExSLerate

Core/Edge Energy Performance in AI Chips

How to think about energy-per-inference at the core/edge boundary — the metric that actually matters at scale.

LinkedIn article · Read on LinkedIn ↗
Q4 2025
EdgeMatrix

Redefining LLM Inference — How EdgeMatrix Outperforms vLLM

EdgeMatrix vs vLLM head-to-head on enterprise SLMs — the architectural choices behind a 73% throughput lift on L40s.

LinkedIn article · Read on LinkedIn ↗
Q4 2025
Krsna · ExSLerate

Engineering Scalable Edge AI — The Semiconductor Stack

What the semiconductor stack for scalable edge AI actually looks like — silicon, compiler, runtime, and how they need to be designed together.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
Strategy

Building the Full-Stack AI Future — Chip, Runtime, Models

The thesis statement for SandLogic — why full-stack vertical integration is the right shape for an AI company in 2025, not a luxury.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
Krsna · ExSLerate

Why AI Chip Makers Need In-House Research, Now More Than Ever

Chip companies that outsource research are running an open-loop strategy. The case for the closed-loop alternative — and what it changes.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
Shakti

Lexicons, Nexons, Shakti — A Continuum of Intelligence

Introducing the SandLogic model continuum: Lexicons (curated open-source, quantized), Nexons (open foundations refined with our datasets), Shakti (in-house, ground-up).

LinkedIn post · Read on LinkedIn ↗
Q3 2025
Shakti

Shakti LLM Series — Post 2: Built or Borrowed?

When do you build a sovereign language model and when do you start from open weights? Trade-offs from the SandLogic decision tree.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
Shakti

Shakti LLM Series — Post 1: Why We Built a Sovereign Language Model

The founding rationale for Shakti — why sovereignty, language coverage, and edge deployment forced the in-house path.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
Field Notes

GenAI · Edge AI · Multi-Modal LLMs — Field Note

Short field-note on multi-modal LLMs running on the edge — what works, what doesn't yet.

LinkedIn post · Read on LinkedIn ↗
Q3 2025
Strategy

Escape the Cloud Tax

Companion post for the "Escape the Cloud Tax" series — making the case for on-prem inference economics.

LinkedIn post · Read on LinkedIn ↗
Q3 2025
Strategy

Escape the Cloud Tax — Post 5: Serve Faster, Spend Smarter, Scale

Final post in the "Escape the Cloud Tax" series — how to design inference for speed, cost, and scale simultaneously.

LinkedIn article · Read on LinkedIn ↗
Q3 2025
EdgeMatrix

LLM Inference Acceleration — Field Note

Short post on the practical mechanics of LLM inference acceleration on the edge.

LinkedIn post · Read on LinkedIn ↗
Q3 2025
Field Notes

LLM MLOps on the Edge — Field Note

Operating LLMs at the edge — what an MLOps stack looks like when there is no cloud safety net.

LinkedIn post · Read on LinkedIn ↗
Q2 2025
Field Notes

LLM Inference MLOps — Notes

Quick notes on inference MLOps — observability, model swapping, drift detection at production scale.

LinkedIn post · Read on LinkedIn ↗
Q2 2025
EdgeMatrix

EdgeMatrix vs the Cloud Tax — Field Note

How EdgeMatrix maps to the "escape the cloud tax" thesis — the economics in one chart.

LinkedIn post · Read on LinkedIn ↗
Q2 2025
Krsna · ExSLerate

ExSLerate — On-Chip AI for the Edge

Introducing the ExSLerate IP family — what makes an AI accelerator chip "edge-native" rather than a shrunk-down data-center part.

LinkedIn post · Read on LinkedIn ↗
Q2 2025
EdgeMatrix

EdgeMatrix — Scaling 70B-Parameter Models for Enterprise AI

How EdgeMatrix scales to 70B-parameter LLMs without the cloud sticker shock — the engineering trade-offs explained.

LinkedIn article · Read on LinkedIn ↗
Q2 2025
Shakti

Shakti-4B's OCR Capabilities — Comprehensive Evaluation

Comprehensive evaluation of Shakti-4B's OCR performance — datasets, methodology, and head-to-head benchmarks.

LinkedIn article · Read on LinkedIn ↗
Q2 2025
EdgeMatrix

How EdgeMatrix Is Redefining Enterprise AI — More for Less Cost

Concrete enterprise economics — what "more for less" looks like when the inference layer is engineered for the workload.

LinkedIn article · Read on LinkedIn ↗
Q2 2025
Shakti

Shakti-4B — Multi-Modal AI Model Powering Intelligence

Shakti-4B as a multi-modal foundation — what it can do today across vision and language.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
Shakti

Shakti-1B — Vision-Language Model Built for Enterprise

Shakti-1B as the right size for many enterprise document workflows — fast, accurate, and edge-deployable.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
LingoForge

LingoForge — Revolutionizing How Enterprises Harness AI

LingoForge as the agent-orchestration layer enterprises actually need — and what "actually need" means in regulated industries.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
Sruthi · ASR

Revolutionizing ASR with Samba-ASR

Samba-ASR — the Mamba-based architecture under Sruthi-S that beats Whisper-large-v3 on average WER. Linear complexity, frontier accuracy.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
Sruthi · ASR

Speech Recognition Innovation — Field Note

Quick note on speech-recognition innovation — what changed and where it goes next.

LinkedIn post · Read on LinkedIn ↗
Q1 2025
Shakti

Shakti LLM · Generative AI — Recognition

Recognition for Shakti LLM in generative-AI excellence rankings.

LinkedIn post · Read on LinkedIn ↗
Q1 2025
Shakti

Real-World Applications of Shakti LLMs — Revolutionizing AI

How Shakti models show up in real-world enterprise deployments — concrete use cases across verticals.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
Shakti

Shakti LLMs Driving On-Device AI Workplace Agents

On-device workplace AI agents powered by Shakti — what becomes possible when the model lives on the device.

LinkedIn article · Read on LinkedIn ↗
Q1 2025
Shakti

Precision & Power — Shakti's Blueprint for AI Excellence

The blueprint behind the Shakti family — where precision and power meet to define the model architecture.

LinkedIn article · Read on LinkedIn ↗
Q4 2024
Shakti

Harnessing the Power of Shakti — LLM Series

A walk-through of the Shakti family for builders — what to deploy, where, and how to think about model selection.

LinkedIn article · Read on LinkedIn ↗
Q4 2024
Shakti

From Edge to Excellence — Shakti LLM Revolution for Enterprise

Shakti from the edge to enterprise excellence — how the model line evolved from edge-first design to enterprise scale.

LinkedIn article · Read on LinkedIn ↗
Q4 2024
Field Notes

NASSCOM DeepTech Club — Startup Badge Awarded

SandLogic recognized by NASSCOM's DeepTech Club — a milestone moment.

LinkedIn post · Read on LinkedIn ↗
Q4 2024
Strategy

Make in India · AI for Good · Enterprise AI

On building sovereign AI for India — Make in India meets AI for Good meets enterprise reality.

LinkedIn post · Read on LinkedIn ↗
Q4 2024
Shakti

Shakti-2.5B — Live on Hugging Face

Announcing the Shakti-2.5B Hugging Face Space — explore the model interactively.

LinkedIn post · Read on LinkedIn ↗
Q4 2024
Shakti

Shakti — A 2.5-Billion-Parameter Small Language Model

The first wide-audience announcement of Shakti-2.5B — what it is, why it's small on purpose, and what it beats.

LinkedIn post · Read on LinkedIn ↗
Q4 2024
Strategy

Revolutionizing UI Localization Testing with LLMs

How LLMs change the economics of UI localization testing — a vertical use case for compact language models.

LinkedIn article · Read on LinkedIn ↗
Q4 2024
HaluMon

Shakti LLM · Responsible AI — Field Note

On building responsible AI guardrails into a sovereign LLM — the design choices behind HaluMon.

LinkedIn post · Read on LinkedIn ↗
Q3 2024
Lexicons

Introducing LexiQ — Your AI-Powered Assistant for KPI & PowerBI

LexiQ — a Lexicon-built assistant for KPI and PowerBI workflows. Domain-specialized AI shipped as a product, not a demo.

LinkedIn article · Read on LinkedIn ↗
Q3 2024
Lexicons

Optimized Llama3-Med42-8B GGUF SandLogic Lexicon

A medical-domain Lexicon — Llama3-Med42-8B optimized via SandLogic's quantization recipe. What enterprise-ready open-source looks like.

LinkedIn article · Read on LinkedIn ↗
Q3 2024
Lexicons

Unlocking Bilingual AI — A SandLogic Lexicon-Based Approach

Bilingual AI via curated Lexicons — the engineering and the dataset choices behind production-grade language coverage.

LinkedIn article · Read on LinkedIn ↗
Q3 2024
Lexicons

Turbocharge Your AI with SandLogic Lexicons

Why curated open-source — quantized, packaged, and benchmarked — beats raw model downloads for enterprise teams.

LinkedIn article · Read on LinkedIn ↗
Q3 2024
HaluMon

Introducing HaluMon — Ensuring Language-Model Reliability

The launch post for HaluMon — what reliable LLM deployment looks like in regulated industries, and the four-metric scoring that makes it auditable.

LinkedIn article · Read on LinkedIn ↗
Showing 49 of 49 pieces

Follow on LinkedIn ↗
// ABOUT THE AUTHOR

Eight years. One thesis.

Kamalakar Devaki — Founder & CEO, SandLogic Technologies


Kamalakar Devaki founded SandLogic in 2018 on the bet that intelligence belongs on the device, not rented from the cloud. Eight years later, the silicon, the runtime, the models, and the applications have all shipped under one roof.

The writing on this page is not marketing. It's the working record: engineering decisions, benchmark numbers, strategic bets, and the occasional unfiltered opinion. It all lives on LinkedIn because that's where the audience reads, comments, and shares. This page curates the archive in one place.

The thesis hasn't changed. The execution has.

// LET'S BUILD

Want a deeper conversation? Talk to us.