Sarvam AI Just Made India’s Most Credible Argument for Sovereign AI

At the India AI Impact Summit in New Delhi on Tuesday, Sarvam AI co-founder Pratyush Kumar pulled out a basic feature phone, the kind with a physical keypad that still outsells smartphones across rural India, and demonstrated a chatbot called Vikram holding a fluent conversation in Hindi, then Punjabi, then Marathi, switching mid-exchange. The model running Vikram was Sarvam 30B, trained from scratch by a Bengaluru startup that did not exist two and a half years ago. No OpenAI API. No Google Cloud. Just Indian-built intelligence on a device that most AI companies don’t acknowledge exists.

Yesterday, Sarvam officially released two foundational models: Sarvam 30B and Sarvam 105B. These are not fine-tuned wrappers around Llama or Mistral, the shortcut most Indian AI startups have quietly taken. Sarvam built them from scratch, trained them on data that reflects how Indians actually communicate (including code-mixed languages like Hinglish and Tamil-English), and is releasing the weights openly. Sarvam claims the 105B model outperforms DeepSeek R1, China's 671-billion-parameter flagship, while using roughly one-sixth of the compute. If that claim holds under independent scrutiny, India has just proven it can build frontier AI at a fraction of the cost and without foreign dependency. That is a different kind of story from yet another benchmark announcement.

Why the Architecture Is the Point

Both models use a Mixture-of-Experts (MoE) architecture, which is what makes them viable for India at scale. The 30B model stores 30 billion parameters of knowledge, but for each output token it generates, only 1 billion of them activate. The 105B model activates only 9 billion per token. Inference cost, meaning what it actually costs to run a model every time a user sends a message, tracks active parameters, not total ones: a 30B MoE model still needs all 30 billion weights in memory, but its per-token compute is roughly that of a 1B dense model. That is the economic breakthrough. OpenAI's pricing is calibrated for American enterprise budgets; an Indian startup building a Hindi voice bot for agricultural credit workers cannot afford GPT-4 at scale. Sarvam's cost structure changes that calculation entirely.
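
To make the active-versus-total distinction concrete, here is a toy top-k routing layer sketched in NumPy. The sizes, expert count, and gating rule below are illustrative assumptions, not Sarvam's published architecture; the only point is that every expert's weights sit in memory while just the few experts the router picks do any arithmetic for a given token.

```python
# Toy Mixture-of-Experts layer: illustrative sizes only, not Sarvam's real design.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 64, 256      # hypothetical hidden sizes
n_experts, top_k = 16, 2     # 16 experts stored, only 2 consulted per token

# Every expert's weights exist in memory (the "total parameters")...
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    scores = x @ router                       # this token's affinity for each expert
    top = np.argsort(scores)[-top_k:]         # indices of the k best-matching experts
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()
    out = np.zeros_like(x)
    for g, idx in zip(gate, top):
        w_in, w_out = experts[idx]
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)   # ...but only k of them do work
    return out

token = rng.standard_normal(d_model)
_ = moe_forward(token)

# Per-token expert compute scales with top_k / n_experts of the stored expert parameters.
dense_flops = n_experts * 2 * 2 * d_model * d_ff   # all 16 experts, two matmuls each
moe_flops = top_k * 2 * 2 * d_model * d_ff         # only the 2 routed experts
print(f"active fraction of expert compute: {moe_flops / dense_flops:.2%}")   # 12.50%
```

With 2 of 16 experts firing, the expert compute per token is 12.5% of the equivalent dense stack; Sarvam's claimed ratio, 1 billion active out of 30 billion stored, is the same kind of saving at much larger scale.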

The 105B model’s 128,000-token context window means it can hold an entire loan document, a year of financial records, or a 100-page regulatory filing in memory while answering questions about it. The 30B model’s 32K window is built for real-time conversation. These are not the same product for the same use case — they are two different tools for two different layers of the economy.
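
As a rough sanity check on those document sizes, here is a back-of-envelope token count; the per-page word count and tokens-per-word ratio are assumed averages, not figures from Sarvam's tokenizer, and Indic scripts often tokenize less compactly than English.

```python
# Rough token budget for a 100-page filing, using assumed averages.
words_per_page = 500        # dense legal or regulatory text
tokens_per_word = 1.3       # typical for English; often higher for Indic scripts
pages = 100

doc_tokens = int(pages * words_per_page * tokens_per_word)
print(doc_tokens)                  # 65000
print(doc_tokens <= 128_000)       # True: fits the 105B model's window
print(doc_tokens <= 32_000)        # False: too big for the 30B model's 32K window
```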

This Is Infrastructure Policy, Not a Tech Announcement

The IndiaAI Mission subsidised Sarvam’s training compute, providing access to 4,096 NVIDIA H100 GPUs. That investment should be understood the same way you’d understand a government building a power plant: not as a technology grant, but as strategic infrastructure. Every major Indian institution that wants frontier AI capability currently routes through OpenAI (American servers, American jurisdiction) or DeepSeek (Chinese infrastructure, with active geopolitical tension between the two countries). Banking regulators, defence departments, and state governments cannot let sensitive data leave the country or land on servers they do not control. Sarvam’s models, deployable on private cloud infrastructure within India, eliminate that dependency entirely. The intellectual property belongs to India. If geopolitical conditions sever access to US or Chinese AI tomorrow, India now has a backup brain that it built itself.

DeepSeek’s January 2025 release already proved that frontier AI was not exclusively American. It rattled US technology markets and reframed the global AI race from a two-player game into something more complex. Sarvam’s 105B — if the benchmark claims hold — extends that logic: if China could challenge America, India can now challenge both. The India AI Impact Summit running this week is not incidental context. Soket AI, Gnani, Gan AI, and others are all launching foundational models simultaneously. This is a coordinated national effort to ensure India is not simply a consumer of foreign intelligence. Sarvam is the most advanced expression of that effort to date.

Who It Actually Serves, and the Honest Gaps

For developers and startups, the open-source release is immediately valuable. The economics of building vernacular applications on GPT-4 simply do not work: the API cost prices out exactly the use cases that matter most for India. Sarvam's MoE efficiency changes the unit economics for voice bots in regional languages, education tools for rural communities, and customer service in the 22 scheduled Indian languages that OpenAI treats as afterthoughts. For enterprises and government, the 105B model's reasoning depth and data-residency story are the headline. Banks, insurance companies, and government departments can deploy these models on domestic infrastructure without compliance gymnastics. The government's Citizen Connect 2047 initiative and AI4Pragati, both of which Sarvam is building for, represent the kind of public-sector AI deployment that only becomes possible when the infrastructure is sovereign.

The honest gap is the same across all of these audiences: open-source weights are not a deployed product. The developer tooling, the fine-tuned vertical models for healthcare, agriculture, and legal work, the voice interface that works reliably on a 2G connection in Rajasthan: none of this exists yet at scale. Sarvam knows this; Kumar framed the company's next phase explicitly around applications rather than model size. But the hard, unglamorous work of turning a capable model into something a farmer in Vidarbha actually uses is ahead of them, not behind.

Two Things Have to Go Right

First: the benchmark claims need independent verification. Announcing that your model outperforms DeepSeek R1 at a government-hosted summit, using your own evaluation framework, is not the same as the research community reproducing that result on held-out tasks. Every AI lab, OpenAI and Google included, optimises for the benchmarks it publishes. The specific claim that Sarvam 105B outperforms a 671B-parameter model on key reasoning metrics needs external scrutiny before it becomes received fact. This is not a criticism of Sarvam's work. It is the minimum standard of evidence the field requires.

Second: open source needs to actually mean open source. Sarvam has committed to releasing model weights. It has not committed to releasing training data or full training code. That distinction matters enormously for Indian language AI, where bias in training data has direct consequences for the communities the models are meant to serve. A model that performs well in formal Hindi and poorly in Bhojpuri does not serve the population it was built for — it serves the population that was already served. Full transparency about what went into the training would allow the research community to audit for exactly these failures. Sarvam should be pressed on this publicly.

The Bottom Line

Sarvam has done something genuinely difficult: built frontier-class language models from scratch, in under three years, with resources that are a fraction of what OpenAI or Google DeepMind command, and done it specifically for the languages and use cases that major labs treat as secondary markets. That deserves recognition without the hedging that characterises most Indian technology commentary, which tends to either oversell domestic achievements or dismiss them depending on the political moment.

India’s sovereign AI mission does not succeed when Sarvam releases model weights. It succeeds when a health worker in Chhattisgarh uses an AI system in Gondi to identify symptoms, when a farmer in Punjab gets crop insurance guidance in Punjabi without navigating a call centre, when a municipal government in Kerala processes citizen grievances in Malayalam at a cost that makes it financially practical at scale. The models released yesterday make all of that more possible than it was on Monday. They do not make it inevitable. The foundation is real. What gets built on it is the question that matters.

Key Facts

  • Sarvam AI founded: July 2023, Bengaluru. Co-founders: Vivek Raghavan and Pratyush Kumar (formerly AI4Bharat).
  • Funding: $50 million+. Investors: Lightspeed, Khosla Ventures, Peak XV.
  • Compute: IndiaAI Mission — 4,096 NVIDIA H100 GPUs, infrastructure from Yotta, support from Nvidia.
  • Sarvam 30B: 30B total / 1B active parameters. 32K context. Trained on 16T tokens.
  • Sarvam 105B: 105B total / 9B active parameters. 128K context. Trained from scratch.
  • Benchmark claim: 105B outperforms DeepSeek R1 (671B total parameters) on key benchmarks. Requires independent verification.
  • Languages: All 22 scheduled Indian languages. Voice-first. Designed for feature phones.
  • Forthcoming: Sarvam for Work (enterprise), Samvaad (conversational platform), coding models.

Karan Rathore
Karan Rathore is a tech reviewer at Smartprix. With an electrical engineering degree from BITS Pilani, he brings hands-on, expert analysis to his reviews of mobile hardware and automotive tech. See all of his work on his official author page.
