What Does Sovereignty Mean, Anyway?
The India AI Impact Summit kicked off today under the banner of “Shaping AI for Humanity, Inclusive Growth, and Sustainability.” The US delegation arrived with more than 100 companies represented and a distinct agenda: selling the American stack (American chips, American cloud infrastructure, American models) wrapped in the language of “sovereign AI.”
Back in September, I argued that African nations faced a closing window to shape global AI governance before the rules were set by others. Five months later, the window hasn’t closed — but the terms of the conversation have shifted in ways that should concern anyone invested in a cooperative AI future.
Same Words, Different Languages
“Sovereign AI” is the phrase of the moment. As Pablo Chavez documents in Lawfare, the number of government-backed sovereign AI projects has more than tripled in two years, from roughly 40 projects across 30 countries to nearly 130 across more than 50. Everyone is talking about sovereign AI. The problem is that almost no one means the same thing by it.
When the Trump administration says “sovereign AI,” it means deployment-layer control: countries get to configure and run AI systems on their own terms, but the chips, the frontier models, and the cloud infrastructure underneath remain American-originated and American-controlled. The White House Office of Science and Technology Policy has positioned this as helping countries build “sovereign AI capabilities with American technology,” meaning sovereignty as a feature of the American stack, not independence from it.
When governments like India say “sovereign AI,” they mean something different, typically domestic control over allocation of compute, data governance, and model development. India’s IndiaAI Mission is building control points across each of these layers, not to replace the American stack entirely, but to ensure New Delhi decides who gets compute and on what terms. This is sovereignty as state capacity: leverage over the infrastructure that runs on your soil, regardless of who manufactured the chips.
And when the civil society organizations and Global South stakeholders who shaped the summit’s own agenda say “sovereign AI,” they mean something else again. The MAP-AI insights document that emerged from pre-summit consultations frames sovereignty in terms of democratic participation and community agency — ensuring that local communities have meaningful say in how AI systems affect their lives, grounded in transparent and accountable governance. This vision goes beyond which government controls the stack. It asks who has power over AI outcomes for actual people, and whether that power is exercised democratically.
These aren’t different degrees of the same concept. They’re entirely different concepts wearing the same label. The US arrives in New Delhi talking about modular stack packages. The summit’s framing document talks about connectivity, electricity, labor protections for data workers, and linguistically inclusive models. They’re not disagreeing; they’re simply not having the same conversation.
Full-stack AI sovereignty, in any case, isn’t a realistic goal for the vast majority of nations. You need chip fabrication, frontier model training capacity, massive compute, and deep talent pipelines. A handful of countries can plausibly build all of that. For everyone else, “sovereign AI” functions less as an achievable end state and more as a negotiating position. It’s a way to signal to Washington and Beijing that you have options, that your terms matter, that you’re not captive to anyone’s stack. Countries using sovereignty language aren’t delusional about their capacity to go it alone. But they want a deal with guarantees that the rules won’t change after signing.
That’s a dynamic the US should understand and work with. Countries posturing for better terms are countries that want to be at the table. The right response is to meet them there with credible assurances.
Instead, the US appears to be doing the opposite.
The Trust Deficit
The core problem isn’t the technology package; it’s the partner selling it. Countries need a reliable, trustworthy AI infrastructure partner: assurance that access won’t be cut off, that the rules won’t change midstream, and that technological dependence won’t be wielded as political leverage.
The US cannot credibly offer any of those assurances right now. Consider what countries evaluating the American AI stack see when they look at the broader US posture. The US dissolved USAID, let AGOA lapse for months before offering a bare-minimum one-year renewal after African leaders lobbied for sixteen, and imposed travel bans and visa bonds on Global South countries. Those aren’t AI policy — but they tell countries a lot about how this partner treats relationships. Meanwhile, the withdrawal from UNESCO and attempts to preempt state-level AI regulation signal that the US isn’t interested in governance frameworks it doesn’t control — domestically or internationally. European officials have started talking openly about a technology “kill switch” — the risk that Washington could unilaterally terminate access to critical tech infrastructure.
This shows up in the data. Afrobarometer’s most recent survey across 29 African countries found that China has overtaken the US in favorability for the first time — 60% of respondents view China’s influence positively, compared to 53% for the United States. And that data was collected before the worst of the current administration’s Africa-specific actions. The numbers have almost certainly gotten worse since. This isn’t because Africans are uncritically pro-China; majorities of those aware of Chinese loans still worry their governments have borrowed too much. It’s because the US keeps making the relationship harder, while China keeps showing up with infrastructure. African nations have historically been wary of dependency on any external power. But that wariness is pragmatic, not ideological — and it shifts based on who’s actually delivering value and who’s adding friction. Right now, the US is adding friction.
The Self-Defeating Loop
This is where the US strategy looks self-defeating. The more Washington pursues AI dominance rather than partnership, the more it validates the hedging behavior it wants to prevent. Countries that perhaps would have accepted a reasonable deal on the American stack (deployment control, some continuity assurances, fair terms) are instead accelerating investments in alternatives. They’re deepening relationships with Chinese providers offering turnkey solutions. They’re building whatever domestic capacity they can, however imperfect. They’re using sovereignty language not just as a negotiating position but as a genuine diversification strategy.
The Chavez analysis identifies the mechanism precisely: the American AI Exports Program offers deployment-layer sovereignty, but refuses to address the continuity and jurisdictional concerns that actually drive the sovereign AI impulse. There are no guarantees of uninterrupted access independent of export licensing. No portability rights if the relationship sours. No data residency commitments. And under the CLOUD Act, any provider subject to US jurisdiction can be compelled to disclose data regardless of where it’s stored. For countries that define sovereignty as insulation from foreign government discretion, the American package has a structural gap at its center. And any deal you sign may only be as durable as the current political mood in Washington.
What This Means for Africa and the Global South
African countries sit at the sharp end of this dynamic. They’re precisely the markets the American AI Exports Program is designed to target — resource-constrained, in need of infrastructure, with limited alternatives. They’re also the ones with the least leverage to push back individually and the most to lose when a partner changes the rules.
Collectively, African nations hold significant leverage — through critical mineral reserves, through the African Union’s growing institutional voice, through strategic alignment with other Global South groupings. The countries that will navigate this muddle most effectively are the ones that organize not just for better terms on someone else’s technology, but to ensure governance frameworks center the people AI is supposed to serve.
The MAP-AI pre-summit consultations put it plainly: democratizing AI must go beyond access to models and data. Without parallel investment in connectivity, electricity, infrastructure, and meaningful participation in governance, expanding access to technology is just expanding dependence by another name. That’s the conversation the summit is trying to have. It’s likely not the conversation the US came for.
So what does sovereignty mean? It depends on who’s asking, and what they actually need. Until Washington figures that out, the American stack is likely to keep losing ground to better partners.
One of my LinkedIn readers pointed me to Heesoo Jang’s very apt AIES-25 paper, “Exporting Autonomy, Importing Dependency: The Geopolitical Work of Sovereign AI”: https://ojs.aaai.org/index.php/AIES/article/view/36634