Recently, at the Rio summit, the BRICS bloc called for the United Nations to lead global AI governance, citing human rights, equitable access, and the need to prevent militarization.
It was a performance.
The call didn’t arise from unity or principle. It was a strategic maneuver, designed to stall the West while BRICS builds quietly at home. China, the bloc’s gravitational center, isn’t advocating oversight out of altruism. It’s wiring up molten salt reactors to power round-the-clock inference, building government-backed compute hubs, and stacking open-weight models: AI systems whose internal architecture and parameters are publicly released, allowing anyone to run or modify them.
They’re not going to wait for the UN. They’re laying the groundwork for independence now.
This is the architecture of sovereign inference: the ability to run advanced AI models entirely on domestic hardware, with no reliance on foreign APIs, platforms, or compliance.
On the surface, BRICS is drafting rules. Underneath, they’re trying to block the West from hard-coding its advantage. This isn’t a brotherhood of ideals. It’s a business arrangement. Brazil doesn’t share China’s political model. India doesn’t share Russia’s foreign policy. But they do share one thing: a desire to slow Western dominance long enough to write their own playbooks.
So when they invoke “equity” and “human rights,” don’t mistake it for shared conviction. It’s shared positioning.
Because what keeps Western frontier labs like OpenAI awake at night isn’t misuse; it’s irrelevance. It’s the fear of a world where they’re no longer needed. So they warn about bad actors, but what they really mean is independence from their paradigm.
In response, Western labs reach for regulatory capture: a quiet strategy in which private firms write their own rules, convincing lawmakers that only their closed systems can be trusted. Safety becomes a smokescreen. Governance becomes gatekeeping.
And while the West flails through moral panic and lobbying blitzes, BRICS is buying time.
Theatrics aside, it’s a scramble for digital ground.
The Theater of Governance
BRICS isn’t appealing to the UN out of principle. It’s a soft-power counterstrike against U.S./EU regulatory dominance. “Global governance” is code for slow them down while we catch up.
Calling for the UN to lead has nothing to do with safety. It’s a stalling tactic, meant to freeze the game while new players load in.
“Equity” becomes the rhetorical crowbar, used to pry legitimacy away from Western AI hegemony under the guise of uplift. If you can’t control the infrastructure, control the narrative.
And while diplomats draft language no one will enforce, China skips the debate entirely. They’ve already lit the fuse, bringing molten salt reactors (a technology invented in Tennessee and abandoned by the U.S.) online to power sovereign data centers without Western oversight or cloud dependencies. These reactors promise near-limitless, clean energy, quietly rewriting the rules of AI scalability. Meanwhile, the American firm xAI is still burning methane to run Grok from a data center in Tennessee.
BRICS doesn’t want to break the rules. They want to help pick the referee, because if the West writes them alone, the outcome is already fixed.
But don’t mistake this for a two-player game.
They’re not the only ones gaming the system.
While BRICS plays the geopolitical long game, Western firms are staging their own power grab at home.
The Cartel’s Capture Playbook
OpenAI, Anthropic, and xAI aren’t rivals. They’re a cartel in function, if not form. Musk co-founded OpenAI. The same investors fund their competitors. Advisors cycle between labs. The narratives shift in unison.
Their warnings about rogue AI are strategy, not public service: regulatory capture dressed up as altruism. The pitch is simple: open models are dangerous, and only closed labs that are properly licensed and deemed ‘responsible’ can be trusted.
Governance is the headline. Gatekeeping is the policy.
And the groundwork started early.
Musk and his circle have been sounding the alarm about AI for years, long before diffusion models became dinner table talk. It wasn’t prophecy. It was pretext.
By casting AI as an existential threat from the start, they claimed the moral authority to lock it down. The danger, they insisted, was too great to leave in the hands of anyone else.
And here’s the twist: they’re not wrong.
These models can be weaponized. They can fabricate identities, generate mass propaganda, accelerate chemical research, or supercharge cyberwarfare.
But the truth of that danger doesn’t make the cartel’s motives pure.
Because when the powerful ring the alarm, it’s not always to protect you.
Sometimes, it’s to claim the fire for themselves.
Their nightmare isn’t some lone actor asking ChatGPT how to build a bomb.
It’s a rogue nation, or rogue citizen, running a frontier model without permission.
Because open weights don’t just spread risk. They spread power.
And power that can’t be monitored, monetized, or revoked?
That’s dangerous to them.
So they launched a PR war, casting themselves as the arbiters of AI safety and warning that competing models could fuel cyberwarfare, bioterrorism, or, of course, Skynet. The American public didn’t need convincing. After decades of sci-fi dread, the narrative slid neatly into place: familiar, automatic, and unexamined.
But narrative control only works if the infrastructure stays locked down.
Silicon, Joysticks, and Sovereignty
Governments still lurch for the familiar levers: cloud access, export controls, telecom choke points. The U.S. bans NVIDIA GPUs to slow China. Europe wrings its hands over compute licensing. But those switches no longer control the flow.
The real bottlenecks are deeper: data, silicon, and the minerals beneath them.
China controls over 70% of global rare earth supply (USGS, 2024). This gives them more than leverage over chip production; it grants strategic pressure across the entire AI stack. From gallium to neodymium, the raw materials of machine intelligence are already unevenly distributed. And while Western policymakers draft restrictions, China builds stockpiles.
Even so, locking down chips isn’t a kill switch. It’s whack-a-mole.
Because the moment you restrict enterprise-grade silicon, you crash into a cultural firewall: gamers.
The same GPUs that render photorealistic battlefields now run quantized LLMs. Consumer hardware has become dual-use by default.
Consumer GPUs are part of a global gaming ecosystem valued at over $230 billion as of 2024. Try banning 50-series cards and see how long it takes before gamers, developers, and the entire hardware supply chain revolts. The video game lobby will never let this happen.
Whether it’s smuggled chips, grey-market imports, or homegrown Chinese silicon, sovereign inference is no longer hypothetical.
It’s live.
The Wrench in the Gears
Then DeepSeek dropped, and the winds started to shift.
With a 40-series GPU and a torrent client, anyone could summon frontier-level language power on their own terms. No API key. No permissions. No oversight. What followed wasn’t chaos. It wasn’t even misuse.
It was loss of narrative control.
Suddenly, the story slipped from the hands of Western AI firms, claimed instead by hackers, hobbyists, and upstart nations alike.
Open weights don’t just empower hobbyists, they enable sovereign inference. Any nation with sufficient compute can now fine-tune and deploy top-tier models, regardless of alliance. No OpenAI account. No AWS backend. No compliance tether.
Yes, hobbyists can run distilled or quantized variants, including some with 70B parameters, on consumer-grade cards like the RTX 5090. Even models like Kimi K2, developed by Beijing-based Moonshot AI, which demands over 2,400GB of VRAM in full precision, are already circulating in stripped-down, quantized forms, runnable on a single GPU.
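The arithmetic behind those quantized variants is plain back-of-envelope math: weight memory scales linearly with bits per parameter. A minimal sketch (the function name is mine; this counts only the weights themselves, using 1 GB = 10^9 bytes — real formats like GGUF add metadata, and inference needs KV-cache and activation memory on top):

```python
def model_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough VRAM needed just to hold a model's weights.

    Ignores KV cache, activations, and runtime overhead, all of
    which add considerably more in practice.
    """
    bytes_per_weight = bits_per_weight / 8
    # params_billion * 1e9 weights * bytes each, expressed in GB (1e9 bytes)
    return params_billion * bytes_per_weight

# A 70B model at 16-bit precision needs ~140 GB just for weights...
print(model_vram_gb(70, 16))  # 140.0
# ...but quantized to 4-bit, ~35 GB — within reach of high-end consumer cards:
print(model_vram_gb(70, 4))   # 35.0
```

The same scaling explains why a model demanding thousands of gigabytes at full precision can circulate in stripped-down forms an order of magnitude smaller.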
But the full-scale versions?
DeepSeek-R1. Kimi K2. These aren’t hobby projects. They’re state-backed engines, built for those with data centers, deep pockets, and strategic intent.
Kimi K2’s release was a quiet Samson Option.
Instead of pulling down the temple, they opened the gates and invited the world to take what Silicon Valley once guarded like sacred fire.
And while Moonshot AI may be a private firm, nothing this powerful leaves Beijing without Party approval.
Countries like Iran or North Korea, with proven track records in nuclear engineering and cyberwarfare, could plausibly adapt models like DeepSeek for local deployment. There’s no public confirmation, but given historical precedent (black-market hardware pipelines, Chinese partnerships), it’s far from speculative.
It’s no longer about gaining access to the best models.
Now, anyone who can afford the FLOPs can run an AI that competes with, and in some cases surpasses, OpenAI’s latest and greatest, without ever touching an American server.
The age of permission is over.
The age of sovereign intelligence has begun.
Laws Lag, Loopholes Lead
Governments are scrambling to legislate AI, but the laws are always chasing the last model, not the next one. By the time a framework is drafted, the target has already shifted. Yesterday’s threats are today’s open-source GitHub repos.
Licensing and export controls can slow down frontier labs. But they can’t stop nation-states from running open-weight models on domestic hardware. You can blacklist NVIDIA. You can ban Huawei chips. But you can’t ban math. Once the architecture is public, it becomes a formula, not a product. And formulas don’t respect borders.
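“You can’t ban math” is meant literally. The core mechanism of every frontier model, scaled dot-product attention, is a few lines of openly published math (Vaswani et al., 2017). Here it is sketched in NumPy — a toy illustration of the published formula, not production code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query matches each key
    # Numerically stable row-wise softmax over the scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# Toy example: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Everything else that separates a frontier model from this sketch — scale, data, training recipe — is engineering, not secret knowledge, which is why export controls bite on silicon rather than equations.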
Data restrictions may complicate large-scale training, but they don’t erase the past. Petabytes of pre-2024 language data already exist in torrents, forks, mirrors, and academic corpora. Every delay in enforcement is time bought for replication.
Platform enforcement (watermarking, compliance wrappers, API throttling) only works inside the walled garden. It protects the endpoints. But sovereign inference doesn’t use those endpoints. It’s air-gapped, insulated, silent. No one’s watermarking a model running in Tehran, Pyongyang, or deep inside a PLA lab.
The architecture of control was built for a centralized internet. Open weights shattered that. The new architecture is post-platform. Post-permission. Post-observable.
Now the gaps aren’t bugs. They’re exits. And some nations are already walking through them.
The Myth Collapses
The BRICS proposal was never about safety. It was about leverage.
A diplomatic veneer for a strategic delay. Just long enough to build at home while the West argues over frameworks and footnotes.
But open weights broke the illusion.
You can’t put power back behind the gate once it escapes. You can’t regulate what no longer needs your permission.
The future won’t follow the blueprints of UN committees or Silicon Valley manifestos.
It won’t be orderly. It won’t be centralized.
Maybe it will be controlled by states, corporations, or new coalitions of the powerful.
Maybe it will be contested through fragmentation, homebrewing, and rogue actors.
But one thing is certain: the genie is out.
And whatever comes next will not fit inside the narrative they wrote for it.
They told you the danger was in what AI might do.
The real danger was in what it allowed others to build.
Don’t ask who controls the future. Ask who builds it in silence.