The many dangers of AI in state hands
Unchecked AI integration in governance threatens human rights, accountability and democratic principles while enabling state overreach.

In a nutshell
- AI weapons risk atrocities, ethnic targeting and devaluing human life
- Opaque algorithms erode accountability, justice and democratic governance
- A dystopian future is likely without strong public opposition, regulation
Artificial intelligence is by far the most transformative technology our species has developed in decades, arguably the most important leap since the advent of the internet itself. It has the potential to radically reshape our economies and sociopolitical systems; in fact, this shift has already begun.
However, as with many new technologies, AI’s potential for abuse skyrockets when placed in the hands of governments. This is because politicians have the option to grant themselves special powers and to exempt themselves from laws that apply to citizens and private companies, even to suspend basic individual liberties in cases of “emergency.” As history has repeatedly demonstrated, when governments acquire new tools of control, they rarely surrender them. The risks are as numerous as they are severe, and the path ahead gives every citizen serious cause for concern.
Slippery slope
For many years, one of the primary concerns of privacy advocates all over the world has been government surveillance excesses and their implications. Facial recognition applications, mass data‑mining tools and all kinds of “intelligence-led policing” algorithms have already been put into use, but the recent explosion in AI innovations and technological advances is bound to make their impact still more consequential.
“Pre-crime,” a concept popularized by the 2002 sci-fi thriller “Minority Report” (itself based on Philip K. Dick’s 1956 short story), suddenly does not seem that far-fetched anymore. AI-powered predictive policing involves gathering and analyzing large datasets drawn from crime reports, arrest records and social, demographic or geographic information to identify patterns and forecast where future crimes might occur and who may be involved.
And while crude versions of these tools have already been used to track broader trends over time, tomorrow’s versions will be able to provide much more specific results. Instead of mere location-based predictions, particular individuals could be flagged as high-risk, even if they have never committed a crime. What states decide to do with this information will determine whether they build societies in which citizens are treated as suspects, guilty until proven innocent.
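To make the mechanics concrete, here is a minimal sketch of how an individual-level risk-scoring system of this kind might work. Everything in it, the features, weights and threshold, is a hypothetical assumption for illustration, not a description of any deployed system:

```python
# Illustrative sketch of individual-level "risk scoring." All features,
# weights and the threshold are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    prior_arrests: int              # from arrest records
    neighborhood_crime_rate: float  # from geographic crime data, 0.0-1.0
    flagged_associates: int         # from social-network analysis

# Weights a system like this might derive from historical data.
WEIGHTS = {"prior_arrests": 0.5,
           "neighborhood_crime_rate": 2.0,
           "flagged_associates": 0.3}
THRESHOLD = 1.5  # above this, a person is flagged as "high-risk"

def risk_score(p: PersonRecord) -> float:
    return (WEIGHTS["prior_arrests"] * p.prior_arrests
            + WEIGHTS["neighborhood_crime_rate"] * p.neighborhood_crime_rate
            + WEIGHTS["flagged_associates"] * p.flagged_associates)

def is_high_risk(p: PersonRecord) -> bool:
    # The flag is raised without any crime having been committed.
    return risk_score(p) > THRESHOLD

# A person with zero arrests is flagged purely for where they live
# and whom they know.
person = PersonRecord(prior_arrests=0, neighborhood_crime_rate=0.6,
                      flagged_associates=2)
print(risk_score(person), is_high_risk(person))  # 1.8 True
```

The point of the sketch is that the flag follows from correlates of a person’s circumstances, not from anything that person has done.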

Another Orwellian possibility is government abuse of AI tools to further centralize power and force compliance in the population through the suppression of free speech or the spread of propaganda and information manipulation. We have already seen AI-generated images and videos (deepfakes) become almost indistinguishable from real ones; most of those were probably produced by amateur AI enthusiasts at home. One can only imagine what can be produced using state funds and access to much more sophisticated tools. This could allow governments to manufacture evidence, discredit journalists and smear political opponents, as well as create entirely fictional narratives and threats to justify overreach at home and abroad, or even outright conflict.
Adding to these concerns is the prospect of experiments with, or even full adoption of, “social credit” governance systems. As in China, AI-powered tools could combine individual financial and employment information, social behaviors and networks, political affiliations and expressed opinions into a comprehensive and granular scoring system. Even more worryingly, they could also be used to automatically distribute rewards and penalties, to incentivize compliance and discourage dissent, by centrally allocating access to essential services like banking, housing or even healthcare. Unsettling as it may sound, it is not difficult to imagine that the authoritarian governments that today imprison journalists and ordinary civilians for speaking out against the regime would also deny essential services to anyone else they identify as an “enemy of the state.”
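A toy sketch can show how little machinery such a gate requires. The inputs, weights and service cutoffs below are invented for illustration; no real system is being described:

```python
# Toy sketch of a "social credit" gate. Every input, weight and cutoff
# is a hypothetical assumption, used only to show the mechanism.

def social_score(financial: float, compliance: float,
                 affiliations: float) -> float:
    """Combine normalized (0.0-1.0) behavioral inputs into one score."""
    return 0.3 * financial + 0.5 * compliance + 0.2 * affiliations

# Centrally defined cutoffs automatically allocate access to services.
SERVICE_CUTOFFS = {"banking": 0.4, "housing": 0.5, "healthcare": 0.6}

def allowed_services(score: float) -> list[str]:
    return [svc for svc, cutoff in SERVICE_CUTOFFS.items()
            if score >= cutoff]

# A drop in the "compliance" input, say after a dissenting post, cuts
# off housing and healthcare with no human decision in the loop.
print(allowed_services(social_score(0.8, 0.9, 0.7)))  # all three services
print(allowed_services(social_score(0.8, 0.2, 0.7)))  # ['banking']
```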
All this is, of course, disturbing enough on a domestic level, but there is a whole other can of worms that can be opened when we turn our attention to potential military applications. The risks multiply exponentially in this context, as AI‑assisted targeting systems could be used to identify and target specific ethnic or political groups, facilitating atrocities and crimes against humanity such as ethnic cleansing.
The development of semiautonomous weapons, like unmanned aerial vehicles (UAVs), most commonly referred to as drones, has already demonstrated how quickly, and how cheaply for the side that operates them, the rate of destruction and loss of life can scale. Even on a psychological level, it is much easier for a drone pilot to shoot at a dot on a screen than it is for a soldier on the ground, whose own life is at risk or who is facing his target in person, to do the same. If that last human element is removed, and the decisions on who lives and dies are fully delegated to machines, the value of human life is bound to approach zero.
Finally, let us not ignore the potential contributions AI can make to weapons research and development (R&D): cyberweapons capable of causing massive infrastructure damage, or perhaps even bioweapons, could become part of a new global arms race.
The end of accountability?
In the late 1970s, IBM included the following statement in its training materials: “A computer can never be held accountable, therefore a computer must never make a management decision.” This seems to resonate today more than ever. Government bureaucracies already struggle with accountability, and AI is set to magnify this problem to an unprecedented extent.
Public servants, government officials and politicians can (and do) point fingers at their superiors and pass the buck whenever something goes horribly wrong. With opaque, “black-box” AI systems in charge, it will be infinitely more challenging to hold anyone accountable.
And when these systems become responsible for decisions in policing, public safety, healthcare or welfare, mistakes can irreparably destroy people’s lives. Both false positives and false negatives can seriously harm and endanger innocent citizens, who will have little to no recourse, no realistic chance to challenge the decisions or to identify who (if anyone) is to blame for the injustice.
This is part of a much broader and more fundamental risk that comes with AI in government: shifting the actual work and duties of governance away from democratic accountability and toward technocracy. Politicians may defer to opaque AI systems in making policy, claiming that “the algorithm” is neutral or more efficient than human judgment, promoting it as a surefire way to eliminate bias and deliver a fairer and more just society. However, all AI systems are trained on data that inevitably reflect historical biases, political assumptions and perhaps even “facts” that may later be disproven. Even worse, they can be explicitly designed to skew in favor of the state or whomever happens to be in power at the time.
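A minimal example makes the point about inherited bias. Suppose a model learns flagging rates from historical enforcement records in which one group was over-policed; the records below are fabricated purely for illustration:

```python
# Minimal illustration: a "predictor" that learns flagging rates from
# historical enforcement data. The records are fabricated. If group B
# was historically over-policed, the learned rates reproduce that skew
# regardless of underlying behavior.
from collections import defaultdict

# Hypothetical historical records: (group, was_flagged)
history = ([("A", False)] * 90 + [("A", True)] * 10
           + [("B", False)] * 60 + [("B", True)] * 40)

def learn_rates(records):
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += was_flagged
    return {g: flagged[g] / total[g] for g in total}

print(learn_rates(history))  # {'A': 0.1, 'B': 0.4}: the skew is inherited
```

The “neutral” algorithm simply hands yesterday’s bias back as tomorrow’s policy.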
Therefore, what they would deliver could not be further from the promised neutrality. Even if they did somehow manage to deliver neutral and fair outcomes, the fact remains that the concept of voting would still lose all meaning. Automated governance offers no grounds for public debate, no exchange of ideas, no opposition. It only offers a black-and-white version of the world, with only right policies and wrong ones, as determined by a supposedly all-knowing machine that cannot be argued with.
Scenarios
Most likely: A dystopian future may come sooner than expected
Unfortunately, the AI race between states appears to be unstoppable at this point. The urgency to develop and control the most advanced systems is undeniable, and there is no reason to believe this will change anytime soon. Partnerships, subsidies and all kinds of incentives for private companies in the AI space also make it very unlikely that those companies will put any limits on what they share with governments. The most likely scenario, therefore, is that some, if not all, of the aforementioned aspects of a dystopian future lie ahead, perhaps manifesting sooner than we might hope.
The only realistic way for this course to be corrected would be strong and effective public opposition, before AI systems become too deeply entrenched in government. That would require a clear swing in public opinion and vocal opposition, decisive enough to make it politically untenable for governments and those hoping to get elected to adopt the policies that would lead to the dismal outcomes described above. However, this appears rather unlikely at this point, given that the integration of AI in government has already begun and that many people seem to embrace the technology in their private lives, largely dismissing the risks it carries on a state level.
Less likely: Non-state freedom fighters use AI to resist
Another possibility would be for AI tools to be embraced and deployed by private individuals, groups and companies to counter the state-owned and -operated systems. They could be used to overwhelm AI-powered government programs and processes, to monitor abuses or mistakes, to appeal decisions and verdicts and to prevent the complete takeover of the government apparatus. After all, this technology, like so many others that came before it, cuts both ways, and its use cannot realistically be limited to state hands alone.
Least likely: Elected officials act responsibly for as long as possible
Finally, there is also the unlikely scenario that governments would enact strict regulations against AI overreach and only use the technology to make governing more efficient, to reduce bureaucracy and waste, save taxpayer money or limit corruption, among other potential, genuine benefits for society and the economy.
However, it is very easy to see how this would eventually result in “mission creep.” For example, the government could start by launching a simple “TaxGPT” bot that helps people file their taxes, but sooner or later individuals would be incentivized to link all their assets, bank accounts and employment information, on the argument: “If you have nothing to hide, why not automate the process completely and have the government deduct what you owe automatically?”
It is not far-fetched to imagine that many, if not most, people would accept that proposition for the sake of convenience and peace of mind. Similarly attractive offers can be conceived in other aspects of a citizen’s relationship with the state: national security, healthcare or their children’s education. Before long, we would end up in the dystopian future outlined above.