
Harnessing AI Without Undermining Democracy



PALO ALTO – AI is already affecting the pillars of democratic governance around the world. Its effects can be mapped as concentric circles that radiate outward from elections, through government adoption, political participation, public trust, and information ecosystems, to broader systemic risks – economic shocks, geopolitical competition, and “existential” threats like climate change or bioweapons. Each circle presents both opportunities and challenges.

Start with elections. Especially in the United States, election administrators are severely understaffed and underfunded. Many argue that AI could help by translating ballots into multiple languages, verifying mail-in ballots, or selecting optimal locations for polling sites. Yet today, only 8% of US election administrators use these tools.

Instead, AI is being used to make voting harder. In the state of Georgia, activists used the Eagle AI network to mass-generate voter challenges and pressure officials to purge election rolls. (Opponents are using similar tools to try to reinstate voters.) And familiar risks – such as deepfakes designed to confuse or mislead voters – abound. In 2024, Romania annulled its presidential election results amid evidence of AI-amplified Russian interference – the first unequivocal example of AI’s impact on an election.

But the hunt for “smoking guns” may miss the greater danger: the steady erosion of trust, truth, and social cohesion.

Government use of AI offers a second vector of influence – one with greater promise. Public trust in the US federal government hovers around 23%, and government agencies at every level are experimenting with AI to improve efficiency. Such efforts are already delivering results.

The State Department, for example, has reduced the staff time spent on Freedom of Information Act (FOIA) requests by 60%. In California, San Jose relied on AI transit-optimization software to redesign bus routes, cutting travel times by almost 20%.

Such improvements could strengthen democratic legitimacy, but the hazards are real. Black-box algorithms already influence decisions about eligibility for government benefits, and even criminal sentencing, posing serious threats to fairness and civil rights. Military adoption is also accelerating: in 2024, the US Defense Department signed $200 million contracts with four leading AI firms, heightening concerns about state surveillance and AI-driven policing and warfare.

At the same time, AI could transform public participation. In Taiwan – a global model for tech-enabled government – AI-powered tools like Pol.is helped rebuild public trust following the 2014 occupation of parliament, boosting government institutions’ approval ratings from under 10% to more than 70%. Stanford’s Deliberative Democracy Lab is now deploying AI moderators in over 40 countries, and Google’s Jigsaw is exploring similar approaches to support healthier debate. Even social movement organizers are using AI to identify potential allies or track the people behind the money propping up anti-democratic efforts.

But four risks loom large: broken engagement systems, as processes like “notice-and-comment” are flooded with AI slop; active silencing, as AI-amplified doxxing and trolling – and even state surveillance – threaten to intimidate activists and drive them out of civic spaces; passive silencing, if people further opt out of real-world civic spaces in favor of digital ones, or eventually even delegate their civic voice entirely to AI agents; and, finally, competency erosion, as overreliance on AI – or sycophantic AI chatbots – further dulls our capacity for sound judgment and respectful disagreement.

The information ecosystem is also changing as a result of AI. On the positive side, newsrooms are innovating. In California, CalMatters and Cal Poly are using AI to process legislative transcripts across the state, mine them for insights, and even generate story ideas.

But these benefits could be overshadowed by a flood of ever more convincing deepfakes and synthetic media. False content can sway opinions – people are able to distinguish real from fake images only 60% of the time. More insidiously, the sheer volume of fakes fuels the so-called “liar’s dividend,” as people become so overwhelmed with fabricated content that they start to doubt everything. Cynicism and disengagement ensue.

Finally, beyond the immediate threats to democratic institutions lie broader systemic challenges. The International Monetary Fund estimates that AI could affect 60% of jobs in advanced economies, while McKinsey projects that between 75 and 345 million people may need to change jobs by 2030.

The problem is not just that big economic shocks invariably jeopardize political stability. AI could also exacerbate extreme concentrations of wealth, distorting political voice and undermining equality. Add to this the possibility that the West loses the AI race, ceding global military and economic dominance to anti-democratic superpowers like China.

Meeting these challenges requires action on two fronts. First, sector-specific steps can help journalists, government officials, election administrators, and civil society adopt AI responsibly.

Second, we need broader “foundational interventions” – cross-cutting measures that safeguard not just individual sectors but society as a whole.

Foundational measures must cover the entire AI lifecycle, from development to deployment. This includes strong privacy protections, as well as transparency concerning the data used to train models, potential biases, how corporations and governments deploy AI, dangerous capabilities, and any real-world harms (this global tracker is a great start).

Limits on use are also essential, from police deploying AI for real-time facial recognition to schools and employers tracking student or worker activities (or even emotions). Liability regimes are needed when AI systems wrongly deny people jobs, loans, or government benefits. New ideas in antitrust or economic redistribution may also be required to prevent democratically unsustainable levels of inequality.

Finally, public AI infrastructure is necessary – open models, affordable computing resources, and shared databases that civil society can access to ensure that the technology’s benefits are widely distributed.

While the European Union has moved quickly on regulation, federal action in the US has stalled. But state legislatures are forging ahead: 20 states have enacted privacy laws, 47 now have AI deepfake statutes, and 15 have restricted police use of facial recognition.

The window for policy action is narrow. Campaign-finance reforms followed the Watergate scandal, and efforts to regulate social media accelerated – then stalled – after the 2016 US election. Democracies should not wait for a comparable crisis to rise to the challenge of AI, mitigating its costs while capturing its remarkable benefits.

Kelly Born, former director of Stanford University’s Cyber Policy Center, is Director of the Democracy, Rights, and Governance initiative at the David and Lucile Packard Foundation.

Copyright: Project Syndicate, 2024.
www.project-syndicate.org