
In times of economic strife, authoritarian political movements masquerading as populism rise to protect the power, profits, and privilege of the wealthy ruling class. Power is obtained through conspiracy theories and disinformation, and it is maintained through surveillance.
We are at the crest of a disinformation apocalypse.
Elections can be won when disinformation and conspiracy theories, such as the false claims that the 2020 election was rigged or that Barack Obama was born in Kenya, are deliberately used to promote hate speech against trans individuals, undocumented immigrants, or members of the opposing party.
Consider the 2016 U.S. presidential election. In that contest, Cambridge Analytica harvested the Facebook data of some 50 million users without their consent to run manipulative, micro-targeted political ads designed to influence voting behavior. Fortunately, most other attempts at election interference in 2016 and 2020 were relatively low-tech, relying on generic bot messaging with low-quality content, and thus had limited impact.
Since then, however, bad actors have refined AI to operate at massive scale. The disinformation of 2016 and 2020, pushed by political bots designed to shock, is being replaced by humanlike bots that psychologically target individuals, quietly slipping a slow, subtle, and corrosive form of manipulation into everyday online conversations.
According to Brett Goldstein and Brett Benson, two Vanderbilt professors specializing in national security, the Chinese company GoLaxy is believed to have already carried out such operations in Hong Kong and Taiwan, and it is almost certainly preparing to expand to the U.S.
Once disinformation is generated and spread by AI to win elections, power is sustained through sophisticated surveillance systems perfected by other nations, most notably China. The prevalence of closed-circuit television cameras, coupled with facial recognition software and social media monitoring, amounts to a massive intrusion into private life, enabling authorities to keep tabs on dissidents.
AI can search, capture, and harness data at astonishing scale and speed, and with extraordinary precision, enabling government agencies to identify perceived threats in real time. Such constant, Orwellian surveillance threatens the right to disagree with ruling authorities.
The Department of Homeland Security has already employed AI to analyze social media posts, targeting visa and green card applicants it labels “terrorist sympathizers” for using so-called “extreme” rhetoric or engaging in “antisemitic activity.”
Although the White House denies it, sources at federal agencies such as the Environmental Protection Agency, the Department of Veterans Affairs, and the Department of Housing and Urban Development claim AI is being used to monitor workers, searching for language deemed hostile to Trump or at odds with his agenda. Even if these sources are wrong, the capacity for such workplace surveillance already exists at U.S. companies.
Given how the federal government is currently being weaponized to intimidate or neutralize potential opposition to the president, there is reason for concern that AI could become a tool of that aggression.
A national privacy bill could help mitigate these invasive AI practices, but getting such a bill through the current Congress would be a long shot.
If democracy is to thrive, divergent voices must be treated not as a threat to be extinguished but as essential. What constitutional democracies require is regulation of AI that upholds civil and human rights.
During his keynote address at this year’s Black Hat cybersecurity conference, Ron Deibert, director of the Citizen Lab, a digital rights research group at the University of Toronto, argued that the cyber community can defend against the U.S. “dramatic descent into authoritarianism,” which he characterizes as a “descent into a kind of fusion of tech and fascism.”
The liberative ethical commitment to solidarity, not isolated profit or convenience, must be central to any framework for AI ethics. As an ethical imperative, solidarity democratizes access to power by insisting that AI systems equitably share both benefits (such as productivity gains and access) and burdens, ensuring that disenfranchised communities are not left behind. The radical solidarity envisioned by liberationist ethicists is fundamentally incongruent with AI systems that confine identity through algorithmic pigeonholing.
Although Miguel Luengo-Oroz is a data scientist and not a liberation ethicist, he observes that solidarity, as an ethical principle, is largely absent from the development of AI. Incorporating solidarity as an AI principle means sharing the prosperity AI generates, redistributing the productivity gains it enables, and ensuring that AI does not deepen inequality.
Additionally, the long-term implications of developing and deploying AI must be assessed to ensure that no group of humans becomes irrelevant. These ethical principles resonate with what Australian moral philosopher John Tasioulas calls a “humanistic ethics.”
Such an ethics stresses a commitment to a plurality of values, the importance of the procedures by which outcomes are reached and not merely the outcomes themselves, and the centrality accorded to individuals through collective participation in defining human flourishing.
Such an approach must supersede the pervasive neoliberal economic model of optimizing for profit and efficiency.
In the final analysis, I argue that solidarity transcends individualist, human-centric approaches to ethics, moving the discourse toward a humanity-centric ethics. Technology should not entrench systemic inequality; it should be used to reduce disenfranchisement.
For AI to be liberative, communities must be meaningfully included in AI governance. Community voices, especially those historically marginalized, must be involved in design processes.
This echoes calls from UNESCO and other global bodies for transformative technologies to serve human goals, rather than the other way around, through collective participation in the ethical evaluation of AI rather than top-down mandates.


