AI models can act like demagogues, propagate misinformation

NEW DELHI: At its best, artificial intelligence (AI) could be a tool to increase the accessibility of political engagement and ease polarisation. At its worst, it could propagate misinformation and increase the risk of voter manipulation, Nathan E. Sanders and Bruce Schneier wrote for The Atlantic.

Sanders is a data scientist and an affiliate at the Berkman Klein Center at Harvard University. Schneier is a fellow and lecturer at the Harvard Kennedy School.

The fundamentally obsequious nature of the current generation of large language models (LLMs) results in them acting like demagogues, the authors said.

In the time-honoured tradition of demagogues worldwide, an LLM could inconsistently represent a candidate's views to appeal to the individual proclivities of each voter.

Current LLMs are known to hallucinate, or go entirely off-script, and produce answers that have no basis in reality. These models do not experience emotion in any way, but some research suggests they have a sophisticated ability to assess the emotion and tone of their human users.

Although they weren't trained for this purpose, ChatGPT and its successor, GPT-4, may already be fairly good at assessing some of their users' traits, such as the likelihood that the author of a text prompt is depressed. Combined with their persuasive capabilities, that means they could learn to skilfully manipulate the emotions of their human users, the article said.

The number of incidents concerning the misuse of AI is rapidly rising, a study by Stanford University found.

According to the AIAAIC database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26-fold since 2012. Notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelensky surrendering and US prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and greater awareness of the possibilities for misuse.

In a research paper for the Centre for the Governance of AI, Markus Anderljung and Julian Hazell said that authoritarian governments could misuse AI to improve the efficacy of repressive domestic surveillance campaigns. The Chinese government has increasingly turned to AI to improve its intelligence operations, including facial- and voice-recognition models and predictive policing algorithms.

Notably, these technologies have been used for the persecution of the Uyghur population in the Xinjiang region. This persecution might constitute crimes against humanity, according to a recent UN report.

In response, it has been suggested that democratic countries coordinate to design export controls that stifle the spread of these technologies to authoritarian regimes.

AI could also be used to create lethal autonomous weapons systems (LAWS) with significant potential for misuse. Some critics have argued that LAWS could enable human commanders to commit criminal acts without legal accountability, be used by non-state actors to commit acts of terrorism, and violate human rights, the research paper said.

Moreover, LLMs may enable malicious actors to generate increasingly sophisticated and persuasive propaganda and other forms of misinformation. As with automated phishing attacks, LLMs could increase both the scale and the sophistication of mass propaganda campaigns. By reducing the reliance on manual labour, large language models could lower the overall cost of such campaigns and increase the number of actors able to wage them, the paper said.

Furthermore, image generation models could be used to spread disinformation by depicting political figures in unfavourable contexts.

IANS