Growing threat of AI-assisted crimes

Chennai

Anand Gandhi’s OK Computer is a sci-fi comedy series set in 2031, starring Radhika Apte, Jackie Shroff and Vijay Varma. In the series, on a moonlit night in a tranquil coastal town in North Goa, a self-driving car rams into a pedestrian and kills him instantly. The police are confronted with three vexing questions about culpability for the crime: Is the CEO of the taxi company culpable? Is the programmer? Or is the car itself, with its AI system? As the investigation unfolds, the detective (played by Vijay Varma) concludes it was wilful murder. But Radhika Apte’s character, who heads an organisation for the ethical treatment of robots, disputes this, believing AI to be incapable of harming humans. The questions the show hurls at us are these: can an AI enable a crime, and if it commits one, who should be held culpable?

Technology is a double-edged sword. With the advent of the internet came internet crimes, and with the inception of social media, crimes on social media proliferated. OK Computer may be pure fiction, but Artificial Intelligence (AI) could play a growing role in committing and enabling crimes in the future. Given the rampant proliferation of AI across sectors such as public safety, administration and finance, attacks on AI-based systems are likely to rise. Many criminal, political and terror scenarios could arise from the targeted disruption of such systems.

For instance, AI-generated fake content in the media could lead to widespread mistrust and a deterioration of faith in audio and visual content. Deep fakes are becoming extraordinarily sophisticated, convincing and difficult to prevent. Fake content on social media has frequently affected democracy and national politics. A doctored video of US House Speaker Nancy Pelosi, altered to make her appear drunk and slurring her speech, garnered over 2.5 million views on Facebook in 2020. In 2019, a UK-based organisation called Future Advocacy used AI to create a deep fake video showing election rivals Boris Johnson and Jeremy Corbyn endorsing each other for the post of Prime Minister. Though there are algorithms to detect deep fakes online, several avenues remain for manipulated videos to spread undetected; detecting deep fakes at the point of upload may be the need of the hour. The generative adversarial network (GAN), an AI technique introduced by Ian Goodfellow and colleagues in 2014, has made hoaxes, doctored videos and forged voice clips far easier to produce with convincing results.
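To make the GAN idea concrete, here is a minimal sketch of the adversarial training loop in PyTorch. The toy data, network sizes and hyperparameters are illustrative placeholders, not the architecture of any real deep fake system; the point is only the tug-of-war between the two networks.

```python
# Minimal GAN training loop (illustrative sketch, PyTorch).
# A generator learns to mimic a data distribution while a
# discriminator learns to tell real samples from fakes; each
# improves by exploiting the other's weaknesses.
import torch
import torch.nn as nn

latent_dim = 16

# Toy "real" data: a shifted Gaussian stands in for real images/audio.
def real_batch(n=64):
    return torch.randn(n, 2) * 0.5 + 2.0

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator: real samples -> 1, fakes -> 0.
    real = real_batch()
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: produce fakes the discriminator calls real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Scaled up from this toy example to images or voice, the same adversarial pressure is what makes GAN-generated forgeries so hard to distinguish from genuine content.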

Further, in a democracy, AI could also threaten the fundamental rights of citizens. Politicians or parties in power could use AI to analyse mass-collected data and create targeted propaganda to mislead voters. During elections, they could circulate fake videos for social manipulation and deception.

Furthermore, AI technologies power autonomous systems. Autonomous vehicles may be in their infancy, but they could become commonplace, and they run the risk of being repurposed as weapons. Criminals could load an autonomous vehicle with explosives and send it to a chosen destination, or hack one and use it to damage property or attack pedestrians. A malicious attacker exploiting security gaps in a car's hardware or software could take it over or deliberately cause it to crash. The ability to deploy a vehicle without a human at the wheel would make such attacks far easier to scale. Autonomous drones are not yet being used for crimes of violence, but their mass and kinetic energy make them potentially destructive, and criminals could fit them with weapons that would prove lethal in self-organising swarms.

Natasha Bajema, in her book Rescind Order, portrays a scenario of AI-based systems going haywire: an automated command-and-control system detects an incoming nuclear attack and automatically issues the launch order for a nuclear weapon. The protagonist cannot verify whether the system has detected a false alarm or a real attack, and has precisely 8 minutes and 53 seconds to decide. Rescind Order tells a heart-rending story of US decision-makers navigating a nuclear crisis in 2033, in a tricky era of autonomous systems, social media communication and deep fakes that we ourselves are likely to encounter before long.

Another AI-based crime, 'tailored phishing', is likely to give cyber-crime experts sleepless nights. Criminals first gather information about the target, by installing malware or harvesting data, and then send digital messages crafted to appear to come from a trusted party such as the user's bank. The phisher exploits this existing trust to persuade the user to take actions he would otherwise be wary of, such as revealing passwords or clicking on dubious links.

Likewise, culprits may use AI as a blackmail tool, harvesting personal information from social media or from large personal datasets such as phone contents and browser histories, and tailoring threat messages to their targets. AI could also generate fake evidence and assist criminals in sextortion, which involves hacking into a victim's computer or phone to extract videos or personal pictures and blackmailing the victim for sexual favours or money.

Criminals could also use AI to poison data. For instance, a smuggler intending to carry weapons on board a plane could make an automated X-ray threat detector insensitive to firearms. Criminals could mislead an automated investment advisor into making unexpected recommendations and then exploit the resulting shifts in market value. Attacks on the AI now embedded in critical sectors such as power, transport and food could likewise cause widespread power disruption, traffic gridlock and breakdowns in food logistics. Systems responsible for public safety and security are likely to become crucial targets, as are those handling financial transactions. Criminals could also use AI to trick face recognition systems and to author fake reviews.
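As a toy illustration of data poisoning, the sketch below (Python with scikit-learn, all data synthetic) flips a fraction of the "threat" labels in a training set and measures how the resulting classifier's ability to catch threats collapses. Real attacks on deployed detectors are far subtler, but the underlying principle of corrupting training data is the same.

```python
# Toy data-poisoning demo: flipping training labels degrades a detector.
# Entirely synthetic data; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def threat_recall_after_poisoning(flip_fraction):
    """Train on labels where a fraction of class-1 ('threat') examples
    have been relabelled as harmless, i.e. the 'blind the detector' attack."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    threats = np.flatnonzero(y_poisoned == 1)
    flipped = rng.choice(threats, size=int(flip_fraction * len(threats)),
                         replace=False)
    y_poisoned[flipped] = 0
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    # Recall on class 1: how often genuine threats are still flagged.
    mask = y_test == 1
    return (model.predict(X_test[mask]) == 1).mean()

for frac in (0.0, 0.2, 0.4, 0.6):
    print(f"flip {frac:.0%} of threat labels -> recall on threats: "
          f"{threat_recall_after_poisoning(frac):.2f}")
```

The same logic explains the X-ray scenario above: an attacker who can quietly mislabel enough firearm images in the training pipeline never needs to touch the deployed scanner at all.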

Unlike conventional crimes, crimes in the cyber domain can be repeated, shared or sold to other criminals. UCL's Matthew Caldwell suggests we may even witness the marketisation of AI-enabled crime, with the advent of 'Crime as a Service' (CaaS). To counter and deter such risks, AI crimes need to be brought within the cyber-crime legislative framework.

Finally, AI is encroaching on the spiritual domain as well. The pandemic is replacing traditional worship with virtual tools, and digitally mediated religious communities are sometimes proving more attractive, and more connective, than brick-and-mortar churches and temples.

— The author is ADGP, Armed Police
