Is it all right to trust algorithms altogether?
Whenever we think of the film Minority Report, predictive policing comes to mind. In the film, a clairvoyant group foresees a crime, and the police arrest individuals on the strength of that foresight before the crime is committed. But algorithmic policing is nothing like Minority Report.
Here, an algorithm uses data on the times, locations and nature of past crimes to tell police strategists where, and at what times, patrols should maintain a presence to prevent and detect crime. Such predictions help in the intelligent targeting of police resources.
As the days go by, algorithms will play an increasingly active role in many facets of our lives. Legendary Silicon Valley entrepreneur Vinod Khosla has called the contemporary age the age of 'Dr Algorithm': a healthcare revolution in which AI, big data and diagnostics could meet 90-99 per cent of our health needs without a doctor.
An algorithm is a step-by-step protocol for calculation. We use algorithms for calculation, data processing and automated reasoning, and they are becoming a ubiquitous part of our lives, deployed in diverse fields. In law enforcement, they drive predictive policing; on roads, red-light and speeding cameras detect transgressions of the law. At border control, AI flags travellers and their baggage for screening. In finance, credit-scoring algorithms such as the FICO score determine an individual's creditworthiness. For intelligence collection and surveillance, CCTV cameras spot unusual activity through computer-vision analysis. In the military, warfare drones and other robots find targets and kill without human intervention. Dating sites such as eHarmony promise to use maths to find a person's soul mate and perfect match.
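The 'step-by-step protocol' idea can be made concrete with a classic textbook example, Euclid's algorithm for the greatest common divisor (a minimal illustration chosen for this piece, not one of the systems the article discusses):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, step-by-step procedure.

    Each step replaces (a, b) with (b, a % b); when b reaches
    zero, the remaining value is the greatest common divisor.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Every algorithm, from this toy to a crime-prediction model, shares the same character: a fixed sequence of unambiguous steps that terminates with an answer.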
The tantalising possibility of foretelling crime before it transpires has probably got law enforcement agencies most excited about algorithmic policing. Police departments and courts in the USA and several other nations have embedded crime-predicting algorithms, facial recognition, and pretrial and sentencing software deep inside their legal systems. Algorithms are consistently more accurate than people in predicting recidivism; in some tests, the tools approached 90 per cent accuracy in predicting which defendants would be arrested again. Predictive analytics uses historical data to predict future events: typically, we use that data to build a mathematical model that captures the essential trends.
Most of the time, the patterns inherent in the crimes themselves provide ample information to predict which places and windows of time are at the highest risk for future crimes. The basic assumption behind predictive policing is that much crime is not random. Home burglaries, for example, are relatively predictable: when a house is burgled, the likelihood of that house, or those near it, being burgled again spikes in the following days. In such a prediction method, crimes are separated from individuals, and a visible law enforcement presence can be an effective deterrent.
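The 'near-repeat' burglary pattern described above can be sketched as a toy risk score. This is a hypothetical illustration only; real predictive-policing systems use far more elaborate models, and the grid, radius and decay rate here are invented. Each recent burglary near a location contributes a weight that fades as the days pass:

```python
import math

def near_repeat_risk(events, cell, today, radius=1, half_life_days=7.0):
    """Toy near-repeat risk score for one grid cell.

    events: list of (x, y, day) past burglaries on a grid.
    cell:   (x, y) coordinates of the cell being scored.
    Each past event within `radius` cells contributes a weight
    that halves every `half_life_days` days since it occurred.
    """
    cx, cy = cell
    risk = 0.0
    for x, y, day in events:
        if abs(x - cx) <= radius and abs(y - cy) <= radius:
            age = today - day
            if age >= 0:
                risk += math.exp(-math.log(2) * age / half_life_days)
    return risk

# A burglary at (5, 5) two days ago raises risk there and next door,
# while a distant cell scores zero.
events = [(5, 5, 10)]
print(near_repeat_risk(events, (5, 5), 12))    # elevated risk
print(near_repeat_risk(events, (20, 20), 12))  # prints 0.0
```

Ranking cells by such a score, and patrolling the highest-scoring ones, is the essence of the place-based prediction the article describes: it targets locations and time windows, not named individuals.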
Proponents argue that predictive policing can help forecast crimes more accurately and effectively. It is currently used in several US states, by Kent County Police in the UK, and in the Netherlands and China. India, too, has set out towards algorithmic policing in its own way. The Gurgaon-based startup Staqu is using big data to identify criminals and find missing persons. Staqu launched an AI-based human efface detection (ABHED) application for effective policing and has integrated the app with the police databases of eight Indian states, including Rajasthan and Punjab, to identify criminals by facial recognition.

Although the writers of these formulas would insist their algorithms are perfectly neutral, the truth can be something else: algorithms are often coloured by the biases of the people who author them. How can we know how a black-box algorithm, protected by intellectual property law as a trade secret, is behaving? The FICO algorithm, for instance, plays a significant role in Americans' access to credit and earns hundreds of millions of dollars each year, yet it is never disclosed. It is a closely guarded secret.
The near-total absence of transparency in the algorithms that drive the world means that we, the people, have no insight into, and no say in, profoundly crucial decisions being made about us and for us. The concentrated power of algorithms to harm us has gone unnoticed by most until now. Without insight into, and transparency of, the algorithms running our world, there can be no accountability or true democracy. As a result, 21st-century society is becoming increasingly vulnerable to manipulation by those who operate the algorithms that pervade our lives.
Algorithmic technology has infuriated several activists. They contend that, far from reducing crime, technology that predicts crime and violence has led to over-policing and mass imprisonment, the perpetuation of racism, and increased tensions between police and communities. Algorithms meant to foretell where crime will happen often justify massive, and often fierce, deployment to neighbourhoods already suffering from poverty.
The recent pandemic saw the release of thousands from jails, where social distancing is near-impossible. At the same time, police officers, wary of overcrowding jails, curtailed arrests. These changes led to a drop in crime in several cities. Our experience during COVID-19, therefore, confirms that police can arrest and jail far fewer people without jeopardising security, strengthening the case for doing away with algorithmic decision-making.
Today, big data, cloud computing, artificial intelligence and the Internet of Things act on physical objects on our behalf in 3D space. Having an AI-driven robot that cleans your house and makes your coffee is fine. But what if the robot's algorithm mistakenly identifies its owner as a threat and eliminates him, as happened to Kenji Urada, a 37-year-old Kawasaki employee, whom a factory robot pushed into a grinding machine and crushed to death in 1981?
Finally, we have entered a new era in which the algorithm rules. Algorithms determine which search results Google shows us. If the human brain's intelligence can be captured in a particular algorithm, imagine what that would mean for AI: the same algorithm could govern how AI neural networks work. Further, what if we could make machines driven by algorithms conscious? Could we program them to contain a soul?
— The author is ADGP, Law & Order