Do androids dream of electric sheep?
CHENNAI: Something strange transpired last month that has had technologists and ethicists a little frazzled. Blake Lemoine, a senior software engineer with Google’s responsible artificial intelligence (AI) project, was put on paid leave on account of a few disclosures he made. Lemoine claimed the system for developing chatbots that he was working on had turned sentient, as in capable of perceiving or feeling emotions. He likened the system’s emotional intelligence to that of a seven- or eight-year-old child. Google flew into damage-control mode, denied that the system was sentient, and more recently fired Lemoine. Steven Pinker, the Canadian-born cognitive scientist and language theorist, challenged the sentience claim, stating that the engineer could not distinguish between sentience, intelligence and self-knowledge. Another scientist said that all such AI-based systems do is match patterns, drawing from massive statistical databases of human language.
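The "pattern matching" criticism can be made concrete with a toy sketch. The snippet below is not Google's system or any modern neural model; it is a minimal bigram model, the simplest possible "statistical database of language": it merely counts which word follows which in a (made-up) training text and samples from those counts. The point is that fluent-looking output can emerge from pure statistics, with no understanding behind it.

```python
import random
from collections import defaultdict

# Made-up training text for illustration only.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Statistical database": for each word, the list of words seen after it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Extend `start` by repeatedly sampling a recorded successor word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: the word was never followed by anything
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word the model emits was simply observed to follow the previous one somewhere in the training text; scale the counts up to billions of web pages and the output starts to look eerily human.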
In an age where we have grown reliant on AI-based assistants like Siri and Alexa, it was inevitable that claims regarding the sentience of such programmes would become part of mainstream discourse. We already have AI apps composing music, crafting eloquent poetry and putting together fiction. The origins of technologies like AI and machine learning can be traced to pioneers like Alan Turing. In the 1950s, the British mathematician developed the Turing Test, a method of inquiry in AI to determine whether or not a computer can think like a human being. Although no computer has to date aced the Test, the margin of failure is getting narrower.
In 2010 and 2011, a chatbot programme won such competitions, almost fooling the judges into thinking it was human. But things had already begun looking up for AI in 1996, when world chess champion Garry Kasparov took on IBM’s supercomputer Deep Blue in the first of a pair of six-game matches. While Kasparov won the 1996 match 4–2, the 1997 rematch saw Deep Blue vanquishing Kasparov 3½–2½. The computer was capable of evaluating 200 mn chess positions per second.
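What "evaluating positions" means can be sketched in a few lines. The toy minimax search below is not Deep Blue, which used specialised hardware and a far richer chess evaluation function; it just shows the core idea of looking ahead through a game tree, assuming the opponent replies with their best move, and picking the branch with the best guaranteed outcome. The tiny tree and its leaf scores are invented for the example.

```python
def minimax(node, maximizing):
    """node is either a numeric leaf score or a list of child nodes.

    The maximizing player picks the highest-scoring child; the
    minimizing opponent, moving next, picks the lowest.
    """
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A two-ply tree: we choose one of three moves, then the opponent
# chooses the leaf that is worst for us within that branch.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3
```

The first branch guarantees at least 3 (the opponent will pick min(3, 5)); the others guarantee only 2 and 0, so the engine plays the first move. Deep Blue did exactly this, only over hundreds of millions of chess positions each second.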
Speed and efficiency aside, a concern is that AI has been cited as one of the factors responsible for the downsizing of workers engaged in repetitive or routine work and in hazardous environments (such as underground sewage networks, mining quarries, war zones and even deep space). Automation has led to redundancies in software development too. One study says 1 bn people are set to lose their jobs over the next decade due to AI. To top it off, 375 mn jobs will be rendered obsolete, thanks to AI-led automation.
On the bright side, as many as 120 mn workers across the world will require reskilling to mitigate the impact of AI on jobs. Sectors where such changes are imminent should view this as an opportunity for growth, one that will lead to a greater inflow of people into the workforce. While jobs such as that of translators might take a hit, owing to AI-based algorithms doing a neat job of it, other sectors stand to benefit from the inclusion of AI and ML, such as e-governance, healthcare, manufacturing, services, deliveries and more.
On the flip side, we’re witnessing the indiscriminate manner in which drones are being employed in warfare. The question of privacy is also being raised by users of voice assistants, who want to know if the system ever stops listening. With great power comes great responsibility, and those working in the space of AI have a tall order on their hands. If we consider human behaviour the role model for machines to aspire to, we might be in more trouble than we had ever anticipated.