Editorial: Paranoid android

The report said that an ‘overworked’ civil servant robot working for the Gumi City Council in South Korea was found unresponsive after apparently throwing itself down a flight of stairs.


In 1942, science fiction author Isaac Asimov introduced the Three Laws of Robotics, which robots were bound to follow in many of his stories. The Three Laws, contained in the fictional Handbook of Robotics, 56th Edition, 2058 AD, are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Going by the clickbait headline carried by a British tabloid last month, which said that locals in South Korea are mourning the country’s first robot suicide, it looks like Asimov’s instructions have fallen on deaf ears. The report said that an ‘overworked’ civil servant robot working for the Gumi City Council in South Korea was found unresponsive after apparently throwing itself down a flight of stairs. The ‘Robot Supervisor’ was found smashed up, lying in the stairwell between the first and second floors of the council building. Social media had a field day dissecting this little oddity, with one user commenting, “If the workload had been too much, would he have spun around for a long time and then rushed down the stairs?”

The episode has led netizens to ask whether we are anywhere near the singularity, loosely defined as a hypothetical moment when artificial intelligence (AI) and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change. In this case, terming the robot’s malfunction a suicide rests on the dreadful assumption that the machine had grown sentient, developed a self-consciousness or self-awareness of sorts, and experienced a hopelessness akin to reaching a point of no return, from which the only respite was self-destruction.

Of course, we can put such concerns to rest, at least for a while, as our current paranoia surrounding AI has more to do with wide-ranging unemployment and the sudden, unplanned obsolescence of the workforce, so to speak. A news report last March said that the investment bank Goldman Sachs had estimated that close to 300 million jobs in the United States and Europe were at risk of being wiped out by the fast-growing technology known as generative AI. It is a sobering thought for anyone involved not just in the creative spaces, but even in traditional sectors such as software, manufacturing, real estate, and more.

Last year, the Writers Guild of America (WGA), representing 11,500 screenwriters, called a strike that lasted the better part of five months, from May to September. Apart from addressing the question of a share in the residuals earned from streaming media, the writers also sought legal guardrails so that studios could use AI tools such as ChatGPT only to help with research or facilitate script ideas, and not as a means to replace writers altogether. The WGA strike is in no way a spiritual successor to the 19th-century English textile workers, the Luddites, who went on a rampage against wage-stealing machinery in the dead of night. However, it was a reminder of what we stand to lose if employers blatantly deem human workers redundant instead of opting to reskill them.
