Humans a long way from building conscious machines, understanding consciousness

The authors argue that although the responses of these AI systems seem conscious, they are most likely not.

Update: 2023-10-29 17:15 GMT

NEW DELHI: We are a long way from building conscious machines, neuroscientists write in an opinion article published in the journal Trends in Neurosciences.

When we humans interact with AI systems such as ChatGPT, we consciously perceive the text the language model generates. The question, write neuroscientists from Estonia, Germany, and Australia, is whether the language model also perceives our text when we prompt it.

The authors argue that although the responses of these AI systems may seem conscious, they most likely are not.

First, they say, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

Second, they write, the architectures of present-day AI algorithms are missing key features of the thalamocortical system (the network connecting the cortex and thalamus) that have been linked to conscious awareness in mammals.

Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today, they write.

They argue that the existence of living organisms depends on their actions, and that their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

"What we know, and what this new paper points out, is that the mechanisms are likely way more complex than the mechanisms underlying current language models," the authors write, even as they note that researchers have yet to reach a consensus on how consciousness arises in our brains.

For instance, they point out, real neurons are not like the neurons in artificial neural networks. Biological neurons are physical entities that can grow and change shape, whereas the neurons in large language models are just "meaningless" pieces of code.

Thus, while it may be tempting to assume that ChatGPT and similar systems might be conscious, doing so would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains, the authors conclude.
