Digital awakening: When AI becomes truly conscious

Updated: 2025-11-11 06:10 IST

Not long ago, AI became intelligent. Some may dismiss this claim, but the number of people who doubt AI’s acumen is dwindling. According to a 2024 YouGov poll, a majority of US adults say that computers are already more intelligent than people or will become so in the near future.

Still, you might wonder, is AI actually intelligent? In 1950, the mathematician Alan Turing suggested that this is the wrong question, too vague to merit scientific investigation. Rather than determine whether computers are intelligent, he argued, we should see if they can respond to questions indistinguishably from human beings. This “Turing test” was not a measure of computer intelligence but a pragmatic substitute for it.

Instead of defining intelligence and testing AI against that definition, we do something more dynamic: we interact with increasingly sophisticated systems and see how our understanding of intelligence changes. Turing predicted that eventually “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Today, we have reached that point. AI is no less a form of intelligence than digital photography is a form of photography.

Now AI is on its way to doing something even more remarkable: becoming conscious. This will happen in the same way it became intelligent. As we interact with increasingly sophisticated AI, we will develop a better, more inclusive conception of consciousness.

You might object that this is a verbal trick — that I’m arguing AI will become conscious simply because we’ll start using the word “conscious” to include it. But there is no trick. There is always a feedback loop between our theories and the world, so our concepts are shaped by what we discover.

Consider the atom. For centuries, it was conceived as an indivisible unit of reality. As late as the 19th century, physicists like John Dalton saw atoms as solid spheres. But after the discovery of the electron in 1897 and the nucleus in 1911, scientists revised their concept — from an indivisible entity to a decomposable one, a miniature solar system with electrons orbiting a nucleus. Further discoveries led to complex quantum-mechanical models of the atom.

These were not mere semantic changes. Our understanding of the atom evolved with our interaction with the world. So too will our understanding of consciousness evolve with increasingly sophisticated AI.

Sceptics may reject this analogy. They argue that the Greeks were wrong about atoms, but we cannot be wrong about consciousness because we know firsthand what it is — inner subjective experience. A chatbot, they insist, can report feeling happy or sad only because such phrases are in its training data. It will never know what happiness or sadness feels like.

But what does it mean to know what sadness feels like? How do we know that it’s something a digital consciousness could never experience? We assume humans have direct insight into their inner world, unmediated by learned concepts. Yet after reading Shakespeare on the “sweet sorrow” of parting, we discover new dimensions of our own experience. Much of what we feel is taught to us.

Philosopher Susan Schneider has argued that we would have reason to deem AI conscious if a computer system, without being trained on data about consciousness, reported having inner subjective experiences of the world. Perhaps this would indicate consciousness in AI. But it’s a high bar — one we humans might not pass. We, too, are trained.

Some worry that if AI becomes conscious, it will deserve moral consideration — rights, protections, or freedom from digital enslavement. Yet there is no direct implication from consciousness to moral worth. Or if there is, most Americans seem unaware of it. Only a small percentage are vegetarians, despite acknowledging animal consciousness.

Just as AI has prompted us to see certain features of human intelligence, such as rote retrieval and speed, as less valuable than we once thought, AI consciousness will prompt us to conclude that not every form of consciousness warrants moral consideration. It will reinforce a view many already hold: other forms of consciousness are not as morally valuable as our own.

AI will thus change how we understand not only machines but ourselves. Intelligence once seemed uniquely human, yet we now share it with our creations. Consciousness, too, may follow. The shift will not occur in a single breakthrough but through gradual redefinition, as our interactions with AI reshape the meanings of words like “intelligence,” “mind,” and “self.”

We are already living in that transition. Machines think, learn, converse, create — and soon, perhaps, they will feel. Whether we recognise that as consciousness or not, our concepts will evolve until we can speak of machines as conscious without expecting to be contradicted.

© The New York Times
