
Artificial Metacognition: Giving AI the ability to ‘think’ about its ‘thinking’

Today’s generative AI systems are remarkably capable but fundamentally unaware.

Ricky J Sethi

Have you ever reread a sentence several times only to realise you still don’t understand it? As scores of incoming college freshmen are taught, when you realise you’re spinning your wheels, it’s time to change your approach.

This process of recognising that something isn’t working and adjusting accordingly is the essence of metacognition, or thinking about thinking. It’s your brain monitoring its own reasoning, recognising a problem, and controlling or adjusting its approach. Metacognition is fundamental to human intelligence and, until recently, was understudied in artificial intelligence systems.

My colleagues Charles Courchaine, Hefei Qiu and Joshua Iacoboni and I are working to change that. We’ve developed a mathematical framework designed to allow generative AI systems, specifically large language models like ChatGPT or Claude, to monitor and regulate their own internal “cognitive” processes. You can think of it as giving generative AI a way to assess its own confidence, detect confusion and decide when to devote more effort to a problem.

Today’s generative AI systems are remarkably capable but fundamentally unaware. They generate responses without knowing how confident they should be, whether their answer contains conflicting information, or whether a problem deserves additional scrutiny. This limitation becomes critical in high-stakes applications such as medical diagnosis, financial advice and autonomous vehicle decision-making.

Consider a medical generative AI system analysing symptoms. It might confidently suggest a diagnosis without any mechanism to recognise situations where it should pause and reflect, such as when symptoms contradict one another or fall outside typical patterns. Developing such a capacity requires metacognition, including both monitoring one’s own reasoning and regulating the response.

Inspired by neurobiology, our framework aims to give generative AI a limited version of these abilities by using what we call a metacognitive state vector. This vector quantifies the system’s internal “cognitive” state across five dimensions: emotional awareness, correctness evaluation, experience matching, conflict detection and problem importance. Together, these function like sensors that allow a model to evaluate its thinking.
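As a rough illustration, the five dimensions could be packed into a simple data structure in Python, with each component scored on a notional 0 to 1 scale. The field names follow the description above; the class, the scale and the packing into a list are illustrative assumptions rather than details of the published framework.

# Illustrative sketch only: the field names follow the five dimensions named
# above, but this class, its 0-to-1 scoring and the packing into a list are
# assumptions, not details of the published framework.
from dataclasses import dataclass

@dataclass
class MetacognitiveState:
    emotional_awareness: float   # affect-like reading on the current task
    correctness: float           # self-assessed likelihood the answer is right
    experience_match: float      # how closely the problem resembles prior cases
    conflict: float              # degree of internal contradiction detected
    importance: float            # estimated stakes of the problem

    def as_vector(self) -> list[float]:
        # Pack the five self-assessments into a single metacognitive state vector
        return [self.emotional_awareness, self.correctness,
                self.experience_match, self.conflict, self.importance]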

We integrate these signals into a mathematical framework that uses the metacognitive state vector to control ensembles of large language models. In effect, it converts qualitative self-assessments into quantitative signals the system can use to guide its behaviour. For example, when confidence drops below a threshold or internal conflicts rise, the system can shift from fast, intuitive processing to slower, more deliberative reasoning, similar to what psychologists describe as System 1 and System 2 thinking.
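A minimal sketch of such a control rule, again only as an illustration: the threshold values and the two-way split below are assumptions, not parameters of the framework.

# Hypothetical control rule: switch to slower, deliberative processing when
# self-assessed correctness drops or internal conflict rises.
# The threshold values are placeholders, not values from the framework.
CONFIDENCE_THRESHOLD = 0.6
CONFLICT_THRESHOLD = 0.4

def choose_mode(correctness: float, conflict: float) -> str:
    """Pick a processing mode from two components of the state vector."""
    if correctness < CONFIDENCE_THRESHOLD or conflict > CONFLICT_THRESHOLD:
        return "deliberative"   # System 2-style: slower, more careful reasoning
    return "intuitive"          # System 1-style: fast, default response

# Example: low confidence plus detected conflict triggers deliberation
print(choose_mode(correctness=0.45, conflict=0.55))   # -> "deliberative"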

An ensemble of language models can be imagined as an orchestra. When the task is simple and familiar, the system operates efficiently with minimal coordination. When the task is complex or ambiguous, greater coordination is required, with different models assuming specialised roles such as critic or domain expert. The metacognitive state vector informs a control system that determines when such shifts are necessary and how the models should interact.
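In the same illustrative spirit, the coordination step can be pictured as a routing function that recruits extra roles only when deliberation is needed. The role names echo the ones above; the specific logic and cut-offs are assumptions.

# Hypothetical routing step: a familiar, low-stakes task runs with minimal
# coordination, while a complex or high-stakes one recruits extra models in
# specialised roles. Role names follow the article; the logic is illustrative.
def assemble_ensemble(mode: str, experience_match: float, importance: float) -> list[str]:
    roles = ["generalist"]                      # minimal coordination by default
    if mode == "deliberative":
        roles.append("critic")                  # challenges the draft answer
        if experience_match < 0.5:
            roles.append("domain expert")       # the problem looks unfamiliar
    if importance > 0.8:
        roles.append("human escalation")        # defer to a person on high stakes
    return roles

print(assemble_ensemble("deliberative", experience_match=0.3, importance=0.9))
# -> ['generalist', 'critic', 'domain expert', 'human escalation']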

The implications extend beyond incremental performance gains. In health care, a metacognitive system could recognise atypical cases and escalate them to human experts. In education, it could adapt when it detects confusion. In content moderation, it could flag nuanced cases for human review rather than relying on rigid rules.  

Our framework does not give machines consciousness or human-like self-awareness. Instead, it provides a computational architecture for allocating resources and improving responses. Our longer-term goal is generative AI systems that understand their own limitations: systems that know when to be confident, when to be cautious, and when to defer to others.

The Conversation
