
Flagging governance risks: India’s AI push fragmented, reactive, vulnerable to misuse, white paper reveals

Until these mechanisms are operational at scale, AI deployment is likely to continue outpacing safeguards, it cautions.

Ramakrishna N

CHENNAI: Even as artificial intelligence is being deployed across sectors at population scale, India’s ability to govern the technology remains fragmented, reactive and vulnerable to misuse, a government-backed white paper has warned, flagging serious risks ranging from data misuse and embedded bias to deepfake abuse and opaque decision-making.

The white paper, ‘Strengthening AI Governance through Techno-Legal Framework’, released by the Office of the Principal Scientific Adviser to the Government of India, cautions that existing legal and regulatory instruments are struggling to keep pace with the speed, scale and complexity of modern AI systems.

While baseline laws such as the Information Technology Act and the Digital Personal Data Protection Act offer limited safeguards, they were not designed to address AI’s full lifecycle risks, it notes. A key concern highlighted is the largely post-facto nature of enforcement.

Provisions dealing with impersonation, obscenity or defamation are typically invoked only after harm has occurred, leaving little scope for early detection or prevention. “Once an AI system is trained or deployed at scale, correcting bias, privacy violations or unsafe behaviour becomes extremely difficult,” the paper observes.

The document is particularly critical of weak data governance. It warns that poorly regulated data collection and training practices can permanently embed discrimination, privacy breaches and intellectual property violations into AI models. Such flaws, it says, propagate downstream across sectors including healthcare, finance and welfare delivery, undermining public trust and exposing institutions to regulatory and reputational fallout.

“Data-stage failures are irreversible in most cases. If bias or unlawful data use enters at the training phase, no amount of downstream moderation can fully undo the damage,” an AI policy researcher from IIT-Madras told DT Next.

Generative AI and deepfakes pose another growing challenge. The white paper notes that content takedowns and platform moderation alone are insufficient to counter synthetic media that can be rapidly generated, replicated and amplified. Without enforceable provenance mechanisms, persistent identifiers and coordinated incident reporting, deepfake abuse could continue to evade accountability, it warns.

The paper also flags uneven institutional capacity as a systemic weakness. Smaller firms, startups and public agencies often lack the expertise and tools needed to conduct audits, document compliance or monitor AI behaviour in real time. “This creates blind spots where high-impact AI systems operate with minimal oversight. Those gaps usually affect ordinary citizens first,” the AI researcher opined.

Although the paper proposes a techno-legal approach that embeds legal obligations directly into technical systems, it concedes that many such tools remain immature, unstandardised and without formal legal recognition. Until these mechanisms are operational at scale, AI deployment is likely to continue outpacing safeguards, it cautions.
