A roadmap to balance AI progress with accountability

India charts global path with the landmark ‘AI Governance Guidelines’, but experts flag gaps in oversight and enforcement

Author: Ramakrishna N
Update: 2025-11-13 07:09 IST


CHENNAI: The Ministry of Electronics and Information Technology (MeitY), Government of India, has released the ‘India AI Governance Guidelines’ under the India AI Mission, setting out an ambitious vision to create a ‘safe and trusted AI innovation ecosystem’. While the landmark document, the first of its kind in India, has been hailed as a forward-looking, inclusive, and innovation-friendly framework, experts have cautioned that it falls short on several critical concerns, including enforcement clarity, independent oversight, and safeguards against misuse.

The 148-page report, ‘Enabling Safe and Trusted AI Innovation’, lays out a detailed roadmap to balance technological progress with accountability and ethics.

Six-pillar framework

Developed by a committee chaired by B Ravindran, Head, WSAI, IIT Madras, the report defines AI governance not merely as a regulatory exercise but as a nation-building effort. “This is a governance guideline and not a regulatory guideline alone. The Governance Guidelines are as much about enabling AI adoption and making it truly impactful for the nation as they are about avoiding the risks and misuse of AI,” he said, emphasising the twin goals of inclusion and safety.

The framework rests on six key pillars: enabling infrastructure, capacity building, policy and regulation, risk mitigation, accountability, and institutional development – all of which form the backbone of India’s AI governance structure. However, despite its comprehensive approach, some analysts argue that the framework’s non-binding nature could limit its effectiveness in curbing AI misuse and ensuring accountability.

Recognising both the transformative potential and the inherent risks of AI, the guidelines acknowledge challenges ranging from deepfakes and misinformation to algorithmic bias, data misuse, and national security threats. “The guidelines provide a framework that balances AI innovation with accountability, and progress with safety,” the report notes.

Principal Scientific Adviser Ajay Kumar Sood described AI as a profound dual-use technology capable of redefining human productivity and prosperity. “For India, this technological inflection point is a force multiplier in achieving our aspiration of Viksit Bharat by 2047,” he said. “AI must serve as an enabler for inclusive development across all strata of society.”

No enforceable mechanisms

While the document’s seven guiding sutras – trust, people first, innovation over restraint, fairness and equity, accountability, understandable by design, and safety and sustainability – establish an ethical foundation, policy experts point out that they stop short of creating enforceable mechanisms. Unlike the European Union’s AI Act, which mandates compliance for high-risk applications, India’s approach relies on voluntary frameworks and techno-legal self-regulation, a choice that some warn may be inadequate in curbing malicious use or systemic bias.

The report’s Risk Mitigation section outlines six categories of AI threats: malicious use (such as deepfakes), bias and discrimination, transparency failures, systemic risks, loss of control, and national security challenges. However, it provides limited detail on enforcement mechanisms or penal consequences for violations.

For instance, while it proposes an AI Incident Reporting Database to document harms caused by AI, there is no mention of a dedicated regulatory authority empowered to investigate or act on such reports. The guidelines suggest that existing agencies, such as CERT-In or sectoral regulators, could oversee compliance, but without clear accountability lines, experts warn that this could lead to jurisdictional confusion and fragmented enforcement.
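The guidelines do not prescribe what such a database would record or who would act on an entry. As a purely illustrative sketch, with every field name assumed rather than drawn from the report, one incident record might capture the following (the six threat categories mirror those listed in the Risk Mitigation section):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskCategory(Enum):
    # Mirrors the six threat types the report lists; enum values are assumed.
    MALICIOUS_USE = "malicious_use"              # e.g., deepfakes
    BIAS_DISCRIMINATION = "bias_discrimination"
    TRANSPARENCY_FAILURE = "transparency_failure"
    SYSTEMIC_RISK = "systemic_risk"
    LOSS_OF_CONTROL = "loss_of_control"
    NATIONAL_SECURITY = "national_security"

@dataclass
class AIIncidentReport:
    """One entry in a hypothetical AI Incident Reporting Database.

    All field names are illustrative assumptions; the guidelines specify
    neither a schema nor an authority empowered to investigate entries.
    """
    incident_id: str
    category: RiskCategory
    description: str
    affected_system: str            # name/version of the AI system involved
    reported_by: str                # reporter identity or reporting channel
    referred_to: str | None = None  # e.g., CERT-In or a sectoral regulator
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

The gap experts raise is visible in the one optional field: a record can note where an incident was referred, but nothing in the framework obliges any agency to take it up.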

Further, the framework encourages ‘voluntary measures’ such as transparency reporting and self-certification. However, critics note that voluntary commitments, without auditing or independent verification, often fail to prevent harm.

“Trust cannot be built on goodwill alone. It needs enforcement, oversight, and deterrence,” an AI policy researcher at IIT Madras told DT Next.

The document rightly flags the explosive rise of deepfakes as a serious threat to democracy, security, and women’s safety. It recommends adopting content authentication and provenance standards such as watermarking, digital identifiers, and cryptographic tagging. Yet, experts note that India still lacks a legal definition of deepfakes, a mechanism to remove such content swiftly, and victim redressal pathways.
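The mechanics behind cryptographic tagging are straightforward to sketch: bind a hash of the media to a publisher identity with a signature, so that any edit to the content breaks verification. The snippet below is a deliberately simplified stand-in using a keyed hash from Python’s standard library; real provenance standards such as C2PA rely on asymmetric signatures and publisher certificates, and the key and publisher name here are assumptions for illustration:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # assumed key, for demonstration only

def tag_content(media_bytes: bytes, publisher: str) -> dict:
    """Produce a provenance tag: a content hash plus a keyed signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps(
        {"sha256": digest, "publisher": publisher}, sort_keys=True
    ).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"sha256": digest, "publisher": publisher, "signature": signature}

def verify_content(media_bytes: bytes, tag: dict) -> bool:
    """Re-derive the hash and signature; any edit to the media breaks both."""
    if hashlib.sha256(media_bytes).hexdigest() != tag["sha256"]:
        return False  # content was altered after tagging
    payload = json.dumps(
        {"sha256": tag["sha256"], "publisher": tag["publisher"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

frame = b"...raw media bytes..."
tag = tag_content(frame, publisher="example-newsroom")
assert verify_content(frame, tag)              # authentic copy passes
assert not verify_content(frame + b"x", tag)   # tampered copy fails
```

The technology is the easier half; the gaps experts point to are legal ones, namely who must tag, who must verify, and what happens when a platform hosts untagged or manipulated content.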

The report proposes that MeitY establish a multi-stakeholder expert group to draft global standards for content authenticity. But it does not specify any timeline, enforcement process, or penalties for platforms that host manipulated content.

“Deepfake detection must move from voluntary detection to mandatory verification, especially during elections and in cases involving gendered violence,” said an industry executive from a leading AI firm.

While the guidelines reference the Digital Personal Data Protection (DPDP) Act, 2023, they do not clarify how AI systems that rely on vast, unstructured datasets will comply with the Act’s purpose limitation and consent principles. Nor do they specify how AI developers will be held responsible for privacy breaches, algorithmic discrimination, or intellectual property infringement.

The report recommends a graded liability model, but without statutory backing, the concept remains advisory. The absence of clear liability attribution, especially in autonomous or generative AI systems, remains one of the most pressing unresolved issues. Critics suggest that India needs an AI Ombudsman or National AI Regulator with investigative powers to handle AI-related harms, similar to data protection authorities in other jurisdictions.

Public inputs missing

To its credit, the report proposes a robust institutional architecture comprising the AI Governance Group (AIGG), Technology and Policy Expert Committee (TPEC), and AI Safety Institute (AISI) to coordinate and oversee AI development.

The AISI will be the technical nerve centre, responsible for testing AI models, conducting risk assessments, and developing safety standards. However, the document does not clearly outline how independent these bodies will be, or whether they will have enforcement powers beyond advisory roles.

Experts argue that independence and transparency are key to preventing conflicts of interest in AI oversight. Without autonomous regulatory powers, these institutions could risk becoming bureaucratic extensions rather than watchdogs.

Additionally, the guidelines’ emphasis on techno-legal governance (embedding laws directly into system architecture) is visionary but insufficient without human oversight. “Digital architecture can enforce compliance, but it cannot interpret ethics or ensure justice. AI governance must combine machine accountability with human judgement,” the policy researcher noted.
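What embedding a rule into system architecture could look like is easy to illustrate. Below is a minimal sketch, assuming a hypothetical consent registry and purpose labels that the guidelines do not define, of a purpose-limitation guard in the spirit of the DPDP Act: the pipeline mechanically refuses any request that falls outside recorded consent.

```python
# Techno-legal enforcement sketch: a purpose-limitation guard encoded in the
# data pipeline itself. The registry contents and purpose labels are
# assumptions for illustration, not drawn from the guidelines.
CONSENT_REGISTRY = {
    # data principal -> purposes the person has consented to
    "user-001": {"credit_scoring"},
    "user-002": {"credit_scoring", "model_training"},
}

class PurposeLimitationError(Exception):
    """Raised when data is requested for a purpose outside recorded consent."""

def fetch_for_purpose(user_id: str, purpose: str) -> dict:
    """Release personal data only for purposes covered by recorded consent."""
    allowed = CONSENT_REGISTRY.get(user_id, set())
    if purpose not in allowed:
        raise PurposeLimitationError(
            f"{user_id}: no consent recorded for purpose '{purpose}'"
        )
    return {"user_id": user_id, "purpose": purpose}  # stand-in for real data

fetch_for_purpose("user-002", "model_training")      # permitted by consent
try:
    fetch_for_purpose("user-001", "model_training")  # blocked by design
except PurposeLimitationError as err:
    print("blocked:", err)
```

A guard like this enforces the letter of a rule; deciding whether a novel use fits a consented purpose is exactly the human judgement the researcher describes.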

One area conspicuously underrepresented in the report is public participation. While it mentions voluntary frameworks and stakeholder consultation, it provides no mechanism for citizen feedback or grievance escalation. Experts warn that without transparency in decision-making and a public reporting mechanism, AI governance could become overly technocratic, detached from citizen experiences and democratic accountability.

Good start

Despite its gaps, the India AI Governance Guidelines mark a historic and necessary beginning, establishing an ethical, inclusive foundation for AI in the world’s most populous democracy.

In the short term, the action plan proposes setting up institutions, expanding compute infrastructure, and launching public AI literacy campaigns. The medium-term focus will be on publishing standards for content authenticity, fairness, and cybersecurity, while the long-term vision includes adaptive AI legislation and international collaboration.

MeitY Secretary S Krishnan said, “The Safe and Trusted AI pillar ensures ethical and technical integrity across the ecosystem. Without robust safety measures, our efforts in innovation might falter due to societal risks.”
