

KUMBAKONAM: The government’s move to mandate labelling of AI-generated content is an absolutely necessary regulation, as morphed or manipulated images can cause serious harm to people, Zoho founder and chief scientist Sridhar Vembu has said, adding that he fully endorses the proposed rules.
The comment from one of India’s most prominent technology leaders comes as the government has proposed changes to IT rules mandating the clear labelling of AI-generated content and increasing the accountability of platforms and other players involved in verifying and flagging synthetic information.
“We absolutely need this regulation because morphed images and all of that...can cause a lot of damage to people. I fully support this,” Vembu said.
The Centre’s move to mandate labelling of AI-generated content seeks to empower users to scrutinise such content and ensure that synthetic output does not masquerade as truth, IT secretary S Krishnan said recently at an event, adding that the rules are nearing finalisation.
The move, geared to curb user harm from deepfakes and misinformation, aims to impose obligations on two key sets of players in the digital ecosystem: providers of AI tools such as ChatGPT, Grok and Gemini, and social media platforms.
The draft rules would mandate that companies label AI-generated content with prominent markers and identifiers covering a minimum of 10 per cent of the visual display, or the initial 10 per cent of an audio clip’s duration.
The IT ministry had earlier highlighted that deepfake audio, videos and synthetic media going viral on social platforms demonstrate the potential of generative AI to create “convincing falsehoods”, content that can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
“Anything that violates someone’s privacy, and any attack on that, has to be regulated. We will evolve but (our system is that) we respond quickly to this,” Vembu added.
In fact, the issue of deepfakes and AI user harm, once again, came into sharp focus following the recent controversy surrounding Elon Musk-owned Grok allowing users to generate obscene content. Users flagged the AI chatbot’s alleged misuse to ‘digitally undress’ images of women and minors, raising serious concerns over privacy violations and platform accountability.