Penalty provisions for development, dissemination of deepfakes can create deterrent effect: CUTS

The Ministry of Electronics and Information Technology in the advisory issued to intermediaries and platforms warned of criminal action in case of non-compliance.

Update: 2024-03-24 09:00 GMT

NEW DELHI: Penalty provisions can act as a deterrent to the development and dissemination of deepfakes and misinformation, a senior official of global think tank CUTS International said, while calling for the deployment of technological interventions to check the misuse of AI-generated content.

CUTS International Director (Research) Amol Kulkarni told PTI that internet users need adequate opportunities to verify the genuineness of content. This becomes especially important during the election season, when the role of credible fact-checkers and trusted flaggers is crucial.

He said that while the government's March 15 advisory removes the permission requirement, it continues to rely on information disclosures to help users make the right choices on the internet.

"Though transparency is good, information overload and 'pop-ups' across user journeys may reduce their quality of experience. There is a need to balance the information requirements, with other implementable technological and accountability solutions which can address the problem of deepfakes and misinformation," Kulkarni said.

After a controversy over responses from Google's AI platform to queries related to Prime Minister Narendra Modi, the government on March 1 issued an advisory asking social media and other platforms to label under-trial AI models and prevent the hosting of unlawful content.

That advisory had asked entities to seek government approval before deploying under-trial or unreliable artificial intelligence (AI) models, and to deploy them only after labelling them for the "possible and inherent fallibility or unreliability of the output generated".

The Ministry of Electronics and IT on March 15 issued a revised advisory on the use and rollout of AI-generated content.

The IT ministry removed the requirement of government approval for untested and under-development AI models but emphasised the need to label AI-generated content and inform users about the possible inherent fallibility and unreliability of the output generated.

Kulkarni said that addressing the issue of deepfakes and misinformation will require clarifying the responsibility of all stakeholders in the internet ecosystem: developers, uploaders, disseminators, platforms and consumers of content.

"Penalty provisions for the development and dissemination of harmful deepfakes and misinformation could also create a deterrent effect. Technological solutions to tag potentially harmful content and shifting the burden on developers and disseminators to justify the use of such content could also be designed," he said.
