Collaborating with Indian govt to address deepfakes: Google

The Centre last week gave social media platforms a seven-day deadline to align their policies with Indian regulations in order to address the spread of deepfakes

Update: 2023-11-29 12:15 GMT

Michaela Browning, VP, Government Affairs & Public Policy, Google Asia Pacific (Left) and Minister of State for Electronics and IT Rajeev Chandrasekhar

NEW DELHI: As the Indian government takes a tough stand on AI-generated fake content, especially deepfakes, Google on Wednesday said its collaboration with the government on a multi-stakeholder discussion aligns with its commitment to addressing the challenge together and ensuring a responsible approach to AI.

"By embracing a multi-stakeholder approach and fostering responsible AI development, we can ensure that AI's transformative potential continues to serve as a force for good in the world," said Michaela Browning, VP, Government Affairs & Public Policy, Google Asia Pacific.

"There is no silver bullet to combat deep fakes and AI-generated misinformation. It requires a collaborative effort, one that involves open communication, rigorous risk assessment, and proactive mitigation strategies," Browning added.

The company said it is pleased to have the opportunity to partner with the government and to continue the dialogue, including through its upcoming engagement at the Global Partnership on Artificial Intelligence (GPAI) Summit.

"As we continue to incorporate AI, and more recently, generative AI, into more Google experiences, we know it's imperative to be bold and responsible together," said Browning.

The Centre last week gave social media platforms a seven-day deadline to align their policies with Indian regulations in order to address the spread of deepfakes on their platforms.

Deepfakes could be subject to action under the current IT Rules, particularly Rule 3(1)(b), which mandates the removal of 12 types of content within 24 hours of receiving user complaints, said Minister of State for Electronics and IT Rajeev Chandrasekhar.

The government will also take action on 100 per cent of such violations under the IT Rules in the future.

According to Google, it is looking to help address potential risks in multiple ways.

"One important consideration is helping users identify AI-generated content and empowering people with knowledge of when they're interacting with AI generated media," said the tech giant.

In the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content created using AI tools.

"We will inform viewers about such content through labels in the description panel and video player," said Google.

"In the coming months, on YouTube, we'll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process," it added.

Google recently updated its election advertising policies to require advertisers to disclose when their election ads include material that's been digitally altered or generated. "We also actively engage with policymakers, researchers, and experts to develop effective solutions. We have invested $1 million in grants to the Indian Institute of Technology, Madras, to establish the first-of-its-kind multidisciplinary center for Responsible AI," Browning noted.
