
Fakery, not flattery



NEW DELHI: Episodes involving deepfake videos of two actresses have turned the spotlight on the manner in which artificial intelligence (AI) and machine learning (ML) are being deployed for nefarious ends. Following the release of doctored videos of Rashmika Mandanna and, more recently, Katrina Kaif, IT Minister Rajeev Chandrasekhar asked those affected to lodge an FIR, while social media enterprises have been directed to remove such content from their platforms within 24 hours of receiving a complaint, or risk being censured under provisions of the IT Rules. In fact, Section 66D of the IT Act, 2000 prescribes punishment of imprisonment for up to three years and a fine of Rs 1 lakh for cheating by personation using a computer resource.

Stakeholders have remarked that deepfakes can be misused to diminish trust in surveillance videos and bodycam footage, or to discredit and incite violence against law enforcement officers. The technology could also be employed for cyber-bullying, blackmail, stock manipulation and creating political uncertainty. Yet only a few countries have introduced legislation addressing the menace of deepfake videos and remedies for those affected. China has adopted expansive rules that require manipulated material to have the consent of the individual depicted and to bear digital signatures. Deepfake service providers must also offer means to refute or dispel rumours.

In the US, New York Democratic Representative Yvette Clarke put forth a proposal for a Deepfakes Accountability Act in 2019. It was to provide prosecutors, regulators and victims with resources, such as detection technology, to stand up against the threat posed by deepfakes. The attempt to create a federal task force to deal with the problem has stalled, but Clarke is set to reintroduce the bill this year. The European Union and Britain have introduced guardrails for the technology, but these are yet to translate into legislation.

Civic responsibility aside, there are incentives to crack down on the indiscriminate use of deepfakes. In May 2022, a deepfake video featured Elon Musk promoting a new cryptocurrency. The platform was said to offer a 30% return on crypto deposits, a promise too good to be true, and several hapless people were conned into parting with their savings. Deepfake technology is also being used to generate vocal renditions that mimic the voices of real people, such as customers, with machine-generated requests targeted at financial services institutions like banks and lenders in an attempt to carry out illegal transactions over the phone.

In the US, between 2020 and 2022, the personal data of millions of people was compromised through investment and imposter frauds, leading to losses of $8.8 bn, as per the Federal Trade Commission. Digital rights groups like the Electronic Frontier Foundation have asked lawmakers to step away and place the onus of deepfake policies on tech firms, urging legislators to rely instead on existing legal frameworks covering fraud, copyright infringement, obscenity and defamation. Google has already barred people from using its Colaboratory tool to generate deepfakes, and Stable Diffusion has restricted users from creating nude or pornographic content. Baby steps, but imperative for the greater good.

Editorial