REPORT ON DEEPFAKE: SCIENCE & TECHNOLOGY
NEWS: Focus on content disclosure, labelling: Govt report to Delhi HC on ‘deepfakes’
WHAT’S IN THE NEWS?
MeitY's report to the Delhi High Court highlights the growing concerns surrounding deepfake technology, emphasizing risks like misinformation, privacy violations, and national security threats. The report recommends improved regulation, AI detection tools, and enhanced enforcement to mitigate deepfake-related crimes.
MeitY's Status Report on Deepfake Technology Submitted to Delhi High Court
• Key Highlights
The Ministry of Electronics and Information Technology (MeitY) submitted a comprehensive report to the Delhi High Court, addressing the growing concerns about deepfake technology. The report focuses on the challenges posed by deepfakes, especially in terms of misinformation, privacy violations, and malicious uses, while also proposing actionable recommendations to mitigate these risks.
About Deepfake Technology
• Definition: The term "deepfake" is a blend of "deep learning" and "fake", referring to AI-generated synthetic media that manipulates or replaces real content with fabricated, hyper-realistic counterparts.
• Working:
• Generative Adversarial Networks (GANs): Deepfake systems are typically built on GANs, in which two AI models compete: a generator produces synthetic content while a discriminator tries to distinguish it from real data, driving the output to become increasingly realistic.
• Data Collection: The AI is trained on large datasets of real images, videos, or audio recordings of the target person.
• Feature Learning: The AI learns facial structures, expressions, and speech patterns.
• Synthesis & Manipulation: AI generates synthetic media by swapping faces, altering expressions, or mimicking voices.
• Refinement via GANs: The adversarial feedback loop iteratively refines the generated content, improving realism and reducing detectable inconsistencies (a minimal illustrative training sketch follows this list).
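To make the adversarial training idea concrete, the following is a minimal, illustrative sketch in PyTorch using toy random data; the dimensions, network sizes, and training settings are all assumptions for illustration, not the pipeline of any actual deepfake tool, which applies the same generator-versus-discriminator loop to large face, video, or voice datasets at far greater scale.

```python
# Minimal GAN training loop (illustrative sketch with toy data, not a real deepfake pipeline)
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 16, 64  # assumed toy dimensions

# Generator: maps random noise to a synthetic "sample"
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a sample is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(32, data_dim)        # stand-in for real training data (faces, voice features, etc.)
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from generated samples
    opt_d.zero_grad()
    loss_d = criterion(discriminator(real), torch.ones(32, 1)) + \
             criterion(discriminator(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the (just updated) discriminator
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```

The key point is the alternation: the discriminator is updated to tell real from generated samples, then the generator is updated to fool the improved discriminator, and repeating this loop is what drives the synthetic output toward realism.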
Key Concerns Highlighted in the Status Report
• Lack of Uniform Definition: There is no standardized definition for deepfake technology, which complicates efforts to regulate and detect such content effectively across stakeholders.
• Targeting Women During Elections: Deepfakes have been increasingly used to target women, especially during state elections, raising concerns about privacy violations and the spread of harmful content.
Other Concerns Surrounding Deepfakes
• Misinformation and Political Manipulation: In India, where social media plays a major role in political discourse, deepfakes can be weaponized for political manipulation, leading to misinformation and potentially creating social unrest.
• Threat to National Security: Malicious actors can use deepfakes to impersonate government officials, leading to misinformation or cyber warfare tactics that threaten national security.
• Financial Frauds and Cybercrime: Deepfake voices have been used to impersonate corporate executives, leading to financial fraud. Such crimes could severely impact businesses and individuals in India’s digital economy.
• Violation of Privacy and Defamation: Deepfakes are often used to create non-consensual explicit content, disproportionately affecting women and resulting in serious privacy violations.
• Undermining Trust in Media: The circulation of realistic fake content erodes public trust in authentic journalism, negatively impacting democratic processes and evidence-based reporting.
Government Response and Legal Framework
• Information Technology (IT) Act, 2000: Provides a broad framework for cybercrimes but lacks specific provisions addressing deepfake-related offenses.
• Section 66D: Punishes cheating by personation using a computer resource or communication device, i.e., digital impersonation (identity theft itself is separately penalized under Section 66C).
• Section 67: Penalizes publishing or transmitting obscene material in electronic form, which can be invoked against deepfake pornography.
• Digital Personal Data Protection (DPDP) Act, 2023 (successor to the earlier Personal Data Protection Bill): Regulates the collection and processing of personal data, giving individuals a basis to challenge deepfakes that misuse their personal identity.
• Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: These rules require social media platforms to monitor and promptly remove harmful content, including deepfakes, or risk losing safe-harbour immunity under the IT Act.
• Fact-Checking and AI Detection Initiatives:
• Platforms like PIB Fact Check have been actively debunking deepfake videos spreading misinformation.
• Indian start-ups and researchers are developing AI tools to detect and flag deepfake content.
• Global Collaboration: India is collaborating with global tech firms and governments to combat deepfakes through policy discussions and AI research initiatives.
Challenges in Regulation
• Intermediary Liability Frameworks: The report flags an over-reliance on intermediary liability frameworks, which determine the extent to which platforms can be held accountable for content shared by their users.
• Detection Difficulties: Audio deepfakes, in particular, are harder to detect, underscoring the need for advanced technological solutions to counter this challenge.
Recommendations from the Report
• Mandatory Content Disclosure: The report recommends regulations requiring that AI-generated content be disclosed and labelled as such, ensuring transparency and accountability (a simple labelling sketch follows this list).
• Focus on Malicious Actors: The report stresses targeting the malicious uses of deepfake technology rather than focusing on benign or creative applications.
• Improved Enforcement: Instead of introducing new laws, the report advocates for enhancing the capacity of investigative and enforcement agencies to tackle deepfake-related crimes more effectively.
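As a purely illustrative complement to the disclosure recommendation, the short Python sketch below shows one way a generator or platform could attach a machine-readable "AI-generated" label to an image using PNG text metadata via the Pillow library. The key names (ai_generated, generator) and the approach are assumptions for illustration, not a format prescribed by the report; plain metadata of this kind is easy to strip, which is why robust provenance schemes bind labels cryptographically to the content.

```python
# Illustrative only: attach and read a simple "AI-generated" disclosure label
# using PNG text metadata (Pillow). Key names are hypothetical, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(in_path: str, out_path: str, generator_name: str) -> None:
    """Save a copy of the image (out_path should end in .png) with a disclosure tag."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")       # hypothetical key
    meta.add_text("generator", generator_name)  # hypothetical key
    img.save(out_path, pnginfo=meta)

def read_disclosure(path: str) -> dict:
    """Return the image's text metadata so platforms or users can check for a label."""
    return dict(Image.open(path).info)
```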
Conclusion
The growing concerns surrounding deepfake technology are being recognized at the highest levels, with multiple recommendations aimed at strengthening the legal, technological, and enforcement frameworks needed to mitigate the risks this technology poses in India. The government and the private sector are working together to address these challenges, building better regulation and detection capabilities for a rapidly evolving problem.
Source: https://indianexpress.com/article/cities/delhi/focus-on-content-disclosure-labelling-govt-report-to-delhi-hc-on-deepfakes-9908127/