Syllabus: GS3/Science & Technology
Context
- Recently, the Ministry of Electronics and Information Technology (MeitY) submitted a comprehensive status report to the Delhi High Court, addressing the growing concerns surrounding deepfake technology.
- The report highlights the challenges posed by deepfakes, particularly misinformation, privacy violations, and malicious uses, and proposes actionable recommendations to mitigate these risks.
About Deepfake Technology
- The term ‘deepfake’ combines ‘deep learning’ and ‘fake’, and refers to AI-generated synthetic media that manipulates or replaces real content with fabricated, hyper-realistic counterparts.
- Deepfake models use generative adversarial networks (GANs), where two AI models — the generator and the discriminator — compete against each other to improve the authenticity of the generated content.
Working of Deepfakes
- Data Collection: The AI is trained on a large dataset of real images, videos, or audio recordings of the target person.
- Feature Learning: The deep learning model learns facial structures, expressions, and speech patterns.
- Synthesis & Manipulation: AI algorithms generate synthetic media that can swap faces, alter expressions, or mimic voices.
- Refinement via Generative Adversarial Networks (GANs): The generated content is refined to improve realism and reduce detectable inconsistencies.
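The adversarial loop described above can be illustrated with a toy, hypothetical NumPy sketch: a one-dimensional "generator" (a linear map from noise) learns to mimic a Gaussian "real data" distribution by fooling a logistic-regression "discriminator". Real deepfake systems use deep convolutional networks on images or audio, but the generator-versus-discriminator training dynamic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1.5), standing in for features of genuine media.
def real_batch(n):
    return rng.normal(4.0, 1.5, n)

# Generator: a linear map g(z) = a*z + b from noise to a synthetic sample.
a, b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(w*x + c), outputs P(real).
w, c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    x = real_batch(64)                   # genuine samples
    z = rng.normal(0.0, 1.0, 64)         # noise input
    fake = a * z + b                     # generated (synthetic) samples

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradients of binary cross-entropy w.r.t. w and c).
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1.0) * x) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator
    # (non-saturating loss -log d(fake), chain rule back through g).
    d_fake = sigmoid(w * fake + c)
    dl_dfake = (d_fake - 1.0) * w
    a -= lr * np.mean(dl_dfake * z)
    b -= lr * np.mean(dl_dfake)

print(f"generator now maps N(0,1) noise to roughly N({b:.2f}, {abs(a):.2f})")
```

After training, the generator's offset `b` drifts toward the real mean (4.0): the competition alone, without ever showing the generator a real sample directly, pulls the synthetic distribution toward the genuine one — the core reason deepfakes become hard to distinguish from authentic media.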
Key Concerns Highlighted in the Status Report
- Lack of Uniform Definition: Stakeholders emphasized the absence of a standardized definition for ‘deepfake’, complicating efforts to regulate and detect such content effectively.
- Targeting Women During Elections: Deepfakes have been increasingly used to target women, especially during state elections, raising serious concerns about privacy and the spread of harmful content.
Other Concerns Surrounding Deepfakes
- Misinformation and Political Manipulation: In India, where social media platforms play a crucial role in political discourse, deepfake videos can be weaponized to create unrest.
- Threat to National Security: Malicious actors can use deepfakes to impersonate government officials, leading to misinformation or even cyber warfare tactics that threaten national security.
- Financial Frauds and Cybercrime: AI-generated deepfake voices have been used to mimic corporate executives, enabling financial fraud. In India’s digital economy, such crimes could severely impact businesses and individuals.
- Violation of Privacy and Defamation: Deepfakes are frequently used to create non-consensual explicit content, disproportionately targeting women.
- Undermining Trust in Media: When realistic fake content circulates widely, it erodes public trust in authentic journalism and evidence-based reporting, affecting democratic processes.
Government Response and Legal Framework
- Information Technology (IT) Act, 2000: It provides a broad framework for cybercrimes but lacks specific provisions addressing deepfake-related offenses.
- Section 66D: Punishes identity theft and impersonation using digital means.
- Section 67: Penalizes the publishing of obscene material, which can be used against deepfake pornography.
- Personal Data Protection Bill (PDPB), now the Digital Personal Data Protection (DPDP) Act, 2023: It regulates the collection and use of personal data; misuse of deepfakes involving personal identity could be challenged under this Act.
- Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: These rules require social media platforms to proactively monitor and remove harmful content, including deepfakes, failing which they may lose legal immunity under the IT Act.
- Fact-Checking and AI Detection Initiatives: Platforms like PIB Fact Check have been actively debunking deepfake videos spreading misinformation.
- Indian start-ups and researchers are developing AI tools to detect and flag deepfake content.
- Global Collaboration: India is collaborating with global tech firms and governments to combat deepfakes through policy discussions and AI research initiatives.
Challenges in Regulation
- Intermediary Liability Frameworks: The report raised concerns about over-reliance on intermediary liability frameworks, which determine the extent to which platforms can be held accountable for content.
- Detection Difficulties: Audio deepfakes, in particular, pose significant challenges for detection, underscoring the need for advanced technological solutions.
Recommendations from the Report
- Mandatory Content Disclosure: The report advocates for regulations requiring AI-generated content to be disclosed and labelled, ensuring transparency and accountability.
- Focus on Malicious Actors: Emphasis was placed on targeting the malicious uses of deepfake technology rather than benign or creative applications.
- Improved Enforcement: Instead of introducing new laws, the report recommends enhancing the capacity of investigative and enforcement agencies to tackle deepfake-related crimes effectively.