Syllabus: GS3/Developments in Science and Technology
Context
- The Ministry of Electronics and Information Technology (MeitY) recently issued an advisory to several large platforms on the regulation of generative Artificial Intelligence (AI).
About the Advisory
- The advisory primarily targets large platforms and does not apply to startups.
- MeitY stipulated that platforms must seek explicit government permission before deploying under-tested or unreliable AI models in India, and must provide disclaimers and disclosures indicating that such models are under testing.
- All platforms must ensure that their computer resources do not permit bias, discrimination, or threats to the integrity of the electoral process through the use of AI, generative AI, large language models (LLMs), or similar algorithms.
- However, Big Tech firms building applications on AI will need to label their models as “under testing”, a requirement that experts say is subjective and vaguely defined.
Why is there a need to regulate the AI Sector?
- The regulation of Artificial Intelligence (AI) is a rapidly evolving field as governments grapple with the potential benefits and risks of this powerful technology.
- Need for Regulation:
- Mitigate Risks: AI can lead to bias, discrimination, privacy violations, and safety hazards. Regulations can help mitigate these risks.
- Transparency and Explainability: Many AI systems are opaque, making it difficult to understand their decision-making process. Regulations can promote transparency and explainability.
- Accountability: Regulations can establish clear lines of accountability for the development, deployment, and use of AI systems.
- Public Trust: Clear regulations can build public trust in AI and encourage its responsible development.
Current Status: Global Efforts to Regulate AI
- Japan’s Social Principles of Human-Centric AI: These set out the basic principles of an AI-ready society: human-centricity; education/literacy; data protection; ensuring safety; fair competition; fairness, accountability, and transparency; and innovation.
- Europe’s AI Act: It was passed in December 2023 and draws concrete red lines, such as prohibiting untargeted and real-time remote biometric identification in public spaces for law enforcement and banning emotion recognition systems.
- In November 2023, the ‘Bletchley Declaration’, adopted at the AI Safety Summit, called for:
- working together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI;
- AI that is safe and supports the good of all, through existing international fora and other relevant initiatives; and
- promoting cooperation to address the broad range of risks posed by AI.
- In July 2023, the US government announced that it had persuaded companies including OpenAI, Microsoft, Amazon, Anthropic, Google, and Meta to abide by “voluntary rules” to “ensure their products are safe”.
Indian Efforts
- Digital Personal Data Protection Act, 2023: The Government of India recently enacted this new privacy law, which it can leverage to address some of the privacy concerns surrounding AI platforms.
- Global Partnership on Artificial Intelligence: India is a member of the GPAI. The 2023 GPAI Summit was held in New Delhi, where GPAI experts presented their work on responsible AI, data governance, the future of work, and innovation and commercialization.
- The National Strategy for Artificial Intelligence (#AIForAll) by NITI Aayog: It laid out AI research and development priorities focused on healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
- Principles for Responsible AI: In February 2021, the NITI Aayog released Principles for Responsible AI, an approach paper that explores the various ethical considerations of deploying AI solutions in India.
Challenges of Regulation
- Rapid Evolution of AI: The field is constantly evolving, making it difficult to write future-proof regulations.
- Balancing Innovation and Safety: Striking a balance between fostering innovation and ensuring safety is a challenge.
- International Cooperation: Effective AI regulation requires international cooperation to avoid a fragmented landscape.
- Defining AI: There’s no universally agreed-upon definition of AI, making it difficult to regulate effectively.
Measures for Effective Regulation
- Risk-Based Approach: Regulations should be based on the potential risks posed by different AI systems.
- Focus on Specific Use Cases: Regulations can target specific use cases, such as AI in healthcare or autonomous vehicles.
- Human Oversight and Control: Regulations should emphasize the importance of human oversight and control over AI systems.
- Focus on Transparency and Explainability: Regulations can encourage the development of transparent and explainable AI systems.
- Multi-stakeholder Approach: Effective regulation requires collaboration between governments, industry, academia, and civil society.
Way Ahead
- Artificial Intelligence (AI) is here to stay and has the capability to fundamentally change the way we work. As a force that can be used for great good, great harm, or both, AI needs to be regulated.
- By acknowledging the potential dangers of AI and proactively taking steps to mitigate them, we can ensure that this transformative technology serves humanity and contributes to a safer, more equitable future.
Daily Mains Practice Question
[Q] With Artificial Intelligence (AI) rapidly evolving and impacting various aspects of our lives, what are the biggest challenges in creating effective regulations for its development and use?