Inter-Ministerial Committee Proposed to Enforce AI Rules

Syllabus: GS3/Science and Technology

Context

  • A Union Government panel, constituted to develop AI guidelines, has recommended that an inter-ministerial committee be set up to enforce AI rules.

About

  • The IndiaAI Mission, headed by the Principal Scientific Adviser, is seeking public feedback on a report released by its AI guidelines sub-committee.
  • The report proposes a “coordinated, whole-of-government approach to enforce compliance and ensure effective governance as India’s AI ecosystem evolves”.

Major Highlights of the Report

  • The report outlines the following principles for AI governance:
    • transparency of AI systems with “meaningful information on their development” and capabilities; 
    • accountability from developers and deployers of AI systems; 
    • safety, reliability and robustness of AI systems by design; 
    • privacy and security of AI systems; 
    • fairness and non-discrimination; 
    • human-centred values and ‘do no harm’; 
    • inclusive and sustainable innovation to “distribute the benefits of innovation equitably”; 
    • and “digital by design” governance to “leverage digital technologies” to operationalise these principles.
  • Life Cycle Approach: To operationalise these principles, policymakers must adopt a life-cycle approach.
    • They must look at AI systems at different stages of their development, deployment, and diffusion, during which distinct risks can occur. 
    • There should be an “ecosystem view” of AI actors.
  • The report proposes a tech-enabled digital governance system. 

Artificial Intelligence

  • Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. 
  • Artificial intelligence allows machines to model, or even improve upon, the capabilities of the human mind. 
  • From the development of self-driving cars to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is increasingly becoming part of everyday life — and an area every industry is investing in.

Why do we need rules on AI?

  • Ethical Concerns: AI systems can make decisions and take actions that impact individuals and society. Establishing rules helps address ethical concerns related to the use of AI, ensuring that it aligns with human values and respects fundamental rights.
  • Privacy: AI often involves the processing of large amounts of data. Rules can help protect individual privacy by specifying how data should be collected, stored, and used. 
  • Security: This includes safeguarding against potential vulnerabilities and protecting against malicious uses of AI technology.
  • Transparency: Rules can mandate transparency in AI systems, requiring developers to disclose how their algorithms work.
  • Competition and Innovation: Establishing a regulatory framework provides a level playing field for businesses, preventing the abuse of market dominance and encouraging responsible innovation.
  • Public Safety: In cases where AI is used in critical domains such as healthcare, transportation, or public infrastructure, rules are essential to ensure the safety of individuals and the general public.

Regulation of AI in India

  • Digital Personal Data Protection Act, 2023: The government enacted the Digital Personal Data Protection Act in 2023, which can address some of the privacy concerns associated with AI platforms.
  • Global Partnership on Artificial Intelligence: India is a member of the GPAI. The 2023 GPAI Summit was held in New Delhi, where GPAI experts presented their work on responsible AI, data governance, and the future of work, innovation, and commercialization. 
  • The National Strategy for Artificial Intelligence (#AIForAll), by NITI Aayog: It featured AI research and development guidelines focused on healthcare, agriculture, education, “smart” cities and infrastructure, and smart mobility and transportation.
  • Principles for Responsible AI: In February 2021, the NITI Aayog released Principles for Responsible AI, an approach paper that explores the various ethical considerations of deploying AI solutions in India.

Challenges of Regulation

  • Rapid Evolution of AI: The field is constantly evolving, making it difficult to write future-proof regulations.
  • Balancing Innovation and Safety: Striking a balance between fostering innovation and ensuring safety is a challenge.
  • International Cooperation: Effective AI regulation requires international cooperation to avoid a fragmented landscape.
  • Defining AI: There’s no universally agreed-upon definition of AI, making it difficult to regulate effectively.

Way Ahead

  • Artificial Intelligence (AI) is here to stay and has the capability to fundamentally change the way we work. Whether it proves to be a force for good, for harm, or both, AI needs to be regulated.
  • By acknowledging the potential dangers of AI and proactively taking steps to mitigate them, we can ensure that this transformative technology serves humanity and contributes to a safer, more equitable future.

Source: TH