Need For Trustworthy AI in Modern Warfare

Syllabus: GS3/ Defence, Science and Technology

Context

  • The Chief of Defence Staff General Anil Chauhan launched the Evaluating Trustworthy Artificial Intelligence (ETAI) Framework and Guidelines for the Armed Forces.

About

  • The ETAI Framework focuses on five broad principles:
    • Reliability and Robustness
    • Safety and Security
    • Transparency
    • Fairness
    • Privacy
  • The framework and guidelines offer developers and evaluators a structured approach to building and assessing trustworthy AI.
Artificial Intelligence
– Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. 
– Artificial intelligence allows machines to model, or even improve upon, the capabilities of the human mind. 

Artificial Intelligence in Defence Sector

  • Intelligence and Surveillance: AI helps military analysts process vast amounts of data gathered from satellites, drones, and other sensors to detect threats, recognize patterns, and make informed decisions in real time.
  • Autonomous Weapon Systems: AI-powered systems like drones, unmanned combat vehicles, and missile systems operate autonomously, reducing human intervention in combat scenarios.
  • Supply Chain Management: AI optimizes logistics by predicting equipment failures, automating inventory management, and ensuring timely delivery of critical supplies. 
  • Cybersecurity: AI helps identify vulnerabilities, detect cyberattacks in real time, and respond automatically to mitigate damage. AI-driven systems provide predictive capabilities, safeguarding sensitive military infrastructure.
  • Decision-Making Support: AI enhances decision-making in warfare by simulating various combat scenarios and predicting outcomes. 
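A minimal, purely illustrative sketch of the idea behind simulation-based decision support: it samples uncertain outcomes of two hypothetical courses of action many times and compares the estimated success rates. All names, probabilities, and figures below are invented assumptions, not part of any actual military system or of the ETAI Framework.

```python
import random

# Hypothetical illustration only: a toy Monte Carlo comparison of two
# courses of action under uncertainty. All values are invented.

def estimate_success_rate(success_prob: float, trials: int = 10_000) -> float:
    """Estimate the success rate of an option by repeated random sampling."""
    successes = sum(1 for _ in range(trials) if random.random() < success_prob)
    return successes / trials

if __name__ == "__main__":
    random.seed(42)  # reproducible runs for the illustration
    options = {"Option A": 0.62, "Option B": 0.55}  # assumed success probabilities
    for name, prob in options.items():
        print(f"{name}: estimated success rate = {estimate_success_rate(prob):.2%}")
```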

Need For Trustworthy AI in Modern Warfare

  • Ethical Dilemmas: AI systems may misidentify non-combatants as threats, leading to potential violations of international humanitarian law.
    • The United Nations, for instance, raised concerns over the reported use of autonomous drones during the Libyan Civil War in 2020.
  • Cybersecurity Risks: AI systems are vulnerable to cyberattacks, where adversaries could manipulate the algorithms to produce incorrect results or hijack autonomous systems. 
  • Accountability: If an AI-powered autonomous system causes collateral damage or violates the laws of war, it becomes challenging to assign responsibility.
    • Autonomous military systems like LAWS (Lethal Autonomous Weapon Systems) have sparked debates regarding accountability.
  • Bias in AI Decision-Making: Facial recognition systems, for example, have exhibited racial bias, misidentifying individuals from particular ethnic groups at higher rates (illustrated in the sketch below).
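A minimal, purely illustrative sketch of how an evaluator might quantify such bias in the spirit of the ETAI Fairness principle: it compares the false-positive rates of a binary classifier across two groups and reports the gap. The records, labels, and group names are invented assumptions used only for illustration.

```python
# Hypothetical illustration only: comparing false-positive rates across two
# groups to flag possible bias in a binary classifier's outputs.
# The records below are invented; no real system or data is represented.

records = [
    # (group, true_label, predicted_label) where 1 = "flagged as a match"
    ("group_x", 0, 1), ("group_x", 0, 0), ("group_x", 1, 1), ("group_x", 0, 0),
    ("group_y", 0, 1), ("group_y", 0, 1), ("group_y", 1, 1), ("group_y", 0, 0),
]

def false_positive_rate(rows):
    negatives = [r for r in rows if r[1] == 0]       # truly non-matches
    false_pos = [r for r in negatives if r[2] == 1]  # wrongly flagged
    return len(false_pos) / len(negatives) if negatives else 0.0

rates = {}
for group in {r[0] for r in records}:
    rates[group] = false_positive_rate([r for r in records if r[0] == group])

gap = max(rates.values()) - min(rates.values())
print("False-positive rate by group:", rates)
print("Gap:", round(gap, 2), "-> a large gap would prompt further review")
```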

Way Ahead

  • While AI has immense potential to revolutionize defence capabilities, its integration into military operations is fraught with challenges. 
  • Addressing these challenges requires stringent ethical guidelines, international cooperation, robust technological safeguards, and accountability frameworks to ensure that AI in defence is used responsibly and without jeopardizing security.

Source: BL