Responsible Use of Artificial Intelligence in the Military Domain (REAIM)

Syllabus: GS3/Science and Technology

Context

  • The summit on Responsible Use of Artificial Intelligence in the Military Domain (REAIM) has begun in Seoul, South Korea.

About

  • It is part of a new wave of global diplomacy to shape norms on the military applications of AI. 
  • The summit is being co-hosted by Kenya, the Netherlands, Singapore, and the United Kingdom. 
  • This is the second iteration of the summit; the first took place in 2023 in The Hague, Netherlands.
  • The three-fold objective of the summit is to:
    • understand the implications of military AI on global peace and security, 
    • implement new norms on using AI systems in military affairs, 
    • and develop ideas on long-term global governance of AI in the military domain.

Artificial Intelligence

  • Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. 
  • Artificial intelligence allows machines to model, or even improve upon, the capabilities of the human mind. 
  • From the development of self-driving cars to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is increasingly becoming part of everyday life, and an area in which every industry is investing.

Application of AI in Military

  • While AI has long been used by leading militaries for inventory management and logistical planning, in the past few years, the use of AI in intelligence, surveillance, and reconnaissance of the battlefield has significantly expanded.
  • Major militaries see the capacity of AI to transform the collection, synthesis, and analysis of vast amounts of data from the battlefield.
    • It can be useful in raising situational awareness, increasing the time available for decision-making on the use of force, enhancing precision in targeting, limiting civilian casualties, and increasing the tempo of warfare.
  • Many critics have warned that these presumed attractions of AI in warfare might be illusory and dangerous.
  • The proliferation of so-called AI decision-support systems (AI-DSS) and their implications are among the issues now being debated under the REAIM process. 

Need for the Regulation

  • The fear that the conduct of warfare would be taken over by computers and algorithms has generated calls for controlling such weapons. 
  • Keeping humans in the decision-making loop on the use of force has been a major objective of this discourse. 
  • Issues relating to lethal autonomous weapon systems (LAWS) have been discussed at the United Nations in Geneva within a Group of Governmental Experts since 2019.
  • The REAIM process widened the debate beyond ‘killer robots’ to a broader range of issues by recognising that AI systems are finding ever greater applications in warfare. 

Responsible use of AI in Military Affairs

  • The REAIM process is one of many initiatives (national, bilateral, plurilateral, and multilateral) to promote the responsible use of AI in the military domain.
  • The US has encouraged its NATO allies to adopt norms for the responsible military use of AI. 
    • NATO’s 2021 strategy identified six principles for the responsible military use of AI and unveiled a set of guidelines for its forces. 
    • The objective is to “accelerate” the use of AI systems that could generate military gains for NATO, but in a “safe and responsible” manner.
  • The world is going to see more, not less, AI in warfare; this comports with the historical trend that all new technologies eventually find military applications. 
  • The REAIM process recognises this — and given the potentially catastrophic outcomes from such use, the idea is to develop an agreed set of norms. 

Source: IE