RESPONSIBILITY OF KILLER ROBOTS FOR CAUSING CIVILIAN HARM: A CRITIQUE OF AI APPLICATION IN WARFARE DOCTRINE
Abstract
Advances in artificial intelligence and related technologies have led to the development of robots capable of performing a wide range of functions, one of which is to replace human soldiers on the battlefield. Killer robots, formally referred to as “autonomous weapon systems,” threaten the principles of human accountability that underpin the international criminal justice system and the law of war that has arisen to support and enforce it, and they challenge the law of war’s conceptual framework. In the worst-case scenario, they might encourage the development of weapon systems designed specifically to shield both governments and individuals from liability for the conduct of war. Furthermore, killer robots cannot comply with fundamental law of war principles such as the principle of responsibility, because they are not designed to possess the human-like characteristics that these principles presuppose. This research paper addresses the accountability of autonomous robots in warfare. The study contends that while the law of war assigns responsibility to human agents, determining responsibility in circumstances involving killer robots is problematic. The article is based on a qualitative research methodology.