Technology

Killer Robots: A Threat to International Peace and Security?

Artificial Intelligence has proved to be one of the revolutionary inventions of the 21st Century. Its applications have spread across various fields, including politics, business, health care, lifestyle and electronics. Countries such as the United States of America and China are known to have invested the most in the research and development of Artificial Intelligence. However, the technology has drawn opposition from a number of activists in both the scientific and ethical communities. Their central demand is that these new technologies be used exclusively for peaceful purposes; today, however, AI is also being put to very lethal, destructive and non-peaceful uses.

In 2013, a scientist-led Campaign to Stop Killer Robots was launched. Later, at the 24th International Joint Conference on Artificial Intelligence, held in July 2015 in Buenos Aires, the capital of Argentina, an open letter was presented. It was signed by almost 1,000 experts and intellectuals, including AI researchers, and called for a complete ban on the research and development of Killer Robots; among the signatories were Noam Chomsky, Elon Musk and Stephen Hawking. The United Nations has convened conferences to discuss the fate of such hazardous technologies; however, any prospect of a ban has met with objections from the United States, Russia, Israel and others, since these countries have already poured millions of dollars into research and development.

Killer Robots fall under a broader paradigm of technology known as Lethal Autonomous Weapons (LAWs). There is no single definition of this category of weapons. The most commonly agreed formulation describes LAWs as military weapon systems that, through the application of Artificial Intelligence, are capable of identifying, locating, selecting and engaging targets. Before the international community can settle on one definition, however, it needs to decide the ambit of two terms: lethality and autonomy.

On the question of technical autonomy, Dustin Lewis, Naz Modirzadeh and Gabriella Blum of the Harvard Law School Program on International Law and Armed Conflict suggest the concept of war algorithms as an effective criterion for determining varying levels of autonomy. They define a war algorithm as “any algorithm that is expressed in computer code, that is brought into effect through a constructed system, and that is capable of operating in relation to armed conflict.”

The main idea behind introducing AI technology into conventional warfare is to spare human life: robots would replace human soldiers and combatants. Robots could efficiently do the job of killing a perpetrator, and this has also been argued to cut a huge chunk out of governmental expenditure on the defense sector. However, is it really feasible to invite a future where robots wage war against humans or, in worse cases, where robots fight enemy robots? There are serious doubts over the functionality and viability of such a war; it could be never-ending. It is also very important to analyze these lethal autonomous weapons through an ethical and moral lens. Can a robot be given the right to end a human's life, even if that human is a deemed terrorist? Can a robot make a wise decision on whether a person's life should be ended, or whether to release the person if he has accepted his guilt? Artificial intelligence has many limitations, and a robot can never be made to take the place of humans, especially in warfare.

Finally, it is important to discuss where these Lethal Autonomous Weapons belong in International Law. Under the United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, any weapon that can be severely injurious to people and is not deemed part of conventional warfare needs to be prohibited by law. The Convention entered into force in 1983 and has five protocols, each addressing a specific category of weapons, such as blinding laser weapons and landmines, and calling for bans or restrictions on their use. There has been much debate over whether these ‘Lethal Autonomous Weapons’ should be banned under an additional protocol to the Convention. Before that, however, it is important to define the ambit of the very term ‘Lethal Autonomous Weapons’, because it encompasses a wide range of weapons of varying autonomy. It is important to distinguish between lethal weapons that remain significantly under human control even after deployment and those that approach absolute autonomy and work on a ‘fire and forget’ basis.

It is important to take the opinions and views of the scientific community before legislating any law; otherwise, politicians will continue to fund the making and development of these weapons in order to further their political interests. Even if the international community agrees to continue research, specifically on the category of fully autonomous weapons, and merely controls it through regulations, the technology will still pose a threat to poorer countries. It is important to note that only the militarily advanced countries own this technology, and they could use it against the third world.

Picture Credits: https://www.bostonglobe.com


