The state of autonomous weapons in today’s world

In November 2020, Iran's top nuclear scientist Mohsen Fakhrizadeh was assassinated by Israeli agents using an AI-assisted sniper rifle, according to a New York Times report. The report corroborated earlier claims by Iranian Revolutionary Guard investigators that a "satellite-controlled intelligent machine gun" was used to kill the scientist while he was driving with his wife. Fakhrizadeh was shot four times; his wife, seated beside him, was unharmed.
The gun used to kill Fakhrizadeh was mounted on a robotic apparatus that weighed about a ton. The entire system was installed in the bed of a pickup truck fitted with multiple cameras to give the assassins a full view of the surroundings. The truck was also packed with explosives so that the evidence could be destroyed once the mission was completed or compromised.
The weapon was connected to an Israeli command center via a satellite communications relay, and an operator aimed it at the target through a computer screen. Because the satellite link introduced a lag of about 1.6 seconds, an AI system was developed to compensate for the delay and track the movement of Fakhrizadeh's car. Facial recognition software built into the system was designed to target only Fakhrizadeh and leave his wife unharmed.
Although the attackers completed the mission and detonated the truck after the assassination, the robotic rifle system was not completely destroyed. The Iranian Revolutionary Guards used the remains of the rifle to investigate the attack, and the investigation revealed some telling facts about modern warfare.
In a similar but failed attempt in 2018, explosive-laden drones were used against then-Venezuelan President Nicolás Maduro, who narrowly escaped injury when two drones detonated near him while he was attending a public event.
Advances in AI have also accelerated the development of autonomous weapons. These weapons are expected to become more precise, faster, and cheaper over time. If this development is done ethically and responsibly, such machines could reduce casualties, help soldiers engage only combatants, and be deployed defensively against attackers.
In an article in The Atlantic, Taiwanese-born American computer scientist Kai-Fu Lee argued that autonomous weaponry is the third revolution in warfare, after gunpowder and nuclear weapons. He wrote that true AI-enabled autonomy involves the full commitment to kill: "finding, deciding to engage, and erasing another human life, without any human involvement."
This year, the Defense Advanced Research Projects Agency (DARPA), the Pentagon's research arm, tested fully autonomous, weapon-equipped AI-based drones. In August, an exercise with AI-controlled drones and tank-like robots was held in Seattle. The drones received high-level instructions from human operators but acted autonomously for tasks such as locating and destroying targets. The exercise demonstrated the value of AI systems in combat situations where conditions are too complex and dangerous for human intervention.
The United States is not alone: many other countries are actively exploring the use of AI in warfare, with China, no doubt, leading the race. According to a report by the Brookings Institution, the Chinese military has made significant investments in robotics, swarming, and other AI-enabled weapons. Although the sophistication of these systems is difficult to determine, the report indicates that they could operate with varying levels of autonomy.
Inherent dangers
Activists and experts from all walks of life believe that the use of autonomous weapons presents many dangers that can far outweigh the benefits. In a recent interview, Kai-Fu Lee said, "The biggest danger is autonomous weapons," adding that war is the only scenario in which AI is trained to kill humans. Lee warned that as autonomous weapons become more advanced and affordable, they could wreak havoc and might even be used by terrorists to commit genocide. "We have to figure out how to ban or regulate it," he added.
In 2015, tech and business leaders such as Elon Musk and Steve Wozniak, along with hundreds of AI researchers, signed an open letter proposing a ban on offensive autonomous weapons. The proposal has received the support of more than 30 countries; however, a report commissioned by the US Congress advised the United States to oppose such a ban.
Is regulation an option?
Human Rights Watch and other non-governmental organizations launched the Campaign to Stop Killer Robots in 2013. Since then, concern about fully autonomous weapons has risen up the international agenda, and they are increasingly recognized as a grave threat to humanity that deserves urgent multilateral action.
Since 2018, United Nations Secretary-General António Guterres has urged states to ban autonomous weapons that could, on their own, target and attack human beings, calling them “morally repugnant and politically unacceptable”.
The Convention on Certain Conventional Weapons (CCW), a legally binding instrument, has served as the main forum for this debate, with member states meeting annually since 2014 to discuss concerns related to Lethal Autonomous Weapons Systems (LAWS). Nearly 30 countries have called for a ban on fully autonomous systems, and 125 member states of the Non-Aligned Movement have called for a "legally binding international instrument" on LAWS. Critics, however, doubt that a full ban will go into effect anytime soon.
In a previous interview with Analytics India Magazine, Trisha Ray, an associate fellow at the Observer Research Foundation whose research focuses on LAWS, said the CCW is unlikely to call for a ban, but rather for safeguards in accordance with international humanitarian law, including meaningful human control.
