The Age of Killer Robots

In the rapidly evolving realm of military innovation, one of the greatest threats to human rights, international security, and humanitarian law is the emergence of lethal autonomous weapons systems (LAWS), commonly known as “killer robots.” These systems have the capability to identify, select, and engage targets without human intervention. Advances in military technology by countries such as Russia, Israel, and others show that the age of killer robots is no longer science fiction but our present reality.
Perhaps the most alarming example is Israel’s AI-driven warfare in Gaza. For several years, Israel has employed AI-based facial recognition at checkpoints in the West Bank. More concerning is the deployment of targeting systems such as Lavender, which reportedly marked some 37,000 individuals as targets during the early stages of the Gaza war, placing life-or-death decisions in the hands of an algorithm. With a reported error rate of roughly 10%, Lavender is believed to have misidentified thousands of innocent civilians, violating the principles of proportionality and distinction enshrined in international humanitarian law.
Israel’s arsenal in Gaza has also included booby-trapped robots, remote-controlled robotic dogs, and AI-guided quadcopters used to bomb civilian areas. These systems destroy infrastructure and take lives indiscriminately. With an estimated 59% of buildings and 87% of schools in Gaza damaged or destroyed, the result is not only a humanitarian disaster but also a direct consequence of dehumanized, automated warfare.
Russia, too, has integrated autonomous weapons into its military operations, especially in Ukraine. Its extensive use of Iranian-made Shahed drones against Kyiv exemplifies how automated systems threaten civilians and decimate infrastructure. Once launched, these drones navigate to and strike their pre-programmed targets without human oversight, marking a dangerous evolution from conventional war to algorithmic aggression.
Similarly, India has used Israeli-made Harop loitering munitions, often called “suicide drones,” during cross-border operations. These drones can autonomously patrol a designated area and strike without further human intervention. While praised for precision, they remove ethical judgment and human accountability from the attack. A pre-programmed drone cannot assess proportionality or account for civilians in the vicinity, making such systems fundamentally flawed.
This concern is not hypothetical. In complex conflict zones—where the line between combatants and civilians is often blurred—autonomous weapons become exceptionally dangerous. Critics argue that these systems eliminate essential human oversight from decisions of life and death. Machines lack empathy, moral reasoning, and the capacity to weigh military advantage against potential civilian harm. As Human Rights Watch warns, automation is dehumanizing war—reducing individuals to mere data points.
Proponents argue that killer robots can reduce risks to soldiers and eliminate irrational behaviors, such as impulsive violence or revenge killings. They also claim LAWS can process data more efficiently and potentially reduce collateral damage. However, these justifications ignore the unpredictability and ethical complexity of warfare. Article 6 of the International Covenant on Civil and Political Rights (ICCPR) affirms the inherent right to life, which must be protected by law. Delegating such decisions to machines violates this fundamental right.
An even more concerning dimension is the accessibility of LAWS to non-state actors. In 2016, ISIS conducted its first successful drone strike in Iraq, killing two Kurdish fighters. This success prompted the group to create its own drone unit, the “Unmanned Aircraft of the Mujahideen.” In 2018, Syrian rebel groups launched a swarm of homemade drones against Russian military installations. These incidents show how easily drone and AI technologies can be weaponized by terrorists.
AI and robotics expert Noel Sharkey has warned of a future in which cheap, lethal drones without safety mechanisms become available to extremist groups. This fear is visualized in the viral video “Slaughterbots,” where AI mini-drones hunt and kill people using facial recognition and social media data. Such technology could enable untraceable, targeted assassinations with terrifying precision.
This threat extends especially to fragile, conflict-affected regions. In Africa, for instance, there is fear that autonomous weapons lost in counterterrorism missions could fall into insurgent hands. The U.S. has reportedly lost Reaper and Predator drones in unstable states like Yemen, Libya, and Niger. If modified or hacked, these weapons could allow non-state actors to carry out deadly strikes with devastating effect.
Accountability is a critical concern. Who is to blame when a killer robot commits a war crime? The programmer? The manufacturer? The military official who authorized deployment? Without a clear framework for responsibility, such incidents risk going unpunished—challenging the core tenets of command responsibility in international law.
Despite the rising danger, international regulatory efforts remain stalled. Countries like India, Russia, Iran, Türkiye, and Israel oppose binding treaties, fearing constraints on their military capabilities. Their resistance has slowed the progress of forums like the UN Convention on Certain Conventional Weapons (CCW). However, a significant step was taken in November 2023 when the UN General Assembly passed a resolution—by a vote of 164 in favor—calling for regulation of autonomous weapons.
Civil society has also mobilized. The Stop Killer Robots campaign, together with organizations such as Human Rights Watch and Amnesty International, is advocating for a comprehensive treaty that ensures ethical oversight and legal accountability.
To prevent a dystopian future, urgent international action is needed. Nations must come together to draft and ratify a binding treaty prohibiting the development and use of lethal autonomous weapons. This treaty must ensure accountability, uphold the right to life, and mandate meaningful human control over the use of force. Without such safeguards, we risk entering a world where machines decide who lives and dies—without conscience, compassion, or consequence.
The views and opinions expressed in this article/paper are the author’s own and do not necessarily reflect the editorial position of The Spine Times.

Sarina Tareen
The writer is a research intern at the Balochistan Think Tank Network (BTTN) and a graduate in International Relations from BUITEMS.