‘Killer robots’ are coming. Is the US ready for the consequences?

The battlefields of the future will be dominated by those who can best harness intelligence to rapidly and precisely maneuver against opponents. The current war in Ukraine is a stark example: While defending their nation against Russia’s invasion, the outnumbered and outgunned Ukrainians have used battlefield intelligence and pinpoint firepower to negate the numerical and qualitative advantages of the invading force. During a recent disastrous Russian river crossing in the Donbas region, for instance, Kyiv’s forces were able to use intelligence sources to identify, trap, and destroy an entire battalion.

Warfare is evolving, and the evidence is mounting that a smarter, more agile force can decisively defeat a stronger adversary through the precise application of new technologies. Central to this fast-evolving domain are autonomous combat systems, officially known as lethal autonomous weapons systems (LAWS)—or, colloquially, “killer robots.” These uncrewed systems use artificial intelligence (AI) and machine learning (ML) algorithms to autonomously identify and destroy a target. 

The United States and its allies and partners—in addition to their strategic competitors and adversaries—are researching and developing unmanned aerial systems and drones, as well as ground and underwater vehicles. While most are still being tested, some are already in operation (such as in Libya in 2020). The killer robot age may have already dawned with barely a ripple of public recognition, and now the United States must make some tough decisions: whether it is willing to field LAWS, and under what circumstances it will empower those systems to use lethal force.

Deadly—but delicate—tech

The allure of LAWS is clear: They reduce the risk to forces and are easier to support logistically, since the requirements of keeping an operator safe can be complicated and costly. They can also provide the speed of action that has been shown to be so critical on the ground in Ukraine, identifying a target and making a near-instantaneous execution decision without needing to send the information to a commander and then wait for approval. This speed can be the difference between destroying a high-value target and watching it safely flee while awaiting a fire order.

Finally, LAWS are calm, calculating, unemotional, and unbiased in their decision making. For example, imagine an autonomous drone that identifies a Russian multiple launch rocket system (MLRS) preparing to fire dozens of rockets into a residential area. A search of its internal database could mark the vehicle as a Russian MLRS, a perimeter search could rule out the risk of collateral damage, and a preprogrammed rules-of-engagement algorithm could confirm the target was valid. A pre-authorized shot (if the target met certain parameters) could be executed without delay, and the detection-to-destruction timeline could be seconds—as opposed to the tens of minutes it might take for those discrete actions to be taken off-board.
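
To make that sequence concrete, the sketch below shows, in heavily simplified form, how such a pre-authorized engagement check might be expressed in code. It is purely illustrative: the class names, target list, confidence threshold, and collateral-damage flag are hypothetical stand-ins for what would in practice be extensive, commander-approved, and rigorously tested rules of engagement.

```python
# Purely illustrative sketch; every name, threshold, and check here is a
# hypothetical stand-in for commander-approved, rigorously tested rules of engagement.
from dataclasses import dataclass


@dataclass
class Detection:
    vehicle_class: str             # classifier output, e.g. "MLRS"
    confidence: float              # classifier confidence, 0.0 to 1.0
    civilians_in_perimeter: bool   # result of the collateral-damage perimeter search


# Hypothetical pre-authorized parameters, set before the conflict begins
PREAUTHORIZED_TARGETS = {"MLRS"}
MIN_CONFIDENCE = 0.99              # certainty threshold of the kind discussed below


def engagement_decision(d: Detection) -> str:
    """Return ENGAGE, HOLD, or REQUEST_GUIDANCE for a single detection."""
    if d.vehicle_class not in PREAUTHORIZED_TARGETS:
        return "HOLD"                # not a pre-authorized target class
    if d.civilians_in_perimeter:
        return "REQUEST_GUIDANCE"    # collateral-damage risk: defer to a human
    if d.confidence < MIN_CONFIDENCE:
        return "REQUEST_GUIDANCE"    # identification is not certain enough
    return "ENGAGE"                  # all pre-authorized conditions are met


if __name__ == "__main__":
    mlrs = Detection(vehicle_class="MLRS", confidence=0.995,
                     civilians_in_perimeter=False)
    print(engagement_decision(mlrs))  # ENGAGE
```

The point of such a structure is that every lethal decision traces back to parameters a human commander set in advance; ambiguous cases fall through to a request for human guidance rather than a strike.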

But despite the many benefits of LAWS, there is also an important moral component that must be addressed. Are national decision makers and operational commanders willing to allow an autonomous vehicle to take a life—perhaps many lives? Are we, as a society, comfortable with empowering killer robots to do our military’s bidding? How much risk of error is an individual commander willing to accept? War, after all, is a messy business. LAWS will make mistakes; killer robots will inevitably take the lives of innocent civilians, cause collateral damage and carnage, and likely inadvertently kill friendly forces.

All of this is no longer science fiction and must be addressed soon. The time to legally empower LAWS to employ lethal force is prior to a conflict, not in the heat of battle. At the institutional level, the Department of Defense (and its counterparts in US-allied nations) must craft an operational framework for LAWS, as well as offer strategic guidance, to ensure their ethical application in the future. Autonomous systems must be tested thoroughly in the most demanding of scenarios, the results must be evaluated at the granular level, and an expected error rate must be calculated. As a baseline, LAWS should pose less risk of error than a human operator. 
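
As a purely hypothetical illustration of that baseline, the short calculation below compares an error rate observed in testing against a human-operator benchmark; every figure is invented for the example.

```python
# Hypothetical error-rate comparison; all figures are invented for illustration.
laws_errors, laws_engagements = 3, 10_000      # errors observed in LAWS testing
human_errors, human_engagements = 12, 10_000   # human-operator benchmark

laws_error_rate = laws_errors / laws_engagements      # 0.03%
human_error_rate = human_errors / human_engagements   # 0.12%

# Baseline from the text: the system should pose less risk of error than a human operator.
meets_baseline = laws_error_rate < human_error_rate
print(f"LAWS: {laws_error_rate:.2%}, human baseline: {human_error_rate:.2%}, "
      f"meets baseline: {meets_baseline}")
```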

Prior to the beginning of an engagement, the tactical decision for using lethal force needs to be made either by a theater commander or his or her delegated representative. That commander must evaluate the guidance provided by national decision makers, the operational environment, and the critical nature of individual targets on a tactical battlefield. The commander must provide clear guidance that can be written into an algorithm for use throughout a particular conflict, an algorithm that will ultimately decide autonomously whether to take a human life (or direct the LAWS to request further guidance or authorization if the scenario is unclear).

The commander must also be prepared to justify his or her decision if and when the LAWS is wrong. As with the application of force by manned platforms, the commander assumes risk on behalf of his or her subordinates. In this case, a narrow, extensively tested algorithm with an extremely high level of certainty (for example, 99 percent or higher) should meet the threshold for a justified strike and absolve the commander of criminal accountability.

Lastly, LAWS must also be tested extensively in the most demanding possible training and exercise scenarios. The methods they use to make their lethal decisions—from identifying a target and confirming its identity to mitigating the risk of collateral damage—must be publicly released (along with statistics backing up their accuracy). Transparency is crucial to building public trust in LAWS, and confidence in their capabilities can only be built by proving their reliability through rigorous and extensive testing and analysis. 

The decision to employ killer robots should not be feared, but it must be well thought-out and meticulously debated. While the future offers unprecedented opportunity, it also comes with unprecedented challenges for which the United States and its allies and partners must prepare.


Tyson Wetzel is the 2021-22 senior US Air Force fellow at the Atlantic Council’s Scowcroft Center for Strategy and Security. The positions expressed do not reflect the official position of the United States Air Force or the Department of Defense.

Image: The US Army launches an ALTIUS-600 drone from a Dagor ultralight tactical vehicle in May 2021 at the Dugway Proving Ground in Utah. Photo by FVLCFT/Cover-Images.com/REUTERS