Opinion: Establishing Safeguards for the Use of Artificial Intelligence and Weaponization of Algorithms in the Context of Defense and Intelligence Gathering for the United States
- Steven Rahman (Guest Writer)
Edited by Humphrey Chen
Note: this is an opinion piece
Executive Summary
A few years ago, I spoke at a biometrics symposium in Florida. In my lecture, I argued that the United States should avoid encouraging the weaponization of algorithms, and of consumer technology more broadly, because the American economy depends on technology and computing infrastructure more than any other economy in the world. If it becomes normal to weaponize this technology, we should expect our adversaries to do the same. This proved to be a very unpopular opinion, and a board member of Lockheed Martin reacted very poorly to my presentation. A few months later, I made the same argument to the former Chief of Staff to the US Secretary of Defense and noted his general unwillingness to support an outright ban on algorithm-based weapons. Today, the defense community remains unwilling to recognize that the United States is the most vulnerable country in the world when it comes to the weaponization of algorithms.
Introduction
This article argues for a data-driven analysis of national vulnerability to algorithmic attacks. My working hypothesis is that the United States is the most vulnerable country in the world when it comes to algorithm weaponization. If it is established that the United States is among the most vulnerable countries in the world, then it would make sense for the United States to take the lead in creating a global ban on the weaponization of algorithms. This might include a multilateral convention prohibiting the weaponization of algorithms and their use by militaries, similar to other weapons bans such as the Chemical Weapons Convention or the Nuclear Non-Proliferation Treaty. Such a convention might in turn require the creation of a new multilateral agency, similar to the International Atomic Energy Agency, to monitor compliance and provide safeguards.
Context
Two specific and dangerous moments from the last decade.
Though we have seen several advances in technology and its application to weapons systems, two moments stand out as particularly dangerous because of the precedents they may set. As a technologist, I followed the Stuxnet saga with great interest and, more recently, the pager-based attack on Hezbollah commanders.
Stuxnet
Stuxnet was an incredibly sophisticated piece of malicious code designed to sabotage the centrifuges Iran used to enrich uranium, the fissile material needed for atomic weapons. It was a computer worm, spread via thumb drives, that subverted the industrial control systems made by Siemens, the German technology company, which governed those centrifuges, driving them to destroy themselves. Though we do not know for sure, the technology community speculates that this was a sophisticated attack designed by a state actor, because of the care taken to limit collateral damage and to ensure that Stuxnet would only destroy centrifuges located physically within Iran.
Hezbollah Pager Attack
More recently, there is speculation that the pager bombs which targeted Hezbollah commanders were deployed by Israeli security services. These pager bombs circulated within civilian populations for months before they were activated. In both cases, a sophisticated entity exploited civilian-maintained technology infrastructure and supply chains for a military purpose. In both examples, safeguards could have failed and these weapons could have accidentally unleashed havoc upon civilians and other unintended targets. What if Stuxnet's geo-fence had failed, or if the worm had affected other Siemens-controlled equipment? What if the Hezbollah pagers had exploded aboard a passenger aircraft? Given the risks to non-combatants, I cannot imagine that the United States could legally deploy such weapons under existing laws and rules governing targeting and the engagement of enemy combatants. Any deployment of such weapons by the United States would violate not only international treaties governing the legal use of force but also American domestic law.
Thesis
I propose that algorithm weaponization is a grave threat to the United States and to the most advanced NATO member states, whose economies depend on free-flowing information infrastructure. Weaponized algorithms will most severely impact 1) the countries, economies, and civilian infrastructure most dependent on information technology, 2) the countries where information is encouraged to flow freely, and 3) the economies that are disproportionately dependent on new technology adoption.
Recommendations:
Establish that NATO member states are the most vulnerable to cyber attacks, and that the United States, specifically, is more vulnerable than just about any other country. This could be demonstrated through an evergreen index that enables policy makers to compare nations, published through established channels to increase public awareness; a rough sketch of what such an index might look like follows this list.
Establish the historical trend of algorithmic attacks worldwide, develop frameworks for imagining likely future attacks, and establish how these attacks would impact the countries within the index. This would include an attempt to predict the types of attacks most likely to be deployed against the most vulnerable countries.
Create a new multilateral organization if needed (e.g. an organization similar to the International Atomic Energy Agency) that can monitor and enforce prohibitions on algorithmic attacks.
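To make the first recommendation more concrete, below is a minimal sketch of how such a vulnerability index might be computed: a weighted composite of normalized indicators per country. Every indicator name, weight, and value shown here is a hypothetical illustration of the approach, not a proposed methodology or real data.

```python
# Hypothetical sketch of a national "algorithmic vulnerability index".
# All indicator names, weights, and figures below are illustrative
# assumptions, not real data or an established methodology.

from dataclasses import dataclass

@dataclass
class CountryIndicators:
    name: str
    internet_dependence: float          # share of GDP tied to digital services (0-1)
    automation_exposure: float          # adoption of autonomous/AI systems (0-1)
    openness_of_information: float      # how freely information flows (0-1)
    critical_infra_connectivity: float  # networked critical infrastructure (0-1)

# Illustrative weights; a real index would derive these from expert
# elicitation or from regression against observed incident impact.
WEIGHTS = {
    "internet_dependence": 0.35,
    "automation_exposure": 0.25,
    "openness_of_information": 0.15,
    "critical_infra_connectivity": 0.25,
}

def vulnerability_score(c: CountryIndicators) -> float:
    """Weighted composite in [0, 1]; higher means more exposed to algorithmic attack."""
    return (
        WEIGHTS["internet_dependence"] * c.internet_dependence
        + WEIGHTS["automation_exposure"] * c.automation_exposure
        + WEIGHTS["openness_of_information"] * c.openness_of_information
        + WEIGHTS["critical_infra_connectivity"] * c.critical_infra_connectivity
    )

if __name__ == "__main__":
    # Entirely made-up example values, included only to show the ranking mechanics.
    countries = [
        CountryIndicators("Country A", 0.90, 0.80, 0.90, 0.85),
        CountryIndicators("Country B", 0.50, 0.30, 0.40, 0.45),
    ]
    for c in sorted(countries, key=vulnerability_score, reverse=True):
        print(f"{c.name}: {vulnerability_score(c):.2f}")
```

The point of publishing such an index on an evergreen basis is that the ranking, not any single score, is what lets policy makers and the public compare national exposure over time.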
Artificial Intelligence
The United States should avoid using algorithms like Stuxnet and exploiting global supply chains, because doing so invites other countries and non-state actors to do the same. Instead, the United States should identify the vulnerabilities of countries worldwide and make recommendations to keep all people safe from algorithmic attacks. Artificial intelligence is the most current category of algorithms that could potentially be weaponized, and no country in the world is at greater risk than the United States. For example, are the people of Italy regularly stepping into driverless taxis? Are the citizens of Egypt seeing their election infrastructure attacked by state actors? Are the children of Laos being persuaded via short-form video to adopt specific political philosophies? All of these things are happening in the United States. From what I can see, there is today no concerted effort within the defense and intelligence communities to address this danger, nor is there recognition that if the United States builds these weapons, it is inviting an attack.
Treaties Limiting Weapons Proliferation Have Been Effective
On the contrary, through informal surveys of defense professionals, I have noted a very different attitude. Many argue that the United States should continue to invest in cyber weapons. This attitude reminds me of Dr. Edward Teller, a Manhattan Project alumnus who, during the dawn of the nuclear age, argued that the United States should continue to advance its atomic weapons program because it held a technological advantage and should press it. Such advantages are short-lived; knowledge rarely remains safely confined within a silo.
Cyber attacks on the United States will inflict more damage than cyber attacks on other countries, and are likely more damaging than any attack the United States could launch in return. It is therefore in the national interest for the United States to lead efforts to achieve a global ban on algorithmic weapons.
There is a lot to learn from existing arms control agreements. If we consider the Chemical Weapons Convention, the Nuclear Non-Proliferation Treaty, and the Strategic Arms Limitation Talks, among many others, we can see that these agreements have been effective: the spread of weapons to new environments and the creation of new types of weapons have been contained.
When nuclear weapons were first developed, there was considerable fear that many countries would develop atomic weapons programs. During the early years of the Cold War, it was feared that over 50 countries had the capability to begin atomic weapons development. To address this, the existing nuclear states came together and created multilateral agencies to monitor the safe use of nuclear technology while seeking a total ban on weapons proliferation. In 1970, the Nuclear Non-Proliferation Treaty (NPT) came into force.
The United States was a major contributor to, and provided significant leadership in, the creation of the NPT. Today, policy makers should consider a similar approach to the weaponization of algorithms, especially when we consider just how effective weapons conventions have been. Are there 50 nuclear states today? After all, the technology is now close to 80 years old, and many university physics departments could supply the scientific and engineering capacity needed for an atomic weapons program.
The answer, very fortunately, is no. Today, there are fewer than ten official and unofficial nuclear states. Thanks to the NPT, atomic weapons programs have not proliferated as feared, and we have not seen a nuclear weapon used in war since 1945.
Some say the information age began in the 1960s. If you had to pick its first year, perhaps it is 1968, the year Intel Corporation was incorporated. Since then, we have not seen the information technology infrastructure exploited programmatically and strategically, at catastrophic scale, by militaries, non-state actors, or terrorists. There has been no digital Hiroshima.
We have been very lucky. But it is inevitable that our luck will run out.