
The Folly of an “A.I. Arms Race” Framework for U.S.-China Relations and Cybersecurity Concerns

A new, highly automated future is approaching fast, and an “arms race” framework will fuel instability unless we learn from past experience.

History of the “A.I. Arms Race”

There has been an ongoing conversation in media, government, and academia about the potential for an “A.I. Arms Race” between China and the United States. A 2016 study presented at the eighth International Conference on Cyber Conflict argued that the U.S. and Iran, as well as North and South Korea, could already be said to be in an “arms race” in cyber capabilities.1 The concern stems from a fear that whoever achieves an advantage in A.I. could exploit the dual-use nature of the technology to significantly outpace rivals in military capabilities and cyber operations. However, Chinese and American government officials, technologists, and researchers have begun to push back against this narrative as damaging and potentially dangerous.

Problems of the “Arms Race” Framework with China

The most significant example of the emerging “arms race” narrative about A.I. came when Dr. Kai-Fu Lee, a prominent Taiwanese technologist and researcher, described China’s “Sputnik moment”: AlphaGo, a program developed by Google’s DeepMind, defeated the best Chinese player at Go. Go, an ancient Chinese strategy game with a simple rule set but a nearly endless number of possible moves, has long functioned, like chess, as a benchmark for strategic thinking. The defeat opened the eyes of the Chinese technology sector and the Chinese government to the possibilities of A.I.2 China has now set a goal to become the A.I. industry leader by 2030, to much consternation and skepticism from the U.S. media and government. The CCP has invested billions of yuan in research and development, allowing China to outpace the U.S. in published A.I. papers and to accelerate its pursuit of military applications for A.I.

This set off the boom of “arms race” talk now playing out internationally, with many foreign policy experts and leading technology innovators worrying that such talk could harm the safety of these systems as they are rushed out to “get ahead” of rivals. An A.I. arms race, if it actually emerges, would bear little resemblance to the nuclear arms race of the 20th century: the research is far less secretive, and governments play a much smaller role in the industry than the near-monopoly they held over nuclear weapons.

Cybersecurity Concerns in the “Arms Race”

While A.I. and machine learning (M.L.) will be applied across many sectors, the cybersecurity industry could be especially affected by their development and eventual deployment. The technology could expand the capabilities of both offensive and defensive actors: attackers could automate and improve DDoS attacks, use big data to sharpen phishing and social engineering, and create more effective malware and viruses. Defenders benefit from A.I. in a similar way, potentially automating certain network defenses and allowing security teams to respond more rapidly, possibly within seconds.3 (A brief, illustrative sketch of what such automated defense might look like follows the list below.) Experts have warned about the following attacks and developments enabled by reliance on, or malicious use of, A.I.:

  • Hacking self-driving cars.
  • Bots impersonating people over the phone using voice recordings.
  • Social engineering with deep fakes for blackmail.
  • Authoritarian governments’ use of facial recognition, predictive crime analytics, and monitoring of “anomalous public behavior.”4
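
To make the defensive point above more concrete, here is a minimal, illustrative sketch of how machine-learning-based anomaly detection might flag suspicious network traffic for a security team. It is not drawn from any program or product discussed in this piece; the feature set, thresholds, and traffic data are hypothetical, and the open-source scikit-learn library’s IsolationForest model is used purely as an example of the general technique.

```python
# Minimal sketch: flagging anomalous network connections with an
# unsupervised model (IsolationForest from scikit-learn).
# All features and traffic data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical per-connection features: [bytes sent, bytes received, duration (s)]
normal_traffic = rng.normal(loc=[5_000, 20_000, 2.0],
                            scale=[1_000, 5_000, 0.5],
                            size=(1_000, 3))

# Train on a baseline of "normal" traffic; contamination is the assumed
# fraction of anomalies the model should expect to see.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new connections, e.g. a burst of outbound data that might indicate exfiltration.
new_connections = np.array([
    [5_200, 21_000, 1.9],     # looks like ordinary traffic
    [900_000, 1_000, 30.0],   # unusually large outbound transfer
])
labels = model.predict(new_connections)  # 1 = normal, -1 = anomalous

for features, label in zip(new_connections, labels):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: bytes_out={features[0]:.0f}, "
          f"bytes_in={features[1]:.0f}, duration={features[2]:.1f}s")
```

Real defensive systems use far richer features and feed such alerts into automated response playbooks; the point here is only that the pattern-recognition capability that speeds up defense is the same capability attackers can turn toward offense.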

Lessons from the Nuclear Age

While the framework of an arms race is damaging and creates uncertainty between states, some lessons from the Cold War arms race can be applied to emerging technologies like artificial intelligence. Chief among them is the creation of channels of communication and international norms governing the safe deployment of these technologies. Paul Scharre of the Center for a New American Security put it best: “A race to the bottom on AI safety is a race no one would win.”5 Scharre advocates funding domestic A.I. safety-testing programs and pursuing greater cooperation, even with adversaries, both on A.I. safety and on defining unacceptable uses of A.I. technologies.

In February 2019, former President Trump signed the Executive Order on Maintaining American Leadership in Artificial Intelligence. This came as part of a broader White House initiative to advance research and industry investment in A.I., almost directly in response to China’s stated goals. A significant step in the right direction within that initiative is DARPA’s AI Next campaign, which researches how to make A.I. systems more defensible and secure against cyber attacks. The initiative has also facilitated summits with industry, so that the private enterprises leading the field are included in any discussion of “limiting” or “controlling” the spread of this technology; testing and information sharing will be vital to ensuring the safe deployment of A.I.

Conclusion

The rhetoric of an “A.I. Arms Race” between the United States and China may have reached its height at the November Democratic presidential debate, where several candidates raised it during a discussion of foreign policy. However, the consensus among experts and those in the technology sector is that perpetuating an “arms race” framework could cause an actual race to emerge. The key moving forward is to acknowledge the risks of such discourse and to build a broader international consensus on preventing the rushed and unsafe deployment of such systems. A.I. will allow states to improve their defenses and attackers to mount more efficient assaults in cyberspace, but letting fears of such advances turn into a race to develop the best military A.I. technology is dangerous. A new, highly automated future is approaching fast, and an “arms race” framework will fuel instability unless we learn from past experience.

ENDNOTES

1. Anthony Craig and Brandon Valeriano, “Conceptualising Cyber Arms Races,” 2016 8th International Conference on Cyber Conflict (CyCon), 2016.

2. Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Boston: Houghton Mifflin Harcourt, 2018).

3. Gil Press, “The AI Cybersecurity Arms-Race: The Bad Guys Are Way Ahead,” Forbes, April 26, 2018.

4. Paul Scharre, “Killer Apps: The Real Dangers of an AI Arms Race,” Foreign Affairs 98, no. 3 (2019): 135-145.

5. Ibid.


By Dylan Biggs

Dylan Biggs is an American-Australian graduate of Temple University in Philadelphia, Pennsylvania. He completed his Bachelor of Arts in History with honours in 2019, and began his Master of International Service at American University in Washington, DC the same year. Dylan hopes to be at the front of the emerging field of international tech policy.
