A Self-Driving Nuclear Weapons Race Has Begun
A Tale of Chatbots, Warbucks and Warbots
“Autonomous nuclear weapons introduce new risks of error and opportunities for bad actors to manipulate systems. Current AI is not only brittle; it’s easy to fool. A single pixel change is enough to convince an AI a stealth bomber is a dog.” – Zachary Kallenborn, Bulletin of the Atomic Scientists
By James Heddle - EON
Welcome to CyberWonderland
ChatGPT is being hyped as a cutting-edge new ‘helper bot’ by the Elon Musk-backed tech firm OpenAI. Sott.net reports that “Microsoft on Monday announced a new multiyear, multibillion-dollar investment with ChatGPT-maker OpenAI.”
Thanks for reading Planetarian Perspectives from EON3! Subscribe for free to receive new posts, or for any amount to support our work.
According to the New York Post, “This superhuman tech can do a variety of complicated tasks on the fly, from composing complex dissertations on Thomas Locke to drafting interior design schemes and even allowing people to converse with their younger selves.”
Wow! Do you suppose this wondrous technology could maybe get weaponized with malicious intent?
You bet it can... And it is.
In 2021, Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher co-authored a book titled The Age of AI: And Our Human Future. As you might expect, these guys are arch AI boosters. Critics pointed out that:
“Its title alone—The Age of AI: And Our Human Future—declares an epoch and aspires to speak on behalf of everyone. It presents AI as an entity, as superhuman, and as inevitable—while erasing a history of scholarship and critique of AI technologies that demonstrates their limits and inherent risks, the irreducible labor required to sustain them, and the financial incentives of tech companies that produce and profit from them.”
The reviewers objected that the three authors present military adoption of AI as an inevitability, rather than as an active policy choice involving ethical complexities and moral trade-offs.
Now, just months later, the war in Ukraine has brought those complexities and trade-offs front and center.
The Exposé reports that, “On 30 June 2022, NATO announced it is creating a $1 billion innovation fund that will invest in early-stage start-ups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.”
The story by Rhoda Wilson also notes that “The US Department of Defense requested $874 million for artificial intelligence for 2022.” Of course, European countries, China, and no doubt Russia are rushing to keep up. A warbot race among nuclear-armed countries puts the nuclear arms race on steroids. Picture multiple contending NukeBot forces, any of which can mistake a dog for a stealth bomber, making nanosecond decisions based on a single pixel. Armageddon Man has sprouted another head.
This new autonomous nukes race is a potential windfall for Big Tech giants like Peter Thiel’s Palantir, but also for aspiring newcomers to Silicon Valley.
Last July, Melissa Heikkilä penned an article in the MIT Technology Review titled Why Business Is Booming for Military AI Startups.
She points out that, “Ultimately, the new era of military AI raises a slew of difficult ethical questions that we don’t have answers to yet.”
She interviews Kenneth Payne, who leads defense studies research at King’s College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict. He says that a key concept in designing AI weapons systems is that humans must always retain control. But Payne believes that will be impossible as the technology evolves.
“The whole point of an autonomous [system] is to allow it to make a decision faster and more accurately than a human could do and at a scale that a human can’t do,” he says. “You’re effectively hamstringing yourself if you say ‘No, we’re going to lawyer each and every decision.’”
If It’s AI, It’s Hackable – Self-Driving Nukes?
Award-winning reporter Eric Schlosser’s 2014 book Command and Control, and the eponymous Oscar-shortlisted documentary based on it directed by Robert Kenner, showed that the history of the U.S. nuclear arsenal is studded with episodes in which serious human error repeatedly risked thermonuclear destruction, and courageous interventions by individual human intelligence repeatedly averted it. That was then; this is now, when displacing those humans with AI algorithms is under serious (and insane) consideration.
Mikko Hypponen is a Finnish global cybersecurity expert whose thirty-year career has coincided with the growing criminalization of the internet. In his recent book, If It’s Smart, It’s Vulnerable, he gives a flyover of the developmental stages of cybercrime, from viruses to worms to malware to ransomware, to Stuxnet and beyond.
“Question: How many of the Fortune 500 are hacked right now?”
That’s the way Hypponen sets up the basic contention drawn from his lifetime of cybersecurity sleuthing: “If a company network is large enough, it will always have vulnerabilities, and there will always be something odd going on…” making it possible for the system’s security measures to be “…breached by attackers.”
With that as background, the prospect of giving AI warbots the codes to the world’s nuclear weapons arsenals is clearly just one more suicidal societal concession to Armageddon Man.
This post is excerpted from:
James Heddle Co-Directs EON – the Ecological Options Network with Mary Beth Brangan, who generously contributed ideas and research for this article. The EON feature documentary S.O.S. – The San Onofre Syndrome will be released this Spring.