
AI vs The Bomb

AI&F Advisor Dr. Don Howard is a Professor of Philosophy, a Fellow of the University of Notre Dame’s Reilly Center for Science, Technology, and Values, and an Affiliate of Notre Dame’s interdisciplinary Technology Ethics Center (TEC). Don is a long-time editor of the Einstein Papers at Princeton and has been writing and teaching about the ethics of science and technology for over three decades. Don’s current research interests are focused on ethical and legal issues in cyberconflict, cybersecurity, and autonomous systems. 

Are you as surprised as I am that the recent release into the wild of various forms of generative AI, such as ChatGPT and DALL-E 2, has become the occasion for a number of prominent tech CEOs and other “thought leaders” to issue apocalyptic warnings about the possibly existential threat that unregulated AI might pose? Perhaps most noteworthy is the statement released recently by the Center for AI Safety, which reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Scores of industry leaders, senior researchers, and policy specialists have added their signatures to the statement, including Sam Altman, CEO of OpenAI, which built ChatGPT and DALL-E 2; Geoffrey Hinton, the University of Toronto computer scientist often described as “the godfather of AI”; and Bill Gates.

In the same spirit, more than 30,000 people have endorsed the Future of Life Institute’s call for a six-month moratorium on the training of AI systems more powerful than GPT-4. Proclaiming the coming of Armageddon is surely an attention-getter, and it might lead to good if it promotes serious, reasoned debate and the introduction of sensible controls on the future development of AI. But is that the likely outcome, and is the comparison with nuclear weapons either valid or helpful?

I tell all my students that, when thinking about the risks of new technologies, the worst thing one can do is give in to the temptation to indulge in fantasies about ultimate doom. The Skynet scenario in the “Terminator” movies is not our future. “Apocalyptomania” is my name for this cognitive weakness.

Put aside the psychological allure of being a prophet of doom. There is also a serious logical issue here. I call it the “apocalyptic fallacy.” Simply put, if one assigns infinite negative utility to a possible course of events, then, however low its probability, that term swamps all of the others in one’s cost-benefit analysis and thus renders rational weighing of options impossible. This is why sounding the trumpets of doom cannot be conducive to thoughtful reflection on the risks and benefits of AI or on how, in practical terms, to regulate AI.
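To see the arithmetic behind the fallacy, consider a minimal sketch in Python. The probabilities and utilities below are invented purely for illustration; the point is only that once any outcome is assigned infinite negative utility, the expected-value comparison collapses, no matter how small its probability.

```python
# Toy illustration of the "apocalyptic fallacy": an outcome assigned infinite
# negative utility swamps an expected-utility comparison, however improbable it is.
# All numbers here are hypothetical, chosen only to make the point visible.

def expected_utility(outcomes):
    """Sum of probability * utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Option A: proceed with a technology that has large benefits and a vanishingly
# small chance of "doom," where doom is given infinite negative utility.
option_a = [
    (0.999999, 100.0),          # ordinary benefits
    (0.000001, float("-inf")),  # doom, valued at negative infinity
]

# Option B: do nothing, with modest but finite outcomes.
option_b = [
    (1.0, 10.0),
]

print(expected_utility(option_a))  # -inf: the doom term swamps everything else
print(expected_utility(option_b))  # 10.0

# Any option carrying even a negligible probability of the "infinite" outcome is
# automatically ranked worst, so rational weighing of options becomes impossible.
```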

What, then, about the specific comparison of AI to nuclear weapons? Is it valid? No. Is it helpful? No. We know exactly why nuclear weapons pose an existential threat. And, thanks to decades of strategic nuclear weapons planning and war gaming, we have a pretty clear understanding of what kinds of actions by nuclear-capable nations could lead to a global nuclear catastrophe.

Nuclear weapons kill people, along with other forms of life, and destroy the structures and infrastructure that sustain life, in three main ways. The first is direct blast effects, including massive pressure waves, searing heat and fire, and immediate or near-term death from intense radiation exposure. The second is lingering death from lower doses of direct radiation and from exposure to radioactive fallout. The third, and by far the worst, is what is termed “nuclear winter.” An all-out nuclear exchange between just the US and Russia would lift immense amounts of particulate matter into the upper atmosphere, where it would remain for months or years, blocking so much sunlight from reaching the earth’s surface that global temperatures would plummet and nearly all higher plant and animal life would perish.

These prognoses are all based on rock-solid science. By contrast, we have no clear understanding of what might turn out to be the more baleful impacts of unregulated AI. We simply have no scientific basis for prophesying that AI could lead to the extinction of human life. None.

Climate science provides another contrast case. There are uncertainties in climate modeling. But the virtually unanimous view among climate scientists, based on a well-established, scientific understanding of how the global climate works, is that unchecked global warming due to the continued release of CO2 and other greenhouse gases into the atmosphere is guaranteed to render planet Earth uninhabitable. There is no comparable scientific foundation for forecasting an AI apocalypse.

But if there is no scientific foundation for the claim that AI is leading us to the end times, why are so many seemingly smart and knowledgeable people sounding the alarm? Here’s a clue. In a 2019 interview, Sam Altman revealed how he saw his company, and, by implication, himself:

As Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”

If OpenAI is like the Manhattan Project, then Sam Altman is like J. Robert Oppenheimer. The comparison is, of course, preposterous, but it might hint at the psychology of the tech bros.

The comparison is flawed for many reasons. Consider two.

First, the Manhattan Project had a clear and technically well-specified goal from the very start: to build an atomic bomb. AI developers do not have a clear, well-specified goal. They talk about creating Artificial General Intelligence (AGI) as their goal, but none of them has even the faintest idea what that means. Why? In part because we do not even know what human intelligence consists in.

Second, Oppenheimer and all the other scientists and engineers who built the bomb were doing what they did to prevent an apocalypse, namely, the catastrophe that would ensue were Hitler to get the bomb before the Allies did. They were doing what they did for a high moral purpose. Most of them did not clearly foresee that, ironically, their success would lead to a different existential threat. Still, they had a clear mission and their end was a noble one. Sam Altman is not trying to save the world from Hitler. There is no urgent moral purpose for building AGI.

But isn’t it cool to be a giant like Oppenheimer? Isn’t it cool to be the creator of a technology that can bring an end to all human life? And if one is like Oppenheimer, then, like Oppenheimer, one can weep crocodile tears, insincerely ruing the moral calamity that might be wrought by the creations of one’s mighty powers, even as one begs Congress for new regulations to stop one from doing what one should not be allowed to do and would not do if one possessed a clear sense of moral purpose and the strength of will to act as morality demands.

Do we need regulations on AI? Yes, we do. And the good news is that now we have them. On Wednesday, June 14, the European Parliament overwhelmingly approved the new EU Artificial Intelligence Act, a well-crafted set of tough rules governing many aspects of AI. As happened with the EU’s 2018 General Data Protection Regulation, the impact of the new AI regulations will be global, because all the major private-sector actors do business globally. But it was not hyperbolic warnings of an AI apocalypse that moved the EU to act. It was the careful identification of specific risks as we now perceive them. That risk assessment made possible the crafting of thoughtful and focused regulations. And, happily, the EU has a well-functioning system of governance. Even if we currently do not, we can at the very least go to school on their example.


Don Howard, Ph.D.

