Does superintelligence scare you? It certainly scares the Center for AI Safety. In a new report, Superintelligence Strategy, fear of AI is elevated to a matter of national and global security (Hendrycks, Schmidt and Wang 2025).
In the future, malevolent actors may have new munitions that rank with weapons of mass destruction. A military armed with an expert-level AI hacker could deploy a novel attack vector to cripple critical infrastructure, shut down the energy grid, stall transportation, sever communication, and halt access to medical records. A cyberattack could disable military defenses in advance of an aerial attack. In brief, the very safety of civilization is vulnerable to the malware of our assailants.
Up to now we have enjoyed relative tranquility because only nation-states have been capable of sophisticated attacks. That is no longer the case. Highly capable AI espionage and hacking systems can guide extremist cells from plan to execution (Hendrycks, Schmidt and Wang 2025).
The New Dilemma Confronting National Security
Imagine tomorrow’s national security dilemma. If one nation achieves superintelligence, it will either retain or lose control of that system. If it retains control, it will likely undermine the security of its rival nations. If it loses control, renegade AI could undermine the security of all states (Katzke and Hendrycks 2025). The authors name the resulting situation MAIM: Mutual Assured AI Malfunction. This would be a deterrence regime resembling the Cold War’s Mutually Assured Destruction (MAD). Now, does superintelligence scare you?
What Should We Do?
Center for AI Safety director Dan Hendrycks and colleagues propose a plan that includes deterrence, strategic competition, and nonproliferation (Hendrycks, Schmidt and Wang 2025). If deterrence worked to keep the peace during the Cold War, might it work again?
Deterrence in AI takes the form of Mutual Assured AI Malfunction (MAIM), today’s counterpart to MAD, in which any state that pursues a strategic monopoly on power can expect a retaliatory response from rivals. To preserve this deterrent and constrain intent, states can expand their arsenals of cyberattacks capable of disabling threatening AI projects. This shifts the focus away from winning the race to superintelligence and toward deterrence (Hendrycks, Schmidt and Wang 2025).
What might be the progressive alternative? How about prophylactic international cooperation? How about a planetary deal?
As the Center for AI Safety has shown us, ethical anticipation and ethical application count. First, let us look ahead and make forecasts. Mariarosaria Taddeo and Luciano Floridi employ “foresight methodologies to indicate ethical risks and opportunities and prevent unwanted consequences” (Taddeo and Floridi 2018, 751). We need our futurists today to anticipate tomorrow’s threats.
Second, we can promote a planetary vision of the common good and advocate for a prophylactic set of international guardrails. Pope Francis provides an example in Antiqua et Nova, recently explicated by Mark Graves. Francis wants to ensure that AI “applications are used to promote human progress and the common good” (Francis 2025, §4).
A vision of the common good should guide and energize international preemptive cooperation. According to Ethics in Internet, the 2002 document of the Pontifical Council for Social Communications, “The Internet’s transnational, boundary-bridging character and its role in globalization require international cooperation in setting standards and establishing mechanisms to promote and protect the international common good” (Vatican 2002).
To be sure, the planetary common good ideal is not a simple alternative to Superintelligence Strategy. Though more idealistic, it is complementary (Peters 2025).
Conclusion
Does superintelligence scare you? Well, it scares figures such as Elon Musk. “AI has great power to do good and evil,” Musk wrote in a tweet. “Better the former.”
I want to thank our Jewish, Muslim, and Christian colleagues at AI and Faith who are engaged in future forecasting and positive encouragement. The mission of AI and Faith is to equip and encourage people of faith to bring time-tested, faith-based values and wisdom to the ethical AI conversation.
Does superintelligence scare you? Does the future of AI prompt you to think ahead? To act ahead?
References
Francis, Pope. 2025. Antiqua et Nova. Vatican: Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html.
Hendrycks, Dan, Eric Schmidt, and Alexandr Wang. 2025. Superintelligence Strategy. Center for AI Safety. https://www.nationalsecurity.ai/.
Katzke, Corin, and Dan Hendrycks. 2025. “AI Safety Newsletter #49: Superintelligence Strategy.” Center for AI Safety. https://newsletter.safe.ai/p/ai-safety-newsletter-49-superintelligence.
Peters, Ted. 2025. The Promise and Peril of AI and IA. Adelaide: ATF Press. https://atfpress.com/product/the-promise-and-peril-of-ai-and-ia-new-technology-meets-religion-theology-and-ethics/.
Taddeo, Mariarosaria, and Luciano Floridi. 2018. “How AI Can Be a Force for Good.” Science 361 (6404): 751–752.
Vatican. 2002. Ethics in Internet. Vatican City: Pontifical Council for Social Communications. https://www.vatican.va/roman_curia/pontifical_councils/pccs/documents/rc_pc_pccs_doc_20020228_ethics-internet_en.html.
Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.