How to avoid nuclear war in the era of artificial intelligence and misinformation
- Eugene Lee
- Jul 18
- 7 min read
Nuclear deterrence is no longer a two-player game, and new technologies further threaten the status quo. The result is a risky new nuclear era.
The Doomsday Clock - a symbolic arbiter of how close humanity is to self-destruction - now sits at 89 seconds to midnight, closer than ever to signaling our species' point of no return.
Many threats, including climate change and biological weapons, prompted the global-security experts at the Bulletin of the Atomic Scientists in Chicago, Illinois, to move the clock's hands in January. But chief among these dangers is the growing - and often overlooked - risk of nuclear war.
"The message we continue to hear is that the nuclear risk is over, that this is an old Cold War risk," says Daniel Holtz, a physicist at the University of Chicago who advised the Doomsday Clock solution. "But when you talk to experts, you get the opposite message - that in fact the nuclear risk is very high, and it is growing."
From Russia's ongoing war in Ukraine and the simmering tensions between India and Pakistan, which flared into open conflict in May, to the US and Israeli attacks on Iranian nuclear facilities in June, the world has no shortage of conflicts involving one or more nuclear-armed states.
But it is not just the number of clashes that could escalate that is causing alarm. The previous great build-up of nuclear weapons, during the Cold War between the United States and the Soviet Union, essentially involved two roughly matched superpowers. Now China is emerging as a third nuclear-armed superpower, North Korea is building up its nuclear arsenal, and Iran has enriched uranium beyond what is needed for civilian use. India and Pakistan are also thought to be expanding their nuclear arsenals. Add to this the potential for online misinformation and disinformation to sway leaders or voters in nuclear-armed countries, and for artificial intelligence (AI) to inject uncertainty into military decision-making, and it is clear that the old rule book has been torn up.
"Eight years later in the nuclear era, we are at the point of reckoning," says Alexandra Bell, president and CEO of the Bulletin of Nuclear Scientists.
Against this frightening backdrop, scientists are working to prevent the world's destruction. At a three-day conference in Chicago that began on July 14 - almost exactly 80 years after researchers and the US military tested the first atomic weapon - dozens of scientists, including Nobel laureates from a wide range of disciplines, met to discuss how to prevent nuclear war. They issued a fresh warning about its risks, along with recommendations for what society can do to reduce them, including a call for all countries to communicate transparently with one another about the scientific and military implications of AI.
Dawn of a new nuclear era
The emerging multipolar world undermines the principles of nuclear security that helped to avert nuclear war in the past. Nuclear deterrence rests on the assumption that no nation wants to start a war that would inevitably have devastating consequences for everyone. In practice, that meant distributing nuclear arsenals so that they could not be taken out in a single strike, removing any incentive for a first strike: an attacker would know that the enemy could strike back, with "mutually assured destruction" as the consequence. It also meant clarity among nuclear-armed states about who had the capacity to strike and, therefore, what the likely consequences of any attack would be. A fragile stability prevailed, thanks to communication between hostile countries and diplomatic signals designed to avoid the kinds of misunderstanding that could lead to an accidental pressing of the nuclear button.
A multipolar world is more complicated, making those communications harder to manage. More nuclear powers also raise the possibility of smaller nuclear wars, which would still be utterly devastating but would not necessarily lead to mutually assured destruction, weakening the deterrent against a first strike.
Moreover, many of the back channels that once helped to defuse nuclear tensions, such as the informal discussions between US and Soviet scientists during the Cold War, no longer operate to the same extent. Official diplomatic channels have also frayed, stalled or been severed by modern conflicts such as the Israel-Iran war, says Karen Hallberg, a physicist at the Balseiro Institute in San Carlos de Bariloche, Argentina, and secretary-general of Pugwash, a group founded by scientists that works to eliminate nuclear weapons and other weapons of mass destruction. "The most alarming fact is the current trend of competition instead of cooperation in science and international relations," she says.
Some call the multipolar world the third nuclear era. The first was the Cold War, from the aftermath of the Second World War to the collapse of the Soviet Union in 1991; the second spanned the 1990s and 2000s, when the United States and Russia reduced their nuclear arsenals (see "The Changing Landscape of Nuclear Weapons").
Cloud of misinformation
The power of the Internet and social media to amplify misinformation - and its close relative, disinformation, the deliberate spreading of falsehoods - only adds to the fragility of the new situation. Both can cloud the delicate discussions around nuclear deterrence and raise the risks of military escalation.
Take the India-Pakistan conflict in May, which began when India struck Pakistan following a terrorist attack. Pakistan retaliated, and the two countries fired missiles and other weapons at each other, with casualties on both sides. News channels and social media were flooded with false claims of military triumph, including AI-generated images purporting to show destroyed enemy targets. Global-security specialists worry that misinformation about the scale of destruction could provoke an escalation into nuclear war.
That did not happen: after four days of conflict, India and Pakistan agreed to a ceasefire under pressure from the United States. But it is not hard to imagine how misinformation could push the leader of a nuclear-armed nation towards a decision to use nuclear weapons, says Matt Korda, a nuclear analyst at the Federation of American Scientists in Washington, D.C., which works to minimize the risks of global threats. A rogue actor looking to cause trouble could, for example, start a false rumor on social media aimed at convincing US news consumers that Iran has built a nuclear weapon. Such fears could then make their way into media outlets that the president of the United States follows closely and ultimately influence his actions. "To influence this leader, you have to influence the domestic base of the United States, which I think we've learned is incredibly easy to do," says Korda.
An artificial-intelligence fog of war
Another emerging issue is the part that AI could play in decisions about whether and how to launch nuclear weapons. The US Department of Defense already uses AI tools for planning and battlefield operations in conventional warfare. Its top official in charge of nuclear forces, Air Force General Anthony Cotton, said last year that AI could help to speed up nuclear command-and-control decisions.
AI companies have begun partnerships with the US military. In June, Anthropic of San Francisco, California, announced a set of classified Claude Gov models, based on its public Claude AI assistant, for US national-security use. In January, OpenAI, also in San Francisco, announced a partnership with the US national laboratories that includes using the company's models for work related to nuclear weapons. China and Russia are also reportedly integrating AI tools into both conventional and nuclear warfare.
Countries have long used AI-based algorithms to optimize and speed up specific tasks, such as identifying and tracking possible incoming nuclear missiles. What is new is the use of AI reasoning models, says Alice Saltini, a researcher on AI and nuclear security at the James Martin Center for Nonproliferation Studies in Monterey, California. These models, similar to OpenAI's o3 or those underlying Claude, can generalize across tasks and speed up the processing of huge amounts of data, she says, and are probably already in use in early-warning and intelligence systems in many countries.
AI reasoning models could also open up new ways of thinking about warfare and military options. "You can imagine AI being used as a decision aid - not to make decisions, but to propose different courses of action," says Herbert Lin, a researcher on cyber policy and security at Stanford University in California. Such models could evaluate input data and suggest outcomes to inform a decision, for example estimating how many casualties a particular attack might cause or which approaches would violate the international norms of war.
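To make that idea concrete, here is a deliberately toy sketch, in Python, of the kind of decision aid Lin describes: it never chooses an action itself, it only filters and ranks hypothetical options against explicit criteria. Every name, option and number below is an invented placeholder for illustration, not the output of any real military system.

```python
# Toy illustration of an AI "decision aid": it proposes and ranks
# courses of action, but a human makes the actual decision.
# All options, estimates and criteria here are invented placeholders.
from dataclasses import dataclass


@dataclass
class CourseOfAction:
    name: str
    estimated_casualties: int  # the model's (fallible) casualty estimate
    violates_norms: bool       # flags breaches of international norms of war


def rank_options(options: list[CourseOfAction]) -> list[CourseOfAction]:
    """Filter out norm-violating options, then rank by estimated casualties.

    Note what this function does NOT do: it never selects or executes
    an option. It only orders the space of choices for a human decider.
    """
    lawful = [o for o in options if not o.violates_norms]
    return sorted(lawful, key=lambda o: o.estimated_casualties)


options = [
    CourseOfAction("naval blockade", estimated_casualties=100, violates_norms=False),
    CourseOfAction("airstrike on military base", estimated_casualties=5_000, violates_norms=False),
    CourseOfAction("strike near civilian area", estimated_casualties=20_000, violates_norms=True),
]

for option in rank_options(options):  # the human still makes the call
    print(option.name, option.estimated_casualties)
```

The design point is the "aid, not decider" distinction Lin draws: the tool narrows and orders the option space, while responsibility for the choice stays with a person.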
It seems unlikely, says Lin, that any country would hand decision-making over entirely to an AI tool. "It's a very bad idea to give ChatGPT the launch codes," he says. And Saltini notes that many AI models are prone to hallucinations (making things up) and are biased towards immediate action - factors that do not lend themselves to life-and-death military decisions.
Some nuclear-armed countries have indicated that they will keep humans in the loop for any decision on whether to deploy nuclear weapons. Others have made no public statements about the use of AI in nuclear decision-making.
But AI technologies could add to the "fog of war" by obscuring an adversary's motives or capabilities, undermining the well-understood principles of nuclear deterrence. They could also raise the risk of nuclear escalation in subtler ways. AI models that improve a country's ability to detect enemy missiles or submarines could increase the risk of that country striking first to eliminate the apparent threat. And the speed of AI processing could lull leaders into a false sense of confidence that they are receiving accurate information when errors might in fact have slipped through, says Lin. That could include overconfidence in interpreting radar data as a real incoming attack rather than a false alarm, for example.
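Lin's false-alarm worry is, at bottom, a base-rate problem, and a small worked example makes it vivid. The sketch below applies Bayes' rule with invented numbers that describe no real system: even a detector that catches 99% of real attacks and raises a false alarm only once in 10,000 quiet periods still produces alarms that are overwhelmingly false, because real attacks are so rare.

```python
# Illustrative only: why an alarm from a "highly accurate" detector can
# still be wrong most of the time when real attacks are extremely rare.
# All numbers below are invented for the sake of the example.

def posterior_attack_probability(prior: float, sensitivity: float,
                                 false_positive_rate: float) -> float:
    """Bayes' rule: P(real attack | alarm raised)."""
    p_alarm = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_alarm


prior = 1e-6                 # assumed chance of a real attack in any period
sensitivity = 0.99           # detector flags 99% of real attacks
false_positive_rate = 1e-4   # one false alarm per 10,000 quiet periods

p = posterior_attack_probability(prior, sensitivity, false_positive_rate)
print(f"P(real attack | alarm) = {p:.4f}")  # ~0.0098, i.e. under 1%
```

An AI system that delivers such an alarm quickly and fluently could make it feel far more certain than the underlying statistics warrant.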
Stopping nuclear war
So what can be done to reduce the risk of AI unintentionally helping to provoke a nuclear war? Bell and Saltini say that policymakers need more information about the capabilities and limitations of AI technologies. The main global venue for discussing AI and military security is the annual Responsible AI in the Military Domain (REAIM) summit, which began in 2023; the next is scheduled for September. Last year's summit issued a non-binding statement that humans should control every aspect of nuclear-weapons deployment. But the rapid pace of AI development "is outpacing our ability to understand, let alone control, the consequences of the technology," says Bell.
The risks of nuclear war - both new and old - must be addressed, says Holz. He hopes that governments will now take note of the recommendations of delegates to the Chicago conference, which include a call for countries to reaffirm their commitments to a moratorium on nuclear-explosive testing and to support the development of an arms-reduction treaty to succeed New START, the treaty between the United States and Russia that expires next year.
David Gross, a physicist at the University of California, Santa Barbara, was inspired to help organize the Chicago conference after nuclear weapons went almost undiscussed in the run-up to the many elections held in 2024. "People don't really realize the danger" of nuclear war, Gross says. "This is a very difficult situation."