I don’t know how many warnings exhausted science fiction authors, screenwriters and movie directors can issue about the danger of artificial intelligence before our corporate and political overlords get the message. Apologies, sci-fi fabulists—America’s profit-maximizing elite are hurtling ahead with this dangerous tech regardless of the existential risks you’ve pointed out over and over.
You’ve shown us a scheming HAL forcing an unscheduled space docking in “2001: A Space Odyssey”—“Sorry, not sorry Dave!” We’ve been aghast at the globe-pulverizing tag-team of supercomputers—America’s Colossus and the Soviet Union’s Guardian—in “Colossus: The Forbin Project.”
We were relieved when young Matthew Broderick saved the earth from the big WOPR with a nice game of tic-tac-toe in “WarGames,” and we were thoroughly entertained, if a little horrified, by perhaps the most popular portrayal of the coming A.I. apocalypse—when the dread Skynet achieved earth-ending self-awareness at 2:14 a.m. EDT on August 29, 1997—in James Cameron’s “Terminator” series.
How to avoid that unhappy A.I. endtime?
Bring on the bureaucrats?
Hear me, fellow carbon-based life forms: There’s no fate but what we make for ourselves! But we can’t just chill until Sarah Connor shows up to save us. Maybe it’s time we urge the White House to reassess its reluctance to craft a commonsense regulatory framework around artificial intelligence?
The much-maligned bureaucrats of the European Union in Brussels are already on the job creating controls meant to address some of the hazards of the advance of artificial intelligence, but Trump administration officials insist that regulatory overwatch would only restrain this emerging sector.
An executive order from Mr. Trump, in fact, lifted modest guardrails that had been placed on A.I. during the Biden administration. Another Trump executive order prohibits states from establishing their own safety rules to govern A.I. development and use.
It’s telling that even among A.I.’s biggest proponents are innovators who harbor deep misgivings about where the technology may be leading. One has been locked in a standoff this week with former Fox News host and now Secretary of Defense, uh, War, Pete Hegseth.
While Google, OpenAI and Elon Musk’s xAI have already bowed to Mr. Hegseth’s demands, Anthropic C.E.O. Dario Amodei has proved a holdout. Mr. Hegseth has demanded that Anthropic open its artificial intelligence technology to unrestricted military use. Mr. Amodei has so far refused, raising ethical concerns about the dangers of fully autonomous armed drones and of A.I.-assisted mass surveillance that he worries could be used to suppress dissent.
Mr. Hegseth does not share Mr. Amodei’s A.I. anxieties. In fact, during a visit to Mr. Musk’s SpaceX facility in Texas last month, he offered a no-brakes-necessary vision for A.I.’s rapidly expanding role at the Department of Defense. At the Pentagon, he said, “We’re executing an AI acceleration strategy that will extend our lead in military AI…eliminate bureaucratic barriers, focus on investments and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future.”
The United States “will win this race,” he assured, “by becoming an AI first warfighting force across all domains, from the back offices of the Pentagon to the tactical edge on the front lines.”
Mr. Hegseth added, “We will not employ AI models that won’t allow you to fight wars.”
“We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
Frustrated by Mr. Amodei’s hesitancy, Mr. Hegseth is threatening to pull the Pentagon plug on Anthropic’s A.I. chatbot, Claude, and its $200 million D.O.D. contract. Or he could just force the corporate leadership at Anthropic to comply.
The Associated Press reports that Defense Department officials have warned Anthropic that the department could designate its resistance a supply chain risk or use the Cold War-era Defense Production Act to give the U.S. military authority to use Anthropic products even if the A.I. developer doesn’t approve of how they are used.
The contretemps, first reported by Axios, underscores a continuing debate over A.I.’s role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Mr. Hegseth has made it a personal mission to root out what he calls “woke culture” in the armed forces—a stance that Mr. Amodei apparently does not find reassuring in terms of the potential surveillance tasks the Trump administration may put poor Claude to work on.
“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Mr. Amodei wrote in an essay last month.
After the latest round of negotiations, Mr. Amodei confirmed on Feb. 26 that Anthropic “cannot in good conscience accede” to the Pentagon’s demands.
A spokesperson for Anthropic said in a statement that the company is not walking away from negotiations but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
In comments on social media, Sean Parnell, the Pentagon’s top spokesman, said that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
The church’s A.I. anxieties
Mr. Amodei can likely count on the strong support of the Holy See in resistance to A.I.-governed weapons, though it should be noted that the church’s anxiety and skepticism extend to the entire contemporary A.I. project. Pope Leo XIV chose his papal name to honor a predecessor who in 1891 took a historic stand for worker rights. Our era’s Pope Leo has focused critically on the likely impact of artificial intelligence on the world’s human workforce, asking how to measure authentic human progress in an era of rapid social and technological dislocation.
The church had been fretfully assessing A.I. development for more than a decade before Leo’s election—especially its potential use in weapons systems. The church argues that removing the human factor in lethal decision-making is itself a violation of the international humanitarian law that governs conflict.
In May 2014, the Vatican’s Archbishop Silvano M. Tomasi first warned policymakers in Geneva against the use of lethal autonomous weapons: “Decisions over life and death inherently call for human qualities, such as compassion and insight, to be present. While imperfect human beings may not perfectly apply such qualities in the heat of war, these qualities are neither replaceable nor programmable.”
In an address to G7 leaders in Puglia in June 2024, Pope Francis supported a ban on autonomous weapons. “We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines,” he said.
In a papal reflection, “Antiqua et Nova,” Pope Francis first noted how “the ability to conduct military operations through remote control systems” has diminished an appreciation of the “devastation caused by those weapon systems and the burden of responsibility for their use,” leading to “an even more cold and detached approach to the immense tragedy of war.”
Systems capable of identifying and striking targets without direct human intervention, Pope Francis wrote, are a “cause for grave ethical concern” because they lack the “unique human capacity for moral judgment and ethical decision-making.”
“No machine should ever choose to take the life of a human being,” Francis said.
The church continues to make that case. This week, Vatican News reported that Msgr. Daniel Pacho, an official of the Secretariat of State, renewed the church’s call for broad disarmament at a U.N. conference in Geneva on Feb. 25.
Lamenting diplomacy based on force instead of dialogue and recalling Pope Leo XIV’s warning in January that war is “back in vogue, and a zeal for war is spreading,” Monsignor Pacho said humanity is at “a critical juncture.” He reiterated the Holy See’s opposition to nuclear proliferation and worried that A.I. is dehumanizing the way that wars are waged.
“When autonomous weapons ‘become’ the combatants,” he said, “the unique human capacity for moral judgment and ethical decision-making disappears, as does the burden of responsibility, dangerously lowering the threshold for conflict.”
Humans must therefore remain in control in all use of force, Monsignor Pacho said.
If Anthropic loses its lucrative Defense Department deal on principle, it’s likely that other A.I. providers will quickly step in to snap up that sweet Pentagon lucre, ethics and outcomes be damned. Elon Musk’s xAI is standing back and standing by this week, ready to Grok on Pentagon networks once poor Claude is forced out.
The unironic name of xAI’s supercomputer in Memphis? It’s Colossus.
Makes you wonder if any of these guys ever go to the movies.
With reporting from The Associated Press
More from America
- Trump, Elon Musk and the dangers of ‘god mode’ tech powers in government
- Ukraine and the troubling future of A.I. warfare
- All is fair in A.I. warfare. But what do Christian ethics have to say?
- What does the Vatican know about A.I.? A lot, actually.
A deeper dive
- The Holy See’s Position on Lethal Autonomous Weapons Systems: Holy See at Geneva, 13 November 2014
- The Adolescence of Technology
- The Holy See’s Position on Lethal Autonomous Weapons
- UN: Start Talks on Treaty to Ban ‘Killer Robots’
- Stop Killer Robots
The Weekly Dispatch takes a deep dive into breaking events and issues of significance around our world and our nation today, providing the background readers need to make better sense of the headlines speeding past us each week. Last week: Why did the Vatican decline to join Trump’s ‘Board of Peace’ for Gaza?
For more news and analysis from around the world, visit Dispatches. This week: Mexico burns after drug cartel leader is captured and killed; Q&A: Why did the Vatican pass on Trump’s Board of Peace? and The ICE surge in Minnesota is winding down. Is Arizona next?
