Washington - Defense Secretary Pete Hegseth gave the CEO of Anthropic a Friday deadline to open the company's artificial intelligence technology to unrestricted military use or risk losing its government contract, according to a person familiar with their Tuesday meeting.
Anthropic, maker of the Claude chatbot, is the lone holdout among its competitors in supplying its technology to a new internal US military network. Its CEO, Dario Amodei, has repeatedly raised ethical concerns about governments' unchecked use of AI, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance that could track dissent.
Defense officials warned that they could designate Anthropic a supply chain risk or invoke the Defense Production Act, essentially giving the military broader authority to use the company's products even if Anthropic does not approve of how they are used, according to the person familiar with the meeting and a senior Pentagon official, who were not authorized to comment publicly and spoke on condition of anonymity.
The news, previously reported by Axios, underscores the debate over AI’s role in national security and concerns about how the technology could be used in high-risk situations involving the use of lethal force, sensitive information or government surveillance. It also comes as Hegseth has pledged to eradicate what he calls a “woke culture” in the military.
“A powerful artificial intelligence that analyzes billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty in the making, and shut them down before they grow,” Amodei wrote in an essay last month.
According to this person, the tone of the meeting was cordial, but Amodei did not budge on two issues that Anthropic has established as lines it will not cross: fully autonomous military operations and domestic surveillance of American citizens.
The Pentagon opposes Anthropic’s ethical restrictions because military operations need tools that don’t come with built-in limitations, the senior Pentagon official said. The official argued that the Pentagon has only issued legal orders and stressed that legal use of Anthropic tools would be the responsibility of the military.
Anthropic will no longer be the only authorized AI company
The Pentagon announced last summer that it was awarding defense contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI and Elon Musk's xAI.
Anthropic was the first AI company to gain approval for classified military networks, where it works with partners like Palantir. Musk’s xAI company, which operates the Grok chatbot, claims that Grok is also ready to be used in classified environments, according to the senior Pentagon official.
The official noted that the other AI companies were “close” to that milestone. SpaceX, Musk’s spaceflight company that recently merged with xAI, did not immediately return a request for comment on Tuesday.
Hegseth said in a January speech at SpaceX in south Texas that he dismissed any AI model "that doesn't allow us to fight wars."
Hegseth said his vision for military AI systems involves them operating “without ideological constraints that limit lawful military applications,” before adding that “the Pentagon’s AI will not be woke.”
The Secretary of Defense announced that Grok would join the Pentagon’s artificial intelligence network, called GenAI.mil. The announcement came days after Grok, which is integrated into X, the social network owned by Musk, came under global scrutiny for generating highly sexualized deepfake images of people without their consent.
OpenAI announced in early February that it would also join GenAI.mil, allowing members of the service to use a customized version of ChatGPT for unclassified tasks.
Concerned about safety
Anthropic said in a statement after Tuesday’s meeting that it “continued good faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Since its founders left OpenAI to start the company in 2021, Anthropic has presented itself as the most responsible and safety-conscious of the major AI companies.
The uncertainty with the Pentagon is testing those intentions, according to Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology.
“Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all legal applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”
In the AI craze that followed the launch of ChatGPT, Anthropic closely aligned itself with President Joe Biden's Democratic administration, volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.
Amodei, the CEO, has warned of the potentially catastrophic dangers of AI, while rejecting the label that he is an AI “doomer.” He argued in the January essay that “we are considerably closer to the real danger in 2026 than we were in 2023” but that those risks must be managed in a “realistic and pragmatic way.”
Anthropic has fallen out with Trump
It wouldn't be the first time Anthropic has advocated for stronger AI safeguards and taken on President Donald Trump's administration. Anthropic has publicly clashed with chipmaker Nvidia over Trump's proposals to relax export controls and allow some AI computer chips to be sold in China, even as the company remains a close Nvidia collaborator.
The Republican Trump administration and Anthropic have also been on opposite sides of lobbying fights over AI regulation in US states.
Trump’s top AI adviser, David Sacks, accused Anthropic in October of “executing a sophisticated regulatory capture strategy based on fear-mongering.”
Anthropic hired several former Biden officials shortly after Trump’s return to the White House, but has also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official during Trump’s first term, to its board of directors.
The Pentagon's "dizzying" adoption of AI demonstrates the need for greater congressional oversight or regulation of the technology, especially if it is used to surveil Americans, said Amos Toh, senior adviser to the Liberty and National Security Program at New York University's Brennan Center for Justice.
“The law is not keeping up with how quickly technology is evolving,” Toh wrote in a post on Bluesky. “But that doesn’t mean the DoD has a blank check.”