Hegseth declares Anthropic a supply chain risk, restricting military contractors from doing business with AI giant
Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a “supply chain risk to national security” on Friday, following days of increasingly heated public conflict over the company’s effort to place guardrails on the Pentagon’s use of its technology. 

Hegseth declared on X that effective immediately, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final,” Hegseth wrote.

President Trump announced earlier Friday that all federal agencies must “immediately” stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services.

Anthropic vowed in a statement Friday to “challenge any supply chain risk designation in court,” calling the move “legally unsound” and warning it would set a “dangerous precedent for any American company that negotiates with the government.” The company wrote that Hegseth doesn’t have the legal authority to ban military contractors from doing business with Anthropic, since a risk designation would only apply to contractors’ work with the Pentagon.

“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company,” Anthropic said.

The company also said it has “not yet received direct communication from the Department of War or the White House on the status of our negotiations.” 

In a social media post Friday night, OpenAI CEO Sam Altman said his company had “reached an agreement with the Department of War to deploy our models in their classified network.”

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman wrote, adding that OpenAI is asking the Defense Department “to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” 

The decision to cut off Anthropic came after a dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose.

The company — which is the only AI firm whose model is deployed on the Pentagon’s classified networks — has sought guardrails that prevent its technology from being used to conduct mass surveillance of Americans or carry out military operations without human approval. But the Pentagon insisted any deal should allow use of Anthropic’s Claude model for “all lawful purposes.”

The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.

The military’s position is that it’s already illegal for the Pentagon to conduct mass surveillance of Americans, and that internal policies restrict the military from using fully autonomous weapons. As talks between the two sides broke down this week, Pentagon officials publicly accused the company of seeking to impose its own views on the military.

Hegseth called Anthropic “sanctimonious” and arrogant on Friday, and accused it of trying to “strong-arm the United States military into submission.” 

“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” Hegseth alleged.

But Anthropic CEO Dario Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons, and because a powerful AI model could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology “in an ad hoc manner.”

“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in a statement Thursday. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations.

The company held firm to its position late Friday, writing: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

“We are deeply saddened by these developments,” Anthropic said. “As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.”  

On Thursday, the eve of the military’s deadline to reach a deal, the Pentagon’s chief technology officer, Emil Michael, told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons.

“At some level, you have to trust your military to do the right thing,” said Michael, who also noted, “We’ll never say that we’re not going to be able to defend ourselves in writing to a company.”

Anthropic called that offer inadequate. A company spokesperson said the new language was “paired with legalese that would allow those safeguards to be disregarded at will.” 

