• Anthropic draws line in the sand in standoff with US government

    From Mike Powell@1:2320/105 to All on Sat Feb 28 10:55:25 2026
    "We cannot in good conscience accede to their request": Anthropic CEO Dario Amodei draws a line in the sand in standoff with US government

    By Benedict Collins published yesterday

    Dario Amodei makes Anthropic's boundaries very clear

    Anthropic CEO Dario Amodei does not want Claude used by the Pentagon for mass domestic surveillance and autonomous weapons
    A statement has laid bare Anthropic's reasons for retaining Claude's safety rails
    Pete Hegseth gave Anthropic until Friday to provide the DoD with full access

    Anthropic CEO Dario Amodei has released a statement concerning the company's ongoing disagreement with the US Department of Defense. Amodei declared Anthropic "cannot in good conscience accede" to the DoD's request to provide full access to its AI models, over fears they could be used for "mass
    domestic surveillance" and "fully autonomous weapons".

    US Defense Secretary Pete Hegseth has threatened to label Anthropic as a "supply chain risk" and invoke the Defense Production Act to force the
    company to comply.

    Unprecedented threats against Anthropic

    In his statement, Amodei said Anthropic has historically had a very good relationship with the US government, including being the first AI company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security.

    Amodei also noted the company has complied with US regulations on the use and sale of AI models to China, to the extent that it chose to "forgo several hundred million dollars in revenue" by preventing the use of Claude by the Chinese Communist Party.

    "Anthropic understands that the Department of War, not private companies,
    makes military decisions," Amodei continued. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

    Anthropic's hesitation to provide the DoD with full access to Claude centers on the potential misuse of the model for two nefarious purposes.

    Regulations surrounding AI have not caught up with the capabilities of AI models such as Claude, Amodei says, which would allow the US government to deploy Claude as a tool for mass domestic surveillance. Theoretically, the government could purchase highly detailed records and use AI models to organize them into a highly accurate reflection of US citizens' lives at a scale never seen before.

    As for AI use in weapons systems, Amodei says they "may prove critical for
    our national defense," but he argues that current AI models are "simply not reliable enough to power fully autonomous weapons." If an AI model in charge
    of an autonomous weapon system were to suffer a hallucination, the responsibility would likely fall on the model developer.

    Amodei also addresses the threats made by Hegseth, stating that they "are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."

    The statement concludes that Anthropic's "strong preference is to continue
    to serve the Department and our warfighters, with our two requested safeguards in place."

    "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."


    https://www.techradar.com/pro/security/we-cannot-in-good-conscience-accede-to-their-request-anthropic-ceo-dario-amodei-draws-a-line-in-the-sand-in-standoff-with-us-government

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)
  • From Mike Powell@1:2320/105 to Mike Powell on Sat Feb 28 10:55:50 2026
    Trump just banned Anthropic from government use - here's why its CEO
    refused the Pentagon's 'dystopian' request

    Opinion By Lance Ulanoff last updated 17 hours ago

    A voice of reason

    Anthropic AI just got banned from all use across US Government agencies. President Donald Trump's order is the fallout from Anthropic CEO Dario Amodei denying the Pentagon's request to loosen Anthropic's safety policy.

    Now that the company and its Claude AI are banned, the Department of War and other agencies will spend the next six months disengaging from Anthropic's AI models.

    Lingering questions remain: how this will impact the US's effectiveness in competing with other AI-armed countries, how hard or easy it will be to remove Anthropic, and which major AI company will take its place. We already know that OpenAI is standing with Anthropic, according to CEO Sam Altman.

    Elon Musk's Grok AI is a possible candidate, but then there's a letter he signed nine years ago.

    How we got here

    "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend."

    That's not a quote from Anthropic CEO Dario Amodei refusing to accede to the US Department of War's request that it allow its Claude AI models to be used for mass surveillance and, perhaps more problematically, "fully autonomous weapons." Instead, it comes from a 2017 open letter to the UN, co-signed by dozens of AI and robotics leaders, including Elon Musk, asking the global organization to ban autonomous weapons.

    It's a window into long-brewing concerns over the abuse and misuse of autonomous systems for warfare. It's also likely, despite Musk's closeness to the current Trump administration, that US Secretary of Defense (or War) Pete Hegseth has never read it.

    Anthropic is now at risk of losing a $200M US Department of War contract, despite, as Amodei describes it, already working "proactively to deploy our models to the Department of War and the intelligence community."

    Amodei is by no means anti-defense or against the use of AI by the US government. In his letter explaining Anthropic's decision, Amodei writes, "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

    However, what Hegseth has asked is for Anthropic to countermand its own "Constitution", a set of principles and safety restrictions for the use and behavior of its AI models. The US Department of War basically wants Anthropic to remove the guardrails. Anthropic Constitution Principles, such as being "Broadly Safe" and "Broadly Ethical," are in direct conflict with Hegseth's demands that the AI be used for mass surveillance and for fully autonomous weapons.

    Amodei makes it clear that his systems are not ready for any of this.

    "Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," writes Amodei, adding, "Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day."

    Armed and dangerous

    These are not new concepts. Many in the tech industry have been pondering these issues for almost a decade (if not longer). Musk and the AI and robotics community raised the alarm in 2017 because we were already seeing AI-backed robot systems being used in questionable ways.

    In 2016, a bomb disposal robot was used to kill a mass shooting suspect in Dallas, Texas. Dallas PD attached an explosive device to the robot's arm, guided it to where the suspect was holed up, and then detonated the device, killing the suspect.

    At the time, some saw it as an inflection point, and a concerning one at that. Episodes like that may or may not have triggered that 2017 letter to the UN.

    Keep in mind that this happened before the current generative and agentic AI revolution.

    Amodei knows better than most the massive leaps foundational models are taking every few months and, as he makes clear in his letter, our rules and strategies for managing AI in these circumstances have already fallen behind their capabilities.

    "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he wrote.

    Essentially, with AI, we don't know what we don't know. Hegseth's willingness to recklessly use powerful AI models in both surveillance and warfare indicates he has zero knowledge or interest in the past and even less understanding of the intricacies of these systems.

    A very bad idea

    I've yet to talk to a technologist, a roboticist, or someone within the AI community who thinks letting an AI (or an AI-powered robot) control or carry a weapon is a good idea.

    Hegseth isn't necessarily spelling out that scenario, but his requirement to remove the guardrails Anthropic has smartly put in place indicates to me that he doesn't really care about repercussions and AI casualties. He's focused on results, perhaps at any or all costs, including safety and liberty.

    Amodei's done the right thing here, essentially calling Hegseth's bluff. As the Anthropic CEO made clear, Claude AI is already being used in many Department of War systems. Pulling it out and retrofitting those systems for another, perhaps less powerful and intelligent, set of models might not be easy, and probably won't have the desired outcome of a system ready to carry out Hegseth's bidding.

    Clearer heads must prevail here. As the tech leaders and, yes, even Elon Musk, wrote in 2017, "Once this Pandora's box is opened, it will be hard to close."


    https://www.techradar.com/ai-platforms-assistants/today-frontier-ai-systems-are-simply-not-reliable-enough-to-power-fully-autonomous-weapons-anthropic-ceo-on-why-it-wont-agree-to-pete-hegseths-scary-request

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)