Anthropic sues Trump administration
By Jack Queen and Deepa Seetharaman NEW YORK, March 9 (Reuters) - Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. government.
The maker of the Claude chatbot says its research could help identify economic disruptions by measuring how AI is currently reshaping work.
Anthropic's partnerships with Amazon and Palantir helped it make inroads into the DOD, and its blacklisting has raised concern among many industry experts.
Anthropic sued the U.S. government on Monday, escalating a dispute the AI company frames as retaliation for refusing to remove safety limits on its Claude model.
Anthropic said Opus assigned itself a "15 to 20 percent" chance of being fully conscious, and Douthat asked Amodei whether he would believe a model that gave itself a 72 percent chance of being conscious. "This is one of those really hard-to-answer questions," Amodei replied.
Anthropic's moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies, but it is also exposing a growing awareness that chatbots may simply not be capable enough for acts of war.
Copilot Cowork operates in the cloud, inside Microsoft 365's infrastructure, and draws on something Claude Cowork simply cannot access: the full graph of a user's enterprise work data.
Of course, those are only examples. Currently, according to Anthropic's report, 30% of jobs face almost zero risk of elimination by AI. The common denominators are clear: physical work, in-person service, and real-world manipulation. Extrapolating from those, here's a broader list:
Anthropic launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of code produced with AI.