American AI company Anthropic has until 5:01 pm ET to give in to the Pentagon’s demands or be labeled a “supply chain risk,” a designation usually reserved for companies thought to be extensions of foreign adversaries.
The Pentagon, which uses Anthropic’s Claude AI system on its classified networks, wants to be able to use it for “all lawful purposes.” But Anthropic has drawn two red lines for the Pentagon: that Claude will not be used in autonomous weapons, and that it will not be used in the mass surveillance of US citizens.
On Thursday, Anthropic announced it has no intention of acquiescing.
Continue reading at CNN Business.