Bless and Praise Anthropic
You might recall that the Pentagon used an AI model from Anthropic to help capture Venezuela's Nicolás Maduro. Anthropic reminded the Pentagon of the company's guardrails: AI should not be used for mass surveillance of the public, and it must not be given the power to kill without human control.
The Trump administration did not like that, so it terminated Anthropic's Pentagon work and banned other federal agencies from using Anthropic's models. Anthropic sued, and a judge blocked the government from blacklisting the company.
From its start, Anthropic's identity has been built on embedding safety principles directly into AI behavior. The company should be lauded and honored for placing such moral guardrails in its AI models. Other companies, including OpenAI, Google, and xAI, accepted government contracts lacking the guardrails that Anthropic requires.