
Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.survival      Discussing survivalism for end-times      131,166 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 131,161 of 131,166   
   Ho Li Phuc to All   
   Anthropic refuses to bend to Pentagon on AI safeguards   
   27 Feb 26 09:06:47   
   
   XPost: alt.politics.trump   
   From: HLP@aol.com   
      
   One can begin to see why certain actors are claiming that AI could   
   result in the death of all mankind.   
      
   https://kdvr.com/news/money/ap-anthropic-refuses-to-bend-to-pentagon-on-ai-safeguards-as-dispute-nears-deadline/   
      
   by: MATT O'BRIEN and KONSTANTIN TOROPIN, Associated Press   
      
   Posted: Feb 27, 2026 / 07:33 AM MST   
      
   WASHINGTON (AP) — A public showdown between the Trump administration and   
   Anthropic is hitting an impasse as military officials demand the   
   artificial intelligence company bend its ethical policies by Friday or   
   risk damaging its business.   
      
   Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the   
   deadline, declaring his company “cannot in good conscience accede” to   
   the Pentagon’s final demand to allow unrestricted use of its technology.   
      
   Anthropic, maker of the chatbot Claude, can afford to lose a defense   
   contract. But the ultimatum this week from Defense Secretary Pete   
   Hegseth posed broader risks at the peak of the company’s meteoric rise   
   from a little-known computer science research lab in San Francisco to   
   one of the world’s most valuable startups.   
      
   If Amodei doesn’t budge, military officials have warned they will not   
   just pull Anthropic’s contract but also “deem them a supply chain risk,”   
   a designation typically stamped on foreign adversaries that could derail   
   the company’s critical partnerships with other businesses.   
      
   And if Amodei were to cave, he could lose trust in the booming AI   
   industry, particularly from top talent drawn to the company for its   
   promises of responsibly building better-than-human AI that, without   
   safeguards, could pose catastrophic dangers.   
      
   Anthropic said it sought narrow assurances from the Pentagon that Claude   
   won’t be used for mass surveillance of Americans or in fully autonomous   
   weapons. But after months of private talks exploded into public debate,   
   it said in a Thursday statement that new contract language “framed as   
   compromise was paired with legalese that would allow those safeguards to   
   be disregarded at will.”   
      
   That was after Sean Parnell, the Pentagon’s top spokesman, posted on   
   social media that “we will not let ANY company dictate the terms   
   regarding how we make operational decisions.” Anthropic has “until 5:01   
   p.m. ET on Friday to decide” if it would meet the demands or face   
   consequences, Parnell said.   
      
   Emil Michael, the defense undersecretary for research and engineering,   
   later lashed out at Amodei, alleging on X that he “has a God-complex”   
   and “wants nothing more than to try to personally control the US   
   Military and is ok putting our nation’s safety at risk.”   
      
   That message hasn’t resonated in much of Silicon Valley, where a growing   
   number of tech workers from Anthropic’s top rivals, OpenAI and Google,   
   voiced support for Amodei’s stand late Thursday in an open letter.   
      
   OpenAI and Google, along with Elon Musk’s xAI, also have contracts to   
   supply their AI models to the military.   
      
   Musk sided with the Trump administration on Friday, saying on his social   
   media platform X that “Anthropic hates Western Civilization” after   
   Michael drew attention to a previous version of Claude’s guiding   
   principles that encouraged “consideration of non-Western perspectives.”   
   All of the leading AI models, including Musk’s Grok and OpenAI’s   
   ChatGPT, are programmed with a set of instructions that guide a   
   chatbot’s values and behavior. Anthropic calls that guidance a constitution.   
      
   While some Trump-allied tech leaders have joined the fray — including   
   Musk and Palmer Luckey, co-founder of defense contractor Anduril — the   
   polarizing debate over “woke AI” has put others in a difficult position.   
      
   “The Pentagon is negotiating with Google and OpenAI to try to get them   
   to agree to what Anthropic has refused,” the open letter from some   
   OpenAI and Google employees says. “They’re trying to divide each company   
   with fear that the other will give in.”   
      
   Also raising concerns about the Pentagon’s approach were Republican and   
   Democratic lawmakers and a former leader of the Defense Department’s AI   
   initiatives.   
      
   “Painting a bullseye on Anthropic garners spicy headlines, but everyone   
   loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a   
   social media post.   
      
   Shanahan faced a different wave of tech worker opposition during the   
   first Trump administration when he led Maven, a project to use AI   
   technology to analyze drone footage and target weapons. So many Google   
   employees protested its participation in Project Maven at the time that   
   the tech giant declined to renew the contract and then pledged not to   
   use AI in weaponry.   
      
   “Since I was square in the middle of Project Maven & Google, it’s   
   reasonable to assume I would take the Pentagon’s side here,” Shanahan   
   wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s   
   position. More so than I was to Google’s in 2018.”   
      
   He said Claude is already being widely used across the government,   
   including in classified settings, and Anthropic’s red lines are   
   “reasonable.” He said the AI large language models that power chatbots   
   like Claude are also “not ready for prime time in national security   
   settings,” particularly not for fully autonomous weapons.   
      
   “They’re not trying to play cute here,” he wrote.   
      
   Parnell asserted Thursday that the Pentagon wants to “use Anthropic’s   
   model for all lawful purposes” and said opening up use of the technology   
   would prevent the company from “jeopardizing critical military   
   operations.” He and other officials haven’t detailed how they want to   
   use the technology.   
      
   The military “has no interest in using AI to conduct mass surveillance   
   of Americans (which is illegal) nor do we want to use AI to develop   
   autonomous weapons that operate without human involvement,” Parnell wrote.   
      
   When Hegseth and Amodei met Tuesday, military officials warned that they   
   could designate Anthropic as a supply chain risk, cancel its contract or   
   invoke a Cold War-era law called the Defense Production Act to give the   
   military more sweeping authority to use its products, even if the   
   company doesn’t approve.   
      
   Amodei said Thursday that “those latter two threats are inherently   
   contradictory: one labels us a security risk; the other labels Claude as   
   essential to national security.” He said he hopes the Pentagon will   
   reconsider given Claude’s value to the military, but, if not, Anthropic   
   “will work to enable a smooth transition to another provider.”   
      
   ---   
      
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca