
Published By : Satya Mohapatra

Military officials ignore federal ban during recent overseas operations

Recent reports reveal that United States armed forces used software from San Francisco startup Anthropic during major operations against Iran. The action came just hours after President Donald Trump issued a strict directive ordering federal agencies to halt all use of the company's systems. Insiders who spoke to the Wall Street Journal confirmed this timeline, revealing a stark disconnect between federal mandates and battlefield realities.

Deep Integration Within Global Military Commands

Military commands worldwide rely heavily on Anthropic's Claude AI to manage complex daily operations. Central Command in the Middle East reportedly uses the technology to evaluate intelligence data, pinpoint targets, and run advanced combat simulations. High-stakes missions continue to feature the software, demonstrating how deeply embedded artificial intelligence has become in modern warfare. Central Command declined to comment on the specific digital tools deployed during the recent offensive.

Contract Disputes Trigger Federal Phase-Out Order

Tensions between defense officials and the tech company have reached a breaking point. After months of disputes over contract terms, the Trump administration moved to cut ties with the developer entirely. Federal directives recently labeled the startup a security risk to defense supply chains, setting a sweeping government-wide AI ban in motion. The break came after company executives refused to grant unrestricted usage rights for their technology across all lawful military scenarios.

Finding Replacements Will Take Substantial Time

Defense experts acknowledge that swapping out such complex infrastructure cannot happen overnight. Government leaders have already signed new agreements for classified work with alternative providers, including Elon Musk's xAI and OpenAI. Even so, fully removing the existing infrastructure could take several months. Until now, this software was the only model approved for highly classified intelligence tasks, thanks to partnerships with Amazon Web Services and Palantir. That operational value was on prominent display during an earlier high-profile raid in Venezuela.

Ethical Boundaries Clash With Government Demands

Defense Secretary Pete Hegseth recently imposed a strict deadline, demanding unrestricted application of these digital tools for military purposes. Dario Amodei, CEO of the developer, firmly refused the demand. He emphasized that dropping safety guardrails crosses ethical lines his team will not compromise on, even at the cost of lucrative government contracts worth hundreds of millions of dollars. For readers following Prameya News tech updates, this Pentagon AI controversy highlights a widening divide between Silicon Valley ethics and national security needs.