Thursday, March 5, 2026

OpenAI Quietly Rewrites Pentagon Deal After Surveillance Concern

OpenAI CEO Sam Altman announced Monday night that the company has revised its agreement with the Pentagon to strengthen restrictions on how the Defense Department can use OpenAI’s artificial intelligence systems.

According to language published on OpenAI’s website, the updated agreement states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” The clarification comes after critics raised alarms that the original contract language left significant loopholes that could allow the government to monitor Americans.

The controversy highlights growing concerns about how advanced AI tools could transform intelligence gathering and surveillance.

Critics Say the Full Contract Remains Hidden

Despite the update, skepticism remains widespread because the full text of the contract has not been made public.

Brad Carson, a former congressman and former general counsel of the U.S. Army who now leads the Washington policy group Americans for Responsible Innovation, said OpenAI has not provided enough transparency to verify its claims.

“OpenAI has said that the Department of War contractually agreed not to use ChatGPT in agencies that surveil American people,” Carson said. “They have been happy to show contract language when it benefited them, but they refuse to release to the public this contractual provision.”

Carson added that without the full contract, it is difficult to determine whether the safeguards truly exist.

“I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it,” he told NBC News.

Legal experts say the situation underscores the importance of examining the full agreement.

“We still need to see the whole contract to say anything with a reasonable level of confidence,” said Brian McGrail, senior counsel at the Center for AI Safety. “It’s definitely a step in the right direction, and I do want to give OpenAI some credit.”

Intelligence Agencies Excluded From the Agreement

Altman also acknowledged that the rushed nature of the agreement may have fueled skepticism.

Sam Altman. Saul Loeb/Getty Images

Speaking about the controversy, Altman said the negotiations appeared “opportunistic and sloppy,” while emphasizing that the updated contract language bars OpenAI’s technology from being used for mass domestic surveillance or by intelligence agencies. The revision was intended to address growing criticism that the initial terms left room for government overreach.

Altman attempted to reassure critics in a post on X announcing the revised language, writing that protecting civil liberties was essential.

“It is critical to protect the civil liberties of Americans,” Altman wrote. He added that the Defense Department had confirmed OpenAI’s systems would not be used by intelligence agencies responsible for domestic surveillance, including the National Security Agency.

Katrina Mulligan, OpenAI’s head of national security partnerships, also stated that the contract excludes defense intelligence components, although she indicated the company could consider working with the NSA in the future if appropriate safeguards were established.

OpenAI did not respond to requests for additional comment.

AI Companies and the Pentagon Clash Over Guardrails

The dispute comes amid a broader fight between the Pentagon and rival AI company Anthropic over how military AI systems should be used.

Anthropic had been the only major AI provider cleared to operate on classified government networks until recently. The company maintained strict limits on how its systems could be deployed, refusing to allow them to be used for domestic surveillance or for controlling lethal autonomous weapons.

However, tensions escalated when the Defense Department pushed for language allowing the systems to be used for “any lawful purpose.”

Anthropic argued that such wording could allow the government to bypass safeguards, particularly if legal interpretations changed.

Last week, Defense Secretary Pete Hegseth reportedly threatened to designate Anthropic a national security supply chain risk, a move that would force the Pentagon and contractors to stop using the company’s technology.

Anthropic said the designation would be unprecedented for an American company.

National Security Officials Call for AI Cooperation

Retired Gen. Paul Nakasone, the former director of the National Security Agency and U.S. Cyber Command and now a member of OpenAI’s board, urged cooperation between the government and leading AI firms.

Paul Nakasone. Photographer: Mandel Ngan/AFP/Getty Images

Speaking at an Aspen Institute event in California, Nakasone argued that the U.S. military should incorporate technology from all major AI developers.

“We need Anthropic, we need OpenAI, we need all of our large language model companies to be partnering with our government,” he said.

Nakasone criticized the Pentagon’s threat to label Anthropic a supply chain risk, saying the situation reflected unnecessary conflict between American institutions.

“As an American citizen, someone who served in government, I just think that it’s not right,” he said.

AI Surveillance Concerns Continue to Grow

The debate surrounding the contract reflects broader fears that powerful AI systems could dramatically expand the government’s surveillance capabilities.

Researchers warn that modern AI tools can analyze vast quantities of digital data at speeds that were previously impossible, allowing authorities to track individuals’ behavior, movements, and online activity with extraordinary precision.

One particularly controversial practice involves the government purchasing commercially available data from companies that collect location information, browsing histories, and other behavioral data from smartphones and apps.

Sen. Ron Wyden of Oregon, a longtime critic of government surveillance practices, warned that AI could dramatically amplify the risks associated with this data.

“Location data, web browsing records, and information about mental health, political activities and religious affiliations are all available for pennies on the open market,” Wyden said in a statement.

He argued that using AI to compile such information into profiles of Americans could represent “a chilling expansion of mass surveillance.”

Legal Experts Warn of Potential Loopholes

Even with OpenAI’s updated language, experts caution that the ultimate meaning of the agreement could depend on how the government interprets its terms.

McGrail noted that intelligence agencies historically interpret legal exceptions broadly, especially when national security is involved.

“The pattern we’ve seen play out time and again in these surveillance debates is that the intelligence and national security community ends up interpreting exceptions in an extremely broad fashion,” he said.

Because many national security programs remain classified, he added, the public often lacks the information needed to challenge those interpretations.

Public Backlash Targets OpenAI

The controversy has also sparked backlash from activists and technology critics.

Over the weekend, protesters gathered outside OpenAI’s headquarters in San Francisco, writing chalk messages urging employees to question the company’s partnership with the Pentagon.

Demonstrators gather in front of OpenAI’s SF office to protest the company’s deal with the Pentagon. Manuel Orbegozo for BI

At the same time, reports indicated that uninstalls of OpenAI’s ChatGPT app surged after news of the military agreement spread online.

Observers say the dispute reflects deeper tensions between Silicon Valley’s AI industry and the national security establishment.

Michael Horowitz, a former Pentagon official and current political science professor at the University of Pennsylvania, said the disagreement ultimately stems from a breakdown in trust between the companies and the military.

“This dispute reflects a breakdown in trust between Anthropic and the Pentagon,” Horowitz said.

“Anthropic does not trust that the Pentagon will use their tech responsibly, and the Pentagon doesn’t trust that Anthropic will allow its tech to be used for what the Pentagon views as important national security use cases.”

As AI becomes more powerful and more deeply integrated into government systems, the debate over how it should be used is likely to intensify.
