On March 2, 2026, OpenAI announced revisions to its agreement with the United States Department of Defense governing the use of its artificial intelligence technologies in national security-related activities. The update followed public criticism and internal concerns about the scope and safeguards of the initially disclosed contract. According to the company, the revised language clarifies that its systems cannot be used for domestic surveillance of U.S. citizens or residents, including through the acquisition or use of commercially obtained personal or personally identifiable information. The updated terms also specify that OpenAI's tools cannot be deployed by intelligence agencies within the Department of Defense without a separately negotiated agreement.
The company further stated that the revised contract introduces additional safeguards aimed at the responsible use of its technologies. These include a prohibition on mass domestic surveillance, restrictions on the use of its systems for autonomous weapons targeting, and limitations on the deployment of AI in high-impact automated decision-making, such as social credit systems. Discussion surrounding the agreement intensified after the AI company Anthropic declined a similar proposal from the U.S. government, arguing that the initial language did not provide sufficient safeguards for sensitive applications such as mass surveillance and autonomous military systems.