Google and OpenAI Employees Sign "We Will Not Be Divided" Petition: Backing Anthropic Against Pentagon's Military AI Demands in 2026
In late February 2026, a remarkable show of cross-company solidarity emerged in the AI industry. More than 330 employees of Google and OpenAI publicly signed an open letter titled "We Will Not Be Divided" in support of Anthropic's refusal to grant the US Pentagon unrestricted access to its AI model Claude for military applications. The petition, hosted at notdivided.org, opposes the use of advanced AI for domestic mass surveillance of Americans or for fully autonomous weapons that can kill without human oversight. As the standoff between Anthropic and the Department of Defense escalated, with threats of blacklisting and of invoking the Defense Production Act, the employee-led movement highlights deep ethical concerns in the race for AI dominance and military advantage.
Background: The Anthropic-Pentagon Standoff
The conflict traces back to Anthropic's core safety principles. Founded with a focus on AI alignment and safety, the company has maintained strict "red lines": no deployment of Claude for mass surveillance or lethal autonomous weapons. Despite a DoD contract worth more than $200 million that allows Claude on classified networks (including roles in operations such as the Maduro raid), Anthropic CEO Dario Amodei rejected Defense Secretary Pete Hegseth's ultimatum on February 27, 2026. The Pentagon demanded removal of safeguards for "all lawful purposes," threatening to designate Anthropic a "supply chain risk" or to force compliance via the Defense Production Act.
Amodei responded firmly: "We cannot in good conscience accede to their request." He emphasized belief in AI's role in defending democracies but drew the line at unethical uses. This position contrasts with more flexible stances from competitors like OpenAI (which lifted its military ban in 2024) and xAI (which signed deals for classified use). The deadline passed without capitulation, raising questions about future DoD contracts and AI ethics in national security.
The "We Will Not Be Divided" Petition: Employee Solidarity
Launched amid the ultimatum, the petition quickly garnered signatures: 266 from Google and 65 from OpenAI as of February 27 morning, with numbers climbing past 330 by day's end. Signers (verified current employees, with anonymity options) urged their leaders to adopt Anthropic's red lines and resist division tactics. The letter states: "The Pentagon is trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."
The letter continues: "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
This echoes historical tech activism, like Google's 2018 Project Maven protests, in which thousands of employees opposed AI for drone targeting. Today, with AI advancing rapidly, employees fear complicity in dystopian applications. Over 100 Google DeepMind staff sent an internal letter to chief scientist Jeff Dean calling for ethical alignment, while broader coalitions representing more than 700,000 tech workers have backed similar demands.
Public and X Reactions: Viral Momentum
The story exploded on X (formerly Twitter) on February 27, 2026, with hashtags like #NoWarAI and #AIEthics trending. Employees and observers amplified the petition's message of unity against pressure.
@jasminewsun: 200+ Google and OpenAI staff have signed this petition to share Anthropic's red lines for the Pentagon's use of AI. Let's find out if this is a race to the top or the bottom. https://notdivided.org/ (4.8K likes, 298K views)
Reactions included calls for solidarity: employees criticized divide-and-conquer tactics, with one noting that the petition creates "shared understanding" against overreach. Discussions highlighted the risks of autonomous weapons and of surveillance eroding civil liberties.
Public sentiment was split: supporters praised the signers' ethical courage, while critics argued that military AI is essential against adversaries such as China. The viral spread underscores growing scrutiny of tech-military ties.
Broader Implications: Ethics, Security, and the AI Future
This standoff raises profound questions. Proponents of military AI integration argue it is vital for maintaining a strategic edge in conflicts, speeding decision-making, and countering global rivals that are investing heavily in similar technology. The Pentagon's new AI initiatives aim to equip warfighters with advanced tools. Yet critics warn of escalation risks, ethical violations, and "killer robots" operating without accountability, while mass surveillance threatens privacy and revives fears stirred by past domestic programs.
Outcomes could reshape industry norms: blacklisting Anthropic might deter ethical firms, but it could also boost Anthropic's reputation among safety-focused talent. Heavy-handed use of government tools such as the Defense Production Act risks backlash, including a talent exodus from defense-aligned companies. Globally, the dispute feeds into UN debates on banning lethal autonomous weapons and sets precedents for AI governance amid the US-China tech race.
Whether framed as a story about the petition, Anthropic's ethical red lines, or the wider implications of AI in defense, this moment is pivotal. It tests whether innovation prioritizes humanity or unchecked power.
Conclusion: A Defining Crossroads for Tech and Defense
The "We Will Not Be Divided" petition and Anthropic's defiance mark rare unity in a competitive field. As negotiations evolve post-deadline, the resolution could redefine responsible AI deployment, government-tech relations, and global security norms. Employees' voices remind us technology's power demands caution—prioritizing ethics over expediency. This story continues to unfold, with potential ripple effects across the AI landscape in 2026 and beyond.