Statement from Dario Amodei on our discussions with the Department of War
Feb 26, 2026
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
Anthropic has also acted to defend America’s lead in AI, even when doing so runs against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of which have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and advocated for strong export controls on chips to ensure a democratic advantage.
Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
- Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
- Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate the selection and engagement of targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons, and we will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. Moreover, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. Such weapons must be deployed with appropriate guardrails, and those guardrails do not exist today.
To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.
The Department of War has stated that they will only contract with AI companies that agree to permit “any lawful use” and remove safeguards in the cases described above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.
We remain ready to continue our work to support the national security of the United States.