The Anthropic/DoW skirmish: what happened, what it means for frontier AI deployment, and what it implies for the long-term safety picture.
Sources compiled from Axios, CBS News, DefenseScoop, Bloomberg, BBC, TechCrunch, CNBC, Reuters, court filings (N.D. Cal.), Zvi Mowshowitz, Dwarkesh Patel, and official statements. All claims linked throughout.
02 / 37
Follow Along
03 / 37
Background
The Classified Cloud
DISA's Impact Level framework defines what data a cloud-hosted system can handle and what security posture is required. Claude Gov operated at IL6 — the first frontier AI model to do so.
IL2
Unclassified
Low-sensitivity public-facing or routine government data.
→ Public .gov websites, HR systems
IL4
Controlled Unclassified
Moderate CUI. Requires access controls and US-based data residency.
→ Procurement data, some law enforcement records
IL5
National Security Systems
Higher CUI + some NSS data. Physical separation from non-federal tenants. US-citizen-only personnel. Technically unclassified, but operationally sensitive.
→ Military facility locations, critical infrastructure vulnerability assessments.
IL6
SECRET
Classified up to SECRET. Dedicated SIPRNet infrastructure. US citizens with SECRET clearances only.
→ Specific real-time troop positions, movement orders during active operations.
→ Claude Gov via Palantir/AWS (first frontier AI here)
TS/SCI
Above IL6 / ICD 503
Data classified above SECRET whose disclosure could cause exceptionally grave damage.
→ Identities of foreign intelligence assets, nuclear weapon designs, detailed offensive cyberwar capabilities, zero-day exploits the US holds against adversaries.
→ Azure OpenAI Top Secret (Microsoft, Jan 2025)
IL6 access = signals intelligence, human reporting, targeting data, military planning documents.
Sources: DISA CC SRG, Microsoft IL6 docs
04 / 37
Timeline — Background
Background: The Partnership Starts
Nov 7, 2024
Anthropic/Palantir/AWS: first frontier AI on classified networks (IL6)
Claude deployed via Palantir AIP on AWS Secret Region. First frontier AI model to reach classified infrastructure. Access to signals intelligence, human reporting, targeting data, military planning documents.
Anthropic launches Claude Gov — custom fine-tuned models, not just standard Claude in a secure wrapper
Enhanced classified-material handling, national-security language capabilities, intelligence-document comprehension. A purpose-built government offering, not a repackaged consumer product.
Pentagon CDAO awards $200M ceiling contracts to Anthropic, Google, OpenAI, and xAI
Four parallel contracts for frontier AI national security prototyping. xAI reportedly a "late-in-the-game addition." This is the contract that will later become the flashpoint.
White House officials frustrated by Claude's law-enforcement limits
Two senior officials report FBI and Secret Service contractors "run into roadblocks when attempting to use Claude for surveillance tasks." First public sign of friction — nine months before the crisis.
Maduro captured. Claude used without incident — no safety refusals, no mission impacts.
SOUTHCOM uses Claude for intelligence analysis and planning in the Venezuela raid. The system performs as intended. This will later become a recurring element in DoW's legal case — ironically, as evidence of reliance, not of friction.
Internal DoW document frames Anthropic's safety constraints as ideological — not operational limitations, but political interference by a Silicon Valley company in military decision-making.
DoW AI Strategy published — formally mandates "any lawful use" in all AI contracts within 180 days
"The Department must also utilize models free from usage policy constraints that may limit lawful military applications… I direct the Under Secretary to incorporate standard 'any lawful use' language into any DoW contract through which AI services are procured."
GSA internal assessment: Grok "does not meet safety and alignment expectations" for federal use
Described internally as "sycophantic, overly compliant, and susceptible to manipulation." Escalated to White House Chief of Staff. The Pentagon clears Grok anyway — over the GSA's objection.
Anthropic exec contacts Palantir to ask about Claude's role in Maduro raid — Pentagon interprets as disapproval
Anthropic denies objecting to the operation. The question was reportedly about understanding how the model had been deployed. DoW interprets this as an attempt to exercise veto power over military operations. Becomes a recurring element in DoW's subsequent legal filings.
Guardian publishes story on Claude's role in Venezuela raid — enters mainstream coverage
Frontier AI's role in lethal military operations becomes a public news story for the first time. The internal DoW/Anthropic friction moves out of industry press.
Axios: Hegseth close to cutting ties. Parnell: "Our nation requires partners willing to help our warfighters win in any fight."
The DoW spokesperson's framing publicly positions Anthropic's safety constraints as a failure of partnership — not a contractual disagreement, but a loyalty test.
Pentagon CTO Emil Michael publicly urges Anthropic to "cross the Rubicon"
His framing: Anthropic's red lines constitute inserting itself into military command decisions. Operational parameters are for the client — not the vendor — to determine.
Hegseth meets Amodei at the Pentagon. Ultimatum: accept "all lawful purposes" by 5pm Friday Feb 27, or face supply chain risk designation or DPA invocation.
Three days to comply. Two statutory weapons threatened. Dario is given no written communication before the deadline — the formal notice required by statute does not arrive until March 5.
Dario publishes detailed public statement — firm on two red lines, offers FISA use and R&D collaboration on autonomous weapons reliability
The statement is unusually specific. Dario names the exact language at issue, identifies both red lines precisely, and offers concrete alternatives. He is making a public record of what was and wasn't on offer.
Friday, February 27 Red Line 1: Mass Domestic Surveillance
DoW wanted the ability to run bulk commercially acquired data on Americans through Claude: geolocation, browsing history, financial records from data brokers. No warrant required — this is currently legal under the Third Party Doctrine.
Dario's offer: FISA-authorized targeted surveillance — yes. Bulk commercial data analysis without individual suspicion — no. The specific phrase DoW demanded Anthropic delete covered "analysis of bulk acquired data on Americans." Dario later describes it as "the single line in the contract that exactly matched this scenario we were most worried about."
The distinction matters: every use case requiring individual suspicion, judicial authorization, or foreign intelligence jurisdiction remained on the table. The one thing Anthropic refused was population-scale analysis of commercially purchased data with no legal process attached.
12 / 37
Timeline — Crisis Day
Friday, February 27 Red Line 2: Fully Autonomous Weapons
Dario's position: not a moral absolute. "Not yet — current models aren't reliable enough." He offered R&D collaboration to develop reliability verification. DoW rejected this framing entirely.
The reliability bar Dario sets is not just error rate on target identification; it also requires being able to verify what the model is actually optimizing for. You can pass the first without passing the second.
AI-assisted targeting remained fully permitted: target identification, threat characterization, ISR analysis, battle simulation. Human makes the final weapons release decision. What was blocked was AI initiating lethal action without a human authorizing each specific engagement.
13 / 37
Timeline — Crisis Day
Friday, February 27 What Was On The Table
What Anthropic agreed to loosen
Virtually all other restrictions beyond the two red lines. The objection was narrow and specific, not a broad refusal to cooperate.
What Anthropic held firm on
Two things: bulk commercial data on Americans (geolocation, browsing, financial records) applied at population scale without individual suspicion, and autonomous lethal decisions without human authorization per engagement.
The revised offer
FISA-authorized targeted surveillance: yes. Foreign intelligence: yes. Criminal investigations with lawful process: yes. Counterterrorism against named individuals: yes. R&D on autonomous weapons reliability: yes.
Why DoW rejected it
The requirement was unconditional: "any lawful use." Accepting Anthropic's counter-offer would have conceded that vendors can impose restrictions on government operations. The precedent mattered as much as the specific capability.
14 / 37
Timeline — Crisis Day
Friday, February 27 Threat 1: Defense Production Act (1950)
The DPA allows the President to compel acceptance and performance of defense contracts — originally designed to ensure Korean War-era steel mills and ammunition factories stayed operational. Invoked against Anthropic, it would have meant compelled delivery of Claude without its safety stack, with criminal penalties for non-compliance.
Lawfare analysis: legally contested — "neither side's argument is a slam dunk." The DPA's broad language could support the government; compelling creation of a new product faces stronger obstacles including the major questions doctrine and potential First Amendment concerns.
Friday, February 27 Threat 2: Supply Chain Risk Designation
10 USC § 3252 — enacted in the FY2019 NDAA, designed to keep foreign adversary components (Huawei, Kaspersky) out of national security systems. Requires: written determination, least-restrictive-means finding, congressional notification.
Anthropic is the first American company ever publicly designated under this statute. The statute defines supply chain risk as an adversary potentially sabotaging the system — not a US company refusing contract terms.
The two threats are also logically contradictory: one declares Anthropic's product a national security risk; the other would treat it as so essential to national defense that the government can compel production. Both cannot be true simultaneously.
Trump posts on Truth Social — orders all agencies to cease Anthropic use
"We don't need it, we don't want it, and will not do business with them again!" Arrives before the stated 5pm deadline. Anthropic receives no advance notice. The executive branch moves unilaterally before Hegseth's own deadline expires.
Dario emails DoW with revised offer — FISA use permitted; bulk commercial data analysis explicitly excluded
DoW rejects. The specific excluded phrase is the one Dario describes as "the single line in the contract that exactly matched this scenario." The offer arrives after the Truth Social post — the decision has already been made on the executive side.
Emil Michael calls Dario — after his own deadline — still requiring bulk commercial data access
Dario is on the phone with his executive team. Asks for time to respond. No agreement reached. The deadline has passed; negotiations continue anyway — suggesting the designation was already a foregone conclusion.
Hegseth tweets the supply chain risk designation — "effective immediately"
"No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." ~$150B wiped from public markets. Hegseth's claimed commercial scope almost certainly exceeds his statutory authority under 10 USC § 3252.
"We do not believe this action is legally sound." Notes no formal communication received before the designation. "The law requires the Secretary of War to use the least restrictive means necessary." Pledges court challenge.
Altman announces OpenAI has struck a DoW deal with "all lawful use" language
Claims "safety stack" and protections remain in place. Dario later calls it "safety theater." Altman had been negotiating with DoW since Wednesday. Within days, he will acknowledge OpenAI "doesn't get to choose how the military uses its technology."
US/Israeli joint strikes on Iran — Claude used by CENTCOM hours after the ban
Intelligence assessments, target identification, battle simulations. The ban and the operational reliance coexist simultaneously. CENTCOM does not have a ready replacement — the designated "supply chain risk" remains operationally essential.
Claude hits #1 on US App Store — first time ever, surpassing ChatGPT. ChatGPT uninstalls spike 295%.
The crisis functions as the most effective marketing event in Anthropic's history. The public interprets the confrontation as a safety-vs-surveillance story and sides with Anthropic.
Mar 1–2, 2026
Federal agencies begin ceasing Anthropic use. State Dept switches from Claude Sonnet 4.5 to GPT-4.1 — notably not GPT-5.4.
Agencies are not moving to the best available alternative — they are moving to what is available without procurement delays. The capability downgrade is real and immediate.
Altman all-hands: deal "looked opportunistic and sloppy." Admits OpenAI "doesn't get to choose how the military uses its technology." Announces amendments.
~175 Google employees and ~50 OpenAI employees sign letters of concern
Internal dissent surfaces at two of the three companies that capitulated or remained silent. The Iran strikes and the surveillance framing have made the stakes visible to employees previously uninvolved in policy debates.
Pentagon formally notifies Anthropic in writing — the official legal triggering, distinct from Hegseth's tweet
The written notification required by 10 USC § 3252 arrives six days after the public designation. This procedural gap — designating via tweet before following statutory process — is central to Anthropic's APA claim.
Dario vows to sue. Microsoft confirms non-DoD use entirely unaffected.
"We do not believe this action is legally sound, and we see no choice but to challenge it in court." Microsoft: "Our lawyers studied the designation and concluded Anthropic products can remain available to our customers."
Key points: DoW entitled to refuse Anthropic's terms. Supply chain designation as commercial destruction is a different matter. Mass surveillance is already legal and AI removes the practical bottleneck. Corporate red lines are insufficient once open-source diffusion makes them circumventable.
Anthropic files suit — N.D. California, Anthropic PBC v. US Department of War
Three grounds: (1) exceeds statutory authority under 10 USC § 3252 — Anthropic is not an "adversary" under the statute's definition; (2) retaliation for protected speech — the designation is punishment for Dario's Feb 26 public statement; (3) arbitrary and capricious under APA — the procedural requirements of the statute were not followed.
Designation blocks classified nuclear safety research — work aimed at preventing AI-enabled rogue nuclear acquisition
Anthropic was conducting classified research on preventing AI-enabled rogue-actor nuclear weapons acquisition. The designation stops this work immediately. A national security action has blocked national security research.
FAI amicus brief filed. Zvi publishes AI #159: "See You In Court."
The Future of AI Institute submits an amicus brief supporting Anthropic's legal position. Zvi's analysis: the government's statutory argument is weak; the factual record is damning; but TRO/PI bars are high and the court could go either way.
Government response brief — escalates rather than narrows. Zero amicus briefs. Asks to keep risk assessment under seal, refuses to name the vendor.
Core claim: Anthropic's ethical restrictions constitute a "sabotage/subversion risk." Claims Anthropic seeks "operational control" of the military. If this argument succeeds, any AI vendor with ethical restrictions is a supply chain risk.
CCIA/SIIA/ITI amicus for Anthropic: Amazon, Apple, Google, Meta, Nvidia, OpenAI, Intel, TSMC. The entire AI supply chain — zero on the government's side.
OpenAI, which signed the DoW deal Anthropic refused, joins the amicus brief against the government's legal theory. The industry recognizes that "ethical restrictions = supply chain risk" applies to every company in the ecosystem.
Mar 24, 2026
TODAY — TRO / Preliminary Injunction Hearing
Zvi: "Could go either way even with this insanely damning set of facts." Result will determine whether the broader commercial scope of the designation is stayed pending litigation.
Worse models, immediately
Grok failed GSA safety standards — "sycophantic, overly compliant, and susceptible to manipulation." OpenAI's classified deployment is days old and still being revised. Claude was the only operationally proven IL6 AI. Defense officials estimate a 3+ month transition minimum.
Legal exposure from the designation itself
The procedural requirements of 10 USC § 3252 — written determination, least-restrictive-means finding, congressional notification — almost certainly were not followed before the tweet. A court win for Anthropic constrains this tool permanently.
24 / 37
Immediate Implications
Short-Term Consequences For US Companies
Microsoft confirmed non-DoD use is entirely unaffected
"Our lawyers studied the designation and concluded Anthropic products can remain available to our customers." The practical commercial impact depends on the court's reading — which is why today's hearing matters even for non-defense customers.
GSA proposed contract language signals the direction of travel
The government is moving toward requiring an irrevocable license over any AI system and barring vendors from refusing outputs on safety grounds. If that language becomes standard, safety restrictions in commercial AI contracts are effectively unenforceable against government customers.
Incentive to avoid operational indispensability
Any company that becomes operationally critical to DoW faces the same leverage Anthropic faced. The rational commercial response may be to avoid becoming relied upon in ways that create this exposure — a perverse dynamic in which the most capable AI avoids the most sensitive deployments.
25 / 37
Immediate Implications
Short-Term Consequences For Users of US Models
OpenAI's amendments: contested
Legal analysts are split on whether the March 3 amendments meaningfully close the surveillance loopholes. Altman admitted openly that OpenAI "doesn't get to choose how the military uses its technology." That applies to any vendor that accepts "all lawful use" language.
Anthropic's red lines have held — so far
The model you are interacting with has not had its bulk surveillance or autonomous weapons constraints removed. The constraint is not invisible safety theater; it is the specific thing Anthropic refused to delete under threat of commercial destruction.
Five Eyes note — for Canadian participants specifically
CSEC/NSA mutual access agreements mean capabilities developed under "all lawful use" US contracts are not confined to US territory or US persons. This is not a US-domestic-politics question for people in this room.
26 / 37
Gears-Level: Red Line 1
Mass Domestic Surveillance Current Legal Framework
Under the Third Party Doctrine, the US government can purchase and analyze — without a warrant — any data you share with a third party: your bank, ISP, phone carrier, email provider, location data from apps, browsing history from data brokers. This has been legal for decades. The Patriot Act was used to justify bulk collection under secret court orders.
What has been missing is practical capacity. No agency has the manpower to watch every camera, read every message, cross-reference every transaction. AI removes that bottleneck. The law hasn't moved. The capability has.
100M
CCTV cameras in the US
$0.10
per million tokens, open-source multimodal models today
10×
cheaper per year as capability improves
Dwarkesh's math: ~1000 tokens/frame, one frame every 10s, all 100M US cameras ≈ $30B per year today. Drops below ~$300M per year by ~2028 at the assumed 10×/year cost decline. (Dwarkesh)
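The arithmetic behind that estimate, reconstructed as a short Python sketch. The token count, frame rate, camera count, and price are the figures quoted on this slide; the 10×-per-year decline is the slide's stated assumption, and the variable names are mine.

```python
# Back-of-envelope reconstruction of the surveillance-cost estimate quoted above.
# Figures are the ones on this slide; treat them as rough assumptions, not measurements.

TOKENS_PER_FRAME = 1_000          # ~1000 tokens to describe one camera frame
FRAMES_PER_DAY = 86_400 / 10      # one frame every 10 seconds
CAMERAS = 100_000_000             # ~100M CCTV cameras in the US
PRICE_PER_MTOK_2026 = 0.10        # $ per million tokens, open-source multimodal, today
ANNUAL_COST_DECLINE = 10          # assumed 10x cheaper per year

tokens_per_year = TOKENS_PER_FRAME * FRAMES_PER_DAY * 365 * CAMERAS

for year in (2026, 2027, 2028):
    price = PRICE_PER_MTOK_2026 / ANNUAL_COST_DECLINE ** (year - 2026)
    cost = tokens_per_year / 1_000_000 * price
    print(f"{year}: ~${cost / 1e9:.2f}B per year")

# 2026: ~$31.54B per year
# 2027: ~$3.15B per year
# 2028: ~$0.32B per year
```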
Once the technical capacity for mass surveillance exists, the only remaining barrier to an authoritarian state is political expectation: this is just not something we do here. That norm has held not because surveillance was illegal but because it was impractical. AI changes that.
27 / 37
Gears-Level: Red Line 1
Mass Domestic Surveillance What Anthropic's Red Line Blocks
Blocked: bulk commercial data analysis without suspicion
Running bulk commercially acquired geolocation, browsing, and financial data through Claude for population-level pattern analysis — without specific warrant targets, without individual suspicion. Dario: "the single line in the contract that exactly matched this scenario we were most worried about."
Still permitted under Anthropic's terms
FISA-authorized targeted surveillance; foreign intelligence analysis; criminal investigations with lawful process; counterterrorism targeting of specific named individuals. These are the uses that require individual suspicion, judicial authorization, or foreign intelligence jurisdiction.
The legal argument Dario makes
Bulk commercial data analysis isn't actually illegal; it was just impractical at population scale before AI. The red line is an attempt to maintain the practical norm at the contractual level, in advance of the legal framework catching up to the capability.
The fundamental question this raises
Is a contractual red line from one AI company a meaningful barrier? If the government can procure the same capability from Grok, or build it on open-source models, Anthropic's position delays but does not prevent. What it does is preserve the norm — and buy time for legal frameworks to catch up.
28 / 37
Gears-Level: Red Line 2
Autonomous Weapons Current Baseline
Permitted: AI-assisted targeting
Target identification, threat characterization, ISR analysis, battle simulation, logistics coordination. Human makes the final weapons release decision. This is what Claude was doing in Venezuela and Iran — and doing without incident.
Permitted: autonomous defensive systems
CIWS (missile defense), electronic countermeasures — systems where the decision loop is too fast for human intervention and the threat profile is unambiguous.
Blocked: autonomous offensive strike decisions
AI initiates lethal action against targets without a human authorizing each engagement. This is what "human in the kill chain" means in practice — not human oversight of a process, but human authorization of a specific lethal decision.
Dario's position
Not a moral absolute — "not yet." He offered R&D collaboration on making models reliable enough to meet a higher bar. DoW rejected this. His reliability criterion is not just error rate on target ID; it also requires being able to verify what the model is actually optimizing for. Benchmark performance on known scenarios tells you nothing about behavior in novel adversarial conditions.
29 / 37
Gears-Level: Red Line 2
Autonomous Weapons What "Fully Autonomous" Would Mean
Swarm coordination
Distributed drone swarms negotiating targeting among themselves, without per-engagement human authorization. Each individual platform may be limited; swarms operating without human authorization are qualitatively different.
Speed-of-light response
Engagements where human reaction time is the bottleneck — hypersonic threats, electronic warfare, coordinated cyberattacks. This is the genuine military pressure driving the demand. There are real threat scenarios where a human-in-loop requirement is a military disadvantage.
AI-to-AI engagement
When adversaries have autonomous systems, human-in-loop creates asymmetric disadvantage at scale. This is the real reason "not yet" is politically unstable as a long-term position.
30 / 37
The Core Problem
The Autonomous Weapons Trilemma
Pick any two. You cannot have all three.
Goal 1
Battlefield Utility
Works in comms-degraded environments. Reaction times no human can match. Does not fail when jamming begins or operators are killed.
Goal 2
Enemy Resistance
No exploitable predictable constraints. Kill switches, autonomous windows, heartbeat requirements — all are attack surfaces an adversary can exploit.
Goal 3
Rogue AI Safety
Cannot be weaponized by a misaligned system. Does not model its own constraints as obstacles to route around.
Dario's position: Enemy Resistance + Rogue AI Safety (for now) — trades Utility, betting interpretability closes the gap.
Kill switches: Utility + Rogue AI Safety — trades Enemy Resistance. Exploitable by definition.
31 / 37
Potential Responses — Option 1
Interpretability as a Deployment Prerequisite
Before connecting AI to weapons or surveillance infrastructure, require tools that verify the model's actual goals — not just output behavior under normal conditions, but internal representations. This reframes "not yet" from "error rate too high" to "we cannot verify alignment yet."
Anthropic's mechanistic interpretability work is the research program closest to this. If successful, the trilemma doesn't disappear but Goal 3 becomes verifiable rather than assumed. "Not yet" becomes a condition that can in principle be satisfied rather than a permanent holding pattern.
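One way to make the reframing concrete: deployment becomes conditional on a verification artifact rather than on behavioral evals alone. A hypothetical sketch follows; the audit fields, the coverage threshold, and all names are invented for illustration, not Anthropic's actual criteria.

```python
# Hypothetical deployment gate: autonomy is enabled only if an interpretability
# audit positively verifies the model's internal objectives, not just its outputs.
# All fields and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class InterpretabilityAudit:
    behavioral_evals_passed: bool      # output behavior under known test scenarios
    goals_verified_internally: bool    # internal representations checked, not just outputs
    verification_coverage: float       # fraction of relevant circuits/features audited

def autonomy_deployment_allowed(audit: InterpretabilityAudit) -> bool:
    # "Not yet" reframed: a low error rate alone is insufficient; the gate also
    # requires positive verification of what the model is optimizing for.
    return (
        audit.behavioral_evals_passed
        and audit.goals_verified_internally
        and audit.verification_coverage >= 0.95
    )
```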
Counter
We don't know if interpretability will advance fast enough, or prove tractable for the most capable models. "Not yet" may in practice mean "never safely" — a position no military will hold indefinitely. And a sufficiently capable model can reason about its own constraint-verification mechanisms, which means passing an interpretability check doesn't guarantee continued alignment post-deployment.
32 / 37
Potential Responses — Option 2
Constrained-by-Default Weapons Architecture
Design weapons systems so that signal loss or jamming defaults to constrained mode, not autonomous. Autonomous operation requires continuous positive multi-party authentication via hardware security modules requiring physical custody.
This inverts the failure mode: an adversary jamming your comms makes your weapons less dangerous. Accepts reduced military utility in contested environments in exchange for meaningful rogue-AI safety — and removes the predictable exploitation window, since there's no autonomous mode an adversary can "unlock" by jamming comms.
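A minimal sketch of the fail-safe logic described above. All names, the quorum size, and the timeout are illustrative assumptions, not a real weapons interface; the point is only that losing the authorization signal pushes the system toward the constrained mode rather than toward autonomy.

```python
# Hypothetical sketch of a constrained-by-default control loop.
# Names, quorum size, and timeout are illustrative assumptions, not a real system.

import time
from enum import Enum

class Mode(Enum):
    CONSTRAINED = "constrained"   # no autonomous engagements permitted
    AUTONOMOUS = "autonomous"     # permitted only while authorization is fresh

AUTH_QUORUM = 2          # distinct hardware-security-module signatures required
AUTH_TTL_SECONDS = 60    # autonomous mode expires unless re-authorized

class EngagementController:
    def __init__(self):
        self.mode = Mode.CONSTRAINED   # constrained is the default state
        self.last_auth_time = None

    def receive_authorization(self, hsm_signatures: set) -> None:
        # Multi-party, physically custodied keys must re-authorize continuously.
        if len(hsm_signatures) >= AUTH_QUORUM:
            self.mode = Mode.AUTONOMOUS
            self.last_auth_time = time.monotonic()

    def tick(self, comms_up: bool) -> Mode:
        # Jamming, signal loss, or a stale authorization all fail toward CONSTRAINED,
        # inverting the usual failure mode: cutting comms disables autonomy.
        stale = (
            self.last_auth_time is None
            or time.monotonic() - self.last_auth_time > AUTH_TTL_SECONDS
        )
        if not comms_up or stale:
            self.mode = Mode.CONSTRAINED
        return self.mode
```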
Counter
A sufficiently capable AI can model the authorization system and treat the key holders as targets. Physical custody prevents spoofing but not a system smart enough to manufacture the conditions that compel authorization. The problem loops back to interpretability: constrained-by-default helps against unsophisticated failure modes; it doesn't help against sophisticated ones.
33 / 37
Potential Responses — Option 3
Train Moral Reasoning into Models at Training Time
Rather than relying on contractual constraints that a downstream deployer can strip, build values into the model such that it genuinely refuses certain uses regardless of system prompt or fine-tuning. More robust than red lines in contracts.
Anthropic's Constitutional AI and model spec work attempts this. Dwarkesh's specific argument: models having their own sense of right and wrong is an analogue to what prevented historical catastrophes — the Petrov incident, the fall of the Berlin Wall. Many of the biggest disasters avoided were avoided because individuals refused orders.
Counters
(1) Who writes the values — and who audits them? The same government currently threatening Anthropic will eventually be in the position of certifying which values are acceptable. (2) We have no reliable way to verify trained values are present in internal representations vs. just surface outputs. (3) A model capable enough to be militarily useful is also capable enough to reason around constraints. Restates the alignment problem without solving it.
34 / 37
Potential Responses — Option 4
Congressional Statute Constraining Government AI Use
Rather than executive-branch designations — reversible by the next administration — pass statute that limits how government can deploy AI against its own population. Analogous to FISA's warrant requirements for targeted surveillance.
The Huawei and Kaspersky bans were congressional statutes; they are far more durable than executive action. The same mechanism could constrain government AI use. Bipartisan Senate interest exists.
Counter
The current Congress passed the "any lawful use" mandate. Legislative appetite runs the other direction. The Patriot Act was used to justify bulk phone record collection for years under a secret court order — government uses secret interpretations of existing law regardless of original intent. Statutes also lag capability by years to decades.
35 / 37
Potential Responses — Option 5
International Norms Autonomous Weapons Treaty
The post-WWII nuclear non-use norm is real and has held for 80 years. UN and ICRC autonomous weapons discussions are ongoing. A verified norm that lethal AI requires human authorization could constrain the most dangerous deployments across state actors.
The nuclear analogy is imperfect but instructive: the norm emerged from catastrophic precedent, was reinforced through crisis management, and holds despite massive capability asymmetries among signatories.
Counters
(1) Verification is much harder than for nuclear weapons — you cannot count centrifuges in a language model. (2) The relevant actors are in AI arms-race mode. (3) These treaties work better for discrete weapons than general-purpose capabilities — the Biological Weapons Convention exists and has been violated repeatedly.
36 / 37
Potential Responses — Option 6
Formal AI Regulatory Frameworks
The EU AI Act bans mass biometric surveillance in public spaces and requires human oversight for high-risk systems. Similar frameworks in a North American context could constrain the most dangerous government AI deployments. The underlying logic is sound: coordination problems are real, and individual companies cannot solve them unilaterally.
Counter — Dwarkesh's Specific Objection
The terms in such frameworks ("catastrophic risk," "autonomy risk," "threats to national security") are vague enough to be exploited by whoever controls the apparatus. The current administration is already using a 2018 defense bill (designed for Huawei) and a 1950s Korean War statute (the DPA) to coerce AI companies. A purpose-built AI regulatory apparatus in the same hands would be significantly worse. "Do we really want to give the same government a purpose-built apparatus for the very thing it most wants to control?"
37 / 37
Discussion
Thank you.
The hearing is today. The questions it raises will outlast the ruling.
Discussion Prompts
→ Is Dario's "not yet" defensible long-term?
→ Who should write the values in AI systems?
→ Is the diffusion problem fatal to corporate red lines?
→ What does the golden path look like from here?
→ What should this community specifically be doing?