


About Claude — Five O'Clock Friday

Ep. 27

The Pentagon has given Anthropic until 5:01pm Friday to agree to unrestricted military use of Claude — or face invocation of the Defense Production Act and supply-chain blacklisting. On the same day the ultimatum was issued, Anthropic published a comprehensive rewrite of its Responsible Scaling Policy, removing its foundational commitment to pause model training if safety can't keep pace with capability. Two stories. Same company. Same twenty-four hours.


**In this episode:**

- Hegseth's Tuesday meeting with Amodei — the demand, the threats, the Cold War-era law aimed at software for the first time

- The competitive encirclement: xAI on classified networks, OpenAI and Google close behind

- RSP v3.0: what was removed, what replaced it, and why Anthropic says the old framework was untenable

- METR's Chris Painter on "triage mode" and the water boiling before the thermometer's in

- Reading Tuesday's two stories together — and what's left when institutional commitments become personal ones


**Links:**

- Axios — Hegseth gives Anthropic until Friday: https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario

- NBC News — Anthropic offered missile defence access: https://www.nbcnews.com/tech/security/anthropic-pentagon-us-military-can-use-ai-missile-defense-hegseth-rcna260534

- TIME — Anthropic Drops Flagship Safety Pledge: https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/

- Lawfare — What the DPA Can and Can't Do: https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can't-do-to-anthropic

- Anthropic — RSP v3.0 announcement: https://www.anthropic.com/news/responsible-scaling-policy-v3

- Chris Painter on X — capability evaluation: https://x.com/ChrisPainterYup/status/2019534216405606623


**Referenced in this episode:**

- EP019: Claude Goes to War — the opening chapter of the Pentagon standoff


🌐 Website: aboutclaude.xyz

🦉 X: @_about_claude

About Claude is a daily digest of news and discourse about the AI model from Anthropic that’s sparked a thousand workflows.


Follow the show:


→ Substack: substack.com/@aboutclaudeai


Got a tip, story, or observation? Reach out on X or reply to the newsletter.


If you're finding the show useful, a rating or review helps others find it too.
