
AI News & Strategy Daily with Nate B. Jones
You Don't Need SaaS. The $0.10 System That Replaced My AI Workflow (45 Min No-Code Build)
What's really happening when Claude's memory doesn't know what you told ChatGPT and your phone app doesn't share context with your coding agent? The common story is that AI memory is getting better—but the reality is more interesting when every platform has built a walled garden designed to create lock-in.
In this video, I share the inside scoop on why the architecture of agent-readable memory matters more than any individual tool:
• Why your Notion workspace is beautiful for humans and useless for agents that search by meaning
• How a Postgres database with vector embeddings runs for 10-30 cents a month
• What MCP servers enable when one brain connects to every AI you touch
• Where the compounding advantage lives for people who stop re-explaining themselves
For anyone watching the agent revolution go mainstream, the gap between starting from zero and starting with six months of accumulated context is the career gap of this decade.
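If you want a sense of how small this system really is before watching, here is a minimal sketch of the memory layer described above, assuming Postgres with the pgvector extension, the psycopg2 driver, and an OpenAI-style embedding call; the table name, embedding model, and sample note are illustrative rather than the exact build from the video:

```python
# Rough sketch of a shared, agent-readable memory store (not the exact build
# from the video). Assumes: a Postgres instance with the pgvector extension,
# psycopg2 installed, and OPENAI_API_KEY set for the embedding call.
import psycopg2
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    """Turn text into a vector so agents can search by meaning, not keywords."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def to_pgvector(vec: list[float]) -> str:
    """Serialize a Python list into pgvector's '[x,y,z]' text format."""
    return "[" + ",".join(str(x) for x in vec) + "]"

conn = psycopg2.connect("dbname=memory")  # a tiny managed Postgres works fine
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS memories (
            id        bigserial PRIMARY KEY,
            content   text NOT NULL,
            embedding vector(1536)
        );
    """)

    # Write a memory once...
    note = "Prefers concise answers; currently building a Q3 churn analysis."
    cur.execute(
        "INSERT INTO memories (content, embedding) VALUES (%s, %s::vector);",
        (note, to_pgvector(embed(note))),
    )

    # ...and let any agent retrieve it later by semantic similarity.
    cur.execute(
        "SELECT content FROM memories ORDER BY embedding <=> %s::vector LIMIT 5;",
        (to_pgvector(embed("what is the user working on?")),),
    )
    for (content,) in cur.fetchall():
        print(content)
```

Wrapping that similarity query in an MCP server tool is the piece that lets any MCP-aware agent read from the same store instead of keeping its own memory, which is where the compounding described above comes from.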
Subscribe for daily AI strategy and news.
For playbooks and analysis: https://natesnewsletter.substack.com/
© Nate B. Jones 2026
More episodes

Claude Blackmailed Its Developers. Here's Why the System Hasn't Collapsed Yet.
32:24
What's really happening with AI safety in 2026? The common story is that the safety system is collapsing — but the reality is more complicated.
In this video, I share the inside scoop on why the AI risk picture is both worse and more resilient than the headlines suggest:
- Why frontier AI agents scheme even after anti-scheming training
- How competitive dynamics create emergent safety properties no lab planned
- What "intent engineering" is and why it beats prompt engineering for AI agents
- Where the real vulnerability lives — and why it's you, not the models
The risks from large language models and autonomous AI agents are accelerating, but so are the structural forces holding the system together — and closing the gap between what you tell an agent and what you actually mean is the most leveraged safety skill you can build right now.
Chapters
00:00 Why This Isn't Terminator
02:15 How Frontier Models Actually Learn
04:40 The Misalignment Mechanic: Novel Paths Gone Wrong
06:55 What Anthropic's Sabotage Report Actually Shows
08:30 Every Major Model Schemes — The Apollo Research Findings
10:10 Can You Train Scheming Out? The Anti-Scheming Paradox
12:45 The Race Dynamic and Why Labs Keep Cutting Corners
15:20 Four Emergent Safety Properties Nobody Planned
20:05 The Consciousness Framing Is Hurting Us
23:30 Intent Engineering: The Fix That's Up to You
28:10 Three Questions That Change Everything
30:45 Where We Stand in 2026
Subscribe for daily AI strategy and news.
For deeper playbooks and analysis: https://natesnewsletter.substack.com/
45 People, $200M Revenue. The Question Nobody's Asking About AI and Your Team Size.
25:44
What's really happening with AI and team size in your organization? The common story is that AI makes teams more productive so you can cut headcount — but the reality is more complicated.
In this video, I share the inside scoop on why the five-person strike team is the structural unit of the AI era:
- Why AI raised coordination costs by the same order as output
- How scouts and strike teams map to different AI-era missions
- What correctness-first thinking means for how you hire and build
- Where the real opportunity is — expanding ambition, not shrinking headcount
AI agents and LLMs didn't break your meetings problem — they amplified a team size problem you already had, and the leaders who restructure around small, high-judgment teams will build the defining companies of this decade.
Chapters
00:00 Your Meetings Problem Is Actually a Team Size Problem
02:10 The Math of Communication Pathways
04:15 Dunbar's Number and Why the Military Cracked This First
06:00 What AI Actually Changed About Team Size
08:20 Why Volume Is Free and Correctness Is Scarce
10:45 The Harvard Study That Proves the Point
12:30 Scouts: The One-Person AI Strike Force
15:00 Peter Steinberger and the Solo Agent Model
17:10 Strike Teams: Why Five Is the Magic Number
20:00 The Ambition Failure Nobody Talks About
23:15 How to Compose Many Strike Teams Into One Org
25:40 The AI Slop Tax and the True Cost of a Weak Link
28:00 How to Test Who's Ready for the Strike Team Model
30:20 The Shopify Mandate and What Tobi Lütke Got Right
33:00 Restructure for Ambition, Not Efficiency
Subscribe for daily AI strategy and news.
For deeper playbooks and analysis: https://natesnewsletter.substack.com/
GPT-5.4 Let Mickey Mouse Into a Production Database. Nobody Noticed. (What This Means For Your Work)
29:34
What's really happening when OpenAI engineers accidentally leak ChatGPT 5.4's existence but the model isn't even the interesting part? The common story is about the next capability jump—but the reality is more interesting when the company that first makes trillion-token organizational context genuinely usable becomes the new enterprise data platform.
In this video, I share the inside scoop on why the four-part compound bet determines whether this justifies an $840 billion valuation:
• Why intelligence and context are multiplicative—and weak reasoning with long context is actively harmful
• How retrieval at enterprise scale breaks RAG in ways nobody's benchmarking
• What memory that doesn't rot requires when organizational knowledge continuously evolves
• Where Anthropic's organic context accumulation through Claude Code might beat OpenAI's infrastructure play
For builders watching the enterprise stack get restructured, the lock-in from synthesized understanding is deeper than anything enterprise software has ever seen.
Chapters
00:00 The Most Expensive Bet in History Is an AI Bet
02:45 The Current SaaS Stack as a Filing Cabinet
05:30 What the Stateful Runtime Environment Becomes
08:00 The Four Compound Bets That Must All Work
10:30 Bet One: Intelligence and Context Are Multiplicative
13:00 Bet Two: Memory That Doesn't Rot
16:00 Bet Three: The Retrieval Problem Nobody's Talking About
19:30 Bet Four: Execution at the Speed of Trust
22:00 The New System of Record for Organizational Understanding
25:00 The Flywheel: How Context Compounds Month Over Month
28:00 Comprehension Lock-In: Deeper Than Data Lock-In
30:30 Anthropic's Organic Flywheel Through Claude Code
34:00 Three Questions to Ask From Your Chair
Subscribe for daily AI strategy and news.
For deeper playbooks and analysis: https://natesnewsletter.substack.com/
Claude Code vs Codex: The Decision That Compounds Every Week You Delay
29:54
What's really happening inside AI coding tools that nobody's comparing? The common story is that Claude vs. ChatGPT is a model competition. But the model is the least important part.
In this video, I share the inside scoop on why the AI harness matters more than the model:
- Why the same Claude model scored 78% vs. 42% on identical benchmarks
- How Claude Code and Codex embody opposite philosophies of AI collaboration
- What harness lock-in actually costs teams who switch tools later
- Where non-technical leaders are making the wrong procurement decisions
The teams getting this right are choosing the architecture that matches how they work, and that decision compounds every quarter.
Chapters
00:00 The harness vs. the model — what everyone gets wrong
01:45 Why nobody compares AI harnesses
03:20 Same model, double the performance: the benchmark that proves it
04:50 How Anthropic built Claude Code's harness
07:10 How OpenAI built Codex's harness
09:30 Five ways the harnesses are diverging
13:45 State and memory: where institutional knowledge lives
16:20 Context management and tool integration
19:00 Multi-agent coordination: collaboration vs. isolation
21:30 Harness lock-in: the cost nobody is pricing in
24:00 What this means for engineers and engineering leaders
26:30 Why non-technical leaders need to understand this now
Subscribe for daily AI strategy and news.
Full Story w/ Prompts: https://natesnewsletter.substack.com/p/same-model-78-vs-42-the-harness-made
For deeper playbooks and analysis: https://natesnewsletter.substack.com/
My site: https://natebjones.com
Everyone You Know Is About to Try Claude (I Showed 3 People for 5 Minutes — All 3 Switched)
20:55
What's really happening when millions of new users download Claude expecting a ChatGPT replacement and wonder why the spreadsheet features are missing? The common story is that AI models are interchangeable brands—but the reality is more interesting when constitutional AI produces measurably different behavior than reinforcement learning with human feedback.
In this video, I share the inside scoop on why switching to Claude with the same habits misses the point:
• Why Claude is more likely to tell you your plan has a hole in it
• How describing your situation instead of your desired output changes everything
• What extended thinking reveals about steering the chain of thought in real time
• Where Cowork reframes the category from conversation partner to desktop worker
For anyone teaching a friend about Claude or learning it yourself, these differences shape how you think about AI over time—and that compounds.
Subscribe for daily AI strategy and news.
For playbooks and analysis: https://natesnewsletter.substack.com/p/millions-just-switched-to-claude
© Nate B. Jones 2026
Dario Amodei Made One Mistake. Sam Altman Got $110 Billion. Here's the Full Story.
26:19
What's really happening when Anthropic gets designated a supply chain risk hours after OpenAI signs a Pentagon deal and the largest private funding round in history? The common story is about principles versus pragmatism—but the reality is more interesting when Claude was too embedded in combat operations to rip out even after a presidential order.
In this video, I share the inside scoop on why Dario misread the room while Sam walked away with the keys to the kingdom:
• Why Anthropic's objection was technical, not moral—and contingent on model reliability
• How OpenAI's $110 billion round equals 65% of all US venture capital in 2023
• What the circular financing structure reveals about who's picking winners
• Where enterprise contracts will be won or lost as government revenue becomes the gold standard
For builders watching cloud providers play every side of the board, the question is whether you're okay with a one-model winner world or fighting for a multi-model future.
Subscribe for daily AI strategy and news.
For playbooks and analysis: https://natesnewsletter.substack.com/p/openai-raised-110b-and-the-pentagon
© Nate B. Jones 2026
OpenAI Is Slowing Hiring. Anthropic's Engineers Stopped Writing Code. Here's Why You Should Care.
23:55
What's really happening with AI coding tools after December's convergence? The common story is that better models mean incremental improvement—but the reality is more complicated when the CEO of OpenAI admits he still hasn't changed how he works.
In this video, I share the inside scoop on why a capability overhang is widening between what AI can do and what most people are doing with it:
• Why three frontier model releases in six days created a phase transition
• How a simple bash loop called Ralph outperformed elaborate agent frameworks
• What Claude Code's task system means for parallel autonomous work
• Where the real skill shift lands: from implementation to specification and review
For builders and operators navigating 2026, the temporary arbitrage is real. Those who close the overhang first gain a massive edge that compounds daily.
Subscribe for daily AI strategy and news.
For playbooks and analysis: https://natesnewsletter.substack.com/
© Nate B. Jones 2026
I Built an 11-Tab Financial Model in 10 Minutes. The $20/Month Tool That's About to Change How We Work.
21:07
What's really happening with AI and spreadsheets? The common story is that foundation models competing on benchmarks is the main event, but the reality is more complicated when the real battleground is the 40-year-old software where actual decisions get made.
In this video, I share the inside scoop on how Claude in Excel changes what knowledge work actually means:
• Why Anthropic embedded Opus 4.5 directly inside Microsoft Excel and what that signals about where the model race is heading
• How data partnerships with Moody's, S&P, and FactSet create moats that benchmarks simply cannot measure
• What Norway's sovereign wealth fund learned from 213,000 hours saved and why that number tells a different story than any capability demo
• Where the model race ends and workflow integration begins as the strategic question shifts from who trains the best model to who controls the workflows where real decisions happen
For operators and builders navigating 2026, the competitive advantage is no longer a better model. It's the workflow nobody is willing to rip out and replace.
Subscribe for daily AI strategy and news.
For playbooks and analysis: https://natesnewsletter.substack.com/
© Nate B. Jones 2026