{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/69ab3b7c7036d739021982df/69ab3b8ab49eecc0b7c4bb24?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Why AI-Native Companies Are Deleting Software You're Still Paying For (The $56K Lesson)","description":"<p class=\"text-node\">What's really happening when AI agents fail at long-running tasks? The common story is that smarter models solve agent failures, but the reality is more complicated: generalized agents behave like amnesiacs with tool belts, no matter how intelligent the underlying model is. In this video, I share the inside scoop on what Anthropic revealed about why agents actually work:</p><ul class=\"list-node\"><li class=\"list-item-node\"><p class=\"text-node\">Why generalized agents without domain memory spiral into chaotic loops instead of making durable progress</p></li><li class=\"list-item-node\"><p class=\"text-node\">How domain memory transforms agent behavior from reactive task-running to structured, compounding work</p></li><li class=\"list-item-node\"><p class=\"text-node\">What the initializer and coding agent pattern actually does when you implement it correctly</p></li><li class=\"list-item-node\"><p class=\"text-node\">Where the real moat lies: in harness design and testing loops, not in chasing the next model release</p></li></ul><p class=\"text-node\">For builders and operators navigating 2026, the competitive advantage is not a smarter AI. It's well-designed domain memory and the discipline to build testing loops that hold it accountable.</p><p class=\"text-node\">Subscribe for daily AI strategy and news.</p><p class=\"text-node\">For playbooks and analysis: <a target=\"_blank\" rel=\"noopener noreferrer\" class=\"link\" href=\"https://natesnewsletter.substack.com/\">https://natesnewsletter.substack.com/</a></p><p class=\"text-node\">© Nate B. Jones 2026</p>","author_name":"Nate B. Jones"}