{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/664486383ffb3a00121efba6/69f9eb2fa6ade25592f56b11?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Mythos: The AI Model Too Dangerous to Release","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/664486383ffb3a00121efba6/1777974312648-a551c698-14ab-45dc-b0f1-cec9c1f4b206.jpeg?height=200","description":"<p>In this episode of AI Right, Kris, Megan and Andy dig into Anthropic's newly announced frontier model, Mythos, a model the company says is too dangerous to release publicly because of its ability to find software vulnerabilities at scale. Anthropic has rolled it out to just 12 companies, such as Amazon, Microsoft, etc., under a private preview called Project Glasswing.</p><p><br></p><p>Is this a genuine safety concern or a clever PR play in the run-up to Anthropic's rumoured IPO? Megan draws the parallel to OpenAI's GPT-2 \"too dangerous to release\" moment and asks what's really changed. Andy walks through Anthropic's system card, the claims around Mythos's near-infinite context window and recursive self-correction, and why the scaling hypothesis means open source will keep catching up regardless.</p><p><br></p><p>From there, the conversation turns to what this actually means in practice. The team works through how the cyber offence-and-defence cycle times start to collapse when an AI is the one hunting vulnerabilities, and what happens when subtle flaws can be quietly injected into the open-source packages most companies depend on. Pinning to older versions feels like a safe move until you realise those older versions might be just as exposed to a problem with uncomfortable echoes of log4j. Megan offers a clean mental model along the way, framing AI development as a recipe with three ingredients (architecture, compute and data) that no single lab fully controls, while Andy shares why his own views on AI safety have shifted in recent weeks.</p><p><br></p>","author_name":"Kristopher McFadyen"}