{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/68357ec21b846c88bdcd7480/68b044477620a68ee6a8be52?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"119: Polished incoherence and other marvels of modernity","description":"<p>Glittery bags of words, scatterbrained tutors, or random concept triggerers? </p><p><br></p><p>In this one we feel our way through the murky reality of AI tools—reaching our tentacles beyond all the silt that's been stirred up in the hype and panic. We think we've found some interesting nooks and crannies.</p><p><br></p><p>We kick off with yet another \"oops, used AI without checking\" message that we received, then we share thoughts triggered by our own experiments with LLM-powered ritual dissent (as mentioned in the previous podcast&nbsp;–&nbsp;email <a href=\"mailto:tentacles@crownandreach.com\" rel=\"noopener noreferrer\" target=\"_blank\">tentacles@crownandreach.com</a> if you'd like a copy of the prompt). </p><p><br></p><p>Then we explore where tools like LLMs <em>could</em> be genuinely helpful versus where they're simply expensive confusion generators, with reference to some interesting experiments we've seen on our travels.</p><p><br></p><ul><li>LLMs as tutors are effective at the extremes: when you're an expert OR a complete beginner, not somewhere in the middle</li><li>The \"random number generator\" theory of LLMs as a trigger for concepts, ideas and processes you already know</li><li>Potential for designing LLM interactions that don't dumb you down</li><li>Why high-fidelity outputs are no longer a good proxy for high-quality thinking – the decades-long descent into polished incoherence</li><li>Bag of words theory: LLMs, by their nature, can't generate coherence, only fluency</li><li>Real examples of where AI can save time (e.g. risk assessment templates) vs. where it fails (e.g. 
original strategy or thinking)</li><li>How to avoid the \"vibe-coded prototype\" trap in both design and thinking (and possibly why most people still won't, even though it's technically easier than ever).</li></ul><p><br></p><p>References</p><p><br></p><ul><li>Gerald Weinberg's classic \"Secrets of Consulting\" <a href=\"https://archive.org/details/secretsofconsult0000wein\" rel=\"noopener noreferrer\" target=\"_blank\">https://archive.org/details/secretsofconsult0000wein</a></li><li>Hazel Weakly's excellent piece on AI <a href=\"https://hazelweakly.me/blog/stop-building-ai-tools-backwards/\" rel=\"noopener noreferrer\" target=\"_blank\">https://hazelweakly.me/blog/stop-building-ai-tools-backwards/</a></li><li>Vaughn Tan's paper prototype that scaffolds critical thinking with LLMs <a href=\"https://vaughntan.org/aiux\" rel=\"noopener noreferrer\" target=\"_blank\">https://vaughntan.org/aiux</a></li><li>Ed Zitron's Where's Your Ed At – the firebrand pointing out the nakedness of the emperor <a href=\"https://www.wheresyoured.at\" rel=\"noopener noreferrer\" target=\"_blank\">https://www.wheresyoured.at</a></li><li>Pavel Samsonov's solid critique <a href=\"https://productpicnic.beehiiv.com/p/human-in-the-loop-is-a-thought-terminating-cliche\" rel=\"noopener noreferrer\" target=\"_blank\">https://productpicnic.beehiiv.com/p/human-in-the-loop-is-a-thought-terminating-cliche</a></li><li>Philip Morgan – we 
couldn't find where he wrote about aspects of risk capacity, but he's here: <a href=\"https://philipmorganconsulting.com/\" rel=\"noopener noreferrer\" target=\"_blank\">https://philipmorganconsulting.com/</a></li><li>Dave Snowden's Ritual Dissent <a href=\"https://cynefin.io/wiki/Ritual_dissent\" rel=\"noopener noreferrer\" target=\"_blank\">https://cynefin.io/wiki/Ritual_dissent</a></li><li>Our method Multiverse Mapping <a href=\"https://multiversemapping.com\" rel=\"noopener noreferrer\" target=\"_blank\">https://multiversemapping.com</a></li><li>Our method Pitch Provocations (old episodes 007-009 for a rough intro) <a href=\"https://shows.acast.com/triggerstrategy\" rel=\"noopener noreferrer\" target=\"_blank\">https://shows.acast.com/triggerstrategy</a></li><li>Class action lawsuit against Anthropic re: training data <a href=\"https://www.lieffcabraser.com/anthropic-author-contact/\" rel=\"noopener noreferrer\" target=\"_blank\">https://www.lieffcabraser.com/anthropic-author-contact/</a></li></ul>","author_name":"Tom Kerwin"}