Trigger Strategy
075: Effectual thinking vs causal thinking
We recorded this one on a whim and we didn't have our microphone with a little hat on it, so the wind noise makes a guest appearance. Apologies – we'll return to quality sound soon.
Corissa grabbed a snippet from an article, and this inspired us to talk through effectual thinking. We go on a blustery journey through chefs in high-end experimental kitchens, John Boyd's Snowmobiling, MrBeast, Steve Jobs, the Estuarine Framework and Small Bets.
The big question: can effectual thinking give you a happier, healthier way to operate, or is it the case that, as Andrew Wilkinson put it, most highly successful people are "just a walking anxiety disorder, harnessed for productivity"?
Linky goodness:
Sasha Chapin's article: https://sashachapin.substack.com/p/our-perfume-line-is-here
Cedric Chin's Common Cog: https://commoncog.com/when-action-beats-prediction/
Vaughn Tan's Uncertainty Mindset: https://uncertaintymindset.org/
Snowmobiling podcast episode: https://shows.acast.com/triggerstrategy/episodes/072-granularity-part-2-snowmobiling
Do 100 Thing podcast episode: https://shows.acast.com/triggerstrategy/episodes/043-do-100-thing
Innovation Tactics: https://bit.ly/innovation-10
Small Bets: https://smallbets.com/
More episodes
084: Isn't the SenseMaker collector negatively biased tho?
27:51 | Season 1, Ep. 84
Surveys are almost always biased in several ways, notably in how questions are asked, but also through sample bias: who in the population even answers surveys?
In this episode we discuss: is the SenseMaker collector we shared biased just like any other survey? And if so, is that a problem? And if so, what can we do about it?
Plus stories about skullduggery in presenting data, hiding gorillas in radiologists' scans, and the "magic" of standard questions:
What's similar, different and surprising?
What, so what, now what?
Linky goodness:
Don't send that survey! Here's what to do instead.
Complex facilitation principles and the standard questions

083: Unfolding ideas over ideating features
40:32 | Season 1, Ep. 83
It's a rain-soaked chat this time as Tom and Corissa wander through Bournemouth in a downpour.
We tackle a thought-provoking LinkedIn question from WP Engine's Jason Cohen – a question about how to listen to customers when they ask for features.
00:29 LinkedIn inspiration and the big question we're tackling today
02:28 Customer feedback creates an apparent puzzle
03:40 Mistakes we've made by asking people what they want
05:14 Secret 1: what do people already do?
07:37 Secret 2: imagine your company is a big metal box
10:50 You're always limited by your own internal perspective, and that's OK
16:51 Secret 3: there's no such thing as a feature
19:48 The story in your customer's head is different from the story in your head
20:18 Don't make things look simpler than they are
20:48 "Feature" is just a label to make your own life easier
21:41 Secret 4: build as little software as possible to enable the most behaviours that create value
23:32 When customers are reduced to a metric
24:18 Why an Impact/Effort Matrix to decide on features will fool you
27:32 Real-world example: a Calendly integration project
33:33 Unfolding ideas by soaking in rich customer context
36:25 SenseMaker for generating insights in a very different way
38:30 When you try to make too much explicit, you get in trouble
Jason's original post:
"Ask a customer if they'll use a feature… They say 'yes' but don't use it. Ask them to name a feature they actually want and there's the 'faster horse' problem of incremental improvement instead of vision. What's the answer? Just 'gut feel' and sometimes you're right?"

082: 2D Comparison
22:17 | Season 1, Ep. 82
Jamie asked: "Anyone got good exercises for evolving your brand (and in particular visual identity) in-house? Did I remember you (Tom & Corissa) mentioning an exercise like clustering examples into 'we want to be more like this' vs 'we want to be less like that'?"
So we wanted to give the exercise we designed its own special episode.
Time and again, we saw projects get in a pickle when people tried to choose adjectives to define things like brand qualities, tone of voice, product principles or corporate values.
This kind of ambiguous, subjective stuff is impossible to define perfectly with words, especially upfront.
You could choose to work with a grizzled expert who can read between the lines of what you're saying to intuit what you really want.
But if you're on a shoestring and want to figure out this kind of thing with your team, then the exercise we share in this episode is for you.
Here are simplified instructions on a card: https://www.dropbox.com/scl/fi/2rjpbrj1vqklcfc8glgw8/Sense-2D-Comparison-Back.png?rlkey=71v3muppoho2pnac2v9b9luim&dl=0

081: Alignment alignment alignment
25:53 | Season 1, Ep. 81
We talk about alignment. Especially, we talk about relaxing our beef with the word alignment, and embracing the reasonable desire for alignment.
00:00 Welcome!
00:28 Alignment in companies
00:49 Challenges and misconceptions about alignment
04:07 Coherence vs. alignment; JP Castlin's ABCDE framework, and one line in the sand vs two lines in the sand
08:27 A real-world example of a misaligned project
10:38 Strategies for effective alignment, including "via negativa" alignment
12:52 Aligning teams with reality as well as intent
13:25 The role of the "strategy whisperer"
13:47 Empowering teams to find alignment
13:58 Back briefing for effective communication
16:13 Understanding the need for leadership governance vs the needs of teams
17:30 Challenges with leadership expectations
19:49 Navigating company growth realities
20:37 Dropping our beef with alignment and going vegetarian
23:34 Are you clearly a berry? Clear communication taps the forager's gathering instinct
24:41 Exploring alignment beyond the team
25:42 Final thoughts

080: What the heck's goin' on in tech?
30:54 | Season 1, Ep. 80
The world of digital/tech is going through "a moment" just now, at the end of 2024.
And we've launched a project to share and explore diverse perspectives from across the tech world, using a particular tool and methodology called SenseMaker. The goal is to showcase the diverse range of perspectives and stories of the moment in a way that's normally impossible.
Some topics:
Why is Tom so excited about SenseMaker?
Who sees the gorilla?
Contrasting Likert scales versus triads and dyads
How standard "feedback surveys" are ruined by averaging and dominated by recency bias and the Halo Effect
Cynicism about the annual 360 feedback game
What if feedback could be descriptive instead of evaluative? And real-time instead of averaged over 6 months?
Beef with the "product trio" concept
A few nuggets we've picked up in the early data
Our plans for open sense-making workshops
Patterns of care and rule-following in healthcare
Vector change using "more stories like these, fewer like those"
Want to see the responses we've collected? Take 10 minutes to share your experience, and you'll be able to opt in to access all the responses at the end.
👉 https://bit.ly/stories-from-tech
Thank you for contributing ❤️
Linky goodness:
How to use a new generation data collection and analysis tool? https://thecynefin.co/how-to-use-data-collection-analysis-tool/

079: Speculative use cases
33:45 | Season 1, Ep. 79
We talk about a question posed in the Innovation Tactics Slack – about a stakeholder who's skeptical that design research can help with genuine innovation, and wants to create speculative use cases instead.
Topics we touch on:
Are speculative use cases a "thing"? Is it helpful to imagine people doing something that's just not happening today? Like, 500 years ago, nobody got their shoelace trapped in an escalator. In 2003, nobody was planning out how they'd price their product on the App Store.
Is it reasonable to be skeptical about design research?
What do you do when you're working with someone who's already decided what they want and isn't interested in evidence?
Radical repurposing as an alternative – follow the pathfinders
Snowmobiling as a possible approach – remix the adjacent possible
Jamming with your stakeholder to understand and clarify (with the side effect that you might expose gaps or incoherence)
Bias in research
Some quotes:
"Getting a shoelace trapped in an escalator – that's not a thing that happened 500 years ago."
"Just doing something because you think it's cool is totally valid as a way of operating a business."
"Everyone who has a brilliant idea thinks that their idea is the next big thing. And everyone but one in a million is wrong about that. And even the one in a million tends to be wrong about exactly how it's going to work."
"Play-Doh was invented, not as a toy for kids, but as a putty for removing coal soot from walls. It was repurposed into the kids' toy after people stopped having coal fires."
"You're very unlikely to invent something novel that works. You're very likely to find somebody doing something novel that you can scale."
"You can absolutely go and do the best interviewing in the world and not come back with anything that's going to be a breakthrough innovation for your company. It may be that your company is not positioned to make a breakthrough innovation."
"This is the trap that so many people fall into, and I've heard it more times than I can count. It's that need to educate the market. Do not, do not try, red flag, back away slowly or run, run speedily off into the distance."

078: Criticisms of selling before building
13:16 | Season 1, Ep. 78
In the last episode, we introduced Rob Snyder's framing of finding your repeatable case study instead of building your tech product.
This time, we step back into the Pain Cave to talk through some of the criticisms that Rob (and we) often face when we suggest the approach we do. We think they're misunderstandings of what we're advocating, but they're also sound points.
First, we consider the scolding that we should follow a proper research and design process and build the right thing at high quality from day one, not throw spaghetti at the wall. Sometimes this is true, but sometimes it's just not possible.
Second, we face the fear of selling "vapourware" – nobody wants to follow in Elizabeth Holmes' footsteps, promising stuff that can't be realised (Theranos). Absolutely right! But that's not at all what we're recommending.
And all this brings us to the concept of Bounded Applicability. No ideas are suitable for all projects, products, etc. So how can you think about what's appropriate in a given situation?
Linky goodness:
Bounded Applicability: https://shows.acast.com/triggerstrategy/episodes/663109cbcff31b0012ae9306
My diagram showing some methods' Bounded Applicability: https://www.notion.so/Pitch-Provocations-54ad05d5740e451db0fa82479debeb91
Previous episode about Rob Snyder: https://shows.acast.com/triggerstrategy/episodes/077-do-you-have-to-spend-years-in-the-pain-cave

077: Do you have to spend years in the Pain Cave?
26:54 | Season 1, Ep. 77
Welcome to listeners who've been referred by Rob Snyder of Path to Product Market Fit!
In this episode, we talk about Rob Snyder's core ideas for founders and consider the interplay with our thinking. As ever, you'll hear some stories from our pasts, some methods to try, and some background noises from blustery Bournemouth.
Why no, you can't break down your idea into a set of clean hypotheses to "validate"
Why you want to ship a case study instead of shipping code
Can you bypass the Pain Cave if you have a Time Machine?
How to spot founders who are going to drag you deep into the Pain Cave
How to use Pivot Triggers to scaffold doing the case study approach instead of writing all the code
Introducing "unfolding" as a way to design buildings, businesses, even lives
How to save face while taking the risk of looking silly (won't you get cast out from polite society?)
Is the optimisation game dying?
A puzzle: what do you do when you care about building a business you'll love working in more than you care about just building a business?
Do we need to go deeper into the Pain Cave?
Linky goodness:
Rob Snyder's Path to Product Market Fit
Innovation Tactics: https://bit.ly/innovation-10
Solve for Distribution: Front | Back
Time Machine: Front | Back
A great article that references Christopher Alexander's Unfolding

076: Surviving survivorship bias
25:45 | Season 1, Ep. 76
Survivorship bias is unavoidable. By default, we see what survives and not what doesn't. This is OK, but it creates the risk that we take the wrong lessons from the survivors.
In this episode, we talk about how we might mitigate the downsides of survivorship bias. We touch on a bunch of topics:
rejecting simplistic Sinekisms
theory-informed praxis, rather than copy-pasting patterns across contexts
challenges to Estuarine Mapping
zero-sum games
bounded applicability – asking when something doesn't apply, or who shouldn't use a thing
Double Diamonds
Shiny Frameworks
a portfolio of small bets in parallel – as a way to optimise for survival
And an invitation to you: what are we missing? How do you handle survivorship bias?
Linky goodness:
Bounded Applicability: https://shows.acast.com/triggerstrategy/episodes/663109cbcff31b0012ae9306
Trigger Strategy website: https://triggerstrategy.com/