Episode 125: James Koppel discusses counterfactual inference and automated explanation

Ep. 125

Episode link here.

In this episode, James Koppel (MIT, James Koppel Coaching) joins me and Dominick Reo to talk about how we can write software to help identify the causes of disasters.

These days, there's often a tendency to think of software primarily as a venue for frivolous pleasures. Maybe there's a new app that's really good at hooking me up with videos of alpacas on skateboards, or making my mom look like a hot dog when she's video chatting with me, or helping me decide what flavor of cupcake I want delivered to my home—because gosh, I am just way too stressed right now to be able to figure that out. Have you seen how few retweets I'm getting? If we followed the lead of a lot of the popular rhetoric about the software industry, we might very well come away with the impression that tech exists solely to facilitate precious, self-involved time wasting. And if that's right, then if it doesn't work from time to time, who really cares?

But in fact, software correctness is frequently a life or death matter. Computer software controls our medical life support systems, it manages our health care records, it navigates our airplanes, and it keeps track of our bank account balances. If the author of the software used in any of those systems messes something up, it can and often will lead to planes crashing into mountains, or life support systems malfunctioning without warning, or some other tragedy.

James Koppel is here to tell us that software can do better. It can be designed ‘preventatively’ to avoid large classes of bugs in advance, and there are diagnostic techniques that can help pinpoint those bugs that cannot be ruled out in advance. In this episode, Koppel discusses some work he started in 2015 as a follow-up to Stanford's Cooperative Bug Isolation project, which provided a way to gather detailed diagnostics about the conditions under which programs fail or crash. But the problem he kept running into was that the diagnostic information was too much correlation and not enough causation. If the analysis tells you that your app crashes whenever it tries to load a large image, that's a start, but it doesn't tell you what it is about the large image that causes the crash, or what other kinds of large images would also cause a crash, or whether the crash is even a result of largeness or something more specific. Correlation information is a great start, but ultimately, it's of limited use when it comes to directly fixing the problem.
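To make the correlation-vs.-causation point concrete, here is a minimal sketch of correlation-style bug isolation, in the spirit of (but in no way reproducing) the Cooperative Bug Isolation approach. The telemetry data, the predicate, and the scoring function are all hypothetical, invented for illustration: we score a predicate like ‘the image is large’ by how much observing it raises the probability of a crash.

```python
# A toy sketch of correlation-based bug isolation (illustrative only;
# not the actual CBI implementation).

def failure_correlation(runs, predicate):
    """runs: list of (inputs, crashed) pairs; predicate: function on inputs.

    Returns how much observing the predicate raises the chance of a crash,
    i.e. P(crash | predicate) - P(crash).
    """
    crashes = sum(1 for _, crashed in runs if crashed)
    p_crash = crashes / len(runs)
    observed = [(inputs, crashed) for inputs, crashed in runs if predicate(inputs)]
    if not observed:
        return 0.0
    p_crash_given_pred = sum(1 for _, crashed in observed if crashed) / len(observed)
    return p_crash_given_pred - p_crash

# Hypothetical telemetry: image sizes in MB and whether the app crashed.
runs = [({"size": 9}, True), ({"size": 8}, True),
        ({"size": 1}, False), ({"size": 2}, False), ({"size": 7}, False)]

score = failure_correlation(runs, lambda i: i["size"] > 5)
```

A positive score tells you that large images are statistically associated with crashes, which is exactly the kind of diagnostic signal described above. But the same data would also give a positive score to any other predicate that happens to co-occur with crashing, which is why the score alone can't tell you what about the image is actually responsible.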

To deal with this, in his more recent work, Koppel and his colleagues have turned to the analysis of counterfactuals and causation, which is an interesting point of collaboration between philosophers and computer scientists. Using a recent paradigm called probabilistic programming, they have identified a way to have a computer program run the clock back and simulate what would have happened, had some condition been different, to determine whether that condition is the cause of a bug. The project is still in its initial stages, but if it works, it promises to deliver major dividends in making the technology we rely on more reliable.
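The counterfactual idea can be sketched in a few lines. This is a hypothetical toy, not the probabilistic-programming machinery Koppel's project uses: it assumes a deterministic, re-runnable program, intervenes on one input, replays, and checks whether the failure disappears. (Real counterfactual inference also has to hold the program's random choices fixed across the replay, which this sketch ignores.)

```python
# A toy sketch of counterfactual cause-testing (illustrative assumptions
# throughout: the buggy loader, the inputs, and the helper names are invented).

def crashes(image):
    # Hypothetical buggy loader: fails on images wider than 4096 px,
    # regardless of file size.
    return image["width"] > 4096

def is_cause(program, observed_input, intervention):
    """The condition edited by `intervention` counts as a cause of the
    failure if the failure occurs on the observed input but vanishes
    under the counterfactual input."""
    counterfactual_input = intervention(dict(observed_input))
    return program(observed_input) and not program(counterfactual_input)

observed = {"width": 8192, "megabytes": 50}

# Shrinking the file alone does not avert the crash...
cause_size = is_cause(crashes, observed, lambda i: {**i, "megabytes": 1})
# ...but shrinking the width does: width, not 'largeness', is the cause.
cause_width = is_cause(crashes, observed, lambda i: {**i, "width": 1024})
```

The payoff is the distinction the correlation-based approach couldn't make: here the counterfactual test separates image width from file size, even though both are aspects of the image being ‘large’.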

Tune in to hear more about this exciting new area of research!

Matt Teichman

More Episodes


Episode 132: Rebecca Valentine discusses queer hackerspaces

Ep. 132
This month, we sit down with Rebecca Valentine (co-founder of Queerious Labs) to talk about anarchism, feminism, tech culture, and creative hacking. Hack this, hack that. What is a hacker, anyway? In pop culture, it’s common to use the term ‘hacker’ as a synonym for ‘cybercriminal’—that is, a person who engages in illegal activity over a computer network, usually involving gaining access to something they shouldn’t. But if you’ve ever spent any time in the tech community, you’ll know that there, the term is used in a very different way. It’s complicated to define precisely, but generally, ‘hacking’ involves taking apart a ready-made product in an exploratory way, whether to understand how it works, or to put it back together in a different, more customized way.

We live in a world of mass-produced artifacts, each of which is manufactured in bulk to serve a specific purpose. But despite that fact, we are all individual people, many of whom want different things out of their artifacts. For example, maybe I have a car and want to give it my own paint job that it wouldn’t have gotten in the factory. Or maybe I have a handbag and would like to embroider a cool pattern on it. Those are simple examples, but our guest stresses that hacking often involves going further and subverting the original intentions behind the thing being hacked. For instance, there are people who have managed to get Alexa and Siri to talk to one another, each device responding in speech the way it would respond to a person. Neither was designed to talk to another device in English—rather, each was designed to provide a voice interface to a single human owner. The result can be pretty bizarre and interesting to listen to!

In this episode, Valentine discusses why she founded Queerious Labs, a public nonprofit whose purpose is to encourage these sorts of tinker-y explorations.
Most other spaces of this kind tend to be dominated by men, especially straight cisgender men, and often that can have the effect of alienating people who aren’t men, or who aren’t straight, or who aren’t cisgender. In addition, Queerious Labs is intended to be a friendly environment for people with socialist, anarchist, and feminist backgrounds. In the course of laying out how all those things hang together, we have the chance to dig into a wide range of topics, including political power, bottom-up vs. top-down organizational structures, mass culture, the patriarchy, natural language, theory vs. anti-theory, and how gender roles are in flux across time and history.

Episode 131: Greg Salmieri discusses egoism and altruism

Ep. 131
This month, Greg Salmieri (University of Texas at Austin) returns for his third appearance on Elucidations, this time to talk about doing right by yourself.

What was the last thing you did? The last thing I did was pull a shot of espresso. I wouldn’t say I made coffee as an end in itself, even though I love the taste of the roast I just used. If I had to tell you the main reason I made a coffee, it was in order to speed along my transformation from groggy podcast host to awake podcast host. But why do that? Hmm. I guess I wanted to wake up so that I could start writing this blog post, pay a couple bills, and put together a cool new IKEA lamp? But why pay a couple bills or put together a new IKEA lamp? So that I can continue to live in my apartment, be able to see things in it, and so on, maybe? Plato and Aristotle were interested in these ‘but what are you doing XYZ in order to accomplish?’ type questions, and they had the idea that if you keep re-asking the question every time you come up with an answer, eventually you’ll get to something that is the ultimate reason you’re doing everything for. Once you get there, there won’t be any further justification for anything you do.

‘Ethical egoism’ is a nickname that philosophers give to the idea that being a good person means that everything you do, ultimately, at the end of the day, you do in order to benefit yourself.

Note that there’s already a lot of subtlety in this idea as we’ve defined it. For example, if you’re deceived about what’s good for you, and the thing you think is good for you is actually bad for you, then if you do everything you do in order to bring that about, you don’t count as a good person. Maybe I think that fame will be great for me, because of all the money, power, and attention that comes with it.
But in a few years, once I actually become world famous, I realize it’s actually pretty miserable to be hounded by paparazzi, speculated about in the tabloids, and subjected to intense scrutiny every time I make a comment about anything. Once that happens, I might decide the whole get famous plan was misbegotten, longing for the days before I was a celebrity. So one point of subtlety is that what’s good or bad for a person can be complicated to determine—there are lots of cases where you can make a mistake about what’s really good for you.

A second point of subtlety is that how your everyday behavior corresponds to what you’re ultimately doing everything for can be complex. Maybe you’ve adopted a monkish lifestyle, sacrificing the day to day comforts we take for granted so that you can help as many other people as possible, volunteering, donating to charities, and so forth. An ethical egoist would say that if you’re ultimately doing all those things because of the deep, persistent, long-term satisfaction it brings you—because of how it enriches your life to the fullest possible extent—then that counts as being a good person. So it’s not as if commonly held stereotypes about selfishness necessarily line up with what ethical egoists recommend.

Due to those two factors, there’s a lot of wiggle room in what concrete behaviors can count as acting in your self-interest, and different behaviors are going to count as self-interested for different people, because different people often have fundamentally different needs and abilities. And I would say that’s what makes it especially interesting to think about whether ethical egoists have it right.

Join us this month as our esteemed guest defends the viability of ethical egoism!

Episode 130: Jessica Tizzard discusses weakness of the will

Ep. 130
This month, Long Dang and I sit down to talk to Jessica Tizzard (University of Connecticut, Storrs) about weakness of the will.

You’re at a party hosted by a close friend. It’s been three hours since you got there, and the evening thus far has been chock full of scintillating conversation, a fun round of Charades followed by Assassins, first rate cocktails, and a dessert to die for. You’ve just now been invited to play one of your favorite games, which usually takes about 90 minutes to complete—when out of nowhere, the onset of a yawn yanks you back into reality. Suddenly, you remember you’d promised yourself that you weren’t going to stay out late, because you’ve got to get up early tomorrow for an important meeting. You realize that now is the time to go home and get a good night’s sleep. And yet, the allure of the game pulls you in. Against your better judgment, you play the game deep into the night, future consequences be damned.

Since the time of the ancient Greeks, some of the sharpest thinkers in philosophy have tried to figure out what is happening in that scenario. Obviously, we frequently decide that X is the best course of action, and yet our willpower falters and we decide to do Y, even though we know full well that doing Y is counterproductive or self-destructive. But why? In what world does that make any logical sense? Surely, if you decided that X was the thing to do, the natural next move is to do X. Not do the thing you convinced yourself was going to be bad for you. Right?

The trouble is that every obvious answer to this puzzle feels unsatisfactory. You could be like: well, if I did Y, then I must have really decided Y was best. But if that’s the case, why do you feel so terrible when you do it? Why do you feel guilty staying at the party until deep into the night, if you’ve supposedly decided that staying at the party is for the best? Taking that stance is effectively saying: no one ever has a crisis of willpower.
Whenever you do anything, that is definitive proof that you believed it was the best possible thing to do. But insisting that everyone always has the willpower to do everything they think they should just seems to fly in the face of what we know about the human experience.

Another option might be to say: well, ok, I did decide that X was the best thing to do, but when the moment to suck it up and actually do X came, I was overcome with desire. The feeling of pleasure at the prospect of partying hard swept over me and signal jammed my rational faculty, blocking me from doing what I knew I should. So I stayed, and had to suffer the consequences the next morning. But then that feels unsatisfactory as well, because if I really was overcome by the pleasure instinct, blocked from doing what I thought I should do, then what I did was really involuntary. Like a muscle spasm. Or a brain tumor that made me do it. That just seems wrong: clearly, in these types of situations, I actively chose to e.g. stay at the party and suffer the consequences. Staying at the party didn’t just happen to me, like a headache.

Jessica Tizzard thinks that the 18th century philosopher Immanuel Kant offered an interesting and novel way to understand what’s going on in these moments when you’re weak-willed. Step one in his approach is to take cases like the one described above and assimilate them all to what is often thought of as a different situation: the moral dilemma. A moral dilemma, as standardly construed, is a situation where you really can’t decide which of several options is the best to take. The idea here is that what look like situations where you knew you should do X but instead did Y are often, upon closer examination, really situations where you genuinely couldn’t tell which of those two things you should do.
Sometimes, perhaps, when I thought I was having a crisis of willpower, I was in fact just torn and couldn’t decide.

Number two in Immanuel Kant’s bag of tricks is to accept a version of the ‘I wanted to go home, but the desire to stay swept over me and made me stay at the party’ explanation, with one key difference: namely, he has a different take on what a desire is. Maybe a desire isn’t some physical pleasure sensation seizing control of your body like a puppet and forcing you to do something other than what you really want to do. Maybe a desire is really more like another set of factors to consider in your reasoning—it may come with a feeling, and present itself to you with a certain urgency, but really what it is is a set of reasons that you’re weighing up like any other. Understanding desire along those lines puts Kant in a nice position to say that lacking the willpower to do what you think is right is actually just a case of being racked by indecision.

Tune in to hear Jessica Tizzard lay out the Kantian story about what happens when we act against our better judgment!

Matt Teichman