Episode 130: Jessica Tizzard discusses weakness of the will


This month, Long Dang and I sit down to talk to Jessica Tizzard (University of Connecticut, Storrs) about weakness of the will.

You’re at a party hosted by a close friend. It’s been three hours since you got there, and the evening thus far has been chock-full of scintillating conversation, a fun round of Charades followed by Assassins, first-rate cocktails, and a dessert to die for. You’ve just now been invited to play one of your favorite games, which usually takes about 90 minutes to complete—when out of nowhere, the onset of a yawn yanks you back into reality. Suddenly, you remember you’d promised yourself that you weren’t going to stay out late, because you’ve got to get up early tomorrow for an important meeting. You realize that now is the time to go home and get a good night’s sleep. And yet, the allure of the game pulls you in. Against your better judgment, you play the game deep into the night, future consequences be damned.

Since the time of the ancient Greeks, some of the sharpest thinkers in philosophy have tried to figure out what is happening in that scenario. Obviously, we frequently decide that X is the best course of action, and yet our willpower falters and we decide to do Y, even though we know full well that doing Y is counterproductive or self-destructive. But why? In what world does that make any logical sense? Surely, if you decided that X was the thing to do, the natural next move is to do X. Not do the thing you convinced yourself was going to be bad for you. Right?

The trouble is that every obvious answer to this puzzle feels unsatisfactory. You could be like: well if I did Y, then I must have really decided Y was best. But if that’s the case, why do you feel so terrible when you do it? Why do you feel guilty staying at the party until deep into the night, if you’ve supposedly decided that staying at the party is for the best? Taking that stance is effectively saying: no one ever has a crisis of willpower. Whenever you do anything, that is definitive proof that you believed it was the best possible thing to do. But insisting that everyone always has the willpower to do everything they think they should just seems to fly in the face of what we know about the human experience.

Another option might be to say: well, ok, I did decide that X was the best thing to do, but when the moment to suck it up and actually do X came, I was overcome with desire. The feeling of pleasure at the prospect of partying hard swept over me and signal jammed my rational faculty, blocking me from doing what I knew I should. So I stayed, and had to suffer the consequences the next morning. But then that feels unsatisfactory as well, because if I really was overcome by the pleasure instinct, blocked from doing what I thought I should do, then what I did was really involuntary. Like a muscle spasm. Or a brain tumor that made me do it. That just seems wrong: clearly, in these types of situations, I actively chose to e.g. stay at the party and suffer the consequences. Staying at the party didn’t just happen to me, like a headache.

Jessica Tizzard thinks that the 18th century philosopher Immanuel Kant offered an interesting and novel way to understand what’s going on in these moments when you’re weak-willed. Step one in his approach is to take cases like the one described above and assimilate them all to what is often thought of as a different situation: the moral dilemma. A moral dilemma, as standardly construed, is a situation where you really can’t decide which of several options is the best to take. The idea here is that what look like situations where you knew you should do X but instead did Y are often, upon closer examination, really situations where you genuinely couldn’t tell which of those two things you should do. Sometimes, perhaps, when I thought I was having a crisis of willpower, I was in fact just torn and couldn’t decide.

Number two in Immanuel Kant’s bag of tricks is to accept a version of the ‘I wanted to go home, but the desire to stay swept over me and made me stay at the party’ explanation, with one key difference: namely, he has a different take on what a desire is. Maybe a desire isn’t some physical pleasure sensation seizing control of your body like a puppet and forcing you to do something other than what you really want to do. Maybe a desire is really more like another set of factors to consider in your reasoning—it may come with a feeling, and present itself to you with a certain urgency, but really what it is is a set of reasons that you’re weighing up like any other. Understanding desire along those lines puts Kant in a nice position to say that lacking the willpower to do what you think is right is actually just a case of being racked by indecision.

Tune in to hear Jessica Tizzard lay out the Kantian story about what happens when we act against our better judgment!

Matt Teichman

More Episodes


Episode 129: Nethanel Lipshitz discusses discrimination

This month, Ben Andrew and I are joined by Nethanel Lipshitz (Tel Aviv University, Bar-Ilan University) to talk about discrimination.

If someone treats me unequally--that is, if they give other people a relative advantage but not me--am I the victim of discrimination? Our guest says yes. That is enough for me to count as having been discriminated against, and that is enough for it to be morally wrong.

All fine and dandy. But then what's the big deal? The big deal is that the standard view in political philosophy tells us that discrimination requires more. If a shopkeeper kicks me out of their store merely because they don't like my hat, then according to the standard definition, I haven't been discriminated against. Why? Because in order for this behavior to count as discrimination, I have to be treated unequally based on my membership in a salient social group. It's maybe a bit tricky to define exactly what a 'salient social group' is, but some familiar examples might include LGBTQ people, people with a disability, or black people. 'People with a funny-looking hat' aren't a salient social group--that's just a random category that popped up in the moment. So although I may have been treated badly, I haven't been discriminated against.

Nethanel Lipshitz doesn't see a good reason for including 'you have to be a member of a salient social group' in the definition of discrimination. Note that this is compatible with saying that being discriminated against qua member of a particular social group is worse than being discriminated against as an individual, maybe as part of a one-off incident. The idea is just that the one-off still counts as discrimination, and that it's still bad, even if it isn't as bad.

Lipshitz's main reason for thinking this is that the 'I got discriminated against because of my hat' situation and the 'I got discriminated against because I'm gay' situation have a key factor in common: in both, the victim is being singled out as someone not worthy of the same moral respect and consideration as everyone else.

It's a fascinating discussion, and I hope you enjoy it. I think Nethanel Lipshitz provides lots of good reasons to rethink some of our contemporary assumptions about what discrimination is and why it's bad.

Matt Teichman

Episode 128: Melissa Fusco discusses free choice permission

One of the foundational ideas behind philosophical logic is that when you say something, it has further implications beyond the single thing you said. Like, if I think ‘every single frog is green’ and ‘Fran is a frog’, then I am committed to thinking that Fran is green. I don't have to have actually thought to myself or said out loud that Fran is green—I'm just required to believe that Fran is green, given that I thought the first two things, and if I fail to believe that, I've made some kind of mistake. Like I haven't thought through all the consequences of my beliefs.

Deontic modal logic studies how we reason about obligation and permission. For example, if I think that Bob is obligated to visit his parents for the holidays, it follows from that that he isn't permitted not to visit his parents for the holidays. (The term for this in philosophical logic is that obligation and permission are duals.) There are lots of inference patterns that pop up, some of them familiar and some of them surprising, the moment you start thinking about how the notions of ‘obligated to’ or ‘permitted to’ interact with notions like ‘if/then’ or ‘and’.

Free choice permission is a funny case where it feels like, out in the wild, you would have to draw a certain conclusion from something you said, but our best formal, mathematical theory of obligation and permission tells us that you aren't allowed to draw that conclusion. So although the theory gets most other things impressively right, it seems to get this one thing wrong.

Here's the example. Imagine you're a customer at a cafe and a waiter says to you, ‘Since you ordered our prix fixe lunch menu option, you may have coffee or tea’. Translated into the terminology of obligation and permission, we could think of what the waiter said as ‘it is permissible for you to have either coffee or tea’. And there seems to be no way the waiter could think that and not thereby also be committed to thinking it is permissible for you to have coffee. If you're allowed to have either coffee or tea, then surely you're thereby allowed to have coffee. Right?

The problem is that the best available formal mathematization of how reasoning about obligation and permission works (believe it or not, it goes by the humorous-sounding name ‘normal modal logic’) predicts that you are not allowed to draw that conclusion. Since it seems obvious that any rational person would draw that conclusion, but our theory predicts that you aren't allowed to draw it, the theory has a problem. The trouble is that revising the theory so that it makes the right prediction is quite technically difficult, because most of the obvious revisions have the side effect of breaking other aspects of the theory that work well.

In this episode, Melissa Fusco sketches out a highly original and ambitious approach to the puzzle, using a more sophisticated framework called two-dimensional modal logic. Two-dimensional modal logic is based on a subtle but interesting distinction between a statement that's automatically true the moment you start thinking about it, and a statement that is necessarily true, no matter what. It may sound a bit counterintuitive, but just wait till you hear the examples that Fusco gives! Trust me—her idea about how you can use that distinction to explain what's happening in the waiter example is super cool.
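For readers who like to see the machinery, here is a minimal sketch of the two phenomena above in a toy Kripke-style model. This is purely illustrative (it is not from the episode, and the world and proposition names are made up for the waiter example): ‘permitted’ (P) means true at some deontically ideal world, ‘obligated’ (O) means true at all of them. A model with a single ideal world where you have tea makes ‘you may have coffee or tea’ true while ‘you may have coffee’ comes out false, which is exactly the free choice prediction that normal modal logic makes.

```python
# Toy countermodel to free choice permission, in the standard
# possible-worlds style. Each deontically ideal ("permitted") world
# assigns truth values to atomic propositions.
permitted_worlds = [
    {"coffee": False, "tea": True},  # the only ideal world: you have tea
]

def P(formula):
    """Permission: formula holds at SOME deontically ideal world."""
    return any(formula(w) for w in permitted_worlds)

def O(formula):
    """Obligation: formula holds at ALL deontically ideal worlds."""
    return all(formula(w) for w in permitted_worlds)

coffee = lambda w: w["coffee"]
tea = lambda w: w["tea"]
either = lambda w: w["coffee"] or w["tea"]

# Duality: being obligated to have tea is equivalent to
# not being permitted not to have tea.
assert O(tea) == (not P(lambda w: not tea(w)))

print(P(either))  # True:  'you may have coffee or tea'
print(P(coffee))  # False: yet 'you may have coffee' fails
```

So on this semantics the waiter's statement P(coffee or tea) is true while P(coffee) is false, even though no ordinary speaker would accept that combination: that gap is the free choice permission puzzle.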

Episode 127: Nic Koziolek discusses self-knowledge

In this episode, Nic Koziolek (Washington University in St. Louis) returns to talk to me and Nora Bradford about self-consciousness.

Self-consciousness, as philosophers use the term, is a word for when you know something about one of your own mental states. Like when I really enjoy some pizza and note that I'm enjoying it. Someone else might ask me: ‘Hey Matt, do you like that pizza?’ And I'm typically the best person to ask about that, which is a sign that I typically know whether I like the pizza. Or when I have an itch, and I notice the itch before going to scratch it. If I noticed it, then I know that I have an itch. Self-consciousness, in the philosophical setting, is a name for my being able to tell what's happening in my own mind, when it happens.

Now, you might wonder how I know about my own mind when something new happens with it. Our guest argues that there has to be an answer to that question, because whenever you know something, there's an answer to the question of how you know it. And so he argues that the way you know you're in a mental state is by being in that mental state. To apply the idea to the two examples we started with: you know you have an itch by having an itch, and you know you like the pizza by liking the pizza. Being in the state is what allows you to know that you're in it.

If you think that idea sounds wacky, you're not alone. But our guest provides some pretty interesting arguments in favor of it. And he also makes the case that understanding what's going on when you fail to know something about your own mind can lead us to a clearer understanding of what's going on when you fail to know that you know something—which is an age-old puzzle in philosophy.

It's a fun discussion. I hope you enjoy it.

Matt Teichman