Elucidations


Episode 126 - Listener Q&A with Agnes Callard and Ben Callard


Three philosophers. Eight head-scratchers. 50 minutes. In this episode, Agnes Callard, Ben Callard and I respond to the world's most awesome listener-recorded questions.


A lot of people have the impression that philosophy is, first and foremost, an enterprise in which college professor types read books that no one can understand, then issue a response in the form of more books that no one can understand. It's not. Don't get me wrong—I love books. I'm constantly trying to talk friends and acquaintances who don't like reading books into giving them another shot, if only for the simple reason that reading is basically guaranteed to improve your life. It's just that the existence of philosophy books doesn't make philosophy the art of book writing any more than the existence of bodybuilding books makes bodybuilding the art of book writing.


Philosophy is about fearlessly posing questions. Our everyday lives are interwoven with foundational mysteries, some of which turn out to be trivial, others of which prove challenging to resolve. While we can't confront all of them simultaneously, 100% of the time, philosophy is what happens when you formally give yourself permission to confront some of them head-on, at least some of the time. Which is a superior alternative to sticking your fingers in your ears and pretending they aren't there. Or so I would allege.


The point of departure for this episode is what the show's listeners are wondering about. Not journal citations. Not name-dropping over miniature bagels at a conference. Not some incomprehensible jargon that cleverly avoids ever getting defined over hundreds of pages. The real stuff. Why is blahbityblah the case? That's quite surprising, because of such and such. What the heck is going on? Etc. There's nothing I enjoy more than working through conceptual difficulties in the form of a conversation.


In this episode, we end up talking about property rights, the best gateway drugs for getting into philosophy, how to prove ‘ought’ statements, whether the past is real, looseness in how we interpret speed limit regulations, who counts as a philosopher, whether those of us in the first world are shirking our moral responsibilities towards everyone else, and why we never seem to listen to extraordinary claims, even when they are backed by extraordinary evidence. Join us as you, listeners, supply us with things to be surprised about, and Agnes Callard, Ben Callard, and I set out in search of strategies for coping with those surprises.


Matt Teichman

More Episodes

8/16/2020

Episode 128: Melissa Fusco discusses free choice permission

One of the foundational ideas behind philosophical logic is that when you say something, that has further implications beyond the single thing you said. Like, if I think ‘every single frog is green’ and ‘Fran is a frog’, then I am committed to thinking that Fran is green. I don't have to have actually thought to myself or said out loud that Fran is green—I'm just required to believe that Fran is green, given that I thought the first two things, and if I fail to believe that, I've made some kind of mistake. Like I haven't thought through all the consequences of my beliefs.

Modal logic studies how we reason about obligation and permission. For example, if I think that Bob is obligated to visit his parents for the holidays, it follows from that that he isn't permitted not to visit his parents for the holidays. (The term for this in philosophical logic is that obligation and permission are duals.) There are lots of inference patterns that pop up, some of them familiar and some of them surprising, the moment you start thinking about how the notions of ‘obligated to’ or ‘permitted to’ interact with notions like ‘if/then’ or ‘and’.

Free choice permission is a funny case where it feels like out in the wild, you would have to draw a certain conclusion from something you said, but our best formal, mathematical theory of obligation and permission tells us that you aren't allowed to draw that conclusion. So although the theory gets most other things impressively right, it seems to get this one thing wrong.

Here's the example. Imagine you're a customer at a cafe and a waiter says to you, ‘Since you ordered our prix fixe lunch menu option, you may have coffee or tea’. Translated into the terminology of obligation and permission, we could think of what the waiter said as ‘it is permissible for you to have either coffee or tea’. And there seems to be no way the waiter could think that and not thereby also be committed to thinking it is permissible for you to have coffee. If you're allowed to have either coffee or tea, then surely you're thereby allowed to have coffee. Right?

The problem is that the best available formal mathematization of how reasoning about obligation and permission works (believe it or not, this is given the humorous-sounding name normal modal logic) predicts that you are not allowed to draw that conclusion. So since it seems obvious that any rational person would draw that conclusion, but our theory predicts that you aren't allowed to draw it, that means the theory has a problem. The trouble is that revising the theory so as to correctly make that prediction is quite technically difficult, because most of the obvious things you might do to have it make that prediction have the side effect of breaking other aspects of it that work well.

In this episode, Melissa Fusco sketches out a highly original and ambitious approach to the puzzle, using a more sophisticated framework called two-dimensional modal logic. Two-dimensional modal logic is based on a subtle but interesting distinction between a statement that's automatically true the moment you start thinking about it, and a statement that is necessarily true, no matter what. It may sound a bit counterintuitive, but just wait till you hear the examples that Fusco gives! Trust me—her idea about how you can use that distinction to explain what's happening in the waiter example is super cool.
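Both the duality of obligation and permission and the free choice failure can be seen in a toy model. Here's a minimal Python sketch of the standard possible-worlds semantics for permission (a formula counts as permitted if it holds at some deontically ideal world); the one-world model and all the names in it are invented purely for illustration:

```python
# A toy Kripke-style model of 'permitted' and 'obligatory', showing why
# normal modal logic fails to validate free choice permission.
# The tiny model and all names here are made up for illustration.

# Each deontically ideal world is just the set of facts that hold there.
# In our single ideal world, you end up with tea and no coffee.
ideal_worlds = [{"tea"}]

def permitted(formula):
    """A formula is permitted iff it holds at SOME ideal world."""
    return any(formula(world) for world in ideal_worlds)

def obligatory(formula):
    """A formula is obligatory iff it holds at EVERY ideal world."""
    return all(formula(world) for world in ideal_worlds)

coffee = lambda w: "coffee" in w
tea = lambda w: "tea" in w

# 'You may have coffee or tea' comes out true...
print(permitted(lambda w: coffee(w) or tea(w)))  # True
# ...but 'you may have coffee' comes out false: free choice fails.
print(permitted(coffee))                         # False
# Duality: being obligated to have tea = not being permitted to skip it.
print(obligatory(tea) == (not permitted(lambda w: not tea(w))))  # True
```

Because the only ideal world contains tea, ‘coffee or tea’ is permitted while ‘coffee’ is not—exactly the prediction that seems to clash with how we actually talk at the cafe.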
7/15/2020

Episode 127 - Nic Koziolek discusses self-knowledge

In this episode, Nic Koziolek (Washington University in St. Louis) returns to talk to me and Nora Bradford about self-consciousness.

Self-consciousness, as philosophers use the term, is a word for when you know something about one of your own mental states. Like when I really enjoy some pizza and note that I'm enjoying it. Someone else might ask me: ‘Hey Matt, do you like that pizza?’ And I'm typically the best person to ask about that, which is a sign that I typically know whether I like the pizza. Or when I have an itch, and I notice the itch before going to scratch it. If I noticed it, then I know that I have an itch. Self-consciousness, in the philosophical setting, is a name for my being able to tell what's happening in my own mind, when it happens.

Now, you might wonder how I know about my own mind when something new happens in it. Our guest argues that there has to be an answer to that question, because whenever you know something, there's an answer to the question of how you know it. And so he argues that the way you know you're in a mental state is by being in that mental state. To apply the idea to the two examples we started with: you know you have an itch by having an itch, and you know you like the pizza by liking the pizza. Being in the state is what allows you to know that you're in it.

If you think that idea sounds wacky, you're not alone. But our guest provides some pretty interesting arguments in favor of it. And he also makes the case that understanding what's going on when you fail to know something about your own mind can lead us to a clearer understanding of what's going on when you fail to know that you know something—which is an age-old puzzle in philosophy.

It's a fun discussion. I hope you enjoy it.

Matt Teichman
4/17/2020

Episode 125: James Koppel discusses counterfactual inference and automated explanation

In this episode, James Koppel (MIT, James Koppel Coaching) joins me and Dominick Reo to talk about how we can write software to help identify the causes of disasters.

These days, there's often a tendency to think of software primarily as a venue for frivolous pleasures. Maybe there's a new app that's really good at hooking me up with videos of alpacas on skateboards, or making my mom look like a hot dog when she's video chatting with me, or helping me decide what flavor of cupcake I want delivered to my home—because gosh, I am just way too stressed right now to be able to figure that out. Have you seen how few Retweets I'm getting? If we followed the lead of a lot of the popular rhetoric about the software industry, we might very well come away with the impression that tech exists solely to facilitate precious, self-involved time wasting. And if that's right, then if it doesn't work from time to time, who really cares?

But in fact, software correctness is frequently a life or death matter. Computer software controls our medical life support systems, it manages our health care records, it navigates our airplanes, and it keeps track of our bank account balances. If the author of the software used in any of those systems messes something up, it can and often will lead to planes crashing into mountains, or life support systems malfunctioning for no particular reason, or some other tragedy.

James Koppel is here to tell us that software can do better. It can be designed ‘preventatively’ to avoid large classes of bugs in advance, and there are diagnostic techniques that can help pinpoint those bugs that cannot be ruled out in advance. In this episode, Koppel discusses some work he started in 2015 as a follow-up to Stanford's Cooperative Bug Isolation project, which provided a way to gather detailed diagnostics about the conditions under which programs fail or crash.

But the problem he kept running into was that the diagnostic information was too much correlation and not enough causation. If your analysis tells you that your app crashes whenever it tries to load a large image, that's ok, but it doesn't tell you what about the large image causes the crash, or what other kinds of large images would also cause a crash, or whether the crash is even a result of largeness or of something more specific. Correlation information is a great start, but ultimately, it's of limited use when it comes to directly fixing the problem.

To deal with this, in his more recent work, Koppel and his colleagues have turned to the analysis of counterfactuals and causation, which is an interesting point of collaboration between philosophers and computer scientists. Using a recent paradigm called probabilistic programming, they have identified a way to have a computer program run the clock back and simulate what would have happened, had some condition been different, to determine whether that condition is the cause of a bug. The project is still in its initial stages, but if it works, it promises to deliver major dividends in making the technology we rely on more reliable.

Tune in to hear more about this exciting new area of research!

Matt Teichman
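To get a feel for the counterfactual style of analysis, here's a schematic Python sketch—not Koppel's actual system; the pretend program, the trace, and all variable names are invented—of the core move: record the conditions of a failing run, then replay it with exactly one condition changed and see whether the failure goes away:

```python
# Schematic sketch of counterfactual bug analysis (invented example).
# Idea: replay a recorded failing run, intervening on one variable at a
# time while holding everything else in the trace fixed.

def app(image_size, free_memory):
    """A pretend program that crashes when the image exceeds free memory."""
    return "crash" if image_size > free_memory else "ok"

# A recorded failing run (the 'trace'): large image, 256 MB free.
trace = {"image_size": 2000, "free_memory": 256}
print(app(**trace))  # crash

def counterfactual(trace, **intervention):
    """Re-run the recorded trace with one condition changed."""
    return app(**{**trace, **intervention})

print(counterfactual(trace, image_size=100))    # ok: a small image averts it
print(counterfactual(trace, image_size=500))    # crash: 500 still exceeds 256
print(counterfactual(trace, free_memory=4096))  # ok: more memory also averts it
```

The interesting queries are the ones that distinguish ‘the image was large’ from ‘the image was larger than available memory’ as the cause of the crash. The work discussed in the episode uses probabilistic programming to run this kind of what-if replay even when the program's environment involves genuine randomness.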