Speaker 1: A new study claims air pollution is now a leading cause of lung cancer.
Speaker 2: A local group is claiming that short-term rentals are to blame for the housing crisis gripping London in recent years.
Speaker 3: In economic news, the government's response to the pandemic appears to be the main driver of the recent spike in inflation.
Katy Milkman: These are attention-grabbing headlines. They're crafted in part so we'll keep watching or listening or reading, but one of the reasons they resonate is that they offer satisfyingly simple answers to complex problems.
Speaker 5: During the historic congressional hearing, a UFO witness says he saw something that was definitely non-human.
Katy Milkman: In this episode, we look at our tendency to prefer those simple explanations, even when the more complex truth is out there.
I'm Dr. Katy Milkman, and this is Choiceology, an original podcast from Charles Schwab. It's a show about the psychology and economics behind our decisions. We bring you true and surprising stories about high-stakes choices, and then we examine how these stories connect to the latest research in behavioral science. We do it all to help you make better judgments and avoid costly mistakes.
Toby Ball: He sees this orange object that looks to him like an eye that's opening and shutting. They'd observed these three indentations, which they think are marks from where a craft would've landed.
Katy Milkman: This is Toby.
Toby Ball: My name is Toby Ball. I created and hosted three seasons of a podcast called Strange Arrivals.
Katy Milkman: Toby is talking about something known as the Rendlesham Forest incident.
Toby Ball: The Rendlesham Forest incident took place over three nights. Each night had multiple components, seeing things up close, seeing things in the distance, seeing lights of different colors coming from different places. The people who saw it concluded that it was all coming from a craft of unknown origin.
Katy Milkman: Rendlesham is a forest in Suffolk, England, about two hours northeast of London and close to the coast. At the time of these incidents, it divided two Royal Air Force bases that were hosting members of the U.S. Air Force. The sightings started in 1980 on Boxing Day, a public holiday in the United Kingdom that falls on the first weekday after Christmas.
Toby Ball: A guy named Airman John Burroughs and his supervisor went out on patrol around the perimeter of their Air Force base. Burroughs was driving, and his supervisor said he saw something in the sky that then went and landed in the forest.
Katy Milkman: Burroughs didn't see the light land in the forest, but since he'd worked there longer, his supervisor asked …
Toby Ball: Has he ever seen anything like that? And Burroughs says no. As they're driving along, they see a light glowing in Rendlesham Forest, which is off base. And at this point, the supervisor makes this decision to leave base to get closer to it, to see if they can get a visual on what's going on in the forest. This is actually a very big deal, to leave base. So they open the gate, they drive out, they get to where there's a T in the road and look into the forest. They see the light. They can't really make out anything, and at this point they call base security to get some help and get some backup.
Katy Milkman: A security officer from the military base sent out two men, one named Jim Penniston, to meet Burroughs and his supervisor at the edge of the forest.
Toby Ball: When they show up, they also see the light. It's a white light, and it's going sort of in and out of brightness. So it'll be bright, and then it'll dim, and then it'll brighten again. And at this point, they still have weapons. They leave the weapons behind, and they walk into the forest. They leave one person with the vehicle. The other three walk into the forest.
Katy Milkman: Along the way, one of them stops to try and fix their radios. They're having trouble and want to be able to communicate back to the base.
Toby Ball: So the two people who continue into the forest are Burroughs and Penniston, and they walk through the forest for a little ways towards the light, and then they come across a berm in the forest, and the light gets brighter, and they hit the deck. They duck behind the berm because they don't know what's happening. They end up crossing a creek, and through the woods they see a blue and white light, and that's there for a very short period of time, one or two minutes, and then that disappears. At this point, they're cold. They're wet. They're tired. They're pretty freaked out by what they've seen. So they return to base and sort of report what happened to them, and the people at the base say, "All right, just go home, get some rest."
Katy Milkman: The next night, there are more reports of mysterious lights coming from the forest. Another two security officers are sent out to investigate. One of them claims a light entered her vehicle, but nothing more. And then the third night …
Toby Ball: A lot of things happened. The main person here is a guy named Colonel Chuck Halt. He was at an event for the higher-ups on the base for the Christmas holiday. An airman comes in and says there's been lights sighted again. So Halt is tasked with going to investigate. He grabs some people and some equipment, so then they tromp out into the woods. Again, they've gone off base. They find the site with the indentations that Burroughs and Penniston had seen the two nights previous. They get there, they use the radiation detector, and they pick up elevated radiation signals. Halt puts on the star scope and starts looking around.
Katy Milkman: A star scope is a special nighttime scope or visual aid for a rifle.
Toby Ball: Some of the trees seem to be glowing. This is obviously fairly alarming to him, and then there's this light that's kind of appearing and disappearing rhythmically. And he sees this orange object that looks to him through the star scope like an eye that's opening and shutting rhythmically, and it's kind of moving through the forest.
The final thing they note is that there are three very bright lights that are low on the horizon. They look like stars, but they're brighter, at least one of which is changing colors. And he thinks they're moving in synchronization, but moving away from them. At this point, he calls back to base to see if there's any radar coverage of what's going on. Have they picked up anything unusual? They say no, and in fact, he remarks about how little interest the people on the base seem to have about what's going on out in this field and in the sky above them. And at that point, again, it's late at night, people are tired, and they go back to base, and that's essentially the end of the actual sightings.
Katy Milkman: No one else on the base took the sightings very seriously. Nothing unusual was showing up on radar. And while Halt wrote a detailed account of what he'd experienced, there was no official follow-up on the memo. No one in the U.S. or the Royal Air Force was particularly worried.
Toby Ball: They are incredibly not interested in what's happening. They barely pay any attention to it.
Katy Milkman: But then the media got ahold of the story. It was almost three years later when a News of the World tabloid article loudly claimed, "UFO LANDS IN SUFFOLK, And that's OFFICIAL." Remember, it's the early 1980s. Aliens and UFOs were a big part of the American zeitgeist. The movie E.T. the Extra-Terrestrial was a massive hit in theaters in 1982 and '83, and a book called The Roswell Incident brought a 1947 story about a purported alien encounter in Roswell, New Mexico, back into the public consciousness. Add to this that the American forces positioned on these British air bases were looking out across the sea to Europe, ready with nuclear weapons should the Soviets decide to attack. This news about an encounter in Rendlesham inspired a range of investigations, this time outside the military. Lots of people wanted to believe that it must have been some kind of alien encounter. But others, like Ian Ridpath, a UFO skeptic and amateur astronomer, were not so sure. But what were all those lights, the radiation and the indentations on the forest floor?
Toby Ball: Ridpath goes to Rendlesham Forest, and there he talks to a forester by the name of Vince Thurkettle. And Vince has been in Rendlesham Forest for quite a while, and he points out to Ian that there's a place called the Orfordness Lighthouse, one of the brightest in England, about five miles from the base. So the light is going to be very intense in the forest, especially if you don't know what you're looking at.
Katy Milkman: That was Ridpath's first clue that maybe these sightings were less mysterious than they seemed.
Toby Ball: He then finds out from astronomers that that night at 3:00 a.m., when they were on patrol when they saw this glowing thing descend into the forest, there was actually a very bright fireball that was seen all across southern England.
Katy Milkman: A fireball, of course, is another word for a meteor.
Toby Ball: If you didn't know what it was, it would really throw you off because it's big, and it's bright, and it moves fast. So there's this illusion with these very bright fireballs that they're closer to the earth than they actually are. It's just the way your brain processes that.
Katy Milkman: So that was Ridpath's second finding that explained one of the sightings, and it got him thinking that these events must have been a bunch of separate phenomena that when put together might seem like they were caused by one UFO.
A third hypothesis, the blue and white lights observed in the forest were likely from a police car. There was one confirmed on patrol in the area that night. English police cars don't always use red in their lights. And the marks on the ground? People who lived in the area and were familiar with the forest believed them to be excavations by rabbits.
Toby Ball: Again, nothing really out of the ordinary. And then the third night when Chuck Halt goes out with his people and this equipment, part of the issue is that they don't really know how to use the equipment. For instance, the radiation detector that they were using is designed to detect very high levels of radiation, if there's been an atomic test or something like that. So if you're just using it when there's nothing but the usual background radiation, it's not calibrated to be able to detect differences that are really small.
Katy Milkman: And the starlight rifle scope?
Toby Ball: It brightens things 20 or 30,000 times as bright as you would normally see them. So something like looking at a tree that has a flashlight being shone on it will make it look like it's glowing through the scope. If you're looking at a very bright lighthouse light circling around, it's just going to blast your eyes, right? It's going to be way too bright.
Katy Milkman: So it's much more likely that instead of an extraterrestrial eye opening and closing, what Halt experienced was the result of a highly amplified lighthouse light seen from a distance using a starlight scope. The three low lights in the sky were also likely stars, bright ones, according to astronomical information from that night. Sirius in particular is the brightest star in the sky.
Toby Ball: Halt had mentioned that it was changing colors, and that's actually something that Sirius does.
Katy Milkman: So there are all these credible explanations for what was observed over several nights. These sightings were likely the result of multiple natural and man-made phenomena. But that's not what many people's cognitive architecture led them to believe. Three of the key witnesses have written books. Rendlesham is still mentioned in the media from time to time, and if you visit the forest today, you'll find a UFO trail complete with a model spacecraft based on what was described back in 1980. The Rendlesham Forest incident has become a legend.
Toby Ball: This is how folklore is created. If something happens, and there's not an explanation that people can agree upon, that's when folklore comes in to fill the void and provide explanations for things that don't have satisfactory explanations.
Katy Milkman: Toby Ball created and hosted three seasons of Strange Arrivals. He's also on a weekly podcast called Crime Writers On, looking at true crime in popular culture. You can find more details about Toby's work in the show notes and at schwab.com/podcast.
Our brains are wired for pattern recognition, to filter out information that isn't important and to make sense of a complex world. It turns out that it's often wise to exhibit a preference for simple, single explanations for complex things we might not understand. You've probably heard of Occam's razor, that the simplest explanation is normally the right one. But then of course, there are plenty of times when the truth isn't simple or straightforward. The Rendlesham Forest incident is far more likely to be explained by multiple separate phenomena than it is to be explained by a single and highly unlikely visit from an extraterrestrial craft.
We've invited my next guest, Tania Lombrozo, on the show to share her research on our mind's preference for simple explanations and when it can lead us astray. Tania is the Arthur W. Marks Professor of Psychology at Princeton University. Hi, Tania. Thank you so much for taking the time to talk with me today.
Tania Lombrozo: Thanks for having me, Katy.
Katy Milkman: I'm really excited to talk about your research on people's preferences for simpler explanations. First I was hoping you could just describe what it means to have a preference for explanatory simplicity and how this relates to the widely known philosophical principle called Occam's razor.
Tania Lombrozo: Yeah, that's a great question. I think we're all familiar with some version of Occam's razor. There's a version of it on The Simpsons, where Lisa Simpson basically describes it as: when there are multiple explanations, the simpler explanation is probably the true one. But it turns out that when you dig into that, figuring out what counts as simpler isn't really that simple at all. So one of the ways that we have tried to characterize this in psychological research is by thinking specifically about the case where you observe some things in the world, and you're trying to come up with a causal explanation.
So for example, you observe that somebody has some symptoms, and you're trying to figure out what's causing those symptoms. And in that kind of a context, a simpler explanation might be one that involves fewer causes that you just have to assume hold true in the world.
Katy Milkman: So for instance, say I have blurry vision and an upset stomach. The complex explanation might be I haven't gotten enough sleep, and I have a stomach bug. And a simpler one would be some single disease that causes blurry vision and upset stomach.
Tania Lombrozo: That's right. And so in that case, you would think a preference for simpler explanations would be a preference to think it's that one cause that accounts for both of the symptoms. And the way that we have asked the question in the lab about whether or not people show such a preference is by giving them scenarios just like the one that you came up with, but also giving them cases where they have some additional evidence about what's more likely to be true.
Katy Milkman: I love that.
Tania Lombrozo: Yeah.
Katy Milkman: Because that starts to get into whether this is a bias or a heuristic that's consistently right.
Tania Lombrozo: That's exactly right. So we can ask the question, not just do people show this preference when they have no additional information, but even when they do have some probabilistic information that might allow them to figure out what's more likely. Do they still show a preference for the simpler explanation, perhaps more than they ought to, based on how the numbers work out? And most of the research suggests that people do.
Katy Milkman: That's really interesting. I'd love it if you could actually describe one or two of your favorite studies showing that people prefer these simpler explanations to more complex ones, even when that's not necessarily the right decision to make, given the evidence.
Tania Lombrozo: One of my favorite studies looking at this was done with my then-PhD-student M. Pacer. And what we did is we had people participate in an experiment, and we wanted them to not have a lot of background beliefs about this particular case. And so we introduced them to an alien planet, and they were anthropologists learning about this alien planet, and among other things, they had to learn about various diseases that these aliens suffer from. And so they would get a bunch of training about what the possible diseases were, and what symptoms they cause, and how frequently each of these diseases occurred and so on. And then they basically had to do a fake diagnosis task. So they'd be told about a particular alien, and this alien might have purple spots and sore minttels[1], and they'd have to tell us what they think is the most satisfying explanation for these two symptoms that this particular alien has.
So because we have control over this alien world, we can control what is in fact most likely to be true. And what we found in that study is that people preferred the simpler explanation when it was more likely, but also when it was less likely. It wasn't until the complex explanation was many, many times more likely to be true than the simple explanation that a majority of participants would actually select that as the most satisfying explanation for the alien symptoms.
Katy Milkman: That's so interesting, and I love the example of medical diagnosis because we can so easily map it onto other kinds of decisions that are really consequential. What are the qualifications? When do we not show this effect and make the right decisions?
Tania Lombrozo: Yeah, I mean, well, the first thing to say is that there's some nuance to how people show this effect. What people seem to care about isn't just the number of causes per se, but the number of causes that are themselves unexplained.
So for example, suppose that you had a student who is both hungry and tired, and one way that you might explain that is by saying that they're hungry because they skipped breakfast, and they're tired because they stayed up late, and those would be two independent causes that explain both being hungry and tired. So that might seem like a somewhat complicated explanation. You're positing two causes rather than just one. But now I'm going to give you something like a common cause that might explain both why they skipped breakfast and why they stayed up late, and that's that they went to a big party.
So now I've given you a third cause. This is more complicated. Now we have a party causing them to stay up late and causing them to skip breakfast and so on. But on the other hand, I have reduced all of these causes to one underlying root cause. And so we talk about this as root simplicity, and what people really seem to care about is not just the number of causes per se, but having a smaller number of root causes.
Katy Milkman: That's so interesting, and can you say anything about why it is that we have that preference for both simplicity and for this sort of broader idea of a root cause?
Tania Lombrozo: At this point, I would say this is speculation, so this is going beyond any data that I have or that other people have. But I think one idea that's compelling is that part of what we're trying to do when we're explaining the world around us is figure out useful levers in the world. If you want to be able to predict what's going to happen, if you want to be able to control what's going to happen, figuring out those places where you can intervene or change things that would allow you to control a set of effects might be really useful. And so part of what root causes allow you to do is, in an efficient way, represent what it is about your environment that you'd need to know in order to make a bunch of predictions or where it is in your environment that you would want to intervene if you want to control some effects.
Katy Milkman: I wonder to what extent you also think the preference for simplicity is a heuristic, meaning sort of right on average but wrong in important situations. Do we have a sense of whether or not this is generally accurate, as opposed to just a rule of thumb that exists to get at root cause?
Tania Lombrozo: We don't have a great sense in the real world. What we do know is that in our contrived lab settings, where we have a lot of control over things, we can generate circumstances where it seems like people are clearly making mistakes. But we also know that some of those mistakes could be the result of overgeneralizing strategies that do make sense in the real world. And so I'll give you a couple of examples of that. So if causes are rare, when we're talking about things like diseases where you're hopefully more likely to not have them than to have them, then it's a pretty good rule of thumb that somebody is more likely to have one than two. And so people could be overgeneralizing a heuristic like that to many cases where perhaps they shouldn't, like, for example, cases where they actually know the exact probabilities of the diseases that are involved.
Another thing that we have found in my lab is that when you're considering explanations, there's all sorts of alternative causes that in principle you could consider, but that's just cognitively intractable. You can't possibly consider the values of every possible variable in the world. And so when I ask you to consider something like is it more likely that somebody has one disease than two diseases, you might not do the extra math of figuring out, OK, well, one disease and not the other 10 diseases it could be versus the probability of, say, those two diseases and not the other nine diseases that it could be, and so on. You sort of do this cognitive shortcut of just evaluating the probability of the one disease versus the other two without incorporating what you know about the absence of the other causes. And if you follow that kind of a shortcut where you're doing a mental computation that ignores a lot of the alternatives, it's going to turn out just probabilistically that a simplicity preference is more warranted in more circumstances. So I do think many of these cases are over-generalizations of strategies that are good enough a lot of the time.
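[Editor's note: To make the probabilistic reasoning above concrete, here's a minimal sketch with hypothetical numbers (not from the study). It compares the prior probability of one common-cause disease against the conjunction of two independent single-symptom diseases, showing that the simplicity heuristic is right when causes are rare but can fail when they're common:]

```python
# Toy version of the diagnosis comparison, with made-up probabilities.
# One rare disease explains both symptoms; two independent diseases explain one each.

p_common = 0.10          # prior probability of the single common-cause disease
p_a, p_b = 0.25, 0.25    # priors of the two independent single-symptom diseases

# When causes are rare, the simple explanation really is more likely:
p_simple = p_common      # 0.10
p_complex = p_a * p_b    # 0.0625, assuming the two diseases occur independently
print(p_simple > p_complex)  # True

# But if the independent causes are common enough, the complex explanation wins,
# even though the simpler one may still feel more satisfying:
p_a, p_b = 0.40, 0.40
print(p_common > p_a * p_b)  # False: 0.10 < 0.16
```

The point of the second comparison is Tania's caveat: the "one cause beats two" rule of thumb is an overgeneralization from worlds where causes are rare, and it breaks down when you actually know the probabilities.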
Katy Milkman: That's really interesting. What would you say we can do, if anything, to try to fix this preference for simplicity in situations where it can lead us astray?
Tania Lombrozo: I think maybe one step towards correction is thinking probabilistically. Our default approach is often to ask ourselves what feels right, what makes sense, what's more satisfying? But often people actually can think through what the numbers are if they choose to approach a problem in that way. So explicitly asking yourself, "How likely are each of these causes? How likely is it that the causes would co-occur?" And engaging in that explicit probabilistic thinking is something that we expect would push people away from this sort of heuristic preference for simplicity.
Katy Milkman: I know you're writing a book at the moment that's nearly finished about these kinds of explanations, and I'm curious what you think key takeaways should be for people about how they can use what they've learned about simplicity in their everyday decisions and to improve their lives.
Tania Lombrozo: That's a great question. I do think that there's a bit of a double-edged sword of simplicity. On the one hand, I'd encourage people to pay attention to their explanatory preferences. They often do lead us to discover interesting features of the world. So for example, with both kids and adults, just looking for simpler explanations sometimes means that you discover features in the categories you're learning or in the causal structure you're learning that you might not have discovered otherwise. And so that's the good side of simplicity. On the other hand, the research we've been talking about suggests that, in some cases, it leads you astray. It leads you to think simpler explanations are more likely to be true. So I think learning to recognize and value the explanations we find satisfying, without being fully invested in their truth, is a way to try to get the best of both worlds.
Katy Milkman: Can you give me an example of what that might look like?
Tania Lombrozo: I'll give you an example from the lab. So this is not going to be the most realistic example, but suppose you're brought into a lab experiment, and you're learning to try to categorize robots into two kinds of categories. They're called Glorps and Drents, and you're trying to figure out what differentiates Glorps and Drents. So this kind of categorization task actually comes up in everyday life, right? You might be trying to figure out which of your colleagues are the ones that you can trust versus not trust. That's the same sort of everyday categorization task.
And when you're trying to explain, one of the things that we naturally do is try to find patterns that are simple. You're going to want to find the one feature that differentiates all the Glorps from the Drents or perhaps the one feature that differentiates all of your trustworthy colleagues from the untrustworthy ones. And the problem is that the world's not always simple, right? There probably is not one feature that differentiates all the trustworthy and untrustworthy colleagues. And so on the one hand, you don't want to be totally committed to that.
On the other hand, being willing to entertain that possibility might lead you to notice different features and represent those features differently in a way that actually leads you to discover something. So what we find in the lab studies where people are learning about these robots called Glorps and Drents is that, when they are actively trying to explain, they seem to be looking for something like one feature that explains the full difference, and that means that they're more creative about the way that they represent the features of the robots.
So for example, instead of just thinking about the robot as having red on one side and orange on the other side, they'll think, well, maybe it's about having warm colors versus cool colors, or maybe it's about having clashing colors versus matching colors, or maybe it's about the feet being pointy rather than being triangle shaped and so on. So having that kind of creativity to look for different kinds of features and represent those features differently in the service of finding a simple explanation can actually lead you to discover real things about the world.
But ultimately, what you might discover is not a simple explanation because the world isn't always simple. And so that would be a case of leaning into the idea that you want to find a good explanation and engaging those cognitive mechanisms that are going to help you find one. But at the same time, always having a little bit of wariness or skepticism about whether the world is always going to provide us with simple, satisfying explanations. And I think we've all had the experience as scientists in our everyday research that sometimes the world is just messy and unsatisfying, right? Sometimes what turns out to be true is not the explanation that would've been most satisfying.
Katy Milkman: But I think what you're saying in a sense is this preference for simplicity is one of the things that drives us to generate original hypotheses in the first place, which is one of the greatest things humans can do. So generate the hypotheses, just don't stop and accept the simplest one or the simplest explanation if you don't have evidence for it. Recognize that that instinct may be wrong even if the hypothesis generation instinct is right.
Tania Lombrozo: Yes, and I love that way of putting it.
Katy Milkman: This was so much fun. Thank you very much for taking the time to talk. I know you're extremely busy, and I really appreciate it, Tania.
Tania Lombrozo: Really happy to be here. Thank you, Katy.
Katy Milkman: Tania Lombrozo is the Arthur W. Marks Professor of Psychology at Princeton University. You can find links to her work in the show notes and at schwab.com/podcast.
For the latest insights into what's moving markets—whether there's predominantly a single cause or a more complex array of factors—check out the Schwab podcast On Investing. Every Friday, Liz Ann Sonders and Kathy Jones share their perspective on equities, fixed income, macroeconomic issues, and more. You can find it at schwab.com/oninvesting or wherever you get your podcasts.
Tania Lombrozo explained, simply, our desire for simplicity when working on causal explanations. Occam's razor, or the principle that simpler explanations tend to be better, is actually a pretty solid rule of thumb. My husband is an astrophysicist, and when I asked if he could think of any examples from the history of physics where Occam's razor had proven misleading, he just came up with counterexamples, times when it had proven valid. Like the Keplerian model of the solar system. Kepler's model replaced a far more complicated and patchwork Earth-centric explanation for observed planetary movements with a single unified model, capable of simply predicting actual orbital patterns.
Reducing unnecessary complexity in our thinking and in our lives tends to be helpful. But Tania Lombrozo's research reminds us that any good rule of thumb or heuristic can be harmful when overapplied or taken to extremes. If we jump too quickly to simple, single answers like aliens or Bigfoots or cancer or conspiracy, we may overlook more complex, accurate answers like lighthouses and meteors, bears and fog, dehydration and a respiratory bug, and poor security and a bad guy with a weapon. The solution? Question your explanatory instincts. Ask yourself if you've come up with the most likely answer to a puzzle—or the best explanation for a change in a stock's price—or just the one that jumped to mind most easily. Consider the probabilities, and you'll draw better conclusions.
So we're not saying the unexplainable isn't extraterrestrial. We're just saying, consider the likelihood and how satisfying you find any answer as you move to draw your own conclusions.
There's a quote attributed to Einstein, though the story behind the line is ironically more complicated, but it goes something like this: "Everything must be made as simple as possible, but not simpler."
You've been listening to Choiceology, an original podcast from Charles Schwab. If you've enjoyed the show, we'd be really grateful if you'd leave us a review on Apple Podcasts, a rating on Spotify, or feedback wherever you listen. You can also follow us for free in your favorite podcasting app, and if you want more of the kinds of insights we bring you on Choiceology about how to improve your decisions, you can order my book, How to Change, or sign up for my monthly newsletter, Milkman Delivers, on Substack.
Next time, a tribute to one of the founders of behavioral economics. We'll look at a bias first identified by Nobel laureate Daniel Kahneman and his collaborator Barbara Fredrickson. They uncovered our tendency to compress our memories in a peculiar way that distorts the recall of experiences, both good and bad. I'm Dr. Katy Milkman. Talk to you soon.
Speaker 8: For important disclosures, see the show notes or visit schwab.com/podcast.
[1] Minttels: Fictional body part of the "aliens" made up in the study