
Choiceology: Season 2 Episode 7


Listen on Apple Podcasts, Google Podcasts, Spotify, or copy to your RSS reader.

Even when analytical models and algorithms outperform human judgment, it’s still tempting to just go with your gut.

Netflix recommendations, Amazon suggestions, Google searches, airline ticket prices, your social media feed. All of these things are driven by algorithms—computer models that crunch massive amounts of data to generate useful results. These types of online algorithms are commonplace and so, generally speaking, we’re used to them.

But what about the algorithms behind self-driving cars or airplane autopilots? What about algorithms used to predict crimes or to diagnose medical conditions? These are domains in which it often feels uncomfortable to let a computer model make what could be life or death decisions.

In this episode of Choiceology with Katy Milkman, we’re exploring the places where algorithms and computer models bump up against resistance from their human users.

  • Seeing as it’s Super Bowl season, it seemed like a good time to revisit last year’s contest as a case study in decision making. The 2018 Super Bowl champion Philadelphia Eagles played incredibly well against the formidable New England Patriots. The game could have gone either way, but the Eagles had a secret weapon that gave them an advantage. We speak with Michael Kist from Bleeding Green Nation on the Eagles’ integration of computer models for decision making both on and off the field. You’ll hear the story of how those models were temporarily abandoned and the team struggled before re-embracing them.
  • Next, we explore the way self-driving cars make split-second decisions on the road, with results that can make their human passengers squirm. We test whether or not giving people a small amount of control over how a self-driving car behaves gives those people a bit more confidence about the technology.
  • Then Katy speaks with her Wharton School of Business colleague Cade Massey, who explains some of the fascinating ways that algorithms have improved decision making and looks at some of the scenarios where algorithms face an uphill battle for acceptance. Cade Massey is a partner in Massey-Peabody Analytics.
  • Finally, Katy recaps the ways that people designing—or simply using—algorithms can work to overcome our human tendency toward machine mistrust.

Choiceology is an original podcast from Charles Schwab.

If you enjoy the show, please leave a rating or review on Apple Podcasts.

Transcript

Andy: So I’m in self-driving mode, so it doesn’t like the … [laughter]. Are you OK?

David: Yeah. Oh yeah.

Katy Milkman: There’s something unsettling to a lot of people about self-driving cars. The idea of putting your life in the hands of a machine that has to make split-second decisions at 55 miles an hour can be disconcerting, but one of the major promises made in the move towards autonomous cars is that they will be much safer than cars driven by humans, so why the disconnect? Why do we sometimes bristle at this kind of technology?

Today, we’re going to explore the human tendency to mistrust algorithms that are designed to make decisions for us, even when they make those decisions better or more accurately than we do.

I’m Katy Milkman, and this is Choiceology, an original podcast from Charles Schwab. It’s about decisions, big ones and small ones, along with the subtle biases that affect those decisions. We guide you through a world of hidden psychological forces, forces that can influence college admissions, sports championships and the way you travel. We isolate these forces in order to understand them and to help you avoid costly mistakes.

So full disclosure on this first story. I’m based in Philadelphia, so I may have certain allegiances. If you’re not a Philadelphia Eagles fan, you might find parts of this story hard to accept, but trust me, I am not falling victim to confirmation bias. It’s the algorithms at the heart of the story that make it so interesting to me. At its core it’s a story of an incredible victory and how it nearly didn’t happen.

The 2017–2018 season was a historic one for the Philadelphia Eagles. The team won its first Super Bowl in a game against the defending champs, the New England Patriots. Of course, hindsight can make this win seem totally inevitable, but the Eagles’ path to Super Bowl glory was anything but straightforward.

Michael Kist: I’m Michael Kist. I’m a sports journalist for Bleeding Green Nation.

Katy Milkman: Bleeding Green Nation is quite a name. It’s an online community for hardcore Philadelphia Eagles fans—statistics, odds, tickets, player profiles, stories, you name it. It’s a treasure trove of information. We wanted to mine some of Michael’s deep knowledge about the Eagles to get a better sense of the way the team has been making key decisions both on and off the field.

Michael Kist: The way the decisions have been made on the football field historically is there’s a very conservative approach. Coaches have gone by their gut, not by the numbers.

Katy Milkman: That’s the romantic idea a lot of football fans hold on to. Picture a coach with a dog-eared playbook in his hands pacing the sidelines and shouting decisions to players based on instinct and years of experience. But the story of the 2017–2018 Philadelphia Eagles challenges that classic image. First, let’s go back a few years.

Michael Kist: Howie Roseman started off as an intern back in 2000.

Katy Milkman: Howie Roseman was only in his 20s when he joined the Eagles, but he was passionate about football, and he soon became a contract specialist. He also had a keen eye for player talent. In fact, he was so successful he moved quickly through the ranks.

Michael Kist: He became the youngest GM in the NFL at that time at 34 years old.

Katy Milkman: General manager at 34 years old, not bad. In 2010, Howie Roseman hired a young Harvard grad named Alec Halaby to help him evaluate players.

Michael Kist: He had a particular emphasis on integrating traditional and analytical methods in football decision making, so Roseman would watch film on a player and then Halaby would be behind him and check him basically. Roseman would say, “I’ve seen this on the film. I like this player for this reason.” Halaby would come and say, “Well, the analytics say this about his size profile, his speed profile, his production, his background, and this is why this might not project the way that you see it just based on his films.”

Katy Milkman: Roseman had good instincts, but he knew his limits. He relied on Halaby to analyze players’ stats using custom algorithms, sets of rules they would follow in making their calculations. Halaby’s analysis helped Roseman make better informed decisions when acquiring players, and the data he used had lots of detail.

Michael Kist: Beyond a regular stat line of receiving yards and catches, it goes into much further detail on the number of routes that they’ve run, where those catches come, are they deep, are they short, things of that nature.

Katy Milkman: This might remind you of the book or the movie Moneyball about a baseball general manager and a statistician who find an incredibly cost-effective way to improve the Oakland A’s roster using data analytics. If you know that story, you might remember that a lot of scouts and officials with the team were uncomfortable relying on this kind of analysis. They instead wanted to rely on their instincts and their years of experience.

The Eagles faced a similar challenge in 2013 with a newly hired head coach named Chip Kelly. Kelly had a traditional approach to building the team roster that didn’t quite jell with Howie Roseman and Alec Halaby’s number-crunching style.

Michael Kist: What you had was Chip wanting more control over his roster to build the team in his mold.

Katy Milkman: Chip Kelly pushed so hard to build the team roster his way that after a couple of years …

Michael Kist: Roseman and Chip Kelly didn’t speak very much with each other. Howie Roseman lost a lot of his power at that point, and they decided to give all of the final roster decisions and who they would bring in and who would go out—they put all of that on Chip Kelly.

Katy Milkman: With Kelly in control of the roster decisions, the algorithm-based approach for player evaluations went out the window. Howie Roseman was “elevated” to executive vice president of football operations, but he was also banished from making football and player-related decisions after losing this power struggle with Chip Kelly.

Michael Kist: Because of that frayed relationship with Chip Kelly, they were no longer using the algorithm, the analytics model that Howie Roseman had built with his front office staff. Howie wants to be shown the data to prove him wrong. He wants to get it right more than he wants to be right. Chip Kelly always wanted to be right.

Katy Milkman: According to Michael, Chip Kelly’s approach to acquiring players was based on instinct and optics. He sought out quarterbacks who looked the part. He made offers to players whose character he admired.

Michael Kist: He avoided players with character issues regardless of what the data said about their performance and profile. When he gets to the NFL, from 2013 to 2015, he was at or below the league average when it came to aggressiveness on fourth-down decision making.

Katy Milkman: I’ll explain fourth-down decisions in a bit, but in short, Kelly’s approach wasn’t working.

Michael Kist: For me, that has to do a lot with Chip Kelly shunning analytics. Having that traditionalist gut feel about players and that also mixed with your ego, you know better than what the data may tell you.

I would say the straw that broke the camel’s back for Chip Kelly getting fired is the fact that they didn’t make the playoffs.

Katy Milkman: We have a situation where Howie Roseman was forced to take a back seat to a strong-minded head coach. But in 2015, Roseman returned to the driver’s seat. He was reinstalled as general manager, and he brought back his algorithm-based approach both for personnel moves and, as we’ll see shortly, for decision making on the field.

Michael Kist: When Howie Roseman comes back as general manager, he wasn’t reaching out for big-money free agents. He invested in the players that were in the building that he felt fit the culture, fit the team and also, due to their analytical profile, their production profile, knew would be long-term successes.

Katy Milkman: Roseman was playing the long game, but that didn’t mean it would be smooth sailing.

Michael Kist: Going into the 2017 season, there was a lot of doubt about this Philadelphia Eagles team, about whether the pieces would come together.

Katy Milkman: But that doubt soon turned into excitement.

Michael Kist: The Philadelphia Eagles ended up winning the 2017 NFC East with a record of 13-3.

Katy Milkman: The team did much better in the regular season than in previous years, but they were still underdogs going into the playoffs because they’d lost quarterback Carson Wentz to an injury. Still, they played like hell.

Michael Kist: … Ultimately making it to the Super Bowl against the New England Patriots. Going into the Super Bowl, the Philadelphia Eagles were starting quarterback Nick Foles, but he wasn’t a starter all season.

Katy Milkman: Not only were they missing their star quarterback, they were facing football royalty.

Michael Kist: Being the underdog against the New England Patriots, a team that had been there many times before, against an all-time great quarterback in Tom Brady.

Katy Milkman: Facing Tom Brady and the Patriots, who’d won last year’s Super Bowl, imagine the pressure, and yet …

Michael Kist: Being the underdog in that situation did not intimidate them. They embraced it. They get into an absolute boat race, as they would call it, a high-scoring game with the New England Patriots where they’re just dueling it out, scoring points on each other left and right. The score is 15 to 12. The Eagles are up on the New England Patriots, who had been in this situation a thousand times, so they’re there to slay the dragon. They get a fourth and one situation late in the first half.

Katy Milkman: OK, let’s pause the action for a minute. The Eagles have fourth and one, which means they’re on their fourth down and need one yard to get to a first down. What to do next is always a pretty tricky decision in football. If you’re not a fan of this sport, let me explain. The team in possession of the ball has a limited number of downs to advance 10 yards or more towards the other team’s goal line.

If they don’t make it in four downs, possession goes to the other team. On fourth down, a coach has to make a key decision. Do they go for it and try to gain the remaining yardage? It’s a risky bet, because if they don’t make it, they’ll lose possession to the other team right where they are. But if they do make it, that will reset the downs back to first and allow them to keep trying to score a six-point touchdown.

Another option on fourth down is to make a safer bet. And depending on their distance from the goal line, either try to kick a field goal, worth only three points if they’re close enough, or punt to their opponents, pushing them back as far as possible. Lots of coaches would try to punt or try a field goal in this situation. But the algorithm that the Eagles used suggested that going for it on fourth and one was worth the risk more often than not in this kind of a situation. OK. Back to the action. It’s fourth down and one. Chip Kelly’s successor, Coach Doug Pederson, has a big decision to make.
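The trade-off the Eagles’ model weighs can be sketched as a simple expected-value comparison. The probabilities and point values below are illustrative assumptions, not the team’s actual figures:

```python
# Illustrative fourth-down decision sketch: compare the expected points of
# going for it against kicking a field goal. All probabilities and point
# values here are made-up assumptions, not the Eagles' actual model.

def expected_points_go(conversion_prob, points_if_convert, points_if_fail=0.0):
    """Expected points from going for it on fourth down."""
    return conversion_prob * points_if_convert + (1 - conversion_prob) * points_if_fail

def expected_points_kick(make_prob, field_goal_points=3.0):
    """Expected points from attempting a field goal."""
    return make_prob * field_goal_points

# Fourth and one near the goal line: short conversions succeed often, and a
# successful conversion likely ends in a touchdown (7 points with the extra point).
go = expected_points_go(conversion_prob=0.70, points_if_convert=7.0)
kick = expected_points_kick(make_prob=0.95)

print(f"go for it:  {go:.2f} expected points")   # 4.90
print(f"field goal: {kick:.2f} expected points")  # 2.85
```

Under these assumed numbers, going for it wins the comparison even though it fails 30% of the time, which is the counterintuitive aggressiveness the model recommends.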

Michael Kist: Nick Foles, their quarterback, strides over to the sideline. He looks Doug Pederson in the eyes. He says, “You want Philly Philly?” which was their trick play, something that they had installed very late in the process in preparation for this game. And there’s a dramatic pause, and Doug Pederson looks Nick Foles in the eyes, and he says, “Yeah. Let’s do it.”

The play itself—they run a reverse to a tight end, Trey Burton, who was actually a college quarterback as well. And he passes to Nick Foles in the corner of the end zone for the touchdown. The analytical model obviously played a big part in their decision to go for it with the Philly Philly, not necessarily the trick play itself, but the aggressiveness in that situation. Their chance of winning the game would be significantly higher if they converted on fourth and one there than it would be with just a simple field goal. They have the lead for most of the game. However, they do give Tom Brady one last chance to get back in it. And normally in these situations, most NFL fans know what’s going to happen. They’ve read this script 100 times.

And late in the fourth quarter with over two minutes remaining, Tom Brady, down five points, the score is 38 to 33 with the Eagles in the lead, takes over on offense. And we’ve seen this 100 times before with Brady. He’s marched right down the field in crucial situations. He’s had some of the most game-winning comebacks in NFL history and has done it before in the Super Bowl in the prime-time moments, so he drops back and the ball is knocked out of his hand. The Eagles recover, and it’s pretty much over from there. The confetti hits. Everyone’s celebrating, and the game’s over. It had been nearly 60 years since an NFL championship had been brought to the city of Philadelphia.

Katy Milkman: There are so many factors that influence the outcome of a game like this. But key decisions made long before the game in terms of building a roster, and decisions made during the game like taking a risk with their trick play on a fourth down, these decisions had a material impact on the outcome. Algorithms to help choose players, algorithms to help decide which plays to make on the field, these data-processing tools were instrumental in the Eagles’ success.

Michael Kist: The reaction from the football world to the Philadelphia Eagles was one of refreshing change, one that was more accepting of the analytical models that people didn’t believe could work in the NFL. It was a wake-up call for the rest of the league, that they could use these things successfully, that they could be more aggressive. The Eagles’ Super Bowl win shone a light on how useful analytics could be, not just on fourth downs, but for the entire game.

Katy Milkman: Michael Kist is a journalist and contributor to Bleeding Green Nation. If you’re a Philadelphia Eagles fan, or you’re just curious about the team, I’ve got a link to Bleeding Green Nation in the show notes.

Let’s talk for a second about why I shared this story in today’s episode. The key is that I’m interested in how frequently we see people dismissing advice from algorithms in spite of its incredible power. We had evidence long before the Eagles won the Super Bowl that well-designed algorithms could make a big difference in sports. The Oakland A’s baseball team, as detailed in Moneyball, proved this years ago. But Major League Baseball took a long time to widely adopt this kind of approach to analyzing data.

And the NFL has moved even more slowly. It’s this peculiar resistance to algorithms that we’ll dig into. Algorithms are everywhere. And in lots of cases, we get used to trusting them. Think of Google Maps and the directions it provides, or Pandora, or Spotify, or Netflix recommendations, and that forecast on Kayak of whether the flight ticket you’re looking to buy will get more or less expensive over the next month. But the thing is, until we acclimate to them, we’re often really resistant to shifting away from our personal judgment and relying on algorithms. I want to switch to talking about another area where we see this tendency to mistrust them.

David: No need to open it.

Andy: No, no. Using my own hands like a chump. I’m Andy.

David: David.

Andy: Nice to meet you, David.

David: Nice to meet you, too.

Katy Milkman: We sent our intrepid producers out on the street for an experiment in a Tesla Model X. It’s been outfitted with the latest version of the company’s autopilot software. We’re going to test the idea that people often mistrust algorithms, but that they mistrust them less when they have the chance to offer some input into how the algorithms behave. We have three different passengers, David, Mark and Lori. And they’re on three different trips.

Lori: I’m Lori. Nice to meet you, David.

David: Nice to meet you.

Lori: Oh.

Mark: Hello sir.

David: How are you doing?

Mark: I’m well.

Andy: So we’re going to go for a little drive, and we’re going to do kind of a little experiment. I’m going to give you this. It’s a Bluetooth-enabled braking system. So if you get, at any point, if you get uncomfortable, I want you to hit that button. Right?

Mark: And this will brake the car?

Andy: Yep.

Katy Milkman: OK, this seems intuitive. Giving people some control should make them feel better about trusting their lives to a computer program, right? But here’s where things get interesting.

Andy: How are you feeling right now?

Mark: Nervous.

Andy: I’m not doing anything.

Mark: Yeah. Yeah. That makes me nervous.

Andy: So I’m in self-driving mode.

David: Were you braking?

Andy:  No.

David: So it turned the steering wheel for the curve, and it braked to avoid rear-ending this car.

So if you take a right here at Clark …

Andy: Look, I’m not turning. Whoa!

David: Oh. Wow.

Andy: Uh huh.

David: That was good. So it was already braking by the time I hit the button.

Andy: I think so. I think it made the decision before you did.

Katy Milkman: Full disclosure, that Bluetooth-enabled braking system was just an iPhone with a buzzer app. And whether or not our participants believed it would make a difference, they ended up using it quite a lot.

Andy: I’m going to go back into autopilot now. We’re going to try a lane change—watch this. Watch this.

Mark: Oh it’s doing itself. OK, that’s weird, that it did that.

Andy: Look at that. Look at that.

Lori: This is so Star Trek.

Andy: So it doesn’t like the … [laughter]. Are you OK?

David: Yeah. Oh yeah.

Katy Milkman: Again, we didn’t actually hook up a Bluetooth braking system. While the autopilot feature made the right choices most of the time, the illusion of control did make our passengers feel a bit more comfortable. But here’s the question …

Andy: If it could be proven, so if we had the data to show that the self-driving car always made better decisions than a human being, do you think you’d still want some level of control?

David: Yeah, definitely. But I think that what could mitigate that is if I experienced driving in an automated vehicle and it was in situations where it made the right choice over and over again, then I would feel far more comfortable.

Katy Milkman: So a little bit of control and lots of exposure to self-driving cars performing well could go a long way to ease people’s reservations about the technology. But the thing is, in most data-rich domains, algorithms far outperform humans, so adding human input often diminishes an algorithm’s effectiveness. Maybe hitting the brakes in a moment of panic isn’t always the best decision. Maybe the car could better avoid a collision by steering. Or maybe the human in the equation didn’t notice a car behind them and hitting the brakes might have caused a collision.
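The point about human input diminishing an algorithm’s effectiveness can be illustrated with a toy simulation. The error levels below are arbitrary assumptions chosen so that the algorithm forecasts better than the human, which is the situation the episode describes:

```python
# Toy simulation: when an algorithm's forecasts are already more accurate
# than a human's, blending in human judgment tends to pull accuracy down.
# The noise levels are arbitrary assumptions for illustration only.
import random

random.seed(0)

N = 10_000
truth = [random.gauss(0, 1) for _ in range(N)]

# Assumed forecast errors: the algorithm misses by sd 0.5, the human by sd 1.0
# (independent noise; the human is the worse forecaster here by construction).
algo = [t + random.gauss(0, 0.5) for t in truth]
human = [t + random.gauss(0, 1.0) for t in truth]

def mae(preds):
    """Mean absolute forecast error against the true values."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / N

# A 50/50 blend of the algorithm's forecast with the human's.
blended = [0.5 * a + 0.5 * h for a, h in zip(algo, human)]

print(f"algorithm alone: {mae(algo):.3f}")
print(f"50/50 blend:     {mae(blended):.3f}")
print(f"human alone:     {mae(human):.3f}")
```

Under these assumptions the blend lands between the two: better than unaided human judgment, but worse than just trusting the algorithm. (If the human had information the model lacked, blending could help; here the noise is independent, so it only dilutes the better forecaster.)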

But even when we have all the data proving that an algorithm performs better without human intervention, people still want control. It makes them feel better. Why is that? This relates to research by some of my friends at Wharton and the University of Chicago on a bias called algorithm aversion. Cade Massey is a scholar who sits down the hall from me at Wharton, and he’s co-authored some incredibly important research on algorithm aversion with Joe Simmons and Berkeley Dietvorst. I asked Cade to visit me in the studio to help explain this phenomenon and the research behind it.

First, I’m really interested to know what sparked the idea to study this topic.

Cade Massey: Oh, very much life experience with people who have algorithm aversion, because when you’re an analytics person, you’re basically peddling algorithms, and the reason I got into analytics is to improve human judgment. I studied decision making at the University of Chicago, I know how biased people are, and analytics aren’t a panacea, but they’re a good counterbalance to some of the foibles of human decision making. So you walk into a situation, and whether it’s an individual or an organization, you might have some good numbers on some topic that’s important, and you think, “Well, I’ve got data. I’ve got a model. I’ve got a way they can improve their decision making.” But people’s reluctance to cede any decision making to the model is a real problem, and it’s frustrating if you can see the value of the model.

Katy Milkman: So what’s the reason for this bias?

Cade Massey: So this is one of the things that motivated our research. It was understood that people resisted algorithms, but we wanted to understand why, and then we wanted to understand, can we do something about it? There’s almost certainly many reasons that people resist it.

I mean, the most intuitive one is that they don’t like giving up control. When we dug into it, we discovered that there’s some other elements at play that seemed to matter a lot, and the one that we focused on in our first paper is that people are just harder … they’re less forgiving of models when they make mistakes than they are forgiving of humans when they make mistakes. And in most difficult domains and most challenging domains, there are going to be mistakes.

And so you have a model, it makes a prediction, even if it comes out right most of the time, sometimes it’s going to be wrong. When people observe that, they lose confidence in models in a way that they don’t lose confidence in individuals. They abandon them more quickly than they abandon humans, even when they’re performing better than the humans, because they’re less forgiving of them.

So that of course just raises the question of, “OK, why?” There’s lots of little reasons, and we don’t have a big, easy one. But they’re things like, they expect that models can’t learn as well.

They also are reluctant to give up on finding the exceptions. People like the exceptions, and they realize that the models aren’t going to be good at finding this one thing, this outlier that needs to be treated differently.

And these are reasonable reservations to have about models, but they apply them even in domains where the model’s performance is superior. Even when it’s transparently superior, they bring these reservations, and as a result they do worse because they continue to resist the algorithm.

Katy Milkman: But this isn’t a totally irrational bias, is it? No algorithm or model is ever perfect.

Cade Massey: Models are imperfect. I mean, they are by definition a reduction of the actual phenomenon. You sometimes throw out good stuff when you reduce that way. If you know the model really well, you know the weaknesses of the model. So the people who build these models should be the most humble about them because they should know the weaknesses.

Katy Milkman: Cade, what’s an example of an empirical study you’ve done where you saw this bias?

Cade Massey: We asked participants to forecast the performance of MBA applicants. And in some conditions, they made their own forecast. In other conditions, they watched a model that was built for this purpose.

After 10 or 15 rounds of this, we then said, “OK, now let’s make some forecasts for real. We’ll pay you based on how well you do. The choice is, do you want to use your own judgment, or do you want to use the model?” And what we found was, those who had seen the model, and therefore seen the model make mistakes, were much less interested in using the model than people who had it available but had never seen it perform. We didn’t see the same pattern with human judgment.

Katy Milkman: And this is an error, right? So they should be using the model because the model’s more accurate than human judgment in this case?

Cade Massey: In this domain, the algorithms, the models, do better, and so it’s a mistake. And we can even isolate those people who see both themselves and the model perform, so it’s transparent that the model is doing better than they are, and they still go with their own judgment, because both are imperfect and they’re harsher in their judgment of an imperfect model than they are of an imperfect human judge.

Katy Milkman: And if they’d only use the algorithm, they would have earned more money in your study?

Cade Massey: In every case, in every condition we’ve run, they perform better if they lean on the model.

Katy Milkman: OK, so, final question, Final Jeopardy. Tell me, what do we do? How do we fix this? What can we do about algorithm aversion?

Cade Massey: So this was the natural question that we dove into once we kind of pinned down one of the mechanisms driving it, and the one that we found most powerful and that we’ve demonstrated pretty clearly now is to give people some control.

What happens is that, if you don’t let them refine it, they’re not very interested in the model. They’d rather use human judgment. But if you let them move it 10%, all of a sudden they’re much more interested in and open to the model, so they use it more often. The interesting bit, though, is what happens if you only let them move it 5%. Almost the exact same percentage still use the model. And if you go to the 2% condition, almost the exact same percentage of people still want to use the model.

It’s just the ability to put their hands on it, essentially, to have some influence. And it’s a very general lesson for algorithms. You don’t want to get into situations where you’re forcing people to take the model, imposing the model, basically asking them to cede decision rights entirely to the model. You want them to interact with it in some way, have some say in it.
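The bounded-adjustment idea Cade describes can be sketched in a few lines. The band size, forecasts, and function name below are illustrative assumptions, not the actual mechanism from the published studies:

```python
# Sketch of "bounded adjustment": the user may nudge the model's forecast,
# but only within a small band around it. The 10% band and the example
# forecasts are illustrative assumptions.

def bounded_forecast(model_forecast, human_forecast, max_adjust=0.10):
    """Return the model's forecast moved toward the human's, capped at
    max_adjust (a fraction of the model forecast) in either direction."""
    band = abs(model_forecast) * max_adjust
    low, high = model_forecast - band, model_forecast + band
    return min(max(human_forecast, low), high)

# The model predicts a score of 80; the human believes 60. With a 10% band,
# the human can only pull the forecast down to 72.
print(bounded_forecast(80, 60))   # 72.0
# A human forecast of 85 sits inside the band, so it passes through unchanged.
print(bounded_forecast(80, 85))   # 85
```

The design choice here matches the finding: the cap limits how much a human can degrade the model, while the mere ability to adjust makes people far more willing to use it at all.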

Katy Milkman: So it feels a lot like our experiment with the self-driving car?

Cade Massey: It’s been a while since I’ve thought about cars, but if I was going to advise people who build these cars or who wanted to persuade people to use the cars, you’ve got to give them either real control or the illusion of control. It doesn’t really require that much. It’s just that they want the ability to override in certain circumstances. They want the ability to influence. Even if it’s just a little bit, I would expect it to make a big difference.

Katy Milkman: That’s great. And again, the models are better performing than the people, so giving people the ability to tweak them means higher take-up, which means better outcomes for forecasters?

Cade Massey: It is. That’s the thing. It’s kind of a second-best solution. You can’t get first best. You can’t get enough people to use the model perfectly, so you allow them to degrade the model. They make the model worse off. But so many more of them are willing to use the model, given that option, that overall they do better.

Katy Milkman: Cade, thanks so much for doing this.

Cade Massey: Thank you. You bet.

Katy Milkman: Cade Massey is a professor of practice at the University of Pennsylvania’s Wharton School of Business, and he’s also the guy who turned us on to the Philadelphia Eagles story through his work in sports analytics at Massey-Peabody Analytics. You can find links in the show notes.

I’m Katy Milkman, and this is Choiceology, an original podcast from Charles Schwab. Biases can affect your ability to stick to a financial plan. That’s why Schwab also offers the podcast Financial Decoder. It’s designed for people who want to make better decisions with their money. Mark Riepe, head of the Schwab Center for Financial Research, hosts the show. Mark and his guests dissect the financial choices you might be facing, and offer tips to mitigate the impact of biases on your financial life.

You can find it wherever you listen to podcasts.

So a nice thing about Cade Massey’s visit today is that he really dug into how to fix algorithm aversion, and I’d like to talk a little bit more about that important advice just to drive it home. These days, with machine learning and other sophisticated analytics tools, the benefits of using algorithms are profound. But the work Cade described shows we don’t do it enough, and there are two things that help.

First, don’t focus on the algorithm’s mistakes, or you’ll overweight them. When people hadn’t watched an algorithm perform and make mistakes, they weren’t nearly so algorithm averse. So that suggests we should try to look away from algorithmic errors when we know, overall, that the algorithm has outperformed human judgment. After all, people make mistakes, too, so expecting infallibility is just too tough of a standard.

Second, if we give ourselves or others even the tiniest bit of leeway to tweak an algorithm’s forecast or recommendation before using it, we’ll be more open to adopting algorithms. Since those tweaks actually degrade performance, the important thing is not to allow too much room for human interference. Just enough to give people the comfort they need to adopt an algorithmic approach, which research suggests can be quite small.

So for all those self-driving car companies out there, give us some buttons to press.

Katy Milkman: You’ve been listening to Choiceology, an original podcast from Charles Schwab. If you’ve enjoyed the show, leave us a review on Apple Podcasts, and while you’re there you can subscribe for free. Same goes for other podcasting apps. Subscribe, and you won’t miss an episode.

Next time on the show, we’ll look at the way people tend to overvalue things they own, and we’ll get perspective on this bias from Nobel Prize–winning economist Richard Thaler. I’m Katy Milkman. Talk to you next time.

Speaker 8: For important disclosures, see the show notes.

Important Disclosures

All expressions of opinion are subject to change without notice in reaction to shifting market conditions.

The comments, views, and opinions expressed in the presentation are those of the speakers and do not necessarily represent the views of Charles Schwab.

Data contained herein from third-party providers is obtained from what are considered reliable sources. However, its accuracy, completeness or reliability cannot be guaranteed.

Apple Podcasts and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries.

Google Podcasts and the Google Podcasts logo are trademarks of Google LLC.

Spotify and the Spotify logo are registered trademarks of Spotify AB.

