Mesmerized: With Guests Mara Rockliff & John List
Choiceology: Season 10 Episode 6
Transcript of the podcast:
Speaker 1: When it comes to the effect of coffee on our health, some studies have linked consumption to heart risk factors such as raised cholesterol or blood pressure. So what is …
Speaker 2: We found that those who drank six or more cups of coffee a day had a 22% higher risk of developing a cardiovascular disease.
Speaker 3: … up to 25 cups of coffee a day had no ill effects on your arteries.
Speaker 4: Study out of Australia found that two to three cups of coffee a day is not only associated with a lower risk of heart disease and dangerous heart rhythms, but also with living longer.
Katy Milkman: It seems as if every year there's a new study on whether coffee is good or bad for you. You've probably seen other contradictory reports on how vitamins or water consumption or salt or sugar affect some other health outcome you care about. So why is it that we see so many conflicting reports in the media? Why is it so difficult to determine whether something like coffee is good or bad for, say, your heart? It has a lot to do with the challenge of separating out all the factors that influence our health, things like age, weight, height, genetics, mindset, sleep patterns, stress, exercise, and … well, you get the idea. Health is the sum of many things, and coffee is just one small piece of a much larger puzzle.
Teasing out its influence is no easy feat. In this episode, we'll dig into a tool for tackling a common mistake that affects how we think about everything from coffee to medicine to education. You'll hear a colorful story from history that illustrates how this tool works, and I'll speak with renowned economist John List about how the very same tool can generally help us all think more clearly and make better decisions.
I'm Dr. Katy Milkman, and this is Choiceology, an original podcast from Charles Schwab. It's a show about the psychology and economics behind our decisions. We bring you true stories involving dramatic choices, and then we explore how they relate to the latest research in behavioral science. We do it all to help you make better judgments and avoid costly mistakes.
Mara Rockliff: Think of somebody with an enormous powdered wig that goes several feet above their head and maybe has a ship in it. That's the time that we're talking about.
Katy Milkman: This is Mara.
Mara Rockliff: Hi, my name is Mara Rockliff. I write books for kids usually about strange and fascinating things in history that have been forgotten.
Katy Milkman: This particular strange and fascinating story takes place in France.
Mara Rockliff: So this is not very long before the French Revolution, when Louis XVI was in power with his queen, Marie Antoinette. They wore these very fancy outfits, and the ladies of the court had these giant hairstyles.
Katy Milkman: The year was 1778, and the French aristocracy was about to discover an extraordinary German physician.
Mara Rockliff: This guy came to Paris from Vienna, and everybody goes a little bit crazy over him. His name was Franz Mesmer. He was a very dramatic kind of character. He's elegant and mysterious. He thought he was pretty important. He wears a powdered wig and a fine coat of purple silk, and he carries an iron wand. And he says he's discovered this astonishing new force: a force called animal magnetism. This force worked on people the way magnets work on metal, and he said it was this invisible force that you couldn't see or smell or taste, but it was all over the universe, and it just flowed from the universe into his body and then out through his magic wand.
And he got this reputation for being able to perform miracle cures. Mesmer said, "I dare to flatter myself that the discoveries I have made will push back the boundaries of our knowledge of physics, as did the invention of microscopes and telescopes for the age preceding our own." So he thought that he was probably the most important scientist in the world.
Katy Milkman: It might sound a little strange in today's world, but in 18th-century France, a phenomenon like animal magnetism didn't strike people as terribly far-fetched.
Mara Rockliff: There was good reason why people might have believed that something like animal magnetism could exist—so many unbelievable things were actually happening that it was really hard to know what to believe. It was hard to know what could be true and what couldn't be true. For instance, Antoine Lavoisier was a famous French scientist—he's known as the father of modern chemistry—and he had just done experiments with hydrogen and oxygen. So here are these things that nobody can see or smell or taste, and yet suddenly he's setting fire to them. And what appears to just be air is actually this invisible force. If somebody said, "Hey, there's this force out there. You can't see it. You can't tell that it's there, but it's there, and it has these powerful effects." It was plausible.
Katy Milkman: Mesmer claimed he could use the invisible force of animal magnetism to cure any sickness, and there was some evidence that it worked. Wealthy patients flocked to Mesmer and eventually named this treatment after him. They called it mesmerism.
Mara Rockliff: So pretty soon everybody who was anybody in Paris wants to be mesmerized. So dukes and countesses pull up at Dr. Mesmer's door in their fancy carriages, and they'd disappear into this room. He would have music playing to create a spooky atmosphere. There's these velvet curtains, and the lights would be low, and it would be kind of airless, and everybody sits around this sort of big wooden tub with iron rods. It's this very odd-looking thing and looks very scientific.
And so people would come into this very dramatic scene with this very dramatic guy, and then he's staring into their eyes, and he is waving his hands. And they start having all these reactions: shrieks, tears, hiccups, and excessive laughter. People were fainting and screaming and falling all over the place, and then they would say that they felt better, and maybe they did.
Katy Milkman: But some people were not fans of this new trend.
Mara Rockliff: Not everybody was absolutely delighted by what was going on with Dr. Mesmer, and the people whose noses were really out of joint were the doctors, because nobody wanted their treatments anymore. So they went and complained to the king.
Katy Milkman: King Louis XVI decided to establish a commission to investigate this new medical phenomenon, and he appointed a famous outsider to lead it. His name was Benjamin Franklin.
Mara Rockliff: Benjamin Franklin was a celebrity in France. He was very respected by all the best scientists, and at the same time, he was super popular with the people.
Katy Milkman: Ben Franklin had been in France for a couple of years, where he had helped achieve official diplomatic recognition for the United States in the Revolutionary War.
Mara Rockliff: So Franklin was a pretty old man at this point. He had gout, he had kidney stones, and he was not able to get into a carriage and go jostling over the cobblestones of Paris to go see Mesmer. He was living outside of Paris in the country. So he asked for Mesmer to come to him, and Mesmer refused, because Mesmer was, in his own eyes, an extremely important person, and he wasn't going to go to him. But Mesmer's second in command, Charles d'Eslon, went out there to demonstrate for Franklin and the other members of the commission how mesmerism worked. Franklin, of course, the first thing he did was have it tried on himself.
Katy Milkman: It must have been quite the scene with Benjamin Franklin submitting to this strange mesmerism from Charles d'Eslon.
Mara Rockliff: He sort of makes some woo-woo gestures at you, either with his hands or his wand. Ben Franklin and some of the members of the commission just stand there and say, "Huh, I don't feel anything." And so the word got back to Dr. Mesmer, and Dr. Mesmer said, "Well, there must be something strange about this American, because it's not working on him, for some reason."
Katy Milkman: At this point, Ben Franklin observed Charles d'Eslon mesmerizing some regular patients of Dr. Mesmer's.
Mara Rockliff: And the people would scream that they felt like their body was burning all over and they would fall down in a faint and all this kind of thing.
Katy Milkman: So Franklin and the commission were at a crossroads. On one hand, the procedure seemed to work on some patients. On the other hand, when Charles d'Eslon attempted to mesmerize Franklin and the other commission members, they felt nothing. Benjamin Franklin needed a way to figure out what was going on.
Mara Rockliff: Franklin was skeptical in the first place, but he kept an open mind. Even though he was the world's most famous scientist, he was open to things. He just wanted to find out, well, is it real or is it not real? He observed what was happening to himself and other people, and he asked himself, "Well, could this be in their minds?" So he's created this hypothesis, and he needs to figure out, "Well, how can I test that?" That was when Franklin and the other members of the commission came up with the idea of blindfolding the patients so that they wouldn't know what was being done.
Katy Milkman: This decision to blindfold the patients was important to the tests and to the future of scientific research.
Mara Rockliff: So one of the tests that Franklin ran was they took this young boy who was one of Dr. Mesmer's patients, and they blindfolded him, and they took him outdoors into a grove of apricot trees. This boy was supposed to be especially sensitive to animal magnetism. They told him that one of the trees had been mesmerized, meaning this tree had had a wand waved over it and had had this invisible force funneled into it. And they said, "Find the tree that's been mesmerized." He started moving from tree to tree, and first he's coughing, and then he complains of a headache, and then he says, "Oh, I feel really dizzy. It must be getting closer." And finally, he gets to the last tree, and he just faints dead away.
Katy Milkman: It was a dramatic moment, but …
Mara Rockliff: In fact, he hadn't gone anywhere near the particular apricot tree that had been quote-unquote mesmerized.
Katy Milkman: It seemed that the boy couldn't detect where the mesmerized tree was at all if he was blindfolded. Ben Franklin repeated the experiment on several other patients.
Mara Rockliff: There was this one patient who reacted very strongly when he wasn't blindfolded and was being mesmerized. So they blindfolded him, and Franklin said to him, "Hey, you're being mesmerized right now. Can you feel it?" And he said, "Oh, yes. Yes." And in fact, Charles d'Eslon wasn't even in the room at that time. And so then they had Charles come back into the room very quietly without this patient knowing he was there.
You can imagine this patient just standing there blindfolded while Charles d'Eslon is just pulling out all the stops going around him and waving his hands and pointing his wand and staring at him in a mesmeric kind of way. And the guy has no idea that he is there, and he is just not responding at all. Which was really a shock, because d'Eslon was a true believer in Mesmer, and he expected him to respond as people normally did. It had never occurred to him that they were reacting that way because they expected to.
Katy Milkman: Ben Franklin had his answer.
Mara Rockliff: Through multiple blindfolded tests like that, they were able to show that it wasn't actually what Charles was doing that mattered; it was what the patient believed. All of these patients who had responded so dramatically to Mesmer's treatments, once they were blindfolded, that just went away, or their response was clearly not a direct result of the treatment. They proved that animal magnetism as such didn't really exist, but that there was something going on that came out of the patients' minds rather than from an invisible force flowing out of a wand.
Katy Milkman: This may be the first time a blind trial was used to test a scientific hypothesis. This kind of test would become the basis for proving that one thing causes another. Instead of simply examining whether mesmerizing people seemed to work, Franklin realized it was critical to assign some people to be mesmerized and others not to be, and it was crucial that people not know which group they were in. This was, in essence, a controlled scientific experiment.
Today we run these kinds of tests with far more people, and we use random number generators to decide who will get a treatment, like being mesmerized, and who will be in a control group, in this case, merely being told they're mesmerized. The procedure allows us to tease out cause and effect. If there are different outcomes for the people in the treatment and control groups, well, then we know it's due to the treatment. If not, well, it was all in our head.
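To make that logic concrete, here is a minimal sketch in Python. The numbers are entirely invented for illustration, not data from any real trial; the point is just how random assignment lets a simple difference in group averages stand in for a causal effect.

```python
# A hypothetical sketch, not from the episode: simulate a randomized trial
# where a coin flip decides who is "treated" (mesmerized) and who is not.
import random
import statistics

random.seed(42)

treated, control = [], []
for person in range(1000):
    # Hidden background factor (wealth, genetics, mindset) that affects the
    # outcome regardless of treatment.
    baseline = random.gauss(50, 10)
    outcome = baseline + random.gauss(0, 5)  # no true treatment effect here

    # Random assignment: the coin flip, not the person's background,
    # determines the group.
    if random.random() < 0.5:
        treated.append(outcome)
    else:
        control.append(outcome)

# Because assignment was random, background differences wash out on average,
# so the difference in mean outcomes estimates the causal effect (about zero).
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated treatment effect: {effect:.2f}")
```

Because the coin flip, not the participants' backgrounds, decides who lands in each group, factors like wealth or mindset are spread roughly evenly across the two groups, and the printed difference hovers near zero when the treatment truly does nothing.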
Mara Rockliff: The scientific method was well established already. The idea that you observe a situation, and then you ask a question, and then you make a hypothesis, and then you test that hypothesis. But what had not been done before that Franklin invented here was the blind protocol, the blind test, and that was really an important development.
Katy Milkman: Ben Franklin and the commission published their findings in a report.
Mara Rockliff: It was an immediate bestseller; 20,000 copies were snatched up right away. And so Mesmer, who had been a celebrity in a good way, was now sort of infamous and mocked. He was the subject of parodies. There was this one stage play where they show Mesmer with a patient, and the patient says, "Please, doctor, tell me, does animal magnetism really do any good?" And the guy playing Mesmer jingles some coins and says, "Well, I can assure you it does me a lot of good." So this report, of course, completely ruined his reputation. He fled Paris. He went back to Germany, where he spent his last days with a pet canary, which would wake him up every morning by landing on his head.
Katy Milkman: Mesmerism fell out of favor.
Mara Rockliff: Mesmerism got such a bad reputation that eventually when scientists wanted to go back and work with it some more, they had to rename it hypnotism so that it wouldn't be associated with this big scandal.
Katy Milkman: But the real lasting impact of this story comes from Franklin's use of blind trials.
Mara Rockliff: The blind protocol is basically the gold standard now. Any new medication has to be tested against a placebo. It needs to be blind, meaning the patient needs to not know whether they're getting the placebo or the medication. And that way if the medication has better effects, then it's not just because they believed that it was going to help. And today we have the double-blind protocol, which is even better, in which the doctors who are giving out the medication don't know whether they're giving out the medication or the placebo, so that they can't have an impact on the results either. It's what the FDA requires before any new medication can come on the market. It's super important.
Katy Milkman: Mara Rockliff is the author of several historical books for children and teens, including the award-winning Mesmerized: How Ben Franklin Solved a Mystery That Baffled All of France. You can find links in the show notes and at schwab.com/podcast.
The story of Benjamin Franklin debunking Franz Mesmer's discovery of animal magnetism may be the earliest recorded example of a blinded experiment. And today I want to focus on the amazing power of experiments like Franklin's to cut through the challenges we usually face when we try to understand cause and effect in the world. It's easy to be tricked, just like Mesmer's patients, into thinking one thing causes another, when, in fact, it doesn't.
For example, maybe you heard for years that coffee was great for some health outcome, only to read a headline later that, oops, that wasn't true. What happened? Well, the kinds of people who drink coffee aren't exactly the same as other people. If they're just a little, say, wealthier than average, it's easy to form the false impression that coffee leads to great health outcomes, when it's actually wealth that's so good for you. And when researchers get around to doing an experiment, cause and effect can be untangled.
In an experiment, some people are randomly assigned to drink coffee and others aren't, and then health outcomes are measured. Differences in things like wealth are all washed away by a random assignment experiment, because wealthy people are just as likely to be randomly assigned to drink coffee as to abstain from it. And so you're left with the truth, which is often that there's no causal relationship between a food or drug that was believed to have superpowers and the outcomes we seek.
I remember one headline pronouncing that abstaining from alcohol entirely leads to a shortened lifespan. Maybe, but a much easier explanation is that people who abstain from alcohol completely do so because they're a little different. Maybe a decent chunk of abstainers have a health issue that prevents them from drinking, and that's the reason they, on average, die younger. A foolproof way to get around all this mess is with the experimental method.
My next guest is a renowned economist, and his area of expertise is actually experimental economics. That means he uses the experimental method in essentially all of his work. John List is the chief economist at Walmart and the Kenneth C. Griffin Distinguished Service Professor of Economics at the University of Chicago.
Hi, John. Thank you so much for joining me today.
John List: Hey, Katy, thanks so much for having me. How's everything going?
Katy Milkman: Everything is great. OK, I'm going to dive right in because I have so many questions for you today. So my first question is: could you just describe what it means for two variables to be correlated with one another?
John List: Yeah, that's a good question. I think if you ask people to define correlation, if you ask 30 people, you'd probably get 30 different answers. My preferred definition is two variables that move together—either directly, like when one goes up, the other goes up.
Katy Milkman: Like crime and ice cream sales, say.
John List: Exactly. Or like ice cream sales and drownings. That would be something that when one goes up, the other goes up. And correlation can also be when one goes up, the other goes down. Something like when the price of a good goes up, the quantity demanded in economics goes down. Now, I would say that's causal, but, of course, correlation simply means that two variables are moving together.
Katy Milkman: Great. I love that you brought up causality, because that was my next question. Could you describe what it means for two variables to be causally related, meaning one causes the other?
John List: Causality is a special form of correlation where, when one variable moves, that causes another variable also to move. So again, it could be one variable goes up, that could cause another variable to go up, or when one variable goes up, it causes another variable to go down. A causal relationship now is fundamentally different than a relationship that is merely correlational.
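As an illustration of the distinction John is drawing, here is a small hypothetical sketch in Python. The numbers and relationships are invented for this example: a third, lurking variable, temperature, drives both ice cream sales and drownings, so the two are strongly correlated even though neither causes the other.

```python
# Hypothetical illustration: correlation without causation via a lurking variable.
import random
import statistics

random.seed(0)

ice_cream_sales, drownings = [], []
for day in range(365):
    temp = random.uniform(0, 35)  # daily temperature: the lurking variable
    ice_cream_sales.append(10 * temp + random.gauss(0, 20))
    drownings.append(0.2 * temp + random.gauss(0, 1))

# Pearson correlation (statistics.correlation requires Python 3.10+).
# Ice cream doesn't cause drownings, yet the correlation is strongly
# positive, because hot days raise both.
print(statistics.correlation(ice_cream_sales, drownings))
```

A large positive number printed here doesn't tell you that banning ice cream would reduce drownings; only controlling the assignment, as the conversation turns to next, can establish that.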
Katy Milkman: And, of course, a huge amount of your work, and mine too, is about trying to figure out what's causal and what's not. And I'm going to get there in just a second, but I wanted to ask why you think it is so hard to disentangle causation and correlation? And why people get them mixed up?
John List: Yes. So I think it's hard because in many cases you don't know assignment. So what I mean by that is if you ask yourself, "Does Head Start work?"
Katy Milkman: Head Start being the early childhood education and health program for low-income children and families.
John List: Exactly. So what Head Start did early on was they looked at outcomes, third-grade test scores, kindergarten readiness, et cetera, of kids who went to Head Start versus kids who did not go to Head Start. And what they reported was that kids who went to Head Start had better outcomes. So they argued that Head Start is good for kids.
Now, what you have mingling here is that parents who really care about their child's education are more likely to put their child in Head Start. So that's the assignment mechanism that I'm talking about. It's actually parents choosing or kids selecting into that particular program. And in many cases, it's that selection that happens to matter most, not Head Start. People think just because there's a relationship there that it's causal. Also, it kind of makes sense.
Katy Milkman: Yeah, that's such a great example of a situation where a random assignment experiment could help clarify things.
John List: So the beautiful aspect of randomization is you can go into a really dirty environment, and as long as you control the assignment mechanism—and what I mean by that is, I control who goes into control and who goes into treatment, if you want to think about a medical trial, you can. If you want to think about an early childhood program, you can …
Katy Milkman: So who gets Head Start, who doesn't, as long as you control that assignment …
John List: Exactly. You have a bunch of parents who say, "I want my kid in Head Start." And let's say I'm oversubscribed, so then I want to be fair. So I use a lottery system, and I randomly put some of them in control and some in treatment. I realize the world is a messy environment, but what's nice about randomization is it balances that dirt across the treatment and control groups. So then when you difference off the outcomes, you also difference off the dirt, because the dirt is equally represented in each of the two groups.
Katy Milkman: Meaning, the kids that the lottery assigned higher numbers and lower numbers, whose parents all wanted them to be in Head Start …
John List: Exactly.
Katy Milkman: … they're the same number of kids who have older parents and younger parents and kids who have high IQs and low IQs. Because it's just a flip of a coin. There's nothing different about the two groups.
John List: Yeah, 100%. So you have ambitious parents represented in both groups. You have families with two boys and families with two girls represented in both groups. So these are the background features that might matter. And in many cases, we want to measure those as well, because those will give us an indication of who the program works best for. And are there certain moderators of the relationship that we need to be aware of? So that when we roll it out to the big time, when we scale it up, we sort of know which types of families our program works best for and who we should scale to, and which kinds of families it really doesn't work that well for. And maybe then we need to scale a different program to those types of families.
So the nice thing here that we're talking about is, first of all, I have an approach here that you and I and many others are doing called experimentation. And it allows us to control the assignment mechanism, which allows us to establish an estimate that has internal validity.
Katy Milkman: Do you have a favorite research study you've run that disentangles correlation from causation?
John List: Oh, gosh, I would say every one. So let's think about charity. A lot of your listeners might be interested in charitable giving. And what happened when I started my own research in the late '90s on charitable giving is that there was sort of this bible that was written by Kent Dove. And the bible basically said that "When you're trying to raise money, you should use a matching grant." And what that basically means is we all hear this on NPR when they fundraise, "If you give $100 today, we will match it with $100 from an anonymous donor." So the fundraising bible argued that if you use a two-to-one match, that will be better than a one-to-one match. And if you use a three-to-one match, that will be better than a two-to-one match. So three to one's really good, right?
Katy Milkman: Yeah.
John List: You give a hundred bucks, it's matched for 300 bucks. Now, I talked to fundraisers back then, and I said, "Where's the empirical evidence?" And they would show me evidence where, in some cases, like around Christmastime, they did three to one and it worked really well, whereas in the summer, they did one to one and it didn't work so well. And I said, "Well, do you ever have data in the same time period?" They said, "Well, we don't know of any studies like that."
So I tried it. I worked with Dean Karlan from Northwestern. And Dean and I decided to help a charitable organization raise money. And we wanted to test the theory about one to one versus two to one versus three to one. So what we did is we took thousands of households and put some in a control group, which just received a letter with no match, and then another group was one to one, another group was two to one, another group was three to …
Katy Milkman: One, and people were randomly assigned to those groups.
John List: Yes. And we find kind of two things that jump out. First, having a match matters a lot. So if you just have match money available, you raise more money. And we can say that in a causal sense, because we found that the one-to-one, two-to-one, and three-to-one groups raised a lot more money than the control. OK. Now, what about the one to one versus two to one versus three to one? What we find is that makes no difference. So the one to one raises the exact same amount of money as the two to one, which raises the exact same amount of money as the three to one.
So what we can say now is what they were telling us before was a correlation, and they were finding a result because of Christmas cheer giving. The three to one was working better in December, because a lot of people give more anyway, and they never really had a control group to compare their three to one with. So now we can say in a causal way that a higher match rate does not influence giving patterns, but having a match does. And because we used a field experiment, we can make a strong causal statement.
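Here is a hypothetical sketch of that kind of analysis in Python. The donation amounts and effect sizes below are invented for illustration, not the actual Karlan and List data; the point is just how random assignment to letter versions lets you compare each match treatment against a no-match control.

```python
# Invented numbers, for illustration only: simulate a fundraising experiment
# where households are randomly assigned to a no-match control letter or to
# a 1:1, 2:1, or 3:1 matching-grant letter.
import random
import statistics
from collections import defaultdict

random.seed(7)

groups = ["control", "1:1", "2:1", "3:1"]
donations = defaultdict(list)

for household in range(20000):
    group = random.choice(groups)               # random assignment
    base = max(0.0, random.gauss(2.0, 3.0))     # dollars given with no match
    lift = 0.5 if group != "control" else 0.0   # assume any match helps equally
    donations[group].append(base + lift)

for group in groups:
    print(f"{group:>8}: mean gift ${statistics.mean(donations[group]):.2f}")

# With this pattern, all three match groups beat the control by about the
# same amount, and raising the ratio from 1:1 to 3:1 adds nothing, which is
# the shape of the result John describes.
```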
Katy Milkman: I love that. That's a great example, and it's one that really matters, right? Because we don't want to be matching three to one and trying to raise that kind of extra capital, when it's actually not necessary to motivate our donor base, if we're an organization.
John List: No, 100%. So really you're just throwing away, let's say, rewards that, though you don't realize it, aren't helping. You could take the three-to-one dollars, give everyone one to one, and then you can use those dollars for a new drive, and those dollars, of course, go a long way. So what that means is I can make a strong causal statement if I understand the assignment mechanism. I need a few other assumptions too, like compliance and attrition, those are our exclusion restrictions in the experimental world. After those are in place, I can be confident that I'm estimating a causal relationship.
Katy Milkman: I also find that—I'm curious if this is true for you too—that once you learn to think this way, it helps you be more skeptical of the information that others are feeding to you. And it's easier to poke holes and tell whether someone's giving you useful data or useful information, or information that they've merely constructed to align with their goals. So I'm sort of curious to what extent doing this kind of work has changed the way you think about the information the world is feeding you?
John List: No, I think you're 100% correct. Another side benefit of understanding correlation versus causality is that it's really much easier for you to understand what's happening in the world. You can ask yourself, "What are the incentives of the person who's giving me the information, or who has generated the information? Do they have certain incentives to give me a particular result?" If so, you should think twice.
Whether the decision is to vote for a particular candidate or to believe the information that you're receiving, ask yourself: is that truly a causal result? Or is there a lurking variable? I think all of this really helps make you a better decision-maker as well.
Katy Milkman: That is a wonderful place I feel to wrap. John, thank you so much for taking the time to talk to me today. I really appreciate it.
John List: Katy, it was so great to be here, and I can't wait to come back.
Katy Milkman: John List is the Kenneth C. Griffin Distinguished Service Professor of Economics at the University of Chicago. He's also the chief economist for Walmart. You can find a link to his terrific new book The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale in the show notes and at schwab.com/podcast.
Whether you're interested in better understanding correlations between market and economic data or just want to make smarter financial decisions—say, around your own charitable giving—check out the Financial Decoder podcast. It's a great resource for digging into the financial implications of the phenomena we explore here on Choiceology. You can find it at schwab.com/financialdecoder or wherever you get your podcasts.
As John List mentioned, becoming more discerning about the distinction between correlation and causation can help you better understand the world and make stronger decisions. When I teach my Wharton MBA students about the experimental method and its power to help disentangle correlation from causation, the first thing I suggest is that they start greeting causal claims in news headlines and from friends and colleagues with a healthy dose of skepticism. When someone tells you that eating garlic can prevent headaches or that owning more books will improve your kids' lives, no matter how plausible their story, it's important to ask yourself, "Could there be some other explanation for this besides a causal one?"
Next, if the claim is important enough, I'd ask, "What kind of evidence would convincingly prove this is true?" The ideal evidence, of course, would be experimental, and sometimes you'll find it. After all, experiments are how doctors test new vaccines and medications. Increasingly, experiments are being used by economists and companies to test everything from the value of charitable matching campaigns to microfinance.
In fact, the economics Nobel Prize in 2019 was awarded to a group of development economists for their new experiment-based approach to fighting global poverty. And pioneers like John List are bringing experiments inside big companies like Walmart to improve decision-making. But experiments are still rarer than they should be in business and policy-making, given the importance of understanding cause and effect. Experiments can be costly and complex, but it's often worth the effort to determine what's real and what's not when you're facing high-stakes decisions. A key takeaway is that you should constantly be on guard for correlations misrepresented as causal relationships. Just remind yourself how easy it is to be mesmerized and don't fall for it.
You've been listening to Choiceology, an original podcast from Charles Schwab. If you've enjoyed the show, we'd be really grateful if you'd leave us a review on Apple Podcasts. You can also follow us for free in your favorite podcasting app. And if you want more of the kinds of insights we bring you on Choiceology about how to improve your decisions, you can order my book, How to Change, or sign up for my monthly newsletter, Milkman Delivers, at katymilkman.com/newsletter. That's it for this season. We'll have new episodes for you in early 2023.
I'm Dr. Katy Milkman, talk to you soon.
Speaker 8: For important disclosures, see the show notes or visit schwab.com/podcast.
After you listen
It's easy to misunderstand correlations between market data and the economy at large—which can adversely impact your investing decisions.
- To learn more, check out the Financial Decoder podcast, hosted by Mark Riepe. It's a great resource for digging into the financial implications of the phenomena explored on Choiceology.
It seems like every other week there's a news report about how coffee will help you live longer or will shorten your life. There are similar reports about vitamins and water consumption and any number of other health-related studies. So why do we see so much conflicting information around scientific research in the media?
In this episode of Choiceology with Katy Milkman, a look at the slippery problem of separating correlation from causation.
You'll hear the fascinating story of Franz Mesmer and the apparently miraculous effects of what he dubbed animal magnetism. Author Mara Rockliff recounts the sway that Mesmer held over the Parisian public and how Benjamin Franklin transformed the scientific method in his quest to find the truth.
Mara Rockliff has written several books for young readers, including the multiple award-winning Mesmerized: How Benjamin Franklin Solved a Mystery That Baffled All of France.
Next, economics professor John List joins Katy to discuss the reasons why we confuse correlation and causation and explains the best practices for separating the two in the study of charitable giving, early childhood education, business, and policy.
John List is the Kenneth C. Griffin Distinguished Service Professor in Economics at the University of Chicago and the chief economist at Walmart.
Choiceology is an original podcast from Charles Schwab.
If you enjoy the show, please leave a rating or review on Apple Podcasts.
All expressions of opinion are subject to change without notice in reaction to shifting market conditions.
The comments, views, and opinions expressed in the presentation are those of the speakers and do not necessarily represent the views of Charles Schwab.
Data contained herein from third-party providers is obtained from what are considered reliable sources. However, its accuracy, completeness or reliability cannot be guaranteed.
The policy analysis provided by Charles Schwab & Co., Inc., does not constitute and should not be interpreted as an endorsement of any political party.
Investing involves risk, including loss of principal.
All corporate names are for illustrative purposes only and are not a recommendation, offer to sell, or a solicitation of an offer to buy any security.
The book How to Change: The Science of Getting from Where You Are to Where You Want to Be is not affiliated with, sponsored by, or endorsed by Charles Schwab & Co., Inc. (CS&Co.). Charles Schwab & Co., Inc. (CS&Co.) has not reviewed the book and makes no representations about its content.
1122-26TV