Sam Wineburg on Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online
Sam Wineburg, professor emeritus of education at Stanford and founder of the Digital Inquiry Group, discusses the book Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, with Matthew Rascoff, vice provost for digital education at Stanford. In Verified, Wineburg and his co-author Mike Caulfield lay out practical, accessible steps for assessing the reliability of information on the internet. This conversation took place October 9, 2024, as part of the Academic Innovation for the Public Good series.
Transcript
This transcript has been edited; introductory and closing remarks from the live event have been removed.
MATTHEW RASCOFF: I am so pleased to introduce today’s author, Sam Wineburg. Sam is the Margaret Jacks Professor of Education Emeritus at Stanford University and is an expert on how history is taught and learned. He is the founder of the nonprofit Digital Inquiry Group, previously known as the Stanford History Education Group, which develops social studies curricula and online professional development for teachers. Sam’s most recent book, co-authored with Mike Caulfield, is Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online.
Sam and Mike are concerned with the onslaught of dubious information that confronts all of us, who are negotiating our news, our political judgments, and our social relationships on the internet. Together, they’ve developed a set of practices that can be used by anybody of any age to gauge the reliability of that information. So that’s what we’re going to be talking about today.
And I have read the book just this weekend, and I greatly enjoyed it. I’m really excited, Sam, to be talking about it, especially in this moment right before an election in which the issues have just never seemed more salient. So the book, in addition to being an outstanding resource for teaching and learning, could not be more timely. And I hope you have caught the wave of the necessity of this book, and it’s reflected back to you in the audiences, like the one that we’re bringing to you today, which I see is up to 118 people.
So maybe, Sam, you could just get us started by giving a quick synopsis of the book just as briefly as you can. We’re going to get into the details on it. And I’ve got a lot of questions for you. And I know the audience will, too.
But just give us the short version of it. And also, I know you’ve been working on this issue for so many years, what’s the progress that you’ve made here? What was the step that you took with this most recent book relative to your previous scholarship on this, the advance that it represents over the previous approaches that you’ve developed?
SAM WINEBURG: Well, first of all, let me just thank you for having me. It’s really an honor and a pleasure to be with you, Matthew, and to be in this format, speaking to the people who are listening in. I guess the way that I can very quickly describe the book is that it’s the driver’s manual for surfing the internet that none of us ever received. We’ve all been unleashed on this kind of technological juggernaut.
But nobody ever told us about the most basic features of the internet. We discovered it by ourselves. And so there are huge gaps in what we know.
Most people, if I say the acronym SEO, they’ll say, now, what’s that? Well, SEO stands for search engine optimization. It is a $68-billion-a-year business that gauges and games the search results that appear when you put words into your browser.
And so those are the kinds of things that no one really ever taught us. The vast majority of people doing a Google search will stop after the first three results on the page. That’s according to Google’s own research.
And in many cases, those three results have been gamed, because it is a cat-and-mouse game of moving results up and down on the search page. It’s a huge business with a lot of factors going into it. And it may be the case that the answer to your query is down at result number eight or, God forbid, on the second page of Google. So this is a book that basically says, Here’s how the internet works. Here are the things that you have to watch out for.
It’s the equivalent of, Don’t cross over the yellow lines when you’re on the highway. All of those kinds of things are basically how the internet works and how you can make better decisions about what to believe. We tried to put all of this into a small, fairly inexpensive book. I think that Amazon has it right now at $13.40.
So really, the goal of the book was to come up with something not scholarly. Yes, there are scholarly references at the end. All of the research on which the book is based, you can find it in the end of the book if you look for it. But our goal was not to create a scholarly tome. Our goal was to create, essentially, a field manual for making wise decisions on the internet.
MATTHEW RASCOFF: OK, so here’s one reflection that I have on reading the book, which is that you frame the challenge as a personal challenge, but you also recognize the stakes for our whole society. And the election is a good example of the stakes for society. How do you frame the collective nature of this, which might lend itself to, let’s say, policy solutions, versus the individual nature of this, which lends itself to training and education, being more sophisticated, not getting duped, or to putting the responsibility onto students and their teachers to mitigate harms that are being caused to them from the outside?
And you have this snake oil sales analogy throughout the book. The way we got rid of snake oil was not by becoming better consumers of snake oil; it was that the FDA regulated it out of existence. They created a mechanism of phased trials that proved the efficacy of our drugs. So how do you think about the collective responsibility to address trust online versus the individual responsibility to not get duped and not fall for it?
SAM WINEBURG: That’s a great question. I’m not a political scientist. I’m not an attorney. I’m not a policymaker. Interestingly enough, during my tenure at Stanford, before we spun out of Stanford to become an independent nonprofit, I sat in a bunch of meetings, where there were representatives from industry, there were representatives from government, there were AI specialists, and there were journalists.
And they were all talking about what they could do to staunch the misinformation that is all over the internet. Now, what was really, really interesting — and I can remember a particular meeting that I was at. Also at this meeting was a political theorist, Rob Reich, and we both sat there looking at each other. And here we are on a campus, an educational campus, with all kinds of courses and required courses.
And the one thing that is not talked about is, what is the educational response to this? And so I’m going to take your question about policy, and I’m going to bring it back to education. What are the — we have an opportunity with students in school.
They are essentially forced to be there. They’re somewhat incarcerated in school. But we have them for a particular chunk of time during the day. And this generation, whether we like it or not, is becoming informed by opening up a screen, by scrolling on a phone, or opening up a laptop.
School to this point has tried to protect students from these sources, in many ways, building a moat around the internet. Instead, what we need to think about is, what does the curriculum look like if we really want to prepare students to be thoughtful digital consumers, because they already live in a digital age? So to me, that’s the policy question, a kind of, what are the legislative mandates for essentially bringing the curriculum into the 21st century?
MATTHEW RASCOFF: I love the parts where you basically say, when your middle school teacher told you, “Don’t use Wikipedia,” that was actually wrong. There is a kind of curricular intervention that you’re making here that’s not just additive. It’s not just saying we need a course on information literacy.
What I hear you saying is that what you’ve got in that course on information literacy is some very stale framework, one that uses an unsophisticated scoring methodology and has not kept up with the SEO industry, which uses the dot org domain to basically whitewash content that might not be legitimate but carries this kind of halo, as you show in the book. And I wonder, so what is the framework for this?
Is this meant to be embedded in libraries? How is this going to come to life? Is it a required course? Where do you see that? Are there models of those who are taking your ideas and bringing them to life in classrooms you can point us to?
And this series is on academic innovation. And many of the audience are in a position to think about co-curricular learning. And many of them are librarians. Many of them are instructors themselves. How is it coming to life positively in a way that is smart, that is aware of all these tricks that are being used by the other side, the dark side, as you call them?
SAM WINEBURG: So let me take a step back for a second and just fill in a few of the blanks in the question that you asked me. Let’s start with Wikipedia and then go to dot org. Wikipedia in its early years had a well-founded reputation for hosting false content that often moldered on the site for a long time before it was actually taken down.
We’re talking about 2002, 2003, in the early years of Wikipedia. And I think that what has happened is that the kind of bad rap that Wikipedia got at that time has dogged it since among many teachers: “Don’t use Wikipedia.” Wikipedia at this point is the fifth most trafficked website in the world.
When you ask Siri a question, it’s drawing on Wikipedia. When you Google something, you will see in the side panel, “From Wikipedia.” Odds are your physician uses it.
A study done in 2015 (I think it’s even greater now) found that 50% of physicians used it, and among medical students, usage was up in the 90 percent range. So what has happened between 2002 and 2024?
Wikipedia got smart. It was an early user of AI, creating bots that go through and quickly, automatically gobble up changes made from unrecognized IP addresses. There used to be a lot of drive-by vandalism on Wikipedia. And all of that’s been eliminated.
People say, well, anybody can change it. Yes, anybody can change Wikipedia. The real question to ask is, is your change going to be there tomorrow? Because those bots basically take 30 minutes to gobble up an unrecognized change.
And then people will say, well, but still, anybody can change it. Wrong. Full stop, wrong. Just go try to change Donald Trump’s page. Try to change Barack Obama’s page.
Try to change the page of Gamergate. Try to change the page of, and listen to this one, the coat of arms of Lithuania. Go figure, I have no idea why that is such a contentious site.
But Wikipedia has a series of locks or protected pages that prevent vandalism and only high-level Wikipedians can actually make changes. And so Wikipedia, if you are trying to figure out — you come to an organization, I’ll use the example of the American College of Pediatricians. It looks like a bona fide organization.
You could spend 10 or 15 minutes going through the different search results, or you could just put that name into Wikipedia and realize that it is a socially conservative evangelical Christian group. And when you look down at the references, you see that it has been criticized by people like Francis S. Collins and in a New York Times exposé showing that it is a small splinter group that opposes adoption by same-sex couples. So that’s Wikipedia.
Things have changed rapidly. We need to get up to speed. One of the things that comes up over and over in our research (altogether, we’ve studied over 12,000 students in the past eight years) is that when we watch students, as well as adults, come to a site that ends in dot org, all of a sudden, that dot org gives it a free pass.
It’s like, Oh, it’s dot org, it’s gone through a process. It’s not like a dot com site. Well, guess what? Dot org is a completely open web domain.
You, or your pet border collie, can get matthewandhisbordercollie.org in about 15 minutes for about $15 and run away with it. These are what we call “cheap signals,” easy ways to play on what people think they know. Because in the early years of the internet, dot org did mean something, but we have just not kept up. And so bad actors continue to use cheap signals in order to increase the probability that they will be successful in pulling the wool over our eyes.
MATTHEW RASCOFF: I think it’s such an important point about the pace at which the curriculum needs to evolve. When you’re working in a field like this, the curriculum can’t change at the rate that, let’s say, the algebra curriculum does, because algebra has not evolved at the same rate as the internet. I mean, the pedagogies in algebra also need to change. But they don’t need the same level of adaptation.
One observation that I have about this book and the argument that you make, but also the broader work of the Digital Inquiry Group is, I think there’s an implicit argument here that open is powerful, that open educational resources are a means of keeping up with a faster cycle time. They evolve more quickly. That Wikipedia and resources like it are trustworthy.
And you cite the research that compares them against traditional encyclopedias. What is your view on open educational resources [OER]? I mean, I know the book is published by the University of Chicago Press. It is itself a proprietary, copyrighted book.
But you’re also making the case for open learning and the open internet, and for using the tools of the open internet and lateral reading as a means of dealing with the challenges of the open internet and the vandalism that you sometimes encounter. So do you see OER as a living, breathing movement? There are some folks who are saying that it’s dead, that AI is killing it. What is your view on that?
SAM WINEBURG: Well, first, a mea culpa. I’m an early drinker of the Kool-Aid of OER. So on our website, which began before we spun out of Stanford, we were known as the Stanford History Education Group. And way back in the early years of the exuberance of open education and open education resources, I encountered somebody by the name of — well, he was a program officer at the Hewlett Foundation.
And the Hewlett Foundation had gone all in on open education resources. And in 2004, they gave us a substantial grant because we made a case that if we want to democratize, if we want to bring equity to educational materials, they should be free and high quality. Because otherwise, only the hoity-toity districts can pay for the most innovative kind of curriculum.
And they bought the idea. They gave us $500,000 in 2003. And we created a website called Historical Thinking Matters. And it was completely free. You could download all of the lesson plans and document-based arrangements that we created.
Now, since that time, that curriculum, which morphed into the Reading Like a Historian curriculum, changed somewhat. And again, fortunately, we had support from places like the Library of Congress. But today, we have 16 million downloads of our curriculum and our assessments, none of which cost anyone who downloaded one cent.
So we are — I’m all in on it. And we had to make a decision when we spun out of Stanford: were we going to be a for-profit company? Were we going to be a B Corp, like Patagonia, a company with a socially responsible mission? Or were we going to be a straight 501(c)(3) nonprofit? And that was not a hard choice for us.
We are all educators. None of us went into this to become rich. We went into this to try to do something significant for the public good. And sure, yeah, we don’t want to be poor, so we do take a salary, but nothing that’s crazy.
Now, you asked about the book and the materials in the book. All of the materials in the book are based on exercises and assessments and lesson plans that we have freely available on the Digital Inquiry Group site, digitalinquirygroup.org. Or no, excuse me, I misspoke, it’s inquirygroup, one word, dot org.
When you go there or if you google “civic online reasoning curriculum Stanford,” you’ll be redirected. It’s very easy to find. And so all of those materials — lesson plans, curriculums, videotapes of what it looks like in classrooms, all of those — all you need to do is register on the site and they’re yours for the taking.
So that is a part of — that runs in our DNA. Fortunately, we have benefited from philanthropic sponsors, who also believe in this mission, that say, if we want to bring about equity, then high-quality materials need to be free for anybody who wants to find them.
MATTHEW RASCOFF: And I confess, I mean, 16 million downloads, that may make this one of the most impactful educational interventions in the history of Stanford. That’s partly what draws me, as Stanford’s digital education administrator, to this project, because it is digital education. It’s not framed as online learning.
But that’s exactly what you’re doing; it’s just not through the typical course structure or credit. It’s a professional development intervention, a content intervention, a video intervention. But it’s so impactful.
And I’m so grateful, honestly, that we have this example of an open project that is sustainable, that foundations see the value, and that you chose to do it in the public realm, in the public interest. It’s such a powerful model. For other faculty at other institutions who are considering their own careers and how they want to make an impact, this, to me, is a model. And with what you’ve done with the Digital Inquiry Group, I hope it’s not just the content that travels but also the way you’ve gone about it. For anybody who’s trying to make an impact in education, there’s a very powerful case study to be had here.
SAM WINEBURG: Thank you. I’m very flattered by those words. I would add that I wish I could say that I sat down at my desk one day, took a bunch of pieces of paper and Elmer-glued them together, and sat with a ruler in order to make a grand blueprint of how we were going to do this. That’s not how life worked out in my case.
A brief story: We did a study in San Francisco Unified School District. And it was done with my student, Abby Reisman. And we wanted to test the kind of Reading Like a Historian curriculum that we were developing, that we were using with our teacher candidates and teacher preparation.
And Abby said, no, let’s go into real high school classrooms and do it. And the university gave us about $75,000, which sounds like a lot, but it was really a shoestring. And she devised a summer institute where volunteer teachers learned this kind of document-based approach to teaching history rather than teaching from the textbook.
And lo and behold, 75 lesson plans and seven months later, we got the results, and the results were incredibly promising. Yes, the kids did better in historical understanding. But the thing that really interested the district was a measure of reading comprehension that we had thrown in.
And there was a boost in students’ reading comprehension on a nationally recognized No Child Left Behind measure. And that, somewhat sad to say, was about the only thing the district was interested in. And they said, can we make this available to every history teacher?
And we said, sure. And I said, but we need to do a website. This is 2007. And to set the context, in 2007 the only money that was going to educational innovation was STEM. There was nothing going into anything that touched the humanities. It was all math and science.
And so I just said, yeah, we would need a website and it would cost something. I had no idea how much. Literally, the first figure that came into my mind was what I said: $20,000. I’d never done a website before. Never.
Fortunately, Abby’s brother was a jazz musician who was a computer programmer on the side. And so he designed a kind of rough website for us. And then the district said, you need to have it conform to San Francisco’s username and password system.
And we said, no. If we’re going to go to the trouble to do this and put it up on the internet, then anybody who wants to use it can use it. We’re not going to impose these restrictions. And six months later, somebody tells us about Google Analytics, and we notice that we’re at something like 250,000 downloads.
That’s way more than the teachers we were working with — and you could locate the downloads on a map. How do people in Alaska know what we’re doing? Why is somebody in Tallahassee, Florida, somebody in Little Rock, Arkansas, downloading all of our stuff?
And we realized, oh my God, this internet thing is a distribution system. And if you leave quality materials by the digital curb, they can start to go viral. And it really was a rude awakening for us.
And it was at that point the lightbulb went off, Matthew: wow, a university professor no longer needs to contact Pearson, or Holt, or a textbook publisher. Using the internet as a delivery system, with a little bit of foundation money, you can directly reach teachers. And that, for us, was a game changer.
MATTHEW RASCOFF: Yeah. I like the line you quote: the internet is the world’s best fact-checker and the world’s best bias confirmer. And I feel like you’re searching out the better angels of the internet and trying to put them to work on behalf of educational values, on behalf of trust. And it’s a very optimistic take, actually, in this book. That’s how I read the book: optimistic about the possibilities.
SAM WINEBURG: I mean you can’t be an educator without being an optimist.
MATTHEW RASCOFF: I hope that’s true. I hope we can maintain that optimism. Let’s talk about — there is a section at the end of the book about how to deal with the problem of excess cynicism. So while we’re talking about optimism, I think you make the case that too much cynicism is actually just as dangerous as too much naïveté when you’re navigating the internet. And you give the example of mistrust in institutions like the Mayo Clinic, which may make some occasional mistakes, but on balance, the Mayo Clinic is a reliable source.
And I think, more recently, we’ve been seeing questioning of official sources regarding Hurricanes Helene and Milton, where there’s rampant misinformation that’s extremely dangerous in an emergency response, that’s threatening people’s lives. People aren’t trusting reliable sources of information. And a lot of information literacy, and a lot of the book, is about being more skeptical: reading carefully, lateral reading, fact-checking.
But you also make the case that it can go too far, to the point where you basically reject the reliable information that’s there from trusted institutions. And that problem seems so intractable, given the information ecology that we’re all surrounded by, given the political polarization. The idea that storm responses could be politicized is just baffling to me.
And I lived in North Carolina, I’m familiar with polarized societies. How are we going to get out of that? What is the solution there? Give me an optimistic take on that.
SAM WINEBURG: So we introduce the concept in the book of trust compression. And you referred to it. It’s when the distance between the Mayo Clinic and Bob and Joe’s homeopathic remedies shrinks to almost nothing: because, well, Bob and Joe are trying to make money, and those Mayo Clinic people, well, they need money too.
And so we say that being gullible on the internet is not just believing everything; it’s also believing nothing. And that ultimately leads to nihilism. In some ways, this is right out of chapter and verse of Jason Stanley’s treatise on fascism.
A fascist leader wants to diminish all information sources and get people to a place where they throw their hands up and say, “You can’t believe anything,” because then you can only believe the strongman. And so that’s a very, very dangerous place to be. The positive thing that you’re going to hear from me about the internet is exactly that quote you cited from Michael Lynch at the University of Connecticut.
The internet is the best bias confirmer that we could ever invent, at the same time as being the best fact-checker we could ever invent. So, I mean, I can give you an example. Somebody, post-debate, sent me a TikTok. And the TikTok showed Kamala Harris’s earring.
And the earring resembles a little microphone earpiece sold by a Swedish company, one that also has a pearl on it. And the video said, Whoa, look at this, she was cheating. She was getting Bluetooth instructions. That’s why she was able to respond so fluently.
Now, if you go on TikTok, you will find other videos like that. But if you search a few keywords, Kamala Harris’s earrings, the debate, you will find all kinds of verified pictures showing that her earrings, which were from Tiffany’s, are not the same as these. The earrings from Tiffany’s have a pearl, but they have two small gold bars going down.
Whereas this particular earpiece that’s a microphone, when you look at it carefully, has only a kind of round piece of metal that attaches to the pearl. You can find a Wired article. You can find a Reuters article.
Even the New York Post, which is favorable to the other candidate and will often print stuff that’s on the line, had to conclude that this was a conspiracy theory. So again, the ability to — you used the term, I haven’t introduced it yet — read laterally, which is our term for this: if you are on an unfamiliar site, recognize that on that site’s About page, they can say whatever they want to say. They can get all of the bells and whistles of credibility, credibility tokens that are cheap.
So for instance, a dot org website, or 501(c)(3) status, which the IRS hands out freely; in the last year, I think, it approved 96% of all applications. So it’s almost like, oh, you’re getting up to get a 501(c)(3)? Get me one while you’re up there too. So that no longer is much of a token of credibility.
The longer you stay on a site and get taken in by the tokens of credibility it displays, the greater the probability you’ll get sucked into its vortex. The way that you prevent it is by leaving that site and using the rest of the internet as a source of reputational calibration. It’s much harder to game your reputation across the entire internet.
It’s not impossible. People can change a Wikipedia page. But you’d have to line up an awful lot of people if you’ve done something really wrong. So, for instance, we use the example of the International Life Sciences Institute, a dot org, a great-looking organization.
I think it has a $16-million-a-year budget, the spiffiest site, a peer-reviewed journal, a scientific advisory board, all of the bells and whistles. I’ve watched people, smart people, not dumb people, spend 10 minutes on this site and say, yeah, this seems OK. But within 30 seconds of putting the name of this organization into your browser, you will start to smell that not everything is kosher with this particular site.
MATTHEW RASCOFF: So you don’t have to give it away. Let that be an exercise for the reader, where people can go digging themselves. I have one more question, but I just want to invite the audience to use the Q&A feature and start filling that up with your questions. And we’ll move to audience questions after this one.
So my last question for you, Sam, is about your argument for critical ignoring at the end of the book. And your analogy is to a cyclist who has to conserve energy in order to ride the whole race. As a cyclist myself, it resonated with me.
In a world of information abundance, attention is our most precious asset. Yet, people are working overtime to make you waste it. That’s your quote. And I have heard that in Silicon Valley, people talk about the market for attention.
The original LLM paper is called “Attention Is All You Need.” Netflix has said it’s competing for your attention against sleep. You quote Simon on this, but attention, to me, is certainly the precious resource today.
And I wonder, it seems like such an uphill battle for us when they’re — we’re working against dark patterns. We’re working against the TikTok algorithm. I saw TikTok is now being sued basically about this issue and its addictive nature.
How should we think about the conservation of this? You’re up against an addictive drug, but it’s legal, or at least it is for now. Do we have any hope in conserving our attention?
I saw an article last week about how novels are no longer being assigned in English literature courses. At Columbia University, Andrew Delbanco, a great English professor, is no longer assigning Moby-Dick in his course; he assigns Melville’s short stories instead, because we have to adapt to the times.
That, to me, is the consequences of our loss of attention. It has profound educational consequences for students, who are just — they’re being shortchanged in their education. Because that’s the adaptation that the faculty are making in order to get the students to do the reading.
So can you give us a framework for thinking about, how do we conserve our energy and our attention? Are there habits that you use yourself in order to protect your own attention to write a book like this, let’s say, or to read novels, if you do that still? What are your strategies? Old school.
SAM WINEBURG: It is an hourglass. It is an hourglass. And when I face the blank screen and I have to confront my own ignorance and I want to Google my name to reassure myself that there’s somebody that loves me, I impose this hourglass and say, you cannot go online. Sit there and work through the confusion.
And yes, if you have to pee, you can get up and pee. But that’s it. That’s the only thing you can do. And I found that it is absolutely necessary, because I am no less susceptible than anybody else to doomscrolling, to going off to look at cat videos on YouTube when there’s a difficult problem.
I mean, I’m just as bad as anyone. So yes, you need to come up with what works for you. I also have Freedom on my phone, which is a program that doesn’t allow me to go to social media sites when I’m trying to do something hard and I hear voices of, You’re never going to get it, you’re not smart enough, blah, blah, blah, blah. But let’s just get back to critical ignoring and what it is.
Our time is limited. Tim Wu, I think, is the person who was really responsible for the book; I think it’s called The Attention Merchants. And it was Wu, a professor of law at Columbia, who talked about how these companies employ some of the most well-paid behavioral scientists, who are trying to figure out how to keep us on their sites and keep our eyes glued to the screen so that we see more ads. And this is what we’re up against.
We’re up against some real geniuses who are trying to steal our attention. And it was Herbert Simon who said that an overabundance of one thing creates a scarcity of some other commodity. And when the overabundance is information, what it creates is a scarcity of attention.
This was Simon, prophetically, in the early 1970s; he saw the writing on the wall. And so what can we do? It goes back to, for instance, that website I referred to, ilsi.org.
You might be looking for information about, Is too much sugar really dangerous? And you go on to this website and you don’t really know who’s behind it. And you spend 20 minutes and you’re not even sure if it’s a good place.
What critical ignoring says is that before you give your attention over to a site, the most important decision you can make is to determine whether that site is worth your time. And so critical ignoring is not like turning your back on something. It’s more like when you are walking through the supermarket and you glance down the aisle where all of the potato chips are. You recognize there are potato chips, but rather than going down the whole aisle, where you then see the tortilla chips, and then the Cheetos, and then the Fritos, you turn around and recognize, no, I’m not going to go there.
When you go to a site like this, you recognize that it claims to be a nutrition site; you might glance at the board of directors, but you don’t spend minutes. Minutes are precious. Instead, you leave that site, open up a new tab, and, in this particular case, put the name of the organization into your browser, maybe with the word “funding.” And within a minute, you see, Wait a second, there are a lot better, a lot more authoritative sites that don’t have this kind of dark cloud of conflict of interest hanging over them. That’s what critical ignoring is about. It is making sure that where we point our attention is a worthy object of our attention.
MATTHEW RASCOFF: It was one of the best chapters, honestly. And I think it has broad ramifications, not just for verification and information literacy, but also for well-being on the internet, for mental health, for doomscrolling. All these behaviors are not just about trust; the broader implications to me seem immense.
SAM WINEBURG: I mean, think of, for instance, the California election, an initiative state. If there are 20 initiatives on the state ballot and the average person spent 10 minutes on each, that would be 200 minutes, more than three hours. That’s a big act of citizenship. The question to ask in a digital age is, after they spend 10 minutes on each one of these initiatives, do they emerge better informed or more confused? The question that we work on is, how can we use those 10 minutes to increase the probability that they’ll be better informed rather than more confused?
MATTHEW RASCOFF: OK, let’s turn to the audience questions. We’ve got a lot of them. There are 15 here, so I’m not sure we’re going to be able to get through all of them. But what I’m going to try to do is possibly group them a little bit.
So there’s a lot of questions about AI. And I know you added a postscript to the book about AI, but let me just give you a flavor of the questions. Is AI going to turbocharge these problems? Will AI lead to greater confirmation bias?
You talk about Google-driven confirmation bias, how it gives you answers based on the keywords that you’re searching on, not necessarily an objective answer, just results that match. And you quote Google on this. Is AI going to make that even worse? Because it’s serving the customer. It’s giving you what you want. It wants to satisfy you.
There are questions from technologists who are building AIs, asking how we could do this better. One says, I’m an AI engineer and I want to hear how technology could support what you recommend. Do we need AIs, for example, that challenge you, rather than confirming you?
Should we program them to push back against you to be your sparring partner, rather than your confirmation bias machine? So how are you thinking about AI in the framework that you developed here?
SAM WINEBURG: OK, so a caveat. First of all, we’re all so new at this that we’re telling tales out of school and speculating. But this thing is coming at us so fast that if we don’t think hard about it, we’re going to find ourselves behind the eight ball very, very quickly, if we’re not there already.
Is AI turbocharging confirmation bias? Absolutely. I mean, you can tailor your prompts so that you get more and more answers that feed your own ego. Now, could we introduce a forcing function?
Could we program a forcing function accompanying an AI response that says, here are alternative views that disagree with the first one? So again, this is where, ultimately, human beings are going to have to learn a little bit about prompt engineering. Is there instruction going on in schools about this? Absolutely not.
Is there instruction in schools for students to understand that they’re talking to a machine that is based on statistical probabilities and finding large patterns within trillions of words, that there is not a homunculus behind the machine? There is not a thinking — the machine is not thinking. The machine is computing.
And it’s computing in very sophisticated ways. And it’s programmed to mimic human speech in ways that are preternatural. I mean, if you’ve spent time with ChatGPT, you can’t help but be flabbergasted.
But the problem is that convincing, persuasive prose does not mean that it’s accurate. And so the thing that worries me, and I can give you examples, is that these AI products are commercial products. And they’re trying to reduce the cost of processing for each question that you ask.
So I’m going to give you a very quick example. I asked ChatGPT a couple of weeks ago about the Battle of Lexington, April 19, 1775, the Lexington Commons, the British are coming. And I said, When the British regulars reached Lexington Green, what were the casualties of British soldiers?
And the model says, There were something like six or seven British casualties, da, da, da, da. And then I said, Are you sure? And then it adopted some qualified language. It said, Well, historians disagree about the number of British casualties.
And then I pressed it because I happen to know this particular topic. It’s something that we’ve developed curriculum for, on the Battle of Lexington. It’s often the way that the Revolutionary War is taught in school.
So we’ve been teaching it in our history project for years, so we know a lot about this particular event. And I said, who are the historians that claim that there were some British casualties? And then, again with more processing power for the model, it goes back and says, Thank you very much. No, it seems that there were no British [casualties]. And so, again, this was a whole sequence.
There are people, like Harold Abelson, who say, if you get a response from the model that seems fishy, you should check it out. Well, what high school kid is going to do that? You only know something smells fishy if you have the background knowledge to smell the fish. The only reason I was able to ask those follow-up questions is because I have background knowledge and could recognize when the model was selling me a canard.
And so this is a problem. Now, are there other ways that the technology can be used that are educationally beneficial? I believe so. So one of the amazing things you can do is that you can say, tell the story of Hiroshima — the bombing of Hiroshima from the perspective of Harry Truman.
Tell it from the perspective of Howard Zinn. Tell it from the perspective of a Japanese historian. And automatically, it can do something that a textbook can’t do. It can give you a range of different interpretations that you then have to contend with.
What are the facts versus what’s an interpretation? So I don’t want to sound like an AI Luddite. There are a lot of powerful uses that this technology can be harnessed for in an educational context, and we need to figure them out, at the same time as recognizing it can do a lot of mischief in the meantime. And so I would be very cautious about putting a chatbot in the hands of kids right now, because they are the ones who know the least about any of the topics they are going to search for. If anybody is curious about this, within the next couple of weeks we have a piece coming out in the Boston Globe precisely on this topic, which argues that teachers should first start students not with artificial intelligence but with human intelligence, using Wikipedia as an example.
MATTHEW RASCOFF: Thank you. I look forward to the piece in the Boston Globe. And we’ll share it on social media as well. So let’s return to the topic of the election, there’s a bunch of questions about it.
Greg asked, What would you recommend as the highest-ROI actions we could take to help in the upcoming election? An anonymous attendee asked about the partisan divide: will the book and your lessons reach across it? In theory, the open internet should allow us to read viewpoints that conflict with our own.
In reality, it’s created echo chambers. That seems somewhat paradoxical, but also maybe solvable, I don’t know, if we had more mechanisms for lateral reading, for cross-checking. And I think you make the point in the book — I mean, you just made it now — that when there is no trust, it supports closed-mindedness and can end up in fascism, because you don’t trust anything other than your political leaders and you repeat slogans.
So what are the political implications of this? I know you said you’re not a political scientist. But could you draw out some of the social and political implications, and how we should be thinking about this with an election coming up so soon?
SAM WINEBURG: Well, the first thing is that we’re going to be flooded by all kinds of images that have been tampered with. And so you’re going to have increasingly sophisticated deepfakes. I saw one with Ron DeSantis that I thought was pretty good, where he is apologizing to President Trump for even challenging him.
And it’s a deepfake, and it’s one of the better ones that I’ve seen. But I’m more concerned with cheap fakes than deepfakes, because any 10-year-old can make a cheap fake. What I mean is taking an existing picture from an event, changing the caption, and reposting it. It’s something we saw start really big time with the war in Ukraine.
It certainly increased in ferocity with the war between Israel and Hamas: these horrific pictures claiming to be battleground scenes that, actually, in the latter case, were from Syria in 2014 and 2015. But the caption frames it. So when we see something that seems to do an end run around our prefrontal cortex and hit us in the solar plexus, and we are responding emotionally, what’s the first thing to do? We use an acronym in the book called SIFT.
SIFT stands for stop, investigate the source, find a better source, and trace it back to the original. And people often skip over stop. But the first thing that you should ask is, wait a second, I am a partisan in this. I have my views.
I am not neutral on the political scene. I’ve got things that are very deep to me and things that make my blood boil. And so when something comes across my Facebook feed that seems to confirm my views and makes me angrier, because rage sells on the internet, is the first thing I should do to press share?
No. The first thing I should do is to take a deep breath and ask myself the most important question you can ask when you see something on the screen. Which is, and I’m going to say it slowly, do I really know what I’m looking at? Do I really know that this is a picture of a battlefield scene in the Donbas region of Ukraine?
Or could this be anything else? Is it from somebody I recognize, a source that I recognize? Is it from a source that if they’re wrong, they have consequences from being wrong?
If David Muir from ABC News posts something that turns out to be fake news, he takes a big hit. If Mr. Whatever Influencer does it, it’s just another thing that he does, and he goes on to the next thing. And so there are a series of questions to ask.
Do I really know what I’m looking at? And does the person who posts it have something serious to lose in terms of their credibility if it turns out to be false? Those two questions, by asking them, are not going to create an error-free search, but they’re going to take out a significant chunk of the kinds of errors you’re going to typically make.
MATTHEW RASCOFF: Thank you. So here’s a question building on the previous one, because we have tech practitioners on this call, like engineers who are designing and developing social media platforms. Megan asks, What would platforms need to do to guide or better shape civic behaviors and effectively push back against the tendencies that you were talking about, the political tendencies, the attention tendencies?
And just to extend that, what is your take on efforts like the Starling Lab, the project at Stanford and USC that uses Web3 to watermark content and develop more trust-based technology? Meta has a watermarking system for generative AI images as well. Are those areas where you see potential, the technological solutions? Speak to the engineers and the product managers and the Silicon Valley folks who are on this call: what would you like them to do?
SAM WINEBURG: I think that we can become creative in thinking about forcing functions, things that make people stop and make people ask a question and create a latency before they can share. And so I think that there are small, not terribly onerous and not terribly annoying forcing functions that we can start to A/B test with and see if they have some effect, if they create a little bit more patience for people. I mean, creating a latency, I think, is an important thing. And then there can be —
MATTHEW RASCOFF: Like, Do you want to read this article before sharing it? I think X had that feature for a while.
SAM WINEBURG: Yeah, well, but all you have to do there is say “no” and then you’ve circumvented it. So there’s no latency. Maybe a short video about fact-checking, something like 11 seconds, is a more serious forcing function. People will find ways to circumvent it, or listen to something else. But at least it’s there.
And again, these things need to be tested. We need to seed these ideas out there. I am ultimately an applied psychologist. I’m a believer in data. We really don’t know what works.
The other thing, again, with an election coming up: I don’t know about the watermarking, and I don’t know about the data on it. I worry about the plethora of material that can be issued that will circumvent the watermarking process.
So let me just use Grok as an example. Musk has taken the gloves off. And there are some really, really nasty things that have been done with Grok, with people creating pictures of people putting envelopes into ballot boxes, as if from a surveillance camera. And Grok is really good.
And then there’s this thing that rankles me to no end, and I wonder what the technologists out there think about it. There’s all of this advice out there about how to spot AI-generated images.
Look at the fingernails, see if the hair is right. Well, folks, come on, get real. If you have an image that DALL-E or Grok has made and you don’t like something in it, there’s this thing called Photoshop. And with a little bit of help and a little bit of pixel first aid, you can get that thing looking really, really good.
And so fine, you want to turn your grandmother into an AI-image detective? Give me a break. And this is exactly the kind of thing that returns us to attention. Do we really want people sitting there with a microscope or a magnifying glass in front of the thing, analyzing the fingernails to see if they’re real?
This is some of the stupid advice that we see being purveyed right now. No, if you get a picture of Donald Trump and Kamala Harris both embracing an AK-47, then what you do is use TinEye, or Google Lens, or whatever you want, to see where else the image has appeared. Or you look at the comments on the post, and if it’s been up for a while, somebody’s going to have posted, Hey, you’ve been duped, and this is why.
Or you just go back to lateral reading and put into your browser “Donald Trump, Kamala Harris holding a gun.” And then you’re going to see Reuters debunking it. And you’re going to see FactCheck.org debunking it, and PolitiFact. And you’re going to save yourself an awful lot of grief from having to apologize to the people that you sent it to.
So there are a variety of things that we can do. But I think you’re absolutely — the spirit of the question is right on point. We are going to be faced with a hurricane of this stuff in the next three weeks.
MATTHEW RASCOFF: Thank you. Not the most optimistic note to end on. But the whole conversation has been fantastic, Sam. I recommend the book so highly. It’s very readable. It’s very accessible.
I think it’s assignable to students also. And it’s a fantastic resource. It’s a contribution, I think, to the curriculum, to democracy. And I hope it gives a framework for people to take on these challenging issues and try to make progress on them.
SAM WINEBURG: Thank you.
And the Hewlett Foundation had gone all in on open education resources. And in 2004, they gave us a substantial grant because we made a case that if we want to democratize — if we want to bring equity to educational materials, they should be free and high quality. Because otherwise, only the hoity-toity districts can pay for the kind of most innovative curriculum.
And they bought the idea. They gave us $500,000 in 2003. And we created a website called Historical Thinking Matters. And it was completely free. You could download all of the lesson plans and document-based arrangements that we created.
Now, since that time, that curriculum, which morphed into the Reading Like a Historian curriculum, changed somewhat. And again, fortunately, we had support from places like the Library of Congress. But today, we have 16 million downloads of our curriculum and our assessments, none of which cost anyone who downloaded one cent.
So we are — I'm all in on it. I think that — and we had to make a decision when we spun out of Stanford, were we going to be a for-profit company? Were we going to be a B Corp, like Patagonia, a company with a socially responsible, or we were going to go straight 501(c)(3) nonprofit? And that was not a hard choice for us.
We are all educators. None of us went into this to become rich. We went into this to try to do something significant for the public good. And sure, yeah, we don't want to be poor, so we do take a salary, but nothing that's crazy.
Now, you asked about the book and the materials in the book. All of the materials in the book are based on exercises and assessments and lesson plans that we have freely available on the Digital Inquiry Group site, digitalinquirygroup.org. Or no, excuse me, I misspoke, it's inquirygroup, one word, dot org.
When you go there or if you google “civic online reasoning curriculum Stanford,” you'll be redirected. It's very easy to find. And so all of those materials — lesson plans, curriculums, videotapes of what it looks like in classrooms, all of those — all you need to do is register on the site and they're yours for the taking.
So that is a part of — that runs in our DNA. Fortunately, we have benefited from philanthropic sponsors, who also believe in this mission, that say, if we want to bring about equity, then high-quality materials need to be free for anybody who wants to find them.
MATTHEW RASCOFF: And I confess, I mean, 16 million downloads, it may make this one of the most impactful educational interventions in the history of Stanford. I mean, that's partly what draws me as Stanford's digital education administrator to this project because it is a digital education. It's not framed as online learning.
But that's exactly what you're doing, it's just not through the means of the typical course or structure of the credit. It's a professional development and a content and a video intervention. But it's so impactful.
And I'm so grateful, honestly, that we have this example of an open project that is sustainable, that foundations see the value, and that you chose to do it in this public realm, in the public interest. It's such a powerful model. I hope for other faculty at other institutions, who are considering their own careers, how they want to make an impact, this, to me, is a model. And what you've done with the Digital Inquiry Group, I hope, it's not just the content, but it's also the way you've gone about it. And for anybody who's trying to make an impact in education, there's a very powerful case study to be had here.
SAM WINEBURG: Thank you. I'm very flattered by those words. I would add that I wish I could say that I sat down on my desk one day and I took a bunch of pieces of paper and Elmer glued them together. And I sat with a ruler in order to make a grand blueprint of how we were going to do this. That's not how life worked out in my case.
A brief story: We did a study in San Francisco Unified School District. And it was done with my student, Abby Reisman. And we wanted to test the kind of Reading Like a Historian curriculum that we were developing, that we were using with our teacher candidates and teacher preparation.
And Abby said, no, let's go into real high school classrooms and do it. And the university gave us about $75,000, which sounds like a lot, but it was really a shoestring. And she devised a summer institute, where volunteer teachers underwent this kind of a document-based approach to teaching history rather than by the textbook.
And lo and behold, 75 lesson plans and seven months later, we got the results and the results were incredibly promising. Yes, the kids did better in the historical understanding. But the thing that really interested the district, we threw in a measure of reading comprehension.
And there was a boost in students' reading comprehension on a nationally recognized No Child Left Behind measure. And that, somewhat sad to say, was about the only thing the district was interested in. And they said, can we make this available to every history teacher?
And we said, sure. And I said, but — and this is 2007. I said we need to do a website. And again, to set the context, 2007, where the only money that was going to educational innovation was STEM. There was nothing going into anything that touched the humanities. It was all math and science.
And so I just said, yeah, we would need a website and it would cost. And I had no idea. Literally, the first figure that came into my mind was what I said, $20,000. Never done a website before. Never done it.
Fortunately, Abby's brother was a jazz musician, but a computer programmer on the side. And so he designed a kind of rough website for us. And then the district said, you need to have it conform to San Francisco's user and password.
And we said, no. That if we're going to go to the trouble to do this and put it up on the internet, then anybody who wants to use it can use it. We're not going to impose these restrictions. And six months later, where somebody tells us about Google Analytics, and we noticed that we're like 250,000 downloads.
That's like way more than the teachers — and then you could locate them on Google. Why are people — how do people in Alaska know what we're doing? Why did somebody in Tallahassee, Florida, why did somebody in Little Rock, Arkansas, why are they downloading all of our stuff?
And we realized, oh my God, this internet thing is a distribution system. And if you leave quality materials by the digital curb, they can start to go viral. And it really was a rude awakening for us.
And it was like — and it was at that point the lightbulb went off, Matthew, of, wow, a university professor no longer needs to contact Pearson, or Holt, or a textbook. Using the internet as a delivery system with a little bit of foundation money, you can directly reach teachers. And that for us was a game changer.
MATTHEW RASCOFF: Yeah. I like the line you quote, the internet, the world's best fact checker and the world's best bias confirmer. And I feel like you're searching out the better angels of the internet and trying to put them to work on behalf of educational values, on behalf of trust. And it's a very optimistic take, actually, in this book. That's how I read it about the possibilities.
SAM WINEBURG: I mean you can't be an educator without being an optimist.
MATTHEW RASCOFF: I hope that's true. I hope we can maintain that optimism. Let's talk about — there is a section at the end of the book about how to deal with the problem of excess cynicism. So while we're talking about optimism, I think you make the case that too much cynicism is actually just as dangerous as too much naïveté when you're navigating the internet. And you give the example of mistrust in institutions like the Mayo Clinic, which may make some occasional mistakes, but on balance, the Mayo Clinic is a reliable source.
And I think, more recently, we've been seeing questioning of official sources regarding Hurricanes Helene and Milton, where there's rampant misinformation that's extremely dangerous in an emergency response, that's threatening people's lives. People aren't trusting reliable sources of information. So a lot of information literacy, and a lot of the book, is about being more skeptical, reading carefully, lateral reading, fact-checking.
But you also make the case that it can go too far, to the point where you basically reject the reliable information that's there from trusted institutions. And that problem seems so intractable, given the information ecology we're all surrounded by, given the political polarization. The idea that storm responses could be politicized is just baffling to me.
And I lived in North Carolina; I'm familiar with polarized societies. How are we going to get out of that? What is the solution there? Give me an optimistic take on that.
SAM WINEBURG: So we introduce the concept of trust compression in the book, and you refer to it. It's when the distance between the Mayo Clinic and Bob and Joe's homeopathic remedies collapses to almost nothing: Bob and Joe, well, they're trying to make money, and those Mayo Clinic people, well, they need money too.
And so we say that being gullible on the internet is not just believing everything; it's also believing nothing. That ultimately leads to nihilism. And in some ways, this is chapter and verse right out of Jason Stanley's treatise on fascism.
A fascist leader wants to diminish all information sources and get people to a place where they throw their hands up and say, “You can't believe anything,” because then you can only believe the strongman. And so that's a very, very dangerous place to be. The positive thing that you're going to hear from me about the internet is exactly that line you quoted from Michael Lynch at the University of Connecticut.
The internet is the best bias confirmer we could ever invent, at the same time as being the best fact-checker we could ever invent. I can give you an example. Somebody, post-debate, sent me a TikTok. And the TikTok showed Kamala Harris's earring.
And the earring resembles a little microphone sold by a Swedish company, one that also has a pearl in it. And the TikTok said, Whoa, look at this, she was cheating. She was getting Bluetooth instructions. That's why she was able to respond so fluently.
Now, if you go on TikTok, you will find other videos like that. But if you search a few key words, Kamala Harris, earrings, the debate, you will find all kinds of verified pictures showing that her earrings, which were from Tiffany's, are not the same as these earrings. The earrings from Tiffany's have a pearl, but they have two small gold bars going down.
Whereas this particular earpiece that's a microphone, when you look at it carefully, has only a kind of round piece of metal that attaches to the pearl. You can find a Wired article. You can find a Reuters article.
Even the New York Post, which is favorable to the other candidate and will often print stuff that's on the line, had to conclude that this was a conspiracy theory. So again, there's the ability to read laterally, a term you used but that I haven't introduced yet. It's our term for this: if you are on an unfamiliar site, recognize that on that site's About page, they can say whatever they want to say. They can put up all of the bells and whistles of credibility, credibility tokens that are cheap.
So, for instance, a dot-org website, or 501(c)(3) status, which the IRS grants to, I think, something like 96% of all applications in a given year. It's almost like, oh, you're getting up to get a 501(c)(3), get me one while you're up there too. So that is no longer much of a token of credibility.
The longer you stay on a site, taking in the credibility tokens it can display, the greater the probability you'll get sucked into its vortex. The way you prevent that is by leaving the site and using the rest of the internet as a source of reputational calibration. It's much harder to game your reputation across the entire internet.
It's not impossible. People can change a Wikipedia page. But you'd have to line up an awful lot of people if you've done something really wrong. So, for instance, we use the example of the International Life Sciences Institute, ilsi.org, which looks like a great organization.
I think it has a $16-million-a-year budget, the spiffiest site, a peer-reviewed journal, a scientific advisory board, all of the bells and whistles. I've watched people, smart people, not dumb people, spend 10 minutes on this site and say, yeah, this seems OK. Yet within 30 seconds of putting the name of this organization in your browser, you will start to smell that not everything is kosher with this particular site.
MATTHEW RASCOFF: So you don't have to give it away. Let that be an exercise for the reader, where people can go digging themselves. I have one more question, but I just want to invite the audience to use the Q&A feature and start filling that up with your questions. And we'll move to audience questions after this one.
So my last question for you, Sam, is your argument for critical ignoring at the end of the book. And your analogy is to a cyclist, who has to conserve energy in order to do the whole race. As a cyclist myself, it resonated with me.
In a world of information abundance, attention is our most precious asset. Yet, people are working overtime to make you waste it. That's your quote. And I have heard that in Silicon Valley, people talk about the market for attention.
The original transformer paper is called “Attention Is All You Need.” Netflix has said they're competing with sleep for your attention. You quote Simon on this, but attention is certainly the precious resource today.
And I wonder, it seems like such an uphill battle for us when we're working against dark patterns. We're working against the TikTok algorithm. I saw that TikTok is now being sued, basically over this issue and its addictive nature.
How should we think about the conservation of this? You're up against an addictive drug, but it's legal, or at least it is for now. Do we have any hope in conserving our attention?
There was an article last week, I think, about how novels are no longer being assigned in English literature courses. At Columbia University, Andrew Delbanco, a great English professor, is no longer assigning Moby-Dick in his course. He assigns the short stories by Melville instead, because we have to adapt to the times.
That, to me, is the consequences of our loss of attention. It has profound educational consequences for students, who are just — they're being shortchanged in their education. Because that's the adaptation that the faculty are making in order to get the students to do the reading.
So can you give us a framework for thinking about, how do we conserve our energy and our attention? Are there habits that you use yourself in order to protect your own attention to write a book like this, let's say, or to read novels, if you do that still? What are your strategies? Old school.
SAM WINEBURG: It is an hourglass. It is an hourglass. And when I face the blank screen and I have to confront my own ignorance and I want to Google my name to reassure myself that there's somebody that loves me, I impose this hourglass and say, you cannot go online. Sit there and work through the confusion.
And yes, if you have to pee, you can go get up and pee. But that's it. That's the only thing you can do. And I found that it is absolutely necessary because I am no less susceptible than anybody else to doomscrolling, to when there's a difficult problem to go look at cat videos on YouTube.
I mean, I'm just as bad as anyone. So yes, you need to come up with what works for you. I also have Freedom on my phone, which is a program that doesn't allow me to go to social media sites when I'm trying to do something hard and I hear voices of, You're never going to get it, you're not smart enough, blah, blah, blah, blah. But let's just get back to critical ignoring and what it is.
Our time is limited. Tim Wu, I think, is the person who really brought this home, in his book The Attention Merchants. It was Wu, a professor of law at Columbia, who talked about how these companies employ some of the most well-paid behavioral scientists to figure out how to keep us on their sites and keep our eyes glued to the screen so that we see more ads. And this is what we're up against.
We're up against some genuinely brilliant people who are trying to steal our attention. And it was Herbert Simon who said that an overabundance of one thing creates a scarcity of some other commodity. And when the overabundance is information, what it creates is a scarcity of attention.
This was Simon, prophetically, in the early 1970s; he saw the writing on the wall. So what can we do? It goes back, for instance, to that website I referred to, ilsi.org.
You might be looking for information about, Is too much sugar really dangerous? And you go on to this website and you don't really know who's behind it. And you spend 20 minutes and you're not even sure if it's a good place.
What critical ignoring says is that before you give your attention over to a site, the most important decision you can make is to determine whether that site is worth your time. Critical ignoring is not turning your back on something. It's more like walking through the supermarket and chancing upon the aisle with all the potato chips. Rather than going down the whole aisle, where you then see the tortilla chips, and then the Cheetos, and then the Fritos, you turn around and recognize, no, I'm not going to go there.
When you go to a site like this, you recognize that it claims to be a nutrition site, you might glance at the board of directors, but you don't spend minutes there. Minutes are precious. Instead, you leave the site, open up a new tab, and, in this particular case, put the name of the organization into your browser, maybe with the word “funding.” Within a minute you see, wait a second, there are a lot better, a lot more authoritative sites that don't have that kind of dark cloud of conflict of interest hanging over them. That's what critical ignoring is about. It is making sure that where we point our attention is a worthy object of our attention.
MATTHEW RASCOFF: It was one of the best chapters, honestly. And I think it has broad ramifications, not just for verification and information literacy, but also for well-being on the internet, for mental health, for doomscrolling. All these behaviors aren't just about trust; the broader implications seem immense to me.
SAM WINEBURG: Think, for instance, of a California election in an initiative state. If there are 20 initiatives on the state ballot and the average person spent 10 minutes on each, that would be 200 minutes, well over three hours. A big act of citizenship. The question to ask in a digital age is, after they spend those 10 minutes on each one of these initiatives, do they emerge better informed or more confused? The question we work on is, how can we use those 10 minutes to increase the probability that they'll be better informed rather than more confused?
MATTHEW RASCOFF: OK, let's turn to the audience questions. We've got a lot of them. There's 15 here, so I'm not sure we're going to be able to get through all of them. But what I'm going to try to do is possibly group them a little bit.
So there's a lot of questions about AI. And I know you added a postscript to the book about AI, but let me just give you a flavor of the questions. Is AI going to turbocharge these problems? Will AI lead to greater confirmation bias?
You talk about Google-driven confirmation bias, how it gives you answers based on the keywords you're searching on, not necessarily an objective search, just results that match. And you quote Google on this. Is AI going to make that even worse? Because it's serving the customer. It's giving you what you want. It wants to satisfy you.
There are questions from technologists who are building AIs, like, how could we do this better? One person writes, I'm an AI engineer and I want to hear how technology could support what you recommend. Do we need AIs, for example, that challenge you rather than confirm you?
Should we program them to push back against you to be your sparring partner, rather than your confirmation bias machine? So how are you thinking about AI in the framework that you developed here?
SAM WINEBURG: OK, so a caveat. All of us are so new to this that we're telling tales out of school and speculating. But this thing is coming at us so fast that if we don't think hard about it, we're going to find ourselves behind the eight ball very, very quickly, if we're not there already.
Is AI turbocharging confirmation bias? Absolutely. I mean, you can tailor your prompts so that you get more and more answers that feed your own ego. Now, could we introduce a forcing function?
Could we program a forcing function accompanying an AI response that says, here are alternative views that disagree with the first one? So again, this is where, ultimately, human beings are going to have to learn a little bit about prompt engineering. Is there instruction going on in schools about this? Absolutely not.
Is there instruction in schools for students to understand that they're talking to a machine that is based on statistical probabilities and finding large patterns within trillions of words, that there is not a homunculus behind the machine? There is not a thinking — the machine is not thinking. The machine is computing.
And it's computing in very sophisticated ways. And it's programmed to mimic human speech in ways that are preternatural. If you've spent time with ChatGPT, you can't help but be flabbergasted.
But the problem is that convincing, persuasive prose does not mean that it's accurate. And so the thing that worries me, and I can give you examples, is that these AI products are commercial products. And they're trying to reduce the cost of processing for the question that you give.
So I'm going to give you a very quick example. I asked ChatGPT a couple of weeks ago about the Battle of Lexington: April 19, 1775, Lexington Common, the British are coming. And I asked, When the British regulars reached Lexington Green, what were the casualties among British soldiers?
And the model says, There were something like six or seven British casualties, da, da, da, da. And then I said, Are you sure? And then it adopted some qualified language. It said, Well, historians disagree about the number of British casualties.
And then I pressed it because I happen to know this particular topic. It's something that we've developed curriculum for, on the Battle of Lexington. It's often the way that the Revolutionary War is taught in school.
We've been teaching it in our history project for years, so we know a lot about this particular event. And I said, Who are the historians who claim that there were some British casualties? And then, again, this requires more processing power from the model, it goes back and says, Thank you very much; no, it seems that there were no British casualties. So, again, this was a whole sequence.
There are people like Harold Abelson who say, if you get a response from the model that seems fishy, you should check it out. Well, what high school kid is going to do that? You only know something smells fishy if you have the background knowledge to smell the fish. The only reason I was able to ask those follow-up questions is that I have background knowledge, and I could recognize when the model was selling me a canard.
And so this is a problem. Now, are there other ways that the technology can be used that are educationally beneficial? I believe so. So one of the amazing things you can do is that you can say, tell the story of Hiroshima — the bombing of Hiroshima from the perspective of Harry Truman.
Tell it from the perspective of Howard Zinn. Tell it from the perspective of a Japanese historian. Automatically, it can do something a textbook can't do. It can give you a sense of different interpretations that you then have to contend with.
What are the facts versus what's an interpretation? So I don't want to sound like an AI Luddite. There are a lot of powerful uses this technology can be harnessed for in an educational context, and we need to figure those out, at the same time as recognizing it can do a lot of mischief in the meantime. And so I would be very cautious about putting a chatbot in the hands of kids right now, because they are the ones who know the least about any of the topics they are going to search for. If anybody is curious about this, within the next couple of weeks we have a piece coming out in the Boston Globe precisely on this topic, arguing that teachers should start by teaching students not about artificial intelligence but about human intelligence, using Wikipedia as an example.
MATTHEW RASCOFF: Thank you. I look forward to the piece in the Boston Globe. And we'll share it on social media as well. So let's return to the topic of the election, there's a bunch of questions about it.
Greg asked, What would you recommend as the highest ROI actions we could take to help in the upcoming election? Anonymous attendee asked about the partisan divide, Will the book and your lessons reach across the partisan divide? In theory, the open internet should allow us to read viewpoints that conflict with our own.
In reality, it's created echo chambers. That seems somewhat paradoxical, but also maybe solvable, I don't know, if we had more mechanisms for lateral reading, for cross-checking. And I think you make the point in the book, and you just made it now, that when there is no trust, it supports closed-mindedness and can end in fascism, because you don't trust anything other than your political leaders and you repeat slogans.
So how do you see — what are the political implications of this? I know you said you're not a political scientist. But could you draw out some of the social implications of this, the political and how we should be thinking about this with an election coming up so soon?
SAM WINEBURG: Well, the first thing is that we're going to be flooded by all kinds of images that have been tampered with. And so you're going to have increasingly sophisticated deepfakes. I saw one with Ron DeSantis that I thought was pretty good, where he is apologizing to President Trump for even challenging him.
And it's a deepfake, and it's one of the better ones I've seen. But I'm more concerned with cheap fakes than deepfakes, because any 10-year-old can make a cheap fake. What I mean is taking an existing picture from an event, changing the caption, and reposting it. We saw it start really big time with the war in Ukraine.
It certainly increased in ferocity with the war between Israel and Hamas: horrific pictures claiming to be battleground scenes that, in the latter case, were actually from Syria in 2014 and 2015. But the caption frames it. So when we see something that seems to do an end run around our prefrontal cortex and hit us in the solar plexus, something we respond to emotionally, there's a first thing to do. We use an acronym in the book called SIFT.
SIFT stands for stop, investigate the source, find a better source, and trace it back to the original. And people often skip over stop. But the first thing that you should ask is, wait a second, I am a partisan in this. I have my views.
I am not neutral on the political scene. I've got things that run very deep for me and things that make my blood boil. So when something comes across my Facebook feed that seems to confirm my views and make me angrier, because rage sells on the internet, is the first thing I should do to press share?
No. The first thing I should do is to take a deep breath and ask myself the most important question you can ask when you see something on the screen. Which is, and I'm going to say it slowly, do I really know what I'm looking at? Do I really know that this is a picture of a battlefield scene in the Donbas region of Ukraine?
Or could this be anything else? Is it from somebody I recognize, a source that I recognize? Is it from a source that if they're wrong, they have consequences from being wrong?
If David Muir from ABC News posts something that turns out to be fake news, he takes a big hit. If Mr. Whatever Influencer does it, it's just another thing he does, and he goes on to the next thing. And so there is a series of questions to ask.
Do I really know what I'm looking at? And does the person who posts it have something serious to lose in terms of their credibility if it turns out to be false? Those two questions, by asking them, are not going to create an error-free search, but they're going to take out a significant chunk of the kinds of errors you're going to typically make.
MATTHEW RASCOFF: Thank you. So here's a question building on the previous one, because we have tech practitioners on this call, engineers who are designing and developing social media platforms. Megan asks, What would platforms need to do to guide or better shape civic behaviors and effectively check the tendencies you were talking about, the political tendencies, the attention tendencies?
And just to extend that, I'd be interested in your take on efforts like Starling Lab, the project at Stanford and USC to use Web3 to watermark content and develop more trust-based technology. Meta also has a watermarking system for generative AI images. Are those areas where you see potential, technological solutions? Speak to the engineers, the product managers, and the Silicon Valley folks who are on this call: what would you like to have them do?
SAM WINEBURG: I think that we can become creative in thinking about forcing functions, things that make people stop and make people ask a question and create a latency before they can share. And so I think that there are small, not terribly onerous and not terribly annoying forcing functions that we can start to A/B test with and see if they have some effect, if they create a little bit more patience for people. I mean, creating a latency, I think, is an important thing. And then there can be —
MATTHEW RASCOFF: Like, do you want to read this article before sharing it? I think X had that feature for a while.
SAM WINEBURG: Yeah, well, but all you have to do there is say “no” and you've circumvented it. So there's no latency. Maybe a short video about fact-checking, something like 11 seconds, is a more serious forcing function. People will find ways to circumvent it, or they'll listen to something else, but at least it's there.
And again, these things need to be tested. We need to seed these ideas out there. I am ultimately an applied psychologist. I'm a believer in data. We really don't know what works.
I think that the other thing to do for — again, with an election coming up, I don't know about the watermarking. And I don't know about the data on it. I worry about the plethora of material that can be issued that will circumvent the watermarking process.
And so let me just use Grok as an example. Musk has taken the gloves off. And there are some really, really nasty things that have been done on Grok with people creating pictures of people putting envelopes into ballot boxes from a surveillance camera. And Grok is really good.
And then there's this thing that rankles me to no end, and I wonder what the technologists out there think about it. There's all this advice out there about how to spot AI-generated images.
Look at the fingernails and see if the hair is right. Well, folks, come on, get real. If you have an image that DALL-E has made or that Grok has made and you don't like some detail, there's this thing called Photoshop. And with a little bit of help and a little bit of pixel first aid, you can get that thing looking really, really good.
So fine, you want to turn your grandmother into a detective of AI-generated images? Give me a break. And this returns us to attention. Do we really want people sitting there with a microscope or a magnifying glass in front of the thing, analyzing the fingernails to see if they're real?
This is some of the stupid advice being purveyed right now. No, if you get a picture of Donald Trump and Kamala Harris both embracing an AK-47, what you do is use TinEye, or Google Lens, or whatever tool you want, to see where else the image has appeared. Or you look at the post, and if it's been up for a while, somebody's going to have left a comment saying, Hey, you've been duped, and this is why.
Or you just go back to lateral reading and put into your browser, "Donald Trump, Kamala Harris holding a gun." And then you're going to see Reuters debunking it. You're going to see FactCheck.org debunking it, and PolitiFact. And you're going to save yourself an awful lot of grief, and the need to apologize to the people you sent it to.
So there are a variety of things that we can do. But I think you're absolutely — the spirit of the question is right on point. We are going to be faced with a hurricane of this stuff in the next three weeks.
MATTHEW RASCOFF: Thank you. Not the most optimistic note to end on. But the whole conversation has been fantastic, Sam. I recommend the book so highly. It's very readable. It's very accessible.
I think it's assignable to students also. And it's a fantastic resource. It's a contribution, I think, to the curriculum, to democracy. And I hope it gives a framework for people to take on these challenging issues and try to make progress on them.
SAM WINEBURG: Thank you.