
Justin Reich on Failure to Disrupt: Why Technology Alone Can't Transform Education


Justin Reich, associate professor of digital media in the Comparative Media Studies/Writing Department at MIT and director of the MIT Teaching Systems Lab, discusses his book Failure to Disrupt: Why Technology Alone Can't Transform Education with Shauntel (Poulson) Garvey, co-founder and partner at Reach Capital. This conversation took place November 8, 2023, as part of the Academic Innovation for the Public Good book series. Here are the video and transcript of the event.

Transcript

This transcript has been edited; introductory and closing remarks from the live event have been removed.

SHAUNTEL GARVEY: I am delighted to be joined today by Justin Reich. We are going to be discussing his book Failure to Disrupt: Why Technology Alone Can't Transform Education. And as an edtech investor, I'm always looking for solutions that empower learners, that improve outcomes. But I do agree with the premise of this book, that there's no silver bullet, and technology alone cannot democratize education.

So before we dive in, I did want to remind our readers and our audience that this book was written before the pandemic and before the latest AI craze. So I am wondering from you, Justin, what was the original impetus for writing this book?

JUSTIN REICH: Oh, so — well, thanks so much, Shauntel, for having me. Thanks, Kristen, and everyone for organizing. I used to be a high school history teacher. In 2003, I taught in a classroom where every student had a laptop and we had this internet service called First Class, which kind of did everything on a server that Google for Education now does in the cloud. So I was not the first teacher in the United States to have a chance to teach in a one-to-one classroom. But 2003 was pretty early for that.

And I did a lot of consulting with schools after that. I got a doctorate at Harvard. And I finished my doctorate. And soon afterwards, MIT and Harvard came together and they formed this organization called edX as part of the craze of massive open online courses. Part of why I wrote this book is an agent knocked on my door after a talk that I gave about MOOCs and was like, you should write a book about this. I was like, OK. So sometimes I'm like easily manipulable or you can kind of get me to do things in funny ways. So that was really why I wrote the book.

I think the heart of the book, though, is that if you lived through MOOC mania, as well as lots of other crazes or hype cycles, you could just watch schools spend precious resources on technology-mediated approaches that don't help most students, that — it's a little bit more complicated, but that don't work. I teach classes at MIT. For 10 years, I've had students tell me their edtech story. They come into my class on learning, media, and technology or designing education technology. I say, what's your edtech story? And we are right in the middle of the smartboard generation.

This is a group of students who, like, showed up to middle school and like somewhere in the middle of middle school, all of a sudden these smartboards showed up in the back of their classroom. And they can all tell me how they went unused. How they could observe this big investment that schools were making that just made no difference in their learning. And every dollar we spend in schools has an opportunity cost. Every hour that we spend asking teachers to do professional learning on one thing has an opportunity cost.

And so I would say the main thrust of this book is to help education leaders and educators be more aware of what kinds of arguments technology optimists make. What are some of the common flaws in those arguments? And how can they look really critically at those arguments, not to dismiss all technology, but to really try to make smart investments with the very limited money, and especially the very limited teacher time, that we have to improve schools.

SHAUNTEL GARVEY: Yeah. One of your arguments on that thread is that most, quote unquote, "new" innovations actually fit into three buckets. And a lot of these new innovations aren't actually new if we look at history. So can you talk about those three buckets and maybe give us a few examples?

JUSTIN REICH: Yeah, absolutely. A thing that I try to remind people in the book and the other work that I do is that since the days that we had computers the size of your living room, computer scientists and learning scientists have partnered together to use computers to teach people. This is a 70-year enterprise. There are thousands, tens of thousands of brilliant people who have worked on this. There are many, many millions, maybe there's many billions of dollars that have been spent on these efforts.

And we made a lot of progress. There's a bunch of stuff that we know. It's a common sales tactic to say that a learning technology in schools is new or does something different. But I think if you become familiar with that history, you're like none of this is new. We've had — it is extraordinarily difficult to produce a new education technology. What we produce are modest modifications on things that have already existed. And we know a lot about those things.

So I suggest that in the field of learning at scale, technologies where there are many, many learners and few experts to guide them, there are basically three buckets of tools that we have. And you can classify those tools by asking the question, who organizes the learning experience? Sometimes it's an instructor or a team of instructors who says, the learning experiences go from unit 1 to unit 2 to unit 3 to unit 4. Those are things like massive open online courses; that's instructor-guided learning at scale.

Sometimes we have students interact with some kind of algorithm and the algorithm evaluates their performance. And on the basis of their performance, gives them some different thing to do. So there are parts of the Khan Academy platform that work on that, lots of intelligent tutoring systems that work on that. You solve a math problem. If you do it well, it gives you a harder one. If you do it not so well, it gives you an easier one, remediates, that kind of thing. That's algorithm-guided learning at scale.

And then there are online learning environments where groups of people come together to create learning experiences for each other. I call that peer-guided learning at scale. And things like the Scratch programming community would be an example of peer-guided learning at scale.

And we just know a bunch about these three buckets. Self-paced online learning in topics that people are required to learn is incredibly difficult. Most people are not successful at that. The people who tend to be successful are already educated, already affluent folks. So if you're building instructor-guided learning at scale, pretty hard to do it in a way that doesn't disproportionately serve already educated, already affluent people.

We have tried for 70 years to do various versions of algorithm-guided learning at scale. We've had like a little bit of progress in math, a little bit of progress in computer science, a little bit of progress in early language acquisition, though not really in first language acquisition like reading kinds of things. Lots of studies on this stuff. They work for some students, in some subjects, in some contexts, and not others.

And then almost every — I bet almost everyone who's attending this or watching this can think of some kind of learning experience they participate in online where they pursue a hobby. You learn how to do your hair or do your nails, beat a level in a video game. There's all kinds of peer-guided learning that we do online that almost all of us participate in and that is really fun. And when we try to do it in schools, it really flops an awful lot. Taking those kinds of systems and putting them into required subjects in time-limited class periods where we focus a lot on individual assessment, it's just two different kinds of learning systems that don't interface super well with one another.

So the next time any new AI thing comes along that's going to be trying to teach kids, you can ask: is this instructor-guided learning at scale? Is it peer-guided learning at scale? Is it algorithm-guided learning at scale? And you don't have to say to yourself, we've never encountered generative AI before. We have no idea how these things will work. Be like, no. What is the nearest thing that it's like? How well has that thing worked, for whom, under what circumstances? And that's going to give us some good guesses to start figuring some of these kinds of things out.

And I hope that process sounds tractable to people, especially in contrast to the way that education technology enthusiasts try to describe technologies as unfathomable, as miraculous. I mean, there's a ton of magical thinking that happens around AI. And for sure, some of the math is hard, but the basic principles behind a bunch of it are totally tractable and understandable to regular people. And the things that these tools are trying to do are totally tractable and understandable. They are connected to things that we've tried to do before. And we can cut through the magical talk and be like, OK, what is this thing? How well have they worked in the past? And why might this one be a little bit better or not?

SHAUNTEL GARVEY: Yeah. And the other framework that you talk about, in terms of how we evaluate these new innovations, is what you're calling as-yet intractable dilemmas. You lay out four. So one, most educators will use tech in familiar ways. So you brought up smartboards. I'm just going to take the same presentation I was going to do on a projector and just put it on a smartboard. So just kind of using tech in those familiar ways.

You talk about the trap of routine assessment. At large scale, we tend to only be able to assess routine tasks versus those more meaningful tasks like complex communication or unstructured problem-solving. You touched on this: those with greater resources often benefit more from technology innovations. And then just ethical and privacy concerns, which actually limit our ability to do a lot of data collection and experimentation. So if we look back at one of the innovations that you just mentioned, and given that we have this higher education audience, let's look at MOOCs. What would you say went wrong? Why did MOOCs not live up to their potential?

JUSTIN REICH: Yeah. Here are two things. So first of all, everyone watching this should know that Shauntel was an extremely close reader of this book and probably could answer a bunch of the questions that I am presenting here. And I'm very grateful to you, Shauntel, for giving — it's an honor to get such a close read and such thoughtful questions. You did as good a job summarizing those things as I would have.

There are two truths that show up over and over again in education technology. If you want to make a bet about the future of education technology, here are two things that are very likely to be true. When teachers get access to new technologies, they use them to extend existing systems. They use them to do the kinds of things they were doing before. And where there are benefits of new technologies, they tend to accrue to more advantaged people. They tend to accrue to folks with the financial, social, and technical capital to take advantage of new innovations, even when those innovations are free. You can bet on those two things with whatever generation of technology comes along for the decades ahead. I feel really good about those.

SHAUNTEL GARVEY: Free doesn't make it accessible to everyone and —

JUSTIN REICH: Free can create access. There's a scholar named Paul Attewell who wrote a great paper in 2001 called "The First and Second Digital Divides." And he said there's a first digital divide of access. It is always the case that some people have access to stuff, and when there's new stuff and it's expensive, some people don't have access. But he said the other digital divide is of usage. And even by 2000, there were really 20 or 30 years of both quantitative and qualitative studies.

So anthropological studies in classrooms and homes, quantitative studies from surveys and test scores, that said more affluent people and people from the majority culture, white in America, are more likely to be able to use learning technologies to do complex kinds of thinking, creative kinds of expression with more adult mentorship and support. And lower income folks, folks from marginalized communities, are more likely to use technology for drill and practice individually, with less adult support and mentorship. So like even if you can snap your fingers and give everybody all the stuff, they still are given different opportunities to use those things with different kinds of support.

So why didn't MOOCs work? Well, the first problem with MOOCs is that the vast majority of people find self-paced online learning incredibly hard. For most of us, when we learn stuff, we learn stuff because we care about the teacher and our peers. You sit a kid down in algebra class in high school, you sit them in introductory bio or their general Institute requirements in college. And sometimes they care about that stuff. A lot of times they don't. And when they don't care about it — and it's good to teach people things they don't care about. I'm not saying that we should rearrange the system that way. But the reason why they learn it [is] because they care about the professor, they care about their peers, they care about external incentives.

It's just — most people find self-paced online learning really hard. The people who are good at it tend to have an apprenticeship in the formal education system. So my sort of bumper sticker for MOOCs is that they prove to be good for people who are earning their second master's degree. Like by the time you're already good at learning and you have a lot of incentives and you have a lot of resources to be able to do it, they work OK.

And then the reason why they didn't democratize education, which was one of their plans and goals, is that for people who struggle with learning, for people who don't have great access to educational systems, for people who don't have a great apprenticeship in the educational system, what they need in order to learn is other people. They need peers. They need coaches. They need advisors. They need support. Self-paced online learning doesn't have that stuff baked into it. Those are probably the two really missing pieces.

And it doesn't mean — it doesn't mean that MOOCs like disappeared and it doesn't — like Coursera is out there making $1 million or whatever. I mean, millions of dollars. And there are lots of people who take classes. There are lots of people who pass those classes. Those can benefit folks. But they didn't disrupt the educational system. Like higher education was not transformed.

I borrow this language from Mimi Ito, that educational systems domesticate new technologies. They take new technologies and they go, you fit here. Like, here's the whole system and you get to go into this slot. You're pretty good at helping people earn their second master's degree so we'll use you here. Then we're not going to use you in all of these other kinds of places.

So again, as new technologies are coming along, AI is not going to disrupt educational systems. The educational systems are going to domesticate AI tools. They're going to go, oh, that thing, that's actually kind of useful for this, and we'll probably find ways that it helps some learners in some contexts. For most of the things that we work on, it's a lot easier to figure out how you help already advantaged learners fly even farther and faster in the educational system than it is to figure out how you help folks who don't have the same level of resources and support do as well as their more affluent peers.

SHAUNTEL GARVEY: And that's the harder work, I would say. And what's also interesting is that although we at Reach did not invest in MOOCs, MOOCs did receive a lot of VC funding, funding from venture capital. And maybe VC might be part of the problem.

And like you said, I did do a close reading of this book. So on page 135, you write that venture capital firms, maybe Reach included, are structurally incentivized to be more concerned with getting a return on their investment than in supporting products that substantially improve education. So it sounds like there might be an incentive problem. How do you think we could change the incentives so that VCs can be more aligned to investing in solutions that do improve outcomes?

JUSTIN REICH: Yeah. And I think I was talking about VCs there because I was thinking particularly about the start-up choice that they have. But I don't want to say that venture capital is different from other forms of capitalist enterprises. Publishers have that problem as well: people want to sell products because they want to make a living, because they want to eat, and all those things. And it's really incumbent on educational institutions to purchase products that improve learning.

And there's all kinds of shocking ways that educational systems don't do that. I think very famously there are some studies on the incredible importance, for instance, of the glossiness of textbook covers. Traditional publishers spend a lot of money making their covers look really nice because that apparently sways textbook adoption committees. It shouldn't, because what your cover looks like has zero to do with how much your students are going to learn.

I mean, I would hope that education technology companies, and there are many of them that do this, take seriously their responsibility, not just to make products that are adopted, but to make products that actually improve learning outcomes. And a huge challenge that we have in education technology development is that things that are engaging, things that people like, things that delight people, don't necessarily lead to positive learning outcomes.

In fact, there's this notion in education psychology of desirable difficulty. The things that make us uncomfortable, things that are unpleasant, things that are difficult to get through are often what helps us get to better kinds of learning outcomes. I mean, so I hope that education technology firms, whether they're venture-funded or funded in other ways, take seriously their responsibility to say, it's not enough to sell products. It's not enough to engage people. It is our responsibility to be constantly evaluating how well students are learning and whether they are learning more than they would using competing approaches.

But it really is the responsibility of purchasers, it's really the responsibility of colleges, of universities, of schools to be informed and to hold new — to hold technology companies to high standards, to say, there should be really rigorous research that's done that demonstrates that these kinds of things work. And so I think both — in my ideal world, it would be both producers and consumers that take this responsibility to say, we've got to figure out whether or not this works. I mean, it's —

I mean, MOOCs are sort of a great example of this because, of course, there's a moment where there's no way to know, or at least it's harder to guess, how well things are going to work. They've just been released in 2011. Sebastian Thrun and Peter Norvig have this 100,000-person class called Intro to AI. That seems like it might transform everything. We've never had 100,000-person classes before. Imagine what a system could be if the world's best professors teach these enormous classes and they're free or cheap or everyone can access them.

And I think in those moments, in those dynamic moments where the technology landscape is shifting, it can be OK for some institutions to be like, oh, let's put a bet on this. Let's sort of try to take a lead here and things like that. But if most institutions are like, let's just hang back and wait a little while and see what kind of evidence emerges around this, I think that's an entirely sensible approach. I mean, I'm actually quite heartened in 2023 to watch college, university, and K-12 responses to AI because it's quite different than 2012, 2013.

I mean, MOOCs were nuts. I'm a graduate of the University of Virginia. The president at the time there was fired for not moving quickly enough to adopt self-paced online learning. And she was then unfired because of protests by the community and so forth. But it was quite a thing. To my knowledge, there's no university president in 2022 or 2023 who was fired for not moving fast enough on AI. There's been a much more, I think, kind of cautious, let's see how this develops. Let's experiment with this. Let's see how this works.

My colleague Morgan Ames came up with this phrase of taking a tinkering approach to education in contrast to a charismatic approach. The charismatic approach is like, everything's going to be different. Throw it all away. Transform everything. And a contrast to that is a tinkering approach, which is like, oh, this might help somewhere. Let's do some experiments to figure this out, but let's not get over our skis. Let's not throw so much money and time at this that we look back in a year or 10 years and be like, what was all that hubbub about?

SHAUNTEL GARVEY: Yeah. And I think you touched on this idea that — in your book you say, we want villages, not heroes. And so we can't be innovating on islands. And I think that sometimes happens in this ecosystem. So who needs to be part of that village? You touched on publishers. You touched on the higher ed stakeholders. Is it research? Who needs to be part of the village? And how do we bridge some of those divides that are happening between the research world, between the developer world, between the institutions of higher education? How do we bridge those gaps?

JUSTIN REICH: I love that question. Technology is only as powerful as the communities that guide its use. So from the beginning, we should not be thinking, OK, this is a technology that we're going to download onto people's machines and learning is going to explode. In most circumstances, that's just not how it works in schools. It's basically never how it works.

What happens is that at best we download stuff on the machines, teachers take it, they do some kind of boring stuff with it for a while. They do some things they were already doing a little faster. Some more people conduct some experiments. They try to do some new things, some better things. Some new practices emerge. And then through a process of coaching and experimentation and time and growth, practices get better, not just because the technology was downloaded, but because students figured out how to use it better, schools figured out how to use it better, classroom teachers figured out how to use it better.

So I think anyone who's doing education technology development can benefit from starting from that kind of ecosystem approach. Like there's nothing that we're going to download onto college students' machines that's going to make them learn enormously faster. There might be things that we can put in their ecosystem that with the support of their coaches, their professors, their teaching assistants, their advisors might be better.

So what can developers do? First, think about, hey, how do you bring in people that have a lot of experience in that nexus?

SHAUNTEL GARVEY: And value that experience, right?

JUSTIN REICH: And value that experience. How can we partner with educators in those communities, with families, with learners in those communities? A lot of times, to become an education technology developer, you have to go to one of six schools in the United States that has a really good CS program and then get kind of drawn into one of these elite companies, which are only formed in Cambridge, or New York, or Palo Alto.

And very often, historically, people who've been technology developers have not had the educational experiences of typical students in the United States, but the people you're trying to serve are typical students in the United States. So I mean, I think Reach Capital has done an extraordinary job of this, trying to bring in people as venture capitalists who have lots of different backgrounds, who have a strong connection to education.

All around the world, the people who have been historically marginalized are going to become the majority members of our school community. That's just the dynamics of how population — we're in an extraordinary demographic moment and it's going to happen in cities all across the world. And so we're going to have to ask ourselves the question: if you just came from the Mayflower to Phillips Exeter to Harvard to the edtech incubator, it's not clear that person is going to be the best possible person to be designing learning experiences that are used by a wide variety of people.

So I think it's who we include, how we think about being in community with folks about — I mean, there's — I don't know how much there's going to be economic incentives in the system to tell people you should really be working on building things that help folks who struggle with learning to do better. There's always going to be a lot of incentive to be like, how do we help the 0.1% become the 0.01%, or things like that. And so some of it is just the kind of moral expectations that we've historically associated with professions. Structural engineers build bridges that keep people alive, not just because they sell bridges, but because it's a deep part of their professional identity to build stuff that's safe. And I hope edtech engineers —

SHAUNTEL GARVEY: Do that. Yeah. And I think part of that is like you're saying. We actually need to grow the village. Right now, the village is narrow. Who gets into the sector? Who are the developers? Very narrow. Like I said, it's something we're trying to do at Reach, both with our team, in terms of the diversity of the experiences and perspectives that we bring to the investing, but also the entrepreneurs that we're investing in and making sure they're building out of their lived experience and, like you said, reflect the communities that we're trying to serve. So I do think we've got some work to do there.

OK. So there's no magic bullet. We can't just blow up the system. So how can we apply some lessons from your book to those of us on this call or on this session? I'm looking to encourage higher education institutions to engage in academic innovation so they can serve the public good. What are some, and you touched on a little bit with the tinkering, and I know you have another book that gets more into tinkering —

JUSTIN REICH: Yeah. If we talk about the other book, I get to switch my background. Wait, let me see if I can do this quickly. All right. There we go. So I have this other book that just came out called Iterate: The Secret to Innovation in Schools. And while I wrote a book of technology criticism, a lot of my career has just been helping people use technology in schools. I ran this consultancy for a while called EdTechTeacher that helps people bring technology into schools. I worked with Harvard and MIT to try to help edX be guided by research and things like that.

What are some things that tend to work? Well, I think — I mean, the main lesson of Failure to Disrupt is that we can look at historical examples and research that already exists about education technology to help us guide our practice and direction. Like when new things come along, we can go, oh, we can make a pretty good guess about how these things might work.

Another thing that we can do is to adopt a tinkerer's mindset and to say we should be constantly trying to get better. Technology can help us do that. In every school community, every college university community, there are people who are really energized by improving their practice with new technology. I've never been to a school where you can't find people who are like, it's pretty cool to play around with these things. To do that really well requires having institutional support for educators to innovate. There has to be time, there has to be money, there has to be resources for practicing educators to take on teacher-leader, faculty-leader roles to try new things.

And then if you — like how do new innovations spread and scale? When things start to work, how do they move across the community? If you interview faculty members about why they changed their teaching practice, the number one answer is what they learn from other teachers. Every continuous improvement problem in schools is actually a peer learning problem. How do we create institutions where faculty have a chance, are supported, are incentivized to learn from one another, to adopt new practices from each other?

And the distinctive thing about universities compared to K-12 is that K-12 teachers are pretty willing to learn from lots of different kinds of K-12 teachers. University faculty tend to want to learn from people in their department. Like the physics department can have all kinds of exciting changes that are going on and the chemists will be like, we really — that doesn't count. Like we need to invent all of that ourselves.

And so there are differences in the cultures of different institutions about how they change, but the heart of it is you need to create resources and space for people to be able to experiment, to try new things. You need to do evaluation to see whether or not those things are actually working and improving schools, and then you need to create the space for teachers to talk with each other, share with each other, learn from one another, because there are very few cases where improving teaching and learning really comes from the top down. Even when things look like they're coming from the top down, it's really emerging from this bottom-up process of experimentation and peer learning.

SHAUNTEL GARVEY: Great. Thank you. So I'm going to go ahead and move us into the Q&A portion a little early. We have a lot of great questions already coming in. Here's one. Could you share your definition of transformative? Is learning at scale a main factor in considering whether certain technology is transformative or not? Also, in your book you mentioned the Matthew effect. What is your idea for solving that effect in educational technology? And maybe if you could just describe the Matthew effect for those that —

JUSTIN REICH: Yeah, sure. So what would be transformative? I mean, to me — I mean, what's so hard about talking about effects in schools, especially when you look at it across the whole PK through graduate curriculum, is that we're trying to do so many things. How do we accelerate how kids learn to tie their shoes and get in line and be good members of school communities? How do we accelerate advanced PhD students on their journey to becoming faculty members and the next generation of innovators? Like that's just a lot of different kinds of things.

But I think when people discuss transformative things in education, they're saying, could we have people who learn either really new kinds of things or learn the same things that we've been doing for a while much more effectively or much faster? And the fact is almost everything we tried doesn't do that. If you look at a distribution of effect sizes, like measures of how much things change because of new innovations of almost any kind, if you look at really rigorous studies, like the best things we do change things a little bit. Schools tend not to be improved by one great stroke. They tend to be improved because we did 30 or 40 or 50, 100 things a little bit better each time, each year, things like that.

So when people talk about — I think, have there been some transformative innovations? Maybe. Montessori schooling you might consider something that was transformative for early education. Community colleges were transformative in the higher education system, creating new pathways. So there are a handful of those things that you might say —

I guess another really important point is that when we build transformative things, they unfortunately tend not to affect everything everywhere. There's a great paper by Jal Mehta and David Cohen called "Why Reform Sometimes Succeeds." And they basically say there's two ways to win. You either have profound influence in a niche or you have watered down influence over a lot of spaces.

Maria Montessori is a good example of this. There really are not that many Montessori schools. There are not that many places where you would have a true, concentrated, really successful Montessori education. There are a gazillion schools that have adopted little bits and pieces of Montessori practice. Those are kind of the two — you sort of wish it would be like, man, wouldn't it be cool if there were lots of schools that adopted lots of really effective practices? It tends not to happen that way.

So technologies tend to be things where you're like, man, that particular — I mean, massive open online courses, there's an extraordinary program at Georgia Tech called the Online Master of Science in Computer Science, where I think they're doing a fabulous job at very reasonable costs educating — they have something like 12,000 students enrolled in their master's [of] science in computer science. It's the largest master's program for computer science in the country. They seem to be doing a great job.

And there's not four of them. There are not five other universities that have duplicated what Georgia Tech accomplished. There's one Georgia Tech. There's one online master's of science in computer science. And then there's lots of other places that are doing little bits and pieces of things in other places. So that, to me, the way I think about transformative, is that you can usually make a really deep change in a couple of places or you can make a shallow change in lots of places.

And then there's this question — oh yeah, do you want to say more about that? Well, what I can say about the Matthew effect part is that one of the as-yet intractable dilemmas is the edtech Matthew effect. The Matthew effect broadly in sociology is the idea that returns tend to accrue to already advantaged people. There's a line in the book of Matthew, which is something along the lines of, for he who has much, much more will be given, and he who has little will have even that taken away, or something like that.

So new technologies tend to disproportionately benefit the affluent. I would say we have really good research demonstrating that. I would say there are many, many cases that demonstrate that. There are also a few case studies that are out there of tools that do the opposite, where they disproportionately benefit people who have prior learning challenges or lower levels of prior learning or other kinds of things like that and we don't understand how or why they work.

There's not a theory out there of, OK, if you wanted to democratize education, all of the things that do that seem to have four principles. It remains — like we don't know. It remains quite mysterious what's happening in the places where people do the best work, except to say that I think they tend to be staffed and developed by people who have a really deep understanding of educational systems. But there are also lots of things that don't accomplish that, that are created by people with a deep understanding of educational systems.

SHAUNTEL GARVEY: On this question of MOOCs, someone in the Q&A asked, I worked in a community college for many years where developmental education, formerly known as remedial education, is a huge issue. I recall the Gates Foundation funding a project to create MOOCs for developmental English and math. Do you know whatever happened to this project? So are you familiar with this project? And I guess the case of taking a MOOC and being very specific about the audience you're targeting, is that a way to reach —

JUSTIN REICH: There were some really early, totally disastrous efforts to take MOOCs and use them with some of the most vulnerable populations, like people who are entering community college on shaky foundations from not being well served in high school or whatever else. And they were just shockingly disastrous. It was like, do not do this. People need human beings to help them learn.

As I understand it, the Gates Foundation and other people have generally been trying to move away from this notion of remedial classes in general, the idea that you struggle with math, you show up to college, and the thing that we really need you to do is jump into that math that you've struggled with all through high school. It's like, what is the hardest thing for you to do? Do that some more. And so people have tried to figure out other pathways that get them some of the benefits of higher education and hopefully, eventually, get them to develop the skills to be able to perform well at those mathematical functions without saying, do algebra for a third time or you can't become a radiology tech, or other kinds of things like that.

SHAUNTEL GARVEY: So we touched a little bit on — we didn't go deep on — whether we're in another hype cycle with AI. The other big potential hype cycle we may be in is AR/VR. So one question is, what are your thoughts on whether VR/AR has a place in higher ed outside of programs where students learn to develop it? It's expensive. It can be hard to adopt, but it might be a huge part of our lives 10 years from now. We know the Oculus Quest price is coming down. It's now $300. Is this more of a wait and see, or do you feel like there's a compelling reason to adopt VR/AR in higher ed today?

JUSTIN REICH: I mean, almost everything in education technology is a wait and see. There's a pretty small number of things. When I was in college in the 1990s, I was part of a VR study. And so like — I mean, people were — again, it's one of these things people were saying in the 1990s like, man, we might be 10 years away from having VR/AR be a really important part of what we do. I think there are a bunch of reasons why that technology is not well suited for people's learning experiences. Here are two of them.

A question that my students helped me come up with in class one year was, for any learning technology, ask the question, what is the human-to-human interaction that is generated by this technology? If a thing generates a human-to-human interaction, it is a much better bet for learning than things that don't generate a human-to-human interaction. So VR is very much about closing you off from other people and other interactions. You can't see around you. Other people can't see what you're doing. You can't have that experience.

The other thing which is always striking to me is that human beings are extraordinarily good at taking two-dimensional surfaces and rendering them in their mind as three-dimensional images. Like my hunch is most of you looking at me right now are not like what is this super flat person doing in front of me? You probably are like, wow, look, like his head is three-dimensional. It moves around, and things like that. But it's not. It's on a screen that only has one flat panel of pixels.

So given that humans are already good at like making things 3D in their mind, we don't really need technology tools to do that better for us. I think we're going to have a really hard time finding VR experiences where you go, wow, that was a whole lot better. Like people talk about them for field trips or science labs or stuff like that. Almost anything that you would think about doing in VR, like you could probably just do as well in a flat screen. That technology already exists in lots of people's places and things like that.

And again, you know I think you can sort of say like we've seen VR a lot. Like there's 30 years of people experimenting with this stuff and trying a whole bunch of different things, and it doesn't really stick. And it's not clear to me that it's about the price point or other kinds of things like that. I mean, for any of this kind of stuff, I mean, this is a sort of VC phrase, but asking the question like, who is eating their own dog food or drinking their own champagne?

Like who at Meta has their kids stuck with a VR headset on their head for long periods of time trying to learn stuff? I bet there are not very many senior developers at Meta who work on the VR team that do that, because it's super boring. How many of you have spent a lot of time in the last year talking with AI chatbots to learn stuff? I bet you haven't, because it's boring. And when we try to get 10-year-olds or 19-year-olds to do the same thing, I bet they're also going to find it to be boring. So to some extent, it's also helpful to think about, what are the experiences that are being kind of organically adopted by people? And if they're not, then we could start asking some questions about how likely it is.

I mean, the last point to wrap this up is there's a big line in Silicon Valley, that people tend to overestimate technologies in the short run and underestimate them in the long run. But Thomas Edison in 1913 went in front of Congress and said in 10 years, all learning will be done by filmstrips and textbooks will be gone. And then in 1923, 10 years later, he went back to Congress and he said, in 20 years, all the textbooks will be gone and all learning will be replaced by filmstrips.

And it's 100 years since that and we still have not replaced all the textbooks, all the print with filmstrips because there's all kinds of good stuff about text. And filmstrips did not disrupt educational systems. They were domesticated by educational systems. They were put in particular places for particular functions, but they didn't transform what we were doing. And VR and AR won't either.

SHAUNTEL GARVEY: OK. You've touched a little bit on this notion of human connection, peer to peer connection, peer to peer learning. We have a question, it's actually for both of us. To what extent are edtech creators and VC investors focusing on Raj Chetty's more recent leading indicator for economic mobility, specifically his research around social capital, who you know and how to bolster network effects for students?

I'll just speak for us. We actually are really focused on this. We had Raj Chetty as our keynote at one of our recent portfolio founder days. And so we really are thinking about, how do we enable and embed social capital features into the products? I don't know if you've, Justin, been also looking at that and how that is an important piece to this puzzle.

JUSTIN REICH: Yeah. I mean, one caution that I would have everyone think about when they come to educational systems is educational systems are not infinitely flexible to do all the things that we want them to be able to do. Like high schools are good at teaching math and history, and language arts, and social studies, and PE, and arts, and music, and theater, and some other things like that.

If you keep asking schools to do more and more things, they will not do those things better. You can layer on lots of additional functions of schools, but it won't work unless you're very serious about taking things away, unless you're willing to be like, yeah, we don't have to do science anymore. The days are full. People are busy. There are ways that schools have done that kind of work historically, and I think building on those things can work.

So there are lots of schools that have figured out internship programs, that have figured out connections in the local city. There are huge pushes — I was at a Boston schools event this morning where early college was kind of a really featured, important part of that. And I think those initiatives are really exciting. I get more concerned when we say things like schools need to figure out how to develop young people's social capital and networks, especially in brand new ways. Schools are not great at brand new things.

If you were like, that is the one thing that I want Bunker Hill Community College to work on for the next 10 years, they might be able to get better at that. But these are — they're fabulous institutions. They're incredibly important. They can get good at like a thing every two to three years. So we have to be really strategic as communities, as societies, in saying what thing they should get better at. Usually the thing they should get better at is teaching students core content.

There's a great question here, Shauntel, from the queue, which I'm going to ask you, which is as an investor, how much do you agree with what Justin is saying? Is there anything you specifically don't agree with? Would love to hear it. And I would love to hear it as well, not just as an investor, but just as another smart person who's thought a lot about education technology. What are the parts of Failure to Disrupt that you're like, this isn't right. This misses the mark here.

SHAUNTEL GARVEY: Yeah. I mean, we touched on a little bit. I think just this idea that venture capital is going after kind of the quick wins and we're all about scale. I think venture capital is not monolithic. So there are different types of venture capitalists and different types of incentives. And we are one that does care about the impact. We are one that does care about, how do we incentivize the founders to really think about solutions that are moving the needle.

And it's actually interesting because one of the examples that Justin did talk about in the book is one of our portfolio companies that has taken funding: Desmos. And I think Desmos is a great example of a lot of the things that you talk about that are needed: how do you tinker around teaching math? How do you think about helping teachers think about math education differently? And then also, how do you build communities of educators? They've done a great job building a math educator community that then can kind of tinker around these different ideas.

So that was one thing we already talked about that I was pushing back on. But at the end of the day, I do fundamentally agree with a lot of the premises of the book. And it's one of those things where we never believe that technology should be a replacement. We think it's a piece of the puzzle, and humans are so core to this work and relationships are so core to education that those things have to be part of the solution as well. And so I think that's where we're definitely aligned.

JUSTIN REICH: Desmos is, I think, a particularly lovely example of a technology that takes seriously the question, what is the human-to-human interaction generated by this experience? Almost every part of Desmos is about, how did a peer answer that question? Like here's one way of answering this. How might another person answer this? For a teacher to be able to say, OK, here's the way a whole bunch of students answered these things. What do we want to talk about next?

There's, I think, a lovely kind of constant back and forth in the Desmos — the Desmos platform very much imagines that, yes, there are going to be kids with fingers on their keyboards who are looking at their screens for periods of time and not looking at each other and not talking to their teachers, but all of those interactions have to feed into conversations that students have with each other, conversations that students have with their teacher about math.

Because for an enormous number of children, that is the only reason they're in math class. They're not in math class because they care that much about algebra. They're in math class because they care about their teacher. They care about their peers. That is the thing that gets us excited. The technologies —

Here's an AI thing that I've been thinking about a ton recently: so many folks, when they have thought about AI, have said, how do we get individuals to interact with AI entities? How do we get there to be coaching or other kinds of things like that? I have a colleague at CMU named Carolyn Rose who has been playing around with chatbots for decades and she's like, no, no, no. Do not have people talk with chatbots. Have two people talk with each other and have a chatbot in the interaction coaching, shaping, nudging, pushing the interaction.

I would love to see much more development along those lines. How could an AI entity help two people have a more interesting, more productive conversation with each other, rather than thinking that anybody is going to find it that interesting to talk with ChatGPT for very long about how functions are supposed to work or something like that?

SHAUNTEL GARVEY: Great. I want to go back to one we had at the beginning. I think you touched a little bit on this, but in what ways might experimentation with different edtech solutions, maybe even those that don't work, help traditional institutions get better at thinking about how people learn? And can experimentation help institutions get better at changing themselves?

Tied to this question too, which we didn't touch on, is, again, that privacy question and a little bit of the ethical concerns. We would love to do a lot of experimentation, and in your book, you talk about some of the barriers to that experimentation.

JUSTIN REICH: So there — I would say platforms are in this interesting position where they have enormous amounts of data and they have a tremendous capacity to rapidly alter their system in order to test different things. And generally speaking, the public really doesn't like that, especially when you tell them what you're doing.

I mean, it's kind of strange because the public goes to Facebook and Google and Amazon every day. And those websites are constantly submitting people to, like, unconsented experiments. Every 100th time you go to Amazon, something is slightly different about it, like the Buy button is blue instead of red, to see whether or not you'll buy more. And when schools do the same kind of thing, healthcare systems, other kinds of public interest systems, people can get really upset about it.

But I did want to say one other thing about that idea of what can institutions learn from experiments with technology? This is my 20th year teaching. I've now spent a lot of time thinking about technology in schools. And part of the reason why I've still done it for 20 years is that there is something catalytic about talking to faculty about technology. You can walk into a room of faculty and say the world is changing, our students are different. We really need to rethink our pedagogy, our curriculum, our relationship with students. And a lot of times those conversations will end with, no, actually we're doing pretty good. We're fine.

You can walk into that same room of faculty and be like, look, we have a new device. We have a new thing. We have a new capacity. But for us to really take advantage of it, we have to rethink our pedagogy, our curriculum, our relationship with our students. And I've found faculty to be much more open to that conversation in the presence of technologies that could potentially change their capacities.

So I do think that there can be productive conversations when one-to-one laptops come into schools, when social media come into schools, when MOOCs come into schools, when AI comes into schools. The productive conversations tend not to be, what is the thing that we can buy and download onto our machines that will make learning different? The more productive conversations are, what capacity does this bring in, and how can we integrate that capacity — how can we preserve the best of our traditional practices? What new practices might be emerging that we could be excited about?

I mean, AI is operating a little bit differently because MOOCs sort of had this promise of, like, kids will be able to — people will be able to teach themselves stuff. The main way that educators are currently experiencing AI is through the lens of cheating. Like, I used to be able to ask my students to do these productive thinking tasks and now they don't have to do those productive thinking tasks anymore because there is a device that will do that for them. And that is not uncommon in the history of education. That's what happened with encyclopedias. That's what happened with calculators. That's what happened with Google Translate. That's what happened with the web. Like this happens all the time.

But it forces us — I mean I've been doing a lot of thinking about — I've been putting some of my assignments into ChatGPT and going, shoot, like this is doing a lot of productive thinking for people. And now I've got to stop and be like, what exactly was I trying to get folks to — what thinking was I trying to get folks to do with this assignment? And is that thinking so important that I'm going to find some way of having them not do that with access to computers, or ChatGPT specifically? Or are there other kinds of ways that I can get at the same kind of learning experiences? And I think that there are a lot of faculty communities that are really open to having those kinds of conversations as new technologies emerge.

SHAUNTEL GARVEY: Where are those conversations happening? Are they in faculty meetings? Are they part of a teacher PD? Do we need to create new communities to have these conversations? Where are you seeing them happen now?

JUSTIN REICH: Yeah. Well, I mean, a challenging thing about this moment is that the conversations have a kind of an unfortunate urgency behind them because there are these academic integrity issues that are sort of driving rapid responses. And maybe that's appropriate. Maybe it's fine that we all need to have policies in our syllabi that say something about ChatGPT or something like that.

But where we have to go next is that we have to bring our colleagues together in departments around teaching and learning centers, in our scholarly communities and our teacher communities and start saying like, OK, here's a new tool. How is it changing the disciplines? There's some disciplines that are changing extraordinarily quickly. Software engineering and the arts are changing extraordinarily quickly.

There are other disciplines that are not changing quickly. Like literature is not changing quickly. The disciplinary practice is not changing quickly. Those disciplines might have more time to think about adoption than software engineers, where in the next couple of years no one is going to do computer programming without a copilot. And so probably our schools should adopt and change that way. But I do think if there's —

A thing that I learned from Peter Senge, who's a faculty member at the Sloan School of Management who studied schools and lots of other things, is like there's no one in a community who feels empowered to make change. CEOs don't feel empowered to make change in their institution. Like, the way change happens is people who don't feel particularly empowered are like, I'm going to do it anyway. So like how are these conversations going to happen? A bunch of folks who have no particular authority and no particular expertise are going to raise their hand and be like, we need to talk about this. And I hope many of the people who are listening decide to do that in their communities as well.

SHAUNTEL GARVEY: I want to — this is like a comment or a question I just want to address and maybe end on this. It says, Martin Weller talks about edtech developers repeatedly reinventing the wheel in 25 years of edtech, which is another theme in your book. Why is there such a lack of historical knowledge among edtech developers?

I think one reason is, you kind of touched on this, where are these edtech developers coming from? And they're coming from just not being steeped in educational research, not being steeped in what has happened in the past. One thing is maybe this book needs to be required reading. So required reading for all edtech developers, read this book and get the historical context. And like you said, also making those bridges between the research community and kind of the edtech developers to learn some of what's kind of happened historically.

JUSTIN REICH: Well, telling everyone to read Failure to Disrupt is a fabulous suggestion that I heartily endorse. The book that has also really inspired me — it's out of print now, I think, although you can find lots of copies of it and there are online copies — is Teachers and Machines, by Larry Cuban, who's a professor at Stanford. I would have been honored if my book had been called Teachers and Machines Part Two, because Teachers and Machines covers radio, filmstrips, the early — it covers about 1900 to 1986, how signals and analog technology and the very first digital technology start changing schools. And then I sort of try to pick up from there and move forward.

But you're exactly right. Like people don't know this because it takes a long time to become a really good software engineer. And while you're spending a long time becoming a really good software engineer, it's hard to also get a doctorate and a rich background in education. But I think a thing that we can all do is to say, whether through this book or other resources, there is a long history of this work and we will do a better job as a community, as educators who buy this stuff, as people who develop this stuff, as venture capitalists who fund that stuff, if we spend some time starting to learn about that history.

SHAUNTEL GARVEY: Thank you. Thank you, Justin, for helping us learn that history. It's been great being in conversation with you today.

JUSTIN REICH: Thank you, Shauntel.

KRISTEN ESHLEMAN: So Justin and Shauntel, thank you so much. This was such a great conversation. I was personally reminded in your discussion about the real value of technology to me as someone who also works in this field. And it's not the tech itself and the hyped expectations not being met most of the time, but it's really the way it forces us to question the way we've always done things. And for that reason, I love being in this space. So thank you for reminding me of that and for the really great information you shared today.