How generative AI will (and won’t!) transform postsecondary education
What is the potential for AI in higher education? What are promising directions for its development, and what capabilities are overhyped? What kinds of collaborations might produce clear benefits for teachers and students? Matthew Rascoff, vice provost for digital education at Stanford, and Amrit Ahluwalia, host of the EdUp PCO (professional, continuing, and online education) podcast, delved into this topic in a conversation released August 13, 2024. A transcript of the episode follows.
Transcript
[MUSIC PLAYING]
AMRIT AHLUWALIA: Welcome to the EdUp PCO podcast, a proud member of the EdUp Experience podcast family. This podcast shines a light on professional, continuing, and online education, the most innovative and fastest-changing sector of the higher education industry. I'm your host, Amrit Ahluwalia, executive director of Continuing Studies at Western University in London, Ontario, Canada.
This week, I'm joined by Matthew Rascoff, vice provost for digital education at Stanford University. Matt is a leader in the digital and online learning arena, and led digital education strategy at both UNC and Duke before he headed out west. He recently penned an essay focused on the implications of AI for higher education. And I'm excited to welcome him here today to dive in deeper. Matt, welcome to the EdUp PCO Podcast. So great to be back in touch.
MATTHEW RASCOFF: Thank you so much, Amrit. I'm really excited to be with you and to be part of your new effort to share our learning and expose people to the cutting edge.
AMRIT AHLUWALIA: --talk to friends in a new forum.
MATTHEW RASCOFF: It's great to be with you.
AMRIT AHLUWALIA: Well, hey, I appreciate you taking the time out. And let me just level set: we're going to be talking about AI in the post-secondary space. I don't think anyone would accuse either of us of expertise in the field of AI, which is an evolving field, a rapidly evolving field with significant research behind it.
The focus of our discussion today is really around the application of AI as it is today, and as it's progressing, into the work that's happening at post-secondary institutions. And I think the best place to start that, Matt, is to talk about some of the things we're getting wrong on a regular basis. So just to level set, I mean, what are some of the biggest misconceptions you've come across as it relates to AI in the higher ed arena?
MATTHEW RASCOFF: Thank you for the question. And yeah, I'm hardly an expert in this space. I consider myself a learner. I'm trying to figure out my own views and trying to help shape the conversation. And the way I'm doing that is kind of in public. And you can think of it as an open-source strategy in which I'm trying to put out ideas, and get feedback, and try to improve them. But it requires, I would say, a certain amount of humility to enter into this space and also a recognition of how dynamic it is. Like the terms are changing, the definitions are changing, the possibilities are highly emergent. And I think that that should make anybody pause before they make grand pronouncements.
So on misconceptions, what I would say is I think the biggest one is the idea that a chat-based tutor is really a substitute for what we do in education. To me, the core bundle of education is around skills and knowledge; around community and social capital, the way people connect with one another, learn from one another, create knowledge together; and around credentials. And credentials are how your knowledge interacts with the labor market and how it interacts with the broader systems, the economic systems, the social systems, the recognition systems.
And you could say maybe there's been change in the way we acquire skills. But that fundamental bundle, I don't think that's really being disrupted by this technology. I think there's going to be evolutionary change in each of those domains. But the way people approach skills has already been changing very rapidly. Like, YouTube, many people say, is already by far the biggest learning platform in the world. When students are struggling with something in a course, many go to Khan Academy for help. There's already been a kind of technological change and a disintermediation, you could say, of the way people acquire skills.
That's really what we're talking about when we're talking about chat-based learning, not the whole kind of bundle that includes the social capital formation, and community, and the credential system. Those are going to change too, but not quite as rapidly because they're subject to broader bureaucracies, regulation, government systems, like larger social systems. So I would say that's probably the number one misconception. It's what is the scale of the revolution or evolution that we are envisioning, that we are seeing in front of us.
AMRIT AHLUWALIA: That's so interesting. And one thing that fascinates me about the conversation about the proliferation, call it, of generative AI is whether it is innovative or iterative. Like is what we're doing right now broadly innovative in terms of finding truly new ways to do things, or are you seeing that it's largely iterative in terms of finding faster, more efficient, or better ways of doing stuff we were already doing?
MATTHEW RASCOFF: I'm strongly in the iterative camp, I would say. And I think what we're looking for, what ed tech is looking for, is product market fit. But I don't see fundamental changes in the learning and educational system as likely, just given the limitations of this technology, given its hallucinations, given its inability to do basic things like count. Maybe that will change. It probably will over time. But I'm generally kind of an evolutionist when it comes to educational change and reform. There are revolutions. But they're very infrequent. And I don't think that's what's underway.
Some of the improvements that you see, to me, are really about giving teachers better tools, not necessarily going direct to students, but giving educators better tools to do their jobs. It's kind of the ultimate evolutionary bet on improvement. It's not going to change the fundamental economics, necessarily, of scale and quality. But it can help make the learning experience better. It can help reduce some of the mundane tasks that teachers have to do in grading, in feedback, in creating lesson plans. Those are some of the more exciting prospects that I've seen.
And I'm thinking about companies like TeachFX, which I often talk about. It provides natural language processing feedback for educators to improve their instructional practice by analyzing their discourse: the words, language, sentences, and tone of the students in their classroom, and the feedback that they give their students. And it turns out that if you give every teacher an instructional coach, in just one session of feedback they can improve the very next time. They can build more active learning into their teaching. They can give better quality responses to students, using their ideas more effectively.
And we don't have enough instructional coaches to go around for all the teachers and professors who are out there in the country. So that to me is a paradigmatic example of a research-based application of artificial intelligence. It's not generative artificial intelligence; it's natural language processing, an older generation of AI. But it's got really strong foundations in terms of the research and an evidence-based approach that we know works, because human-based instructional coaching has so much data behind it. And if you can scale that, you can scale teacher quality, teacher improvement. That seems to me to be a model of what's most promising in this space, but highly evolutionary, not revolutionary.
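[Editor's note: as a rough illustration of the kind of classroom discourse analysis described above, here is a minimal Python sketch. The transcript format and the metrics it computes, teacher talk share and question counts, are illustrative assumptions, not TeachFX's actual methodology.]

```python
# Illustrative only: crude discourse metrics over a speaker-labeled
# classroom transcript, in the spirit of automated instructional
# coaching. Real tools use far richer NLP models than word counts.
from collections import Counter

# Hypothetical sample transcript: (speaker, utterance) pairs.
transcript = [
    ("teacher", "What do you all think drives the seasons?"),
    ("student", "The distance from the sun?"),
    ("teacher", "Interesting idea. Why might distance matter?"),
    ("student", "Because it's warmer when you're closer to a heater."),
    ("teacher", "Good analogy. Let's test it against the data."),
]

def discourse_metrics(turns):
    """Compute crude proxies for instructional practice: how much of
    the talking the teacher does, and how often they ask questions."""
    word_counts = Counter()
    teacher_questions = 0
    for speaker, utterance in turns:
        word_counts[speaker] += len(utterance.split())
        if speaker == "teacher" and "?" in utterance:
            teacher_questions += 1
    total_words = sum(word_counts.values()) or 1
    return {
        "teacher_talk_share": round(word_counts["teacher"] / total_words, 2),
        "teacher_questions": teacher_questions,
    }

print(discourse_metrics(transcript))
# {'teacher_talk_share': 0.61, 'teacher_questions': 2}
```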
AMRIT AHLUWALIA: Interesting. But just before we move on, and I apologize, because I sent you a list of questions. And now we're not talking about any of them. What is an example to your mind of a revolutionary change that's occurred in the post-secondary space? Like what's something that we've done that would be considered revolutionary?
MATTHEW RASCOFF: Online learning was a revolutionary change. And the way it's penetrated into the system of higher education, I would say, over the past 25 years, is it's a fundamental shift in who can benefit from higher education. I think it's built on a foundation of open educational resources at its best, in the most exciting models, where we have the opportunity to unlock knowledge and make it more democratic, more accessible.
And that hasn't happened to the extent that I wish it would. But that to me was the fundamental unlock. And I think, in our generation, it'll change institutions fundamentally. And I think that's already underway in many of them. Like community colleges in the US are already fundamentally changed. And we'll never go back to a fully analog model.
AMRIT AHLUWALIA: That's a great example. And it's great to contextualize as well, because I think that's where when we're talking about transformation in the macro sense, it's kind of hard to wrap your head around the idea of what could be truly transformative compared to a better version of what we're doing today. And if you think about how far distance learning came from sending videotapes in the mail, or creating, sort of, public access, televised lectures, and things like that to where we are now with truly remote, truly distance, sometimes asynchronous learning that still facilitates peer-to-peer and peer-to-faculty interaction, it really-- OK, it's a valuable-- it's a valuable example to start to think through what revolution actually--
MATTHEW RASCOFF: And I think AI will likely improve the quality of online learning. But that doesn't necessarily mean it'll be a change on the same order of magnitude.
AMRIT AHLUWALIA: For sure. It's an iterative transformation to make something more effective. Like creating truly adaptive learning models isn't a revolution in the context of what online did to learning in general.
MATTHEW RASCOFF: Yeah. I'm more skeptical, honestly, about personalized learning, adaptive learning. That's a fad. I don't think the evidence supports the idea that that is the Holy Grail of education. And I think it was the dream for many years. There was this company, Knewton. They promised every student a robot tutor in the sky. It became kind of a joke, honestly, because I think, if you look at-- maybe a good example is the research on learning styles. That was also a fad. And it turned out personalizing learning around learning styles has absolutely no evidence behind it, zero. And it was purely a meme. It was purely a headline. It sounded good in theory. But in practice, it actually has no research foundation.
Now, I wouldn't extrapolate from learning styles to all of personalized learning. But to me, what we actually need more of is communalized learning, collective learning. Like learning from one another, that's actually the Holy Grail of learning. Overcoming our differences, trying to build communities in classrooms that allow people to bridge their different backgrounds, their different ethnicities, their different viewpoints, that's actually the really hard work of learning in a classroom.
And individuating people, putting headphones on them, and putting them in front of screens, that's not actually my vision of a great learning experience. So that, I would say, was one of the many overpromise, under-deliver moments of digital learning. And there are many of those. And I'm just skeptical that that's a worthwhile goal. I don't think we need more personalization. I think we need more communalization and collective thinking.
AMRIT AHLUWALIA: It's interesting because there, to an extent, that's-- when we talk about great online learning compared to what we got in remote instruction, great online learning is often characterized by peer-to-peer interaction, peer-to-peer engagement.
MATTHEW RASCOFF: Exactly.
AMRIT AHLUWALIA: And, you know, it's almost getting back, to an extent, to the Socratic method of posing a question and leading to rigorous debate so that folks can-- but when we look societally at some of the issues we have today, maybe it's an abundance of personalization, as opposed to a willingness to engage in a community environment.
MATTHEW RASCOFF: I think that's right. And that is kind of like an algorithmic end game, where everybody has their recommendations, their own echo chamber and their narrow views reinforced. And that's not why we're here as educators, in my view. We are here to challenge your views, not to reinforce them, and to expose you to difference in thoughtful ways, with ground rules that allow you to engage meaningfully, to learn from one another, to rethink some of your assumptions. That's what real learning looks like to me. And that's not going to be algorithmically delivered.
And it's not going to be consumer driven, because people don't necessarily want and won't necessarily pay for something that makes them uncomfortable. But that's often what you need the most in a great learning environment. That's what educators' job is to do, in my view.
AMRIT AHLUWALIA: That's a fascinating-- so to your mind, as we move towards a consumerized post-secondary model, do you think it is necessarily diminishing the capacity for education to challenge and invigorate learning?
MATTHEW RASCOFF: Yes, that's, I think, a great fear. There's this concept in cognitive science called desirable difficulty. And it's the idea that some of the challenges that we take on in learning don't feel good. They don't taste good. They're hard. They're a struggle. Some struggle is good. And if you just design a learning system around people's consumer preferences, they probably won't optimize for the appropriate amount of struggle. They want it to taste great and feel good. And you end up with a kind of fast food model of learning, not a nutritious, balanced diet, where sometimes some of the things are good for you, but they don't necessarily really taste great.
And the job of the educator is to make sure you have good meals. It's not to feed you the highest-calorie, tastiest intellectual food. And that, I think, is one of the great limitations of the consumer-driven model of learning: people don't know what they don't know. And it's educators' job to put them in positions where they can learn. And sometimes those positions are uncomfortable.
AMRIT AHLUWALIA: So first of all, thank you for contextualizing this in a framework that I really like, food-based. I'm here for it. I understand what we're talking about. But secondly, it's such a curious thought, because that's where-- so to my mind, I don't believe people are buying credentials. I believe that people are buying access to a learning experience. But is that then a challenge in terms of how we contextualize a learning experience? Like do we need to spend more time and energy highlighting the natural discomfort that comes from change, as opposed to painting the career outcome that you can achieve? We don't spend enough time on the--
MATTHEW RASCOFF: The process.
AMRIT AHLUWALIA: Yeah.
MATTHEW RASCOFF: I think that's right. Like think about how people describe athletic achievement. The point of running a half marathon is not just the time that you finish. Most of us are not going to be competitive runners. But the training is good for you. Challenging yourself is good for you. The routine is good for you.
Maybe there's a way to think about learning in this process-oriented way that's not just about winning the race, i.e., getting a job at the end of it, but more around valuing the process, valuing the challenge, valuing the building up of muscles, and the confidence, and the intellectual growth, and the community of running with the group, which is often how people approach it. And they're part of a community. So I don't know if that's the right analogy. But I do think this is part of the role of educators. It's also part of the role of educational brands, because when you come to a great university, you kind of put yourself in the hands of that institution. And--
AMRIT AHLUWALIA: There's a trust--
MATTHEW RASCOFF: --that they know what they're doing. They know that even if this is hard for me, it's going to be good in the end. It's going to be valuable. It's the right thing for me in the end. And I think it's an ethical way of thinking about the trust that people place in us and in our brands: that trust is the thing that enables people to do hard things. It enables them to push themselves.
They're willing to work hard. They're willing to do the homework. They're willing to challenge some of their assumptions because they're here with us at our institution. And they came here believing that we know something that they don't know and that we have their best interests in mind. That's worth thinking about: how do we use the trust that society and our students place in us as institutions to get them to do things that they couldn't otherwise do on their own, that they wouldn't motivate themselves to do on their own?
AMRIT AHLUWALIA: That's it. I mean, we put so much effort into our brands. But then it's got to be brand for purpose. It's a fascinating thought. I want to bring us back to generative AI here, because I think it's fascinating: the overindexing, almost, that we do as an industry. When we think about the application of AI in the post-secondary space, it tends to be around a conversation of adaptive learning.
So shifting gears a little bit then, I mean, when you think about the AI-powered tools that are currently on the market, what are some of the most immediate benefit areas, some of the most immediate use cases that post-secondary leaders and educators can derive from AI-powered tools?
MATTHEW RASCOFF: So one of the ones that I'm most excited about is honestly around assessment and assessment reform. It's very hard for educators to create and grade high quality assessments. Like a valid and reliable assessment is usually created by a psychometrician. And there's expertise that goes into the creation of that. That's hard to do just as an individual faculty member if you don't have that training, which almost nobody does.
So I see a role for these technologies in helping educators create and grade many more formative assessments, not high-stakes assessments at the end of a course, but formative assessments that are embedded inside a course and that give humans, educators, greater insight into where their students are. And doing it in a way that's valid and reliable, that does not depend on multiple choice tests for scale. Like, the reason we have multiple choice assessments is not because anybody believes that they're good. It's because they're cheap and scalable. They were created because you can offer them to a lot of people and quickly get a sense of where your students are.
But imagine you could have the same scale, the same low cost, the same access for a large group of students, but a much higher quality, open-ended assessment that was still gradable. It could still deliver you the same speed but much richer feedback, a much richer sense of what my students' misconceptions are. That, to me, is a very powerful opportunity that we have in front of us. It's not about students writing essays with it.
This is around embedding formative assessment, which is sometimes called opening the black box of learning, and systematically making it a part of learning experiences to give educators greater insight into where their students are. And that, to me, is probably the most exciting and most evidence-based area of digital learning innovation, because we know formative assessment is very powerful. It has transformed other parts of education.
Like early reading has been completely transformed by the science of reading and the formative assessments that drive our understanding of students' misconceptions about reading. That has not happened in other parts of the curriculum. It has not happened in higher education at all. But it could, if we had the assessments to drive it. In my view, assessment is the fundamental infrastructure for learning, and its importance is mostly misunderstood.
And assessments are mostly created, in a kind of amateurish fashion, by people who don't have the training. But imagine we could professionalize it, not standardizing it in the sense of summative assessments and high-stakes testing, but standardizing it in the sense of making it part of the learning experience that we offer, improving the quality, and taking it out of the cottage industry. That, to me, is the most exciting application of AI, by far.
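[Editor's note: as a sketch of what AI-assisted formative assessment might look like in code, here is a minimal example that scores an open-ended answer against a rubric and returns structured misconception feedback. The rubric, the prompt format, and the call_model function are all hypothetical placeholders; a real implementation would plug in an actual LLM API and validate its output.]

```python
# A sketch of rubric-based grading of an open-ended response. The goal
# is the structure Rascoff describes: open-ended questions graded at
# multiple-choice cost, surfacing misconceptions for the educator.
import json

RUBRIC = [
    "Identifies axial tilt, not distance, as the cause of seasons",
    "Explains why the hemispheres have opposite seasons",
]

def build_prompt(question, answer, rubric):
    criteria = "\n".join(f"- {c}" for c in rubric)
    return (
        f"Question: {question}\n"
        f"Student answer: {answer}\n"
        f"Score each criterion 0 or 1 and name any misconception.\n"
        f"Criteria:\n{criteria}\n"
        'Reply as JSON: {"scores": [...], "misconception": "..."}'
    )

def call_model(prompt):
    # Hypothetical placeholder so the sketch runs; swap in a real
    # LLM client here and handle malformed responses.
    return '{"scores": [0, 0], "misconception": "distance causes seasons"}'

def grade(question, answer):
    return json.loads(call_model(build_prompt(question, answer, RUBRIC)))

print(grade("Why do we have seasons?", "Earth gets closer to the sun."))
# {'scores': [0, 0], 'misconception': 'distance causes seasons'}
```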
AMRIT AHLUWALIA: Well, it's interesting there too, because now we're starting to talk about getting to the crux of designing great learning experiences. I mean, I think Paul LeBlanc started a conversation over a decade ago around the concept of unbundling the faculty role, not to devalue what faculty members do, but to professionalize the delivery of different aspects of the learning experience that really do require specialization. So the structuring of assessments isn't necessarily something that has to be built into the teaching of a course or the structuring of a curriculum for the course. There's specialization involved.
MATTHEW RASCOFF: That's a great example. That's a great-- I would point to Western Governors also, where they have professionalized assessment creation and decoupled it from learning support, delivery, and coaching. But that's very unusual. And it happens in a handful of high-scale institutions. Like imagine you could do more of that. And you could give an ordinary faculty member access to an expert assessment creator in the form of an AI, the kind of expert who might be on the staff of Western Governors but is not going to be on the staff of your local community college.
AMRIT AHLUWALIA: Actually scaling access to professionalized assessment. Out of curiosity, though, I mean, we're talking about creating a rigorous assessment model that allows for better validation of knowledge. I mean, let's be honest. Our role as institutions is starting to skitter away from gate-keeping information. Instead, now we're validating or contextualizing. We're creating pathways to greater quality learning. How does that differ from adaptive learning?
MATTHEW RASCOFF: Because adaptive learning, to me-- like, the promise has been that a computer understands your learning and can deliver you the optimal piece of content based on that. I'm not talking about computers understanding humans. I'm talking about humans understanding humans, enabled by a computer. But the computer is there to foster a better understanding of one human by another human, so that they can do better small group discussions, so that they can give you the support that you need, so that they can--
AMRIT AHLUWALIA: Or even moderate conversation--
MATTHEW RASCOFF: --their office hours. But the human relationship still exists. And it becomes stronger because I understand you in a deeper way. But it's not like-- there is no algorithm that is becoming your teacher. There's no disintermediation. It's just helping the teacher do their job more effectively by giving them richer data about their students.
There is a really great study that was done by Harvard Business School last year in their entrepreneurship class. It may be in the show notes. You can post it. I'll give it to you in the chat here.
AMRIT AHLUWALIA: Yeah, it'd be great.
MATTHEW RASCOFF: And it was an AI trained on the cases in the course. It's an entrepreneurship course. And what the professor found is that the students would engage with the AI to try to get help in preparing for class the next day. And the greatest use that he saw was in seeing the transcript, seeing the logs of what his students were struggling with.
The night before class, you could see, oh, all my students are struggling with this concept. They all have this misconception about some idea in the class. And it helped him understand, what do I want to do with my students the next day? So it really did open the black box of learning. And it created enough value for the students that they wanted to engage with this AI. They didn't want to go off somewhere else, to OpenAI or the open internet. They were willing to use this AI because it was trained on the content of the course.
But it created value simultaneously for the educator, in understanding what the students' interactions were, and for the students, in having access to this kind of coach that had read all the cases in the course. That, to me, is a really powerful use of the technology, to help build a better connection between student and teacher. And I love that model. I keep talking about it. I'm waiting for that to be built.
I think it's unlikely that the big AI platform companies are going to build that, because I don't think they want to reveal to one group of users what other users' interactions are on their platform. I think there's a role for ed tech in actually building out something that has a teacher dashboard, something that reveals the interactions but protects them, perhaps by anonymizing them. Figuring out that level of privacy protection, synthesizing those logs, and turning them into useful insights for a teacher, that, to me, is much closer to something that I'm excited about than chat-based tutoring, which I think is totally overpromising.
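[Editor's note: here is a minimal sketch of the anonymize-and-synthesize step such a teacher dashboard might perform. The log fields and topics are hypothetical, and a production system would need much more careful privacy engineering than a salted hash.]

```python
# Illustrative sketch: anonymize student-AI chat logs and surface the
# topics students asked about most, so an instructor sees shared
# struggles without seeing who asked.
import hashlib
from collections import Counter

# Hypothetical log entries from a course-specific AI assistant.
logs = [
    {"student_id": "s-101", "topic": "discounted cash flow"},
    {"student_id": "s-102", "topic": "discounted cash flow"},
    {"student_id": "s-103", "topic": "cap tables"},
]

def anonymize(student_id, salt="course-specific-salt"):
    """One-way hash so each student is counted once per topic
    without exposing identities on the dashboard."""
    return hashlib.sha256((salt + student_id).encode()).hexdigest()[:8]

def struggle_report(entries, top_n=3):
    # Deduplicate per (anonymous student, topic), then rank topics.
    seen = {(anonymize(e["student_id"]), e["topic"]) for e in entries}
    return Counter(topic for _, topic in seen).most_common(top_n)

print(struggle_report(logs))
# [('discounted cash flow', 2), ('cap tables', 1)]
```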
AMRIT AHLUWALIA: Well, and let's dive into that, because there are a lot of stakeholders currently dancing around each other in the field. We've got vendors across the ed tech space that are all promising that some component of their product is AI-powered. You've got post-secondary institutions themselves. You have consortia of post-secondary institutions that are trying to bring the resources of multiple institutions to bear at any one time. What role do these different players need to play in the creation of next-gen AI-powered tools and services?
MATTHEW RASCOFF: I think it's a great question. I don't know the answer to it. In my recent essay on this topic, I was arguing for more collaborative approaches among institutions. So rather than one university, kind of, building its own AI for its own community, think about how much you have in common with the university next door to you. Like, how fundamentally different is your content from theirs, or are your students and faculty and staff from theirs? Probably not enough to justify pouring tens of millions of dollars into some custom AI on your own campus.
And therefore, maybe we should be thinking more along the lines of consortia, and not individual institutions trying to outcompete in an area where they're unlikely to be able to keep up with the pace of the technology. They're probably not going to keep up with the big platform companies. So I think there is a role, as I was saying before, for education-specific AIs. I don't think OpenAI is just going to swallow up the entire-- I think there are going to be educational layers on top of general AI platforms. But I don't think they should be built by single institutions. And I think there's a role for thinking about how we could do this in a more collaborative fashion.
Now I shared this idea with one of my colleagues. And he said, yeah, well, in the learning management system world, there was also a kind of proliferation of individual campus solutions at the beginning. And over time, it kind of rolled up. And we ended up with this equilibrium of, like, D2L and Canvas, and a few different players.
But that seems to be so wasteful, the idea that that's actually the optimal way to design this, when there's so much goodwill, so much trust among institutions. I think there's a better way to build it that does not duplicate effort, that allows us to learn from the successes and failures of our colleagues who are trying to build at one institution or another. So like--
AMRIT AHLUWALIA: Almost like a [INAUDIBLE] approach too--
MATTHEW RASCOFF: [INAUDIBLE] exactly, exactly. Or earlier in my career, I worked for Ithaka S+R, JSTOR, that family of organizations. And JSTOR was founded on the recognition that, sure, you could digitize all your journals at Western University. And we could digitize all of our journals at Stanford University. But they're the same journals. So why not just digitize them once and make them available to your faculty and to my faculty? And there you go. We just saved half the cost of doing that project. And it just, kind of-- I play out that scenario, digitize once, make it available to 15,000--
AMRIT AHLUWALIA: Upscale it once, 6,500 post-secondary institutions, I--
MATTHEW RASCOFF: Yeah. So now, granted, digitization is a much more standardized and commoditized process. We all know what that is. And there are standards that say, OK, we've done it to this DPI, with this technology for optical character recognition. So the digitization that I do here is good enough for you. I don't think AI is quite there yet. But we've ended up at the totally opposite extreme, of radical fragmentation, no learning from one another, very little sharing with one another. And yes, there are a lot of conference papers and presentations. But I don't think CIOs are going to those conferences run by the CS faculty and saying, OK, as a result of this paper, I'm going to invest more or less in this campus solution.
And the article that I pointed to in my essay was around embedding ChatGPT in an online course. And it turned out it reduced engagement inside the course. That should give pause to the CIOs who are putting millions of dollars into implementing ChatGPT or similar for all their students. If it's reducing engagement and learning, we should take a moment and make sure we're doing this right, make sure we're gathering enough evidence, make sure we have confidence that this is in support of learning and in support of our broader goals, and make sure we're reading those papers, and digesting them, and strategizing around the implications of this technology, which we don't fully understand yet.
AMRIT AHLUWALIA: Well, I think it's an important caveat here, because we're talking about something we don't fully understand. What I think is kind of fascinating is when you brought up the LMS as the allegory: in the development of the LMS market, there was a recognition that there were a few common traits, so a few companies sort of owned that space, delivering a relatively similar product across the board. But then I think about the SIS or ERP space, which developed in a somewhat similar way, but now we have massively outdated, monolithic systems in place at almost every single post-secondary institution that are totally customized to the institution as it was 25 years ago. And the lack of flexibility in that architecture actually limits our capacity to be innovative today. And--
MATTHEW RASCOFF: I agree.
AMRIT AHLUWALIA: Think about, OK, well--
MATTHEW RASCOFF: It could be too centralized.
AMRIT AHLUWALIA: --generative AI. Like how do we wind up--
MATTHEW RASCOFF: I think that would be another bad outcome. And I think there's probably some middle ground between extreme centralization and an oligopoly and extreme fragmentation in which we're unable to learn from one another. And that middle ground is called a competitive marketplace. That's what the goal, I think, should be. And the marketplace should include open source. And it should include proprietary options for different approaches, different strategies. It should serve many different users, different institutional types. I think that should be the goal. And we should be creating the circumstances for a competitive marketplace to arise. And I'm not sure--
AMRIT AHLUWALIA: --realistically moving that way?
MATTHEW RASCOFF: I don't know. I don't know. There are so many moving parts to this. I mean, the FTC is concerned about this in the United States, about the control that big tech has. And Lina Khan has been to Silicon Valley a few times talking about small tech, the value of supporting entrepreneurial innovations that might otherwise get crushed by big companies. We should make sure we're advocating especially for education technology, for the kind of bottom-up solutions that may be valid and that may be the basis for our R&D system, for a more evidence-based approach through this organic process of minimum viable products. Through the entrepreneurial process, we'll get product market fit, we'll get good options, good solutions.
I'm not sure institutions are best situated to drive that innovation. I think it's more likely to come from ed tech innovators. I think our goal at institutions should be to expose the real problems of practice to ed tech and build a better translational system that can basically say, here are the important problems to solve. You, the entrepreneur, may not be a faculty member at our institution. But we can tell you what the struggles of our faculty are like, to ensure that you're more likely to build something that solves a real problem, that doesn't just scratch your own itch based on your own experience as a student, but addresses a big problem.
And there are many such problems to be solved. But I don't think we have a great mechanism for communicating those problems to industry, and for industry to test those ideas with us. Part of what I see as most compelling is this idea of co-creating technology between ed tech and users, and having a kind of creative partnership that doesn't just treat them as vendors to us. And from their perspective, we can be exposing our challenges, our use cases, the real problems that need to be solved. That de-risks their own product roadmap. That ensures that what they're building is likely to have a buyer at the end of it. So I think we need a mechanism like that.
Earlier in my career, I worked for an ed tech company called Wireless Generation. It was a formative assessment company; it's partly why I'm so obsessed with formative assessment. It worked on early reading, and it was one of the enablers of this revolution called the science of reading. It's why we know how to teach kids to read. And when we implement it, we can basically ensure that everybody is able to read.
Wireless Generation built its assessment technology in partnership with Montgomery County Schools. It was a school district that knew it needed this and basically exposed this challenge, this formative assessment challenge, to the company. And what they did was build assessments onto early mobile devices and give teachers the capacity to understand what their students' struggles were in reading. As they listened to students read, teachers could mark up an assessment.
But all that was co-created. The founders didn't know exactly what it was like to be a reading specialist. That wasn't their background. And the reading specialists didn't have the means to build the technology that was going to put these assessments onto mobile devices, and analyze them, and gather them, and aggregate them, and turn them into usable data that could be the basis for this reform. And so that model of partnership, that, to me, is the thing that I'm most excited about.
It's very, very hard to do. People are concerned about conflicts of interest. There are a lot of challenges to manage in that. But that, to me, is how we could build something better for the future. And it's not all university driven, or all commercial driven. It's a kind of creative partnership, and thinking about a better translational system that connects the two.
AMRIT AHLUWALIA: Such an interesting point. I mean, so much of where great innovation happens is where there's trust. And I do feel there's maybe a lack of trust between post-secondary institutions and the vendors, the partners, that serve them.
MATTHEW RASCOFF: Yeah. Just the word vendors itself suggests an arm's-length agreement--
AMRIT AHLUWALIA: That's it. Yeah. I mean, as soon as I said the word, I'm like, that's not a word you use in this conversation.
MATTHEW RASCOFF: Yeah, but I think that is how most CIOs think about the technology providers. And I think they think of us as their customers, not as their creative partners.
AMRIT AHLUWALIA: That's it. Well, Matt, I think we need to leave it there, unfortunately. I so appreciate your time. Now, just in closing, one thing that we ask every episode, if someone's visiting Palo Alto, what's one thing they need to see?
MATTHEW RASCOFF: Lake Lagunita. It's an artificial lake on campus at Stanford. It's seasonal. It fills up sometimes in the winter when there's good rains. And it creates this incredible ecosystem of species that you don't otherwise see. It's kind of hidden away in the back of the Stanford campus, but highly recommended--
AMRIT AHLUWALIA: Get out--
MATTHEW RASCOFF: --even in the dry season when it's quite different. But especially in the wet season in the winter, that would be my recommendation.
AMRIT AHLUWALIA: That's awesome. Matt, I appreciate you, man. Thank you so much for coming on.
MATTHEW RASCOFF: Thank you very much for having me. Take care.
[MUSIC PLAYING]
AMRIT AHLUWALIA: Thank you for listening to the EdUp PCO Podcast. Please subscribe to the podcast through your preferred channel. And connect on LinkedIn to let me know if there are any topics you want to learn more about.