Educational AI with ‘humanity in the loop’

The UNESCO International Day of Education 2025, “Artificial Intelligence and Education: Challenges and Opportunities,” brought together an international group of educators and technology leaders in order to “examine new possibilities offered by AI,” “promote the development of critical AI literacies,” and “ensure that AI complements rather than replaces human elements of learning.” Matthew Rascoff, vice provost for digital education at Stanford, gave the keynote address at the event, which was co-organized with the UN Group of Friends for Education and Lifelong Learning and held at the United Nations in New York on January 24, 2025. The ideas in the address were jointly developed by Rascoff and Josh Weiss, director of technology and innovation at the Stanford Accelerator for Learning.
The text of the keynote follows; you can also view it at 34:41 in the event recording.
Keynote at the International Day of Education 2025
Your excellencies, distinguished guests, and friends watching around the world,
There are many competing frameworks for thinking about AI in education. There is an active debate about whether AI will primarily lift the floor or the ceiling of human performance. Will the already skilled become even more so, to the detriment of everyone else? Or will we widen access to previously rare skills and knowledge? Are we headed for a world of educational abundance, as was promised by previous waves of education technology? Or will these technologies exacerbate inequality, increase instability, and raise fears of technological unemployment?
These are important questions. But in this talk I want to introduce what I think is a new framework for thinking about AI in education. The framework I offer reflects both the scientific evidence about learning and the educational philosophy of constructivism, which elevates learning as a process by which knowledge is co-created by learners themselves. In so doing I hope to offer a roadmap for educational AI that might guide those who are building new technologies and tools, as well as educators and policymakers who are considering how to implement them.

My argument will proceed in three steps. Step 1 will survey the current landscape of educational AI. Step 2 will look to the frontier of AI innovation in education. And Step 3 will attempt to point a telescope at the distant horizon, and envision a possible future in which AI truly benefits learning and humanity. In each of these steps I will try to offer examples from different levels of education, from elementary to adult, since I know there are many different priorities and emphases represented in the audience.
Let’s start with a map of the current state of educational AI. Most of these tools are designed on a model of one-to-one interactions such as tutoring, coaching, or personalized learning. There are many examples of how this approach can be beneficial to students.
Mainstay is a startup whose chat product guides university students through complex administrative processes. Trained on a university’s policies, Mainstay plays the role of an advisor or counselor who is available at all hours and can answer questions about enrollment, financial aid, and choosing a major.
Replika is a chat-based AI companion that appears to be designed for entertainment. Users role-play different characters and interact with imaginary friends. On one level, it is a game. Unexpectedly, though, in the context of an epidemic of loneliness and isolation, a group of researchers found that Replika reduces suicidal ideation among its users by a statistically significant amount.
My last example is in the early reading realm of elementary school. Ello is a tool that can listen to a child as she reads a book, and offer feedback based on the science of reading. “Reread that word, breaking up the sounds,” it might suggest to a 6-year-old encountering a new word she mispronounces. Unlike Siri or Alexa, which listen for commonalities in our speech, Ello listens for divergence, to help correct errors and provide feedback.
In each of these examples the primary goal is differentiation. If we could give just the right bit of information, support, or affect to the right student at the right time, we could help them make progress at their own pace. One-to-one AI is predicated on the ability of a machine to process some interaction with a learner, either as a prompt, as audio, or as a document, and give them something useful in return. The more a learner uses the system, the better it gets to “know” them and serve more useful and relevant information. That idea is called “human-in-the-loop.”
The one-to-one approach comports well with the consumer business model of many educational AI companies. Their goal is to develop products that individuals will license for their own benefit. The implied educational philosophy here is autodidactic. Engineers who taught themselves, who program largely on their own, build products for others to do the same. While there are clearly some benefits to the individual user of these technologies, there are also real risks to collective learning. One-to-one AI might breed a kind of intellectual isolationism and risk a tragedy of the commons.
I will give just one example from the school where I teach, the Stanford Graduate School of Business. In response to the prevalence of one-to-one AI technologies, we have a new policy this year that says instructors may not prevent students from using AI tools for assignments done outside of class. The idea is that banning these technologies is unenforceable and it is unfair to punish the handful of students who might comply with a ban if most of their classmates would ignore it. In the process, though, we may now be coercing students to use AI even if it does not support their learning.
I would give the example of writing essays, which is an important part of the class I co-teach. For many students, the abilities to write and think are intertwined. There is no way to outsource the writing without also outsourcing the thinking. So AI writing assistance might reduce the intellectual risks that students take, harm their personal explorations, and homogenize their outputs. I don’t know if that will prove to be true, or if the benefits will outweigh the risks. The first assignment I have given my students is due next week, so we shall see! What I would say is that it would certainly be ironic if technologies designed to support personalized learning ended up reducing individuals’ divergent thinking.
My goal as a teacher is not for my students’ ideas to revert to some kind of statistical mean. It is for them to develop their own perspectives in a community of shared learning. How might AI support such a learning community? Let us turn our attention to the frontier of educational AI and consider a few examples of what I will call many-to-one AI.
M-Powering Teachers is an open source tool for teachers built by Dora Demszky, assistant professor in education data science at the Stanford Graduate School of Education. Using machine learning and diarizing techniques, M-Powering Teachers can listen in on a classroom discussion and provide feedback to teachers based on an analysis of the discourse. That feedback might include observations that you are calling on male students more than female students, that your uptake of student ideas could be better, or that you could more effectively use active learning strategies such as “wait time,” which means waiting a few seconds after asking a question before calling on a student to answer it.
M-Powering Teachers democratizes instructional coaching, a proven practice for improving teaching that is difficult and expensive to implement with human master teachers. At best, instructional coaching is currently available intermittently. With M-Powering Teachers, teachers can get regular low-stakes formative feedback. In 2023 Prof. Demszky published a paper showing, in a randomized controlled trial, that M-Powering Teachers “improves instructors’ uptake of student contributions by 13 percent” and improves students’ satisfaction and assignment completion.
Another example of many-to-one AI: Goblins is a middle school math assessment startup built by a former math teacher, Sawyer Altman, and funded by the Gates Foundation. Using computer vision technologies and a math engine, Goblins can read handwritten equations and provide step-by-step support to students who may have answered a question partially correctly. As in one-to-one AI, students get their own machine-produced feedback. But crucially, that feedback is also aggregated and provided to their teacher. And that is what makes it a many-to-one approach. By allowing students to more systematically show their work, teachers can ask more open-ended questions and better understand gaps in their students’ knowledge and abilities. While it has not yet been externally validated, Goblins’ approach may hold the key to finally moving away from reductive multiple choice tests and building a modern formative assessment system.
In both of these many-to-one cases, the data originates with students, is processed through an AI, and is presented to a teacher. With AI help, the teacher can encourage equitable participation among students, address student misconceptions early, and open the “black box of learning” through formative assessment. We might think of many-to-one AI as “humans-in-the-loop” because the output is a joint effort of multiple students, and the goal is shared improvement for the whole class through greater insight for educators.
Now finally in Step 3 let us attempt to look over the horizon, beyond what is possible with technology today. Let’s call it many-to-many AI. The vision here isn’t human-in-the-loop, or even humans-in-the-loop, but humanity-in-the-loop.
While this does not yet exist in education, I can offer a few glimmers of possible futures based on other fields. The UCLA mathematician Terence Tao, widely considered to be among the most gifted theorists of his generation, is also among its most prolific collaborators. But Tao believes that collaboration in math is limited by its hyper-specialization. Unlike in other areas of science, most math papers are sole-authored or co-authored by small groups. Tao thinks there is a role for AI to orchestrate “big science”-style collaboration in math by automating proof-checking and thereby increasing trust among researchers from different subfields.
AI might foster collaborations among mathematicians who don’t know each other but whose expertise is complementary. The result might be more mathematical innovations and publications, co-authored by ever-expanding circles of collaborators. Self-driving cars are another example of the potential of AI orchestration. Self-driving cars are often described in one-to-one terms as technologies that benefit me, as a rider. But I suspect once they achieve critical mass in cities, the benefits will be shared with pedestrians, cyclists, and other drivers, because these cars are programmed never to speed, and to be generally polite to all road users. (And definitely more polite than New York City drivers!) This is real technology that provides a glimpse into the future. If you come to San Francisco I encourage you to ride in a Waymo so you can experience it yourself! This is many-to-many AI, with humanity-in-the-loop.
* * *
If the progress of civilization is driven by successively larger collaborations of humans, we might begin to fathom the potential of AI orchestration to accelerate innovation. It is difficult to conceive of what this might mean for education because education has a bias against scale. We think bigger must somehow mean worse. But that won’t be true in AI-enabled education. With scale comes improvement in many-to-many AI. Network effects mean an experience gets better as more people participate. If Tao is right, we will have more mathematical breakthroughs through worldwide collaborations that tap into talent everywhere. If Waymo is right, cities will be safer for children to navigate independently.
Many-to-many AI has not yet come to education, but with these examples you can start to imagine the implications if it does. We will have more opportunities for learning together, from one another, across the divides of culture and language that separate us today, for the benefit of all of humanity.