
How to make AI work for higher education

A convening of educators at the White House leads to questions about how to bring more evidence into ed tech.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, addresses university leaders convened by the U.S. Department of Education for a working session on AI in postsecondary education.

A few months ago Chris Piech, assistant professor of computer science at Stanford, posted a co-authored research paper reporting the surprising result of a randomized controlled trial in a large online programming course he teaches: ChatGPT 4 reduced student engagement. Despite its contrarian finding, and its educational (and statistical) significance, the paper, which was shared on the scholarly commons OSF Preprints, received little fanfare. It didn’t fit the narrative of the current artificial intelligence (AI) education hype cycle.

Matthew Rascoff, vice provost for digital education

Boom and bust cycles have been part of capitalism since the Dutch tulip mania of the 1630s, when a single bulb was more valuable than a Rembrandt. But these cycles are pernicious to education, where demand is constant, not cyclical. Ed tech is caught between these two conflicting systems. It is largely funded by the same capital markets as the rest of the tech industry, in which the “IPO window” (the opportunity to take a company public) opens and closes based on macroeconomics. But its users are students and teachers, whose needs don’t rise and fall with stock prices and interest rates. Education requires reliable infrastructure, modern tools, and support for the perpetual challenges of learning and teaching: healthy, stable nutrition, not famine and feast. As schools have become more reliant on ed tech, the mismatch between the way it is financed and developed and the importance of its products has widened.

Ed tech decision-makers are faced with the task of navigating between the naive embrace of shiny new objects and the disappointment and cynicism that follow when startups and their products don’t work out as hoped. And the stakes aren’t just financial: Millions of children and young adults are struggling to overcome pandemic learning loss. Education technology will shape the capabilities and productivity of the next generation.

That navigation task is formidable but not impossible. Research and development frameworks from other sectors can help. The U.S. Food and Drug Administration uses a phased trial system to review new pharmaceuticals. New drugs move through four phases as studies demonstrate their safety and efficacy in successively larger populations. Now imagine an alternative universe in which the FDA didn’t exist. Each hospital would have to determine on its own whether to approve a drug, since there would be no trusted source of information. Physicians would redundantly evaluate therapies themselves (unnecessarily harming many patients in the process). There would be no way to systematically share results, ensuring that new doctors would have to figure this all out for themselves.

It seems unimaginable today, but this was how healthcare operated until the early 20th century, when medicine was professionalized, the FDA was established, and effective drugs replaced snake oil. And it is more or less how education innovations diffuse today — or more often, don’t. In medicine it took decades, but the scientific method finally prevailed. And if medicine could develop an evidence-based framework for research and development 100 years ago, there is no reason why education cannot.

The necessity of greater scientific rigor and transparency in the implementation of educational AI was a reflection I offered at a gathering at the White House on June 17. Convened by the U.S. Department of Education, the meeting drew leaders of 50 colleges and universities to discuss AI and postsecondary education. 

I heard that some universities are pouring tens of millions of dollars and assigning hundreds of staff members to work on generative AI technologies, and are rolling out custom tools to their whole institutions. This campus-centric approach is simultaneously too big and too small: Too big because there was no “minimum viable product” or other R&D process to validate the utility of these products before they were offered to every student, faculty member, and staff member. And too small because the needs of students and educators are similar across institutions — it makes no sense to build alone!

Universities quite reasonably don’t want to just license off-the-shelf large language models from OpenAI, Anthropic, and Google. Education has specific needs in AI. And this is an opportunity for ed tech, both proprietary and open source, which could yield higher quality, more evidence-based, and more scalable products than individual institutions could create on their own — products unlikely to emerge from the generalist platforms.

Generative AI has the potential to democratize learning, improve assessments, and increase engagement. An emerging generation of ed tech entrepreneurs is building the products that will help realize this potential and demonstrate how AI can be used effectively.

Many entrepreneurs are working carefully to produce responsible products, but there are others whose bold claims are drowning out the more mixed findings like those produced by Prof. Piech and his colleagues. As a start, we should be honest about the fact that this technology is in the early stages of development. And we should be testing it in smaller contexts as we gain confidence that its benefits outweigh the costs. 

Higher education requires a plan for experimenting with AI, studying its safety and efficacy, and publishing the results. I am not suggesting that we duplicate the FDA in the education space, but we need an alternative to the chaotic status quo. At the White House meeting I learned about various efforts to build a better system. InnovateEdu is a startup nonprofit that bridges gaps in data, policy, practice, and research. SafeInsights, based at Rice University, facilitates sharing educational data while protecting individuals’ privacy. These initiatives are addressing critical components of an emerging R&D infrastructure.

The “ed” comes before the “tech” in ed tech. As ed tech gains in educational and economic power through AI, we should hold it to standards that are more akin to healthcare than to media or consumer technology (categories where it’s often situated by generalist investors). Education is not just an industry “vertical” — it’s a crucial public good. And that makes ed tech different.

To ensure that the best AI-enabled learning experiences reach as many teachers and students as possible, we must raise our standards of evidence. As the placard held up at a pro-science protest says: “What do we want? Evidence-based change! When do we want it? After peer review!”


Matthew Rascoff is vice provost for digital education at Stanford.

