Natural language processing conference explores use of AI for student-centered instruction
“We share a vision of using artificial intelligence to empower teachers, to uplift them and keep them in the field,” said Jennifer Jacobs, associate professor at the Institute of Cognitive Science, University of Colorado Boulder, as she took her turn introducing herself to a packed room at the Stanford Faculty Club in late May. “Will it do that?”
A sense of possibility — fervor mixed with trepidation — flowed through a daylong conference devoted to natural language processing that brought education researchers together with ed tech entrepreneurs and funders May 23.
The influential psychologist Carol S. Dweck said that she had come as a “lurker,” to learn what she should be excited about and what she should be troubled by.
Natural language processing is the artificial intelligence (AI) technology that allows people to interact with computers via human language. It is what lets a child ask a smart speaker to play a song, or lets your phone read aloud a text you just received and take your reply by dictation. It has multiplied researchers’ ability to study teaching and learning environments: an app on a smartphone with an unobtrusive mic can now “hear” and transcribe much, if not all, of the communication in a classroom. Such technology can turn a relative trickle of data into a flood, and it can provide the tools to make sense of that flood — to organize, interpret, and act on it.
Dora Demszky, assistant professor at Stanford Graduate School of Education (GSE), moved quickly this spring to assemble the conference, “Empowering Educators via Language Technology,” along with her colleagues Heather Hill, Hazen-Nicoli Professor in Teacher Learning and Practice at Harvard Graduate School of Education; Jacobs of University of Colorado; Jing Liu, assistant professor at University of Maryland; Susanna Loeb, professor at Stanford GSE; Bethanie Maples, PhD candidate at Stanford GSE; Tamara Sumner, director of the Institute of Cognitive Science and professor in computer and cognitive science at University of Colorado Boulder; and David Yeager, associate professor of psychology at University of Texas at Austin. The conference was also hosted by Stanford Digital Education.
“A growing number of us have been working at the intersection of natural language processing and education, yet there was no community or conference that we were all part of, where we could share our ideas and establish collaborations,” explained Demszky. “ChatGPT has suddenly drawn a lot of public attention to this space, from researchers, practitioners, and funders alike. We wanted to seize this opportunity to build a community focused on the most important question we all care about: how do we leverage language technology to empower all teachers, and in turn, facilitate access to high-quality, equitable education for all?”
Demszky and her colleagues believe that conversations unfolding now among researchers and ed tech creators can lay the groundwork for shared principles and infrastructure that will influence the field’s direction. They envision scholars, practitioners, and entrepreneurs collaborating to build tools that promote excellent, student-centered instruction, and to make those tools widely accessible. Their mantra is “AI in the loop, educators in charge.”
On May 23, as participants responded to Demszky’s invitation to share what they were excited and troubled about regarding natural language processing in classrooms, explicit and implicit questions hung in the air. Will AI products for education be built on robust and representative data? Will their design reflect the insights of social science? How well will they serve students, teachers, and the larger purposes of learning?
“I think we are seeing a new field emerge,” said Matthew Rascoff, vice provost for digital education at Stanford. “The question before us is how to harness this technology for the benefit of society.”
He continued, “The collective choices that we make may impact millions of learners. The question is how this field will be shaped. As we translate AI research into ed tech, can we protect educational and civic values? How can we embed democratic participation and educator involvement as we project knowledge from our labs into Silicon Valley innovations?”
“We need to maximize sharing and collaboration”
The conference organizers designed the day so that participants would form and reform groups, tackling a series of guiding questions that ranged from the overarching (What are the most pressing problems in education, keeping equity in mind?) to the practical (How can we involve teachers in AI tool design and implementation?). Over the course of the day, attendees were bound to talk with someone working on a similar problem from a different angle, or with someone focused on an entirely different problem or operating at a different level, from the abstract to the technical. They brought expertise in education, data science, cognitive science, and psychology, and they came not just from universities but also from nonprofits and industry leaders devoted to transcription, digital classroom assistance, tutoring, and teacher professional development. Advocates for open science chatted with entrepreneurs whose success might be thought to depend on proprietary intellectual property.
It was fertile ground for interactions that could lead to research-practice-industry partnerships, a goal of the conference. The difficulty of the tasks ahead seemed to argue for breaking down silos.
“I think we’re so early that the intellectual property risk is low,” said Jamie Poskin, as the group sought to identify fundamental scientific questions that the field needed to address. Poskin founded TeachFX, which uses AI to give teachers feedback on instruction. “What we’re doing is still very hard, and we need to maximize sharing and collaboration. I would love to have 10 competitors in the space. It would be a great sign for the field.”
“Amen,” came from the room.
Collaboration in the research, practice, and industry ecosystem can produce a reinforcing cycle that increases knowledge about what works in education, and tools to support it. For instance, AI tools being built by ed tech companies hold out a promise of supercharging academic research, because of the amount of data that they can absorb and make sense of. Researchers have a strong interest in the quality and performance of those tools, and the quality of the data they yield. The larger the datasets, and the more inclusive they are of diverse demographics and instructional settings, the better. Subsequent AI products built on that data will be more likely to succeed — to be adopted enthusiastically by teachers, administrators, and students — if their design is informed by learning science and education theory, not to mention by educators themselves. New experiments designed by researchers could be embedded in those products.
Conference organizers and attendees were clear that AI should serve and enhance not just learning goals, but also the people in our education systems, and that the tools we design should reflect our understanding of the goals of schooling. Are classrooms laboratories, where teachers and students learn through experimentation? Social forums where students practice the skills needed to participate in democracy? Focused spaces to master content knowledge and to demonstrate proficiency that builds over time — on-ramps to productive and rewarding employment?
“Instead of replacing human relationships, we want to make them richer, deeper…really leverage the capacity of humans to do what AI can't yet and hopefully never will do: connect with other humans,” said Jeff Bush, a research scientist at University of Colorado Boulder.
As Hill summed up the input that she had gathered through a pre-event survey, she echoed that sentiment, saying, “The aim for all of us is not to replace teachers, but to help them achieve their potential.”
“It feels like a field-shaping moment,” said Isabelle Hau, executive director of the Stanford Accelerator for Learning. “I want kids, teachers, and families to be in mind during design, for the good of society.”
Yeager, who addressed the group as the day came to a close, had heard both the conviction about AI’s transformative potential and the undercurrent of concern. He encouraged fellow academics to engage with cultural and technological change. “GPT and large language models are out there, are in use — and people who care about educational equity and inequity are concerned,” he said. “The tendency among academics is to stay at ‘stop.’ The ‘go’ version is to ask where we can equitably and responsibly make progress by living and embodying our principles in our projects.”
As part of that movement forward, Demszky and her colleagues will be collaborating with a writer on a white paper to distill the information and recommendations from the conference. They are also planning another conference that will center the voices of educators, including K-12 teachers, who are considering the use of AI in their classrooms.
“We are just beginning to understand how AI tools can help teachers in classrooms,” said Hill. “A critical piece of our learning will be determining how to enhance, rather than complicate, teachers’ work.”
The “Empowering Educators via Language Technology” conference was supported by the Bill & Melinda Gates Foundation, Stanford Accelerator for Learning, Stanford Digital Education, Stanford Graduate School of Education, and the Texas Behavioral Science and Policy Institute at University of Texas at Austin. Academic collaborators also came from Arizona State University; Harvard University; University of California, Berkeley; University of California, Irvine; University of Colorado Boulder; University of Maryland; and UT Austin. Organizational collaborators included Edthena, TeachFX, Merit America, Merlyn Mind, the Minerva Project, Teach For America, Schoolhouse.world, Research Partnership for Professional Learning, and San Francisco Unified School District.