
The Future of Learning: AI Agents and Human-Centered Education


How are university faculty and students responding to the evolution of generative artificial intelligence (AI)? What could the arrival of autonomous agents mean for human-centered education? 

Panelists Alessandro Di Lullo, chief executive officer of the Digital Education Council; James Genone, senior vice chancellor for learning strategy at Northeastern University; and Matthew Rascoff, vice provost for digital education at Stanford University, discussed the implications of generative AI for learning with Suzanne Dove, assistant vice president for strategy and innovation at Bentley University.

This event was organized by Harvesting Academic Innovation for Learners (HAIL) and co-sponsored by the Digital Education Council and Stanford Digital Education. It took place March 17, 2025, as part of the Academic Innovation for the Public Good series.

Transcript

SUZANNE DOVE: Welcome, everyone. My name is Suzanne Dove, and I'm the assistant vice president for strategy and innovation at Bentley University, and I also am the executive director of the Badavas Center for Innovation in Teaching and Learning. We're so pleased that you could join us today for the HAIL webinar, The Future of Learning: AI Agents and Human-Centered Education.

HAIL stands for Harvesting Academic Innovation for Learners, and it's really a network, kind of an organic network of higher ed leaders who've demonstrated an awareness and commitment to experimentation and transformational change in academic institutions. So this HAIL conversation is co-sponsored by a couple of really special partners, the Digital Education Council, based in Singapore, and the Stanford Digital Education office, which is based at Stanford. 

And this is part of Stanford's Academic Innovation for the Public Good series. And so big thank you to those partners, and especially the Academic Innovation for the Public Good team for producing this webinar. We could not do this without your assistance. So very grateful for that. 

Before we begin our conversation today, I want to offer a few details about the format of this afternoon's program. We will take questions during the second half of our program, so please go ahead and start submitting your thoughts, your comments, your questions using the Q&A button at the bottom of your screen. The chat window is also open, so you're welcome to engage there. You can use emojis. Just please be mindful that this is a community space, and we want everyone to feel welcome. Also, auto-generated captions for the conversation are enabled, so you can show those by clicking the Show Captions button.

And again, as part of this preamble, I just want to make space to acknowledge the fact that three of us on the panel here, on the screen, are based in the United States at traditional academic institutions. This period that we're living through is a time of a lot of change and uncertainty for higher education.

Certainly, it's a time of uncertainty for people all around the world. And I know that we do have a global audience with us today. So I just want to acknowledge that and ask you all to take a breath and be here, present with us. I think that, for me at least, working in higher education is one of the most important callings there is out there, and I feel so privileged to be able to do this work at this time with this group of people, this great community. 

The other thing I want to acknowledge is that there are perspectives and identities that are missing from the screen in front of you. And those are really important to include as well. So I ask each of you who is speaking and presenting, but also listening and participating: please take away from this conversation your ideas and thoughts, carry it forward with your communities, and stay in touch with us.

We want this to seed new ideas, new discussions, new questions and challenges to this work. So thank you so much, all of you for being here. And please, again, help us to spread this work and keep the energy and the momentum going around some really, really critical topics. 

With that, I'm going to introduce our three panelists. I'm so excited to be having this conversation with them. So Alessandro Di Lullo is the chief executive officer of the Digital Education Council, a global community of practice for innovation in higher ed and workforce development. Also, Alessandro is a fellow in AI governance at the University of Hong Kong, where he has led projects with governments and institutions to upskill thousands of students and professionals in digital skills and AI. Thank you, Alessandro, for being here. 

James Genone is senior vice chancellor for learning strategy at Northeastern University, where he develops the overall philosophy, approach, and strategies associated with learning across the university. In prior roles, James has been a faculty member and academic leader at multiple institutions, as well as a startup operator and product strategist for an educational technology and services company. James, great to see you. So glad you could be here.

Finally, we have Matthew Rascoff. Matthew is the vice provost for digital education at Stanford University, where he advises Stanford's president and provost on digital learning initiatives and leads the Stanford Digital Education team. Matthew has worked across sectors to democratize access to knowledge and opportunity. Welcome, Matthew. We're so glad to see you here as well. 

So with that, this is really going to be, as I said, a conversation amongst the four of us. And then we'll pause at a certain point and have all of you in the audience participate with your thoughts and questions as well. So I'd like to kick us off thinking about our audience, the Harvesting Academic Innovation for Learners community: academic innovators.

Many of us in this room are charged, fully or as part of our role, with effecting some change or transformation at our institutions. We're involved in innovation in some way. So could each of you three-- Alessandro, James, and Matthew-- tell me what is on your mind about a way that academic innovators can help their institutions balance generative AI with human-centered education? Alessandro, we'll start with you.

ALESSANDRO DI LULLO: With pleasure. It's a great question to start, and just let me say also how glad I am to be participating in this session. I'm participating online today, but I'm physically at Bentley University, so I'm really happy to be here in person. Great question. I'll try my best to give you a short answer; we could go on for an hour.

It's interesting to notice how we are seeing many educators and professors trying different tools and different methodologies. And this is really great. But I think it's important to take a step back and recognize why we're using gen AI innovations. If we think about AI innovation, it is important to reflect on the key goals that we want to use it for.

We use a simple methodology, which is a five-pillar framework. Essentially, AI can be used to automate processes, to discover new trends, to personalize experiences, to predict outcomes or trends, and to foster inclusion. These five pillars are useful and effective because they can help educators and learning professionals structure their thoughts and make sure that we are not just trying tool after tool without a structured approach, which is really important, as much as curiosity is, of course, important too.

In parallel, we need to really reflect on how these applications are having an impact on the way we, as professionals, do our jobs, but also on the way students learn and live. Fundamentally, right now, we see that AI tools are really becoming important, especially when it comes to automating different processes and tasks.

And so naturally, the most immediate thing is that in some instances, those tools can have a negative impact on the cognitive abilities of students, or even on us as professionals, arguably. And so it is important to reflect on the impact of these tools on skills like critical thinking and creativity, and on the ethical considerations around these elements.

So if we proceed, first, to map the type of goals that we want to achieve with AI, and second, to really reflect on the impact on these human skills, then we can frame the discussion much better, knowing that it's going to be different between institutions, and even within an institution. A professor of geography or history will not have the same reality as a professor of engineering or finance. So it's important to structure the discussion to recognize that we're living in different environments, even within the same university.

I'll stop here because I'm really keen also to hear from James and Matthew, and then we can continue more. 

SUZANNE DOVE: Thanks, Ale. James, jump in. What are your thoughts on this? 

JAMES GENONE: Yeah, it's such a great question and I'll also echo Alessandro that I'm very grateful to be here and part of this conversation, which is so important. At Northeastern, a lot of the conversation that we're having right now is about adaptation. I think we see adaptation, the ability to adapt, as one of the most important skills we can cultivate in ourselves and in our students. 

And as an institution, we're really just at the beginning. Even though it feels like we've been talking about AI nonstop for quite a while now, we still think we're at the beginning of the change that we're going to see because of this technology. And so we have to prepare ourselves to continuously adapt. And that's true at the different levels at which we organize ourselves. Myself as an individual, my colleagues, and so on -- we all have to understand how our work is changing because of the technology.

And then the groups that we're a part of -- faculty, staff, students -- what does it mean for them in the particular roles that they play in the university? And then the institution as a whole: how does the work of the institution have to change? There's no single easy answer to that, but what we believe is that, increasingly, we have to put a focus on the kinds of skills Alessandro was just talking about.

And these are skills that, in many cases, you would have been expected to cultivate over the course of a long career. Prioritization, goal setting, effective decision making, moral judgment and so on. And now, these are becoming really important as we interact with this technology for our students, potentially at a much younger age. And so they need the kind of learning environment in which they can practice those things. 

And that's why Northeastern has always been incredibly focused on experiential learning. And we think that's even more important now given how we're engaging with artificial intelligence. So that's just the start of an answer but it's definitely what we've been talking and thinking about quite a bit. 

SUZANNE DOVE: Yeah, I love those points, James. It's great for a traditional academic institution to really be thinking about how we craft and shape that learning environment. Matthew, bring us home here with your thoughts on this opening question.

MATTHEW RASCOFF: Love the comments from Ale and James so far. Thank you for getting us started. This is my perspective: I don't claim to represent Stanford, and I don't think we've developed a single strategy yet as an institution. But one observation that I make in the online learning world, which several of us share, is that I think there's a shift that we need from a human capital-centric model to a social capital model.

Human capital was all about individual skills; it was about cognition and the content that drove it -- what I can do. It was developed in the middle of the 20th century as a framework for thinking about what will drive economic development and how countries should invest. Education was too big, so skills and human capital arose as the particular things that you could invest in.

To me, the biggest disruptions are coming in that human capital-centric model of learning. And what we need now is a new emphasis on relational skills. The number one skill on LinkedIn is not Python or C or Java; it's communication. I learned that recently. And that, to me, feels like a fairly big shift, especially in technical programs.

That's where some of the biggest disruptions are happening. That's where the generative AI tools, I think, are most impactful. And I think the computer science graduates are the ones who are most anxious about the obsolescence of their skills. They're going to need to be prepared for hybrid jobs that bring in more of the social and individual skills.

Like that, to me, feels like a model for human-centered learning, which was your question there, Suzanne. Digital learning hasn't always been that. It has been very individualized. It's been personalized almost at the cost of the social. And to me, I think we need a new emphasis on community, on how to engage with others, how we engage across difference, and how we work together in larger collaboratives that allow us to do those particularly human things. What those things are will evolve over time. But we, as collectives, are going to figure that out together. That's how our civilization has evolved.

SUZANNE DOVE: Yeah, I'm seeing a lot of thumbs up and heart reactions and chats to what you're saying, Matthew. That's really, really helpful. And so I want us to build maybe a little bit from the ground up, so to speak, in the next segment of this conversation. So I'd like to start with Alessandro and some of the work that your organization, Digital Education Council, recently published, which is an AI literacy framework. 

I think this spoke to me because I feel like we keep talking about AI literacy, but it's not always clear what exactly we're talking about. And I highly recommend that people check out what Alessandro and his team have produced. It's really rich. It does a great job looking at the state of the literature and the landscape on this, and then provides some very helpful and practical ideas for a way forward.

So, Ale, I wanted to ask you, what do you think about AI literacy? Is it the same for everyone, or are there certain skills that are more essential, similar to what Matthew was talking about, team-based or community-based skills? Talk a little bit about this question of AI literacy and how you're thinking about it. 

ALESSANDRO DI LULLO: Great pleasure. Very happy to share more about our framework. And I also want to connect with what Matthew was mentioning before, about these skills that are not necessarily something new. Collaboration, critical thinking -- it's not that we are inventing them today. However, what is important to recognize is that today we are living in a completely different era.

And what is different is that if you take digital literacy and you take AI literacy and look at the skills, it's not that we're talking about something completely different. What is different is the speed at which change is happening. And specifically, if you think about critical thinking, creativity, and ethical considerations, it's the magnitude of the change -- the impact, positive or negative, that this technology can have on these skills.

So practically speaking, we are seeing a number of professions that are really at risk when it comes to their ability to evaluate AI output, and also to understand what the ethical implications are for their jobs. And so we did a big piece of work in really trying to understand and make our own contribution to the literature on AI literacy.

And so we came up with a five-pillar framework that presents both a general AI literacy for all as well as a specialized AI literacy for specific domains, because we realized it's important to merge both. Why? Because there are certain skills and certain foundational elements that need to be common across domains, and we cannot go too soon directly into specific jobs; otherwise, we are missing the big picture.

Similarly, we need to get real, and we need to get practical, about what jobs people are doing with AI and what tasks they are executing. And so we are proposing a framework with five pillars that essentially guide the reader through it almost like a story of how you will use AI.

So the first dimension looks at, How does AI work? So understanding AI and data. The second dimension looks at, How do I evaluate AI output? So critical thinking, judgment, and the ability to evaluate AI output. Third, ethical and responsible AI use. So, How do I ensure I'm using it correctly and responsibly? 

Fourth, human centricity, emotional intelligence, and creativity. So, How do I ensure, very importantly, that humans remain at the core? And fifth, How do I apply everything to a specific context? Which is really interesting. And we were thinking about how this fifth pillar, the domain expertise pillar, can be embedded and applied in different dimensions and realities.

And so we are also advocating for almost a skills-based approach. In the document, which you can download on our website, you will see an example for faculty members. We reflected on the key skills that faculty members need. Arguably, it's about facilitating students' critical thinking. It's about innovating pedagogy. It is about cultivating a mindset of change, both in terms of their readiness to change and in terms of adapting the curriculum, along with a number of other skills.

And so you will see that the framework becomes modular and is applied in different dimensions. And if you are using this tool, we would love to understand how you use it. What is AI literacy for an investment banker? What is AI literacy for a leader in a university? What is AI literacy for an admin professional? What is AI literacy for a health professional?

It varies, but it's important to recognize, as we are talking today, what human centricity means in the context of AI and how we can make sure that these human centricity concepts are well applied and related to the actual job that AI will do. Because with one without the other, you risk missing important points.

SUZANNE DOVE: Yeah, that's really helpful. And I see a lot of requests to access the AI literacy framework, so we'll make sure that link is provided. What you're describing, Ale, about the domain specificity and how AI literacy applies in a particular profession, really is a great segue into a question that I had for Matthew, which is related to your recent speech to UNESCO, where you gave this really cool three-part framework.

And you talked about one-to-one, many-to-one, and then many-to-many. So I wondered if you could maybe briefly share with the audience what that framework was. But I'm thinking specifically about the many-to-one, which is really a way that instructors or educators can learn from the AI about their learners.

And what are the skills, the domain-specific skills, for those educators to know how to take on that new knowledge and how to use those new tools? Because it's already happening. This technology has arrived. These products are available, they're all over the place, and educators are using them. So tell us some of your thoughts about that.

MATTHEW RASCOFF: Yeah. So this was a framework I co-developed with my colleague Josh Weiss. And the audience was ambassadors from all these countries at the United Nations. I'll ask one of my colleagues to post a link in the chat. My talk began, like, Your Excellencies, and I've never done that before. So it was kind of interesting to try to make this relevant to people outside of education.

One-to-one AI is like personalized learning: tutors designed for your individual purposes, familiar from the consumer tools that we've got at our disposal. Many-to-one AI is this idea that the role of the AI is to aggregate information, synthesize it, and then present it to an educator, let's say, in a way that's more usable, more actionable for them, and it becomes a kind of formative assessment tool or a feedback tool or an analytic tool.

So the ChatLTV project at Harvard Business School is a good example of this, where they are aggregating interactions that students had with an AI and turning them into information the professor can use for class preparation the next day, giving them insight into where students are struggling. The professor can then use that for differentiation, to meet students where they are.
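[A toy sketch of the many-to-one pattern Matthew describes. This is an illustration, not the actual ChatLTV implementation; the log format and topic keywords are invented for the example.]

    # Aggregate students' questions to an AI assistant into a short
    # digest the instructor can use for class preparation.
    from collections import Counter

    # Each record: (student_id, question the student asked the AI).
    chat_log = [
        ("s1", "Why does the discounted cash flow model use WACC?"),
        ("s2", "I don't understand WACC at all"),
        ("s3", "How do I compute terminal value?"),
        ("s4", "What is WACC again?"),
    ]

    # Hypothetical keyword-to-topic map; a real system would use a
    # language model to cluster and label themes.
    TOPICS = {
        "wacc": "cost of capital",
        "terminal value": "terminal value",
        "discounted cash flow": "DCF mechanics",
    }

    def instructor_digest(log):
        """Count which topics students are asking about across all chats."""
        counts = Counter()
        for _, question in log:
            for keyword, topic in TOPICS.items():
                if keyword in question.lower():
                    counts[topic] += 1
        # Surface the most common points of confusion for the next class.
        return counts.most_common()

    print(instructor_digest(chat_log))
    # [('cost of capital', 3), ('DCF mechanics', 1), ('terminal value', 1)]

The point of the pattern is the direction of flow: many student interactions in, one synthesized view out to the educator.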

Many-to-many AI is the idea of AI as an orchestrator. I think the title of this panel mentioned agents, and I think that might be an agentic role. It's an air traffic controller that allows people to work together more effectively in small groups. It potentially allows for larger and larger congregations of people working on big projects that may feel quite small to them, because small groups within them have AI facilitation that's really good -- facilitation that understands our biases, can correct some of those biases, and makes sure everybody speaks an equal amount of time.

And my model for this comes from the mathematician Terence Tao, who's a UCLA scholar who believes that AI is going to enable a new kind of math collaboration that allows people to work together more effectively. Even in subdomains where they may not fully understand each other's technical expertise, the AI will be there to help people with co-authorship, with collaboration. And you can imagine something like that playing out in the context of a very large online course, where there are small groups that allow people to feel like they're part of something cohesive, even as it scales to become very large. 

I think each of those has a lot of potential. I think of them as a progression. And I don't think we're at the many-to-many moment quite yet, at least not in education. That, to me, is where the new possibilities will come in, though. That's what's most exciting. That's where we'll be doing things with these tools that are not just replacing something, not just augmenting something, but really creating new potential for learning that did not exist in the world before.

And that, to me, should be a north star for those who are working on this as builders, for those who are thinking about what an education-centric roadmap for an AI company looks like. That, to me, feels like a direction we should be heading.

And I worry that we've become too wedded to this one-to-one model, which derives from the consumer technology world, not from the world of education, and it's constraining some of our thinking about what the roles of human community might be. And it's limiting the social capital-enhancing possibilities of this technology, which I think is really its most exciting long-term potential.

SUZANNE DOVE: Yeah. Those are excellent points, Matthew. I really appreciate that you're calling on us to be thoughtful about avoiding some of the pitfalls of approaching this as just another standard technology when, as you said, it's transformational. And this panel is more about the education and the learning than it is about the technology. That's our intent, anyway.

So, Matthew, you mentioned agentic AI, and James, you're the person who really has taught me a bit about what that is. I'm sure I don't know the half of it, but it's something I know you've been thinking deeply about. And it is clearly becoming a much bigger part of the conversation and the landscape now. So can you share with the audience briefly how you would explain what agentic AI is?

And then specifically, talk a little bit about what it suggests for the role of an educator. And how do we encourage human educators and students to work with agents? Or should we encourage, or should we caution, or some blend of the two? What are your thoughts on that?

JAMES GENONE: Yeah well, I'll build my answer off of Matthew's very-- what I find to be very inspiring remarks about this. I think what I see in a lot of higher education right now is people are still trying to catch up with what was happening with generative AI and AI more broadly, over the past couple of years. And yet, as we've been saying, this technology is moving very quickly. And so what's on the horizon right now is broader and broader adoption of what people are calling agentic AI. 

And people define it a bit differently. But the way I understand it, agentic AI refers to AI tools and workflows that can accomplish complex tasks either fully or at least semi-autonomously. So they're making decisions about how to complete various kinds of work without the human necessarily needing to make a decision at every single stage. And this often involves an AI model being able to call on different kinds of tools, and often check its own work and decide to do something differently.

And in many cases, you have different agents that are performing different tasks, and there's an orchestrator model that's choosing from among them which ones should perform, based on the overall goal that the user has specified. So what I think is important about this is, on the one hand, there's quite a bit of caution that we need to bring to the adoption of AI agents. Because the more autonomous they become, and the more they start to interact with computers and websites and other applications and so on, the opportunities for things to go wrong are going to increase and multiply. So obviously a lot of care and thoughtfulness needs to be brought to that.
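[A minimal sketch of the orchestrator pattern James describes. The agent names, the hard-coded plan, and the routing logic are hypothetical illustrations, not any specific product's design; a real orchestrator would use a model to plan, route, and verify.]

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Agent:
        name: str
        description: str           # what kind of task this agent handles
        run: Callable[[str], str]  # executes a task, returns a result

    # Stand-in agents; in practice each would wrap a model or a tool.
    def research_agent(task: str) -> str:
        return f"[research notes for: {task}]"

    def writing_agent(task: str) -> str:
        return f"[draft text for: {task}]"

    def review_agent(task: str) -> str:
        return f"[critique of: {task}]"

    AGENTS = [
        Agent("researcher", "gathers and synthesizes information", research_agent),
        Agent("writer", "drafts prose from notes", writing_agent),
        Agent("reviewer", "checks the work, can send it back", review_agent),
    ]

    def orchestrate(goal: str) -> str:
        """Decompose a goal, route each step to an agent, collect results.

        The plan is hard-coded here to keep the sketch self-contained;
        the semi-autonomy James mentions comes from the orchestrator,
        not the user, deciding which agent acts at each step.
        """
        plan = [("researcher", goal), ("writer", goal), ("reviewer", goal)]
        results = []
        for agent_name, task in plan:
            agent = next(a for a in AGENTS if a.name == agent_name)
            results.append(f"{agent.name}: {agent.run(task)}")
        return "\n".join(results)

    print(orchestrate("summarize this week's readings for the seminar"))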

But if I take up a more optimistic mindset on this, I think this is actually quite an exciting moment for learning. We've been talking for many years, decades, about personalized learning and adaptive learning, and there have been various technological approaches brought to that. But I think many, if not most, people would agree that neither of those terms has realized its full potential.

And part of the reason for that is because each individual learner is such a rich individual. We all have our own capabilities and needs and interests that we bring to the learning process, and it's been hard for the technology, to date, to accommodate all of that. The kinds of behaviors that we see from AI technology broadly, but especially generative AI, start to tell us that that could look really different.

And I'll bring in an example here from the world of software engineering. Folks have been very excited about a concept that's been called vibe coding lately, if you're an enthusiast for the technology. And vibe coding is really the idea that, as a software developer, you're just having a conversation with an AI model that's writing software for you, and it's highly iterative and it's highly creative and it really allows you a certain whimsy in just being able to try to realize a prototype of something you've thought up. 

And that's really interesting and exciting. And part of what's interesting about it is you can imagine a world where all of us, even those of us who are not programmers, like myself, can have our own library of customized applications that we build for various things we want to do in our life, including learning. 

And so I think a little bit about how this applies to learning and I think about it in terms of personalized learning or even project-based learning, where there's a lot of learning you need to do on demand, just in time. You're working on a complex project, and there's something you don't know how to do yet. And so you can imagine working with one of these agents to help teach you how to do that thing. And that makes the learning that you're doing much more responsive and dynamic, rather than a linear process towards a fixed set of goals that someone else has specified for you. 

On the other hand, it brings a challenge in terms of the collaborative social human aspect that Matthew mentioned. And I was talking about this with one of my colleagues last week who suggested, well, in a world of learning that's like vibe coding where you can be creative and do things that you're personally interested in on demand, it starts to make a classroom session look a lot more like a hackathon. 

And I loved that idea. The idea that there's this opportunity for learning that is incredibly dynamic, that's incredibly creative, that's social, where we're determining together how and what to learn based on goals that we're setting in real time. So it hearkens back to those kinds of higher-order thinking skills that Alessandro was talking about a bit earlier. This is a process of engaged, highly experiential, highly dynamic learning that's going to mirror all the kinds of changes that we're going to see in how we work, I think.

So that's how we're thinking about the adoption of agentic AI. It's still got a long way to go before we get to that place, but we want to learn and grow with it so that we can hopefully achieve the optimistic vision of what's possible here. 

SUZANNE DOVE: That was really helpful, James, and I appreciate that you're giving the two-sides-of-the-coin approach. There are reasons to be concerned or worried about agentic AI. Alessandro and I were just at a roundtable luncheon where a faculty member was concerned that maybe I'll give my class an assignment that has been generated by AI, then my students will use their AI agents to respond, and then my AI agent will grade it and give them feedback. And it ends up really just squeezing the humanity out of the equation altogether.

So how do we avoid that? How do we guard against those kinds of things? Matthew, do you have a thought on that? 

MATTHEW RASCOFF: I'm a personalized learning skeptic, honestly. I was part of that movement. I bought into it. It seemed so exciting. But the images of children with headphones on and screens -- when I saw it in action, when I had my own children, when I spent more time in the classroom myself, it really lost its romance.

And I think what we need now is communalized learning, not personalized learning. Or maybe there is a way to do personalized learning with AI enablement that is personalized, but it's still in groups. And the group-- the thoughtful design of those groups is the personalization. 

It puts you in conversation with somebody who's complementary to you on some idea, on some background, on some identity, but it does not lose out on that connection, which I think is the most needed skill for our society. It's the one that's least likely to be disrupted. It's the one that our democracy depends on. Those things, I'm not willing to give up on. And I have now seen a few waves of the promises of personalized learning not coming true. And I worry about that.

I think that's the wrong goal to set for this technology. We need to be more thoughtful about the education goals first, and then bring the technology in in support of them -- not start with the technology, think about its affordances, and cram our educational model into those affordances. And I worry that's most of what's happening, not in this conversation, but in the broader ecosystem.

SUZANNE DOVE: Yeah. Yeah. Excellent points and so right to really caution against that. And I think this group and the people who hopefully are here participating with us today and the people who are at the luncheon that Ale and I were at, are the people who are trying to shape this. We are trying and we are taking ownership over it and feeling a great deal of responsibility not to hamper innovation or experimentation, but to experiment in ways that are responsible and be really thoughtful. 

And I think having these conversations is crucial because when we really examine our assumptions or our approaches and share them out loud with other humans, we can really guard against some of those inadvertent missteps. James you talked about the classroom experience being more like a hackathon. And that makes me think about moving that into the workforce where people need to work on teams. 

Ale, I know you all at the Digital Education Council work with both academic institutions and the workforce. Can you share some of your thoughts on the classroom of the future -- the one-to-one classroom that Matthew is cautioning against versus the workforce, where you're seeing perhaps more team-based kinds of interactions -- and what sorts of AI literacy are required for that?

ALESSANDRO DI LULLO: This is really a great point. And I really cannot echo what Matthew was saying enough. Because if we take a step back and really think about what machine learning is -- we were discussing this at lunch, so I really want to repeat it, Suzanne, because I think it's an important point for this audience as well -- machine learning systems are essentially structured and created to optimize: to learn from what we like and give us more of that.

So practically, if we think about Netflix, its recommendation engine is a very good example of that type of technology. It learns what we like, but it is really meant to boost our consumption of Netflix series; it is not meant to assess what we are learning. And if we think about our university days, what is really important about university is also learning different perspectives. Learning what you don't know yet.
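[A toy illustration of the optimization Alessandro describes -- not Netflix's actual system. An engagement-maximizing recommender ranks items purely by similarity to what the user already liked, so the "different perspectives" are systematically ranked last. The catalog and tags are invented for the example.]

    from collections import Counter

    # Hypothetical catalog: title -> tags.
    CATALOG = {
        "crime-doc-1": {"crime", "documentary"},
        "crime-doc-2": {"crime", "documentary"},
        "history-doc": {"history", "documentary"},
        "math-lecture": {"math", "education"},
        "philosophy-talk": {"philosophy", "education"},
    }

    def recommend(watched, k=2):
        """Score unseen items by tag overlap with the user's history."""
        taste = Counter(tag for title in watched for tag in CATALOG[title])
        unseen = [t for t in CATALOG if t not in watched]
        # Maximizing predicted engagement = maximizing overlap with past
        # likes, so unfamiliar material never surfaces.
        return sorted(unseen, key=lambda t: -sum(taste[tag] for tag in CATALOG[t]))[:k]

    print(recommend(["crime-doc-1"]))
    # ['crime-doc-2', 'history-doc'] -- more of the same, never the math lecture

A learning system built on the same objective would keep feeding students what they already know and like, which is exactly the failure mode being cautioned against here.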

And so if we just think about personalization, the risk that I see is reinforcing potential biases, or missing another perspective altogether. And so I fundamentally see working in teams and collaborating across boundaries as much more important. Of course, it may sound obvious, but I remember that when I started my career, some years ago, it was not as easy to collaborate across the world online very quickly and very seamlessly.

And so we are also naturally living in a whole new digital world, boosted by what we lived through during COVID and by technologies like Zoom that, yes, were there before, but not with the same simplicity. And so we are living in a hyper-connected, interconnected world. And I also saw some messages in the chat: it's nothing new, but finally we are at the stage where these things are very easy and accessible.

And so this is really changing. And I do see an importance for educators to be mindful of this hyper-connection and to really stimulate these types of discussions and interconnections. Because the tools you can learn, but these types of skills are very important to acquire early on. I see it even myself -- we are a small company ourselves, and even with the new people who join us, I see a challenge in training them for critical thinking, creativity, and collaboration with people internationally.

And in a big company that is going through this, it is going to be profound. So I think we're going to see more of it, and it's becoming more and more evident. And I say this as someone who studied very hard finance coursework and thought all these things were not important, and I've realized how important they're becoming. I really see that this is just going to accelerate.

SUZANNE DOVE: Yeah, this is such a great discussion. And as we said at the top of this webinar, we want very much for this to be interactive, as best we can in a webinar setting. So thank you, Alessandro, for pulling in comments from the chat. We've had some amazing questions. I'm trying to keep up here in the Q&A box. So I'm going to throw out a few. We may not have time to get to everything, but here are a few of the audience questions from the Q&A. And really, this is open to any one of you panelists who would like to respond.

But I'll start with this question. Personalized learning still puts the onus on the end user to learn. If agentic AI can fully complete the task, then what is left to learn? 

JAMES GENONE: I can jump in here, and it's a good chance to also add a "Yes, and" to what Matthew said earlier. I strongly agree that this world of headphones and a screen for learners is dystopian and not the kind of world that we want. And likewise, a world in which we're just offloading our work of thinking, decision making, and producing knowledge to machines is also, I think, dystopian. Because how will we be able to evaluate whether what it's doing is what we want and what is right for the world?

So we have to ensure that we're still in a position to be the decision makers and have the expertise to know if it's working in ways that are beneficial to humanity. At the same time, I think we all probably agree, especially those of us who've spent a lot of time in a classroom, that any group of learners is highly differentiated, and you'll have those at a given moment in a course or a class who are not challenged and those who are on the verge of being left behind. 

And it's one of the great difficulties of being a teacher, of how to meet and reach all of your students and remediate all of their needs and difficulties and confusions and challenge them appropriately. So I really want to hold out some hope that this technology can help us with that. It's a very, very difficult human problem, to teach effectively in groups. 

And if the technology can enhance the work of teachers in that way by, for example, as Matthew was speaking about earlier, giving us access to data about the needs of individual learners so that we can use that to then set them the tasks and the focus that they need in order to learn what's next. I think that has the potential to be very powerful and ensure that we're not just doing this kind of offloading that the question is asking about. 

SUZANNE DOVE: Thanks, James. I want to just pause for a second to see if Matthew or Ale wanted to jump in. Otherwise, there were plenty more questions and directions [INAUDIBLE]. 

MATTHEW RASCOFF: We have a new policy at the Stanford Graduate School of Business that bans the banning of AI tools for anything students do at home. So you're required to allow them to use these tools. And one observation is that the best writers in the class are far better than any of the AI tools. Students were citing whether they had used them. And I think there was a kind of reversion to the mean. Ale, you were talking a little bit about this.

Far fewer outlier ideas emerged -- really, no outlier ideas emerged from people who used AI tools. Nothing really, really interesting in the A+ category. There was no D work and there was no C work; it was all in the B- to B+ range. All the big ideas, the most interesting, most provocative ones, were human written, human authored, with no AI usage at all. And I think that confirms your thesis about the edge cases, the outliers.

That's where the real creativity happens. And in my students' writing assignments, I'm not interested in just the mean. I don't need the best practices. You don't need to repeat back to me what I already told you. I want to hear what your thinking is. I want to build a connection with you.

I want to understand that you are listening to our guests in our class and reflecting that back through your own experience. I don't need you to get the right answer. There is no right answer. And if you're asking questions that have one clear right answer and an AI can pass that test, it's probably the wrong question, and you should probably be rethinking your assessment strategy -- if an AI can really master it in such a way that a good answer from a machine is indistinguishable from a good answer from a human.

But I felt confident that I could make that distinction, and that my assessments, the writing prompts we were giving our students, were enough to make it and enough to help my students progress. And I told them this too. I told them I think their best writing is better than the work of the most advanced frontier models in the world, so keep at it.

SUZANNE DOVE: Yep, humans rock. So another theme that I'm seeing come out in the Q&A is really around the enormous stress and strain that has been placed on educators. I mean, it's five years since the COVID pandemic hit. It's been a difficult time for higher education institutions all around the world. And since then, we've been buffeted by a number of different headwinds.

And so now, once again, there's this new technology that shows up in the classroom. And so a couple of people are wondering if the panelists could comment on how we support educators with this rapid pace of AI innovation. Is there reason to argue for a pause?

Is there some sort of resource or set of supports that we need to create? Something else. What are your thoughts on how we really support the people who are at the front lines of our classrooms so that they can do their best work? And they are humans too, so we need to support people and understand that this is difficult. 

ALESSANDRO DI LULLO: Maybe I can start, and then of course I would love to hear the others -- since I'm not working at a university, but I have the pleasure and honor of speaking with many of them. As some of you may know, we've done a big piece of work, a faculty survey, that we published in January this year, and it's also available for free on our website. In short -- and then, of course, I would love to hear Matthew and James on this too -- one thing that I would naturally mention is the importance of understanding what educators want. Which may sound simplistic, but let me get a bit more specific.

One thing that I'm surely against is banning progress. So I'm very much aligned with what Matthew was saying about the decision to ban banning. I also think that there is no need, and it would be hugely counterproductive, to pause AI tools, testing, or this type of innovation. Because it's not that if we look the other way, innovation stops or students stop using these tools.

And so it's important that we realize the world we live in, and we embrace it. But how to embrace it? We've seen from this survey that some of the things most desired by professors and faculty around the world were clear guidelines on how they can use AI, and how they can then tell students to use it, along with a repository and collection of best practices.

What I find particularly interesting is that naturally, we tend to be very prescriptive and precise in terms of best practices, but what I'm seeing is that the most effective strategy is to provide educators with a number of examples, practical use cases, and frameworks so that they can build their own case studies and frameworks. So it's not really about saying, "Do this"; it's about sharing and collecting a diverse set of experiences so that they can build their own methodologies and applications of tools based on their own reality.

Because as we said at the beginning, this is an important part. Some of you may be professors of history, some of you may be professors of engineering, and it's going to differ. So we cannot expect everyone to approach it the same way. But I would love to hear from James and Matthew about their two universities.

JAMES GENONE: Can I add on to Ale's comment, Suzanne?

SUZANNE DOVE: Go ahead. Yeah. 

JAMES GENONE: Yeah. I love everything that you said there. And then I would say that as we've implemented a number of those practices at Northeastern, maybe one of the most effective approaches we've seen is building and supporting communities of practice. So we appointed a set of AI faculty fellows last fall for the first time, one from each of our colleges. And that group has worked together to think about the application of the best practices, the tools that are available, and so on, within their respective colleges and disciplines. 

But they've also gone into their colleges and created communities of practice within each of those colleges, among the different disciplines and fields that are there. And I think we probably all have this experience that when you see your peers and colleagues doing really exciting work, it's inspiring and it helps you understand more concretely how you could take that on yourself. Kind of lowers the anxiety level, and so on. 

And I think it's more effective than some of the more centralized guidance and advice that we get, as well-meaning as all of it is. And it really speaks to the human connection piece of this tremendous change. I think people are really reassured and able to understand the more optimistic side of things, and also to have serious conversations about the challenges and risks. I've heard people express those concerns very articulately in those meetings and be able to work through them in productive ways.

So if there's one thing I would encourage all of our colleagues throughout higher education and elsewhere to do, it is to make the space for those kinds of conversations and that kind of collaboration.

MATTHEW RASCOFF: Let me give credit to my colleagues at the AI Tinkery at Stanford, which is modeled on a makerspace, but it's for AI tools. It's got a physical location in the Graduate School of Education. They run workshops, though, around the campus. And it's exactly what James just described. It's training, support. They've licensed all the tools. You can try them out in this closed environment and just kind of-- it's OK to be a novice there. 

I went to a workshop last month. I learned Cursor AI for the first time, this programming tool. I'm going to be a vibe programmer, maybe in my next career, James, thanks to Cursor. And I learned it from Jessica, who's the lab manager of the AI Tinkery. And I felt really good about that model. She's captured this kind of bottom-up, creative, fresh air that's exciting.

And if there's a top-down policy that's imposed before we really understand this technology, is there any chance that policy is going to get it right? Is it going to get all these nuances -- what's in, what's out, under what circumstances? To me, I think we should spend more time in this bottom-up mode, and also spend more time thinking about my educational goals in my discipline, like Ale was saying.

And I worry that we've got this very enterprise-license model: we're going to buy it for everybody. But buy what? To what end? And that's how we've ended up with AI tools that seemingly replicate a lot of the disparities and a lot of the lackluster ed tech that we've seen before. I don't want to name names, but things that don't really make progress in terms of education and pedagogy.

SUZANNE DOVE: You put your finger on another theme that's been showing up amongst participants in the Q&A, Matthew. So I'm going to frame this question. We'd love to hear the panel's thoughts on metrics for measuring how effective or impactful AI is in a particular educational use case, or in learning design. Thinking about people using it to design courses. So any examples of measures that you've used? Anywhere you've seen reporting on what works or what doesn't? 

JAMES GENONE: Well, I'm going to go very high-level here rather than particularly pedestrian. I get excited about the transformational potential of AI, not the ability to nudge metrics on student outcomes or even important things like dropout and withdrawal rates and so on. I'd like to see those impacts. But to me, transformational learning comes when we come to see the world in a different way, think about it in a different way, act in a different way. That's what transformation is.

And so if we're able to use this technology and adopt it in a way that really does give learners-- and also faculty and the administrators and staff that support them-- agency, I think that will be powerful. But I think it's going to be incredibly hard. 

We were talking a little bit earlier -- you asked Ale a question about the relationship between work-based learning and the learning inside the institution, in a degree program, that we're all used to. And I have a hypothesis that we're going to see a lot more convergence between those styles of learning: students increasingly doing more of the kind of work that you do in a job, and people in jobs needing to carve out time to acquire new skills the way you would in a higher education institution.

And I think in some ways, if we see that, that will be a metric of success. Because it will mean that this technology is helping us embed learning in a more ecologically valid way than some of the ways we've approached it traditionally.

SUZANNE DOVE: Yeah. Yeah. Great point. OK, I'm going to look at one more question from the Q&A that I think several people have touched on in your comments, but I really want to maybe wrap us up with this question, which is, how-- can you share some examples of how you have seen or how you're hoping to see AI foster inclusion? 

ALESSANDRO DI LULLO: Perhaps I can quickly start, since I saw that it follows up on the framework I presented at the beginning. I think there are two very interesting angles when it comes to inclusion. One is really supporting learners with different needs, including visual and other impairments. Practically, the technology is not fundamentally different from what was available before, but what is starting to be different is the ease of use of, and access to, these types of tools.

I was playing with a number of tools that essentially allow learners to use captioning or turn text into audio, or vice versa, in different ways. And it was interesting because text-to-speech and speech-to-text have been around for some time, but right now there are new tools that allow essentially every learner to customize the platform very quickly and even change the language they want to hear, which is quite fascinating -- just to take the simple example of captioning and translation.

The other angle, which is actually quite fascinating, is the business opportunity that universities may have because of this inclusion. Practically, for those of you who are thinking about launching executive programs in other parts of the world where English may not be the first language, these types of tools can make it easier to reach executives in China, Indonesia, or other parts of the world that are not primarily English-speaking. And so it may be easier to capture their interest and potentially sell to them.

These are just some easy examples we are starting to see. And again, the capability is not fundamentally different, but the ease of use of the technology starts to be fundamentally different, and that is what can actually be applied.

MATTHEW RASCOFF: I'm excited about the abundance that's coming in learning. We live in a world of scarcity that is largely artificial. It's a product of the physical learning environment that we've had. Online learning has been limited, to me, by the technical capabilities, by the affordances it was built around. Coursera ended up with this broadcast model because that was what was possible with the technology 10 years ago.

But if you were going to build Coursera today, of course, it would have AI facilitation, of course, it would feel cohort based, of course, it would feel much more cohesive, closer to what we were able to do in the relatively small environments of physical campuses. That's coming for online learning. And I think there's going to be a new democratization of learning that will breathe new life into online learning and do it with a quality and a rigor that feels closer to what's familiar to the faculty when they're running a seminar, when they're facilitating. 

And that's going to, I think, unleash another wave of open education, a kind of public-good creation. I was at a meeting last week, funded by the Hewlett Foundation, at New America in Washington, DC. These are the people behind the open education movement. And there's a big question: what's going to happen to open educational resources when AIs can do all this stuff?

And I think the consensus coming out of that meeting was that this is a very exciting revitalization of open education. It's not a threat to it, as some have said; I think this is going to reimagine it. All these new public goods that spill over from the technologies we may create on campus could be widely beneficial for a world that desperately needs opportunities to learn -- a world we need to include in our learning audiences as well. So that's my hope.

JAMES GENONE: And I'll just add that I-- 

SUZANNE DOVE: Great point. I'll give you one minute, James. 

JAMES GENONE: Yeah, I'll go really fast. I love Matthew's framing of abundance. And I think so much of traditional learning has been driven by textbooks, by standardized curricula, and so on. And the ability for the individual learner to engage in things that are of interest to them while also learning the skills that they need, and to do so in these social ways that we've talked about, is a real opportunity.

These things are actually really hard to do manually, with a single teacher trying to connect with all their students and all their students' interests and so on. If they can have an opportunity to use this technology to accomplish that, it really will make learning abundant, I think.

SUZANNE DOVE: Great point. And I feel like this is a really good theme for us to be wrapping up on. Thank you each of you so much for your time this afternoon. And thanks to our amazing audience. And thank you, again, to the great support team. Priscilla, I know you were going to share a slide. 

We do have another webinar coming up next month, so please check out the HAIL webinar and contact any member of the leadership steering committee that's listed here on the screen. We hope to see you in April. And, again, to our panelists, Alessandro, James, and Matthew, thank you very, very much for this conversation. It's been such a pleasure.