
An unusual Stanford course unlocks ethics in my tech life

What a group of tech professionals learned while exploring how to design technology that serves the public good.
Ursula Le Guin in August 1995 (photo credit: Marian Wood Kolisch)

“With a clamor of bells that set the swallows soaring…”
          — Ursula Le Guin’s “The Ones Who Walk Away From Omelas”

I should have known I was in for a ride.

It was day one of Stanford’s “Ethics, Technology + Public Policy for Practitioners,” and by the end of the first online session my classmates and I were pondering a seminal question: “What type of person are you — really?” 

Like a Buddhist koan meant to prepare you for the challenging journey ahead (hopefully toward enlightenment), that question arose from our first reading, the short story “The Ones Who Walk Away From Omelas” by Ursula Le Guin. It ingeniously invites readers into a utopian city called Omelas. But as soon as you fall in love with that utopia, Le Guin reveals that it comes at a cost: a filthy, starving child, held in solitary confinement for life in a closet-sized room in a basement underneath the town. As long as the child remains captive and suffering, unloved and alone, your utopia flourishes.

Knowing now what utopia costs, the rest of this class of tech professionals and I were asked to discuss the two choices that Le Guin lays out in her story: to remain in Omelas or to walk away. Some, including me, declared that the ethical action was to go. Others said that their conscience allowed them to stay. Still others proposed that they would conspire to liberate the child at the cost of utopia.

This thought experiment framed the seven-week Stanford course, which I took last fall with 250 other educators, ethicists, and technologists. The course’s goal was to enable us, whatever our professional positions, to think through our roles as enablers and shapers of technological change in society. For me, it was a transformative journey, one that reshaped my understanding of the intricate relationship between technology, ethics, and society.


Bringing ethics into my life

My classmates and I wrestled with the implications of algorithmic decision-making and the dangers of human bias within those systems. We debated the ethical dilemmas of artificial intelligence (AI), from how it’s trained to the impact it has once deployed. We then delved into the role generative AI could play in the future of global economies. And throughout, we considered perhaps the scariest issue of all: the role digital illiteracy plays in public policy and tech’s effect on inclusion and equity. That’s the one I personally wrestle with every day as a Black educator and technologist.


It’s worth noting here that I am the associate creative director of Stanford Digital Education. Before that, I was a senior creative producer for Stanford Online, and before that, a video producer at edX, the brainchild of Harvard and MIT. My personal and professional throughline lies at the intersection of technology, education, and social justice. I came into this course as a tech optimist. While not as blindly optimistic as Marc Andreessen, a leading venture capitalist, or Sam Altman, the chief executive officer of OpenAI and a guest speaker in the course, I saw (and still see to some extent) how technology has the potential to build more equitable futures, better, stronger, and faster than a world without it. But the course made the other learners and me look harder at the hidden costs.

Learning to live with discomfort

There was never a class over the seven weeks that left my worldview unchallenged. Each session began with Rob Reich, Stanford’s McGregor-Girand Professor of Social Ethics of Science and Technology; Mehran Sahami, the James and Ellenor Chesebrough Professor; or Jeremy Weinstein, the Kleinheinz Professor of International Studies, delving into the theoretical concepts, historical contexts, and overarching principles underlying the topic of the week. And in keeping with the tone, ethics as a concept felt so … doable.

Then followed a guest speaker from the highest levels of various sectors, including technology, politics, academia, and consumer advocacy, who bridged the gap between the abstract and the more challenging, tangible reality. And again, in keeping with the tone, ethics as a practice felt so … provocative.

We next broke into cohorts (the same group of 10 of us every week) to grapple personally with the role technology would, could, and should play in society and what role we would, could, and should play within it. And it was here that I was able to prototype what this type of everyday discourse and discovery could look like in my own life. My classmates and I were sometimes angst-ridden as we explored our own positions while challenging ourselves to truly listen to opposing views. Ultimately, I learned to sit with the discomfort that comes with appreciating that these are complex issues, that value tensions and trade-offs abound, and that, despite that appreciation, I was still required to act for a better world.

Seeing both sides of the coin

It started with the opening lecture. Reich asked us to think of the parable of Omelas as a technological coin. One side is utopian advancement, innovation, efficiency, wealth, and access. The other side is severe inequity, a glass ceiling, an ever-growing, data-devouring chimera that eliminates privacy and personal freedoms, the understanding that what is possible is not necessarily probable, and, metaphorically, ash-covered, inhumanely treated children whose suffering makes possible the happiness of those on the other side of the coin. Le Guin virtually demanded that we ask: How many of our current liberties and privileges are enjoyed at the expense of others?

After Week Two’s class, I could no longer doomscroll through Instagram without deeply considering: In what ways is my worldview being shaped by algorithmic bias? And, in times of global unrest, just how dangerous is it for all of us to rely on a mechanism that can so ubiquitously reinforce stereotypes and exclude perspectives? 

After lessons in Weeks Four and Five on platforms, I questioned why society leaves far-reaching decisions about tech policy to digitally illiterate political leaders influenced by biased feeds. At work, I couldn’t sit with collaborators discussing platform decisions without wondering: How was power imbalance showing up in our decision-making? Was it harming the community I sought to support? And an even larger question: Given the systemic nature of acquiring and leveraging power, what is my responsibility to dismantle the very systems that put me in a position to do so?

By Week Seven, I couldn’t stop seeing how technologies as tools could build a more inclusive and reliable future, but only if we as developers, investors, educators, policymakers, and consumers owned our roles in making it so. I had revised my thinking about how I would respond to Le Guin’s story from the start of the course. I felt as if Omelas could still be saved from itself. That didn’t necessarily mean preserving or destroying “utopia.” Rather, it raised the question of whether there was a balance to be struck, where humans and tech could coexist and society’s most vulnerable populations didn’t need to be harmed to maintain it. What started as a solely academic endeavor became a personal mission to better understand exactly what it was that we were facing. What questions needed to be asked? What systems needed to be challenged? And what exactly was I going to do with all of this learning?

The importance of space

Perhaps as significant to me as the course takeaways was the dialogue the course fostered among participants. One of my most valuable lessons took place adjacent to the lectures on platform moderation. While Reich talked on Zoom, a Slack conversation about current events and the politics of semantics escalated. Who determines the labeling and legitimacy of a particular group on social media platforms, and how should that inform what content gets banned and deleted? The participants split into camps, and the discussion became heated. This group of well-intentioned adults found themselves at loggerheads over their differences in ethics. More than anything else in the course, this experience highlighted for me the difficulty (and the importance) of creating space for discussions about applying ethical decision-making to tech. It was to the credit of the course team that I and others felt safe in this discussion and that we ultimately had a successful exchange of views. This lesson about open dialogue is something that my classmates and I can take to our real-world roles as government officials, policymakers, educators, and technologists when we struggle with a contentious issue that has no “right” solution.

Lessons learned

I had five key takeaways from the course: 

  1. We need an informed and proactive government to pursue regulation that steers technology toward the public interest. Enhancing tech literacy among policymakers and regulators is a step toward responsible stewardship of technology.
  2. As tech practitioners, we have a responsibility to integrate ethical considerations into every stage of development. This includes establishing robust testing and feedback mechanisms that prioritize ethical outcomes over mere functionality or profitability.
  3. Consumers are not passive recipients; they actively participate in shaping the tech landscape. Educating and empowering consumers to make informed choices can drive the demand for more ethical tech products.
  4. Solving the complex puzzle of tech ethics requires collaboration across sectors. Bringing together diverse voices can lead to more comprehensive and effective solutions.
  5. The tech world is dynamic, and staying informed is key to navigating its ethical challenges. Continuous learning and adaptation are essential for practitioners and policymakers committed to ethical tech.

The course doesn’t end

It's been four months since the course concluded, and I find myself deeply immersed in a vibrant community of like-minded individuals. I'm currently collaborating with my cohort members on a zine. Together, we aim to challenge the prevalent techno-optimist narrative often embraced by Silicon Valley enthusiasts and instead craft a more inclusive techno-harmonist manifesto. The course not only inspired individual action but also fostered collective empowerment.

As we navigate the intricate relationship between technology, ethics, and society, we are reminded that the choices we make have far-reaching consequences, shaping the world we inhabit and the future we aspire to create. However we choose to respond to the injustices of Omelas, one truth remains clear: ethics is not merely an abstract concept but a lived experience that demands thoughtful reflection, compassionate engagement, and courageous action.


Erik Brown is associate creative director for Stanford Digital Education.


Marian Wood Kolisch's photo of Ursula Le Guin was accessed on Wikimedia Commons and is used under the Creative Commons Attribution-ShareAlike 2.0 Generic license.
