“AI Fairness: Designing Equal Opportunity Algorithms” with Professor Derek Leben

As artificial intelligence takes on a growing role in decisions about education, jobs, housing, loans, healthcare, and criminal justice, concerns about fairness have become urgent. Because AI systems are trained on data that reflect historical inequalities, they often reproduce or even amplify those disparities. In his book “AI Fairness: Designing Equal Opportunity Algorithms”, Professor Derek Leben draws on classic philosophical theories of justice—especially John Rawls’s work—to propose a framework for evaluating the fairness of AI systems. This framework offers a way to think systematically about algorithmic justice: how automated decisions can align with ethical principles of equality and fairness. The book examines the trade-offs among competing fairness metrics and shows that it is often impossible to satisfy them all at once. As a result, organizations must decide which definitions of fairness to prioritize, and regulators must determine how existing laws should apply to AI. In this episode of Bridging the Gaps, I speak with Professor Derek Leben.

Derek Leben is Professor of Business Ethics at the Tepper School of Business at Carnegie Mellon University. As founder of the consulting group Ethical Algorithms, he has worked with governments and companies to develop policies on fairness and benefit for AI and autonomous systems.

I begin our discussion by asking Derek what “AI” means in the context of his work and how fairness fits into that picture. From there, we explore why fairness matters as AI systems increasingly influence critical decisions about employment, education, housing, loans, healthcare, and criminal justice.

We discuss how historical inequalities in training data lead to biased outcomes, giving listeners a deeper understanding of the problem. While some view AI fairness as a purely technical issue that engineers can fix, the book argues that it is also a moral and political challenge—one that requires insights from philosophy and ethics. We then examine the difficulty of balancing multiple fairness metrics, which often cannot all be satisfied simultaneously, and discuss how organizations might prioritize among them. Derek explains his theory of algorithmic justice, inspired by John Rawls’s philosophy, and we unpack its key ideas.

Later, we touch on questions of urgency versus long-term reform, exploring the idea of longtermism, and discuss the tension between fairness and accuracy. Finally, we consider how businesses can balance commercial goals with their broader social responsibilities.

Overall, it is an informative and thought-provoking conversation about how we can make AI systems more just.

Complement this discussion with “The Line: AI and the Future of Personhood” with Professor James Boyle and then listen to Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

“The Line: AI and the Future of Personhood” with Professor James Boyle

The Line James Boyle on Bridging the Gaps

Humans are no longer the only entities capable of generating language and ideas with fluency. But this advancement raises a profound question: are artificial intelligence systems truly conscious, or are they simply performing an elaborate imitation of human thought? And if, in the future, claims of AI consciousness become more credible, how will that shift our understanding of these technologies? Will we continue to see them as tools, or begin to recognize them as something closer to persons? In his book “The Line: AI and the Future of Personhood”, Professor James Boyle confronts these complex issues. He explores how questions that once belonged to the realm of science fiction have now become urgent and real. He examines how this technological shift might redefine our understanding of personhood and challenge the boundary—”the line”—that separates humans from everything else, as well as persons with legal rights from inanimate objects. In this episode of Bridging the Gaps, I speak with Professor James Boyle to explore these timely, thought-provoking and transformative questions.

James Boyle is a professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former Chair of Creative Commons. He is the winner of the Electronic Frontier Foundation’s Pioneer Award for his work on digital civil liberties.

We begin by talking about the idea of personhood—what it means, how we define it, and what separates beings that have personhood from those that don’t. I ask whether it’s simply consciousness that makes the difference. We then explore why empathy is an important part of thinking about personhood.

Next, we discuss the challenge of using consciousness as a dividing line. If consciousness is what separates those with personhood from those without, how can we use such a complex and unclear concept to draw that line?

In his book, The Line: AI and the Future of Personhood, James Boyle uses many examples from science fiction to explore these tough questions. I found this approach fascinating, and I asked him why he chose science fiction as a way to talk about such deep and complicated ideas.

We also talk about similar issues that came up when thinking about the rights of corporations and other non-living things. Then we move on to what we can learn from how rights have been developed for non-human animals. Finally, we touch on recent court cases involving decisions made by artificial intelligence.

Overall, this was a very engaging and insightful conversation.

Complement this discussion with “Evolutionary Intelligence: How Technology Will Make Us Smarter” with Professor W. Russell Neuman and then listen to Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

“The Secret Life of Data: Navigating Hype and Uncertainty” with Aram Sinnreich and Jesse Gilbert

The Secret Life of Data reviewed at Bridging the Gaps

As fresh algorithms and new analytical methods emerge, existing datasets can yield insights beyond their original purpose. Regardless of their intended use, data often possess hidden potential and always have a “secret life”. “How this data will be used, by other people in other times and places, has profound implications for every aspect of our lives—from our intimate relationships to our professional lives to our political systems”, argue Professor Aram Sinnreich and Jesse Gilbert in their new book “The Secret Life of Data: Navigating Hype and Uncertainty in the Age of Algorithmic Surveillance”. This phenomenon raises various moral, ethical, and legal questions: Should we use datasets for unexpected and unforeseen insights? Should we create new frameworks to control and regulate the use of datasets for purposes beyond their original intent? Ignoring opportunities to analyse data in new ways might limit their potential, but pursuing them introduces ethical dilemmas. In this episode of Bridging the Gaps, I speak with Professor Aram Sinnreich and Jesse Gilbert.

Aram Sinnreich is an author, professor, and musician. He is Chair of Communication Studies at American University. Jesse Gilbert is an interdisciplinary artist exploring the intersection of visual art, sound, and software design at his firm Dark Matter Media. He was the founding Chair of the Media Technology department at Woodbury University.

I begin the conversation by setting the stage for our listeners and delving into the idea of the “secret life of data” as discussed in the book. We then explore the nature and scale of the challenge posed by this phenomenon. One key point highlighted in the book is that the challenge is primarily conceptual—we’re uncertain about what exactly we’re aiming to control, handle, and regulate. We thoroughly explore this aspect.

To address this “conceptual challenge,” the book provides a comprehensive overview of various frameworks and approaches that have been developed and are currently utilised to tackle similar challenges. The book then extends these concepts to envision and propose a framework that could aid us in addressing this particular challenge. We discuss in detail a number of existing frameworks outlined in the book, and then explore the possibilities to address these emerging challenges. Overall, this conversation has been immensely interesting and enlightening.

Complement this discussion with “Dark Data: Why What You Don’t Know Matters” with Professor David Hand and then listen to Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer