Building a Thriving Future: Navigating the Metaverse and Multiverse with Dr Paola Cecchi-Dimeglio

The metaverse is no longer a distant concept. It is unfolding now, driven by advances in artificial intelligence, big data, and immersive digital technologies. As it reshapes how we work, interact, and create value, it presents significant opportunities alongside growing competitive pressures. In the book “Building a Thriving Future: Navigating the Metaverse and Multiverse,” Harvard researcher and AI strategist Dr. Paola Cecchi-Dimeglio provides the first strategy-focused guide designed for business leaders, policymakers, and decision-makers navigating this transformation. The book emphasizes that the metaverse is not simply about virtual environments. It represents a broader transformation of business, governance, and global markets. Organizations that combine AI, behavioral insights, and digital strategy will be the ones that define the future. In this episode of Bridging the Gaps, I speak with Dr. Paola Cecchi-Dimeglio.

Dr. Paola Cecchi-Dimeglio is an expert on AI and big data, behavioral science, and the future of work. She holds dual appointments at Harvard Law School and the Kennedy School of Government, where she chairs the Executive Leadership Research Initiative for Women and Minority Attorneys (ELRIWMA). She is also a vice chair for the Global Initiative on Virtual Worlds and AI and leads Sustainability for AI, Big Data, and Metaverse regulation at the UN’s International Telecommunication Union. She is the CEO of People Culture Data Consulting Group.

We begin by acknowledging that virtual worlds are not a new concept. What is new, however, is the rapid advancement of hardware, big data, and AI, which is making the metaverse far more powerful and practical. We then examine why the metaverse should be understood as a critical development for business and organizational strategy. It is not just a technological trend, but a structural shift in how value is created and managed. We also explore how technologies such as blockchain can strengthen digital ownership and enable more secure, efficient trading within virtual environments. An important point we address is that, when developing business solutions for virtual worlds, organizations must give careful attention to diversity, inclusion, and equal access. Overall, the conversation offers a clear and insightful look at the strategic implications of the metaverse.

Complement this discussion with “From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech” with Professor Payal Arora and then listen to “The Line: AI and the Future of Personhood” with Professor James Boyle.

By |February 17th, 2026|Computer Science, Future, Technology|

“AI Fairness: Designing Equal Opportunity Algorithms” with Professor Derek Leben

As artificial intelligence takes on a growing role in decisions about education, jobs, housing, loans, healthcare, and criminal justice, concerns about fairness have become urgent. Because AI systems are trained on data that reflect historical inequalities, they often reproduce or even amplify those disparities. In his book “AI Fairness: Designing Equal Opportunity Algorithms” Professor Derek Leben draws on classic philosophical theories of justice—especially John Rawls’s work—to propose a framework for evaluating the fairness of AI systems. This framework offers a way to think systematically about algorithmic justice: how automated decisions can align with ethical principles of equality and fairness. The book examines the trade-offs among competing fairness metrics and shows that it is often impossible to satisfy them all at once. As a result, organizations must decide which definitions of fairness to prioritize, and regulators must determine how existing laws should apply to AI. In this episode of Bridging the Gaps, I speak with Professor Derek Leben.

Derek Leben is Professor of Business Ethics at the Tepper School of Business at Carnegie Mellon University. As founder of the consulting group Ethical Algorithms, he has worked with governments and companies to develop policies on fairness and benefit for AI and autonomous systems.

I begin our discussion by asking Derek what “AI” means in the context of his work and how fairness fits into that picture. From there, we explore why fairness matters as AI systems increasingly influence critical decisions about employment, education, housing, loans, healthcare, and criminal justice.

We discuss how historical inequalities in training data lead to biased outcomes, giving listeners a deeper understanding of the problem. While some view AI fairness as a purely technical issue that engineers can fix, the book argues that it is also a moral and political challenge—one that requires insights from philosophy and ethics. We then examine the difficulty of balancing multiple fairness metrics, which often cannot all be satisfied simultaneously, and discuss how organizations might prioritize among them. Derek explains his theory of algorithmic justice, inspired by John Rawls’s philosophy, and we unpack its key ideas.

Later, we touch on questions of urgency versus long-term reform, exploring the idea of longtermism, and discuss the tension between fairness and accuracy. Finally, we consider how businesses can balance commercial goals with their broader social responsibilities.

Overall, it is an informative and thought-provoking conversation about how we can make AI systems more just.

Complement this discussion with “The Line: AI and the Future of Personhood” with Professor James Boyle, and then listen to “Reclaiming Human Intelligence” and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer.

“The Line: AI and the Future of Personhood” with Professor James Boyle

Humans are no longer the only entities capable of generating language and ideas with fluency. But this advancement raises a profound question: are artificial intelligence systems truly conscious, or are they simply performing an elaborate imitation of human thought? And if, in the future, claims of AI consciousness become more credible, how will that shift our understanding of these technologies? Will we continue to see them as tools, or begin to recognize them as something closer to persons? In his book “The Line: AI and the Future of Personhood”, Professor James Boyle confronts these complex issues. He explores how questions that once belonged to the realm of science fiction have now become urgent and real. He examines how this technological shift might redefine our understanding of personhood and challenge the boundary—“the line”—that separates humans from everything else, as well as persons with legal rights from inanimate objects. In this episode of Bridging the Gaps, I speak with Professor James Boyle to explore these timely, thought-provoking, and transformative questions.

James Boyle is a professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former Chair of Creative Commons. He is the winner of the Electronic Frontier Foundation’s Pioneer Award for his work on digital civil liberties.

We begin by talking about the idea of personhood—what it means, how we define it, and what separates beings that have personhood from those that don’t. I ask whether it’s simply consciousness that makes the difference. We then explore why empathy is an important part of thinking about personhood.

Next, we discuss the challenge of using consciousness as a dividing line. If consciousness is what separates those with personhood from those without, how can we use such a complex and unclear concept to draw that line?

In his book, The Line: AI and the Future of Personhood, James Boyle uses many examples from science fiction to explore these tough questions. I found this approach fascinating, and I asked him why he chose science fiction as a way to talk about such deep and complicated ideas.

We also talk about similar issues that came up when thinking about the rights of corporations and other non-living things. Then we move on to what we can learn from how rights have been developed for non-human animals. Finally, we touch on recent court cases involving decisions made by artificial intelligence.

Overall, this was a very engaging and insightful conversation.

Complement this discussion with “Evolutionary Intelligence: How Technology Will Make Us Smarter” with Professor W. Russell Neuman, and then listen to “Reclaiming Human Intelligence” and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer.