The future of technology is a subject of debate among experts. Some predict a bleak future in which machines become dominant and humans are left behind. Tech industry boosters, by contrast, believe that replacing human judgment with software will lead to a better world, while critics of the tech industry warn of the harms of surveillance capitalism. Despite these differences, both sides share the belief that machines will eventually surpass humans in most areas. In his recent book “How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms”, Professor Gerd Gigerenzer argues against this notion and offers insights on how we can keep control in a world where algorithms are prevalent. In this episode of Bridging the Gaps, I speak with Professor Gerd Gigerenzer about the challenges posed by rapid developments in the tech sector, particularly in the field of artificial intelligence. We discuss approaches that individuals can adopt to become more aware of the potential hazards of using such systems, and we explore strategies for staying in control in a world where algorithms play a significant role.

Gerd Gigerenzer is a psychologist and researcher who has made significant contributions to the fields of cognitive psychology and decision-making. He is director emeritus at the Max Planck Institute for Human Development and director of the Harding Center for Risk Literacy at the University of Potsdam. He has been a professor of psychology at the University of Chicago and a visiting professor at the University of Virginia. His research focuses on how people make decisions under conditions of uncertainty and on how to improve people’s understanding of risk and probability. He has trained judges, physicians, and managers in decision-making and understanding risk.

Our discussion begins by exploring the limitations of present-day narrow, task-specific artificial intelligence systems in dealing with complex scenarios. Professor Gigerenzer’s argument that simple heuristics may outperform complex algorithms when tackling such problems is particularly noteworthy. In fact, in some complex scenarios, relying on our intuition or “gut feelings” may lead to better decisions than relying on sophisticated technological systems. We then discuss the importance of assessing the risks of using seemingly free services that in fact collect and exploit users’ data to sustain their business models. We delve into the topic of recommender systems, which subtly influence users’ choices by nudging them towards certain features, services, or information. Next, we examine various strategies individuals can use to become more mindful of the potential risks of such systems, and we consider ways to maintain control in a world where algorithms wield considerable influence. This has been an insightful discussion.

Complement this discussion with “Machines Like Us: Toward AI with Common Sense” with Professor Ronald Brachman, and then listen to “Philosophy of Technology” with Professor Peter-Paul Verbeek.