Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

The future of technology is a subject of debate among experts. Some predict a bleak future in which robots become dominant and humans are left behind. Tech industry boosters believe that replacing humans with software can lead to a better world, while critics of the industry warn about the negative consequences of surveillance capitalism. Despite these differences, there is a shared belief that machines will eventually surpass humans in most areas. In his recent book “How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms”, professor Gerd Gigerenzer argues against this notion and offers insights on how we can maintain control in a world where algorithms are prevalent. In this episode of Bridging the Gaps, I speak with professor Gerd Gigerenzer to discuss the challenges posed by rapid developments in the tech sector, particularly in the field of artificial intelligence. We discuss approaches that individuals can adopt to become more aware of the potential hazards of using such systems, and explore strategies to maintain control in a world where algorithms play a significant role.

Gerd Gigerenzer is a psychologist and researcher who has made significant contributions to the fields of cognitive psychology and decision-making. He is director emeritus at the Max Planck Institute for Human Development and director of the Harding Center for Risk Literacy at the University of Potsdam. He has previously been a professor of psychology at the University of Chicago and a visiting professor at the University of Virginia. His research focuses on how people make decisions under conditions of uncertainty and on how to improve people’s understanding of risk and probability. He has trained judges, physicians, and managers in decision-making and understanding risk.

Our discussion begins by exploring the limitations of present-day narrow and task-specific artificial intelligence systems in dealing with complex scenarios. Professor Gerd Gigerenzer’s argument that simple heuristics may outperform complex algorithms when tackling such problems is particularly noteworthy. In fact, in some complex scenarios, relying on our intuition or “gut feelings” may result in better decisions than relying on sophisticated technological systems. We then discuss the importance of assessing the risks associated with using seemingly free services that actually collect and exploit users’ data and information to sustain their business models. We delve into the topic of recommender systems that subtly influence users’ choices by nudging them towards certain features, services, or information. Next, we examine various strategies for individuals to become more mindful of the potential risks associated with using such systems, and consider ways to maintain control in a world where algorithms wield considerable influence. This has been an insightful discussion.
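One well-known example of a simple heuristic from Gigerenzer’s research programme is “take-the-best”: to decide which of two options scores higher on some criterion, check cues one at a time in order of validity and let the first cue that discriminates decide, ignoring everything else. The sketch below is illustrative only; the city data and cue ordering are invented for the example and are not from the book.

```python
def take_the_best(option_a, option_b, cues):
    """Pick between two options using the take-the-best heuristic:
    the first cue (checked in order of validity) that distinguishes
    the options decides; all remaining cues are ignored."""
    for cue in cues:  # cues ordered from most to least valid
        a, b = cue(option_a), cue(option_b)
        if a != b:
            return option_a if a > b else option_b
    return None  # no cue discriminates: guess or defer

# Hypothetical example: which of two German cities is larger?
cities = {
    "Berlin": {"is_capital": 1, "has_team": 1},
    "Bochum": {"is_capital": 0, "has_team": 1},
}
cues = [
    lambda c: cities[c]["is_capital"],  # assumed most valid cue
    lambda c: cities[c]["has_team"],    # assumed less valid cue
]
print(take_the_best("Berlin", "Bochum", cues))  # → Berlin
```

The point of the heuristic is that it deliberately ignores information: once one good cue discriminates, the rest of the data is never consulted, which is why it can match or beat more data-hungry models in uncertain environments.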

Complement this discussion with “Machines like Us: TOWARD AI WITH COMMON SENSE” with Professor Ronald Brachman and then listen to “Philosophy of Technology” with Professor Peter-Paul Verbeek.

By |April 1st, 2023|Artificial Intelligence, Future, Technology|

“Working with AI: Real Stories of Human-Machine Collaboration” with Professor Thomas Davenport and Professor Steven Miller


There is a widespread view that artificial intelligence is primarily a job-destroying technology. There is both enthusiasm and doom around automation and the use of artificial intelligence-enabled “smart” solutions at work. In their latest book “Working with AI: Real Stories of Human-Machine Collaboration”, management and technology experts professor Thomas Davenport and professor Steven Miller explain that, despite popular predictions, prescriptions, and condemnation, AI is not primarily a job destroyer. Rather, AI alters the way we work by automating specific tasks, not entire careers, thus freeing people to do more important and more difficult work. In the book, they demonstrate that AI in the workplace is not the stuff of science fiction; it is already happening to many businesses and workers. They provide extensive, real-world case studies of AI-augmented occupations in settings ranging from finance to the manufacturing floor.

In this episode of Bridging the Gaps I speak with professor Thomas Davenport and professor Steven Miller to discuss their fascinating research, and to talk through various case studies and real work use cases that they outline in the book. We discuss the impact of Artificial intelligence technologies on the job market and on the future of work. We also discuss future hybrid working environments where AI and Humans will work side by side.

Professor Thomas Davenport is a Distinguished Professor of Information Technology and Management at Babson College, a visiting professor at the University of Oxford and a Fellow of the MIT Initiative on the Digital Economy. Steven Miller is Professor Emeritus of Information Systems at Singapore Management University.

We begin our discussion by looking at various aspects of environments where AI and human workers work side by side, and then discuss the concept of hybrid intelligence. We then talk about the challenges that organisations face while developing and implementing artificial intelligence enabled technologies and solutions in enterprise environments. An important question that I raise during our discussion is: are organisations ready for large-scale deployment of AI solutions? The book is full of real-world case studies and covers a wide variety of use cases, and we delve into a number of them. This has been a very informative discussion.

Complement this discussion with “The Technology Trap and the Future of Work” with Dr Carl Frey and then listen to “Machines like Us: TOWARD AI WITH COMMON SENSE” with Professor Ronald Brachman.

By |October 31st, 2022|Artificial Intelligence, Computer Science, Future, Podcasts, Technology|

“Machines like Us: TOWARD AI WITH COMMON SENSE” with Professor Ronald Brachman


There is a consensus among researchers in the field of artificial intelligence and machine learning that today’s artificial intelligence systems are narrowly focused, are designed to tackle specialised tasks, and cannot operate in general settings. An important feature of the human brain that enables us to operate in general settings and in unfamiliar situations is our common sense. In their new book “Machines like Us: TOWARD AI WITH COMMON SENSE”, Hector Levesque and Ronald Brachman explain “why current AI systems hopelessly lack common sense, why they desperately need it, and how they can get it”. In this episode of Bridging the Gaps, I speak with Professor Ronald Brachman, one of the authors of the book. We discuss various topics covered in the book and explore the question of how we can create artificial intelligence with broad, robust common sense rather than narrow, specialised expertise.

Professor Ron Brachman is the director of the Jacobs Technion-Cornell Institute and is a professor of computer science at Cornell University. Previously, he was the Chief Scientist of Yahoo! and head of Yahoo! Labs. Prior to that, he was the Associate Head of Yahoo! Labs and Head of Worldwide Labs and Research Operations.

We start off with a detailed discussion about the progress that we have made in recent decades in developing narrowly focused and task-oriented artificial intelligence systems. Some of these systems outperform humans; however, we acknowledge and discuss the need for developing artificial intelligence systems that can operate in general settings. We discuss the concept of artificial general intelligence and explore how understanding “human common sense”, and equipping AI with common sense, is an extremely important milestone in our journey toward developing artificial general intelligence. We discuss the challenge of developing a clear and thorough understanding of the nature and workings of human common sense. We explore how “common sense” might be modelled and incorporated in future artificial intelligence systems. We then discuss the future of artificial general intelligence.

Complement this discussion with “Artificial Intelligence: A Guide for Thinking Humans” with Professor Melanie Mitchell and with “Artificial Intelligence: Fascinating Opportunities and Emerging Challenges” with Professor Bart Selman and then listen to “2062: The World That AI Made” with Professor Toby Walsh.