“Machines like Us: TOWARD AI WITH COMMON SENSE” with Professor Ronald Brachman

“Machines Like Us” reviewed on Bridging the Gaps

There is a consensus among researchers in the fields of artificial intelligence and machine learning that today’s artificial intelligence systems are narrowly focused: they are designed to tackle specialised tasks and cannot operate in general settings. An important feature of the human brain that enables us to operate in general settings, and in unfamiliar situations, is our common sense. In their new book “Machines Like Us: Toward AI with Common Sense”, Hector Levesque and Ronald Brachman explain “why current AI systems hopelessly lack common sense, why they desperately need it, and how they can get it”. In this episode of Bridging the Gaps, I speak with Professor Ronald Brachman, one of the authors of the book. We discuss various topics covered in the book and explore how we can create artificial intelligence with broad, robust common sense rather than narrow, specialised expertise.

Professor Ron Brachman is the director of the Jacobs Technion-Cornell Institute and is a professor of computer science at Cornell University. Previously, he was the Chief Scientist of Yahoo! and head of Yahoo! Labs. Prior to that, he was the Associate Head of Yahoo! Labs and Head of Worldwide Labs and Research Operations.

We start off with a detailed discussion of the progress that has been made in recent decades in developing narrowly focused, task-oriented artificial intelligence systems. Some of these systems outperform humans; nevertheless, we acknowledge and discuss the need for artificial intelligence systems that can operate in general settings. We discuss the concept of artificial general intelligence and explore why understanding human common sense, and equipping AI with it, is such an important milestone on the journey toward artificial general intelligence. We discuss the challenge of developing a clear and thorough understanding of the nature and workings of human common sense, and we explore how common sense might be modelled and incorporated into future artificial intelligence systems. We then discuss the future of artificial general intelligence.

Complement this discussion with “Artificial Intelligence: A Guide for Thinking Humans” with Professor Melanie Mitchell and “Artificial Intelligence: Fascinating Opportunities and Emerging Challenges” with Professor Bart Selman, and then listen to “2062: The World That AI Made” with Professor Toby Walsh.

“Artificial Intelligence: A Guide for Thinking Humans” with Professor Melanie Mitchell

Recent developments in the field of artificial intelligence are fascinating as well as terrifying; there are extravagant promises as well as frustrating setbacks; there is great progress in narrowly focused AI applications, and there is a lack of progress in the field of artificial general intelligence. In this episode of Bridging the Gaps, I speak with Professor Melanie Mitchell, and we discuss the history, recent successes, huge expectations, and emerging fears and frustrations in the field of artificial intelligence. We discuss the fascinating and intriguing research that Professor Mitchell presents in her book “Artificial Intelligence: A Guide for Thinking Humans”.

Melanie Mitchell is a professor of complexity at the Santa Fe Institute in New Mexico. Her research focuses on genetic algorithms, conceptual abstraction, analogy-making and visual recognition in Artificial Intelligence Systems. Professor Mitchell originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

We begin our discussion by reviewing the history of this fascinating field and the initial claims and hype that emerged at its start. We then discuss the transition from rule-based AI systems to machine learning approaches. We look into the successes of AI in narrowly defined, task-based systems, and discuss the anomalies that emerge when the input data is even slightly changed. We then discuss future developments in this field and the challenges involved in making meaningful progress toward artificial general intelligence and creating common sense in AI systems. We also discuss how the challenge of creating common sense resembles the challenge of finding and understanding dark matter in physics. We look into the profound disconnect between the continuing hype and the actual achievements in AI: what the field has accomplished and how much further it has to go. We also discuss the approach of conceptual abstraction and of incorporating analogy-making in AI systems. This has been a fascinating discussion about this ambitious and thought-provoking field.

Complement this discussion with “Artificial Intelligence: Fascinating Opportunities and Emerging Challenges” with Professor Bart Selman, and then listen to “2062: The World That AI Made” with Professor Toby Walsh.

“The Self-Assembling Brain” and the Quest for Artificial General Intelligence with Professor Peter Robin Hiesinger

How does a network of individual neural cells become a brain? How does a neural network learn, hold information and exhibit intelligence? While neurobiologists study how nature achieves this feat, computer scientists interested in artificial intelligence attempt to achieve it through technology. Are there ideas that researchers in the field of artificial intelligence borrow from their counterparts in the field of neuroscience? Can a better understanding of the development and workings of the biological brain lead to improved AI? In his book “The Self-Assembling Brain: How Neural Networks Grow Smarter”, Professor Peter Robin Hiesinger traces the stories of both fields, examining historical as well as modern approaches. In this episode of Bridging the Gaps, I speak with Professor Hiesinger about the relationship between what we know about the development and workings of biological brains and the approaches used to design artificial intelligence systems.

We start our conversation by reviewing the fascinating research that led to the development of neural theory. Professor Hiesinger suggests in the book that to understand what makes a neural network intelligent, we must answer the question: is it connectivity, or is it learning, that makes a neural network intelligent? We look into this argument. We then discuss “the information problem”: how information gets into the brain and makes it intelligent. We also look at the nature versus nurture debate and discuss examples such as butterflies that make multigenerational trips, and scout bees that inform the bees in the hive of the location and distance of food. We also discuss how the genome develops the biological brain over time. We then shift the focus of the discussion to artificial intelligence and explore ideas that researchers in the field of artificial intelligence can borrow from research in the field of neuroscience. We discuss processes and approaches in computer science such as cellular automata, algorithmic information theory and the Game of Life, and explore their similarities with how the genome builds the brain over time. This has been an immensely informative discussion.

Complement this discussion by listening to “The Spike: Journey of Electric Signals in Brain from Perception to Action” with Professor Mark Humphries, and then listen to “On Task: How Our Brain Gets Things Done” with Professor David Badre.