“Artificial Intelligence: A Guide for Thinking Humans” with Professor Melanie Mitchell

Recent developments in the field of Artificial Intelligence are fascinating as well as terrifying; there are extravagant promises as well as frustrating setbacks; there is great progress in narrowly focused AI applications, and there is a lack of progress in the field of Artificial General Intelligence. In this episode of Bridging the Gaps I speak with Professor Melanie Mitchell and we discuss the history, recent successes, huge expectations, and emerging fears and frustrations in the field of Artificial Intelligence. We discuss the fascinating and intriguing research that Professor Mitchell presents in her book “Artificial Intelligence: A Guide for Thinking Humans”.

Melanie Mitchell is a professor of complexity at the Santa Fe Institute in New Mexico. Her research focuses on genetic algorithms, conceptual abstraction, analogy-making and visual recognition in Artificial Intelligence Systems. Professor Mitchell originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

We begin our discussion by reviewing the history of this fascinating field and the initial claims and hype that emerged at its start. We then discuss the transition from rule-based AI systems to machine learning approaches. We look into the successes of AI in narrowly defined, task-based systems, and we discuss the failures that emerge when the input data is even slightly changed. We then discuss future developments in this field and the challenges involved in making meaningful progress towards Artificial General Intelligence and creating common sense in AI systems. We discuss how the challenge of creating common sense in AI resembles the challenge of finding and understanding dark matter in physics. We look into the profound disconnect between the continuing hype and the actual achievements in AI: what the field has accomplished and how much further it has to go. We also discuss the approach of conceptual abstraction and incorporating analogy-making in AI systems. This has been a fascinating discussion about this ambitious and thought-provoking field.

Complement this discussion with “Artificial Intelligence: Fascinating Opportunities and Emerging Challenges” with Professor Bart Selman and then listen to “2062: The World That AI Made” with Professor Toby Walsh.

“The Self-Assembling Brain” and the Quest for Artificial General Intelligence with Professor Peter Robin Hiesinger

How does a network of individual neural cells become a brain? How does a neural network learn, hold information and exhibit intelligence? While neurobiologists study how nature achieves this feat, computer scientists interested in artificial intelligence attempt to achieve it through technology. Are there ideas that researchers in the field of artificial intelligence can borrow from their counterparts in the field of neuroscience? Can a better understanding of the development and working of the biological brain lead to the development of improved AI? In his book “The Self-Assembling Brain: How Neural Networks Grow Smarter” Professor Peter Robin Hiesinger explores the stories of both fields, examining historical and modern approaches. In this episode of Bridging the Gaps, I speak with Professor Peter Robin Hiesinger about the relationship between what we know about the development and working of biological brains and the approaches used to design artificial intelligence systems.

We start our conversation by reviewing the fascinating research that led to the development of neural theory. Professor Hiesinger suggests in the book that to understand what makes a neural network intelligent we must answer the question: is it connectivity or is it learning that makes a neural network intelligent? We look into this argument. We then discuss “the information problem”: how information gets into the brain in a way that makes it intelligent. We also look at the nature vs nurture debate and discuss examples of butterflies that make multigenerational trips, and scout bees that inform the bees in the hive of the location and distance of food. We also discuss how the genome develops the biological brain over time. We then shift the focus of the discussion to artificial intelligence and explore ideas that researchers in the field of artificial intelligence can borrow from research in the field of neuroscience. We discuss processes and approaches in computing science such as cellular automata, algorithmic information theory and the Game of Life, and explore their similarities with how the genome creates the brain over time. This has been an immensely informative discussion.
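To give a flavour of the cellular automata mentioned above, here is a minimal Python sketch of Conway’s Game of Life. It is purely illustrative (not a model from the episode or the book): simple local update rules, applied repeatedly, produce coherent global structure such as a moving “glider”, which is the spirit of the genome-grows-the-brain analogy discussed in the conversation.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a wrap-around (toroidal) grid."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell becomes live with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider": a tiny pattern whose purely local rules produce global motion.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1

for _ in range(4):
    grid = life_step(grid)
print(grid)  # after 4 steps the glider has shifted one cell diagonally
```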

Complement this discussion by listening to “The Spike: Journey of Electric Signals in Brain from Perception to Action” with Professor Mark Humphries and then listen to “On Task: How Our Brain Gets Things Done” with Professor David Badre.

Quantum Computers: Building and Harnessing the Power of Quantum Machines with Professor Andrea Morello

Quantum computers store data and perform computations by exploiting properties of quantum physics, in particular quantum-state features such as superposition and entanglement. Traditional computers store data in binary “bits,” which can be either 0 or 1. A quantum bit, or qubit, is the fundamental memory unit of a quantum computer. Quantum states, such as the spin of an electron or the polarization of a photon, are used to create qubits. For certain classes of problems, quantum computers could considerably outperform even the most powerful supercomputers. In this episode of Bridging the Gaps I speak with Professor Andrea Morello and we discuss the fascinating science and engineering of conceptualizing and building quantum computers. Professor Morello helps us unpack and tackle questions such as what a quantum computer is and how we build one.
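To make the contrast between bits and qubits concrete, here is a minimal Python/NumPy sketch (illustrative only, not code from the episode): a qubit is a normalized pair of complex amplitudes, a superposition gives probabilistic measurement outcomes via the Born rule, and the state space grows exponentially with the number of qubits.

```python
import numpy as np

# A classical bit is either 0 or 1.  A qubit is a normalized vector of two
# complex amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition: the Hadamard gate applied to |0>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0           # (|0> + |1>) / sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(psi) ** 2
print(probs)             # [0.5, 0.5] -- a 50/50 chance of reading 0 or 1

# Two qubits live in a 4-dimensional space; n qubits need 2**n amplitudes.
two_qubits = np.kron(psi, psi)
print(two_qubits.shape)  # (4,)
```

That 2**n growth is the “exponentially large information space” that quantum algorithms, discussed later in the episode, try to exploit.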

Andrea Morello is Professor of Quantum Engineering in the School of Electrical Engineering and Telecommunications at the University of New South Wales, Sydney, Australia.

I begin our conversation by asking Professor Morello what a quantum computer is, and how it differs from a conventional, classical computer. We then discuss the implications of the no-cloning theorem for quantum computing. The no-cloning theorem states that it is impossible to create an independent and identical copy of an arbitrary unknown quantum state. Professor Morello’s team uses single spins in silicon to construct quantum computers, and we go over their approach in depth. The true value of quantum computers will only be realised if we develop creative algorithms that make effective use of their exponentially large information space and processing capability; we discuss this in detail. We also touch upon the concept of quantum chaos and discuss research in this area. This has been a fascinating discussion.
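As a rough numerical illustration of the no-cloning theorem (the standard textbook linearity argument, not Professor Morello’s own derivation): a CNOT gate “copies” the basis states |0⟩ and |1⟩ perfectly, but applied to a superposition it produces an entangled state rather than two independent copies, so no single operation can clone every unknown state.

```python
import numpy as np

# Single-qubit basis states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A superposition state |+> = (|0> + |1>) / sqrt(2).
plus = (ket0 + ket1) / np.sqrt(2)

# CNOT copies basis states: |0>|0> -> |0>|0>,  |1>|0> -> |1>|1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Attempt to clone |+> by applying CNOT to |+>|0>.
attempted_clone = CNOT @ np.kron(plus, ket0)

# A true clone would be the product state |+>|+>.
true_clone = np.kron(plus, plus)

print(np.round(attempted_clone, 3))              # (|00> + |11>)/sqrt(2): an entangled Bell state
print(np.round(true_clone, 3))                   # (|00> + |01> + |10> + |11>)/2
print(np.allclose(attempted_clone, true_clone))  # False: linearity forbids cloning
```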

Complement this with “2062: The World That AI Made” with Professor Toby Walsh and then listen to “Artificial Intelligence: Fascinating Opportunities and Emerging Challenges” with Professor Bart Selman.