Riskgaming

The Orthogonal Bet: Exploring the history of intelligence

Design by Chris Gates

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with writer, researcher, and entrepreneur Max Bennett. Max is the cofounder of multiple AI companies and the author of the fascinating book A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. This book offers a deeply researched look at the nature of intelligence and how biological history has led to this phenomenon. It explores aspects of evolution, the similarities and differences between AI and human intelligence, many features of neuroscience, and more.

Produced by Christopher Gates

Music by George Ko & Suno

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:

Hey, it's Danny Crichton here. We take a break from our usual Riskgaming programming to bring you another episode from our ongoing mini-series, The Orthogonal Bet, hosted by Lux's scientist in residence, Samuel Arbesman. The Orthogonal Bet is an exploration of the unconventional ideas and delightful patterns that shape our world. Take it away, Sam.

Samuel Arbesman:

Hello and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode, I speak with the writer, researcher, and entrepreneur Max Bennett. Max is the co-founder of multiple AI companies and is also the author of the fascinating book A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. This book takes a deeply researched look at the nature of intelligence and how biological history has resulted in this phenomenon. It explores aspects of evolution, similarities and differences between AI and human intelligence, so many features of neuroscience, and more. I wanted to speak with Max to learn more about this book, to discuss the five intelligence breakthroughs, and to learn how someone who came from outside the world of neuroscience ended up getting involved in the world of research and writing this book. Max and I had a chance to discuss evolution and the origins of intelligence, the features and power of language, storytelling and imagination, and even what is missing in the current world of AI.

Let's jump in. Max, great to be chatting with you and welcome to The Orthogonal Bet. Your book, A Brief History of Intelligence, came out last year. It's fascinating, and you come from outside of the field of intelligence and neuroscience. I'm also very much an outsider, and even though you're coming from outside, you've written this really thoughtful book on all these different topics, and it's incredibly detailed and researched. You share a little bit in the introduction of the book about how you came to write this book and how you ended up corresponding with experts. Maybe you can tell our audience a little bit about what led you to actually write this book and kind of what got you into all these different topics.

Max Bennett:

Yeah, it's a great question. It wasn't a pre-planned journey by any means. I mean, the core of it was my own curiosity, and the spark of curiosity began with what's classically called Moravec's paradox. So, I spent most of my career commercializing AI systems, so I built companies around taking machine learning tools and applying them to business problems, helping businesses be more efficient, make more money, so on and so forth. And whenever I applied these machine learning systems, this thing called Moravec's paradox (Hans Moravec is a famous computer scientist) always revealed itself. And Moravec's paradox is the following: why is it the case that there are certain things that humans are effortlessly good at, like moving around in the world, doing the dishes, so on and so forth, that are oddly so hard for us to get machines to do, and then there are things that are so hard for humans, like counting numbers really fast, that are incredibly easy for computers to do?

And so, of course, underneath Moravec's paradox lies the question of how the human brain works, because that is the physical instantiation of our intelligence that's clearly distinct from that of computers. And so, with this extremely naive belief that, like, "Oh, the brain is understood," I just bought a neuroscience textbook and I was like, "I'm just going to read this over the summer and then I'll understand how the brain works and that'll satisfy my curiosity." And what ended up happening is with every book I read, I felt like I understood the brain less and less, and I couldn't stop. So, I started reading book after book, and eventually it got to the point where I was reading papers, and then it got to the point where I started having my own ideas. And so I was like, "You know what? I'm just going to start collaborating with neuroscientists," and no one would respond to my emails, because who is this random kid emailing us questions about his ideas?

So, eventually it got to the point where I was like, "All right, I'm going to submit one of these ideas I had to a journal, and obviously they'll reject it because who am I? But at least they'll have to tell me why it was a bad idea. They have to at least look at it, so that's the way I'm going to get scientists to at least look at my ideas." And I just got, in some ways, I think lucky. The paper got accepted and-

Samuel Arbesman:

Oh, wow.

Max Bennett:

... I won what's called the Reviewer Lottery, because Karl Friston, who is a very famous neuroscientist, was one of the reviewers, and so that was just luck. So he liked the paper, he kind of took me under his wing a little bit, became an informal mentor. My confidence was a little bit bolstered. So, then I just started coming up with more ideas, and the point where things shifted from just doing neuroscience to feeling like I had a book to write was when I started with this idea of trying to reverse engineer the human brain as it is today.

So, I became fascinated with the question of, can we peer into the human brain and understand what algorithms it implements? And that was sort of the first paper I published, which is a cool idea, but still entirely speculative. And eventually I got to the point where I felt like reverse engineering the brain as it is today is so difficult, because we have a technological limitation in understanding how the brain works. In other words, we can't actually peer into the brain. We don't have the technological tools to observe the synaptic strength between every neuron, to record every neuron simultaneously, or even to know what the effects of transmission over synapses are.

And so, it seemed to me that one underutilized tool in our toolbox was evolution, which is: okay, the human brain today is incredibly complicated, but it certainly wasn't always so. So, then I just started looking for a book about, well, what do we know about how the brain came to be? And to my surprise, there wasn't really a lot written on it. There's one huge textbook by this guy Jon Kaas called Evolutionary Neuroscience. It's just a neuroscience book that describes the biology of how brains evolved, but no one had really merged that with comparative psychology, in other words, the history of what new abilities emerged, and with what's known in AI. And so, from there I started just sort of taking my own notes, and then it just ballooned into what I eventually believed would be a useful contribution, and that's sort of what led to the book.

Samuel Arbesman:

That's amazing. And at that in-between point, where you were just doing neuroscience and also beginning to delve more deeply into all these different things, did you consider doing some sort of graduate school work, or did you feel there was a little bit more freedom doing it outside of academia, doing this kind of combination of writing for general audiences and synthesizing, in addition to actual scientific research?

Max Bennett:

Yeah, I definitely still flirt with going to graduate school. Yeah, it's definitely an interesting open question. I think my business obligations as of now and for the foreseeable future are too great to go to graduate school. So, I'm trying to satiate my curiosity while still adhering to my business responsibilities. So, I tried to find a balance between these two things. I also felt like the place I could be most useful was bridging the gap between the academic community and the general public, because again, I am an outsider and I don't have classical graduate training.

I think I felt like where I would be more useful, instead of writing a book for academics, it would be writing a book for both parties, but definitely geared towards the general public. And in order to make sure I was sort of adhering to the general academic principles of peer review and so on and so forth, before writing the book, I took the core ideas of the book and compiled them into two papers that I got published. So, I made sure I went through peer review, made sure there was a form factor designed for the academic community, so then I could sort of go off and write what is a little bit more of a popsci book.

Samuel Arbesman:

Sure, but at the same time, as someone who also writes for general audiences, I think there is a great deal of power in synthesizing and translating ideas for general audiences. But I also think that oftentimes people who are coming from a little bit outside of a field can offer a lot, because they are able to have a different sort of perspective than someone who is just living in that field. And also the kinds of things you mentioned, the unsolved problems or the open questions, these kind of gaping holes in a field: oftentimes experts and specialists are working on, "Okay, how do we move everything a little bit forward?" without necessarily realizing or consciously acknowledging the fact that, oh yeah, there's this giant space that has just not been tackled yet. There's all these things that are unknown.

Max Bennett:

Yeah. The book was at its core me assembling ideas that were already there into what I would call a first approximation of the story of how the brain came to be. And my experience of it is you have all these brilliant scientists that are sort of in the trees helping us do very rigorous research, and I just came as an outsider, and squinted, and looked at the forest and said, "I don't know if we all realize this, but over the last 20 years, a really interesting story has been assembled by the evidence." And so I was like, "We should look at that story, because although of course it's a first approximation, I'm making certain assumptions, I'm glossing over certain details, if you look at the forest overall, there is a really elegant story that merges comparative psychology, neuroscience, and AI into a coherent arc of how the brain came to be." And this wouldn't have made sense without the last 20 years of neuroscience data, comparative psychology data, so on, and so forth.

Samuel Arbesman:

That's amazing. So, maybe this is the perfect moment to allow you to share a little bit about that story. I feel like the scaffolding of the story is basically these five breakthroughs, I guess, in evolutionary history. I don't know if you want to go through those different breakthroughs or the overall story and that picture that you try to tell in the book.

Max Bennett:

One of the core sort of realizations I had was that when you look at comparative psychology, which is the study of comparing intellectual and behavioral abilities between different animals, there's this litany of different abilities that we see across animals. And when we look at evolutionary neurobiology, there's a litany of brain regions that emerge. And so when we're stuck in the trees rather than looking at the forest, really what it seems like is there's just been a ridiculous number, billions of iterations, that changed an ability here, an ability there, a brain structure here, a brain structure there. And what I found most interesting is when looking at the big brain changes that emerged and looking at the intellectual abilities that seem to have emerged at certain points in evolutionary history, what seems like a suite of different intellectual capacities can actually be understood as different applications of one sort of algorithmic operation, and that merges what seems like a litany of arbitrary changes into a coherent story.

So, I'll give an example of this. With mammals, there are sort of three main new abilities that the evidence suggests we got around 150 million years ago in the transition from proto-mammals to mammals. We see this thing called vicarious trial and error emerge. So, even rats show evidence of being able to imagine possible futures, and the neuroscience studies of that are really, really cool. We see episodic memory, so there's lots of good evidence that all mammals, including rats, can re-render a memory of a past event in their life. And we see counterfactual learning. There's some really clever experimental paradigms where you can see a rat consider taking an alternative past choice. And so these might seem like different abilities, but if we take the AI lens, where we try to reduce everything to what algorithm is being implemented, these are really all different applications of simulating.

These are all rendering a state of the world that's not the current one. And so, then if we go and we look at what actually changed in the brain of mammals, we see that really only one new brain structure emerged, this thing called the neocortex. And so, this helps us now piece together a first approximation, which is it seems to be the case that the neocortex emerged and from the neocortex came these three new applications of the ability of simulation. And more and more it comes together, the more we actually look at what does the neocortex do, and there's a lot of really interesting ways you can see how the neocortex is highly involved in things like simulation.

There's also great studies on mammalian perception. A lot of people refer to mammalian perception as a constrained hallucination, which also has really nice synergies with this idea that all the neocortex does is render a world model, so on and so forth. That's just an example of the value of merging these things together: it takes things that seem very different and reveals that they're actually related. So, to just quickly summarize, because we could take up hours going through each of them-

Samuel Arbesman:

Yeah, and then we can dive into a few different things, but yeah.

Max Bennett:

Yeah, yeah, yeah. So the very first breakthrough, which emerged in the very first brains, is what I call steering. People from a more technical world would call this taxis navigation. Effectively, it is the classification of stimuli in the world into good and bad, and you approach things that are good and you avoid things that are bad. That seems to be the core function of the very first brain. Fast-forward to vertebrates and you see reinforcement learning evolve. So, you can see that this classification of things into good and bad was converted into a reward signal, and a lot of the really cool things that we see in AI to make reinforcement learning work, such as temporal difference learning, actor-critic architectures, the importance of having a reward expectation in order to accurately learn without a model of the world, there's tons of evidence that that's exactly what's happening in fish brains and that's what emerged in early vertebrates. So we see pattern recognition and reinforcement learning emerge there.
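For readers who want to see the reward-expectation idea concretely, here is a minimal sketch of tabular temporal difference learning in Python. The toy chain of states, the reward, and the learning parameters are invented for illustration; they are not drawn from any study discussed in the episode.

```python
import random

# Minimal sketch of tabular TD(0) learning: the agent keeps a reward
# expectation V(s) for each state and nudges it by the difference between
# what it predicted and what it actually observed (the TD error).
# The five-state chain and parameters below are toy choices for illustration.

STATES = [0, 1, 2, 3, 4]       # a simple chain; state 4 is terminal and rewarding
ALPHA, GAMMA = 0.1, 0.9        # learning rate and discount factor

V = {s: 0.0 for s in STATES}   # reward expectations, initialized to zero

def step(state):
    """Move left or right at random; reward 1.0 only on reaching state 4."""
    next_state = max(0, min(4, state + random.choice([-1, 1])))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

for _ in range(5000):          # many episodes of trial and error, no world model needed
    s = 0
    while s != 4:
        s_next, r = step(s)
        # TD error: actual outcome (reward plus discounted expectation of the
        # next state) minus what we currently expect from this state.
        td_error = r + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA * td_error
        s = s_next

print({s: round(v, 2) for s, v in V.items()})  # expectations rise toward the goal
```

Actor-critic architectures extend the same idea by pairing a learned expectation like this (the critic) with a separate policy that chooses actions (the actor).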

With the mammals, building on top of this infrastructure of learning through trial and error, you see the emergence of being able to learn not only from what actually happens, but from imagined possible actions. So in other words, once you have the sort of infrastructure for reinforcement learning, you can now build this sort of model architecture on top of it where you can imagine possible actions and then learn from those. With primates, you get what I refer to as mentalizing, in other words, what's also called metacognition, so thinking about thinking, and you can see this is kind of a model of the model itself. So in simulating, a rat can imagine possible external states of the world. With primates, we see these new neocortical regions that get input from the older mammalian neocortical regions, which can be seen as creating a model of one's own inner simulation.

So, this has a ton of benefits. Primates are very good at theory of mind, thinking about what other people are thinking. They're very good at imitation learning, because they can imagine what someone else is doing, so on and so forth. And then with humans, you get the emergence of language. There's lots of controversy as to what makes humans unique. There are people that still make the argument that only humans can imagine the future. I find that not a very compelling argument given the evidence. So to me, the evidence is very strong that really the core thing that makes humans different is just that we have this thing called language, which we can talk about. But those are the five sort of algorithmic breakthroughs.

Samuel Arbesman:

This is great. Yeah, there's a lot of different directions there that we can go in. I guess one way to think about it is, if we were to rerun the tape of evolution, would we get any of these breakthroughs of intelligence? Would we get them in this order? Is there an inevitability of intelligence looking the way it does for our minds? I mean, I think for a lot of people, and especially people who read science fiction or who are just beginning to imagine different possible spaces, there's this sense that either superintelligence or some other kind of super smart things are going to be kind of like us, but just faster and more, versus other people saying, "No, no, it's going to be something qualitatively different." And I wonder if your work can almost shed some light onto how we think about the actual realm of human intelligence and other animal intelligences within this larger space of possible intelligences and evolutionary trajectories?

Max Bennett:

I love that question and I don't have the answer, but I have lots of thoughts on it.

Samuel Arbesman:

That's perfect.

Max Bennett:

It's a profound question, because on one hand I want to say yes, and on the other hand I want to say no. So, on one hand I want to say no, because when we look at the animal kingdom, there are so many diverse form factors of intelligence that went down their own evolutionary trajectories. An octopus has a brain that looks nothing like our brain, and yet has incredibly impressive abilities. A bird brain has gone through a very different evolutionary trajectory, arthropod brains have gone through very different evolutionary trajectories. So on one hand, I want to say it's a very human-centric view to say our five breakthroughs are some inevitable path. If we looked at any other smart animal in the animal kingdom, we would see a very different series of paths. On one hand, I want to say that there is a diverse suite of intelligences, and a lot of the way we look at AI is through a very human-centric lens.

Just look at the most recent breakthroughs, language models, which is a very human-centric problem statement. On the other hand, I want to say yes, because although the physical instantiations of many different brains seem to be in some ways different, there seem to be remarkably similar algorithms that are implemented. For example, this sort of temporal difference learning and predictive coding system that we see in vertebrates, we also see in many arthropods, and there's lots of good evidence that that is not from a shared ancestor. These seem to have been independently evolved; they stumbled upon the fact that once you have signals for good and bad, eventually the best way to learn through trial and error on good and bad is to have a predictive coding or temporal difference-like system. This algorithm, although the exact nuances might be slightly different, is almost inevitable once you have classification of good and bad.

Similarly, taxis navigation seems to have reemerged even at different levels of scale. The same algorithm that our worm-like ancestors used is also implemented by single-celled organisms. The physical instantiation is entirely different. Protein cascades in a cell are very different than neurons in a worm brain, but the algorithm is almost the same. Also, with model-based reinforcement learning, or simulating, there's a lot of good evidence that early vertebrates could not do this, but birds seem to have independently evolved this ability-

Samuel Arbesman:

Interesting.

Max Bennett:

... in a very different brain structure called the DVR, which under the microscope doesn't look anything like the neocortex and yet seems to somehow implement a similar type of algorithm. So, on one hand I want to say that some of these things are kind of inevitable, and where things flourish might be more a matter of what options were available to explore. So for example, and this is complete speculation, but one speculation I've had for a while, and I'm curious if any listeners have thoughts on this... email me if you do... is that one reason why birds never achieved the same intelligence as humans is just because of a limitation in the way that their model architecture was implemented; it's not as scalable.

So the DVR, which is their equivalent to the neocortex, is a nuclear structure. In other words, there are different spherical nuclei that are talking to each other. It's possible that that doesn't scale up as easily as a layered structure like a neocortex, because you can just keep adding more layers and it's inherently parallel. And so, one speculation is that this is the reason why even dinosaurs had relatively small brains; some evidence suggests they were quite smart, but their brains couldn't scale up the same way the human brain just expanded by 3X over the course of 500,000 years, or a few million years, which is mind-blowing. It's possible that, just given its sort of location in this morphology space, it reached a local maximum it couldn't move away from, and we just happened to get lucky that mammals had a morphology that scaled up really nicely.

Samuel Arbesman:

Oh yeah, this is fascinating. I love this idea, these examples of convergent evolution kind of pointing to certain amounts of inevitability of certain things. At the same time though, I think what you're saying is that it's not that... And we're certainly fans of human intelligence and all the features of human intelligence, but it's not that like, "Oh, certain types of intelligence are better or worse," as long as they are appropriately fit to an organism's environment, that's all that it needs. And so, whether it needs some more sophisticated stuff, if it's more complex, but it doesn't actually add anything, then it might actually be inefficient and could be selected away or whatever. And so, there's lots of different ways of thinking about how these paths could go.

Max Bennett:

Totally.

Samuel Arbesman:

And you might only get human intelligence in certain very specific, unstable environments, and that kind of leads you down those certain kinds of paths.

Max Bennett:

Another interesting takeaway, the more I studied the evolutionary trajectory of humans and other animals, is that it's usually the niche that's on the cusp of not making it where you get the most innovation, because by biomass, even animals are not the most successful organisms. I mean, there's way more biomass of fungi and ants than there are animals, but it's way harder to be an animal, because an animal is the only creature that has to actively go and kill other things in order to consume food. And so, animals have always had this harder path forward, and so you get lots of innovation from the trials and tribulations.

The other interesting thing is that if there weren't calamities throughout Earth's past, there would never have been this many opportunities for new diverse types of intelligence to emerge. For example, if an asteroid had not hit Earth 66 million years ago, there might not be any humans on Earth, because our ancestors were relegated to a tiny niche of being nocturnal squirrel-like creatures underground, and the world was ruled by dinosaurs. So, if that had not happened and given mammals the opportunity to proliferate, the world might just be full of dinosaurs right now, and there never would have been any humans. Going back, in every mass extinction event you see the same sort of effect. So, you also need a world where there are enough crises and shocks to the system to keep exploring different parts of this morphology space, because otherwise you find a local maximum and then the ecosystem doesn't change.

Samuel Arbesman:

If the Earth were a lot more boring, we'd all be single-celled organisms living happily in our warm puddles or whatever. So, going to the feature that you discuss as a hallmark of human intelligence, language: when I think about language, I guess there's kind of two different directions I think about. On the one hand, going back to AI and these large language models, we associate intelligence with language, because that's one of the things we do.

And so when you see these systems generating language, we're kind of already primed to think that they're intelligent, even if sometimes they're maybe not as intelligent as they seem. On the other hand, there's been this long history of people and scientists saying, "Okay, humans are the only species that can do X," or whatever, and this is a well-known thing. And then of course they say tool use and some other things, and of course there are lots of other organisms within the animal kingdom and the tree of life that can do these things. When it comes to language, there are other organisms that can use language in certain ways, and I think when you're talking about language, it's a certain type of language use. Maybe it's kind of recursive and kind of open-ended, but in addition, language use is maybe a symptom of something deeper within the brain and the mind, and maybe you can talk about what the unique features are there.

Max Bennett:

I should also note that it is an open question whether certain communication systems that have been observed like whale songs might constitute a language. So, I do think that is an open question. The point in the book when tracking our evolutionary history is not ever to say that only humans have X or only mammals have X, but to ask, when in our trajectory did these things emerge?

Samuel Arbesman:

Oh, certainly, yeah. I'm just kind of providing some other context as well.

Max Bennett:

Of course, of course. And so with language, what's interesting is it seems to have required this thing that emerged earlier, which is mentalizing, in other words, the ability to consider the mind state of someone else. And I think one of the most intuitive ways to think about this is Nick Bostrom's sort of famous allegory about the paperclip problem, where we have this really superintelligent AI system that is entirely benign, has no desire to harm anyone, and we just give it a simple language instruction: "Hey, can you please maximize paperclip production in this paperclip factory? Because we're running out of paperclips." And it goes, "Great," and it enslaves all of Earth and converts Earth into paperclips and starts to seize greater and greater chunks of the solar system and the observable universe. And the point of that, as silly as it sounds, is to make clear how we don't realize how imprecise language is, because we're constantly engaging in this inference process of inferring what people mean by what they say.

So, language is a huge compression of what's actually being communicated between two minds that are using it, and it's founded on this idea of mentalizing. If you look at other apes, they're not very good at language, but they're very good at inferring what other people are intending by what they do and very good at inferring what knowledge they have. And so, it seems that that might have been a prerequisite in order to evolve sort of language on top of it. And one way I like to think about language is as this unique ability to share mental simulations, because even when we're just planning a hunt, let's say, one of us can render a plan and see it succeed in their mind, and now, for the first time ever with language, they can communicate that plan, and then I can render the same plan in my mind.

And so it's this ability to share in our mental simulations, which have existed for a long time. For over 100 million years animals have had mental simulation, but never before could it be directly shared, and that's one of the most powerful things about it. There's a divide in the linguistics community, and I think Noam Chomsky's view has been harder and harder to defend with the explosion of large language models, of course, but there is still a small cohort that believes that there's something unique that happened in the human brain that unlocked this operation on which language stands. I'm very skeptical of that. What makes language possible in the human brain is not some unique language organ; rather, we have a unique set of instincts that enable a general learning system to learn language. And for example, one unique thing that human children do, which is very difficult, if at all possible, to get a chimpanzee to do, is we natively engage in this thing called joint attention.

So with a pre-verbal child, you put an object in front of them, and they will point to the object and then look to a parent, and they'll look back and forth to confirm that the parent is attending to the same object they are. And this is a hardwired program that enables us to add declarative labels to things, because if I now have a simulation of a cup, and I can confirm through eye gaze that my mother has a simulation of the cup, and then she uses the label cup and I hear that, now we can label things. And it's painstaking to get a chimpanzee to do that. And that's why what they tend to learn is what's called imperative labels, not declarative labels. They learn, "If I hear this and I do this action, I get a treat."

Samuel Arbesman:

Interesting.

Max Bennett:

You're training them the way you train a dog, but they're not assigning labels to arbitrary objects in their mind, and so that's why it's very difficult to get them to learn language.

Samuel Arbesman:

This is all fascinating. Would you say then, and maybe this is overly controversial, but when it comes to large language models, they have the language ability, but without the mentalizing first, and so that's why there's tendencies towards hallucination, and some of these other kinds of things as well? Would that be a fair statement?

Max Bennett:

Yes, I think that is a fair statement. I think they're also lacking the grounding mental simulation of the world, the world model, et cetera, but that doesn't mean that scaling these systems up won't, for practical purposes, for many use cases, circumvent the need for these types of things, because LLMs can do things that biological brains cannot. So, a biological brain needs mentalizing, because we don't have that many years before we need to enter the world and start engaging with other people. I can't read a million theory of mind word problems to understand how humans behave. I need to be relatively effective quickly. An LLM, even though it maybe doesn't have the same infrastructure that a human brain has, we can feed it millions and millions of theory of mind word problems until it builds a somewhat sufficient representation of how humans behave in terms of knowledge, such that from a performance standpoint, it's very hard to tell the difference.

But it's important still as the builders of AI systems to be aware, I think, of the foundation on which these abilities emerge, because it's going to make us much better at understanding where brittleness lies. So for example, I would be concerned if we just took an LLM, put it into an embodied AI agent, like a robot, and then said, "Now go help the elderly around the house." I would be concerned that the suite of word problems that its data was trained on would lead it to make some important errors when inferring what people mean, their knowledge, so on and so forth. So, I think there's this interesting thing going on where we can scale up LLMs to perform well on certain tasks that seem to circumvent the need for certain things, but there might be cases where we do want to recapitulate things in biology if we want them to be more robust to new types of applications.

Samuel Arbesman:

And certainly, I mean, related to training these things and letting them exist in the real world as robots or whatever, there is a distinct difference between being trained on the world of stories and being trained on the physical world, and there are many things that happen in stories and texts that are not necessarily related to reality. And if these systems don't have that understanding yet, it can be problematic.

Max Bennett:

Yeah.

Samuel Arbesman:

Related to, I guess, language and mentalizing, there's also just the category of memory, like how we remember certain things. And you mentioned this ability to have episodic memory, to almost relive certain episodes, but there's also other kinds of differences between human memory or animal memory and the memory of computers, for example, going back to the artificial intelligence analogs. One of them is just the way in which we retrieve memories, and maybe you can talk a little bit about the differences and similarities there.

Max Bennett:

So classically, and we can build algorithms on a computer, or abstractions, that simulate the way that brains do memory, but in the physical instantiation of a computer versus an animal brain, there's a core difference in how memories are stored. In a computer chip, things are done through what could be called register-addressable memory. So, information is stored with a lookup key, and if you lose the key or the key gets damaged in any way, you lose access to the memory; the memory is gone. Human brains or animal brains seem to use content-addressable memory. So, the way you retrieve a memory is you give me a tiny piece of the whole memory, and then I fill in the rest, and that leads to some important differences. With animal brains, there's an increased robustness to stochasticity and noise in the brain itself and the sensory input. So, this is why you walk into a room and you smell something and immediately a memory from some past event emerges. It's exactly that type of content-addressable memory.
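A toy way to see the contrast in code: register-addressable memory fails when the key is damaged, while a content-addressable store can complete a whole pattern from a partial cue. The sketch below uses a tiny Hopfield-style network, a standard textbook model of content-addressable memory chosen here purely for illustration; the patterns and "memories" are invented.

```python
import numpy as np

# Register-addressable memory: a lookup key retrieves the value; damage the key
# and the memory is simply gone.
register_memory = {"grandma_kitchen": "smell of bread, yellow curtains"}
print(register_memory.get("grandma_kitchen"))   # exact key -> full memory
print(register_memory.get("grandma_kitchen?"))  # damaged key -> None

# Content-addressable memory: a tiny Hopfield-style network. Patterns are
# stored in the weights, and a partial cue is iteratively "filled in" until a
# whole stored pattern is recovered.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                         # no self-connections

def recall(cue, steps=10):
    s = cue.copy().astype(float)
    for _ in range(steps):                       # repeatedly fill in the rest
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s.astype(int)

cue = patterns[0].copy()
cue[4:] = 0                                      # only a fragment of the memory
print(recall(cue))                               # completes to the stored pattern
```

The same filling-in process that makes this kind of memory robust to a noisy or partial cue is also what allows confident reconstruction errors, which is the downside discussed next.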

The downside of content-addressable memory is that memories can get corrupted in weird ways. So, because you engage in this filling-in process, in other words, you're regenerating your past experience from a latent representation of the experience, you often feel more confident in your knowledge of what happened than your actual compression of that memory warrants. And this is why so much eyewitness testimony has led to false convictions, of people who were convicted of criminal behavior and then exonerated by DNA evidence, because we realize we fill in memories of things that feel real to us, but a ton of it was just filled in.

There are also some fascinating studies where they've recorded people giving stump speeches, and they often refer to some past memory in their life. I forget exactly how the study was done, but over the course of a year or two years, they tracked the change in how this story evolved for this person that kept telling it in the stump speech. And you ask them at the end, and they have no recollection of the actual true memory, because they've retold it so many times, and it just changes in minor ways each time.

Samuel Arbesman:

Interesting.

Max Bennett:

And so, it corrupts the memory every time you re-remember it, which is unlike how a computer works. So, there's pros and cons of the way that brains encode.

Samuel Arbesman:

One of the things I think about, I'm just very interested in these computational tools for thought and better tools for thinking, and then related to that, there's also a lot of people who think about improving your memory and your recall. Are there techniques built on these kinds of better understandings of the way in which the human brain stores memories that can allow us to have a little bit of the best of both worlds?

Max Bennett:

The people who engage in these memory competitions, one thing they do, which I find very telling, relies on the component of the human mind and mammalian minds that is so good, which is spatial memory. We're very good at engaging in spatial learning. This is why what's so intuitive for us is so hard for machines. We don't realize how difficult it is, how effortlessly we navigate around the home, how effortlessly we remember how to get from point A to B.

And so, one of the key memory techniques that people in these memory competitions use is, when they're remembering facts or numbers, they create this thing called a memory palace, and they have a mental map of this palace, and they place items in these locations. And I think it's fascinating that that improves your performance [inaudible 00:32:50], because what you're doing is you're sort of piggybacking on this evolutionary program of remembering space to learn these things. We didn't evolve to learn arbitrary numbers, we didn't evolve to remember all these weird semantic facts that humans now try to remember, but if we sort of piggyback on this spatial memory, we become way better at those things. And so yeah, that's one thing that comes to mind.

Samuel Arbesman:

Going back to simulation, and you kind of mentioned that simulation encapsulates a few different features of what the human mind can do, especially from the mammalian brain of simulating different futures and counterfactuals, there's also this just idea of imagination and curiosity, actually testing the world, but then also testing things in our own minds, and related to this is also the idea of fiction. I see a lot of people talking about one of the great benefits of reading lots of stories and reading fiction and novels and things like that is you can imagine what the world would be like if you had taken different paths, or you can imagine what it would be like to be someone with a very different background as yourself. And the truth is that also ties together with this idea of language as well. So, how do you think about simulation, language, storytelling, imagination, and curiosity? Are these things all deeply interrelated? Is there something distinct about certain aspects of these? Am I overly generalizing a lot here?

Max Bennett:

No, no, I think there's a relationship between them. I think curiosity might be standing on its own, and I'll talk about that in a second, but story is interesting, because the fact that humans get such pleasure from storytelling, I think, is revealing about the instincts required to make language a stable and effective strategy for survival. So, one of the challenges when we've done experiments trying to teach non-human apes language is they don't have this sort of natural tendency to want to share what's going on in their heads, and they don't have this natural curiosity about what's going on in our heads. And yet young human children get enamored with a story, and there's such pleasure in coming back with a story of what happened and wanting to inform your friends of what occurred. One theory is that in order to make language such a useful tool, it needs to be used a lot.

And so, we developed a preference to want to share information in our head and inquire into the information in other people's heads. Another lens on that is Robin Dunbar has a theory on gossip. One of the problems with language is I can also use it in an antisocial way. In other words, I can use it to lie, and so in order for language to be selected for it needs to benefit the participants of language, and if there's an incentive for me to defect and just start lying, then that destabilizes that, and then language can be lost because the people who don't speak language will do better than the people who do speak. And so, one theory he has is that the way in which you keep it stable is you enforce harsh punishments on those that defect. And so, gossip is a way in which you can pull this off, which is if we have an instinct to always share any social, moral sort of defection from a social norm, then the chance you're going to get caught when you sort of do something in an antisocial way goes up.

And so, that's one reason why he theorizes humans have such a strong instinct for gossip: to keep people sort of behaving in a pro-social, in other words, collaborative way. So, I think story and imagination are related, because story is an instinct that drives us to share our imaginations with each other. You said something like hypothesis testing. Although you said that in passing, I think it's a profound point that is not well understood in AI, and it could be the final breakthrough to getting the AI systems that we want. Embedded behind ChatGPT, despite how incredibly smart it is, is an incredible amount of brute-force human labor, and that brute-force human labor is curating the data on which it is trained. And what that means is if you give it data that says things like the world is flat, ChatGPT will believe the world is flat. Humans have to protect the training data from any falsehoods entering it.

ChatGPT will not on its own eliminate these falsehoods. And of course the AI systems that we want are ones that have a model of the world, and as they engage with the world, they update their model to best fit the world they're observing, such that if falsehoods are given to the AI system, it can on its own verify their falsehood, and this is what a smart scientist does. You tell them a fact, like the world is flat, and they go and do some experiments and they say, "I don't believe the world is flat." And so, that's a really fascinating open question: how do we build AI systems that continually engage with the world, build hypotheses, test their hypotheses, and learn on their own, without the brute-force human effort to build these beautiful golden data sets on which we train these systems?

Samuel Arbesman:

And do you think that's related to this idea of curiosity, like actually wanting to learn about new aspects of the world, or engaging with novelty? Because there's this whole field of open-endedness and novelty search and things like that, which are all kind of premised on this idea that you need to have a system that is trying to find new things or interesting things and combine them together in order to create, I guess, new insights or new ideas or new features of a system. Are these kinds of things also related to this idea of hypothesis testing?

Max Bennett:

Yeah, so I think there are two forms of curiosity. One is well understood in AI and does not satisfy what you described, and then one version of curiosity is not well understood and does satisfy what you're describing. The version of curiosity that's well understood is this idea that we should add a term to the reward function of an agent for surprising itself. In other words, we should reward it for exploring, and this is well understood in reinforcement learning. The famous story around this is that the DeepMind team couldn't get AI agents to beat Montezuma's Revenge, because the rewards are so sparse in that Atari game. And one way they got it to start exploring effectively is they built a state-prediction sort of machine that tried to predict the next state of the game, and whenever it did a bad job predicting the next frame, it gave the agent a reward.

So in other words, what they're doing is they're training the agent to enjoy and seek out surprise, so then it would go into other rooms, it would keep trying new things, and then once it understood what was going to happen next, it got bored and it moved on, and that was very effective for getting it to explore and try new actions. That has, I think, nothing to do with hypothesis testing. That's just an effective way to manage the exploration-exploitation sort of dilemma in reinforcement learning. Then, there's another form of curiosity, which is: I don't like when something in the world doesn't make sense to me. So, it's not a native thing that I want to try something new. It's that if you show me something that is odd and doesn't map to my model of the world, it's inconsistent, and either I must reject it or I need to somehow engage in some hypothesis to merge this into my understanding of the world.

And I think that's different, because it's not about just exploring actions; that's about updating my world model. These are probably in some ways related, of course, but there is something I do think we don't fully understand, which is how we enable an AI system to realize the data it's receiving does not align with its world model, and hence it needs to engage in hypothesis testing, update its world model, or reject the information because it's not credible; there's some disconnect that needs to be resolved. And I think some inner intuition we have is when you finally understand a math equation or a concept in school, there's this weird euphoria when things make sense. What is that? That is things making sense in one's mind. In other words, we take some new information and then it aligns with our model of the world. That magic, I don't think we've fully figured out how to embed into an AI system.
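To make the first kind of curiosity concrete, here is a minimal sketch of a prediction-error bonus: the agent keeps a simple forward model of what state comes next and pays itself an intrinsic reward whenever that prediction fails. The tiny tabular world and numbers are invented for illustration; real systems, like the Montezuma's Revenge work described above, use learned neural predictors over game frames, but the principle of rewarding surprise is the same.

```python
import random

# Minimal sketch of curiosity as a prediction-error bonus: the agent keeps a
# simple forward model of the environment and rewards itself whenever the
# model's prediction of the next state is wrong. Once a room is predictable,
# the bonus dries up and the agent gets "bored" and moves on.
# The toy environment and values here are purely illustrative.

N_ROOMS = 6
forward_model = {}              # (state, action) -> predicted next state

def env_step(state, action):
    """Toy deterministic environment: move left (-1) or right (+1) between rooms."""
    return max(0, min(N_ROOMS - 1, state + action))

def intrinsic_reward(state, action, next_state):
    """1.0 when the forward model is surprised, 0.0 once it predicts correctly."""
    predicted = forward_model.get((state, action))
    bonus = 0.0 if predicted == next_state else 1.0
    forward_model[(state, action)] = next_state   # update the model from experience
    return bonus

state, total_bonus = 0, 0.0
for t in range(50):
    action = random.choice([-1, 1])
    next_state = env_step(state, action)
    total_bonus += intrinsic_reward(state, action, next_state)
    state = next_state

print("total curiosity bonus:", total_bonus)  # accrues early, then stops once surprise runs out
```

Note that this bonus only encourages visiting unpredicted transitions; it does nothing to reconcile new information with a world model, which is the second, less understood form of curiosity described above.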

Samuel Arbesman:

Well, I think even more powerful than that is taking in some new information, initially it not making sense, living with that complexity or tension, and then eventually resolving it. And so, I think there's a line that's often attributed to Isaac Asimov, this idea that the best thing to hear in science is not "Eureka!" but "That's funny..."

Max Bennett:

Absolutely.

Samuel Arbesman:

That's the harbinger of something deeper and you have to actually pursue that, and being able to build systems that can pursue that kind of thing, that would be really, really powerful.

Max Bennett:

100%, yep.

Samuel Arbesman:

Related to that, I mean, one of the interesting things about your book is even though it deals with evolution, it was written recently enough that it was actually able to engage with ChatGPT and some of these large language models, which is really exciting to see. Since you wrote the book, obviously there have been some new advances and new different directions. How do you think about the current state of AI and the directions and paths that people are trying? Do you think that just scaling things up is the key, that scaling things up with maybe some sort of more symbolic approaches could be the right way, or that entirely different approaches that could be complementary would also be really powerful? Are you entirely agnostic about all these? How do you think about all this?

Max Bennett:

I think that we need diverse investments. So, I think it is not a foregone conclusion that scaling up transformer architectures is going to get us embodied AI agents walking around our homes. There is still a camp of people that think we don't need any more new ideas, we just need scale. It's possible they're right, but I don't think it's a foregone conclusion to the point where we should forgo investing in alternative approaches to achieve what we want. So, I think there is a reasonable case to be made that transformers won't even solve the robotics problems. There is a chance we end up here in five years being like, "Man, that was a hype cycle and we still don't have robots that can do the dishes." There's a chance I'm wrong and that does happen, but I think there's a good enough chance that we're going to hit a brick wall on that.

The misinformation problem, given that all a transformer really is is self-supervised learning on a data set, with no hypothesis testing, engaging with the world, or continual learning, I think could end up being not just a small sort of side problem, but a catastrophic challenge that we need to solve as these AI systems get more embedded into our world. And so, my general take is that it's obviously incredibly impressive, the degree to which scale has solved problems. It's by no means necessary for us to build AI systems in a biologically plausible or even biologically inspired way, but that said, I think there's enough risk in those investments that we should still have some people investing in biologically inspired approaches, trying to take good ideas from the way mammalian brains work, in case we do hit a wall on scaling things up.

Samuel Arbesman:

To come full circle back to the book you wrote and everything like that, what has been the response from people in different fields to this book, whether it's AI researchers, neuroscientists, or technologists from other domains? How have people responded to this kind of approach?

Max Bennett:

I've been really humbled by... I went into this being nervous, of course, and I think because I'm such an outsider, in some ways, if I could write the book again, I would make it even less technical. I think in some places I was insecure, so I went overboard making sure that people knew I did my homework, if that makes sense.

Samuel Arbesman:

Totally makes sense.

Max Bennett:

So if I could do it again, I think I maybe would be more willing to have some people be like, "Hey, you glossed over details," and make it simpler for someone coming in with less domain knowledge. But I've been really honored by the response, I think especially from the academic community. I've been invited to several academic conferences, which has been really humbling, to be around all these people that are heroes of mine and get to talk about the ideas.

I think some people find it refreshing to have an outsider do this sort of forest-for-the-trees exercise, because it's not the kind of work that helps with a tenure track, right? You don't get tenure for writing a book like the one I wrote. The incentive system in academia doesn't beget the kind of work that I did. And so I think there has been an appreciation, which is nice, of, "Hey, we needed an outsider to come in here and write a book." And that's been really fun, and there are certain research groups that I'm getting excited to get involved with to try to take some of the ideas in the book and maybe build AI systems inspired by them. I've made a lot of really cool friends from people reaching out from the neuroscience and AI community, so yeah, it's been a fun journey.

Samuel Arbesman:

That is amazing. That is wonderful to hear. I'm glad it's worked out so well, and yeah, that's I think a great place to end. So yeah. Max, thank you so much. This has been amazing.

Max Bennett:

Thanks for having me.