Securities

The p-zombie theory of consciousness

Description

The rise of generative AI and large language models (LLMs) has forced computer scientists and philosophers to ask a fundamental question: what is the definition of intelligence and consciousness? Are they the same or different? When we input words into a chatbot, are we seeing the early inklings of a general intelligence or merely the rudiments of a really good statistical parrot?

These are modern questions, but also ones that have been addressed by philosophers and novelists for years, as well as the occasional philosopher-novelist. One member of that rare breed is the subject of this week’s “Securities”, specifically the novel Blindsight, the first of two books in the Firefall series written by Peter Watts back in 2006. It’s a wild ride of dozens of ideas, some of which we’ll talk about today. Spoilers abound, so caveat emptor.

Joining me is Lux’s own scientist-in-residence Sam Arbesman as well as Gordon Brander, who runs the company Subconscious, which is building tools for thought such as Noosphere, a decentralized network of your notes backed by IPFS, as well as Subconscious, a social network built around those notes that allows you to think together with others. Think of it as a multiplayer version of Roam.

We talk about a bunch of concepts today, from the distinction between consciousness and intelligence, Searle's Chinese Room, the Scrambler consciousness test, whether consciousness is necessary for intelligence, and then for fun, a look at intelligence and the Large Language Models that have sprung up in generative AI. Approachable, but bold – just as Watts approaches his works.

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Hello and welcome to Securities, an audio and video podcast, plus a newsletter, discussing science, technology, finance, and the human condition. I'm your host, Danny Crichton, and today we're talking about one of the most philosophically expansive and intricate hard sci-fi books out there, Blindsight. Blindsight is the first of two books in the Firefall series written by Peter Watts back in 2008 and it's a wild ride of dozens of ideas, some of which we'll talk about today. Spoilers abound, so caveat emptor. Joining me today is Sam Arbesman, a common guest here on Securities because Sam, you're employed here at Lux.

Sam Arbesman:
That's true.

Danny Crichton:
We always grab your time. Sam is our scientist-in-residence here at Lux Capital. And joining us as a special guest is Gordon Brander. Gordon runs the company Subconscious which is building tools of thought such as Noosphere which is a decentralized network of your notes backed by IPFS as well as Subconscious which is a social network built around those notes that allows you to think together with others. Think of it as a multiplayer version of Roam. Gordon, thank you for joining us.

Gordon Brander:
Oh, thanks for having me.

Danny Crichton:
We're going to talk today about a bunch of different concepts from the distinction between consciousness and intelligence, Searle's Chinese Room, the Scrambler Consciousness Test whether consciousness is necessary for intelligence. And then just for fun, a look at intelligence and the large language models that have sprung up around generative AI.

Or maybe we won't do any of that stuff because we are intelligent, conscious beings who can completely ignore everything that we hammered out on the pre-production call. Who's to say? That's what makes these podcasts interesting. And let's just put it bluntly, we had a list of about 20, I think, philosophical puzzles from this book and our producer Chris Gates was like, "We're going to lose all of our subscribers if you talk about any of these." So with that said, Sam, Blindsight, let's talk a little bit about the plot because it sounds like an amazing, amazing book that I have not read.

Sam Arbesman:
Yeah. It is a lot of fun. There's a ton there on... There's even a massive appendix where it's quoting all the different scientific articles. But in terms of the... I guess there's a few different ways of describing it. One is in the same way that there's people trying to do the Great American novel, this is the great consciousness novel. This is just taking the idea of consciousness and really hammering it home. But really, I mean, in terms of the plot, it's essentially a... It's a first contact novel. It's about humanity engaging with an extraterrestrial intelligence. And so, the plot basically is, there's evidence of some sort of intelligence like a whole bunch of stuff happens, I think, in the atmosphere of the earth. And then, they send out a whole bunch of probes to some comet that's sending out some messages. And then eventually, the bulk of the plot is this few person ship that goes to somewhere in the Oort cloud and there's this ship, this vessel orbiting it and they have to engage with it. And then... I would definitely say not hilarity ensues. It's a further level-

Danny Crichton:
A PhD dissertation ensues.

Gordon Brander:
Yeah.

Sam Arbesman:
It's not even-

Danny Crichton:
It's dark.

Sam Arbesman:
It's very dark. It's like-

Gordon Brander:
Bordering on cosmic horror. Yeah.

Danny Crichton:
So this isn't Star Trek: The Next Generation where everyone feels good after 45 minutes minus-

Sam Arbesman:
No, it's not. No. Basically, no one feels good really at any point and it's also very... The characters who are on the vessel or on the ship, the first contact ship from Earth, they're all, I would say, outliers when it comes to consciousness in a number of different ways and how people think about the mind but they're also... They're all there for very specific reasons. And so that, coupled with trying to actually understand this giant vessel which calls itself Rorschach. That's the entire plot, trying to figure out what is going on and there's de-extincted vampires. It's bonkers and amazing and-

Danny Crichton:
Sam, we talked about this. The vampires are not on the script. We talked about this.

Sam Arbesman:
Okay. Fine.

Danny Crichton:
We talked about this. So Sam, when did you read the book?

Sam Arbesman:
Okay. Fine.

Danny Crichton:
You read it when it came out in 2008 or more recently?

Sam Arbesman:
No, no. So I think I actually tried it years ago, not necessarily when it came out but a while ago. And then, over the years, a number of people have been like, "Oh, this is really good," and people pointed to it and finally... And I think I was actually having a conversation with our producer in the context of talking about large language models and the differences between intelligence and consciousness and I'd mentioned this book Blindsight.

I was like, "Oh yeah, it touches on these issues and I never really got into it but I think this could be a fun thing to discuss and it's definitely worth rereading," or in this case, for me actually giving it a full opportunity. And so, I read it only in the past couple months and it was amazing and it's like this thing, yeah, that came out almost 20 years ago. Because I think, maybe it originally came out in 2006, I don't remember exactly. And it nails all of the issues that we're thinking about in terms of large language models and how to think about, "What is intelligence? What is super intelligence? Is intelligence the same thing as consciousness?"

Danny Crichton:
So Gordon, you're in a company called Subconscious so I think you're the perfect person to talk about consciousness and intelligence. But Gordon, how did you run into the book? Because we crafted this episode because we all figured out that we all had read this book or you two had read this book and I could follow along and tag along much like our audience I'm sure. But Gordon, how did you run into the book?

Gordon Brander:
Yeah. Actually, I was at a Santa Fe Institute Conference. Santa Fe Institute is this thing out in Santa Fe. They do complexity science. I think, Sam, you might be connected with them to some degree but-

Sam Arbesman:
I definitely... So years ago, I spent a summer there as an undergrad and I stayed in touch with a lot of folks and do things periodically with them and help out and they got a great community-

Danny Crichton:
And Josh here at Lux Capital is also on the board of trustees for Santa Fe as well-

Gordon Brander:
Ah, that's right. Yeah, yeah. So one of the-

Danny Crichton:
Yeah. We're all intertwined. Yeah.

Gordon Brander:
Yeah. There's a fellow there named Michael Garfield who actually did the Santa Fe Institute podcast. We were chit-chatting and he mentioned this book is just one of the best sci-fi books he had ever read. And yeah, I got to say, I picked this book up and it is so dark. It's like an existential crisis in a book. It really makes you question the value of life and consciousness and qualia and all this stuff. But I almost put it down a couple times and... That was a few years ago. When I picked it back up to do a little bit of a re-read before this podcast, I came away thinking this is one of the best hard sci-fi novels I've ever read and like Sam said, it comes with 17 pages of scientific citations so the guy did his homework. I think he was actually a practicing biologist at some point.

Danny Crichton:
Well, amazing. Well, I know our last show was on Tomorrow, and Tomorrow, and Tomorrow which is about as opposite of a hard sci-fi book to discuss as this one but let's dive into the interesting subject. So our first subject was around consciousness versus intelligence. And you had highlighted in our pre-production notes two quotes, one from Thomas Nagel, that consciousness means there is something that it is like to be a bat, so something essentialist to being a type of entity, versus W. Ross Ashby, who says that intelligence is getting information and doing something about it. So let's talk about this distinction because it seems to be at the core of the book and, really, the core of life and everything we're experiencing today, both for ourselves existentially as well as in large language models.

Gordon Brander:
Yeah. Exactly. I think those two quotes really do tease it apart. So Thomas Nagel's notion that there is something that it is like to be a bat. It might be different than what it's like to be me but I think a bat probably has experiences, an inner life of some kind. There's probably not something that it's like to be a rock. But it's interesting because this actually, you might look at that and since we're conscious beings and we tend to think in this way, you might be, "Well, I guess that's intelligence."

But I like Ashby's quote as a counterpoint. So I guess the full quote is, "To the biologist, a brain is not a thinking machine, it's an acting machine. It gets information and does something about it." So his point here is that actually, from the standpoint of evolution, evolution doesn't actually care about your qualia, it doesn't care about your inner life, what it cares about is survival outcomes.

And it's a little bit of a, I think, hard thing to picture what it might be like to be intelligent without being conscious. But actually, there are examples of this in nature. Like an ant, it might be conscious, I don't know, it has 250,000 neurons compared to our 10 billion. If it's conscious, it's probably not very conscious, right? It's also not very intelligent. But when you get a bunch of ants together into a colony, actually, the colony is very intelligent. The ant colonies will cooperate to use tools to solve problems but I don't know. I don't think an ant colony is conscious. Sam, what do you think? You're the scientist.

Sam Arbesman:
I don't know. I mean... To be honest, I don't have a personal opinion on ant colony consciousness. With that being said, when I read Gödel, Escher, Bach by Douglas Hofstadter, there is an ant colony that has a name and is a character in these dialogues that he has. And so, it definitely seems to have some sort of consciousness. But the question becomes, "Could it act in a conscious way?" Like in a Turing Test kind of way of, "Oh yeah, it responds in a way that shows intelligence," but maybe the lights are on inside. And the question then becomes, "How do you know?" and the answer... People, I think, are struggling with this and there are a number of scientists who have come up with certain metrics on what would be the information processing within a system that maybe merits saying, "Okay. This system has consciousness." So when is a system conscious? When are the individual components conscious?

Danny Crichton:
Well, it reminds me of... In software engineering, there's this concept of duck typing. So if it walks like a duck and it quacks like a duck, then it must be a duck. So this, I think, is getting at the Nagel quote of if it seems like a bat and you're looking at it and it acts like one, then it is one. But then, I think what you're getting at Sam is this deeper question at the core of the LLM controversy right now which is like you ask a question to the LLM, it gives you an amazing response back. It's hyper accurate, it's almost as intelligent... It's certainly more intelligent than a third grader, a fourth grader like, "Where is the dividing line?"

And so, clearly duck typing is not enough to characterize consciousness. There's no rational thought, there's no enlightened thinking, there's no loop. I'm thinking almost of the Santa Fe Institute but there's no feedback loop in the complex system that's self-correcting, self-learning. It is just stats, if you will. And so, I like this distinction because I do think we're struggling and we're right in this liminal space between the two in software engineering in a way that we're not able to get past.
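Danny's duck-typing aside translates almost directly into code. As a minimal sketch, assuming made-up Bat and ChatbotBat classes purely for illustration, Python never checks what an object is, only whether it responds to the behavior you ask of it, which is exactly why behavior alone can't settle the consciousness question:

```python
class Bat:
    def echolocate(self):
        return "click-click"

class ChatbotBat:
    # Not a "real" bat, but it clicks like one.
    def echolocate(self):
        return "click-click"

def probe(creature):
    # Duck typing: anything with an echolocate() method passes this "test".
    return creature.echolocate()

print(probe(Bat()))         # click-click
print(probe(ChatbotBat()))  # click-click -- indistinguishable from the outside
```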

Gordon Brander:
I think Turing was pulling on that insight too with the Turing Test. He's anticipating all of the advancements that would happen with computers and was saying, "Well, this thing is like a big electronic brain and what happens if it starts having conscious experiences, qualia, an inner life? How do we know?" And his proposal was basically that we have a dialogue with it and we try to see how it responds and if it starts responding in ways that seem like it has consciousness, that there's something that it is like to be a computer, then we treat it as such. Because the notion is like, I guess, "I can't get into your head. I can't actually know if you're conscious."

Sam Arbesman:
Well, I wonder there though if that means that we're going to always be biased towards more human type consciousness because we know what it's like to be a human and what it's like to have this internal monologue, things going on inside our heads. And so, when we see systems that have similar kinds of outputs, no one thinks that the generative AI systems for images are conscious. That hasn't been a conversation because the outputs are so wildly different from what we output, which in this case is text and thoughts and strings and symbols. I do wonder though, as we think about consciousness, there probably is this very large, high-dimensional space of types of consciousness. But right now, we're really only able to identify things that seem very human-like.

Danny Crichton:
I remember... I mean, so Gordon, you mentioned experience is different than a rock. But what I think is interesting is if you look in the natural world over the last two decades, a lot of scientists have come together to understand how trees communicate and we're learning in much the way that, I think, in the animal world we learned the same thing over the last couple of decades, that dolphins and whales are these incredibly intelligent creatures, they have languages.

In fact, I think we had a Lux Recommends article either last week or in my inbox for this week around how we have started to learn baby talk for dolphins. That we actually now understand that dolphins, dolphin parents specifically, are using a form of baby talk, a kind of devolved, deformed form of communication that is, in some ways, like a metaphor.
But we also see this in the plant world so we are learning more and more about how trees communicate with each other, that they essentially send, let's call it chemicals, into the air that get moved on winds. And so, different trees can hear from each other. So if a tree is suffering, other trees actually respond in unique ways. They can strengthen their bark. They can change the structure of the actual plant and how it grows. So they're actually connected and in the most avant-garde way, they're all a consciousness. All the trees are communicating with each other and warning each other. It probably happens over hundreds and hundreds of years to be clear.

So what I like about this, and then we'll move on to some new other concepts, is like there's not just intelligence and there's not just consciousness. You can be conscious and not that intelligent, intelligent but not that conscious. But you can also have all these other forms of consciousness, which is what Sam was talking about. You can have ant colonies where each individual ant has a little bit of consciousness, but really, the intelligence emerges from the colony as a whole.

In other cases, we have individual animals that communicate with each other much like humans or you have plants where it's almost like a network in a very Santa Fe way, a decentralized network of individual entities. I don't know the right thing to call it because they're not conscious in the way that we would necessarily understand, but they do take in information from the world and bounce it back and they're solving problems and they're collecting information, they're getting sunlight. And so, I just think it's amazing to see how many parts of that spectrum exist where I think we have this very narrow view of, "Well, human is conscious and that's the only way it goes forward."

Let's move on. Let's move to the Chinese Room because Chinese Room was a huge part of this book. So we had a couple of concepts, Chinese Room and the Scrambler Consciousness Test. So Gordon, you just read the book. Do you feel like you can summarize what the Chinese Room and Scrambler Consciousness Test is?

Gordon Brander:
Yeah. I'll do my best. So I guess the notion is even on our planet, we have a wide variety of kinds of intelligence, right? An ant colony is intelligent, I'm intelligent, a bat's intelligent, you could say a rainforest or nature or markets are intelligent and all these things might be conscious or not conscious in different ways.

So one of the core questions in this book is, what happens when you run into an intelligence that's completely alien? How do you figure out what's going on? Is it conscious? Is it intelligent? How does it solve problems? How does it think? A sort of philosophical experiment that's referenced a lot in the book is this concept of the Chinese Room. So the basic background here is you imagine yourself locked in a room and little slips of paper are coming in under the door and on those little slips of paper is a language that you don't speak.

In the original thought experiment, it was an English speaker getting little slips of paper that have Chinese written on them. And in front of you, you have basically a big book and in that book you can look up, "Okay. Well, what is this symbol? When I see this symbol, I'm supposed to write this other symbol and then slip another note back out the door." So this is interesting, right? Because if you imagine now you're the person on the other side of that room, the person slipping notes in Chinese under the door, to you, it would seem like whoever's in that room understands Chinese. But actually, the person is just looking things up in a book like, "Do they really understand?" and I think we've actually been running into some of these questions too with AI. I'm curious, Sam, you might be able to talk a bit more about that.
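The rule-book mechanic Gordon describes is easy to caricature in code. Here is a minimal sketch, assuming an invented two-entry rule book rather than anything from Searle or the novel, just to show that the operator only matches symbols and copies out replies, with no understanding anywhere in the loop:

```python
# Invented lookup table standing in for the "big book" of rules.
RULE_BOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I'm fine"
    "谢谢": "不客气",      # "Thank you"   -> "You're welcome"
}

def person_in_room(slip_of_paper: str) -> str:
    # The operator looks up the incoming symbols and copies the prescribed
    # response; no Chinese is understood at any step.
    return RULE_BOOK.get(slip_of_paper, "请再说一遍")  # fallback: "please say that again"

print(person_in_room("你好吗"))  # looks fluent from the other side of the door
```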

Sam Arbesman:
When this thought experiment was first proposed, I think it was viewed as like, "Oh, this is a slam dunk of showing that computers can never be conscious. They're just doing this symbolic manipulation." The counter argument, I think, is also quite powerful. It's like, "Okay. In the same way that an individual neuron does not understand what is going on in my mind, this person doesn't understand either what's going on in this room but the overall system can understand," and this is something, Gordon, you and I think have talked about before, the room might be conscious, like the system understands. So the system can handle these kinds of things. Whether or not it's conscious versus just intelligent, I think that that's an interesting point. But these LLMs, individual artificial neurons, they're not understanding what's going on. But the question then becomes, "Okay. Does it overall understand it or is it this stochastic parrot that is..."

And this is, I think, the interesting thing. And related to this is the idea of the philosophical zombie or the P-zombie which is like, "Okay. If you have..." It's not like Walking Dead or anything like that. It's basically just the idea of imagine a person who is just like you but there is no consciousness. So every interaction would be identical but there are no lights on and the question is, "What does that mean? Is it even possible?" And now, I think with these AI systems, we're beginning to grapple with this, "Okay. These systems can obviously interact for significant periods of time with other humans in very human-like ways." And so the question is, "Are they P-zombies? Are they actually conscious?" and I feel comfortable right now given the ways in which we can break them to say that they're not conscious yet, at least. That being said, I mean, there are ways of breaking humans too so we have to just be humble in all of this as well.

Danny Crichton:
I think one of the interesting things whenever I think about the Turing Test, it has this concept of you, you put information in, you're getting reasonable information back. And so you're like, "Okay. Whoever's on the other side..." I mean, the Turing Test is basically the Chinese Room and I do love the ethnocentrism of the Chinese Room. It could easily be the American Room which would probably be more accurate for most people's experience around the world. But nonetheless, this idea of you just put stuff in, you get stuff out, it's reasonable. Turing says that's basically the definition of artificial intelligence, you can't make a distinction.

So I'm remembering, I think at MIT there was this project called Eliza, E-L-I-Z-A, which was a text-based chatting app back in whatever the '60s or '70s and it was actually fairly realistic. You could give it a couple of responses and it would come back but it was not intelligent in any way, shape or form. Nothing like the LLMs we have today. It was, I think, pre-wired and was quite contained. And so, I think that there has to be a qualitative judgment here. We can't just quantitatively... In the LLM world today, there's a lot of benchmarking that's like, "Okay. If you ask this question, do I get a response? And does that seem to make sense?" and that's how we're evaluating a lot of LLM models.
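ELIZA worked roughly the way Danny describes: hard-wired pattern matching with no model of the speaker. A minimal sketch in that spirit, using a few invented patterns rather than Weizenbaum's original DOCTOR script, makes the "pre-wired and quite contained" point concrete:

```python
import re

# A handful of invented pattern/response pairs, ELIZA-style.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.I),       "You seem quite sure."),
]

def respond(utterance: str) -> str:
    # Pure pattern matching and template filling: no memory, no model
    # of the speaker, no "lights on inside".
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "How does that make you feel?"  # the stock fallback

print(respond("I feel stuck on this project"))
# -> Why do you feel stuck on this project?
```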

There also has to be a qualitative sense of, "Does it actually think? Does it actually combine different pieces of information together? Can it be fooled because it is being given words in unique ways that mess with the neurons that are in the neural network?" And I'm thinking particularly of Gary Marcus who was on the podcast last year who makes a huge distinction between statistical AI and the actual functional rules-based AI which failed as a discipline but we lost truth along the way. And so, I think that gets at this aspect of whether what is really true comes from consciousness or from intelligence here.

Sam Arbesman:
I definitely think... There's a lot of things there. One thing, going back to the stochastic and statistical versus rules-based, I actually think a lot of the rules and symbolic manipulation are emerging at this high level in a much more organic way and robust way than the original rule-based symbolic manipulation, the old style. But going back to the Eliza, the chatbot, one of the interesting things though is these chatbots and [inaudible 00:20:58] are very dependent on your expectation like what you put in. So if you think this is going to be some sort of psychoanalyst, then you're much more predisposed to overlook its weird things where it's like, "How does this make you feel?" It's very scripted. I've played with it before.

And in the same way that a lot of these AI systems, like the current AI tools, if we go in thinking that they're going to do certain kinds of things, we're more predisposed to find that. So I feel like to a certain degree, yes, we do need to have tests because I think otherwise, it's very easy for these systems to become mirrors and we see what we want to see. And so, I think that's just something to be aware of.

Danny Crichton:
In other words, we're art. But Gordon, let me ask you. I mean, we... I want to talk about this quote here about ambiguous phrasing because I think a huge part of the book is trying to tease out this distinction. So we have this quote here from the book by Peter Watts, "Our cousins lie about the family tree with nieces and nephews and Neanderthals. We do not like annoying cousins," in the context of theory of mind versus parrot. That's in my notes and having not read the book, I have absolutely no idea what the context is but maybe you could take us a little bit of a leap of faith there.

Gordon Brander:
Yeah. Actually, Peter Watts loves these two thought experiments, the philosophical zombie and the Chinese Room and actually, in the sequel to Blindsight, there are literal philosophical zombies, like people who have their consciousness removed for various reasons. So in addition to the vampires, we get Scientific Zombies. But in any case, most of Blindsight is actually this team of people trying to figure out this alien thing, spacecraft entity. We don't know what it is. Is it conscious? Is it intelligent or is it a Chinese Room? Is it actually a stochastic parrot?

Because when they run into it, it's speaking English and they basically figure out that, "Hey, it must have been listening to signals bouncing around radio and reverse engineered what's going on here." So is it just reflecting back to us like a mirror what we put into it or is there something else going on here? So there's a few ways that they try to poke at it that actually... The funny thing is, 10 years later, 20 years later, however long it has been since the book was published, we see LLM researchers doing similar things.

So one of them is, yeah, this ambiguous phrasing trick. So if you think about you're in the Chinese Room and a slip of paper comes in and that slip of paper, the person who understands what they wrote has deliberately written something that's grammatically ambiguous. Like in the case of that phrase that we heard earlier, "Our cousins lie about the family tree," this is their response to Rorschach, this alien entity asking about human origins. And so, are we saying our cousins are arranged around the family tree or are we lying about the family tree to Rorschach, right?

And the interesting thing is if you are a stochastic parrot, if you're just predicting the next token, you look up in your little rule book and it's like, "Well, I'm going to collapse that ambiguity because I actually don't have any ability to recursively reflect on what I just read and intuit that there's something wrong here, that there's a halting problem." So we would expect that, at some level of recursion, there'll be a bottoming out, a flattening of the ambiguity and that the alien or let's say the LLM is going to actually hallucinate a response that's very confident but might ignore all of those ambiguities.

Danny Crichton:
Obviously, language and consciousness are very intertwined, at least from the human perspective. When we think of consciousness, we think of people able to communicate using their languages and I study foreign languages and I always have this argument around fluency. That even in English, if you talk to medical doctors in the ER, I have no idea what they're saying. You go to the air traffic controllers and you're a pilot, I have no idea what they're saying. They're speaking English and I speak English and I'm a native speaker of English but I have no idea what's being communicated. And so, it's not just enough to know a specific language. At some base level, there's domains, there's technical terms. Sam, you have a PhD in biology. You start talking about Western blotting tests and I'm like, "I have no idea what you're getting at whatsoever." Northern blotting, Western blotting, Southern blotting, I don't know, can you do a southwestern, more Santa Fe-style blotting? I don't even know. So there's all this-

Sam Arbesman:
By the way, initially... As an aside, initially it was based on a person's last name and I don't remember if it was Western or Southern. And then as a play on that, it was the guy's name and then they just started making cardinal directions which is a fun little aside.

Danny Crichton:
Right? This is a good example. To me, we look at language and I think that's where this Chinese Room gets really interesting is at some point we are all just trying to use some symbolic language. When I speak in a foreign language, I can get pretty good. I know what it's saying and then I write the response and I'm like, "Do I know... Okay. There's 20 synonyms for funny but is it sardonic, ironic? Is it wry?" There's so many different slight nuances to these words and I just don't know them, right? I don't use these languages enough to be able to communicate at that level. And so, am I conscious? What's interesting to me is when I speak a foreign language my quality of work obviously goes down. But at some point, it doesn't even feel like it's conscious, right? I am becoming a stochastic parrot. I'm like, "Well, I remember in the textbook it was supposed to be this and I've heard someone use that on the internet." Maybe I didn't even know they were being sarcastic and I'm now sounding terrible.

Gordon Brander:
Yes. Stephen Wolfram actually makes this point too about LLMs, that they're computationally pretty shallow actually. This is one of the reasons they're bad at math because math is often about recursively expanding a series of tokens into some sort of answer. And one of the things he notes is actually, it seems that human language is actually surprisingly computationally shallow. It feels like it ought to be extremely recursively deep but it does make me wonder, especially as a verbal processor myself, how many times am I just guessing next token? And maybe there's some process up there that's just predicting whatever the next token ought to be.

Danny Crichton:
Well, two thoughts real quick though. This reminds me of... What is it? The linguistic example, like the fish, the fish fish fish fish fish. The fish is a noun, a verb, an adverb.

Sam Arbesman:
Who's a buffalo?

Danny Crichton:
Oh, buffalo, buffalo, buffalo-

Sam Arbesman:
Buffalo, buffalo.

Danny Crichton:
Buffalo. Yeah. Wrong animal. See, that's my consciousness? Well, that's my intelligence. I'm already a P-zombie. I don't even need a P-zombie. I'm already walking around without consciousness. But the other one is a novel that won, I believe, a couple of major literary awards, Ducks, Newburyport. So this is from about maybe two years ago but a 1000-page novel with essentially a single sentence. The entire book is one sentence. It is about a 40-hour read and it is one sentence and throughout the whole thing, it's basically a stream of consciousness. It's basically the fact that... And there's some paragraph and then it's like, "The fact that this. The fact that that," that is the entire book and it works. It won a lot of literary awards. It actually is quite compelling. I have read pages and then I was like, "I can't. My context window in my LLM in my brain is just not capable of sustaining this."

But I agree with both of you, Sam and Gordon, we just can't untease that. And so, the question is, are we limited in consciousness? Are we just... I think of Q on Star Trek, is there just higher intelligences who are going to be able to put this all together in a unique way? But the last point I'll make and then we'll move on, is when I think about LLMs, there's one thing of training and putting into the neurons and yet we don't consider that consciousness, right?

Because we only consider it consciousness when we can communicate with it, when we actually text with the chatbot and go into the black box. Because otherwise, it's just a black box. We have actually no idea what's sitting there until we start to communicate with it and do these tests and I think that is particularly interesting as well. That even though we have the ability to inspect every neuron, we have all the stats, we have all the models, all the data stored in memory, that we have it fungible, we can look at it, we can inspect it, we don't consider that consciousness at all. It has to be through language by which we judge that.

So the next topic was whether consciousness is necessary or not necessary for intelligence and this was specifically about... So I guess the main character is Rorschach? Did I get it right this time?

Sam Arbesman:
Yeah. Yeah, Rorschach.

Danny Crichton:
Yes, exactly. I learn things every day on this podcast. We're talking about this non-sentient but super intelligent Darwinian replicator and we have a quote here, "Scramblers are the honeycomb, Rorschach is the bees." Talk a little bit about that.

Gordon Brander:
Yeah. So this is... Spoiler alert, this is the big cosmic horror reveal of the book is that we have this big scary spaceship maybe looking thing and then we have a bunch of almost octopus creatures within it that have a lot of neurons but don't seem to actually exhibit consciousness. They're good at solving problems but not self-reflective. And the question in the book is like, "Well, which one is the alien? Is it the big spaceship looking thing? Is it the thing within it?" and his big reveal is that neither one is the alien. This thing is actually... It's more like a super intelligent virus or like an ant colony where none of the ants are actually conscious either. It's a process that knows how to replicate itself and perform intelligent actions in response to its environment and in response to feedback but there's not something that it is like to be Rorschach and one of the characters uses this example of honeybees.

So the interesting thing is, in a beehive, you have these nicely packed hexagonal cells like beehive hexagons and it turns out that mathematically, that's the most efficient way to pack a bunch of things together. And the question is, how do bees know this actually? Where is that programmed in the neurons? And it's not actually, they're just... What the neurons are telling the bee to do is go around in a circle, a bee makes a tube, and what happens is as these tubes get pressed together, the pressure of the adjacent cells and the softness of the wax creates these hexes. It's the total system that is producing this intelligent packing behavior. So scramblers, these alien-like characters, are being kind of produced by this larger structure and both of them are like the honeycomb. It's like a series of mechanistic steps that are being carried out that produce this intelligent and even super intelligent behavior that the characters observe in the book.

Sam Arbesman:
I think the idea is right, intelligence is vital for self-preservation and reproduction and evolutionary fitness. But I think, yeah, the really provocative argument is that consciousness is maybe this weird, almost evolutionary dead end or it's actually... And so, the argument in the book, if I remember correctly, is that consciousness is actually a waste of resources because when we are-

Gordon Brander:
It's inefficient.

Sam Arbesman:
Right. It's inefficient. We are thinking about the world, we're focusing our attention on certain things, but if we didn't have to focus our attention, if we didn't have to have lights on, we could actually be doing so much more. And so the scramblers which are, yeah, they're not sure, "Are these the aliens or not?" They're nothing. They're just like... But they're able to just do so much more than humans can do because they don't have to think about what they are. It's like I-

Gordon Brander:
Yeah. Rorschach actually attacks because the people ask it to do this recursive self-reflection like conscious communication and it interprets this as a virus. Like, "Hey, you're asking me to run a runaway wild loop. I'm not going to do that-"

Sam Arbesman:
Right. It's like-

Gordon Brander:
"Instead, I'm going to retaliate."

Sam Arbesman:
Yeah.

Gordon Brander:
Yeah, yeah.

Danny Crichton:
Well, let's move on. So we talked a little bit about the Fermi Paradox so I think you summarized that well. So let's go to the meat of the discussion which was supposed to be its own section, we integrated it throughout, which is LLMs. So fast forwarding into 2023, the book was published, or at least the paperback, I looked at Amazon, the paperback was in '08 so somewhere in the mid-2000s, let's call it. But we're 15 years forward and these are the core questions that are coming out today. One of the references is David Chalmers who wrote a really popular book last year or the year before called Reality+ which has become the Bible of the philosophy of virtual reality. So what are we seeing? There's a little bit of Baudrillard in there and images and simulations. But then, we're also getting consciousness and are we beaming our consciousness across the ether, so to speak?

But when we get to LLMs, we, for the first time, have a statistical computer brain that seems on par with basic human thought. I don't think anyone is confusing it with AGI or Artificial General Intelligence. We've seen AutoGPT and some others try to be actors but there are clearly huge breaks and it seems very statistical and it seems very brittle. So it doesn't seem like we're close to AGI right now but LLMs do seem to be able to replace real human work, in some cases, as I've written about in the Securities newsletter, for legal professionals and medical professionals and people who have serious degrees, both undergraduate and postgrad. And so, what are we talking about when we get to intelligence in a lot of these philosophical case studies, the Chinese Room, P-zombies? Does it help us understand the LLM world better in some of the debates that we're seeing in the media today?

Gordon Brander:
Yeah. LLMs, they're intelligent. They're getting information and doing something about it but are they conscious? Have we created a philosophical zombie? Sam, do you know?

Sam Arbesman:
I don't. I'm inclined to think that they are closer to philosophical zombies than actually conscious. I think, yeah. And they're clearly intelligent. I don't... And maybe this is my anthropocentric view and my human bias, they don't seem to be conscious mainly because the kinds of tests that are done in Blindsight, they... And in many ways actually, the LLMs actually do pretty well on those kinds of things but there are still ways of tripping them up.

That being said, I mean, humans also get tripped up unless they're focusing really well. And so, it's one of the things that like, "Is my consciousness a spectrum of... Sometimes, I'm really focusing and then I'm really there and other times I'm not." I am inclined to say they're probably closer to philosophical zombies. I don't necessarily think they're human level intelligent but not conscious at all. I think they're less than human intelligent but still pretty intelligent but not conscious. I don't know. Gordon, what do you think?

Gordon Brander:
Well, yeah, I think I net out in a similar place but then I have these kinds of maybe second opinions that come at me. So there's this paper called... It was Kosinski 2023, called Theory of Mind May Have Spontaneously Emerged in Large Language Models. That was very controversial. But basically, this scientist subjected various flavors of GPT to the kinds of false belief tasks we use to test theory of mind in people. Like, "Is this person conscious? Do they have a theory of mind?" So GPT-3 did about 40%, GPT-3.5 was at 90%, which is better than most six-year-olds. GPT-4 is 85%.

This is weird. If it's a philosophical zombie, how does it do so well at theory of mind? It seems like it's embodying a great deal of information about the way people think. It also makes me wonder, so one of the proposals for where consciousness comes from, why we have it is because we're a social species and basically, we need it for theory of mind. I need to be able to simulate Sam in my head and Danny in my head so that I can interact with them in a social way. But I don't know since GPT is computationally shallow and might be a philosophical zombie, what's actually going on here?

Danny Crichton:
Well, I think one of the questions I have is how much does emotion... I mean, we talked about language but how much does emotion become a key aspect of consciousness that there's essentially... And when I say emotion, I mean some sort of signaling, some sort of complex signaling mechanism that offers you feedback on what you're saying, how you're interacting with something. Because all of the LLMs, as of today, that are really popular are chatbots which are basically emotionless vessels of pure text and if you want emotion, you have to ask for it so like, "Act angry when I say this. Write this in an angry voice." But part of consciousness, at least in my views, would seem to come from the fact that you have your own feedback loops. And so, as you're interacting, there's a real-time response in addition to the actual language that's coming back.

Sam Arbesman:
So one thing I would caution with that, and going back to these chatbots from the '60s, '70s like the Eliza one, there was another one that really, I think it was even less complex but all it basically did was whatever you would say, it would just curse at you and insult you. And it turns out, that was even better at causing people to think it was really a person because insults and anger are very stateless. And so, you don't really have to have a large context.

I remember this from The Most Human Human, Brian Christian's book, and I think it talked about this very stateless kind of thing. And so I would say, we should caution... Yeah. I think emotion is an important component but I would say it might make us even more easily tricked when we see emotion in some of these, which I... It was Kevin Roose's long chat with Sydney where he was afraid of certain things or-

Danny Crichton:
Right. Right.

Sam Arbesman:
There were a lot of emotions and everyone... It might have been the emotion was the thing that really caused people to have a great deal of concern. It was like... And so, I think emotion is an important component. I don't know if it's required. I mean, I think Data from Star Trek without his emotion chip probably was still conscious. At the same time though, emotion can also trick us and so I would just be hesitant there.

Danny Crichton:
Now, let me ask because I think you mentioned it, Gordon, you've read Part 2-

Gordon Brander:
I have.

Danny Crichton:
And Sam, did you read Part 2?

Sam Arbesman:
Yeah. I have not.

Danny Crichton:
So I'm curious, just to close out the episode here, I'm curious, are the thought experiments just continued in Part 2 or is there a whole nother rendition of things going on in the second as part of the duology?

Gordon Brander:
Well, I got to say Part 2 is packed with even more ideas and it continues to pull on a lot of these threads as I mentioned. In addition to vampires, he adds a scientifically plausible zombie, a literal philosophical zombie, and these are military grunts who get their consciousness switched off for their tour of duty basically and he pulls on other threads of cognition. There's other ones in both these books that we haven't even touched on like tools as extensions of the self and... One of the things he also explores in the second book is this notion of super intelligent AIs and that perhaps consciousness is a phase that evolution passes through. So the idea is that actually, they reach human level intelligence, they become conscious, they supersede human level intelligence, and they just diffuse into an unconscious but super intelligent system which is a pretty mind-bending thing to contemplate-

Sam Arbesman:
That's like consciousness as the moody teenager phase.

Danny Crichton:
Well, I was right. I will say, closing out, that Peter Watts, who is the author of Blindsight, collected his thoughts and essays together under a title called Peter Watts Is An Angry Sentient Tumor, which was published in 2019-

Gordon Brander:
Good description.

Danny Crichton:
So that gives you a sense of the author and the mode of thinking here, I think that's really great. But I think we've covered enough philosophy in one episode of Securities. Sam, our scientist-in-residence here at Lux, thank you for joining us and Gordon, founder of Subconscious, a network connecting your notes and socializing around them and creating intelligence and consciousness out of all the thoughts in your head. Thank you both for joining us.
