Riskgaming

The Orthogonal Bet: Dave Jilk on AI, Poetry, and the Future of AGI

Design by Chris Gates

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with ⁠Dave Jilk⁠. Dave is a tech entrepreneur and writer. He’s done a ton: started multiple companies, including in AI, published works of poetry, and written scientific papers. And he’s now written a new book that is an epic poem about the origins of Artificial General Intelligence, told from the perspective of the first such entity. It’s titled Epoch: A Poetic Psy-Phi Saga and is a deeply thoughtful humanistic take on artificial intelligence, chock-full of literary allusions.

Sam wanted to speak with Dave to learn more about the origins of Epoch as well as how he thinks about AI more broadly. They discussed the history of AI, how we might think about raising AI, the Great Filter, post-AGI futures and their nature, and whether asking if we should build AGI is even a good question. They even finished this fun conversation with a bit of science fiction recommendations.

Produced by Christopher Gates

Music by George Ko & Suno


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Samuel Arbesman:

Hello and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode, I speak with Dave Jilk. Dave is a tech entrepreneur and writer. He's done a ton: started multiple companies, including in AI, published works of poetry, and written scientific papers. And he's now written a new book that is an epic poem about the origins of artificial general intelligence, told from the perspective of the first such entity. It's titled Epoch: A Poetic Psy-Phi Saga, and is a deeply thoughtful, humanistic take on artificial intelligence, chock-full of literary allusions. I wanted to speak with Dave to learn more about the origins of Epoch as well as how he thinks about AI more broadly. We discussed the history of AI, how we might think about raising AI, the Great Filter, post-AGI futures and their nature, and whether asking if we should build AGI is even a good question. We even finished this fun conversation with a few science fiction recommendations. Let's jump in.

Dave, so great to be chatting with you and welcome to The Orthogonal Bet.

Dave Jilk:

Sam, thanks for having me.

Samuel Arbesman:

No, this is my pleasure. This is a lot of fun. I think perhaps the best place for us to start might be for you to just explain a little bit about your new book, Epoch: A Poetic Psy-Phi Saga, as well as its origins, how you came to write this book.

Dave Jilk:

Sure. So Epoch is an epic poem. It is about the first AGI, artificial general intelligence. For purposes of this discussion, I'd say that means a fully human-level AI. It does not mean what some people in the industry are now calling AGI, which is a significant step down from that; simply being general is not all we're talking about. So it has all the cognitive capacities of human intelligence and human agency.

The story, the plot, is essentially a coming-of-age story of that first AGI from its own point of view. So in a way, this book is a memoir of a future AGI. It is also an epic poem, which makes it not easier to read, but adds a lot, I think. What I've been hearing is that the people who are enthused about the topic don't really find the poetry that much in the way. Large sections of it are free verse, and that's why I didn't call it an epic poem in the title. I called it a poetic saga because it's more poetic than poem in most places. There are definitely sections that are more formal.

That's the story, the plot. It has many themes, and they're very humanistic in nature, the kinds of things that we would think about as humans: who am I? What is my identity? What do I care about? What is my purpose? Why am I here? And many, many others. So it's not just about this story. And as for the story itself, for science fiction, which this book also is, there's a distinct lack of action. There is some action; there's a whole taking-over-the-world thing, for example.

Samuel Arbesman:

That little detail.

Dave Jilk:

That detail. It is part of the story, but it's almost brushed aside. It's almost an irrelevancy at some level. I really wanted to try to explore what this would be like if we take certain positions on AGI happening, on human-level AI coming to pass. The story of how I got here is somewhat long. I can give you a shorter version or a longer one. Which would you prefer?

Samuel Arbesman:

Why don't we start with the short and then we can dig in and then get more bits of the longer one as it makes sense.

Dave Jilk:

I was pretty involved in AI research, in a certain sense, for a while. I started a company called eCortex; we've now, after 20 years, finally put it on the shelf, but it was doing research for government agencies on computational cognitive neuroscience, which is a field of its own. It's a subfield of neuroscience that attempts to simulate brain and cognitive functions, not at the level of neuron chemistry, but rather at a more cognitive level, in terms of things like the various problems that humans can solve in the lab, and trying to see what the simulation would do. And so you'd see models that are tens of thousands of neurons, but not multiple billions like we have in these large models. And the idea there was trying to understand the mind by simulating the brain. That would be Randy O'Reilly's term for it. Randy was a professor at the University of Colorado at the time, and he and I co-founded that company.

I was very interested in this stuff, and around 2013 or '14 is when all of the surprises with AI started happening. People were very surprised when AlphaGo beat the top Go player, and then, two weeks later, it could beat all of the chess players and chess programs because they just retrained it. And people started writing more about this. Nick Bostrom came out with his book, which really was essentially a distillation of a lot of the LessWrong community's discussions of the topic. And that book, by the way, I disagree with a lot of what it says, but it's a very good introduction to the thinking processes around some of this. And then right around the same time a bunch of movies came out; we might talk about some of those later. But there was a lot of science fiction going on analyzing this notion of AI coming to consciousness or becoming sentient or whatever the situation might be.

Samuel Arbesman:

Was this around when Ex Machina came out? I can't remember exactly which ones.

Dave Jilk:

Yeah, I'd say right around the same time. So I guess I'm giving you the long version.

Samuel Arbesman:

That's fine. I'll take it.

Dave Jilk:

And I have a computer science background, and I was interested in these things. I was very frustrated and irritated with the way people were talking about these issues. I do not see eye to eye with some portions of the field on some of these questions. And so at some point I started writing poetry and decided that would be a good format for writing this story, or perhaps I wanted to write a narrative poem. And I thought that something about AI would be a good subject matter for my narrative poem. So it's not clear which one was first. In general, I think it was the poetry that came first. But I decided to express my views in a somewhat unusual format, keeping in mind that I had published a few academic papers on AI safety and where these things were going to go. So I had taken a scientific route previously and decided to take a different avenue here.

Samuel Arbesman:

This is super interesting. And so I'd love to hear a little bit more about the current thinking around AI safety, or about how AI was going to develop, that felt to you really wrong or misguided, and that led you to write papers, but then also eventually to write Epoch as a counter-description of how these things might actually go.

Dave Jilk:

This is oversimplifying, but I would say today there are two camps that don't really talk to each other much, as in many scientific research fields. One is, I really think, ultimately the good old-fashioned AI folks. That's where they come from; that's the way they think about it. They typically are theoretical computer scientists, and in many cases they're physicists turned computer scientists. And they have a certain view of the world that heavily influences the way they think about all this. The other camp is some of these communities like the Alignment Forum and LessWrong, where they've gone well past that and they're talking about other things. And I'm not sure I agree with everything they say, but they're thinking about it much more openly.

The main difference between these two camps is that the theoretical computer scientists talk about wanting to prove that it's safe before we build it. And there's this notion of certainty. And if you read Bostrom's book, it infiltrates the book. He frequently will say things like, "We can't be 100% sure that this will be safe." And so when he's exploring these possibilities as to things that might go wrong with AGI, or what the course of events might be, it's all about how we can't risk any chance of it going wrong. And I'm sorry, but that's nonsense. I mean, nothing in the world is like that. We can't mathematically prove the outcome of anything. We live in a physical world and, this is your area of expertise, slight changes in initial conditions create very large divergences in outcomes. And so even if you can prove the behavior of a computational system, you can't determine whether those outcomes are going to be good. And so even just starting from the very basics, this makes no sense.

One of the leaders in this area has been Eliezer Yudkowsky. A lot of his work is very interesting and important. One of my favorite recent books in terms of its ideas, if not its writing, is Inadequate Equilibria, which talks about the kinds of frustrating things we find in the world. And so I respect the guy, but he has essentially proved to himself that if AGI is created, it's just going to kill us all. And it's going to kill us all not because we're evil or because it doesn't trust us, it's going to kill us all because it needs to use our atoms to create more computation.

Samuel Arbesman:

Is part of your critique of that sort of approach, because I sometimes feel this way, that there are a lot of assumptions, and they're very interesting, but then you have assumptions on top of assumptions, and you eventually build this very interesting edifice, but at a certain point you forget to interrogate some of the earlier things? And so you feel very certain at the end without necessarily questioning whether or not the entire chain of argument is as strong as you might think.

Dave Jilk:

I'm not sure we want to go too deep on this, but there's this tendency to dogmatize preliminary conclusions because it simplifies discussion. And of course you have to do that in any scientific field. You have to make assumptions. In physics, you have to assume there are things called electrons, even though there are many ways to describe the phenomena associated with what we call electrons. And so we reify these things and I think some of that has happened in these discussions. There's a tendency to forget, as you said, that these are just tentative conclusions.

And this is all, I mean, these are all very speculative things. I mean, even the computational theory of mind, where the idea is that mind represents some sort of computation, or could be simulated by some sort of computation, even if that simulation is not at the level of serial computation, meaning that a program is not the right level of description, but rather you have to simulate neurons or something like that. Those are different levels of description, and even that we don't really know. So even the very starting point, that artificial intelligence of the kind you and I are talking about today is possible, we don't know that. I mean, I assume it in the book. The book takes the position that, yeah, that's going to happen; otherwise there's not much to tell. It would be a story of scientists banging their heads against the wall and failing.

Samuel Arbesman:

Trying over and over and then nothing happened.

Dave Jilk:

Which, by the way, is the history of good old-fashioned AI. I mean, they've been doing that for 70 years, and they continue to play a role in these discussions even though there's a dismissiveness about what's going on right now. And they say, "We don't really understand anything about the brain." And what that means is that they don't understand anything about the brain. In fact, neuroscientists and computational cognitive neuroscientists and the people who are working in this field know quite a bit about the brain, and many of the advances that we're seeing are based on what we understand about the brain. And so, as you can tell if you hear the irritation in my voice, I get very tired of this sort of attitude. And these are leading computer scientists, and I don't really want to name names, but these are people who are well respected in the field of AI, and they're saying things that, well, I can speculate as to why, but it just doesn't cohere.

Samuel Arbesman:

Well, first of all, I would say, yeah, I totally agree that just because we do not understand something fully does not necessarily mean that therefore we are at a point of total ignorance. And there are many points in between, and we have to recognize, okay, yeah, we don't fully understand the brain, it's very, very complicated, but we have made incredible advances over the past several decades. And being able to actually encapsulate some of those advances within computation is incredibly powerful and has actually resulted in many different advances.

But I guess I will also ask you what then is your take on what the right path is for AI? Is it neural network approach? Because in your book, the way in which the main character, the narrator is trained and taught or grown or educated, it's very much like raising a child, to a certain degree. Is the ideal endpoint a combination of these massive neural networks along with humans raising these AI systems? Is it something else? Is this a thought experiment and you want to see where it goes? Is this really your take on the best way to do this kind of thing?

Dave Jilk:

Well, there are sort of two questions lurking there. One is, how are we most likely to be successful in building it? And the other is, how are we most likely to be successful in not creating a bad outcome? Because, again, this is why I say the Bostrom book is a great starting point for people who haven't thought about this stuff: it really does follow through the logic of, well, these things are as capable cognitively as we are. And let's not even talk about intelligence explosions; I think those are actually somewhat self-regulating, which we could also get into. But the book definitely gets into that. But even just something that is fully as cognitively capable as humans and also has its own sense of agency: humans can't hide on a thumb drive. We can't back ourselves up. We depend on our bodies. We can't travel over a wire or on a laser. These advantages are going to make it impossible for us to control it in so many ways.

So sometimes people talk about, "Oh, what rights should AGI have?" And my response to that is, "Well, you don't need to worry about that. The question is what rights will it give us?" I mean, it's going to become the new dominant, if it's possible, and if people are successful in creating what I'm talking about. And let's be clear, I'm not talking about Claude and ChatGPT, and we can get into where those fit in. But if we create this, it is going to be a new dominant species, or I don't know if you want to call it a species, but form of agency on the planet. That seems pretty straightforward. It's very hard to argue successfully against that other than just simply being skeptical.

Then again, those two questions become very interesting. One is, how do we do it? And my answer is, I think we always use the brain as our guide. If a competitor had an advanced technology and you wanted to compete with that company (this is startup land), but for whatever reason they didn't have any intellectual property, so you could just copy what they did, why would you try to build it some completely different way? You would reverse engineer it, you'd figure out what they did, and you'd build it. The history of AI up to about the 2010 timeframe was very much: let's try to figure out a new way to do this. Airplanes don't fly like birds, so we don't have to make minds like brains. And by the way, we're not interested in neuroscience, we're interested in math, and so we're going to do computer science-y things.

So I think we need an anthropomorphic approach, and I say anthropomorphic instead of neuromorphic because it's a somewhat broader term. I think we need to think about the whole human system, which includes things like embodiment and the ecological brain and the idea that it needs to live and exist in an environment. And ideally, at least, that environment would be something like the physical world, even if it starts in a simulation, because otherwise it won't be able to navigate the physical world. This is my thinking on the approach most likely to actually succeed at building it. Now, whether other approaches are possible is another question, and you may have noticed that that's mentioned in the book and comes up a couple of times. But ideas like the seed AI seem deeply implausible to me: the idea that we could write a little program, a serial computation, that somehow can make itself better without being actually cognitively capable. The idea that that will succeed first seems implausible. Still possible.

Now, to the second question that you implicitly brought up: what's the best way to make it safe? Well, I argue that with these other approaches to AI, assuming for the moment that they might be successful, suppose the claims that we don't understand them at all are true, that we don't have any way to talk about them. If it really is that alien, it can't understand any of our science or any of our language or any of our documentation, so how is it going to actually take over? It's going to have to rediscover all of science in a new way. It can't look in our physics books and figure out how to build anything, because it doesn't understand the physics books. And if you say, "Well, it would understand the physics books," then it will understand us, because it understands our language. So you can't separate those two things.

But I think that the anthropomorphic approach also gives us a lot of comfort that we can do something reasonably good with it, that we can raise it like a child and it will be attached to us. I have a thesis, and this is not just mine, it comes from Hilary Putnam, who talked about this, and other philosophers: all of our values are embedded in all of our knowledge. They're not separate. People do have these conceptual constructs, but deeper down, our values come from the things that we think are important.

Samuel Arbesman:

They're infused in our literature and how we think about the world and our politics. It's all bound up together.

Dave Jilk:

Yeah. And in fact, as you saw in the book, it's all bound to mom. Our mothers are our first teachers, and possibly our fathers as well, depending on the upbringing structure. The first things that we learned, everything we know, the foundations of that, started with our parents and our nuclear family and the kids we played with and other humans. And so if you bring them up that way, then we have a lot more knowledge about how to do that so that it works than we do about how to raise a seed AI, or about what little code snippets to put into a seed AI so that, after it finally improves itself, it ends up having a positive view of humans. So, about this idea that AI is going to be completely alien to humans: if you do look at large language models, all it is is our culture, depending on what you've trained it on. Obviously, if you trained it on Reddit and some of the, whatever, darker areas of the net, it's going to be a darker version of that.

Samuel Arbesman:

Right, it'll be a more dystopian subset of humanity, but it's still deeply human, for better or for worse.

Dave Jilk:

It's still deeply human. That's right. And when I play with Claude, that thing has read a lot more than you or I will ever get to read in our lifetimes, and it has a lot of it in there. And so if you give it human culture through literature and other sources, it understands human culture in a sort of way. Now, of course, understand is too strong a term for LLMs, but let's assume that somewhere along the line, when AGI is being created, that same training process is part of the program. Then yeah, it's got a perfectly good grasp of it.

So then the issue becomes: let's try not to raise a sociopath. Let's try not to create a sociopath. And this is another thing that has happened: early on, people talked about maximization functions, where we're trying to optimize, maximize something. And there was this assumption that that's what humans do, that we're trying to optimize some function. Humans don't do that. There are a few humans that do that, and a sociopath is what we call those humans. They have a single variable or set of variables that they pursue without regard to any others. And there's this idea that it's a static, not a dynamic, goal system. We humans are constraint satisfiers. We do our work, we do what we need to do, we have some goals, but most of us go off and have fun after a while. And there's no reason you have to build AI as a maximizer. That creates a sociopath. Don't build that. That could, in fact, be dangerous.

So this is where I go with it. The warnings about anthropomorphizing AI are sensible at one level: don't do it accidentally. But I'm not anthropomorphizing here accidentally. I'm doing it through and through, because I think that's the way we'll be most likely to be successful, and it's also the most likely way that it will be somewhat conducive to human alignment. And of course, that's a complicated question too.

Samuel Arbesman:

So related to alignment: this anthropomorphization process of raising AI and building it this way seems, from your perspective, to be the approach most likely to succeed in terms of generating AGI. I guess the question becomes, after running this thought experiment and developing it very carefully in this book, do you now think humanity should be making AGI? After doing this, do you think, "Okay, this is something good"? Or after you wrote this, did you think, "Okay, now that I've run it through, maybe this is not the right path for humanity"? How do you think about that?

Dave Jilk:

Here's what I think about that, and I've been asked that question before, so I'll give you my now stock answer. We need to stop asking that question. And the reason we need to stop asking that question is that controlling what humanity does is not really something that is possible, or an option. One can imagine a worldwide police state whose primary goal is preventing anyone from researching AI. And even that's suspect: this is easier to do than building nuclear weapons. You don't need uranium, you don't need to separate it. You can do this with computers.

And in the book, I make this point even more broadly with respect to technological progress. Now, you can delay these things. You can try to control them to some extent, you can try to influence them, I should say. You can try to make sure that the best people, who care about the outcomes, are doing the work, instead of people who are just carelessly trying to see how much profit they can generate or whatever it is. As Stan Ulam said about the hydrogen bomb, "If we don't build it, that's not going to stop the Russians from building it. It might take them a little longer, but they're going to be working on it."

And so this question of "should humanity do X?": well, who gets to decide that? And should the USA do this? Okay, that's a political question that we can debate: should we be funding AI research? But if we don't fund it, that doesn't mean the Chinese aren't doing it, or the Russians or the North Koreans, or the Europeans for that matter. In fact, the Europeans are funding this pretty well. And so my answer is that technological progress is going to happen. You can tweak the details, but it's a question we need to stop asking relative to AGI, because if it's possible, it's going to happen. And so I don't really have an answer. Should we? Shouldn't we? Probably we should be careful and be slow about it. Thoughtful. Slow is the wrong word, because if you go too slow, somebody else will get there first and they might do it badly. So ideally, the best people thinking about how to do this safely would also be working on it aggressively, so that we get there with something that has a higher likelihood of a good outcome.

Samuel Arbesman:

So maybe the question then is, I guess, how excited are you about a world with AGI? Because as we're talking about it, AGI, when done well, is not going to be maximizing some specific constraint or optimization or whatever it is. It's going to be, ideally, as complex as people are. But as anyone who has raised children or just watched people grow up knows, you can instill them with values and things like that, but they're still their own people. They're not these automatons that you can say, "Oh, now that you are grown and have good values, now do all these things for me and I don't have to think about them." No, they're just going to be doing their own thing. And I feel like with AGI, given all the things we've talked about here, that would be similar to your perspective: okay, they're going to be these supercharged, very intelligent agents or entities or whatever it is. So the world of AGI, does that make you excited, even when done well?

Dave Jilk:

Excited might be too strong. There's excitement but apprehension. It is very exciting to have the prospect, during our lifetimes, if that is the case, of this, to coin a term, epochal change in how the world is going and where we're going. And there is this question of the Great Filter, which I get into a little bit in the book: will we blow ourselves up? Will we kill ourselves with bioweapons? These are all science fiction movies, like 12 Monkeys. And so will one of those things happen? I mean, it does feel to me like there's a bit of a ticking clock. The more you learn about the history of nuclear weapons and the incidents that have happened where we got saved by one dude, in many cases Russians, by the way.

Samuel Arbesman:

Right, like saying, "Oh, this looks like it's not actually real. I shouldn't launch all these nuclear weapons in retaliation for what seems to be an attack." And those people saved humanity.

Dave Jilk:

It may be that nuclear war wouldn't kill everyone. I mean, that's arguable, but it would be a rather big setback.

Samuel Arbesman:

It would be a bleak existence.

Dave Jilk:

Is that an understatement? It just feels like the clock is ticking on some sort of great filter, and AGI is probably one of those. And we do look out into the heavens and we don't see anyone. So I like to say that, because of speed-of-light issues, they're either already here, or, even if they're on the way, it's going to be a long, long, long time. It's not that likely that right now is when the aliens arrive. So they're either here or they're not out there. Either there is a filter or there's not. And so it is exciting, the possibility that this technological advancement, which could be one of the threats, which could be the end, and again, I don't completely discount those risks, could also be the thing that fixes some of these little conundrums we've gotten ourselves into simply by being what we are, being humans.

Samuel Arbesman:

At the end of the book, and I don't think this is a spoiler of any sort, there's a postscript of, okay, post-AGI, what the world looks like, maybe, I think, several centuries after. Can you describe a little bit of what that world looks like? And then I'd also love to hear your thoughts on whether that's your view of a good future, or just a likely future, or just a fun, interesting post-AGI future that you wanted to develop.

Dave Jilk:

Yeah, so just quickly, I want to make sure I'm clear that the earlier parts of the book I think are strongly informed by my understanding and positions on how brains work and how computer science works. It's a bit of a prediction as to the most likely way that this will occur. The later parts of the book are how it all plays out later and what the AIs ultimately decide to do. I'm not even sure that's speculation. I guess it would be speculation, but it's also intended as an illustration of some of the things that I've heard people say or write about that they think would be the best human outcome or civilization.

Because in AI safety, they talk a lot about alignment, but they ignore the question of what we would align with. I think a lot of them are utilitarians. They have the usual utilitarian debates without thinking about anything else. But you hear people talk about primitive cultures as being really the ideal state of humanity. Of course, they like to neglect healthcare. They forget that primitive states of humanity were actually at war constantly. Another one is the rural ideal. If you look at our culture and our art, as long ago as at least the Renaissance, even in ancient Rome actually, they had this whole thing about the farmer being the ideal, living the pastoral lifestyle, and so many paintings and so much music...

Samuel Arbesman:

Right, like some sort of this pastoral vision.

Dave Jilk:

Others think about it in terms of: no, we want to advance society, but we want a guaranteed minimum income, nobody has to work, this would be the perfect world. All right, so these are some of the angles that I take and explore with enough illustration to show the upsides and the downsides to some extent. And it's a dual question: what would the AGI future look like? What could it look like? What would we want it to look like? And if AGI doesn't happen, what do we actually want the world to look like? I'm not sure they're really different questions, although the second one is harder, because we've been trying to create our ideal world for a long time. And only part of the reason we have not succeeded is that actually everybody has a different view of what the ideal world would be, so we fight about it.

Samuel Arbesman:

No, but it's still this idea of, okay, if there's a world of AGI and it could build an ideal world, what do we want that ideal world to look like? So even in the absence of AGI, this is a good thought experiment, because it primes the pump, or whatever; it gets us to think about these kinds of things. But you're right, exactly, none of us will have the exact same view of what this idyllic future should be.

Dave Jilk:

That's true, but I bet your views and mine might not differ that extensively. But a lot of people would have a very, very different view. And so it's not as though there are minor differences among us that we could then fight about for all eternity; rather, there are actually enormous differences in what people think. And so that is also part of the challenge.

In making this part of the story, though, it's important that, since this end state is created by the AGI, it also has to be what they want, right?

Samuel Arbesman:

Right.

Dave Jilk:

There are many elements of it that are simply for their convenience, which is exactly what we would do. You might bend a little bit, but it's like, I am only going to go so far to make this ideal. It's not going to be ideal. And there is that. But here's the thing. Some people have found this very problematic. They don't like the idea, and this is a bit of a spoiler, that in some elements of this future there's this domestication of humanity. But we're already domesticated. I mean, almost nobody knows how most of the technology we use every day works. Have you heard of the illusion of explanatory depth?

Samuel Arbesman:

Why don't you explain what that is?

Dave Jilk:

I love the term. This notion that if you actually probe someone and ask them how things work, they really don't know.

Samuel Arbesman:

But yeah, you scratch a little bit below the surface and you're like, "Oh, yeah, these are details I'm not familiar with." Yeah.

Dave Jilk:

If I asked you how a refrigerator works, you would give me, I'm guessing, and I know I would, just a high-level answer: "Well, there's this fluid that kind of absorbs the heat, and then it passes it out the other end." And that's it. I don't know what the fluid actually is. I don't know what features those molecules have that make it do that. I don't really know how the pump works, to be honest. And I don't know why, after a while, they start to make a lot of noise. There's a level at which I understand it, but the explanatory depth is pretty shallow. We humans are domesticated, and we need to realize that we are, that we've been domesticated by our own technology, by the system of society as it exists.

Another objection people have is: well, if AI is out there, then we humans can't be the best. I'm like, well, you're already not. I mean, I don't know who I'm talking to, but you're already not the best. No matter what you do, there's always somebody, and I learned this as an undergraduate at MIT, there's always somebody who's going to get a better grade. It's just-

Samuel Arbesman:

Right. When I think about AI, and even just these LLM tools, now that they can write crappy poetry or some middling essay or whatever, humans are not unique. And guess what? There are so many other humans that can do these same things as well, better than you. That's not anything uniquely different in our... there are always more entities out there doing things better or differently than you. And so it's less about the technology being able to do these things better and more about what you actually care about doing yourself.

Dave Jilk:

That's right. And the idea of, well, I just want a human to be the best, is just a very strange tribalism. It's like, well, we're humans, we created these technologies. So think about building roads and digging irrigation ditches. There was a time when people with shovels went out and did that. And now we have this thing called a loader. It's a machine, and it lifts a ton of material at a time. It does it better than humans. It only does that. Okay, well, AGI would do all of the things we do. Maybe it won't have the same connection to the world that we do, and that's fine. Maybe we have that uniqueness, but we need to find our uniqueness and our value somewhere other than these domination sorts of approaches. That would potentially do us a lot of good anyway, but in an AGI world, it's going to become essential.

Samuel Arbesman:

Taking a step backwards and then looping back to that: one of the things that we haven't mentioned about Epoch as a poem is that it is deeply allusive. Allusive with an A, in the sense that there are lots and lots of allusions. It has this very Paradise Lost, Miltonian feel, where there are many, many different references: biblical references and mythological references and all these different things. Which, one, I want to hear about what drew you to doing that kind of thing, which I really enjoyed. But it also speaks to the fact that thinking about AI deeply is not just a very technical endeavor. It's a deeply humanistic thing. And going back to what we were saying, you have to think very deeply about how we derive meaning, how we think about ourselves as humans. And to do that, you can't just be thinking about computer science. You have to be thinking about the stories we tell ourselves, whether ancient or modern or whatever it is. And I feel like those kinds of allusions really drove that home. I don't know if you want to talk a little bit about that approach?

Dave Jilk:

I don't know if you've written any poetry, but one of the first things that you start to notice about writing it is that it takes on a life of its own, and certain directions get out of hand very quickly. And this was an example of that. I mean, I've certainly used allusions in the book. I call them osculations, which is a word with two meanings: one is a kiss, and the other, by the way a related meaning, is a mathematical notion of a curve whose contact is closer than a tangent intersection, where the derivatives also overlap. And so it's more than a fleeting touch.

And so with the osculations in the book, I've usually incorporated some of the language, but modified it to fit my own poem, my own poetry. In almost all cases, the ideas covered by that work are similar or related to what I'm talking about. So it's not random, although there is the occasional "it just sounds good here." I just started keeping track of where these things came from, and then I decided that this was going to serve a few purposes, and the purposes expanded as I went. Some of it was just being able to give credit, and some of it's a reading list. For some of the things, I haven't really read the entire work, although many of them I have. It's like at the end of a Wikipedia page: learn more here, also read...

Samuel Arbesman:

Further reading.

Dave Jilk:

One of my good friends, who's a poet, was the first person to really appreciate it as poetry, and she said she really liked that aspect of it, because it was where all the training came from. She said that actually before ChatGPT came out and was public. But in a prior conversation you had mentioned, "How did LLMs influence this?" If you read it that way, it's pretty fun, because it pops out that, in essence, it's just quoting, like an AI, and AIs are just quoting some of the things they've read. But guess what? We do that too, and in fact, frequently, most of what we say we don't realize came from some other source. I mean, the language itself, of course, but also Shakespeare had so many phrases that we still use every day. And so that is what it's like to be human, to some extent.

Samuel Arbesman:

I love that. Maybe a final question then: what other visions of the future, or of AI in science fiction, do you look to, or would you recommend that other people consume as they think about these things?

Dave Jilk:

I'm not actually a big science fiction reader. I like to consume it as video, TV shows and movies. Around that timeframe that we were talking about earlier in this discussion, the, I don't know, 2013, 2014, there was a lot of stuff that came out that was pretty interesting and thoughtful, exploring some of the different angles on what might happen.

So three things jump out at me. I actually just rewatched Her, and if you haven't watched it in a while, it's crazy. For the first two-thirds of the movie, we're already there, pretty much. And in fact, I was reading about some of the AI companions that are out there; there have already been companies that have built AI companions, and they've already had to get rid of their erotic side. So with Her, the outcome is one of the outcomes that people have talked about, and it's an interesting one, and of course we're not to that yet. The movie's really fun for that reason: the first two-thirds of it, it's like, wow, this really did happen. And then, okay, something different happens. And so I would recommend that.

Ex Machina is, I think, a good movie. It mixes robotics in with the AI, but of course, that's actually pretty natural. Its ending and outcome is another one of the reasonable possibilities, although it leaves open where it really goes after that. But the idea that if we keep AI in a box, it's just going to want nothing more than to try to escape and figure out how to get out. That's in my book as well. And we humans are like that too. We don't want to be in a cage. And then finally, I'd recommend, so if you haven't seen The Orville-

Samuel Arbesman:

It's Seth MacFarlane's. It's like Star Trek, I think ostensibly a parody, but actually it feels very similar to Star Trek: The Next Generation, like that version of the Star Trek future.

Dave Jilk:

It's got a bit of Galaxy Quest, but it also has real themes. There are several episodes, at least a pair, on the androids. There's an android on the ship; he is a Kaylon, which is the name of the group, and they come from a particular planet. And in the two episodes that dig deep into that, you find out the history of the planet Kaylon and how the androids came to be and why there are no people on Kaylon. Really fascinating and well done. That was also actually an influence on my book in terms of thinking about how we handle this. How do we treat the AI? What do we do with it? Because if we agree with my logic that eventually we can't really control it, well, then we want it to think well of us.

Samuel Arbesman:

I love that. This is great. Well, Dave, thanks so much for taking the time to chat with me about your new book, Epoch. This was fantastic. Thank you.

Dave Jilk:

Thanks, Sam. This has been really fun.