Riskgaming

The Orthogonal Bet: Understanding Embodied Intelligence

Description

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with Michael Levin, a biologist and the Vannevar Bush Professor at Tufts University. Michael's work encompasses how information is processed in biology, the development of organismal structures, the field of Artificial Life, and much more.

Sam wanted to talk to Michael because of his pioneering research in these areas. Biology, as Michael’s work reveals, is far more complex than the mechanistic explanations often taught in school. For instance, the process of morphogenesis—how organisms develop their specific forms—challenges our understanding of computation in biology, and Michael is leading the way in this field. He has deeply explored concepts such as the relationship between hardware and software in biological systems, the process of morphogenesis, the idea of polycomputing, and even the notion of cognition in biology.

From his investigations into the regeneration process in planaria—a type of flatworm—to the creation of xenobots, a form of Artificial Life, Michael stands at the forefront of groundbreaking ideas in understanding how biology functions.

Produced by Christopher Gates

Music by George Ko & Suno

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:                                        

So Kelly, I read the book last night and really, really enjoyed it. You have a new book out. I actually picked it up in the UK. I was at a Waterstones somewhere near Trafalgar, and I walked in through the front door. We run the Riskgaming scenarios, the podcast, the newsletter, and there was this book. I don't know what it's actually titled in the U.S., but at least in the UK it's Playing with Reality: How Games Shape Our World. And I was just like, that's not only an auto-buy, that's something I can expense, which is even more exciting. But I read the book, I was really, really compelled, and it covers a lot of ground. So I want to do the exact same thing.
                                                     

But the first thing I wanted to talk about: obviously we talked about war gaming, we talked about game theory, a lot of these sort of modern concepts pulled out from economics and mathematics, but you really apply a much broader historical brush to the idea of gaming, the idea of divination, chance, and even the history of the word clergy. And so I just wanted to start with this idea of how important games are to human societies and why you started it that way.

Kelly Clancy:                                          

Yeah, so games are a universal pastime for humans and even for animals; they're evolutionarily ancient. Even bees seem to play. And play is actually very hard for a scientist to study, because it's very hard to get animals to stop playing, so it's kind of hard to have any control.

Danny Crichton:                                        

Right.

Kelly Clancy:                                        

Anything that's evolutionarily conserved like that is probably really important, and that's kind of one of the motivations I had in writing this book. Some of the oldest games we see, like board games, are 8,000 to 10,000 years old, so they predate written language. There's something really powerful there; in particular, games of chance have really captivated our imagination. There are 4,000-year-old Hindu hymns that talk about dice being like a drug, and dice have been outlawed periodically. And they have also been considered almost spiritual: in Greek times the priesthood would have these 20-sided dice with different names of gods on them, and a supplicant would come with a problem and ask for help, and they would roll the die and say, this is the god you should pray to. The kind of unknowability of chance was often seen as a mouthpiece of God, because God or fate would control the outcome of the die rolls.
                                                     

The word clergy comes from the Greek word kleros, which means lot or chance, because it was only the priesthood that could interpret these dice throws. People in the Bible would use lots to divide inheritances. We have this intuition that chance is this great leveling agent, that it's something very fair and equitable. There's also a possibility that randomness was harnessed as a decision-making tool, that it debiased our choices. There's evidence for this: in the 1950s, a Yale anthropologist, Omar Khayyam Moore, was wondering what are some universal principles of human reasoning across different cultures. And one thing he saw all over the place was witchcraft and divination. And he wondered, well, why would so many human societies hold onto this practice given that it's kind of famously not very effective? So he thought, well, maybe it is effective in ways we're not seeing.
                                                     

And so he looked at the Innu people of, I think, the Labrador Peninsula. When they didn't know where to hunt, when they had no idea where the game would be, they would use divination like scapulimancy: they would crack some bones over the fire and read the lines of the bones to decide where to hunt. So rather than going the way they would normally go or doing something that would be predictable, the randomization element of the divination would make the human hunters less predictable to the game. So yeah, we see that we've harnessed chance both for pleasure and for decision-making for thousands of years.
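The unpredictability argument can be sketched as a toy simulation; this is my illustration (the two-ground setup and evasion rule are hypothetical, not from the book):

```python
import random

# Toy sketch (not from the book) of why randomized divination helps:
# two hunting grounds, and prey that avoids wherever hunters went last.
random.seed(0)

def hunt(strategy, trials=10_000):
    """Return the success rate of a hunter strategy against evasive prey."""
    last_choice = 0
    successes = 0
    for _ in range(trials):
        choice = strategy(last_choice)
        prey_location = 1 - last_choice  # prey avoids the last hunted ground
        if choice == prey_location:
            successes += 1
        last_choice = choice
    return successes / trials

predictable = lambda last: last                  # habit: return to the same ground
randomized = lambda last: random.randint(0, 1)   # "divination": a coin flip

print(hunt(predictable))   # 0.0 -- a fixed habit is perfectly exploitable
print(hunt(randomized))    # ~0.5 -- an unexploitable baseline
```

The coin flip plays the role of the cracked bones: it guarantees the hunters can't be second-guessed, whatever the prey's evasion rule is.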

Danny Crichton:                                        

And I think this is really interesting because, as you said, it's evolutionarily conserved, and you also have this through line of dopamine, at least in the early chapters. Where 100 years ago we had this issue in which people basically fell asleep, became catatonic. We don't know what happened. It was actually quite the emergency. And over time we sort of built up a biological basis for understanding dopamine. And then you really emphasize how the story just keeps expanding over the decades as we learn how crucial dopamine is to the reward cycle in the brain. And it connects to what I thought was really interesting: not just chance and surprise, but how important that sort of surprise is, both to playing a game and also to learning. And maybe you could talk a little bit about that.

Kelly Clancy:                                          

So dopamine is a neurotransmitter. It's all over the news; people talk about it all the time. Often it's considered something like a pleasure molecule, but really it's a learning signal and a kind of motivation signal. One important aspect of dopamine is that it is what makes addictive things addictive, because it's a learning signal. So for example, addictive drugs will spike your dopamine levels. It's not necessarily the pleasure of the drug; it's that your dopamine levels are spiked, and so whatever preceded that spike becomes an ingrained behavior. And so you start to cycle through the drug-seeking behaviors. And the same is true of unpredictable events, because in neuroscience parlance, it's a foraging problem. You are trying to make a decision... Your brain is basically trying to predict the world. It's a prediction engine. And anything in the world that is operating at chance, that you don't have a model of, becomes really interesting and intriguing to the brain. It wants to figure it out.
                                                     

And sometimes that unpredictability is because you just haven't formed the right model of that system. And sometimes there's intrinsic chance in the system and you'll never figure it out, but the brain can't really distinguish between those. And so the unpredictability, the surprise of a chance element can draw people in. It can become highly addictive just the same way that a drug is addictive. So that's why we see gambling addiction and people kind of go into slot machines, and it turns out that 50% odds is the maximally addictive level of chance, and that's why a lot of casino games are operating at that kind of feedback rate.
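The claim that roughly 50/50 odds are maximally gripping lines up with a simple information-theoretic fact: a win/lose event is hardest to predict at p = 0.5. A minimal sketch (my illustration, not a model from the book):

```python
import math

def surprise_bits(p):
    """Shannon entropy of a win/lose outcome with win probability p, in bits."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no surprise
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Uncertainty rises toward p = 0.5 and falls away on either side.
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"p={p}: {surprise_bits(p):.3f} bits")
```

Entropy peaks at exactly one bit when p = 0.5, the point at which the brain's prediction engine has the least to go on and, on this account, the most to chase.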

Danny Crichton:                                        

Well, and you had, I think, a point in there where it wasn't that people... People actually stayed committed when they lost; they kept going, and what they were fearful of was the jackpots, something that would actually take them out of that dopamine rush over and over and over again and return them to real life.

Kelly Clancy:                                          

Well, yeah, I think the interesting element of people who are really addicted to slot machines, or really, really entranced by them, is that if they win, they'll just use those winnings to keep playing. So it's really not necessarily about getting the money; it's about learning. If you interview people, they often feel like, "Oh, I almost understand the system. There's something there for me to learn. I have a system," whatever it is. I think it's much more about epistemic greed than material greed.

Danny Crichton:                                        

And I think in this context we have these games of chance, we have dice and lots, we see this across cultures. We see this on essentially all continents, and you cover Mesoamerica to Europe to Asia. And so these games have been evolutionarily conserved for thousands of years all across the world. One of the points that you really highlighted that I found interesting, as someone who was a probabilist as an undergrad, is that probability, which is sort of the mathematics of chance and of figuring out these games, so that you can essentially solve them or at least understand the odds of what you're playing, doesn't come in until a very late stage in development. And there's even evidence, particularly in Europe, of people sort of getting close to figuring out some of the rules of probability, but they never seemed to lock it in. And I thought that was such an interesting puzzle that you sort of teased out, at least for a little bit, to try to understand what was going on there.

Kelly Clancy:                                          

Yeah, so a lot of mathematicians have wondered why the Greeks, for example, didn't figure out chance when they had very advanced mathematics. There was tons of gambling happening; the emperors were all addicted. It was a major part of society then. So we've had it, why didn't anyone figure it out? And one possibility is that it was in part thanks to the rise of empiricism during the Enlightenment, where people started really wondering, can I work this out? There were more financial incentives because there was a rise of the mercantile casino. Before, you'd be gambling between friends, and so often the odds are pretty much 50/50 and it kind of evens out in the wash, versus when you have casinos that are engineering an edge, now you really need to fight for your winnings. There was a huge financial incentive, and we had great mathematicians who, out of curiosity, started playing around with: okay, if I make this bet, it wins, and if I make this bet, it often loses. And so what is behind that?
                                                     

And it was really a huge breakthrough, because until then, as we discussed, a lot of people thought chance was really completely up to the will of God. And so if you won, you were like God's chosen one. It was like a sign of your inherent luck. Then people came to understand that there are actually laws to chance. That there are these really subtle but profound ways of, in aggregate, predicting what will happen even if you can't predict a given roll. This was totally phenomenal, and no one thought that it would be possible. It just seemed so impossible that chance could be systematized in any way. In the same way that for a long time people really didn't think that whatever was happening with the way the stars moved could also be happening here on earth, until Newton was like, "Oh, an apple falling, it's the same force that's moving the planets." So the fact that these terrestrial, mundane events had universal laws was really a breakthrough.

Danny Crichton:                                        

You're getting at not just the development of probability, but then you go into the empiricism of the modern era. You know, skipping ahead to Kriegsspiel, the war game that came out of Prussia, which has, I think, been covered quite a lot over the years. But you really show that from these early kernels, this legacy of gaming across history really starts to, and this is sort of the thesis of the book, it's not just a game that we play for fun on the side, even if we're kings and nobles and it's sort of a pastime. It really starts to shape how we interact with each other. We start to play a game, and the way we compete in that game starts to become the way we compete in real life. Whether that is in a war game and how we fight wars in World War I or World War II, or how businesses compete with each other and our assumptions.
                                                     

And I was really interested in how it blows up quite quickly. It seems like it goes from these sort of pastime entertainments into the heart of decision making and processing somewhere in the 1920s to 50s. It happens in a generation. I'm curious why it accelerates so, so fast.

Kelly Clancy:                                          

That's a great question. I don't have a great answer for it. I mean, probably the really inciting factor is game theory. The invention of game theory between 1928 and 1944 by John von Neumann, the mathematician, who was a great lover of games, kind of loved Kriegsspiel, followed World War I on maps as a kid, and was quite militaristic himself. He was also a refugee from Europe. He was a Hungarian Jew who left in 1933 for obvious reasons and was really heartbroken by what he saw as the profound irrationality of these decisions. And he wanted to understand what made things rational. What would a rational human do? How would they make decisions? And he used games as the model for this. If you think of a chess game, you're playing against another player, you both have the same goal, and you're making choices in tandem to get to your goal before the other player, et cetera.
                                                     

And so you can kind of predict people's behaviors because you know what they want. And this became the basis of rational decision-making theories, revealed-preference theories. You kind of read your opponent based on what choices they're making. You don't need to know anything about their psychology; you just need to see what they're after. That reveals everything you need to know. So he made this in part in collaboration with Oskar Morgenstern. It's a branch of pure mathematics; there's zero empiricism in the theory itself. It was meant to model human decisions, and it's been taken as that, maybe for better and for worse. It's not actually a very good model of humans, but it is a really interesting model of how to optimize things. So it's been really useful for optimization questions and computer networking questions and things like that.
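Von Neumann's central idea for two-player zero-sum games, the maximin principle, can be sketched in a few lines; the payoff matrix here is made up purely for illustration:

```python
# Hypothetical zero-sum game: payoff[i][j] is what the row player wins
# (and the column player loses) for row strategy i vs column strategy j.
payoff = [
    [3, -1],   # row strategy A: great if the opponent errs, but risks -1
    [0,  2],   # row strategy B: never loses
]

def maximin_row(matrix):
    """Pick the row whose worst-case payoff is largest (von Neumann's maximin)."""
    worst_cases = [min(row) for row in matrix]
    best = max(range(len(matrix)), key=lambda i: worst_cases[i])
    return best, worst_cases[best]

row, value = maximin_row(payoff)
print(row, value)  # 1 0 -- strategy B guarantees at least 0, whatever the opponent does
```

Note this needs nothing about the opponent's psychology, only the payoffs: the row player reasons purely from what a worst-case adversary could do, which is exactly the "read your opponent from their incentives" stance described above.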

Danny Crichton:                                        

We had Danny Kahneman, who unfortunately passed away last year, on the podcast two years ago, and he was obviously one of the landmark Nobel laureates in this field of decision science, pointing out, with Tversky, dozens of cognitive flaws in the human brain. And one of the things, and this is getting at the key critiques that you're making in this field, is we have this massive expansion of game theory across society. We're starting to use economic models, not just in theory in the 50s, but then as we get into the 70s and 80s, we're starting to build that into business decision making. It gets into the MBA curriculums; it's getting into popular culture through Danny Kahneman's book, Thinking, Fast and Slow, and others. But then you're sort of pulling us back and saying, look, these are theories that are built around mathematical axioms and laws and principles.
                                                     

They weren't designed first and foremost empirically, working backwards from observed behavior. We were starting with incentive designs and optimizations and saying, look, the math says that people should rationally do X. And you highlight a couple of different things. For instance, the very basic notion of a util and trying to optimize utility. I can optimize utility by getting more income. I can also cut my income, work fewer hours, do charity work, and have quote-unquote "more utils" at the end of the optimization. And game theory and a lot of these tools don't really allow you to optimize for that, because we're not calculating what humans are actually doing. In other words, the old saying: the map is not the territory. Well, humans are the territory, and we've built these maps that increasingly seem inaccurate and yet are more and more heavily used.

Kelly Clancy:                                          

Yeah, so game theory can kind of incorporate all of these different preferences that people have, but the problem is it's often not used for that. And another major flaw is that our preferences are not fixed. In game theory, what you want kind of determines all of your behaviors, the outcome of the game; it determines the game itself, really. And of course, in reality, we want flip-flops in the summer and galoshes in the winter, and we change all the time. Part of the problem here is that when psychologists and economists have looked at what game theory says we should do versus what people actually do in reality, they don't line up very well. And rather than saying, well, maybe humans work a little bit differently and maybe this isn't really a great model of humans, we have instead said, okay, humans are irrational, in this very fussy definition of rationality, where they're supposed to be going after the things that the experimentalists think they want.
                                                     

The experimentalists are saying, "Oh, they must want the money. They must want the selfishness. They must want," blah, blah, blah. And human participants are saying, "Actually, no, I want to cooperate with this person." These could still be selfish in this technical definition of selfishness, which is like, these are the things that people want to do. So it's all these kind of weird assumptions that get lost and mixed up. And we keep saying like, "Oh, look at this surprising bias that humans have." Rather than saying, "Okay, actually maybe humans are not performing according to game theory and we need to find a different behavioral theory to explain their behavior."
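A standard illustration of this mismatch, not named in the conversation but widely used in the literature, is the ultimatum game: a proposer splits a pot, and the responder can accept or reject (rejection leaves both with nothing). Here's a toy sketch with hypothetical responder rules:

```python
# Toy ultimatum game: a proposer offers a share of 10 units; if the
# responder rejects, both get nothing. Both responder rules are stylized.

def rational_responder(offer):
    """Game theory's textbook responder: any positive amount beats zero."""
    return offer > 0

def human_like_responder(offer, fairness_threshold=3):
    """Stylized real behavior: rejects offers felt to be insultingly low."""
    return offer >= fairness_threshold

total = 10
offers = range(1, total)

# Best offer against each responder, judged by the proposer's own payoff:
best_vs_rational = max(offers, key=lambda o: (total - o) if rational_responder(o) else 0)
best_vs_human = max(offers, key=lambda o: (total - o) if human_like_responder(o) else 0)

print(best_vs_rational, best_vs_human)  # 1 3
```

Against the textbook responder the proposer should offer the minimum, but against the human-like rule, lowball offers earn nothing, so the "irrational" rejection of free money is better read as a different preference (fairness) than as a bias.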

Danny Crichton:                                        

And I want to double down on this, because I actually thought this was one of the most interesting parts, which came later in the book: this idea of dynamical systems. So much of game theory is you have a couple of players and it's truly like a game, whether it's chess or checkers or Go. It's two players, they're sitting across from each other, they're making decisions, and you're sort of predicting this. And I've always been really frustrated with this, as someone who studied game theory myself, because the real world is so much more complicated. There are so many more choices. And I always make the same comment as well: you can play checkers, but you can also just flip the board. That is an option you can take, and we don't want to go down this road. But you emphasize, and it's really interesting, that the two competitors actually have to cooperate to make the game function, and in real life, that cooperation doesn't necessarily exist.
                                                     

But on the dynamic side, you emphasize, I think it was John von Neumann talking with someone else, and it was like, look, these games are static. If you want a theory of dynamic games, you're talking about maybe 300 years, 100 if you're super ambitious. And that was, I think, in the 1940s or 50s; it's been 60, 70 years. I mean, have we gotten a better sense of dynamical games and that sort of iterative piece of it, or are we still struggling to come up with a theory of how humans actually interact over time?

Kelly Clancy:                                        

I think there has been progress, and it's been pretty recent. I think a lot of the field is still really concerned with these equilibrium systems and these fixed preferences, but we are seeing some people incorporate people learning their preferences, and people learning in general. That's been around for a while; learning preferences is a more recent development, but there's been the notion of bounded rationality for a long time now. So there have been lots of nice caveats folded into the theory. But one concern I have is that it's maybe a little bit like the epicycles for planetary orbits, where the Greek astronomers were like, well, the planets must be moving in circles, so the fact that they don't appear to be moving in circles means that they're moving in circles upon circles upon circles. And so rather than maybe finding a more elegant explanation for human behavior, we're kind of adding in all of this heavy jargon and heavy mathematical lifting that maybe still isn't quite right.

Danny Crichton:                                        

This maybe heads us in the direction of AI. And you've worked at, I think, DeepMind, if I recall correctly, which famously built AlphaGo, which I was actually very fascinated by, because I just read Benjamin Labatut's The MANIAC and saw the parallel between John von Neumann and AlphaGo and Lee Sedol, who's the Go player in that story, and this book. But side note. Obviously artificial intelligence has been caught up in this real debate of how similar it is to humans: do these systems learn in the same way that humans learn? Should we build cognitive systems that translate directly from human neuroscience into computation? And it seems like games have been the key mechanism, and this is sort of the last area of your book, of how AI has progressed over the last 40, 50 years.
                                                     

So you have graduate students in the 60s building games and starting to have them compete with each other. And these days we have self-play. So this idea of: don't have an algorithm play against a human, you'll never collect enough data. The Go algorithm fights against the Go algorithm. They start to learn from each other. You can introduce maybe some perturbations to keep it interesting. You can iterate millions and millions of times in a loop, much faster, and that's the best way to learn. Where is gaming today in terms of its interaction with artificial intelligence?

Kelly Clancy:                                          

Right now, it seems like the most exciting stuff is happening where people are putting AI into games. Having more interesting boss fights, more interesting interactions with NPCs, generative environments. Maybe we'll start seeing tailored environments that are custom to each player, to their aesthetics, where whatever dialogue choices they make could change the entire narrative of a game. I think ultimately we've kind of exhausted most games to play against AI. We've basically hit all the big board games. We've done a lot of the powerful video games. I don't know that AI has much more to prove in that realm. And basically what I see happening now is: they've spent decades creating these algorithms that work well in games, and now that we're kind of out of games, they're trying to find realms of reality that can be modeled as games and putting these algorithms to work in those realms.
                                                     

One example is language modeling, where we have these large language models, and the way they're trained is basically a game. It's kind of like a game of Jeopardy or Hangman or something, where you give it a big corpus of text, take words out, and have the system guess what word should go in that spot. Similarly, certain realms of biology can behave in rule-like, orderly ways; protein folding, for example, is dictated by the physical forces of the protein. And so once we can computationally predict how a protein folds, then we can maybe target drugs to it more quickly. We can make custom medications, et cetera. So this is where I see it going: it's not so much that we're going to keep having AI play games, but rather that we'll try to find parts of reality that can be modeled as games and kind of sic those algorithms on those areas.
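The "fill in the blank" game described here can be sketched at toy scale; real language models use neural networks rather than this hypothetical count table, but the game is the same:

```python
from collections import Counter

# Toy sketch of the fill-in-the-blank training game: given a corpus,
# guess a held-out word from the words on either side of it.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Table mapping (previous word, next word) -> counts of words seen between them
context_counts = {}
for prev, word, nxt in zip(corpus, corpus[1:], corpus[2:]):
    context_counts.setdefault((prev, nxt), Counter())[word] += 1

def guess(prev, nxt):
    """Most frequent filler word observed between prev and nxt."""
    return context_counts[(prev, nxt)].most_common(1)[0][0]

print(guess("on", "mat"))   # the corpus only ever puts "the" in this slot
print(guess("the", "sat"))  # ambiguous slot: "cat" or "dog", by frequency
```

Scoring each guess against the held-out word gives the win/lose feedback that makes this trainable as a game, which is the sense in which pretraining is "basically Hangman" at enormous scale.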

Danny Crichton:                                        

I think what's interesting here is you're synthesizing across AI, and we just had an episode, I think two, three weeks ago, Chris can keep me honest, called Evolved Technology, focused on protein folding, ESM3, and a lot of the technologies out there. But I think the synthesis you're really focused on, and it's straight out of an operations research undergrad, is reward functions: figuring out, look, what's the objective here? How do you reward an AI system to learn? How do you do that when the payoff could be nine steps out, could be 25 steps ahead? You highlight one of the famous moves from the AlphaGo experience in South Korea, this sort of magic-from-God moment where no one understood why the move was in the fifth row, not the fourth row, and then, like 37 or dozens of turns later, it turned out to be the correct option. And everyone sort of saw it in that moment.
                                                     

What I thought was interesting was this idea that it's all games. Everything about artificial intelligence, about optimizations, about strategy, is about figuring out: what do I want to deliver? I ask a query, am I being served that query? Was it answered correctly? And the system sort of gets rewarded. And that takes us back to the earlier part of the conversation, because we don't have a dopamine system in AI. There's no neurotransmitter literally emanating out of our computers. But what we're inducing when we're designing these systems is a kind of dopamine loop that otherwise wouldn't exist in those models.

Kelly Clancy:                                          

Right. And actually, interestingly, there is a really strong parallel between reinforcement learning and dopamine. As people started recording from dopamine neurons in sort of the 80s and 90s, the neurons were behaving in all these weird ways, and we couldn't quite figure out what they were coding for. And then AI researchers started using self-play as a form of training up machine learning systems. In particular, there was a backgammon algorithm that was trained using self-play and became the best backgammon player in the world, better than humans. It was one of the first times people had made a program that exceeded human level at backgammon; this was like 1991, I think. The neuroscientists looking at dopamine neurons looked at the backgammon algorithm and realized, oh, this is what dopamine neurons are doing. They are running a prediction-error algorithm. They are a learning signal that is predicting whether I'm right or wrong, winning or losing, for example.
                                                     

So there's this really strong tie between what dopamine neurons are doing in monkey and human brains and what these self-play systems are doing. But you're right that we can't really get them to behave like a human. They do these really alien things. They solve tasks in ways we could never predict, which in part makes them a little bit dangerous because they can get off the rails quite easily. And they also fail in a lot of ways that humans don't. For example, as an analogy, if you were training a person to make an omelet, you could give them white eggs, they would learn to make an omelet, and then if you gave them brown eggs, they could do it. A reinforcement learning program would fail at that.
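The prediction-error idea behind that dopamine parallel maps onto a classic learning rule; here's a minimal sketch (the learning rate, reward value, and trial count are arbitrary choices for illustration):

```python
# Minimal prediction-error learner: the "dopamine-like" signal is the gap
# between the reward received and the reward the system predicted.
alpha = 0.1        # learning rate (arbitrary)
prediction = 0.0   # current estimate of the reward that follows a cue

for trial in range(100):
    reward = 1.0                  # the cue is reliably followed by reward
    error = reward - prediction   # prediction error ("dopamine burst")
    prediction += alpha * error   # learn from the surprise

print(round(prediction, 3))           # 1.0: the reward is now fully predicted
print(round(reward - prediction, 3))  # 0.0: no surprise left, the signal goes quiet
```

This mirrors the recorded behavior of dopamine neurons: a big response to unexpected reward early on, fading to nothing once the reward is fully predicted, which is why the signal reads as learning rather than pleasure.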

Danny Crichton:                                        

Right. Right, right, right.

Kelly Clancy:                                          

They can't generalize in any way. One way that people have used something like reinforcement learning to get a human-like response in these systems is reinforcement learning from human feedback, where you have a large language model that's trained on text, and then you have humans interact with it and give it feedback, trying to gamify or score how personable the system is, how helpful or nice or whatever. We're trying to find ways of scoring these very messy human preferences, and it's actually not that great. Reinforcement learning from human feedback is very frail, very fussy. Obviously lots of people have found all kinds of problems still with large language models and how they interact with people. So we're kind of failing there. Instead of this sparse feedback, I've seen other ideas like constitutional AI: you're always nice, you're always helpful and friendly. You give the system a kind of operating set of principles to proceed from, and that might be more helpful, but it's still pretty messy out there.

Danny Crichton:                                        

In one of the early episodes of the podcast we had Gary Marcus, who's a well-known AI critic and comes from the symbolic, rules-based school of artificial intelligence. If you want to have truth, it has to start with rules; you could call it constitutional AI, and these are all kind of a family of concepts. So long as you're just learning from large amounts of text, a lot of it from the internet, from the bowels of the Borg, it's hard to have a truth engine that actually works consistently. But there's something in the middle there, because obviously as humans we don't have rules in the same way, though maybe you could argue physics is a rule. And so if I touch a hot stove, I learn from it. I know it's hot every single time. I understand that a white egg and a brown egg are eggs and they both function the same way even though they're different colors.
                                                     

But there's something about how, as we develop as children from toddlers all the way up, we end up having to learn those rules, and we don't know what truth is. Truth is something that we build socially, that we build as a community, and it comes together. There's actually a game there in how we collaborate to kind of agree on what truth actually means. As we were talking, I was thinking about the fact that we always focus on surprise, and the AI algorithm is sort of getting this negative dopamine of like, whoa, I was wrong. I need to update my priors. I need to update the math in my model. But it doesn't have shame. It's not as if ChatGPT is looking over at Google's bot and saying, well, I'm just embarrassed, my answer is so bad, I'm going to hang my head in shame and walk away. And so these other emotions that AI isn't encompassing in its model would seem to be part of the challenge of progressing the field.

Kelly Clancy:                                          

That's a really interesting point. I really like that. I think that is really a part of what drives human preferences as well, where we are not just interested in truth, but we're interested in social acceptance. And if you think of dopamine as encoding reward, it's encoding all kinds of rewards. It encodes not just food and money, but social values. And these are things that we share through language and share through our relationships. When people are on the internet, they're getting rewarded by social acceptance in a particular subgroup for believing the earth is flat or believing X, Y or Z. Truth really doesn't have much to do with what we collectively or in subgroups decide to be the correct behavior, the correct beliefs. Beliefs are just something that get reinforced by reward.
                                                     

And this is what's also kind of funny about some scientific theories, where people, for example, who are working from a game-theoretic perspective and saying, okay, look at all these human biases compared to game theory, are getting rewarded with scientific publications. So the fact that this is not a great model of human behavior will spin out a million different scientific publications, and then all these scientists get rewarded for still working in that field and not necessarily stepping back and saying, okay, does this actually mean something about the fact that this is not a great theory?

Danny Crichton:                                        

Well, I think you're getting at incentive mechanism design and how much gamification exists in modern life. So I mean, going back 10 years, gamification was a fun word that came out of the mobile world. Games were sort of the first hot subject for a lot of mobile, before utilities. I don't know, Uber became more popular, but gaming was sort of the original native format for mobile phones. And so gamification became this buzzword in Silicon Valley, where it's like every product needs to have a gamified feature. You need to have points, you need to incentivize, you need people to come back. Hey, if you come back 10 days in a row, you're going to get a little reward. And the reward doesn't even have to be money. It could be stars, it could be any made-up metric that you want.
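The streak mechanic described here, come back N days in a row and earn a made-up reward, is simple enough to sketch. This is a hypothetical illustration, not any real product's code; the class name, the 10-day threshold, and "stars" as the currency are all assumptions taken from the conversation.

```python
# Illustrative sketch of a streak-based reward loop: consecutive daily
# check-ins grow a streak, and every full streak interval pays out a
# made-up reward ("stars"). Purely hypothetical, for illustration.

from datetime import date, timedelta

class StreakTracker:
    def __init__(self, reward_every: int = 10):
        self.reward_every = reward_every  # e.g. a star every 10 straight days
        self.streak = 0
        self.stars = 0
        self.last_visit = None

    def check_in(self, today: date) -> None:
        if self.last_visit is not None and today == self.last_visit + timedelta(days=1):
            self.streak += 1   # consecutive day: the streak grows
        else:
            self.streak = 1    # first visit, or a missed day: streak resets
        self.last_visit = today
        if self.streak % self.reward_every == 0:
            self.stars += 1    # the "little reward" for coming back
```

The design point is that the reward is entirely synthetic: nothing of value changes hands, yet the reset-on-a-missed-day rule is what creates the pull to return.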
                                                     

So you have Duolingo for learning languages, and it constantly sends you texts, and you have streaks and all these different gamified mechanisms to keep people coming back over and over again. And then we see it in jobs. So in some workforces you have leaderboards, which go back to the '70s and '80s, but these days they're digitized and your bonuses are attached to them. You have all this gamification in what used to be, I just showed up at work, made the coffee, and went home. Where's sort of the dividing line? I think this gets at kind of the thesis you have with this idea of playing with reality. But where should the game stop?

Kelly Clancy:                                          

So gamification, like 10 years ago, you're right, it was kind of promised to fix all these problems. Everyone would be working out and learning languages and happy at work. And none of that has really happened, in part, I think, because the gamification has not been that inspired or actually fun. As you say, a lot of it is kind of lazy, and it's like, oh, here's some points, here's a badge, have fun. That's not fun for most people. Some people really like that and it's rewarding for them, but most people are super annoyed by the owl's constant texting, and it's just a kind of drag. So part of it is that they're not really making these games; they're game-like, or they're pasting on these superficial game dynamics. So I think there is promise there, and there is maybe some danger there as well if it actually did get good.
                                                     

So for example, there are people who are driven by these metrics, and sometimes in a workplace, as you say, maybe your bonus or your performance will be rated by them. I think Amazon was using some kind of gamified app for their warehouse workers to track what tasks they had, and it was driving them to work harder and faster than they really could, and I think it led to an uptick in worker injuries. So there are some dangers and drawbacks to doing this. There's a little bit of danger in general of whoever's making the game dictating the behaviors of the players, obviously. And do we necessarily want our values to be aligned with whoever's making these games? Maybe in some cases yes, maybe in some cases no. I think being able to step outside the game and see what those values are and how the game is being designed is maybe an important sort of virtue that we should expect of ourselves.

Danny Crichton:                                        

Well, and I think what's interesting is obviously games can be entertainment, but most of real life is a game. I mean, you talked about scientific publishing. Obviously if you want to get tenure at a major university, you've got seven years, you've got to get a couple of major publications, you've got to get them into a couple of journals. There is essentially a quantitative metric, even though you're doing original research and it's very hard to compare any individual paper. But the reality is, if you want that next achievement, just like an achievement on Steam, you're going to have to do a set of tasks in a fairly explicit order. It's almost like levels two, three, and four of a game: you're past the tutorial and you keep going. And that sort of model exists, I feel like, in so many careers. So when I go into risk gaming and the idea of building scenarios for... We dubbed it risk gaming to get away from just the idea of war; you can apply this to a lot of different fields and categories.
                                                     

But the way I kind of do the incentive design for all the characters is you focus on the career path. So if you're a politician, you focus on polls. If you're a CEO, you're focused on your stock price and trying to hit quarterly earnings. If you're a journalist, you're trying to get attention on Twitter or get more people onto your Substack. We all have these sorts of incentives that line up with us. And so I always found it interesting, as I was reading, just how much it's like we just exist in a game. And in some ways WarGames, the movie from the 1980s that Reagan famously, I think, screened at the White House, and which you mention in the book, is sort of a depiction of this: you don't even realize you're playing real life as you're doing all this stuff.

Kelly Clancy:                                          

And I think it's also really important for people to... I mean, that's part of what I hope people get out of reading the book: thinking about what games you're involved in and what the actual rules are. I think, for example, employees are performing the duties that they believe are correct and not realizing that maybe they're being judged on something completely different. I know a lot of people who work at tech companies where they're being conscientious and trying to maintain software, but what gets promoted is new projects, not maintenance. And this is why you get outcomes like the Google Graveyard, where Google will push out a bunch of new products, there are these flashy couple of months, and then they quietly kind of go away, because Google employees are not promoted for maintaining these projects but for creating them.
                                                     

It's not just for employees to think a little bit cynically about how to get ahead in their jobs, but also for people at the top level of corporations to say, okay, maybe this is not exactly how we want our company to be running. How do we get the incentives right to bring the behaviors of our workers, our employees, better in line with how we want the company to be seen? So yeah, it's really an interesting exercise to step back and see what games you're involved in.

Danny Crichton:                                        

You're focusing on how people work toward the measures that you give them. So if it's new projects, that's the focus. Maintenance has been a major theme at Lux the last couple of months; I think we had a podcast earlier this year called Lux and the Art of Startup Maintenance. In a world of climate change, there are trillions of dollars of maintenance needs, but no one gets a ribbon-cutting. No one wants to take a picture of, well, we fixed the sewer and it works the same as it did before, but it'll last 50 more years. There's no way to get a clean shot of that. And so we have this deferred maintenance that constantly adds up.
                                                     

But I want to shift the conversation a little bit. So online gaming communities have obviously grown, and you're a gamer yourself. I play a lot of single-player games. I've played Fortnite for all of 12 minutes, so I can literally check the box, but mostly I play single-player role-playing games, adventure games, et cetera. But obviously gaming is the medium of this era. Fortnite has more than 100 million players, and other games are reaching into the millions of simultaneous players. And generationally, if we expand gaming just a little bit more toward simulation, you add in Minecraft, you add in Roblox, and entire generations are growing up in these virtual gaming worlds. What's the implication for people existing in these simulated, tight spaces?

Kelly Clancy:                                          

One implication is just that socialization looks a lot different than what we're used to. I think there's this really prevalent stereotype of gamers being isolated, maybe because their parents walk into the room and they're on a computer alone. But of course they're often actually interacting with lots and lots of people, chatting with people on whatever chat program and playing with their group, so it's not necessarily an isolated thing at all. Socialization is just looking very different. That doesn't necessarily mean it's better or worse. It can be worse because there can be a lot of online bullying and things like that, which is true of pretty much any online space. It's just different, but it doesn't mean that people are isolated. And actually I think it's been helpful. For example, in the Covid lockdowns, teenage boys fared a lot better mental-health-wise than teenage girls, and this was largely attributed to the fact that they tend to play more online social games.

Danny Crichton:                                        

Interesting.

Kelly Clancy:                                          

Yeah. So there's been a long history of concern about violence in video games spilling into the real world, and we still have no real evidence of this. Yes, there's lots of violence in America, and yes, there are plenty of video game players in America, but there are also lots of video game players in Singapore, and there's much, much less violence there. We don't really know how this is going to impact people's mental health. I think there's an argument that being online or on your phone so much is detrimental to people's mental health. I think it can also be really helpful to people; for some, that's how they find community. So I don't think it's really clear yet what the actual outcomes are going to be.

Danny Crichton:                                        

Yeah, we had Jonathan Haidt on last year, when he was just in the early phases of writing what is now The Anxious Generation, which was published over the summer. And I'm with you, it's a pretty caustic book, and it sort of started a movement of banning phones in schools and severing a lot of these online communities. But I'm a little bit more like you. If you grew up in suburbia and have no access to your friends, either because you can't drive or you can't get there, these online gaming spaces, or some social media, can be very useful for connecting with peers and actually feeling part of a community.
                                                     

And interestingly, compared to when I was growing up in the 90s, when it was hard to do video, it was hard to do audio, and everything was through text, these days, if you're part of a clan in World of Warcraft or whatever the case may be, you're on audio, you actually hear each other, you're part of this kind of group. You can do something you can't really do in real life. And so there is a little bit more camaraderie, and I think team building, there as well.
                                                     

But what I find interesting is: does it make real life harder? What I mean by that is, in the context of a game, you're always simplifying the world. You point out that in America's Army, a very realistic simulation, you're doing battlefield triage, you're helping your teammates, but the reality is you don't die. You get to just respawn and keep on going, and that's a pretty big difference from real life. But even in the sense of SimCity, to take a canonical example, and you have a short chapter on Maxis and simulations, running a city is very enjoyable in Cities: Skylines or SimCity or one of these games, because it's a game and it's a sandbox playground that you can kind of build out.
                                                     

And then you get into the real world of New York City, and running a city is awful. It's terrible. You have to deal with trash. You're dealing with very angry people. We're trying to get the rats off the street, and it's almost impossible. People are fighting you. You have the pro-rat population that dials in. I don't know if you've ever watched one of these clips of the pro-rat people, and they're like, "We're trying to protect the rat community." And everything's impossible. Real life is very, very frustrating. And we do a lot with policy, and I always joke that if you actually had to model Congress and what a congressman does on a day-to-day basis, you'd go insane. It would be the worst game you could possibly imagine.
                                                     

So I'm curious, maybe to close this out: there's this divide between the simplification of a lot of these models, the gaming environments, these contexts, and the kind of dopamine you can get when you're in that context, versus the challenge of, say, political advocacy, the challenge of fighting for a quote-unquote "righteous cause" that may never actually come to pass. You might spend 50 years and never get the reward on the other side of that. Is it hard to bridge the gap between those two contexts?

Kelly Clancy:                                          

Yeah, that's a really great question, and I think a really important consideration for how we think about moving in these worlds, because games are really compelling. As you say, you get these quick rewards; things kind of come easy. Even in puzzle games, you have certain affordances: you have this glowing object and that glowing object, and you just have to do something with them. In the real world, there's very little clue as to what avenues to go through. Even problems in games are all design choices. For example, in Cities: Skylines, apparently there's been a homelessness problem that players have been dealing with, and they're trying to figure out how to get the homeless encampments off of their tennis courts and things. And the way these sorts of games are created, there's a sense that, oh, they're realistic simulations of how reality works.
                                                     

But of course, these are all just design choices. They decided to have a homeless population in the game. This isn't something that necessarily always emerges in human societies. We have societies where there isn't really homelessness, because there's always some social network, or it's very, very rare compared to how it is in America. The problems in games are design choices, and the problems in real life are these emergent things: we don't know where they're coming from and we don't know how to fix them, and if we did, we would have fixed them by now. That complexity is a really important thing to grapple with. And I think part of why digital spaces are so compelling is that the world there is so much clearer and all of your problems are so clear. Even on social media, there's this moral clarity where everybody has their exact opinions and they're very sure of them.
                                                     

And of course, if you actually talk to real humans, it's so much more complex and difficult, and everyone's got a crazy story. I think this is a really important point, and it's really easy in a game to forget about extra-game solutions, like tipping the chessboard. In Kriegsspiel, for example, one problem in World War I was that the Germans, thinking about how to invade France, planned a technically perfect invasion through Belgium. They're like, this is great, it worked out just like they wanted, except that invading Belgium, which was then a neutral country, drew Britain into the fray and eventually the U.S. as well, and completely doomed them. So in the game it was all technically perfect, but they didn't think about the diplomatic outcomes.
                                                     

I think that's an important aspect of any model that you're dealing with: what am I not including here? What could tip this? Another one, in World War II, was that the U.S. had anticipated everything Japan was going to do except kamikaze pilots. They just couldn't conceive of people flying suicide missions. So we're always going to leave something out of the game, and we always have to be aware of that and be open to it. And as individuals moving in the world, it's about seeing where we're maybe getting stuck in games that are not really moving us ahead, where we're feeling productive and feeling like we're getting somewhere, getting rewarded, while the house around us is falling apart and our relationships are falling apart, and how to find reward elsewhere.

Danny Crichton:                                        

Well, I think that's a great place to end. I will do two plugs. One: a couple months ago, on the Orthogonal Bet miniseries with Sam Arbesman, we did a piece with a biographer of Will Wright and Maxis. So for those really interested in simulation, we did another 45 minutes on this subject, and we keep coming back to it.
                                                     

But Kelly Clancy, Playing with Reality. I found out that in the UK it's How Games Shape Our World, but in the U.S. it is How Games Have Shaped Our World, which I think is one of those subtle changes. I don't know if that's a Britishism or why it was such a subtle difference, because sometimes they totally change the title. Do you have a story? I don't know.

Kelly Clancy:                                          

It's not a super interesting story. It's just that the Americans wanted to show that it was more of a history and not just, like, futuristic-

Danny Crichton:                                        

Past tense versus future. Well, that's ironic. I would think it's like the opposite. I would think the Americans want the future tense, here's what's coming-

Kelly Clancy:                                          

Right?

Danny Crichton:                                        

... and the British are always looking back towards a thousand years to the North. But anyway, Kelly Clancy, thank you so much for joining us.

Kelly Clancy:                                          

Thanks for having me.
