Securities

“I am basically a cosmic Fluke” and the chaos of science, policy, and human narratives

Description

Humans are enamored of a good story. The world overloads our mammalian senses, and so we seek any simplifying structure to narrate what we are witnessing and make it easier to process. That simplification doesn't just reduce the complexity of the world; it also makes it difficult to see the extent to which luck drives the successes of our geniuses, and the failures of others. From scientific discoveries and power-law venture returns to legislative breakthroughs and decisions during war, the world is, essentially, chaos.

That might trigger a bout of deep existentialism for many of us, but for Brian Klaas, the hope is that confronting the stochastic nature of the world can lead to better governance and progress. In his new book Fluke, Klaas argues that we need to upend the simplistic statistical analyses and models that are common across social science and other domains and replace them with an approach that can encompass a theory of flukes. That means understanding timing, path dependency, and how the world is a complex system that is far more of a continuous variable than a binary one.

With Lux's scientist-in-residence Sam Arbesman and host Danny Crichton, we all talk about how chaos rules our lives; how a better understanding of complexity can improve investments, science, and life; Darwin's luck in publishing his research on natural selection; the dangers of the human penchant for finding narrative; the random luck of our life experiences; and why understanding flukes can be a counterpoint to the ideas of Moneyball.

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
But Brian, I'm super excited. I know Sam loved the book.

Brian Klaas:
That's right.

Danny Crichton:
I think it's an amazing topic. We're huge complex systems junkies, so anytime we get into chaos theory and politics and science and venture capital, that's sort of our jam. Let's just start with the background. You've written a number of books, you are very prolific, a lot of them on politics, and this is sort of a departure for you.

Brian Klaas:
Yeah, I mean it is a departure, but I guess there's sort of a logical progression. So my last book was about power and what I did was I started reading well beyond political science. I mean political scientists think about power all the time, but so do people who are studying the animal kingdom and anthropologists and a variety of scientists and so on. So I started reading much more outside my discipline and I got very interested in a lot of different fields.
And Fluke is basically this sort of synthesis of lots of different disciplines trying to tackle an issue that I had felt for a long time: how I think social science has gone wrong, and also how I think the world is much more defined by chaotic dynamics than we pretend. And so there was sort of a professional side to it, where in a lot of the work I was doing, I didn't believe in some of the fundamental principles of my discipline. And some of it was also just from my own life: I had seen how small changes could have huge effects, and I wondered why that was so constantly written out of the models that we use to make sense of the world.

Danny Crichton:
When we think about all these different fields, I think of science, I think of politics, I think of the venture world and entrepreneurship. To me what's interesting is how often this tiny little perturbation actually completely changes the discipline. We were just talking about a book called The Maniac about John von Neumann, the singular genius who kind of revolutionized multiple mathematical fields all in the same period of time. And to think, imagine if he was born somewhere else, imagine if he didn't get into the right network. Imagine if there was a car crash early in his life that just sort of ended it. Would we be in the same place? Would we have converged to the same models?
I'm not sure that's true. We're still using the von Neumann design in every chip today, and maybe we would've gone down the same route. But as I get older and gain more experience, it just comes up over and over again that everything could have been contingent, everything could have changed. The internet, the way it's constructed; the way that most startups go down this road. The fact that we have an ad-driven model on the internet versus subscriptions comes down to the fact that the first entrepreneurs wanted it to be free and open and not closed and paid. And that to me, it's so much more open than we realize. And I think your book Fluke really gets at that contingency that is a universal across all these different fields.

Brian Klaas:
So I'll try not to be too long on this answer, but I love this question, and there are two things I would say about it. One of them is this idea of how path dependency and lock-in really reshape the world based on timing. So even if you have an innovation, something that's inevitably going to be produced (fire was going to be discovered at some point; we were going to figure this out as humans), the way it gets incorporated into systems does matter. And I think we see that a lot in the modern world with this idea of path dependency and lock-in. You can see this with musical instruments. I give this example in Fluke: you have all this experimentation about how to make noise, and there are a lot of things that sound good and sound bad to human ears, but at some point there was just a lock-in where people decided, this is a guitar, and the guitar now is unchanging basically forever.
And I think that's the sort of stuff where the moment or the person actually really matters. Even if the idea is inevitably going to be produced, somebody was probably going to put strings together and strum them, the way they did it, with six of them and the exact body of the guitar, et cetera, is contingent. And that changes the world. And this leads to the second point, which is I think there's a fundamental misunderstanding that a lot of people have about how reality works, because our mind is engineered for categorization. So the way I try to describe this to people is I'm like, the world is a continuous variable that we treat as though it's a binary variable. When I was talking about Fluke in its early stages to a historian friend of mine, I told him the story that opens the book, which is about a tourist couple going on vacation to Kyoto, Japan in 1926, and then, 19 years later, the husband in that couple ends up as America's secretary of war and intervenes to stop the bomb from being dropped on Kyoto just because he went on vacation there.
My friend who's a historian says to me, "But America would've won the war anyway." And I was like, "Yeah, no, actually I think you're completely right about this." If Kyoto had been blown up, it still would've caused the Japanese to eventually surrender after the second bomb, if it hit somewhere else, et cetera. But the problem is, it's not like that's the end of the story. We have all these things where, oh, the category is binary: you win the war, you lose the war. But how you win the war affects history. And people who were in Kyoto would've died. In addition to one of the evolutionary biologists I talk about in the book who was in Kyoto, Motoo Kimura, there was also a man who was involved with the F-scale for tornadoes, one of the founding fathers of modern meteorology, who might've been incinerated in that blast.
And so when you think about this, the same with von Neumann, it's this kind of thing where, yes, there are good ideas and bad ideas, and good ideas survive, but the exact timing, nature, and personality driving a specific idea or innovation is important because of those forces of lock-in and path dependency. If Steve Jobs hadn't invented the iPhone, if there hadn't been the Apple innovation around the iPhone and so on, we probably still would've had smartphones. But it would've been different. And I think that's the issue where people make this cognitive mistake of assuming that everything just gets washed out. And I don't think it does at all.

Samuel Arbesman:
Right. So in science you have this idea of multiple independent discovery, and it's like, oh, there's almost this overdetermination: the scientific discoveries would've occurred, certain technological advancements. But you're exactly right. The details and the timing and the specifics, they all matter, and they all matter for a whole variety of different reasons. To kind of just sweep it away and say, okay, this is overdetermined and it's always going to converge in the exact same way is sometimes being a little naive.

Brian Klaas:
Well, this is why in the book I have this example of Charles Darwin, because he is always used as the example to refute my way of thinking on this. For those who are less familiar with this: Darwin is someone who discovers evolution at exactly the same time as Alfred Russel Wallace; this famous naturalist basically comes up with a very similar idea and sends Darwin a letter. And Darwin, who has written down his theory of evolution by natural selection but has basically put it in a drawer and not published it, springs to action to avoid losing the credit to Wallace. But Wallace, when you look at his writings, although he came up with a very similar idea, he also had a series of writings about things in the supernatural world, like seances and conjuring flowers into existence with the powers of the mind and so on.
And he later became sort of a bit of a joke. He was ridiculed in some of the scientific journals of his time. And I think about, first off, how evolution was an uphill battle to convince people of in the first place, with Charles Darwin, a respected gentleman, advocating for it. How much different would it have been if there was an outsider who was pushing seances saying that evolution was real? But also the more direct impact is that Darwin's cousin was Francis Galton, who was one of the founding fathers of eugenics, and he was inspired partly by Darwin's thinking, or rather a misinterpretation and misapplication of Darwin's thinking. And you wonder, would eugenics have unfolded in the same way if it had been Wallace rather than Darwin who had popularized that idea?
And I think this is the kind of stuff where I just think there's a fundamental myth, and I've thought this for a very long time, about how we understand society: one that puts things into neat little boxes and categories and has a few variables to explain everything. And I think it's sometimes useful. I'm not trying to say don't model stuff or don't have categories. I'm saying that you get into trouble when you forget, as many people have said before me, that the map is not the territory, or, with models, George Box's statement that all models are wrong, but some are useful. And I think a lot of us have forgotten that. I think there are a lot of economists who think the model is actually the world, and that gets you into all sorts of trouble. I hope Fluke will convince some people that we need to draw back a little bit from the brink.

Danny Crichton:
When I hear some of this intellectual history, I'm sort of reminded of David Mitchell's Cloud Atlas, which goes through five or six different eras, each of which kind of connects to the next. You're sort of connecting the dots: someone writes a book in the South Pacific, and 100 years later someone reads that book and is triggered with an idea. Someone hears music and comes up with a song, and 50 years later that song is listened to by someone else and they start to protest. There's that idea of, in some cases, century-long flukes that connect together, binding us all together. And that reminds me of another novelist, Amitav Ghosh, who had a book called The Great Derangement focused on climate change. He focuses on the challenge of writing fiction around climate, because in many cases climate is about disasters, crises, these sorts of almost supernatural random events.
And so he talks about how he survives a tornado, this kind of freak disaster that hits him at university in India. It kills multiple people. He actually only survived by 20, 30 feet. And he's like, "Look, if I wrote this in a novel, you'd say this is ridiculous. In the middle of this chapter, a tornado comes down on my building and I'm 20 feet away; people around me are dead, I am not." And you'd be like, this is just an author who has no idea how to write a good continuous plot. And so I think it's interesting, because you're talking about this continuous variable versus binary, but when we get into the real world, there's kind of both. At times we want to create a narrative out of this, this kind of congealed story where everything connects, but oftentimes it is just a fluke. It is just completely random. A legislator didn't show up to vote that day, a bill got passed, and entire lives changed, and no one intended for that action to take place.

Brian Klaas:
Yeah, when you were talking before about the Cloud Atlas effect and so on, one of the things that I say to people who are skeptical of my ideas in the book is: do you know who Albert Einstein's great-great-grandfather or great-great-grandmother was? No, you don't. You have no idea who these people were. But they mattered. They were important, because if they hadn't existed, you would've had a totally different unfolding of scientific history, but also of the atomic bomb. I mean, this idea that we can just sort of parcel out parts of the world and just assume that everything would've been fine if different people had existed is just not true. Now, when it comes to the aspects beyond the Cloud Atlas version of events, I do think that there is a need for humans, as you alluded to, to make sense of the world through stories.
I mean, I have a chapter that draws on Jonathan Gottschall's great book, The Storytelling Animal, and all the narrative bias and all that type of stuff: we make sense of the world through narratives. We stitch together neat stories of cause and effect to make sense of really, really complex dynamics in the real world. And I had to add a footnote to this, because I knew when I was writing the book, I'm like, I am putting a ton of stories into this book, and someone looking at this could be like, well, hold on, aren't you just fooling us, because you're using narrative bias to get your ideas across? And the footnote basically says, yes, this is the only way the human brain can make sense of the world, so we have to do it this way. But it's important for us to recognize that bias that exists in our heads. In my view, and in the view of many evolutionary psychologists and biologists and so on, it's been forged through evolution and it produces cognitive mistakes.
And I think it's something where the pendulum sometimes swings the other way too much, where people say, oh, we just need to look at the cold hard data and that will cure us of this. We will fix the narrative bias. I think that's a misunderstanding too, because there's some idea that data is somehow objective, whereas narrative is not. I think data is produced. There are biases in terms of what we measure, how we measure it, what we decide is important, et cetera. And so I think there are issues like this that are inescapable, but it's better for them to be out in the open and just to say, yeah, I am trying to basically infiltrate your brain with a story, but that's the way your brain is designed to make sense of the world. So it's unavoidable.

Danny Crichton:
I think one of the key questions for me, though: on one hand we have narrative and these narrative biases, which are a way of simplifying a complex world, a world that is obviously well beyond the ken of any human brain. Billions of people, hundreds of countries; the scale is unfathomable. On the other hand, we were already critical of economists and other social scientists because they're creating these models focused on parsimony, which is trying to say, okay, yes, the world is extremely complicated, so let's reduce it to four or five factors and figure out which of those factors sort of matter. And we're sort of also saying, God, that's also a terrible approach going down the road. What's sort of the Goldilocks in the middle?

Brian Klaas:
Yeah, that's a very difficult question, and I think it's the one that I expected to be challenged on most in Fluke. And you're actually the first one to pose it.

Danny Crichton:
Yes, we're all over you. This is a hard hitting podcast.

Brian Klaas:
Well, as it should be. This is, I think, the question that the 21st century is going to have to answer if we're going to make progress. Complex systems research is important; even though it can't answer many of the questions that I highlight, the assumptions baked into the modeling it does are better. So I think it's already an improvement to have acknowledged that systemic factors are important, that you have to pay attention to the noise. There are a lot of things that you write out of linear regression models that you don't write out in complex systems and agent-based modeling and all this type of stuff. So it's a step forward, but it doesn't solve the problem. I think you have to get iteratively better at figuring out which things are worth modeling and can be tamed and which things can't, and for the things that can't, try to figure out the short-term interventions that mitigate harm the most. There are certain things that we just cannot predict, and we probably never will be able to predict them, but we still sometimes have to make choices.
So even if we don't know what's going to happen next in a cancer diagnosis, you can't just say, this is a rare form of cancer, I don't know what it is, throw up your hands, and do nothing. You still have to decide what the treatment is, and sometimes that happens for the economy, sometimes it happens for pandemics, et cetera. So then you have to have principles that I think are different from the ones that dominate social science right now. One of those is that causality without any ability to do anything with it is less important than something that you don't understand but that is useful. And basically I think of social science as a way to mitigate harm. I think that's why we exist, and I think we've lost sight of that. A lot of social science looks like what I describe as chasing the holy grail of causality that they're never going to find, because causality involves an infinite number of factors that cause things.
And so parsimony is sometimes a thing that we feel like we have a relative grasp on, and then it blows up when a black swan hits us and we don't have any idea what we're actually doing. I also think it leads to the lesson that I talk about in Fluke, which is about experimentation in the face of uncertainty. And this is where, speaking to evolutionary biologists and so on, I thought, I want more of these people involved in social science and in policymaking. Because one of the things that every evolutionary biologist understands is the wisdom of experimentation. I mean, it's how all these genius life forms have navigated a constantly changing, often hostile world. And what do we do? We basically impose what we think is the answer on the world through ideology.
I mean, I've described previously how policy gets made: you basically have two sides in an election, and one says, this is how the world works, and the other one says, no, this is how the world works and how we're going to fix it. And then you vote, and you try one of them, and then four years later you try something different, but you don't actually know which was the better option, because you never do any experimental testing or anything like that. So there's an example in Vancouver, and I'm not saying this is a one-size-fits-all approach, there are very few of those in social science, but they basically did some A/B testing with homelessness. They had one randomly selected group of homeless people and they just gave them a bunch of cash, and the other group got a whole bunch of social support: temporary housing, drug rehabilitation programs, all sorts of stuff that was worth money, but it wasn't actually giving them money.
And what they found that was really striking was that the people who were given the bag of cash were actually much more likely to end up in their own home in six to eight months, and also that they were much less likely to use drugs, which is exactly the opposite of what most people expected. And again, this might not work everywhere. Maybe it was just in Vancouver, maybe it was that specific program, et cetera. But it's the kind of thing where at least you're trying stuff to figure out what's the best outcome. And I mean, I don't know why that worked in that context. I couldn't explain to you the causality of the factors and so on. We could come up with hypotheses, but I really only care about what's going to stop people from being homeless. So there's sort of this disconnect when I go to social science conferences, where it's like, oh, we have this perfect identification strategy for causal inference. What can you do with it? Oh, absolutely nothing. And it's like, well, is that really why we exist as a discipline? I am sort of skeptical of that.
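
The core of a randomized trial like the one Klaas describes is just random assignment followed by an outcome comparison. Here is a minimal sketch in Python; the group names, sample size, and outcome rates are all invented for illustration and are not taken from the actual Vancouver study:

```python
import random

# A minimal sketch of a two-arm randomized trial: participants are split
# into "cash" and "services" arms by lottery, and housing outcomes are
# compared afterward. All names and numbers here are invented.

def assign_arms(participants, seed=42):
    """Randomly split participants into two equal-sized arms."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def housed_rate(outcomes, arm):
    """Share of an arm that ended up housed."""
    return sum(outcomes[p] for p in arm) / len(arm)

participants = [f"participant_{i}" for i in range(100)]
cash_arm, services_arm = assign_arms(participants)

# In a real study, outcomes would be measured months later. Here we
# fabricate them with made-up base rates purely to make this runnable.
rng = random.Random(0)
cash_set = set(cash_arm)
outcomes = {p: rng.random() < (0.6 if p in cash_set else 0.4)
            for p in participants}

print(f"cash arm housed:     {housed_rate(outcomes, cash_arm):.0%}")
print(f"services arm housed: {housed_rate(outcomes, services_arm):.0%}")
```

The randomization is doing the real work here: because assignment is by lottery, any difference between the two arms' outcomes can plausibly be attributed to the intervention rather than to who was selected.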

Danny Crichton:
I want to redirect our attention real quick and not get so stuck on causality, because I think what Fluke has done really well as a book is identify this commonality between so many different fields. So there are these flukes, and they happen, whether it's a genius scientist, whether it's a venture capital firm that happens to hit gold and back a founder, or whether there's a founder with an idea at a certain time that just captures the zeitgeist and captures the market. And then we see it in politics all over the place. I was just reviewing Jake Berman's book, The Lost Subways of North America.
And so often when it comes to transportation systems, you just realize there are flukes everywhere. A city council votes a certain way, a state legislator switches sides by one vote, and suddenly the bill doesn't pass and the system collapses, whatever the case may be. I'm curious, as you think about this sort of mediated chaos, we see this with research grants, we see this with allocating venture capital dollars, we see this as we go into legislatures and appropriations. We have to take this chaos and actually do something with it, as you were sort of saying. We get a cancer diagnosis, we don't know what causes it, it's rare, but we have to have a solution or at least try to approach one. How do we build robust institutions that can learn from this kind of chaos and do something with it?

Brian Klaas:
Yeah. So I guess there are sort of two sides to that question. One of them is how you make decisions in the face of uncertainty to maximize investments or to come up with a good solution to a problem that you don't fully understand. And the other is about following my grandfather's advice to me for a good life, which was to avoid catastrophe. That was his life advice.

Danny Crichton:
It's good advice.

Brian Klaas:
Yeah, it is good advice. And it's also good advice that I think a lot of people have forgotten in the world of economics and politics these days. So on the former question, about how to make wise decisions in the face of uncertainty, I think this is where experimentation and randomness should be dialed up. It's not that we should do everything randomly. There are some systems we really understand, and in those systems it makes a lot of sense to optimize in a highly directed, top-down way. And there are a lot of systems we don't understand, and in those systems I think it makes more sense to experiment more and to do some random allocation. I mean, research grants are a great example of this. I think I mentioned this in Fluke as well; you sometimes forget what hit the cutting-room floor and what didn't in the book.
But when you look at things like the mRNA vaccine: Katalin Karikó, who became a Nobel Prize winner, was basically almost forced out of academia because her idea was considered useless in the 1990s, until it became the most valuable idea in the world in 2020. And I think that's the kind of stuff where you just don't know what's going to be important. So when you read research grant applications and you try to allocate money, or investments, and you try to think, oh, what's going to be the next big thing? You don't know. And in that context, the hubris of saying, well, I already understand exactly what's coming next, therefore I'm going to pick based on my assumptions, is actually a pretty stupid way of doing it. It's not always the right way to move forward. So I think there's some of that that I talk about, and there are many, many parts of the book that talk about the wisdom of experimentation.
In terms of the resilience stuff, I do worry that the flip side of the model-based mentality is that if we know what's going on, we can therefore control the world, and that means we should optimize every system to its absolute limit. And this is the kind of stuff where you see the downfall of it in things like the Suez Canal getting blocked in 2021 by one boat, which caused $54 billion of economic damage and shaved more than 0.2 percentage points off global GDP, according to one study. This is the kind of stuff where if you have optimized systems with no slack in them, then a single boat can wreak havoc on your supply chains and so on. If you build in resilience, you're going to have some ability to withstand the flukes.
And there's an example of a Latin American power grid that basically decided to do exactly this for resilience: they made a more expensive, less efficient power grid that had decoupling built into it, so that when any part of the system failed, it was isolated. It cost more money and it was less efficient, but it more than paid for itself when the first blackout happened and only caused economic damage in one small part of the country.
And I think that's the kind of thinking that flows out of the mentality I put forward in Fluke, which is to say: there's a lot of stuff we don't understand, and a lot of random things are going to happen that are going to create problems for us. The way to navigate that world is the exact opposite of what we're mostly told, which is to optimize for absolute efficiency and perfection and treat every ounce of inefficiency as your enemy. I don't think that's true. I think we're probably better off having fewer catastrophes and more resilience. It's very difficult to have a one-size-fits-all approach to this, but I think the principle is one that people in these systems can apply to navigate risk more effectively.
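
To make the decoupling idea concrete, here is a toy sketch in Python of the trade-off Klaas describes. Everything in it, the zone count, the cascade probabilities, the simulation itself, is an invented illustration, not the design of any actual grid:

```python
import random

# Toy model of a resilience trade-off: a tightly coupled grid is cheaper
# and more efficient, but one component failure can cascade widely; a
# decoupled grid pays an efficiency cost so failures stay in one zone.
# All probabilities here are made up for illustration.

def blackout_extent(n_zones, cascade_prob, rng):
    """Fraction of zones dark after one random component failure.

    cascade_prob is the chance the failure jumps to each other zone;
    a decoupled grid drives this toward zero by design.
    """
    dark = {rng.randrange(n_zones)}  # one random zone fails outright
    dark |= {z for z in range(n_zones)
             if z not in dark and rng.random() < cascade_prob}
    return len(dark) / n_zones

rng = random.Random(42)
trials = 10_000
for label, cascade_prob in [("coupled", 0.8), ("decoupled", 0.05)]:
    avg = sum(blackout_extent(10, cascade_prob, rng)
              for _ in range(trials)) / trials
    print(f"{label:9s} grid: average blackout extent {avg:.0%}")
```

The point of the sketch is just the comparison: driving the cascade probability down, at some cost in efficiency, shrinks the expected size of any single blackout dramatically.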

Samuel Arbesman:
So you just spoke about how to build robust institutions by building in slack and things like that. Are those ideas also applicable for individuals, at an individual level, in terms of making decisions? You mentioned avoiding catastrophe, but are there other things we should think about when we're grappling with the fact that every small change in the decisions we make can have these untold ripple effects throughout the world?

Brian Klaas:
So there's the pragmatic side to this and there's the philosophical side to this, and both of them I think are important. The pragmatic side is actually relatively similar to what I just said about societies, in a sense. I think that human behavior in the 21st century has been optimized to an unhealthy degree. The degree to which we are told constantly that we need to have life hacks and we need to optimize everything and we need to have every checklist that yields another checklist, I think that's bad for us. I think it's bad for innovation. I think it's bad for our livelihoods. I think it causes us to make mistakes in terms of understanding the way our world is unfolding around us, and also how to navigate an uncertain system. But I also think the experimentation point holds as well.
I mean, I think this is something where the Google Maps mentality has pervaded lots of our lives. Google Maps is a great, great thing for optimizing between point A and point B, and that's extremely helpful if the only thing you care about is getting there in the fastest possible time and shaving every extra minute off. But a lot of the stuff we like doing the most as humans is wandering, and every app is telling us, don't wander, here's the way to shave off one minute. And so I think there's stuff like that in terms of pragmatic decision making, where experimentation is actually better for us and also produces more innovation. I mean, most of the ideas in Fluke I wrote after I went walking with my dog. And I know it sounds sort of cliche, but literally, sitting in front of a computer all day, I couldn't think of what to do, and then I would go for a walk and think and clear my mind, and it would just flow right out of me.
And I think that's something that in the modern world, especially with work from home, is possible for more people to do, in ways that might actually yield a lot more lightning-strike moments for smart ideas and innovation. In terms of the philosophy, I've come to think very differently about the level of control I have over my life. I think I have a lot less than I previously imagined, and that's given way to being able to let go a bit more and also to just enjoy life a bit more. One of the things that I think about a lot is how any success I have is derived from a series of factors that I had literally no control over. One of them being that I was born at all, probably the most important one. But beyond that, where I was born and when I was born were super important for my life trajectory, and who my parents were was super important. The fact that they loved me and gave me a good upbringing, all these things were completely out of my control.
So when I start to accept that more, I just sort of feel like I'm along for the ride a little bit more. It's not like I don't have any agency; of course I make decisions and I'm trying to be a strategic thinker. But it's just sort of easier for me philosophically to think I am basically a cosmic fluke. I believe that. I don't believe I have some grand purpose in the universe, which is fine with me, and it makes me happier, to be honest. And I think at some point, what is life supposed to be other than avoiding catastrophe and being happy and sort of hoping to make other people's lives better too? It's a very cliche meaning of life, but I think it's basically the right one, so I might as well say it, even if it sounds cliche.

Danny Crichton:
I will say, when it comes to cosmic flukes, I'm thinking of the Alaska Airlines flight with the door plug that blew out, and the fact that the people who were supposed to be sitting in the exit row right next to it missed their flight. I think it was traffic or something like that, but they were supposed to be there. There were people who were supposed to be a foot away who almost certainly would've been sucked out the door, and they just happened to miss the flight, and because they missed the flight, they're still alive. And at a certain point, there's a level of, how do you actually account for that? I think there have actually been some interesting psychological studies of folks who have had near-death experiences like this, where it really radically changes your view on just how much contingency exists, even as you feel you're an agent, an individual who's making decisions, et cetera.
But I want to ask one more question as we close out. So Sam was just asking about the micro view of this: here's the chaos, how do I deal with it as an independent agent, as an individual? A lot of your work is also looking at structures, the superstructure of society, and I think one of the interesting narratives over the last couple of years in social science has been the connection between the two, specifically around lotteries. So let's say the world is contingent; let's say we can't just perfectly allocate every scientific research dollar, every immigration visa, every dollar of venture capital.
There's a burgeoning group of folks who are sort of saying, look, maybe devote 50% of it to your rational model of how you want to allocate it, but devote some of it to randomness, devote some of it to lotteries. Have a bar, but everyone above the bar goes into the pile and you have a lottery. So whether that's college admissions or scholarships, and we do have a diversity visa in the United States, which is based on a lottery, so there is that sort of element to it already: basically add randomness, because our models oftentimes exclude certain groups of folks, certain ideas, and randomness can enter into that equation. Is that a proper policy approach, you believe, in a fluke-driven world?

Brian Klaas:
Yes. So I generally think that more of these would be good, but it of course depends on what system you're talking about, because I briefly talk about sports in Fluke, and I talk about Moneyballing, and there are a series of problems with Moneyballing. It made the game somewhat boring to use data analytics to try to manipulate baseball, but it worked really, really well. So you shouldn't use lotteries or random decision making when you actually really understand the system, because it would make you worse. If you're facing uncertainty that you can't tame easily and you don't fully understand the system, then lotteries are a pretty good form of experimentation. In the closing chapter of the book, I talk about this tribe in Southeast Asia that cultivates rubber and rice. The groups around them all have their ideas on exactly how to best cultivate these two crops. And this group knows how to cultivate rubber, but doesn't really have a clear idea of how rice is going to work. They think, look, it sort of fails sometimes, it succeeds sometimes.
And so they have a superstitious belief, based on religious dogma and so on, that is tied to the idea of looking at the movement of a specific number of holy birds. So there are these birds that they think really have special powers, and that if they subjectively interpret their movements, they will know where to plant the rice. Now, rubber they know is going to work like clockwork. They just figure, okay, it's fine, let's just plant it in exactly the right place and it'll be fine. It's an analogy for, effectively, those two kinds of systems. The rubber system you understand: you plant, and it works really well. With the rice system, what turned out to be the case is that this group of people actually grow more effective rice crops than everyone else around them, because they've inadvertently created randomness. They've basically created a lottery, because the subjective interpretation of bird movements is something that causes you to plant rice in random places.
And when a group of researchers looked into this, they were like, wow, the yields for the rice crop here are much higher. And it's because in a system of uncertainty, randomness is a good move. So I use this as an analogy to say, you've got to think about what the rubber problems are and what the rice problems are. Baseball is a rubber problem. It's a problem where if you have good data analytics, you will be better, and you probably shouldn't use random allocation to decide who to draft onto your team.
But in other areas, scientific research, investing, lots of things that are thinking about resilience for an uncertain future, a certain degree of that should probably be derived from experimentation or randomness. And it doesn't have to be purely random. I mean, there are some cases where, as you say, there could be a benchmark below which an idea is just not viable and therefore gets culled from the pool, or whatever. But that's an implementation question more than a philosophical one. I think the philosophical one is that in uncertain systems, lotteries can be a very good way of navigating uncertainty.
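
As a concrete picture of the "bar plus lottery" mechanism discussed here, a minimal sketch in Python follows; the threshold, scores, and applicant names are hypothetical, made up purely for illustration:

```python
import random

def lottery_allocate(applicants, threshold, n_awards, seed=None):
    """Screen applicants against a minimum quality bar, then fill the
    available slots by uniform lottery among everyone who cleared it.

    applicants: list of (name, score) pairs; score is any comparable metric.
    threshold:  minimum score needed to enter the lottery pool.
    n_awards:   number of grants, visas, or seats to hand out.
    """
    rng = random.Random(seed)
    pool = [name for name, score in applicants if score >= threshold]
    if len(pool) <= n_awards:
        return pool  # everyone above the bar can be funded outright
    return rng.sample(pool, n_awards)

# Hypothetical grant round: scores are reviewer ratings out of 10.
applicants = [
    ("safe replication", 8.2),
    ("odd new method", 7.5),
    ("speculative theory", 6.8),
    ("long-shot idea", 6.1),
    ("incremental tweak", 4.0),  # below the bar, never enters the lottery
]
winners = lottery_allocate(applicants, threshold=6.0, n_awards=2, seed=7)
print("funded:", winners)
```

The design choice worth noticing is that the bar handles the part of the system you understand (screening out clearly non-viable proposals), while the lottery handles the part you don't (guessing which viable proposal will matter most).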

Danny Crichton:
Well, Brian, whether it's a rubber problem or a rice problem, I certainly don't want a random choice between the two on that menu; there's no uncertainty about which one I'd rather have on the table. But lotteries, flukes: thank you so much. Brian Klaas, the author of Fluke, available in your bookstores now. Thank you so much for joining us.

Brian Klaas:
Thanks so much for having me on the podcast.
