AI and democracy are in great tension with each other. AI models are built by a priesthood of research specialists, unmoored from the will of the public. Yet, these very models are increasingly running important parts of the economy and increasingly government. How do we take advantage of these new capabilities without losing control of them?
That’s the debate at the center of our conversation today with Gideon Lichfield, the writer behind the Substack “Futurepolis” and the former editor-in-chief of Wired. Through his recent writings, Gideon has explored what a public option for AI might look like, how participatory democracy can be buttressed by new organizational and technical designs, and the tactical changes needed to make it much easier for government to procure software.
Joined by host Danny Crichton and Riskgaming director of programming Laurence Pevsner, we first talk about Gideon’s two recent experiences playing our scenarios on the Chinese electric vehicle market and AI deepfakes. Then we pivot to a broader conversation on the future of governance, discussing everything from participatory budgeting and liquid democracy to balancing between technocracy and democracy while remaining optimistic about the future.
Produced by Christopher Gates
Music by Georg Ko
Transcript
Danny Crichton:
Gideon, thank you so much for joining us.
Gideon Lichfield:
Thanks very much for having me.
Danny Crichton:
So Gideon, you've actually joined us multiple times, but not on the podcast. You've joined us in real life at two of our separate Riskgaming events. We had one on Powering Up, which is focused on the Chinese electric vehicle market, and you also joined Deepfaked and Deep-Six, our game focused on AI election security and deepfakes, as that title would indicate. And I think just at the top here, Laurence and I are curious. I think you may be the first guest we've ever had on the program who's actually been to two separate games. What was your experience between these two? Because they're quite different.
Gideon Lichfield:
And Deepfaked and Deep-Six was this game where we tried to figure out who was deepfaking, or who was trying to subvert a US election campaign with deepfakes. And what I mainly remember about that game was just the sense that it created of the fog of war, this sense that if I were a Washington policymaker, I would hate to be in that situation, where all this information is coming at you from different directions and it's time critical, and is it the Iranians, is it the Russians? Is it the North Koreans? Who's doing what? Is this piece of information true? We seem to have got it backed up by several different sources, but actually maybe it's a misdirection.
It was this constant confusion, and all of these different people with their own agendas trying to influence your opinion about what was happening and who was behind it. So for me, even though it was just a little game, it was actually a really effective way of making clear just how confusing life can be when you're trying to deal with these critical decisions.
Laurence Pevsner:
The chaos of that game. People are in this room, it's crowded, there's a lot going on. You're trying to talk to as many people as possible. One of the things I always point out to folks is that that's the easy version of the game. In real life, right, you don't have everyone in a room together. You're all in separate places, you're all talking at different moments and times. And so we've done you a convenience by bringing you all together, and yet people still feel like that is total chaos.
Gideon Lichfield:
Yeah. Yeah, yeah, exactly. And it makes you a little bit concerned about the people around the world and what they're hearing and what they're getting access to. And then the China EV game was obviously a very different experience. And what was fun there was, in some sense, it felt almost like a board game. You're sitting around the table with these people and you're trying to trade things with them and gain credits for different kinds of stuff, and gain investments from the automakers and so forth. But what became clear by the end of the game is just how much it's rigged against the mayors of the cities.
Danny Crichton:
AKA you.
Gideon Lichfield:
Aka me as the deputy mayor of Shanghai. And I asked you about it at the end, and you told me that in fact, yeah, that sort of reflects reality, that these bureaucrats are pressed on one side from the car makers who are trying to come in and cajole them into taking investments and giving the car makers good terms, and on the other hand from the central government that's constantly pressing them for better results, for more information. And it's actually a really difficult balancing act to play, and that those bureaucrats rarely come out on top.
Danny Crichton:
No, I think that's exactly right. When you look at the way the game is constructed, obviously the three car makers compete with each other, the foreign consultant kind of connects the dots, and then there are two cities, Chengdu and Shanghai. You were Shanghai. And from the mayor's perspective, you get squeezed from all sides: by the population, by the central government, and by the car makers who are trying to negotiate with you. The consultant just wants money out of you through the provision of tax rebates or tax cuts. And so there's really no way out of the system.
Gideon Lichfield:
Yeah, and I did not do particularly well in the game, and I realized by the end that the reason was that I was trying to be too much of a good mayor to the central government. I was trying to be a good boy, and actually I should have just ignored them and pursued my own interest more.
Danny Crichton:
Yes. And the other thing I would say is, and we don't see this very often in the game, but if the two mayors collaborate with each other, one of the biggest challenges is essentially the car companies pick you each off from each other. Right?
Gideon Lichfield:
Right.
Danny Crichton:
So you're both offering tax incentives, you start to get into an arms race, so it starts at 10% or 50%, it goes to 80%. Suddenly there's no revenues, the whole thing kind of collapses. But if you hold the line and essentially are a cartel-
Gideon Lichfield:
Yeah.
Danny Crichton:
... that's where you have a lot more leverage, and suddenly, "Hey, I'm not going to defect."
Gideon Lichfield:
Yes.
Danny Crichton:
So from your perspective, it's really just a classic prisoner's dilemma.
Gideon Lichfield:
Yeah, which is so interesting, because it's a strategy that did not occur to me at all during the game, and apparently not to the mayor of Chengdu either.
Laurence Pevsner:
Yeah, it almost never does occur to folks, and again, I hate to point out that actually the game was even easier than you thought, but this is another example where in real life there are many more than two cities, so it's actually much harder to band together and to make the prisoner's dilemma work out. But in this there only were two cities, so it is theoretically possible, and yet it almost never occurs to the players to work together, because you see, "Oh, here's this other city. They are, in fact, my most direct competitor," is how we tend to think, which I think reflects just human thinking in general.
Danny Crichton:
I think with Riskgaming, a lot of what we try to do is emphasize how people who are competing with each other under uncertainty, under risk, find moments of collaboration, find moments of competition, and make the decisions that they do. When you are writing on your Substack, Futurepolis, you're doing the same thing, focused really on finding compatibilities between people, between citizens. You have this very bold mission statement of updating our systems of democratic governance to make them fit for the 21st century, and you emphasize institutions like participatory budgeting, citizens' assemblies, and digital public spaces as institutions and means of building that kind of consensus. I'm curious how you see the world from this perspective.
Gideon Lichfield:
Yeah, I mean the problem with all of these very interesting systems of collaboration and participation is getting people to use them, and in particular, I think getting established lawmakers, policymakers, governments to pay attention to them. So I think one of the most interesting experiments, or certainly one of the biggest experiments in participatory budgeting in the US, was in Seattle, where after the George Floyd protests and all of the upheaval that the city saw in the wake of those, there emerged pressure for the city council to give local communities, and particularly communities of color, some say in how money was spent. And so that led to a total of, I think, something like 25, 30 million dollars from the city budget being allocated to citizens to participate in a budgeting process to figure out where that money was spent. Which was a tiny, tiny sliver of the city budget, but it emerged from that public pressure. And I think there are also mixed feelings about how successful the experiment was in the end.
So this is one of the issues, is that a lot of these things are very small. Another example that I wrote about in Futurepolis was this small experiment in a kind of mini citizens' assembly in Tennessee, where they brought together 11 people to talk about gun violence. And they had this full range of people from someone who's a firearms instructor and a combat veteran to a teacher who has seen several of her students shot dead while she's been a teacher. And so they came together to discuss ways of dealing with gun violence, and certain things ended up being off the table, like nobody on the pro-gun side was really willing to talk about gun control legislation. But nonetheless, it managed to reach consensus on a handful of issues around gun safety, and around educating people better about gun use, for instance.
And anyway, there was a handful of proposals that came out of that, but they didn't really lay any groundwork with legislators in Tennessee to try to get any of this adopted. And so in the end, I think one of their proposals was half adopted. And this is the thing that you come up against with any of these deliberative bodies, is that unless you're getting lawmakers' buy-in from the get-go to take the process seriously, there's a large chance it won't make an impact. And the question is, what do you need to do to get lawmakers' buy-in? Presumably you need to start with some kind of crisis that makes them realize they need to listen to people more. So that kind of engineering of legitimacy, I suppose is a way to put it, that's a lot of the tricky part. The processes themselves are worked out. It's getting people to take them seriously is the problem.
Laurence Pevsner:
When it comes to participatory budgeting versus a citizens' assembly, one difference that really strikes me is, in a citizens' assembly there's an active effort to try to get people from all walks of life. It's not dissimilar from a jury process.
Gideon Lichfield:
Right.
Laurence Pevsner:
And I know the Yale political theorist Hélène Landemore has done a lot of work around this. She talks about more successful efforts than we've had in the US in places like France. Participatory budgeting has become popular in the US in some places, but one of the differences is that it's whoever volunteers. Right? It's whoever wants to show up to the meeting. And often the people who show up to the meeting have much more extreme views than the regular populace might, and so this can actually create kind of perverse outcomes, where supposedly it's for the public community, but actually the public never would have voted on this or agreed to this. It's just that people who had the time and ability to show up got to steer the process.
Gideon Lichfield:
Right. Which is the story of town hall meetings and civic participation processes throughout history, right? It's always the people with time on their hands and with the pots to stir and the trouble to make who are most motivated to show up. And I think this is also a risk, maybe a vulnerability, of any of these processes.
I was at a discussion a couple of months ago where a lot of people came together to talk about democracy innovation, and there was a lot of talk about different kinds of participatory democracy. And the point that the speaker made was, every system can be gamed, and if we try to just replace the systems that we have right now, which people have learned to game very, very well... Campaign finance is a perfect example of how the political system here is gamed. But if we simply replace those with new ways of bringing civic participation in or creating dialogue with lawmakers or whatever it is, that too will be gamed, because special interests will always want to find a weak point.
And so yeah, citizens' assemblies are notionally at least more resistant to that, because you have a process that is meant to ensure that you reach out to people from across a wide range of the political spectrum and socioeconomic backgrounds and so forth and involve them. But that process too, if somebody is determined enough, they could probably find a way to subvert it. That need to always stay a step ahead of the people who want to take control of the process is always going to be there.
Danny Crichton:
Well, there are the people with the expert knowledge, and they are experts and they get to run everything, and you are not an expert, so you don't have any say in this. And even if participation doesn't necessarily have an effect, there's value in coming to terms with what these decisions involve. When I think about Riskgaming, a lot of what we're trying to do is not necessarily persuade. It's actually education, saying, "Well look, this is actually the trade-off. It is between money and profit, it is between long-term investment and short-term investment. And it's hard."
And so you may actually disagree with the decision. You may not even like the decision you made live in the spot. You may not like how you ran Shanghai in two hours, but you learn something along that process of there's some intellectual humility that comes with trying to balance a budget. There is the challenge of different points of view, and realizing that you had a very strong point of view about an issue, and you suddenly realize, in concert with other people you were listening to, that "Oh, actually I understand your perspective," that I didn't even think about the healthcare context, maybe, in Tennessee, or the education context when it comes to guns.
And so maybe there's some way for all of us to win here. Win-win, which is Laurence's favorite term of this podcast. But there is a way for us to come together around some sort of mediated consensus and make progress.
Gideon Lichfield:
Yeah. I mean let me step back and reframe your little history of democracy there slightly, because we created republicanism, and initially we also created a system where most people did not have the vote. And so there was sort of already an implicit privileging of quote "expertise," where expertise meant if you were white and male and had property and were educated, it was assumed that you were entitled to vote. Then we gradually moved towards universal suffrage, and also then during particularly the 20th century, we saw the rise of this technocracy that informs government policy to a much greater degree than, let's say, a century before that.
So I would say in the earlier history of the country, first, only a small portion of the citizens actually really had access to decision-making, government, or influence on it. And second, the country was smaller, obviously, and people had more direct connections with their representatives. But also there was more of a sense, I think, that the elected representatives were the ones who were taking decisions, and their role was both to take decisions and be informed and also to connect with the popular will.
What we saw in the 20th century was more of a rise of this technocratic class, which was divorced from the public, and so this tension kind of emerges, and the historian Sophia Rosenfeld talks about this in her book Democracy and Truth: A Short History. This tension arises between the technocracy, whose job is to make the best decisions or the most informed decisions for society, and the political class or the populist side of the equation, whose job is to try and figure out what the people want and how to translate that into policy.
And these two sort of coexist as necessary parts of the system, but the hypothesis Rosenfeld makes, and that I think resonates, is that over the last few decades, the technocracy not exactly grew too powerful, but it became influential in such a way that people saw decision-making as being divorced from their own needs. They saw it as being the preserve, rather, of this unelected technocratic elite. They saw politicians, and in particular progressive politicians, aligning themselves more and more with that expertise and getting divorced from what people wanted or felt was going on. And so there we get this populist rise in backlash.
So anyway, now to come back to what you were actually saying. I think that there's definitely an opportunity presented by things like citizens' assemblies, because what you have there is a fairly small number of people, but a representative sample of the population, who get to spend a long time talking about an issue and listening to experts from all different fields, and getting to grips with the complexity and the difficulty of it, and saying, "Okay, yeah, we see this is difficult. We see this is complicated."
They're also given a framework and a structure that allows them to reach consensus rather than end up in this kind of adversarial position, which is what happens in parliaments. So for those people, it can be a really, really useful exercise in understanding the messiness of decision-making and the messiness of consensus building. And so then the question is, all right, how do you translate that awareness, that understanding, to a broader public? That's the tricky part.
Now there have been, I think, some interesting exercises, and Ireland is famous for its citizens' assemblies and I think it has made some good examples of this, exercises in communicating to the broader public what happened at the citizens' assembly. Doing interviews with the participants, maybe recording the discussions, publishing stories about the deliberations and how they got from this issue to the other, from disagreement to consensus on this particular issue. So there are ways, I think, to make the process visible to people and make them see the value in it. But again, that does kind of depend on having a government that cares about this stuff and wants to do it, and I don't think we have that right now in the US. Not at the federal level.
Laurence Pevsner:
It strikes me that you just have a fundamental numbers problem here. Right? This is the whole reason that republicanism exists to begin with: at a certain point, you can't have the Athenian democracy where everyone gets into the room. Right? You just can't do that in a country our size. You can do that maybe with the... I hail from a New England town, which does actually have a huge assembly of people who come together for the town to talk things over. It's still republicanism, right? Even though there's I think like 300-plus people who come together, it is still like, "Okay, but they're elected." And I really like citizens' assemblies, I think they're a really powerful instrument, but they are only useful on an issue-to-issue basis, and not for a broad, sweeping, running-the-government kind of challenge.
Gideon Lichfield:
Yeah. And I don't think... I mean, okay, some of the advocates of citizens' assemblies, maybe including Hélène Landemore, would want them to take over the running of the government, or the role of a parliament. I don't think that that's what we should expect, but I think that in an ideal world, you have assemblies that are convened for very specific questions and with different sets of people, and then those inform the work of an elected full-time congress or parliament.
Laurence Pevsner:
Even if we can't just fully do politics without the politicians, this is a podcast hosted by a venture capital firm, so it is only natural to inquire, is there a tech solution here? Right? Often when we try to think about getting more people involved, if it's a numbers problem, it's like, well, we do have technology solutions to be able to gather a lot of data and get a lot of people in a virtual room, even if we can't get them in a physical one. So I'm wondering if there's been any exploration in the tech space on this.
Gideon Lichfield:
I hesitate to use the word tech solutions, because tech never actually solves the thing, unless... you know.
Danny Crichton:
[inaudible 00:17:29] yes.
Gideon Lichfield:
You know, tech tools. Tech tools.
So there are some interesting uses here, I think, and AI inevitably gets a shoutout. One of the largest-scale participatory processes we have right now, certainly in this country, is open comment [inaudible 00:17:44] on rulemaking and on laws. And so thousands, perhaps millions of people sometimes will write in on a proposed piece of legislation or a proposed rule and express their opinion. Now, this too is very, very prone to gaming, obviously. I'm trying to remember, I think it was the net neutrality legislation that got something like 12 or 14 million comments, most of which turned out to be from bots.
But you have at least the potential there for a large number of people to express an opinion, and I think technology tools can help with things like filtering out bots and spam, to some extent figuring out which comments are legitimate, collating those comments, finding common themes in them, identifying points of agreement or fault lines, and providing that information to lawmakers and to policymakers in a way that simply wasn't really available when someone just had to sit and read through all the comments.
There's a slightly different version of this, I wrote about this also in Futurepolis, this Japanese politician who ran an election campaign where he used AI to scrape social media and the web for people's opinions on different kinds of political issues. He was running for governor of Tokyo. He used that to inform a campaign platform that he created, and then he put that platform out on the web and allowed people to submit comments on it, and again, used an AI system to take those comments and group them by different themes and identify overlaps and so forth. I think the number of comments he got was somewhere on the scale of thousands, or maybe not even that many, but the system, the process, was there.
So yeah, I think there is definitely now, particularly with AI, a way to take this very vast and often very unstructured quantity of information that people might want to submit as input and observations or ideas about law or policy, and make sense of them. And then also ways to use it to communicate back to people about what has been decided or about the process that's going on. So yeah, those are some tools. Once again, I don't think they're solutions unless you structure it right and prevent the incentives for gaming.
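To make the collation idea concrete, here is a toy sketch in Python. It is not any real system's code, and the keyword counting is only a crude stand-in for the AI clustering Gideon describes, but it illustrates the two steps: dropping duplicate submissions (one rough bot signal) and surfacing recurring themes for policymakers.

```python
# Toy sketch: de-duplicate open comments and count recurring theme words.
# A real pipeline would use semantic clustering, not keyword counts.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "this", "i", "we", "for"}

def collate(comments, top_n=3):
    seen, themes = set(), Counter()
    for comment in comments:
        # Normalize to lowercase words so trivial formatting changes collapse.
        normalized = " ".join(re.findall(r"[a-z']+", comment.lower()))
        if normalized in seen:  # exact duplicate, e.g. bot-pasted form letters
            continue
        seen.add(normalized)
        themes.update(w for w in normalized.split() if w not in STOPWORDS)
    return themes.most_common(top_n)
```

Running it over a handful of comments returns the most common non-stopword terms, with verbatim repeats counted only once.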
Laurence Pevsner:
Did that governor win?
Gideon Lichfield:
Governor Winn?
Laurence Pevsner:
Did the Japanese-
Gideon Lichfield:
Oh no, he didn't.
Laurence Pevsner:
No.
Gideon Lichfield:
No, he came fifth. He came fifth, but also he'd never been a politician before, he was a science fiction writer, and the field was something like 51 candidates. And so the four people who came ahead of him were all established politicians. So you know, relative scale, he did okay.
Danny Crichton:
Yeah, unlike Laurence, who I am sure... hopefully they don't call them selectmen in your town, because that would be a very New England thing.
Laurence Pevsner:
They do.
Danny Crichton:
They do. Of course they call them selectmen.
Laurence Pevsner:
It's the first selectman.
Danny Crichton:
Yeah. Yeah, I will say, similar to this idea of selection, one of the areas that I was really interested in years ago, and less so... not less so because it didn't really go anywhere, but I was writing about it when I was at TechCrunch, was about liquid democracy and the idea of everyone gets a vote. But today we have these sorts of one election every four years and you kind of pass the baton, and then you keep going on. The idea of liquid democracy would be, I transfer my vote to anyone I want, who gets to aggregate my votes, who can also transfer them to someone else, who can transfer them to someone else, et cetera et cetera, and you can aggregate up to the first selectman, as you may be in your New England [inaudible 00:21:05]
And so the idea here is this idea of proxying. It's basically direct democracy, but if you don't want to be involved, you can give your vote to your friend, you can give it to a politician, you can give it to your professional guild. Whoever you trust to proxy your interests. It could be anyone who's also a citizen of the United States, or the country that you're a part of. And I thought it was always really interesting, because to me, AI is one aspect of this: there's a whole new set of AI technologies, but we have other technologies. In that case it was blockchain, which we don't mention on this podcast nearly enough. But this idea of, "Look, we have so many more technologies than we did in 1776. What could we empower with different types of media?" Whether that is different types of consensus-building social networks, probably not Facebook, but you can imagine.
In Taiwan there's something called, I believe, vTaiwan, where every ministry publishes a rule, very similar to the commenting period you just described, and there are moderated discussions around those rules that go in as input into the ministry's rulemaking process. And so to me, I feel like there's a really empowering moment if you can let go of the model that we designed several centuries ago, and realize we have a bunch of new tools and ways of building things, that you can design from scratch new types of architectures that are technology-empowered, not technology-solved, that are ultimately ways for humans to connect who may not otherwise meet in person.
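The proxying mechanism Danny describes can be sketched in a few lines of Python. This is a minimal illustration under simplified assumptions (each voter either votes directly or names exactly one proxy, delegations chain transitively, and a delegation cycle simply loses those votes), not a real voting-system implementation:

```python
# Minimal liquid-democracy tally: follow each voter's delegation chain
# until it reaches someone who voted directly, then count that choice.
from collections import Counter

def tally(direct_votes, delegations):
    """direct_votes: voter -> choice; delegations: voter -> proxy."""
    totals = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Direct voters skip this loop and count their own choice.
        while current in delegations and current not in direct_votes:
            if current in seen:  # delegation cycle: this vote is uncounted
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            totals[direct_votes[current]] += 1
    return totals
```

So if Alice delegates to Bob, Bob delegates to Carol, and Carol votes "yes", that single chain contributes three "yes" votes.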
Gideon Lichfield:
Totally, and that's why I find this exciting. And again, I think one has to approach it with a certain degree of caution or skepticism, because once again, all of these can be gamed. So liquid democracy: you could say that what we have right now is a form of liquid democracy where you have just one choice, which is to delegate your vote to a member of Congress. In liquid democracy as it's being discussed, you could choose to delegate it to a proxy, to anybody. The opportunities for people basically buying your vote, gaming the system, finding ways to push up proxies who have the interests of a certain interest group at heart and aren't necessarily representing the interests of the people they say they represent? There's all sorts of ways in which it can be subverted.
They use this platform called Polis in Taiwan to do, basically, collaborative lawmaking. They ask people to write statements about what they think the solution to a particular problem should be, and everybody can write a statement and upvote or downvote each other's statements. But it is set up in such a way that you're incentivized to write a statement that will get as much agreement as possible, and so it's a consensus-building mechanism. But again, how well that works depends on who's taking part, and in Taiwan they've run into a bit of an issue where it's only a small proportion of people, who are quite Internet-savvy, who actually get on and use this platform. So it's giving you a not very representative sample of the population, which also limits the legitimacy and the number of people who participate.
So again, all of these are really exciting options, and this is why I write about them, but all of them suffer from this problem of gaining legitimacy and of preventing themselves from being gamed.
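As a rough illustration of that consensus dynamic, here is a toy scoring function, not Polis's actual algorithm (which clusters voters by their full vote matrix before anything else). The sketch assumes participants are already split into opinion groups, and scores each statement by how likely every group is to agree with it, so only statements with cross-group support rise to the top:

```python
# Toy Polis-style ranking: a statement scores high only if all opinion
# groups tend to agree with it, rewarding consensus over partisan applause.
def consensus_scores(votes_by_group):
    """votes_by_group: statement -> {group: (agrees, disagrees)}."""
    scores = {}
    for statement, groups in votes_by_group.items():
        score = 1.0
        for agrees, disagrees in groups.values():
            # Laplace-smoothed probability that this group agrees.
            score *= (agrees + 1) / (agrees + disagrees + 2)
        scores[statement] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A statement that one group loves and another rejects gets a near-zero product, while a statement both groups broadly accept ranks first.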
Danny Crichton:
Let's close a lid on that, and let's go to another subject that you've been writing about very recently, which is around artificial intelligence models in general, and specifically the frontier models today: OpenAI, Anthropic, DeepSeek... And not just in the US but globally as well, with Mistral in France. We have this issue where every single model is owned by a private company. They're all funded by private investors, owned by a handful of large tech giants. And you have identified this as a major problem from the perspective of the commons, the perspective of the public, that there's not really a quote-unquote "public option" when it comes to some of these models.
A couple of them are open source; generally the open weights are accessible, but not necessarily the proprietary code that built the model. So there's maybe a halfway solution. But you recently wrote about this, and I was curious about this idea of, should governments get involved in building AI models? Is that something that's high utility? Is that a quote-unquote "waste of taxpayer resources" when companies are already willing to do it on our behalf? How do you frame all that, and why is it of interest to the public itself?
Gideon Lichfield:
Yeah, so there's this idea that has emerged of public interest AI, or public AI. And as you say, it's the idea that there should exist public versions of all the layers of the AI stack, the models and data centers and training data, which are not owned by a for-profit company and which exist to serve the public good, meaning that anyone can have access to them at reasonable fees. At cost, let's say. That training data can be made available to people who want to develop models for scientific research or for certain social ends, and that the data provided are clean data: they're not drawn from all of the junk that we have on the Internet, they're not copyrighted. And that maybe you create research facilities that scientists can use, or that small businesses can use, to develop their own AI models and solve particular problems.
And I think it's an interesting idea, because there are sort of parallels to different aspects of this. So you could look at something like Britain's National Health Service, a public option for healthcare, and the idea there is to make healthcare affordable and available to everybody, and also to allow for things like economies of scale, which allow cheaper treatments to be developed. So you can see a similar argument for public AI, where you make AI available to everybody and you create incentives for people to develop AI models and solutions that can work at a low cost for a lot of people. Then another parallel that is used here is, again going back to the UK, the BBC. So the UK has various broadcasters, they're in private hands, but then it has this public broadcaster, and part of its mission is to make sure that it's educating the public and that it's creating a healthy public discourse and a good civic culture. Yes, it's funded by taxpayer money, but it provides good programming and good content, and it increases Britain's soft power and has all of these impacts.
And so by the same token, if you have AI that is not trained to spout misinformation, but instead to espouse democratic values and bridge gaps and so forth, then that would be a public good. So there are all these kinds of reasons for having it, and then I think the questions become, well, who should pay for it? It's not necessarily entirely taxpayers. You know, you could try and create an incentive structure where private investors also have some reason to want to support this stuff. Maybe because again, it creates infrastructure that smaller businesses can use to develop AI models so that they're not dependent on the big AI companies. So this idea is circulating and I think it's got merits, and then the question is, who's going to make it happen and where does the money come from?
Danny Crichton:
I think the BBC model is actually really interesting, because it's not just limited to UK citizens, the ones who are paying the licensing fees and keeping the BBC truck roaming around looking for unlicensed televisions, picking up signals. What's interesting is that the BBC is how a huge chunk of the world sees the UK. Right? When you mentioned soft power, I think of how everywhere from Africa to Asia all the way through Latin America has access to the BBC, and you get both an impression of the UK and a source of information you wouldn't otherwise have in your own information commons.
And it's interesting, because when I think about artificial intelligence, the prices are very high, and that's still true today, because inference is very expensive. But you could imagine... One of the themes that we have going on in Riskgaming this year is this idea of global development in the age of AI. But if you are in Africa, you have no access to this. You also can't afford the inference, let alone training the models. And so being able to democratize some of these technologies, being able to do that in a safe, effective way, having it pedagogically focused, having it be ideologically neutral: it's a really interesting opportunity for many countries that just don't have access to these technologies at all.
Gideon Lichfield:
Yeah. I mean the "democratize" in quotes, I hate the word democratizing because... Anyway, we won't get into that. But the making technologies accessible to more people part, I think that is going to happen anyway. We're seeing it happen. There are open source and open weights models, there are smaller instances of models that you can run on a phone or a laptop, and the ecosystem is creating all of that. I think the reason for having something that is the so-called public interest AI is that those open source models, there's still really no guarantee of what's going into them in terms of data, how they're being used. Access may still depend on whether you have decent internet service or what kind of computing capacity you have. So the idea of making something that really is available and also encodes certain values? That's the argument for going further than simply having an open source ecosystem.
Laurence Pevsner:
When I think about the politics behind this, as good an idea as it might be technologically, for the same reasons that we can't seem to get a public option in the US, I imagine we would have just the same kinds of fights about a public AI option. In fact, it'd be even more intense, because there is no AI infrastructure now except for the big companies. And if this were to come from the left, which is already highly skeptical of AI to begin with... I think about one of our Riskgaming scenarios, where there's a joke: we have a candidate run for president who is going to be the last politician, and they will fire themselves and let AI run the entire government. It's funny, but that is a real fear that people have, of an AI takeover.
And you can imagine it cutting both ways. On the one hand, you have people who will say, "I like my private AI. I like my Anthropic. I like talking to Claude," or, "I like talking to ChatGPT. Why do I need to pay for a public option?" And then even the people who would support a public option will say, "I don't like AI at all, so I don't want AI up in our business in the government." So at least in the United States, it seems like it would be a real challenge. Now, you brought up several examples in the UK, where it's much more of a norm to have these public services, and there are a lot of different European examples where I think it would maybe be more likely to have a public AI option. I'm curious if you agree with that analysis, and if you see any governments even starting to experiment with this.
Gideon Lichfield:
I think that you're right that it's a harder ask in the US. I think that where the analogy with healthcare breaks down to some extent is that in healthcare, the idea of having a public option is seen as largely replacing private healthcare, although it wouldn't need to entirely. It's a bit more of an "Either one or the other." I think with public AI, it's just something that exists in addition to all of the private sector stuff that's happening.
You're right, there's still a question of "Why should we pay for it?" Why should taxpayers put money into this? And so there comes back the question of whether you can create incentives for the private sector to invest in public AI because it in some way forms a public good. I don't know. I think that is tricky. But then, as you said, there's probably more interest in this in the UK and Europe. Just as the BBC, as you said, projected British soft power and provided a resource for the rest of the world, I think you could imagine public AI options emerging in Europe that are then accessible elsewhere in the world as well. And so they serve a purpose for Europe, or for the UK, of their own technological cloud having further reach, and again, in the soft power way, their own values reaching further into the world. So I can see some way in which it gains traction outside of that region and then makes public interest AI available even to people in the US.
Danny Crichton:
One of the things that I find really interesting with Futurepolis is you sort of compare the model you're taking with the future of governance to the model around climate change, and this idea that there are a bunch of specialists who are sort of separate coming together, but they know they're working on one big project. When you look at... You know, we're talking about a public AI option. How does that connect broadly into this idea of future of governance? Is that the same thing? Is it just a side project? Is it part of the same flow of people who are all interested in this sort of thing, or do you see this as sort of an archipelago of researchers who are maybe all in the same sea, but not sharing the same land?
Gideon Lichfield:
So the reason I use this analogy with climate is, when you think about the climate change movement, which has been building for a long time, there are people who work on very, very disparate things. They could be working on laws about carbon taxes and they could be working on the chemistry for batteries, for lithium batteries, and they could be working on the manufacturing process for solar cells and they could be working on how to breed drought resistant crops, and so on. And they all know that they're working on the problem of climate change. It's very clear that these efforts all fit together.
So for me, the future of governance space is somewhat similar, but just more fragmented. People are working on all kinds of different things, and they don't necessarily think of themselves as being part of a big project. So public AI, to me, is part of this, because the problem of how governments govern technology, and the problem of how much power technology companies hold, particularly the AI companies now, with their influence and their ability to create infrastructure that we all depend on? Those are problems of governance.
And if we end up with a world in which the AI companies can basically do what they want, and they're providing all of the infrastructure that both the private sector and the public sector and civil society all depend on, and they call the shots about what gets built and what doesn't get built, and they call the shots about what kinds of limits there are or are not on what an AI can say, whether it produces hate speech, whether it produces misinformation and so forth? Then we have a problem with governance. We as citizens and we as governments have kind of lost control of our own fates to an extent.
So that's why I think that is part of that broader picture. That broad picture is very broad. My picture of the future of governance includes the people doing the public AI stuff, and that includes the people who are trying to write regulation for AI or other forms of tech, and includes the people who are thinking about citizens' assemblies and participatory budgeting, and it includes the people who are thinking about how do you make government procurement work better, because the way in which governments buy and procure technology for their own use is really, really broken. And so it does go very broad. So you could say it's a little bit of a grab bag, but I try to write about them all because I do see them all as connected.
Danny Crichton:
Let me pivot to one final conversation, because obviously we've been talking very high level. We're talking about constitutions, republicanism, selectmen, first selectmen, and all the way down. But in terms of practicalities, right, we're in 2025. Government still is sort of running the same way it does. You had a piece recently on a report by Jen Pahlka focused on just tactical improvements to governance today. So nothing deep and constitutional, but how do we keep the lights on? How do we do it cheaper and more effectively?
You've covered so many different subjects on Futurepolis over the last six months. I'm curious about that balance between, "God, the system's so broken. I want a complete second system," straight out of a Fred Brooks software engineering book, versus, "God, we just need to make micro improvements to pieces of the system as we can, because government is always sort of cantankerous. It's like a ship of Theseus where you're constantly replacing each part," and yes, maybe in 30 years it's an entirely different government, but right now you can only mend one piece at a time. How do you balance those two different impulses in the work that you're doing?
Gideon Lichfield:
I'm not sure that... I mean, I think the gradual approach is the one that inevitably is going to happen. It's not like we're going to tear up the system. If I try to imagine what great government looks like 30 or 40 years from now, I think it contains a lot of the parts that we already have. I think we probably still have a Congress, we probably still have, hopefully we still have, a separation of powers and courts and executive. So those basic branches, I don't see going away. I think the stuff that I write about and that is interesting is, how do you make all of that work way better? And you could divide this into a handful of big buckets. One is the civic participation. How do you go beyond just having people vote every four years, having them have a say and have that say be taken seriously and have it influence policymaking?
So that's where all the participatory stuff goes. And you could see that as having a day-to-day experience of democracy instead of democracy as a thing, as a ritual, that you do every few years and then forget about. Then there's making the government itself run better. This is the stuff that Jen Pahlka writes about in her report on state capacity. It's about cleaning up procedures. There's huge amounts of bureaucracy and really, really complicated procedures, and if Elon Musk were actually trying to make government work more efficiently, he would go after those procedures instead of firing a bunch of people. But there is an enormous amount that could be done to just clean up the bureaucratic mess and make it operate better, and make it better, as I said earlier, just at procuring and developing its own technology, because government, like everything else, runs on technology, and it's terribly bad at managing it and at procuring it.
A related part of that, I suppose, is building services to citizens that are more like what they expect from consumer companies. Right? So that you can go and apply online for whatever benefits, or even have the benefits delivered to you automatically without you having to apply, as happens in Estonia, because the government has determined that you're eligible for this benefit, so you get it. Why should you have to make an application? So doing things like that to make it more citizen facing, to make it work better in that respect. And then I think the final part, which is really necessary and which nobody has yet figured out how to do well in any way that I'm seeing, is just how to make government, lawmaking and policymaking, move at the speed of technological development. Right? The process right now is that you pass a law after a lot of deliberation, a lot of horse trading, and then that law just sits there for decades, and you're trying to specify that law for technology that will probably have completely changed by the time it's passed.
The EU's AI Act was a good example of this, where they started drafting the act, and the whole generative AI explosion happened while they were in the middle of that process, and they had to make modifications to allow for that. And now agentic AI is about to explode, and I don't think the Act has any kind of conception of what impact that might have. And we're just going to see more and more of these developments happening very fast, and law and policymaking staying way behind. And so how you reform that process is, I think, critical to governments maintaining both effectiveness and legitimacy, and I don't see yet where that is happening. But none of this precludes keeping the existing institutions we have. It's just all about, how do you make them work at the speed of the 21st century?
Danny Crichton:
I had a journal article last year that was focused on using AI, and it compared exactly what you described in, I think, your third bucket: you apply for a mortgage, you can do it online, you fill out a couple of forms, it's automated. And the idea here was that there's a black box around human decision makers, and there's a black box around AI. The way we solve for that is having due process concerns around them, to ensure that decisions are made properly, and that if mistakes are made, whether by human or bot, so to speak, you have mechanisms to appeal and have the decision reviewed properly through procedures. So this idea that AI is bad or automation is bad? We've used automation in government for years. Since the sixties, the US Postal Service has read address labels without humans. It scans them. And that may seem like a facile example, but the answer is, of course we can automate things.
We can automate applying for certain benefits, as you pointed out with Estonia and others, where the question is: why should I have to fill out a form when you already have all the information you need? You're just asking me to fill it out and then double-checking that it matches what's actually in the system. Why don't we just solve that from the beginning? But let me summarize this whole conversation. Obviously a huge amount of the focus on the future of democracy and governance is concerned with bringing new technologies into it. If you look around the world today, there's a huge spectrum, from people who see the entire world collapsing and blowing up to people who are extremely optimistic for the future, who see technology as extraordinarily empowering. I'm curious where you stand on that spectrum. Are you very optimistic, very pessimistic? [inaudible 00:41:12] depending on what goes on in the next couple of years, but where's your head today?
Gideon Lichfield:
I suppose I am... I don't know. I was thinking this morning that maybe I would describe myself as a cautious pessimist, but no, I think long term I'm relatively optimistic. I think that short term, I'm pretty pessimistic. I think the institutions that we have right now are being very, very sorely tested. I don't know just how much destruction the Trump administration is going to do to the institutions of the country, but it could be considerable. And it could be that the US goes through a period where there is really no democracy of any kind for, I don't know, maybe it's decades. And I think in saying that, we also have to recognize that for a very large number of Americans, many more than just voted for Trump, it wasn't really a great example of democracy before this either. And that's one of the reasons why we're seeing the political reaction we have. It was creaking and it was elitist and it was out of touch with people's needs.
So short term, probably pretty bad. Longer term, no regime lasts forever. I think that people will start to apply some of these techniques that I've been talking about. They'll start to see the need for a rebalancing of power between very large companies and civil society. And there's a British author called James Plunkett who's written about this a great deal. But I think about what happened during the industrial revolution, so late 19th, early 20th century. The impacts of industrialization were absolutely terrible. Urbanization, disease, absolutely slave-like conditions in factories, child labor, pollution, all these terrible things. And it took a while for basically society to rally round and say, "We need laws to protect us from these bad effects." And that's where we got social safety nets and we got labor laws and we got unionization and we got, in some cases, national health services, we got social security. All of these different things that made it safer for people to live in an industrial society.
And we're at that stage now when it comes to digital technology. James Plunkett talks about physical land and digital land. Physical land, we more or less solved the governance issue a century ago. Digital land, we're only just beginning to solve, and everything that is happening now is happening in digital land. And in digital land, what we have is these immensely powerful companies, and we don't really have any of the checks, the balances, the social safety nets, the regulations that make sure that what those companies do isn't harmful to society. And I think we're going to have to go through that process, and that could take a while, but I think it will happen.
Danny Crichton:
Gideon, on that note, I think again, you're the only person who has done two Riskgames, now a podcast, and I know we have you in Lux Recommends and a couple of our other newsletters. So you've been in all of our products. Always great to have you here, and we hope to see you again in person at a Riskgaming event soon.
Gideon Lichfield:
Thank you very much. When do I get paid?