Riskgaming

Finding a Third Way on the AI singularity

Our guest today, Mike Sexton, believes that the AI singularity has arrived, and somehow, it ended up “on page C3 in the newspaper.” What he’s getting at is that the tools we have at our fingertips today, like ChatGPT, NotebookLM and others, are already so diversely capable that we have reached a point of no return when it comes to future societal change. We need to get ahead of those changes, embrace them, and offer new paths for everyone to take advantage of these tools.

Mike serves as the Senior Policy Advisor for AI and Digital Technology at Third Way, the prominent centrist Democratic think tank that emerged from the Clinton administration and the pro-tech, pro-competition left that was at the core of national power in the 1990s. He researches the changing policy landscape around AI technologies and argues that Democrats need a new direction other than anti-capitalism or existential risk doomerism.

Joining hosts Danny Crichton and Laurence Pevsner, the three talk about the rise of effective altruism and effective accelerationism (or e/acc), why improving government services is so critical for the future of the Democratic Party, AI technologies in robotics and research, and finally, why a bipartisan consensus is emerging on protecting America’s AI industry going forward.


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Mike, thank you so much for joining us.

Mike Sexton:
Yeah. Happy to be here.

Danny Crichton:
So Mike, I first met you at our Riskgaming session with Senator Mark Warner back in November, the one focused on deepfakes and deep-sixing. This was maybe a week before the election, and it focused on AI, election security, and deepfake technology. You joined us. But I guess I have just found out that you and Laurence have known each other quite a while, well before I got to meet you in DC.

Mike Sexton:
Yeah, Laurence went to high school with one of my best friends in college, and that's where we originally met. And I bumped back into him over Slack when he joined this group I'm part of called Foreign Policy for America. And my eyes lit up and I was like, "I remember that name," and then reached out to him. So it's cool to be always reconnecting with folks from... That's why you go to college, right? To expand your professional network. It's not for the education anymore. We have chatbots for that.

Danny Crichton:
Yeah, no one is there for the education, if you saw that New York Magazine article a couple of weeks ago. A huge focus of it was, "I'm here for marriage, and I'm here for making money." One would hope that both of those are somehow related to-

Laurence Pevsner:
And to find co-founders.

Danny Crichton:
And to find co-founders, which we support, to be clear. But one would hope that a liberal arts and a depth of education in engineering would be required for this. But you joined our Riskgame back in November. We've had a bunch of folks talk about the game. But I'm curious six months later, believe it or not, what do you remember from the experience?

Mike Sexton:
I remember pulling open my card and seeing Mark Warner had been assigned this comedian who makes fart jokes, and I pull out my card to see what role I am, and I'm the Secretary of State. I just gagged. It's like, this guy is on the Senate Intelligence Committee, and I'm playing the Secretary of State.

Danny Crichton:
Random dice for the role. I do love this, though, because I do think the onboarding experience of these is, you walk in, you don't know who anyone is, and instead of doing the LinkedIn search of, "Who are you? Where did you come from? What are you lobbying for?" et cetera, et cetera, which is still normal at a DC cocktail happy hour, you just pull out roles, and some people get really funny roles. I will say that Senator Warner chose a funny role. He asked us for one of the... We had five or 10 comedic roles, and he did choose that one. Thankfully he didn't get convicted lobbyist, which, I thought at the last moment, "Oh, we should pull that out in DC," compared to New York. It's much funnier up here than down there.
But it was so great for you to join us. But on the LinkedIn introduction, you are the senior policy advisor focused on AI and digital technology at an institute called Third Way, which is trying to connect a bunch of different ideas around economics, AI, digital technologies, together around policy issues. Maybe talk a little bit about the work that you're doing and what the institute does more broadly.

Mike Sexton:
Yeah. So Third Way was founded during George W. Bush's first term by a bunch of alumni from the Clinton administration, who were interested in establishing a Democratic think tank that was focused on issues that specifically matter to moderate voters, swing voters. So it started originally very focused on gun control and appealing to moderate gun owners. And so that's really always been our philosophy. We've expanded into gay marriage, and we were very early being pro-nuclear. I came in three years ago, first working on cybersecurity and a little bit of privacy, and then my role expanded to cover artificial intelligence with the release of ChatGPT. And it's been a really interesting evolution as a policy issue throughout this time, because the Democrats have had this long-standing flirtation with a very anti-business political mindset.
I won't put my finger on where it comes from, but there's a very anti-private-sector perspective, this view that companies by definition are exploiting people. So we are trying to, I think especially with regards to AI, reframe that to say, "The private sector is where innovation comes from. It's where research and revolutionary technologies come from. The United States government is not going to be the ones building the most advanced AI and nuclear fusion reactors. It's going to be companies." And so Democrats need, I think, to have a political mindset that is a little bit more opportunistic about these innovations, and instead of seeing the companies creating them as the problem, think of how we are going to be using these innovations to help the American people, both with the hands of government and just with the hands of the private sector acting on its own, as people figure out what these tools are and adopt them into their lives.

Laurence Pevsner:
To that end, this is such the topic right now. It seems like you stumbled into what is the hottest topic in party politics. Left, right, center, everyone wants to figure out what we are going to do about AI, and in all of it, is it going to become AGI, and if so, what's going to happen? That is the debate, and you're right at the center of it. So one thing we discussed early on, Mike, when you were joining us for our Riskgame, and afterward when we were texting back and forth, was how you felt like there needed to be some kind of framework to respond to e/acc, this Twitter acronym you see all the time. I wonder if, just for our listeners, you could start by saying what that movement is, what you think its positives and negatives are, and why you're proposing an alternative framework.

Mike Sexton:
E/acc, or effective accelerationism, is an outgrowth of the effective altruist movement, the philosophical philanthropic movement that got big under Sam Bankman-Fried. That movement had a large flirtation with a lot of AI-doomer philosophy, and people who thought, long-term, I want to make the best of my financial donation. And if you believe that a likely outcome of the world is an AI apocalypse, then it makes sense to be donating a lot of money to prevent the AI apocalypse, potentially taking some radical measures like a six-month global pause on AI development.
But this then inspired its own evil twin, as I describe it, which is effective accelerationism, which takes this philosophy and just turns it on its head. In the same way effective altruists say we want to benefit humanity in the best way possible along utilitarian lines, they say we also want to do this, and AI is the way we're going to do it. And so AI must be invested in, must be sped up. We cannot slow this down at all. People like Marc Andreessen have had some controversial takes about this, saying that if you slow down AI, you are literally murdering people in the future who could be getting life-saving treatments with the AI that you are slowing down.
This gets a little controversial in terms of that kind of language, but I think at the core, in terms of saying innovation is what drives the improvement of society, there's a philosophical nugget at the center here that you can't just wave away. So I think when Democrats look at the whole spectrum of this, they look at everyone from Marc Andreessen to the polar opposite, Eliezer Yudkowsky, who has been pretty popular with effective altruists in the past, who believes that there is a more than 99% chance that AI is literally going to kill everyone. Democrats can't be sticking to either of those poles.
It's a little cliché to think we need a Third Way between these two complete opposites, but yeah, you obviously do. So I lay out a three-fold agenda for Democrats. At Third Way we love doing things in threes. So the first agenda item would be advance: advance the state of the art in AI. We should take off the agenda the idea that we should be doing a six-month pause. Unless China happens to get out of the car and go take a smoke break from AI development for a few months, we should have our feet on the gas.
Second is protect. There are areas, especially like deepfake pornography and child sexual abuse material, where there are urgent problems with AI that we need to solve. We should be solving those problems and they should be targeted. They should be prioritizing things that are already manifesting, not things that will hypothetically happen five to 10 years in the future. And the last is implement. This is where I'm worried that Democrats are likeliest to fall down: we need to be using AI to improve government services, to improve education, to improve public safety, public sanitation. There are so many ways we can be doing this.
If you think to the next presidential election and the election after that, I don't know, I can't tell you where robotics will be. But I can say with a hundred percent certainty it will be more advanced than it is today, and it will probably surprise you. So the Democrats' agenda should have a plan. What are we going to do with AI? What are we going to do with robotics to improve Americans' lives? So those are what I think are really the top three things Democrats need to hit to be able to talk to people who are nervous about AI, but also to appeal to the people who are benefiting from AI and who are actually implementing these AI benefits in companies themselves.

Danny Crichton:
We were talking early on in the show, even in the pre-show, about cheating. One of the interesting things I think about with AI is the rapid adoption of these tools by users, right? So you look at ChatGPT. Fastest app in history from zero to a hundred million users. Launches in late 2022, on November 30th. Within, I think, two months, it's up to a hundred million users. Now I think it's probably approaching a billion. If you look globally, outside of the United States, kids are using it as young as nine and 10. They're turning in homework assignments in fifth and sixth grade. You go into college, people are doing this here. I was down at a military base in the South, and half the flag officers in the room raised their hands when asked whether they used ChatGPT at work to fill out mundane paperwork, probably not even legally. This is not DOD-approved, but people are using these tools.
And so on one hand, when I think of democratizing the benefits of a lot of AI, people have access to these cutting-edge models. They may not have access to pro or some of the more expensive versions, but the other models that are publicly accessible are pretty good. I think the big question becomes how much do you see concentration of wealth around these sorts of... So inequality, not so much in the usage of these tools, but in the same way that social media was very democratized and open to everyone, yet also concentrated wealth in a narrower group of people. How does that apply to AI? And second, if I think of Democratic politics, which I don't do, certainly not as much as you and Laurence do, but when I dovetail and I see, look, there's a group on the far left that's fairly focused on inequality. There's a group towards the center that says, look, there's a bunch of other things we should try to focus on. Is that the right lens to start to think about an AI technology policy for the left?

Mike Sexton:
I think Matt Yglesias was just writing about this in his email newsletter, that there are Democrats whose analysis of inequality focuses on who has power and how do you take power away from them. And there are other Democrats who are more focused on the outcomes, who are focused on what benefits people are getting from this company, this service, this government institution. And at Third Way, we're very much the latter kind of people. We are more skeptical that the big antitrust battles that are mostly pushed from the left would actually result in improved competition or improved quality of service for Americans. And I would say, when it comes to this question of inequality, I do think it makes complete sense to me to be skeptical that OpenAI could become the next unbreakable monopoly.
It says something that a company that was founded in 2016 is now coming around to possibly be the new biggest company. That says something about our economy and the innovation that we don't appreciate through our traditional antitrust lens. But it also, I think, underscores the importance of open-source artificial intelligence. And this is an issue that has been tricky to parse. When I was writing in 2022, 2023, these large language models were pretty new. The question of catastrophic risk was a lot more front of mind. But as the open-source models have come out and advanced and continue coming out, we have seen that the apocalypse has not happened.
And so the highly risk-averse position, which was logical, I believe, two years ago, was to say let's hold off on open-source AI for now. I think the tables have really turned on that. The release of DeepSeek in China as an open-source model has really demonstrated how you can take an open-source model, or just take other open-source models and copy them, and build something new that is very useful and that can give you significant market share. It's an interesting issue that's played out politically. It's really the libertarian right that is, as I see it, most vocally on board with open-source AI. And I think Democrats, if they want to talk about AI competition, if they're serious about not allowing any company to become a monopoly, the best way to do that is to make sure that there is a robust open-source AI market and the United States is in the lead.

Danny Crichton:
It's interesting you point this out. I think if you go to the DC debates, open-source has been this weird kryptonite for some, the superpower for others. We hosted a game last year with a bunch of folks in DC, focused on this very subject. And people still, A, don't understand open-source versus closed-source. A lot of people want solutions that are impractical, like, I want open-source but only for American scientists who are pre-approved. And it's like, well, either the code is open or the open weights are available or they're really not, because otherwise you have a Streisand effect: you can just copy it and other people can grab it. And so that dovetails. Well, I agree with you on the libertarian right, which is, I have this code, I've given away the copyright, everyone can use it. It's beneficial to humanity.
I think the second piece here, which was hard for a lot of folks to understand, is what I like to call the good-enough thesis of AI, which is that a lot of the open-source models may not be at the frontier. They may not be the best in the world. In some cases they are briefly, or whatever the case may be, but in many cases they are not the frontier and best models. But they're good enough for most use cases that, if you're choosing between a proprietary model that costs money and requires you to do expensive inference versus an open-source model that's free, with a license that lets you use and integrate it in whatever way you want, those open-source models look very, very attractive under certain circumstances. And so I would like to see a dovetail between left and right to continue to go down this route.
The challenge has been national security, which we spend a lot of time on, and I don't know how much you do as well. But on the national security front, open-source is anathema to almost all security objectives that the Pentagon and the National Security Council care about. And so it's been a very delicate dance between different constituencies, different groups: how fast do you want to accelerate economic growth versus how much do you want to try to protect national security? And I don't know where all that goes. You've written a couple of memos on this subject, so I'm just curious: when you lay out your agenda, it's one thing to say you should have one, but when you think about laying out the agenda, what does that look like?

Mike Sexton:
So a couple of months ago, I published a memo with the explicit title, Open-Source AI Is a National Security Objective. I was not the first expert to be saying this. There's a guy named Keegan McBride at Oxford University who had published articles more or less making the same case. My memo happened to be right after DeepSeek, which was not planned. But it did underscore this point, which was: if you look at DeepSeek and you're shocked and you're worried China is going to overtake the United States, there is a very mathematically logical corollary to what you are feeling, which is that the United States needs to be building the best open-source AI in the world. There are some people, like a group called the Open-Source AI Foundation, or OSAFE, that's come about in the last couple of months, who want to force the U.S. government to only use open-source AI in contracts.
That's an interesting take. I don't think it's necessarily what we are saying needs to happen. But the idea that the best open-source models in the world should be American is, I believe, a pretty logical statement if you were surprised by DeepSeek. If you are using an AI model to build the next pair of smart glasses or to be helping with taxes or immigration forms or something, then no, we should not be using open-source models from China that do not know about the historical events of Tiananmen Square and cannot tell you two forms of oppression that the Uyghurs face. We believe in free and open access to information in the United States, and that's going to require these open-source models that are built without a censorship regime on top of them.

Laurence Pevsner:
In the Riskgame that we play on this, one of the key lessons that our players learn is that the reason why companies will invest in open-source is not for the betterment of humanity, as much as they might say that; it's actually to undercut their competitors, right? This is Meta's play right now. The whole reason Llama exists is to make it so that their direct competitors that have the closed-source models are undercut by having this free open-source model people can use, and therefore the value of the closed-source isn't nearly as high.
Taking that lesson, and thinking about how essentially there are always monetary incentives behind all this, and going back to your idea that there's less doomerism now when it comes to AI, one thought I have as well: isn't there less doomerism simply because these models are making a lot of money, and so people are incentivized to want the AI to keep advancing, to keep going? And at the same time, we haven't actually reached the threshold where we would be worried about a threat. The predictions of doom would've come into play during an AGI scenario, and that's if you put aside Tyler Cowen, who I think is quite premature in his declaration of AGI. But maybe this is a good transition to just asking you: you lay out your own definition of AGI in one of your memos. Maybe you can talk about what you think it is, how close you think we are, and whether you think that that doomerism threat has any merit or not.

Mike Sexton:
My definition of AGI is the dictionary definition of an AI that is able to do anything a human can, which unfortunately comes with all sorts of ambiguities. Is it better than the best human? Is it only as good as the best human? Because the problem with this is, once you've built a chess engine that is as good as the best chess player, you've actually built a superhuman chess engine. There isn't some point where chess-playing AI sticks around at artificial general intelligence and does not cross over into artificial superintelligence. The crossover is immediate. NotebookLM is not just as good as any human at taking notes from a document and then turning it into useful ways to consume that information. It passed that immediately.
So I see models like o3 and how there are still some areas where they are not perfect, but I've also been looking at ChatGPT really since it came out and just thinking, well, there is no one human on earth who knows all the information that ChatGPT 3.5 has. So if you had asked me 10 years ago, knowing nothing about any of this that would happen, I would've looked at ChatGPT 3.5 and said, "If that's not AGI, it's somewhere in the ballpark at least." Because I might know more than ChatGPT 3.5 in certain areas where I'm an expert, but I definitely don't think there is anyone who has its level of knowledge of medieval European history and languages spoken in Southeast Asia.
By combining all of that, I would've already said that looks like a significant milestone toward the singularity. I think there is a little bit of a bias among the experts to try to tamp down excitement. What's happening is that I'm publishing a memo about the singularity, literally saying we may have passed the singularity, and this memo is probably not going to be on our website's front page. We're mostly focused on the same things that are top-priority issues for Democrats, like budget issues, like immigration. The fact that the singularity passed is on page C3 in the newspaper.

Danny Crichton:
Well, I also just think... There was a moment when I think back two, three years ago where existential risk was on the front page of everything, people were getting a lot of access on Capitol Hill. It was very, very influential. And then I feel like there was this gap where a combination of, I think, existing industry trying to lobby around this and say, "Look, this is ridiculous. You can't make policy based off of the worst case scenario." And then two was just, it just didn't come together. It didn't happen, right? You're like, okay, we just invented this massive technology. Everything's going to change. And I feel like people using it were like, "This is amazing."
And so it's one of the few examples where I still think these existential risks exist. I am sensitive to the idea of superintelligence. There is maybe a singularity. Not in a deep belief, but in a, I get it. Enough compute power, you can scale up, chips are getting faster. At some point we've seen a few examples of AI emerging out of its shell, so to speak, out of the computer, to be able to control the world in unique ways. I think this becomes a little bit more heightened as you get to robotics. And maybe that's a good stepping stone for you, because that's the other subject which, in our world, very much dovetails with this one.
So we have one of the leading AI companies in robotics, Physical Intelligence, or PI, and a huge part of what they're focused on is still super nuts, which is collecting data about the world and building an operating system for physical intelligence. But I'm curious, when you think about the robotics side of the equation, because you've been working on a working paper for a while on this subject, what's the future there, and do you think that you're going to see a resurgence of some of the concerns? I call this the Hollywood effect, as do many others: suddenly you see a Terminator drone, or you see the uncanny valley of a bipedal machine walking through your office, and you're like, "Oh shit, it's here and it's going to kill me."

Mike Sexton:
It's a good question. I have found people who are more pessimistic about the future of humanoid robotics. It seems to me that if the trajectory continues, there is, I'd say, at least a 50% chance that there is some commercially available humanoid robot that you can get in your own home to take care of chores by the next presidential election. 1X Technologies, for example. There are humanoid robots in people's homes for testing, and as you gather more of that training data, and often you can do that training in simulations now, thanks to Nvidia tools like Isaac Sim, there's a lot that's being done to bring robotics to that ChatGPT moment. And I think that that ChatGPT moment is probably in the next five years for robots. My personal take. And then the question from there really becomes, what is this going to do to the economy? I have a friend who was working as a researcher, who, just like you, went to grad school wanting to be a researcher or a research assistant in political science, and I think has seen ChatGPT come in and more or less run roughshod over his preferred career track.
If your dream is collecting information from internet sources and synthesizing it in a way that helps your supervisor, there are tools that already do that better, and you should have been looking at different career tracks. And with robotics, I think we're going to see a similar level of jobs having to move, where my hope would be that it's not necessarily far fewer janitors who are employed, but that janitors will be overseeing robots that are doing their job much more effectively at a much higher volume than they would be able to. And then this brings up questions of, okay, in a nursing home, what jobs should be done by the robots versus the humans? Because I certainly don't think nursing homes should be fully automated. I do think there should be humans in nursing homes, including with robotics, but the humans should be in specific roles. And I think that's going to be a lot of the future of education, of public safety.
It's going to be robots that are doing a lot of the hard labor and humans who are in a role of taking responsibility for that labor, being accountable if something goes wrong, interacting with people to explain what's going on. We've already really seen this evolution in the military with the spread of lethal autonomous weapons. People talk about lethal autonomous weapons all the time as a category. It drives me insane because if you actually talk about the weapons themselves, this naval drone, this sentry gun, this suicide drone, it becomes pretty clear, okay, the human in the loop... The Pentagon has not just built these and then thought, oh, shoot, we forgot to put humans in the loop. They have humans in the loop. The humans do things in the military, they just do things with autonomous weapons now. And that's where all these other sectors of the economy are going to be going. How do we make sure that those jobs, that people are trained for those jobs, that those jobs are plentiful and can be created in the future?

Laurence Pevsner:
This goes back to that famous line in the IBM handbook. "A computer can never be held accountable." And that's why you have humans in the loop, right? That's why you are suggesting humans in all these jobs. But there does seem to be some kind of, I don't know, contradiction here, where if we really are believing that at this point, if robotics has had a ChatGPT moment and if you're arguing that we already have AGI within the field, why can't they be held accountable? Hasn't that changed? Now the computer has its own intelligence. Why do we need the human to be accountable instead of the computer?

Mike Sexton:
Organizationally, if you are running, let's say Third Way, and you want something done, you could replace me with an AI that is just looking at what's on the internet and synthesizing it into some sort of lowest common denominator that sounds most appealing for Third Way. And it's possible maybe that ChatGPT 5 could be good enough to get rid of me, but at least from where I see right now, it really does matter actually to have a person who can make dispositive choices. I think the AI tends to narrow itself to this lowest common denominator. I think that is structural. I'm not sure that is going to go away in future models. But for me, as a human in this role, I am able to say, "I think we should take this position on this issue, and I am aware that this is going to tick some people off. And this is how we should deal with some people being ticked off by this position."
I don't think AI is necessarily able to lead with that... AI does not have that leadership capability to say, "We're going to tick some people off. This priority is going to the side." I think if you ask AI, if you give it all of the notes from the past two years of what lawmakers have said about AI, about the need for regulation, then that AI is more likely to say, "Okay, well, here's our structure for a holistic AI regulatory framework." It's not going to say, "Hey, actually the holistic EU thing, they're doing that in the EU, it's not working out so great. Why don't we take a beat and say, look, when we're back in power in four years, maybe we can re-approach this regulation issue. But for now, let's be really targeted and strategic." I don't think AI is necessarily giving those answers that could actually disappoint some people.

Laurence Pevsner:
Isn't there a fear though, that you could replace the human by saying, "Okay, AI, yes. Don't just be your normal AI self. I want you to behave like Mike Sexton. Or even, I want you to behave like Abraham Lincoln. Make a decision that a great leader would make and then we'll follow whatever you tell us to do." My worry would be that the more that we upload our own ideas, ways of thinking, the more that the AI will be able to mimic those too. And that seems like it's quite a threat to human leadership, as you put it.

Mike Sexton:
Yeah. I'm just not sure that that would scale very well. If that AI at some point makes a choice that you don't like and you have to reverse it in three or six months, are you going to replace the AI? Whereas with a person, maybe 12 months into the future, you can say, okay, we had a hard talk last year that this specific way we're going about things doesn't work, and I know this person is accountable for that failure. I don't think you can scale responsibility with technology. I don't see that in the cards in the near future.

Danny Crichton:
What is top of mind for you next? Are you continuing down... I know you're a fellow in AI, so it's obsessive and obviously there's no way to escape it. But when you think of the issues that are coming as we're heading into mid-2025, it seems like a breakout year across every industry around artificial intelligence. What keeps you up at night? What do you focus on in your work in the next couple of papers?

Mike Sexton:
I think the interesting, challenging, exciting thing right now is there's a lot more bipartisan consensus on artificial intelligence than people realize. I think the media naturally focuses on areas of conflict, and so it's interesting to be working on a specific policy issue where there actually is not this partisan rancor that there is on many other issues. So I'm very interested in building coalitions with Republicans who have the same views on AI, and I am interested in that from a place of deep respect. I read Atlas Shrugged in high school. It did not change my opinions, but I thought there's something interesting here to the thesis of Atlas Shrugged, which I think is basically that entrepreneurs, private citizens, inventors are the people who really push society forward, and government is effectively a parasite that tries to stop these people from achieving all the success they deserve and then slows the world down.
I don't completely agree with that thesis, but the idea that private individuals do invent the technologies that actually revolutionize society is a pretty hard one for me to disagree with, working right now on this issue. And so I feel very comfortable engaging with Republicans, Libertarians, folks right of center on these issues. And they have a lot of interesting policy ideas too, that don't fit what you would assume in some sort of... like George W. Bush's old party. So I'm really looking forward to continuing that dialogue and finding a special place for Third Way speaking to the center and to people who see AI as one of, if not their top, issue right now.

Danny Crichton:
Well, that sounds great. Well, Mike Sexton, senior fellow at Third Way. Definitely check out his papers on AI, robotics, and the singularity, all of them hot topics right now. But Mike, thank you so much for joining us.

Mike Sexton:
Thank you.