Riskgaming

Intel, chips and America’s future

By now, we’re all familiar with the crisis that has faced America’s chip manufacturing industry. Intel remains the last bastion of homegrown chips (if we exempt new developments from TSMC and Samsung). Yet, Intel’s stock has been bludgeoned, down more than 55% over the past five years as Nvidia skyrocketed about 1,475% in the same period. What would it take to rebuild America’s chip capacity? Do we have a chance to build our own TSMC?

That’s the question that ⁠Kyle Harrison⁠ has been asking. He’s a general partner at ⁠Contrary Capital⁠, and alongside his co-author ⁠Maxx Yung⁠, the two wrote a new blockbuster report called, ⁠“Building an American TSMC.”⁠ It’s a magisterial look at the accidental history, enervating present and intricate future of semiconductor fabrication in the United States, and what it would really take to maintain and grow this critical capability.

Kyle and host ⁠Danny Crichton⁠ talk about Intel’s recent and historical woes, how to avoid another massive manufacturing failure like General Electric and Boeing, the complex barriers to entry in the semis market, and finally, why ecosystem development particularly around specialized labor is so crucial to protect.


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Kyle, welcome to the show.

Kyle Harrison:
Thanks for having me. I'm excited to jam.

Danny Crichton:
Absolutely. We were just talking about... We were chatting last week about Intel, semiconductors, SMIC, a whole bunch of stuff going on over there. It must be the fastest moving story, even in a fast-moving industry, the fastest moving story going on over the last couple of days.
There have been new export controls. The export controls were pulled back. They seem to be put back in place again. We've had this direct intervention of the president into Intel in a very unique way that I think is fairly exceptional. So, there's just so much to cover over here. What's on your mind right now?

Kyle Harrison:
What's interesting is that I feel like I have been living this constant volatility for the last six months or so, because we've been writing this piece on and off for the last six months. And every few weeks, it felt like there was something where it was like, "Man, dang it. If we could have been publishing this now, it's great timing. There's something going on. It's right in the moment."
So, I feel like what I've come to appreciate is that this is a space that never stops moving. It's just shining its light on something else. Intel just happens to be at the center of so much of what's going on right now, I think for the right reasons, and I'm curious to see how it shakes out. But it just never stops. This is such a critical industry that we can't stop paying attention to it.

Danny Crichton:
One of the things I'll say on writing, I mean, we were in London a couple of weeks ago and they just launched the Strategic Defense Review, but they'd struggled with the exact same problem, which was they kept rewriting it, and then world events would intervene. Famously, Ukraine had this drone attack in Russia, and then overnight it was like, "Well, we need counter autonomous weapon systems," and that's nowhere in the report. So, we need to rewrite the whole thing based on what's going on right now, and you end up losing control.
But you've been working on this piece around Intel for a very, very long time, and on semiconductors more broadly. I'm curious because, on one hand, this is an industry, and Intel is a 50-year-old company. It's decades old. On the other hand, it feels like it's all coming to a head. This is the moment that either makes or breaks it. This is the moment it stays public, maybe it goes private, maybe it's a PE deal, but it definitely seems like a tipping point that they have not experienced for decades of its existence.

Kyle Harrison:
I mean, absolutely. The thing that is critical to understand about Intel is that it has made a number... We were talking about this before we started recording. A number of fundamental, just huge mistakes over the last 20 years. But because of a few choices that they made, one of which was basically becoming the only American company that did not stop manufacturing its own chips while everybody else fell away. And those fundamental decisions that they made have placed them in the crosshairs of this really critical point.
What's terrifying is that whether you like it or not, whether you agree with the decisions that Intel's made, whether you agree that they've been-

Danny Crichton:
[inaudible 00:02:37].

Kyle Harrison:
... well-governed or not, right? Most people don't. Most people don't. But whether you like it or not, Intel remains the only US-based company that has even a chance at being able to achieve cutting-edge chip fabrication.
They're not doing it well. They're continuing to drop the ball. It's not that they've suddenly fixed everything, but it's the fact that they're the only ones who even maintained the possibility of that thing. And now, as we approach issues like growing tensions with China and things like that, and especially tensions over Taiwan, that becomes a more important question than ever before.
That's exacerbated by a massive amount of energy and hype and capital around AI, which also has semiconductors at its center. So, I think that's the reason why Intel becomes this massive flashpoint for everything any of us are talking about.

Danny Crichton:
Well, it seems true... It's interesting because they've placed themselves, I think completely unintentionally, in a very strategic place. They have been one of these folks who has gotten every decision wrong, but you ended up at the crossroads, and so you're like, "Look at how smart I am. Everyone wants me right now at the ball."
But I'll just go over a quick history of the [inaudible 00:03:43] entirely. Arm is the complete owner of the architecture for all of mobile devices around the world. Intel at that time was [inaudible 00:03:50] focused on desktop, the famous Wintel monopoly, focused on cloud and data centers. And then, obviously, more recently... also at the time, they also missed out on contract manufacturing. So, by not being the fab for others, as opposed to just being an exclusive vertically-integrated fab themselves, they lost the ability to be dynamic, to make a much more resilient business and really build up the fab business.
And then, now, you have the Nvidia problem, which is that AI is so hyped. They really don't have great entrants here. Companies are designing their own chips: Amazon, Google, whether it's tensor processing units, et cetera. All that is outside of the Intel world entirely. And as of this morning, as we're recording this, Nvidia just crossed the 8% barrier for the S&P 500. So if you're buying the S&P 500, eight out of 100 points of that is coming from Nvidia stock alone.
So, Intel has just sort of missed every strategic motion here for the last two decades. Obviously they made a lot of money, and that's okay. Boeing made a lot of money. There are a lot of great examples. GE made a lot of money over the years. The difference is that Intel feels like a one of one that could go to zero, and if it's not protected, if it's not saved, there really isn't a backup choice here.
I do think it's interesting if you look at GE. We did not save the company. GE Finance basically blew it up. We lost a lot of different industries. You look at photovoltaics, much of the energy industry, some of the core manufacturing that GE was known for: it has been outsourced and is gone permanently. Boeing seems to be the closest parallel case to Intel today, a company where, if we lost airframe manufacturing with Boeing, you sort of lose that industry entirely and it's not going to be able to be recovered. No offense to Boom and some of the new entrants, but that doesn't seem possible, and Intel seems to be in that place where it's not the company you wanted but it's the company you need.

Kyle Harrison:
Well, and I would even add to your list. There are a couple of things to touch on there that you're absolutely right to unpack. But the first thing I'd add is that even within its core business of manufacturing chips, Intel has missed... It's not even just missing some of these paradigm shifts that would have sort of evolved their business, required them to get into new industries. I mean, they, at one point, had the chance to buy Nvidia and were sort of shot down by their shareholders, and I think it was for $20 billion they wanted to buy Nvidia. So, there's a ton of things they've missed.
But even more critically, EUV technology: they were one of the first to invest in the technology and basically just kept sitting on their thumbs long enough that they allowed TSMC to run wild with it, and it took them several generations before they caught up. They're doing the same thing now where they've invested a massive amount in high-NA EUV, which is yet again another opportunity to invest early. They've sort of cornered the market on the machines that are coming off the line in 2026. So, there's a question of whether they will be able to make that work again, but the data indicates that probably something will go wrong there. The technological misses, you're absolutely right, and the list is longer than we have time to go into.
The second piece of it is that Intel is absolutely a victim of this sort of height of shareholder capitalism, where you basically look at the playbook of Jack Welch, and it literally has just hollowed out all of these critical industrial businesses through either him or his sort of apprentices. Intel is absolutely one of those, right, where you look at the history of Intel CEOs starting about 2005. It's when they started hiring people that had more of a finance background than an engineering background, eventually hiring people that had never been an engineer. Massive amounts of stock buybacks, very little investment in innovation, into R&D and maintaining the cutting edge. That's right around the time they stumbled on their 10-nanometer chip.
There's all this historical context to indicate that Intel, like many of the companies you've mentioned, got hollowed out by the way that people thought about building companies and maximizing shareholder value, rather than actually maintaining a cutting-edge innovation lead. All of those things are true. I think that one of the things that Intel has that is so unique is that, yes, aerospace is critical, and if we lose the ability to manufacture planes in the US and sort of cede that territory to Airbus, that sucks for us. That limits us. That cuts us out of an industry. That cuts us off from the cutting edge. There are all these fundamental issues with that, and that's true of a bunch of industries.
What's different about this, and we make this point in the report, is that the reason this is so critical is that this is a fundamental building block for cutting-edge AI. And that, for the first time, when you really look at a lot of the commodities that wars have been fought over, whether that's oil or plutonium or whatever, having the thing... In the past, having oil doesn't necessarily give you a nonlinear ability to access more of the thing. Maybe it gives you the ability to grow sort of more linearly, but AI is basically self-replenishing, where if you have better AI, you turn around and AI gets better with the help of AI. It's sort of exponential, right?
We're cutting ourselves off from that at a point in time where we are staring down the barrel of a massive potential threat of a conflict with China. China is not in the same position. China is increasingly building up its domestic capacity for chip fabrication, both of sort of more legacy nodes and cutting-edge nodes, and they are sort of the aggressor toward our supply of cutting-edge nodes from Taiwan. So, I think that the interconnection to these fundamental first questions that we have to answer about technology, I think that's what's different from something like a Boeing.

Danny Crichton:
We spent a lot of time on this issue. We just had a couple of senior administration officials here yesterday, and this was obviously part and parcel of the topic that's going on right now. I'll put it in two categories. One is for those of us in Silicon Valley, we love creative destruction. We love Schumpeterian models. Hey, you're an old stodgy company, you lost. You get rebuilt. You become the fertilizer for the next generation of folks. The smart people, the entrepreneurial people leave. They go build new companies. They build them up. And that's been the history of Silicon Valley for 50, 60 years. Look, it's called Silicon Valley for a reason, as we all know.
Now, the challenge is fabrication is so expensive that it is not the same as it is in almost any other field. Boeing, again, is an excellent parallel. You want to build an airframe, you need the world's largest factory. There are tens of thousands of workers who are very specialized to go do that. It is very, very complicated work. You want to go build a fab? It is tens of billions of dollars to build a fab, both in terms of the architecture and the scale of that architecture. It's actually very, very hard to make the clean room that is required to go do this. You have to buy EUV equipment from ASML, which is a quarter billion dollars plus.
This is not open for startups. This is almost straight out of Econ 1, sort of a textbook barrier to entry. So, creative destruction just doesn't apply in the same way. Once you break this up and you take two, three years off... this gets to your point about EDA, or electronic design automation software. As soon as you sort of give up a lead here, unless other companies stumble, you will never catch up. It does not seem that TSMC, Samsung, SK Hynix, depending on what chips we're talking about, have any sort of problem with their culture today in terms of continuing to maintain their leads and continuing to run forward. So, that's one part, the creative destruction piece.
The other that I think is really quite interesting is obviously over the last 20 years when it comes to US-China, China's used creative destruction very effectively as part of state-directed capitalism. They've taken over photovoltaics. They've taken over electric vehicles. There's this fear that chips is next. You have SMIC, you have Yangtze Memory, a variety of other companies that are building the next generation foundries there.
Does history repeat itself? I feel like it's not obvious, right? This is the most complex technology in the world. It requires the best skills in the world. It requires technology that, in many cases, is like an N-of-10 kind of technology. It's not millions of copies. EUV equipment is in a couple of fabs all around the world and that's it. Is this one where they get held up and it's just not as easy as it was?

Kyle Harrison:
I think that this is one of the sort of fundamental questions that the venture and startup ecosystem is not as well-equipped to answer. It's not that there's not an answer, but it's that our frameworks are very limited. They're the same frameworks that did a very good job building photo-sharing apps and couchsurfing marketplaces and things like that. They were very effective at a lot of things. But there is a fundamental element of the type of complexity that we're talking about that these ecosystems are not really built very well to answer.
So when you think about the creative destruction piece, I mean, when you're talking about Boeing, you talk about Boom. One of the things that I very much want to emphasize, and we emphasize it in the report, and I emphasize it every time I talk about it: this is not to say that the startup ecosystem is trash, right? That there's nothing good going on in new-age computing, in new-age fabrication, in new-age design. It's the same thing with Boom in aerospace. It's not that those things are inadequate. It's that they are inadequate for the specific problem that we are trying to solve, which has a very specific time series attached to it.
There are a lot of really great things happening that are going to be important for the next 20 years of computing, in the same way that Boom is going to be really critical over the next 20 years of aerospace. But what we're talking about is that there is a fundamental risk of conflict with China over the next five years. It's a very short time series. When you think about the things that [inaudible 00:12:47] is talking about when he lays out a time series, it's very, very short. And China has done, to your point, a lot of things that are very critical to set them up for a lot of success.
When I think about how history is going to play out and what's going to happen here, this is one of the reasons why the piece is so fundamentally focused on: Intel is our only hope, and that is a waning hope. When you see things like what Intel put in their most recent quarterly earnings call, where they basically put up their hand and said, "We're going to do our best with 14A to be able to find customers for this. And if we can't do it, it might not be economical for us to do this." I think of Craig Barrett... I mean, his piece where he laid out what his blueprint for Intel would be basically said that that's bullshit. That it's such a cop-out, that they're basically begging for help rather than innovating, rather than changing their business and changing their incentives.
Intel is sort of copping out and saying, "You guys better freaking help us or else we're going to die." But the reality is that with the Intel that we have, that is the reality of their situation. That's what they're giving us. It's not great. It's not what every leader who could be leading the charge at Intel would offer us, but that's what's being offered. So, we need to come up with some solutions. If we don't do that in the short term, we're going to have a fundamental problem. If we lose all capacity in the US and there is a conflict with China, I mean, that's a really, really difficult obstacle to overcome.
And to your point about needing somebody else to stumble for us, I don't think anybody else is going to stumble. I think that we have a very critical decision to make about whether or not we are going to invest. And that's not just capitalists investing and putting money into Intel, whether that's spinning off the foundry or investing in the foundry as a subsidiary of Intel, but that's also the US government putting massive incentives on trying to pull as much design work into Intel's foundry as possible. There's so much that needs to go right for us to maintain a snowball's chance in hell of having any fabrication capacity domestically.

Danny Crichton:
Well, and I think what you're getting at, and this applies to a lot of the really challenging problems in industrial policy in the United States, which is really ecosystem development. One of the other subjects that came up in our conversations recently is obviously shipbuilding. Everyone's talking about shipbuilding. I am the most negative on American shipbuilding. It's not coming back. You let it go.
But it's a good example of this where, again, a lot of different specializations are required to build a ship. You need different people with different skills. You need painters who know how to paint the hulls. You need people who know how to weld. And a lot of these are apprentice-based skills, so you actually need to learn from someone else who knows those skills. So if you lose a generation of people who know how to build, you can't really just recreate it. There's no school where you open up a textbook, Welding 101, apply weld here. It just doesn't work. It's an apprentice-based skill.
And even in chips, it may look more intellectual, but I think we underestimate how much tacit knowledge goes into this, how much you actually just have to know what chips do, what they do physically. I am not a chip designer. I am not a chip engineer or a process manager. I do have friends, though, and one of the things they always say is the laws of physics all apply and they all work in the textbook. And then you get into a lab, and you actually have to build this in real life, and then you realize interference actually comes in different ways and a lot of these rules get broken. So, you really have to have an intuitive understanding of this sort of stuff.
My fear is, look, you want an economically competitive company. But the more important thing is, in the same way that losing U.S. Steel meant not just losing steel production but a whole host of associated industries, or what's happening with Detroit autos today, where it's not just the Big Three, it's the next 500 suppliers who don't just supply the auto industry but also defense, aerospace, all these other critical industries. These are ecosystems. They're networks. And once those networks start to unspool, they're very, very hard to recover.
One of the things I always advocate for, and it's hard because in our system of government you have federal, state and local, and a lot of economic development happens at the local and state level. Very little happens at the national level, although that's changed in the last couple of years. Very few people have that lens of taking advantage of modern network analysis technologies, which have come out in the last 20, 30 years, and saying, "Look, do we have the right skills? Are they in the right places? Are we condensing them so that they are competitive?"
One of the issues with shipbuilding is there's 20 states who want shipbuilding. So, we're trying to spread it across 45 congressional districts in 20 states, rather than trying to say, "Look, let's just have one great place that if that's what you want to go build, we're the greatest shipyard in the world and you go here."

Kyle Harrison:
What you're speaking to is probably the single most embarrassing self-own that has ever happened in the history of geopolitics. We have no one to blame but ourselves for a lot of the things that have happened. And we've spent a lot of time... our research team has spent a lot of time with Anduril specifically, as a company, thinking about a lot of the things that Anduril is doing, in particular from a manufacturing perspective and the way that they're building their Arsenal-1 factory.
They're speaking to a lot of the things you're talking about, where it is one large location. They're building it not with industry-specific methods and tooling; they talk about how you can build their line of missiles and things like that with... Tools you would find in your garage. They're doing everything they can to solve a lot of the things that you are talking about.
When I think about what the opportunity is here... Nothing that you've said is inaccurate. All of those things are systematic problems. A lot of the moneyed interests and the number of congressional districts that are represented in one contract, all of those are fundamental problems. The two things that give me confidence are, number one, I have a book up here behind me, Freedom's Forge, and it's basically the way that-

Danny Crichton:
I have that book too. I don't know-

Kyle Harrison:
Yeah, it's a good one. It's a good one.

Danny Crichton:
Yes.

Kyle Harrison:
It's a good read. What was fascinating to me reading that book again just recently is that when you look at the sort of reindustrialization that happened leading into World War II, in my head, I had always thought of it as a foregone conclusion that America was already heavily producing, and we just had the ability. And as soon as we got into the conflict, we were ready to go. That's not the case at all. There was a massive amount of reindustrialization that occurred. You look at things like cost-plus contracting, and that came from a paradigm where we had things like Ford, and Ford was producing cars, but they weren't producing tanks and they didn't know how to produce a tank.
So, we had to have a business model that would say, "We will ensure that you get a fixed percentage on top of your costs as profit." All of those things came out of a reindustrialization effort. I think that it's naive to just be like, "Ah, we did it before, we'll do it again," because it's just incomprehensibly more complex today than it was in the '40s. There's so much more complexity. When you talk about chip fabrication, it's the most complex thing humans have ever figured out how to do, hands down. It's not that we just say, "Ah, we did it before, we'll do it again." It is complex, but we have done it before. There is a precedent for us turning on some of this stuff.
The second thing you talked about is there are a lot of these issues with, like, hey, we can't have 20 states that are all pulling out pieces of these contracts, and things like that. I think that when you look at case studies like Palantir, like Anduril, like SpaceX, there is a precedent of companies that can, at scale, produce a significant amount at a level of complexity that is material, and that exist in spite of that paradigm. And it feels like there are a lot of regulatory regime changes happening, where people are acknowledging that these are fundamental problems that stop us from being capable, and that things are shifting.
The only thing is, as soon as I start getting myself all hot and bothered and excited about this line of thinking, I run into this one wall that I'm really concerned about, that I don't know if we will solve or not. My hope is that if we get into what is a material conflict and there is a threat to us... and again, I even look at international conflicts, like Ukraine and Israel and things like that, and the amount of politicization that we immediately have on those fronts. I don't even think that it can be a conflict somewhere else. It has to be something that threatens us at home. But if that conflict occurs, my hope is that there are enough seeds planted and enough early sprouts shooting up that we will be able to rally and be able to build toward that.
My fear is more actually on sort of a cognitive security level rather than a national security level: that we are increasingly becoming so manipulable that I don't know if we have the cognitive capacity to rally, to build, to do all the things that we need to do, and to get out of our own way without turning it into this mindless cesspool of angry rage politics. That's the thing that I am more worried about. Can we build crazy complex processes, many of which we've let die? If we were at our best cognitively, I think we could. I think that we have the spirit and the capacity to do it. There are enough things established that we could build off of. The question is, do we have the cognitive ability to rally? I'm more worried about that.

Danny Crichton:
Yeah. I mean, look, unintentionally, we have now seven Riskgaming scenarios that we've designed over here, and the ones that we've done around cocktail games always seem to end up in this category. The first one we did, which was number two, was called Deepfake and DeepSeek, which was about manipulation of elections in the United States, how gullible people are and how much of that is fake. The point of that game was really to highlight how hard it is to determine both attribution, who is behind a threat, as well as motivation.
One of the things we try to highlight is that manipulation can also be, I want your wallet, and I'm looking for money. It can be, I'm just having fun. I'm just a hacker who's enjoying it, Joker-style, I just want to see the whole world burn. And in our image, it's much more of a dictator overseas who's trying to get a specific election outcome, and that may be true. But they're oftentimes grafted onto a lot of other movements that are taking advantage of this particular information space.
And then just last week, we hosted a new game that Laurence, my co-partner here, produced. It was a beta test. We'll talk more about it publicly in a little bit, but it's focused on information, misinformation, and the information commons in general around bioenhancements. So, looking at the FDA process, looking at... It's a little pulled from the headlines. But one of the challenges is that, increasingly, in a complex society, you do need specialists. You do need experts. You need to be able to reason about things you don't understand.
Look, we talk about chips all the time. As I said, I've never designed a chip. I could probably design an Intel 8080 or whatever from the 1960s [inaudible 00:23:15], or a vacuum tube in the back of my apartment or whatever the case may be. To do modern EUV-driven, leading-edge chip design requires an undergrad education, probably a top-flight technical high school education, a PhD, a postdoc that will take several years, as well as several years of apprenticeship. It is just not practical for anyone not in that field to ever get caught up.
So increasingly, one of the biggest challenges we have in our information commons, which is, as you point out, about manipulation, and I put it less as cognition, is this challenging problem of, I just have to accept that I don't know about a lot of things, so I have to trust others who do. And it's getting harder and harder to understand who actually knows what they're talking about, who actually has depth and sort of objective, reasoned logic around what they're talking about, versus who has an agenda. That agenda can be external to the United States. It could be internal. It could be, I have a particular point of view and that's why I hold this.
So, you get into, we'll call it vaccines, which is one of the topics that we cover in the game. It's nearly impossible to know who to listen to. I do agree with you on the rally-to-the-flag effect. I think of growing up with 9/11. I was in seventh grade, and polling went to 93, 94%. There was an incredible outpouring of support. I am with you. I just can't imagine it being so universal, so clear, so unified in that way if it were to happen in 2025. Because at the time, there was just no way to express a counter opinion to the event. It was pretty much wall to wall, like pro coverage of America and supporting each other and growing.
Today, it would be on Twitter. And the first fusillade of jokes, memes or whatever, the first denigration, would've probably happened within six minutes, and then the fight would start and then no one would agree. It was just a very, very, very, very different time.

Kyle Harrison:
Yeah, I think that's right. I think that's why I am more afraid of that than I am of the industrial complexity. Not to say that the latter doesn't exist, but to say that that is the single failure point where I most struggle to articulate how it plays out well. That's why I spend so much time thinking about it. And it goes back to one of the things you're talking about with expertise: I think a lot about incentives and the incentives that people have. You're talking about whether that is an external third-party state actor, internal motivations, or just a personal belief system or whatever.
I think that one of the things that we broke was the direct feedback loop of why people do things. This sort of inherent trust that you have in society, that people do things because they're in a pursuit of truth. When we start to hold up people who are clear hucksters, who have clearly financially benefited from manipulating other people, and we say, "That's a win. That's smart. That's clever. That's good. I like that they did that." Even though, yeah, did those people get screwed in their livelihoods? Sure, but they won the game.
That's the game. It fractures this inherent trust that you have that everybody is in the pursuit of truth. Some will win, some will be right, some will make a lot of money by being right, and that's okay. We broke that inherent system by rewarding, hey, what matters is who wins, who gets ahead, who comes out on top. I think that, again, going back to your point about not knowing who to listen to, that's because I have a really hard time understanding what their incentives are.

Danny Crichton:
Yes. Look, as someone who's both a pretty hardcore free speech absolutist and pretty hardcore pro-free markets, pro-competition, this is the crux of a modern problem. Many of these information commons have been corroded, to your point, by capitalist incentives, we'll call it. So, scientists were objective: they're in the lab, they're doing experiments, they want to prove things, they want to get status. That's sort of how the traditional sociology of a scientist's career works.
Today, it's: I want to get IP, I want to build that out, I want to spin out a company. It's hard as a VC to evaluate IP early, and even the best VCs can be hoodwinked, maybe we'll call it the Theranos effect, but you put the right demo together, you give the right spiel. It can be very hard to interpret because, again, the science is so early and we're trying to invest on the frontier. It can be a very fine-grained difference.
So, you look at categories like Alzheimer's research, where it was famously discovered a year or two ago that, for basically the last 20 years, [inaudible 00:27:32] funding went into a program in Alzheimer's research that was based on entirely fraudulent, faked papers, redirecting hundreds of millions of dollars of the public's funds into a line of research that scientists had learned very early on was not useful. But that was their research and they wanted money to keep coming to them, so they lied for two decades, and they accumulated power because their research kept getting cited onwards and onwards, and therefore they became the program officers allocating the funding.
It is so hard, because my principles are very, very fixed. I really believe in competition and the marketplace of ideas as well as the marketplace of economics, or whatever the case may be. But this is a flaw. There are cheat codes now, and people have not only discovered cheat codes to the system, they increasingly spread them. So it's not just that one person kind of gets away with it now. It's like the norm in some parts of our society, where everyone is sort of cheating. I don't know how to rebuild trust, but that is something we spend a lot of time thinking about on the Riskgaming side: that trust aspect.

Kyle Harrison:
There's a quote that I have been thinking about a ton. It's a classic Warren Buffett-ism: in the short term, the market is a voting machine; in the long term, it's a weighing machine. And eventually, everything is weighed and measured and found wanting, whether it's fraudulent or it's just not very good or whatever. I think that one of the things that we fundamentally underappreciate, and most people don't even think about, let alone appreciate the magnitude of, is something that Kyla Scanlon has written a fair bit about: this idea of attention economics. One of the things that leads to is, you look at Elon Musk and Tesla.
Tesla, as a public company, has been able to build what... One of my favorite anonymous accounts on Twitter, I think Bucco Capital, talked about how, more even than Tesla as a company, or SpaceX, or whatever, the most impressive thing that Elon Musk has built is the investor base at Tesla. They allow him to continue to do whatever he wants to do and to have these massive moonshot visions, and the stock continues to perform regardless of the underlying realities of the company. And then eventually, many of the realities of the company catch up to that narrative.
Maybe they miss the mark multiple, multiple, multiple times in a way where, at any typical public company, the stock would've been punished relentlessly to the point that something would've changed. Tesla doesn't have that, because of that underlying investor base. And increasingly, there is this opportunity where, if you are able to leverage hype and attention more aggressively, you can actually buy yourself enough time.
There are a lot of arguments to be made that companies like OpenAI and Anthropic and Perplexity and Cursor have fundamentally unsustainable business models, but there is so much hype and so much buy-in that they are able to kick the can down the road on when they will eventually be weighed by the market, such that they can maybe buy their way into something that will be weighed and found acceptable.
I think that's what Tesla has done. There are a lot of things that could be said about Tesla as having been a house of cards at many different times in its life, but it has been able to build an investor base that doesn't care, and then maybe eventually they can build a foundation where the cards seem pretty sturdy, impressively so. There are a lot of companies that can do that.
My fear is that, taken to its logical conclusion, if hype remains this valuable currency for long enough, you never end up getting weighed, or by the time you get weighed and it collapses, you've moved on to the next thing. And if we allow that weighing mechanism to be broken, then this economic gravity that has kept all previous pyramid schemes and hucksters at bay, because eventually it comes crashing down on them, if we break that, then it's not just that on a micro level the trust goes away; on a macro level, the con never ends.
I look at the same thing, again going back to the Intel report. How we thought about Intel is that one of the fundamental issues with Intel is not just that it has been very poorly managed, but that Intel's investor base has been primed over the course of decades to want a very specific managed-to-earnings sort of business performance. That is fundamentally not how you build cutting-edge chip capabilities and cutting-edge technology. It's not managing to the earnings.
You may miss, and the performance may not be there. It's not necessarily predictable earnings. But if you have an investor base that understands the mission towards which you're building, then you have the ability to build in pursuit of that. The hope is that eventually you do have to get weighed. So if Intel Foundry were able to attract an investor base that was more incentivized towards growth and innovation than cost optimization, then eventually they could put out results that would be weighed and measured and found very acceptable, and put us in a great position.
The fear is that, yeah, there are these cases where we want companies to build, and if they need to use hype to jettison themselves towards success, you can do that. But if we allow that to go so far into excess that nobody's ever held accountable, then we start to get into really serious capital destruction territory, and that scares the crap out of me. Because, to your point about Alzheimer's research, it's like, "What more would we want capital to be going towards?" Those types of things. Cancer research, all these technological breakthroughs that make the world such a better place and drive progress.
If we let the huckster game play out and the never-ending con take advantage of those things, so much capital would be destroyed that even if maybe someday we fixed the paradigm there, the apparatus that leads to progress, the capital would be too afraid. It's basically the green bubble 2.0, right? So much capital was destroyed in pursuit of sustainability in the early 2000s that even when the technology became viable, there is a huge swath of capital that just will not touch the thing. You forever damage the thing. And I don't want to forever damage the capital going towards Alzheimer's research or cancer research or robotics and human progress.

Danny Crichton:
I think we had a podcast last year with the authors of Boom from Stripe Press, Byrne Hobart and Tobias Huber. The idea of Boom is very Schumpeterian in my view, and it's much more philosophical in the book, but the way I perceive it, they would argue that this is great. What happens is people get lied to, essentially. They put their money into something, like AI today, or fiber optics, or housing construction in sixth-ring suburbs of Midwest towns without any population, going all the way back through all the booms to the railroads.
And you see some of the same GDP numbers, by the way. They look alike: railroads, [inaudible 00:34:10], fiber optics, 3, 4, 5, 6% of American GDP went into these industries at their peak. So, per Boom, there's capital destruction, but then you still have the infrastructure. That's their argument: you still have the fiber optics. We had all this dark fiber, WorldCom blew up, we had all these companies, Global Crossing and all this stuff that was happening in the late '90s, early 2000s.
But the fibers don't get pulled out of the ground. Instead, over the next couple of years, you had YouTube as an example, where all that additional bandwidth suddenly got used by video, by streaming. Google itself bought a lot of that fiber, and that's one of the reasons why they were able to basically underwrite YouTube's growth for many years when it was unprofitable and didn't have an ad business: because they actually owned the fiber, they didn't have to pay the backbone costs and all the traffic acquisition costs to keep that thing running.
So, there is a positive story to this. Now, one of the questions that I've always had is that this sounds very historical and it makes sense. You can look at railroads and be like, "Look, the railroads went broke, but they were fundamentally valuable to the United States long term," whatever the Panic of 1873 and the Panic of 1893, we were all in history class and there are a lot of numbers. But my question is always: today, we live in a very different world.
It is not 1945, when the United States held 80% of the globe's manufacturing capacity. Germany is in shambles. France is destroyed. Japan was nuked twice. Most other countries had lost their manufacturing capabilities. We're in a very different place now. China produces an immense amount. Korea produces an immense amount. Taiwan, Japan; India is up and coming. And Europe, as much as I am negative on Europe, I've had many shows that are negative on Europe, still produces a lot of stuff, onwards and onwards.
A boom that causes destruction and leads to capital flight may not be as recoverable today as it was 30, 40, 50 years ago. And I do worry that we look at some of these historical moments and go, "Oh, dot-com, but Google made a fortune and look how much money it all made." I'm not sure it's as easy to recover today as it was before.

Kyle Harrison:
I think that goes back to a couple of exacerbating factors. Number one, we talked about the world just getting exponentially more complex, interconnected, difficult to parse. I think that causes a lot of problems in recovery. But the second thing that I think a fair bit about is... Because you look at...
Take AI as the current boom. I have a friend who works in the energy markets who had a fascinating insight. He was talking about how, if you look at energy prices, you can map out: there is what people say they believe, and then there is what people spend, which demonstrates what they actually believe. He said, if you look at the energy markets, you have a handful of these companies, Microsoft, Meta, a few others, that are investing in what they say is truly AGI, this truly exponential evolution of intelligence that would lead to massive unlocks in every industry.
If that's truly what you're building, and eventually the only limitation will be compute and, by first principles, energy, then if you're going to pour $60 billion into data centers, you would want to lock in energy regardless of the price. It doesn't matter, because if that price is the finite thing that can fund what is an infinite amount of intelligence, then you would want to lock it in. The cost would not matter.
The energy markets don't reflect that, right? The way the hyperscalers are spending money to lock down energy is more in line with somebody who is simply making a pretty standard infrastructure investment. If Meta wanted to come out and say, "I want to build $60 billion of data centers," the stock would be punished, because that's a massive CapEx investment. How do we think about the returns on that? How do we think about... You don't have to update it, blah, blah, blah. There are a lot of negatives.
So if you just say, "I want to build it," then you get punished. If you say, "I am in pursuit of AGI, which is why I want to build $60 billion of data centers," then you're not as punished; you're an AI winner and there are all these narratives behind that. To your point about the infrastructure that gets built out, my belief is that most of the hyperscalers don't buy their own AGI hype. They want an excuse to lay down infrastructure and to own more infrastructure, and this is a great narrative for them to not get punished in the interim while they do that.
I think that there is some element of, yeah, we're going to build out a lot of infrastructure. The problem, when you think about how we recover from this level of hype, is that there are a lot of things that are not... I haven't done as much work as Byrne and the team, so they may disagree with this. But when you look at a lot of the things that got built in previous bubbles, they enabled industries where you were able to turn on a highly profitable machine after the fact. There was a thing that the infrastructure then led to that generated a lot more profit.
That's true of every sort of bubble. Even when you look at a lot of the stuff that people have been able to make money off of. I think one of my biggest fears is that, at least at the current cost curve, we are chasing such an unsustainable business model that there's not going to be a massive... You look at Amazon and Google and Facebook, and the amount of profit that they spit off every year is what has allowed this to perpetuate. I don't know that we're going to have that thing.
So if that infrastructure doesn't then lead to a massive new profit center to keep the cycle going, I think that's the biggest thing that I worry about. And we're doing ourselves a massive disservice, because we're doing this hype into an unsustainable model at the height of geopolitical tension, where we also may be going into a massive conflict. I think those three things together are going to make this very difficult to come back from.

Danny Crichton:
I agree with you. I'll observe a few things. One is that I think OpenAI has turned some sort of corner. It was leaked a few weeks ago, maybe two weeks ago, that Project Stargate, this massive $500 billion effort from SoftBank, OpenAI, plus Trump in the White House, is going to result in a single small data center in Ohio. It was over four years, and it sort of backlogged a little bit, so you could think of it as $80 billion the first year and then it kind of accelerates over time, and it's going to be something like 50 million bucks. I am probably exaggerating, but it's like fractions of a penny of what it was supposed to be in terms of scale.
And then, if you look at what happened with GPT-5 last week, there's still some tuning going on, but it is very obvious, at least to me, that OpenAI is really focused on efficiency of results. The answers it gives are shorter, the interactivity is shorter. That means context windows are cheaper, inference is cheaper, et cetera. It seems clear to me that they have seen a path of, hey, we need to get to efficiency, maybe even profitability, et cetera.
This kind of gets at this idea of good-enough AI. I don't think Sam would ever admit that he's giving up on AGI or ASI, and most of these companies won't, but I think it's starting to dawn on more and more companies that you just need good enough, if it's doing what you need it to do. There's a military plane that flew over New York this morning. I took out my phone, I took a photo of it, I sent it to ChatGPT, and it told me what it was. It told me it could look up the flight information and get the actual flight stats of where it took off and where it's going. That's amazing. It's already doing what I need. I'm sure I could need more. I'm not saying that coders don't need more, software engineers, whatever the case may be, but it definitely seems like if you can reach a good-enough point, that may be a profitable business model. You may not need to train as much, and we might already be there.
And the question is, to your point, there is capital destruction then, right? If we have good-enough AI, does a company that's valued somewhere between a half trillion and a trillion bucks, depending on who you ask, maintain and sustain that? Or do we look at that and go, "Wow, this is a race to the bottom. There are 20 different models that can all do this. In fact, it's going to be open source. It could be in your pocket. Apple's going to just have it included on your iPhone for free with 64 gigs of RAM, or whatever the case may be. And OpenAI makes nothing"?
That's what I don't know. There are some big, big open questions. But that leads me to our last piece here, which is projecting into the future. We're in mid-2025. We've talked about worries, which is kind of what this podcast always does. We're worried about everything. We worry all the time. But outside of worries, what are you excited for? Are you a believer? I'm generally a very cynical person, so maybe you will correct the record.

Kyle Harrison:
Yeah. Our sort of guiding principle for our research arm, Contrary Research, which wrote the report on building an American TSMC, is always this idea of optimistic pursuit. We're not looking for things... We always talk about it as... We've compared it to equity research for public markets; we're trying to do a lot of that for private markets.
But rather than issue buy, sell, and hold ratings, we're trying to understand the "why buy." What should I believe? And it's not even just what do I have to believe, where you strain to figure out, okay, crap, I have to believe this thing that's probably not true and this thing that's probably not true. It's, no, no, let's unpack the underlying truths that exist. What should we believe? What's believable? That's our guiding principle in a lot of the research that we do.
So when I think about the areas that I am super compelled by, it is this idea of a technological distribution curve. I think all the time about how the innovation curve has a certain scale; it changes in its slant in terms of how much hype and energy and capital and talent is being thrown at something at any given time. That innovation curve can change. The distribution curve, I think, is something that requires a lot of change management. Palmer Luckey has this great line where he talks about how the biggest limiter to AGI or ASI will be just human inertia. Our ability to adopt things is going to slow a lot of stuff down.
What I'm most excited about is solving those problems, bridging those gaps. One of my friends had a tweet about the Magnificent 7, where you split out the returns of those seven versus the returns of the other 493 or whatever. That represents a corpus of market cap that yields very little return, and the question is how we're going to deploy technology effectively into those categories to implement change and drive returns.
More even than that, as you think about software eating the world, I love this line from a few folks we've interviewed for our research: just because software is eating the world doesn't mean you necessarily have to have exponentially more software companies per se. You can have a lot of different types of companies that are leveraging software, leveraging AI, leveraging technology to be very efficient. I'm very excited about that.
A lot of the reason why we spend a lot of the time in the back of our mind thinking about stuff like Intel, Taiwan, China, industrial capacity, our ability to undergo a massive conflict, and the reason that we invest in companies like Anduril, is because if we face that conflict and we lose that conflict and we lose our influence globally, we don't get a lot of the opportunities to drive that distribution curve forward that I'm so excited about. So, there is this fundamental right to exist that we have to drive forward, that we have to invest in and believe in and incentivize.
Things like turning Intel Foundry into America's foundry, that's a critical piece of allowing us to continue to exist on the curve that we exist on. If that part is taken care of, if we're able to do that, then I'm very excited about our ability to innovate and figure out how we can distribute technology to the long tail of use cases and problems and make life better. Because the thing is, when you talk about whether life is better now than it was...
For a lot of people who are very pessimistic, the fundamental idea is, man, life sucks. And life, in many, many cases, is so much better than it has ever been at any point in the history of the human race. All of those things came from incremental, long-tail distributions of technology. It's not always at the cutting edge. Adoption often falls further and further behind the cutting edge, until the cutting edge becomes the standard, and that's when it gets distributed. So, at what point it gets distributed is really difficult to say.
But even things like ChatGPT, ChatGPT was not a technological breakthrough. It was a distribution breakthrough. Way more people use that tool than would have if it hadn't just happened to catch fire in the way that it caught fire. But that's a good thing. It's helped a lot of people in a lot of different ways and made a lot of things better. So, I'm very excited about that long tail distribution.
If we can keep it, if we can maintain the big-picture stuff and maintain our ability even to coexist with China, it doesn't necessarily have to... A lot of the folks who work in defense talk about how we may not defeat China in a head-to-head conflict, but we may be able to maintain parity. As long as we can do that, that might be okay. Americans don't love to hear that, because we love to win, but that might be enough to allow us to continue to do certain things better and to hope that, in the long run, free-market democracy can win out.
Those are the two things that I spend the most time thinking about: how do we solve those big problems, the sort of geopolitical conflict, and how do we make sure that we distribute this technology in the most effective ways to the largest number of lives possible.

Danny Crichton:
Well, let me put it this way. For our listeners who constantly email me and complain that nothing is positive on this show, I just want to point out that we brought Kyle Harrison on the show, and there was a positive message at the end. Although, parity... You had it going all the way to the last [inaudible 00:47:18], like, this is going to work so well. And then you bring it back down to reality.

Kyle Harrison:
If it were 100% optimistic, you would know that it was fake. There had to be a little bit of salt in there.

Danny Crichton:
That's part of our new information commons, to create that authenticity vibe. But Kyle Harrison of Contrary Capital, thank you so much for joining us.

Kyle Harrison:
Thanks for having me.