In addition to this “Securities” newsletter and podcast, our Lux Riskgaming Initiative and a long tail of sundry tasks, I’ve added an affiliation as a Fellow at the Manhattan Institute this year, where I will focus on solutions for improving economic growth and scientific and technological progress in America and around the world. In other words, what I do every day already.
In my debut feature editorial for City Journal, I write about Washington D.C.’s fusillade against the tech industry at a time when the ecosystem is retrenching after the overheated frenzy of the Covid-19 digital economy. I highlight three new policies that are slowing tech’s recovery:
Section 174 R&D Tax Amortization: A ticking time bomb in the tax code has, unfortunately, detonated without Congress defusing it. These newly enforced tax changes can have massively negative consequences for tech startups, particularly early-stage companies bootstrapping with some early traction and revenue. A shock tax bill might knock otherwise functioning companies out of business (a simplified worked example follows this list).
Toughening Antitrust Environment: In December, we saw a slew of transactions investigated or blocked by the FTC. Without broad recourse to the public markets (where the herd of IPOs has been thinning for decades) and now, increasingly, to private sales, liquidity for high-quality startups is getting harder and harder for founders and venture capitalists to find.
Private-Fund Adviser Rules: The SEC published its final rule a few weeks ago, designed to equalize the economic and information rights of different investors in private funds. That deeply complicates long-standing arrangements among venture funds, particularly funds constructed on behalf of anchor LPs, and it only adds to the challenges for VCs hoping to fundraise in an already barren year.
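To make the Section 174 shock concrete, here is a deliberately simplified sketch. Real returns involve many more line items and the figures below are hypothetical, but the mechanics are straightforward: domestic R&D can no longer be deducted immediately and must instead be amortized over five years with a half-year convention.

```python
def taxable_income_under_174(revenue: float, rd_spend: float, year: int) -> float:
    """Domestic R&D is amortized over 5 years with a half-year convention:
    10% deductible in year 1, 20% in years 2-5, 10% in year 6."""
    amortization = [0.10, 0.20, 0.20, 0.20, 0.20, 0.10]
    return revenue - rd_spend * amortization[year - 1]

# Hypothetical bootstrapped startup: $1M of revenue, $1M of engineering (R&D) salaries.
# Old rules: R&D fully deductible, so $0 of taxable income. Under Section 174, year one:
income = taxable_income_under_174(1_000_000, 1_000_000, year=1)
print(f"Taxable income: ${income:,.0f}; tax at 21%: ${income * 0.21:,.0f}")
# -> Taxable income: $900,000; tax at 21%: $189,000
```

A company that previously broke even on paper suddenly owes six figures in cash taxes, which is exactly the kind of shock bill that can sink an otherwise functioning startup.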
Do read the article, and look for more coverage of ideas and solutions that can hopefully unlock more of America’s economic dynamism.
The new global axis of national AI hubs continues with Tokyo’s Sakana AI
Lux had major news this week: we led the $30 million founding seed round for Sakana AI, a Tokyo-based AI research lab whose founders David Ha and Llion Jones are leveraging research innovations like evolutionary methods, collective intelligence and character-level training to build nature-inspired foundation AI models. Grace Isford and Brandon Reeves wrote up their thoughts on what makes Sakana AI distinctive, including its approach to model building, its position in Japan, and its impressive early hires.
We published a “Securities” podcast as well where I highlighted three of the major themes that led to our investment in Sakana AI:
The new global axis of national AI hubs where leading companies take advantage of local talent networks, unique knowledge and research, as well as government support to build competitive products.
After decades of anemic growth and deflationary pressures, Japan’s economy has become something of a silver lining in the past few years amidst the post-pandemic malaise afflicting much of the industrialized world.
The Asia-Pacific region is finally getting its rightful due in the eyes of international investors. With China’s self-induced economic deceleration, investors are realizing (shocking, I know!) that the rest of Asia is just as — or even more — competitive than the Middle Kingdom. Their attention now increasingly encompasses Japan and South Korea, Southeast Asia (Singapore, Malaysia, Indonesia and more) as well as that great and burgeoning bastion of venture excitement, India.
On the first of those themes, there’s a clear desire by politicians to ensure that their own countries have a stake in the future leadership of artificial intelligence. Given America’s dominance of digital technology over the past two decades, many countries are feverishly working to avoid the same outcome in this next wave of technologies. That’s why we have seen massive state support for AI chip compute in the European Union, South Korea and Taiwan that matches and in some cases exceeds America’s own CHIPS Act. We’re also seeing the rise of software companies centered on top engineering universities, with the hope that local talent doesn’t migrate to Silicon Valley, but instead can build and prosper locally.
America has an unmatched set of inherent advantages, of course. We are already home to almost all of the greatest AI companies in the world, and the network effects of these businesses are all but insurmountable. Despite a byzantine immigration system, technical workers find their way to Silicon Valley, New York City and other U.S. tech hubs. Our university system is also peerless, both in terms of talent and funding.
Yet artificial intelligence researchers are still rapidly unearthing new techniques, models, datasets and approaches in the frantic exploration of computer science’s frontiers. The upside of network effects is the density of talent; the downside is groupthink: too many people watching the same presentations at the same conferences and coming away convinced of a single approach to the future. That’s why intentional and unintentional barriers (geographic, linguistic, cultural and academic) can protect heterogeneous research niches and increase the likelihood that another country might just find the approach that revolutionizes the field.
America will almost certainly dominate the next generation of AI technology, but it’s not going to have a monopoly on the successes to come. For venture investors, that means calibrating portfolios toward both the huge gains likely to come domestically and the longer tail of potential winners sitting at global entrepôts. We see Sakana AI as one of a handful of such winners coming up in the years ahead.
How Impulse Space’s Helios will democratize access to Earth’s farthest orbits
Let’s move on from the sublunary considerations of global AI companies to the far-flung reaches of Earth’s outer edges. Thanks to SpaceX and a slew of other companies, we’ve democratized access to low-Earth orbit (LEO) by dramatically cutting the cost of rocketing up payloads into space. That’s most obvious with SpaceX’s Starlink constellation, which currently sits at 5,250 operational satellites after this week’s 300th successful Falcon 9 launch.
Space is vast, but objects in space also travel at brain-melting speeds, which makes it absolutely critical that every object maintain its correct orbit and distance from everything else. Heavier usage of the cosmos is suddenly placing a premium on space maneuverability: the ability to carefully reposition space objects within an orbit, as well as to cheaply move them between orbits.
This week, I talked with Impulse Space’s CEO and founder (and a founding member of SpaceX) Tom Mueller about the release of the company’s designs for Helios, a new high-performance kick stage. Helios, which will begin flying in 2026, is designed to dramatically lower the cost of moving objects from low-Earth orbit into medium-Earth orbit (MEO) and all the way out to geostationary orbit (GEO).
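To get a feel for why moving between orbits is so expensive in propulsive terms, here is a back-of-the-envelope Hohmann transfer calculation, the textbook two-burn maneuver between circular orbits. The 500 km parking orbit below is an illustrative assumption, not Helios’s actual flight profile.

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.0        # Earth's equatorial radius, km

def hohmann_delta_v(r1: float, r2: float) -> float:
    """Total delta-v (km/s) for a two-burn Hohmann transfer between
    circular, coplanar orbits of radius r1 and r2, given in km."""
    # Burn 1: accelerate off the inner circular orbit onto the transfer ellipse.
    dv1 = math.sqrt(MU_EARTH / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    # Burn 2: circularize when the ellipse's apogee reaches the outer orbit.
    dv2 = math.sqrt(MU_EARTH / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

# Illustrative: from a 500 km LEO parking orbit up to GEO (35,786 km altitude).
print(f"{hohmann_delta_v(R_EARTH + 500, R_EARTH + 35_786):.2f} km/s")  # ~3.82 km/s
```

Nearly 4 km/s of delta-v is a sizable fraction of the roughly 9.5 km/s it takes to reach LEO in the first place, which is why a dedicated high-energy kick stage changes the economics of the upper orbits.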
It’s a good example of how innovation solves one problem, and then creates a whole slew of new ones. We spent the past two decades succeeding in democratizing space, but now we need new solutions to all the congestion, while also offering the same democratization to the rest of Earth’s outer reaches. The cycle of innovation continues.
Grace Isford highlights yet another breakthrough from DeepMind, this time about the announcement of AlphaGeometry, “an AI system that solves complex geometry problems at a level approaching a human Olympiad gold-medalist - a breakthrough in AI performance.” In the firm’s Nature paper, the research team writes that “Notably, AlphaGeometry produces human-readable proofs, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004.” The AI bots will truly take over everything.
Our scientist-in-residence Sam Arbesman enjoyed news that astronomers are baffled by a new “ring-shaped cosmic megastructure” that undermines theories about space. From The Guardian, "The observations, presented on Thursday at the 243rd meeting of the American Astronomical Society in New Orleans, are significant because the size of the Big Ring appears to defy a fundamental assumption in cosmology called the cosmological principle. This states that above a certain spatial scale, the universe is homogeneous and looks identical in every direction.”
Back here on planet Earth, there is growing trepidation that North Korea might be preparing for war. The country’s leader, Kim Jong Un, announced a major ideological shift, renouncing the final unification of the Korean peninsula and declaring that “The reality is that the North-South relationship is no longer a relationship of kinship or homogeneity, but a relationship of two hostile countries, a complete relationship of two belligerents in the midst of war.” Earlier in the week, two prominent analysts, Robert L. Carlin and Siegfried S. Hecker, wrote that they see signs of looming war. “The situation on the Korean Peninsula is more dangerous than it has been at any time since early June 1950.”
Sam was delighted by Étienne Fortier-Dubois’s essay on the worldbuilding that happens in fiction, and why it is so hard for artists to match the richness and complexity of the real world. “It’s still an interesting observation because it ties into the wider philosophical theme of evolution vs. design. Design is great: it’s much smarter and faster than evolution is, it allows us to use our brains to create really cool stuff. But evolution through selection has time on its side, and so far every really complex system known to man — including biological systems like the human body, and cultural ones like human civilization — has been the product of evolution rather than intelligent design.”
Speaking of worldbuilding, I’m always a connoisseur of hubristic tales of overlords constructing new cities, which invariably foist top-down schematics on people rather than offering an evolutionary approach that adapts to their complexity. Zooming in on Chinese President Xi Jinping’s vision for Xiongan, the un-bylined Bloomberg feature “Xi’s Empty Dream City Shows Limits of His Power, Even in China” is a good follow-up to Andrew Stokols’s look a year ago in Foreign Policy at “China’s Futuristic City Is a Test of Its Planning Power.” It’s always the most powerful political leaders who can’t seem to make these planned cities work. Funny how that happens.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s law”): each new generation is becoming more and more expensive to train as researchers exponentially increase parameter counts and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe the first $1 billion model is being developed now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. That miracle of performance runs in reverse with AI today. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have packed in more and more parameters, each of which demands more computation both for training and for usage. A 1-million-parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
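For a rough sense of that scaling arithmetic, a widely used back-of-the-envelope rule puts training compute at about 6 FLOPs per parameter per token. A minimal sketch, where the GPU throughput, utilization rate and $2-per-GPU-hour price are all illustrative assumptions:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Rule of thumb: training costs ~6 FLOPs per parameter per token
    (covering the forward and backward passes)."""
    return 6.0 * n_params * n_tokens

def train_cost_usd(n_params: float, n_tokens: float,
                   peak_flops: float = 3.12e14,  # A100 bf16 peak, FLOP/s
                   utilization: float = 0.4,     # realized fraction of peak
                   usd_per_gpu_hour: float = 2.0) -> float:
    gpu_seconds = train_flops(n_params, n_tokens) / (peak_flops * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

# A GPT-3-scale run: 175B parameters on 300B tokens.
print(f"${train_cost_usd(175e9, 300e9):,.0f}")  # roughly $1.4M of GPU time alone
# A 1-million-parameter toy model on 20M tokens: fractions of a cent.
print(f"${train_cost_usd(1e6, 20e6):.4f}")
```

Multiply the parameter count against the token count and costs compound quickly, which is how nine-figure training bills become plausible.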
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
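One way to see that efficiency lever is the compute-optimal scaling result popularized by DeepMind’s Chinchilla work: for a fixed training budget, a smaller model trained on more tokens can match a larger, undertrained one. A minimal sketch, assuming the commonly cited ~20 tokens-per-parameter heuristic and the same 6·N·D budget rule as above:

```python
def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Given a budget C ≈ 6*N*D and the heuristic D ≈ 20*N, solve for
    the compute-optimal parameter count N and token count D."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

# The same budget as a 175B-parameter / 300B-token run (~3.15e23 FLOPs):
n, d = chinchilla_optimal(3.15e23)
print(f"Compute-optimal: {n / 1e9:.0f}B params on {d / 1e9:.0f}B tokens")
# -> Compute-optimal: 51B params on 1025B tokens
```

Same budget, roughly a third the parameters, and a model that is cheaper to run forever after.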
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI) and Google, which have the money and the teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, and the leverage of the big tech companies will be much reduced.
The U.S. right now is parallelizing both approaches to AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models, given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent, on balance, is heading to big tech, as it typically does. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and, in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits the way American companies can. That leaves China with the only other path available: following the parallel course of improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, as China has done for years in everything from solar panels to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer, even under Beijing’s direction, could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals other than economic growth behind restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of China’s autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep expanding through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first-contact hard science fiction novel that made quite a splash when it was published back in 2006.
Next, a deep investigation into Dubai’s ruler, Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” "At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”