Photo by vadimrysev via iStockPhoto / Getty Images
Every quarter, Josh Wolfe writes an update on what all of us at Lux are seeing, ranging from new ideas at the cutting-edge of science to the machinations of nation-states. This time is no different. I’ve excerpted the introduction and conclusion of this quarter’s letter below with its original formatting; for the full edition, scoot over to the highlighted PDF edition available on Google Drive or the full copy published on Twitter.
Meanwhile, have an enjoyable Labor Day for those of you in the United States!
From V for Vendetta…
The world is filled with anxious anger. Widening inequality driven by inflation; soaring geopolitical tensions from Eastern Europe and the Middle East to Asia-Pacific, the Sahel and Latin America; polarizing elections exacerbated by social media (in a year in which a majority of humanity will vote for at least some of their political leaders); escalating climate chaos that’s cleaving communities; and deep-seated fears of automation wiping out breadwinners have made anxious anger the mainstay emotion of our era.
We’ve already seen the blowback, from riots and ruination to calls for degrowth and destruction of critical assets, including oil pipelines and scientific institutes. Often, small numbers of individuals styling themselves as revolutionaries are at the heart of these responses, externalizing their inner demons on an unwitting population.
Few movies (or comic books) capture this spirit better than V for Vendetta, which centers on the antihero V, who in a Guy Fawkes mask wages spectacular violence against a totalitarian regime snuffing out humanity. V is an anarchist marshaling no army, whose only power is to wield force to destroy the institutions that he sees as morally wrong. Left unseen is any community of uplift or broad solidarity in the pursuit of freedom. V aestheticizes narcissistic violence in a vainglorious quest to prove his own self-worth.
Yet, there is a different approach to assuaging the world’s anxious anger, and it lies with the community of scientists opening the future to further health, prosperity and equality. As Karl Popper argued in his fight against the Vienna Circle, rebellious scientists fight what’s wrong in order to search for what’s right. Falsification isn’t about destruction, but renovating antiquated foundations to secure a stronger future. Far from anarchist antiheroes, these scientists form a collaborative community of comrades conscientiously casting around for a cogent consensus.
Examples abound. Behind Oppenheimer lay the thousands of scientists and administrators of Los Alamos who together invented the atomic bomb. Behind Turing lay thousands of computer scientists who shared and built upon his theoretical work to invent the artificial intelligence models at the heart of our current moment. And behind 2023 Nobel laureate Katalin Karikó lay thousands of bioscientists and clinicians who transformed her pioneering mRNA technology into a pandemic-defeating vaccine that saved millions.
Lux funds ambitious rebel scientists working alongside brilliant teams in the collaborative pursuit of the possible. In preview, in spite of all the political polarization, technological upheaval, and market and geopolitical volatility, this quarter’s letter is a clarion call for scientific advancement (and all who foster, find, fund and benefit from it) as the cornerstone of our collective future. It happens that V is the Roman numeral for five, and so across five sections, we will talk more about rebellious scientists, the importance of laboratory culture, the ambition of Lux Labs, the global macro context in venture and finance, and finally, the geopolitical challenge of maintaining American hegemony in the twenty-first century.
… To V for Valor
When a Senator asked Fermilab director Robert Wilson what value the pricey lab might bring to the nation’s security in the race against the USSR, Wilson replied, “Only from a long-range point of view, of a developing technology. Otherwise, it has to do with: Are we good painters, good sculptors, great poets? I mean all the things that we really venerate and honor in our country and are patriotic about. In that sense, this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
We don't pretend to predict the future, but we're committed to funding those rebellious scientists who envision it and who attract intrepid colleagues to build it — architects of posterity, not merely tenants of the present. While our long-standing Lux Labs initiatives in founding, finding, and funding investments in cutting-edge biotech, compute, AI, autonomy, aerospace, and defense have positioned us well, we are ever vigilant and humble in our pursuit of what's next. We maintain our focus on the cutting-edge breakthroughs and sci-tech superiority that comes from scientists, inventors and founders relentless in their pursuit of competitive advantage — for their companies, their countries, and all of us. In the coming quarter, we will reveal new investments with incredible breakthroughs narrowing the gap between sci-fi and sci-fact, with ventures that control the nervous system; target neural circuits to direct bone growth; use genetic engineering to supply life-saving organ transplants; and develop breakthrough defense systems in the most conflict-laden regions.
Anxious anger is understandable; what’s not are the vainglorious vendettas of the few who want to compel civilization to retreat into the past. The future will be built by vanguard visionaries seeking out valuable verities amidst volatile valuations. Science marches on, heedless of political, social, or economic tumult. We expect the venture landscape will face upheaval, and yet the steady march of scientific progress and human ambition will continue unabated, forever propelling us forward to a better future. Fiat Lux.
Podcast: How games, god(s) and chance transformed human decision-making
Gaming has enveloped our world. A majority of Americans now gamble at least once every year, and popular video games like Fortnite and Roblox count hundreds of millions of global players. In social science, game theory and its descendants remain the mainstay for objectively analyzing human rationality, even as a gigaton of evidence shows the limits of these mathematical approaches. Meanwhile in foreign affairs, wargaming (including some of our very own Riskgaming scenarios!) is used to explore speculative futures that can change the fate of nations.
All of these subjects and more are fodder in Playing With Reality: How Games Have Shaped Our World, a broad and open inquiry into the nature of games written by neuroscientist Kelly Clancy. Kelly weaves discussions of dopamine, surprise, chance and learning into a history of human behavioral development over the ages, but then she pivots her discussion. For all of gaming’s success across time and around the world, what are its limits and are we properly critiquing these simulacra of reality?
Kelly and I talk about her book and so much more across an extended show that gets at the very heart of Riskgaming. We talk about the history of games, why the theory of probability arrived so late in the development of mathematics, why game theory works mathematically but fails to capture the complexity and dynamism of human behavior, how AI models use gaming techniques like self-play to evolve, and how the world might change given the explosive popularity of interactive gaming in all facets of modern life.
The Orthogonal Bet: The Harsh Realities of the Soviet Space Program
In this episode, Lux’s scientist-in-residence Sam Arbesman speaks with John Strausbaugh, a former editor of New York Press and the author of numerous history books. John’s latest work is the compelling new book The Wrong Stuff: How the Soviet Space Program Crashed and Burned.
The book is an eye-opening delight, filled with stories about the Potemkin Village-like space program that the Soviets ran. Beneath the achievements that alarmed the United States, the Soviet space program was essentially a shambling disaster, and the book reveals many tales that had been hidden from the public for years.
In this conversation, Sam explores how John became interested in this topic, the nature of the Soviet space program and the Cold War’s Space Race, the role of propaganda, how to think about space programs more generally, and much more.
Few authors can delight on a Labor Day weekend in the midst of a presidential election quite like Tom Wolfe. Wolfe is an authorial treasure, and his oeuvre is returning to the limelight again thanks to the release of the documentary Radical Wolfe and this week’s re-issue of Radical Chic & Mau-Mauing the Flak Catchers as well as The Kandy-Kolored Tangerine-Flake Streamline Baby, collections of some of his most well-known work. Decades after their publication, the satire still singes unlike anything else. The introduction to Radical Chic, written by New York Times columnist David Brooks, was also published in the Times, so check that out as well.
Sam enjoyed Saloni Dattani’s analysis of the scale of the Black Death in Asimov Press. “Direct records of mortality are sparse and mostly relate to deaths among the nobility. Researchers have compiled information from tax and rent registers, parish records, court documents, guild records, and archaeological remains from many localities across Europe. However, even those who have carefully combed over this data have not reached a consensus about the overall death toll.”
In optimistic news for a certain type of Elon Musk superfan, Sam points to new research showing that “Terraforming Mars could be easier than scientists thought.” “Ansari and her colleagues wanted to test the heat-trapping abilities of a substance Mars holds in abundance: dust. Martian dust is rich in iron and aluminum, which give it its characteristic red hue. But its microscopic size and roughly spherical shape are not conducive to absorbing radiation or reflecting it back to the surface. So the researchers brainstormed a different particle: using the iron and aluminum in the dust to manufacture 9-micrometer-long rods, about twice as big as a speck of martian dust and smaller than commercially available glitter.”
Finally, a great piece by Lincoln Michel on “What Lasts and (Mostly) Doesn't Last.” “To offer a theory though, I think what lasts is almost always what has a dedicated following among one or more of the following: artists, geeks, academics, critics, and editors. ‘Gatekeepers’ of various types, if you like. Artists play the most important role in what art endures because artists are the ones making new art. Indirectly, they popularize styles and genres and make new fans seek out older influences. Directly, artists tend to tout their influences and encourage their fans to explore them.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes dubbed “Eroom’s Law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase the number of parameters used and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is either already in development or soon will be.
As semiconductor chips rise in complexity, costs come down because transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. That scaling miracle runs in reverse with AI today. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have attempted to pack in more and more parameters, each one of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM with 540 billion parameters requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
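To put that asymmetry in rough numbers, here is a back-of-the-envelope sketch (ours, not a figure from any lab) using the common approximation that training compute is about 6 × parameters × training tokens. The GPU throughput, utilization, hourly price and token counts below are illustrative assumptions, not reported data.

```python
# Back-of-the-envelope training-cost sketch. All constants are assumptions for
# illustration only; real training runs vary widely in hardware and efficiency.
# Approximation used: training FLOPs ≈ 6 * parameters * training tokens.

def training_cost_usd(params: float, tokens: float,
                      gpu_flops_per_sec: float = 312e12,  # assumed A100-class FP16 peak
                      utilization: float = 0.4,           # assumed hardware utilization
                      gpu_cost_per_hour: float = 2.0) -> float:  # assumed cloud price
    flops = 6 * params * tokens
    gpu_seconds = flops / (gpu_flops_per_sec * utilization)
    return gpu_seconds / 3600 * gpu_cost_per_hour

# A toy 1-million-parameter model vs. a 540-billion-parameter model
# (token counts are illustrative).
print(f"1M params, 1B tokens:     ${training_cost_usd(1e6, 1e9):,.2f}")
print(f"540B params, 780B tokens: ${training_cost_usd(540e9, 780e9):,.0f}")
```

Under those assumptions, the tiny model costs pennies to train while the 540-billion-parameter run lands in the millions of dollars, which is exactly the gap the paragraph above describes.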
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
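One concrete example of the kind of technique in that efficiency bucket is knowledge distillation, where a compact student model is trained to match the output distribution of a much larger teacher. The letter doesn’t name distillation specifically, so treat the PyTorch sketch below as our own minimal toy illustration of the general idea rather than anyone’s production recipe.

```python
# Minimal knowledge-distillation loss: the student learns from both the teacher's
# softened predictions and the ground-truth labels. Illustrative sketch only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft targets: KL divergence between student and teacher distributions,
    # both softened by the temperature (scaled by T^2, per the standard recipe).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 8 examples, vocabulary of 100 classes.
student_logits = torch.randn(8, 100, requires_grad=True)
teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```

The point isn’t this particular loss function; it’s that recipes like this trade a one-time teacher cost for much smaller, cheaper models, which is the dollar-to-quality curve at issue in the next paragraph.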
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue to and centralize within big tech companies like Microsoft (through OpenAI), Google and others that have the means and teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars on the field, while startups are exploring and developing more efficient models given their relatively meagre resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, just as China has done for years in everything from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were of course other goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about the country’s AI integration into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep growing before 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Next, a harrowing investigation into the ruler of Dubai, Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”