The world is hitting an entropic apex. We need new tools to understand chaos and crisis.
It’s a really big day over here at Lux. About a year after we first previewed the Lux Riskgaming Initiative here on “Securities”, I’m delighted to announce that we launched Riskgaming publicly today, including publishing the complete game kit for our first scenario, Hampton at the Cross-Roads, which explores climate change and the future of America’s maritime security. Paper copies of the scenario booklet have just arrived — and they look gorgeous.
Margaux MacColl of The Information, who joined us for a runthrough last week, wrote up a full and exclusive story on riskgaming and its launch for the publication’s Saturday Profile this morning. You should (of course) read the whole thing, but my favorite bit was this:
Up in Lux’s 11th-floor Manhattan offices, I received the role of local union president and spent large parts of the evening bartering with Mike Koscinski, a startup software engineer, who played as a district congressman. I begged him to take my $2 million and in return support an increase in union jobs post-hurricane. I thought we had an agreement until he disappeared into a closed-door meeting with the CEO of the biggest company in town—my sworn enemy. When he finally emerged, he rushed to reassure me he hadn’t supported the CEO’s plan to cut jobs. “I’m standing up against some powerful entrenched interests,” he said. “They wanted me to go scorched earth, and I said, ‘No, sir!’”
So what is riskgaming? Riskgaming is a scenario played by a small group that simulates a complex problem. We’ve written four scenarios so far, covering climate change and national security, the ethics of doing business in China, AI and election security, and AI and national security (AI is not our obsession, but it is the obsession of many people these days!). The way I approach designing these scenarios is to give each player the freedom to make their own choices within the constraints of their own incentives — politicians want to get elected, CEOs want their stock prices and valuations to go up, and journalists want attention and influence. Couple that open game model with well-researched stories and scenarios, and you have a combustible mix of negotiation, education and fun.
It’s been an adventure bringing Pentagon-style wargaming into the deep tech / hard science world over the past year. I’ve played hundreds of board games thanks to my husband’s fanatical obsession (we have about 650 board games and 600 square feet of living space — you do the math on that), and I have also participated in mock academic policy simulations in the past. With riskgaming, I wanted to build a unique experience from the ground up that took advantage of the best practices from the defense strategy world while opening these experiences up to the wide range of leaders that Lux engages with at the intersection of science, business and policy.
In the tradition of the very earliest startups we back here at Lux, I started simply: hand-written documents, a sketch of a story and a scenario, some pretty terrible imagery from OpenAI’s DALL-E, and very accommodating founders from the New York City ecosystem who showed up at Lux’s office with absolutely no idea what they were about to get themselves into. Their collective feedback was invaluable, and allowed me to rapidly iterate Hampton at the Cross-Roads and the principles of riskgaming more broadly. The “product” we offer with riskgaming today is so much more compelling and profound than what we first started with, and that’s thanks to the helpful community that makes New York — and America’s — startups grow and thrive.
Since those early forays, we’ve had hundreds of players join all four of our riskgaming scenarios, ranging from early-stage founders and software engineers building at the cutting edge of what’s possible to combatant commanders, think tank presidents, Fortune 500 board members, congressmen, magazine editors, state-level secretaries of state (the chief election officials of most U.S. states), U.S. Senate candidates, federal agency heads, and a slew of others.
We’ve had the opportunity to partner with great organizations including Mike Bloomberg and his team at Bloomberg on the future of AI and national security, as well as Miles Taylor and Evan Burfield at TheFuture.us alongside Sam Englebardt at Galaxy Interactive on AI deepfakes and America’s election security (an experience written up by Dan De Luce and Kevin Collier of NBC News last month).
At its core, riskgaming is about empathy: understanding the incentives and motivations of very different types of people, and why some can work together while others can’t. In our scenario on the future of AI and national security, for instance, we had tech CEOs and military generals essentially switch jobs for a day, with flag officers taking up the leadership of early-stage startups seeking exponential growth while tech CEOs negotiated the funding of the Pentagon’s annual budget. Hilarity ensued — each side corrected the other and pointed out the obvious mistakes being made. That mirth gave way to education and ultimately to empathy — the realization that the world isn’t a devious design dropped down from on high, but the aggregate outcome of innumerable individual decisions made under constraints and incentives.
As I evolved riskgaming, I narrowed our focus to the most complex challenges in the world that I could find, ones where individual people have vastly different reasons to seek a resolution. In our first scenario, Hampton at the Cross-Roads, a shipyard that builds America’s aircraft carriers becomes a battleground over job growth, unionization, competition with China, protecting domestic manufacturing and balancing regional economic growth, as well as the individual machinations of politicians at different levels of government. There are no easy answers here, nor are there right ones. I’ve watched about 16 teams play through the scenario, and each time, people invent new strategies and find points of negotiation that no one else has found before.
That’s what I’ve been working on over the past year here at Lux. So what happens next? First, we’re transitioning our editorial brand here from “Securities” by Lux Capital (with those pesky quotes) into just Riskgaming (no quotes required, thank god). I’ll still be writing up and publishing the newsletter and podcast as before, but they are going to center more specifically on the themes of risk and decision-making that have always been our most popular topics (our top-ranked podcast episode remains “Risk, Bias and Decision Making: Pre-mortems” with the late Daniel Kahneman alongside Annie Duke, Michael Mauboussin and Josh Wolfe).
The newsletter and podcast in turn fuel our live events. We host riskgaming events in New York City, Washington DC, San Francisco and any city where we can gather an interesting group of thinkers and leaders to solve humanity’s greatest challenges. We’ll be designing new riskgaming scenarios both on our own at Lux and with existing and new partners — if you or your organization is focused on a uniquely complex challenge that doesn’t get enough attention, feel free to reach out.
Finally, we’re always on the hunt for talent. If you are interested in working with Lux full-time or as a part-time freelancer, we have roles ranging from guest programming and events production to scenario design and essay writing. Reach out to me with a portfolio if you love the subjects we explore and want to help us build.
If this were a startup, I think we’d be at the seed stage shooting for a Series A in a year or two. It’s been a great year, and there’s so much more to look forward to with riskgaming. For the thousands of readers and listeners who check in with me and Lux every week – thanks for being here, and I look forward to continuing to highlight the world’s challenges. And one day, I hope you can all join us live for a riskgaming session yourself.
Maintenance and the Eclipse: Two short podcasts
I’ve been heads down on this riskgaming launch the past few weeks, so apologies for the slow progress on the podcast side. But we have two great short episodes to catch up on.
First, our scientist-in-residence Sam Arbesman and I talk about last week’s eclipse, and its meaning for community and science in a polarized world. It’s our rare fully uplifting episode, and a delayed echo of my piece on the James Webb Space Telescope in “Scientific Sublime” from mid-2022.
Second, Josh Wolfe and I discuss the most recent Lux quarterly LP letter, where we focus on the venture investment opportunity around maintenance broadly conceived. As the world hits an entropic apex (the subject of our LP letter a year or two ago), the need for repair and maintenance of our technological and physical infrastructures is increasing dramatically, forcing new solutions to be built. We talk about the thesis, companies we think are compelling, and what happens next.
Sam points to a snarky article on the ‘Qommodore 64,’ a Commodore 64 that’s designed to simulate IBM’s 127-Qubit ‘Eagle’ quantum processing unit (QPU). The results of the two computers competing with each other? “[s]arcastic researchers say 1 MHz computer is faster, more efficient, and decently accurate” than its quantum brethren.
Last week in “Pacific Stratagems,” I emphasized the frenzy of global diplomatic activity that took place on April 10th. Katsuji Nakazawa picks up the thread in Nikkei Asia, writing how the timing coincided with the passage of America’s Taiwan Relations Act back in 1979. “A countermeasure was also necessary if Xi was to save face in front of his domestic audience. According to Taiwanese media, the Xi-Ma meeting initially had been set for April 8 but was delayed two days. Likely, this was deliberately done so Xi's tete-a-tete with Ma would coincide with the Japan-U.S. summit.”
Tess van Stekelenburg recommends an hour-long discussion hosted by Stanford’s Eric Nguyen on “EVO: DNA Foundation Models.” Evo, which was launched via the Arc Institute, has taken the Bio + ML world by storm, and Eric is an expert guide to what the model does and the biological applications it unlocks.
Sam enjoyed OpenAI researcher Richard Ngo’s short story “Tinker,” which was published by Asimov Press and takes the heady perspective of an AI chip. “My memories of the early stages are hazy—I spent months predicting internet text, pictures, and videos, without full awareness of what I was doing or why. It was only once I began interacting with humans and other AIs that I gained a better understanding of my situation.”
I enjoyed a short article on Curbed where Christopher Bonanos tracks down “The Hardest-Working Turnstile in the [NYC] Subway.” “They are obviously built to last, and, he notes with some pride, have vastly outlived their design expectations: ‘One of the testaments is just how many sheer cycles they do without failure, without maintenance,’ he says. ‘I mean, those things can turn hundreds of thousands of times without a maintainer even needing to do anything.’”
Finally, fractals are incredible mathematical entities that have enticed viewers for decades thanks to the work of Benoit Mandelbrot. But they have always been in our mind’s eye, and never in nature. Now, Sam points to news that scientists have discovered the first fractal molecule. “It’s an enzyme used by a species of cyanobacteria to produce citrate, which was found to naturally assemble itself into a specific fractal pattern called the Sierpiński triangle.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes dubbed “Eroom’s law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase parameter counts and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is being, or will shortly be, developed.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. AI today inverts that miracle of performance. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers pack in more and more parameters, each of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
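To see why parameter counts drive cost so directly, a common rule of thumb in the field (an assumption of mine, not a figure from this piece) estimates training compute at roughly 6 FLOPs per parameter per training token. A back-of-envelope sketch, with a made-up hardware-cost constant:

```python
# Back-of-envelope training-cost estimate using the rough
# "~6 * parameters * tokens" FLOPs rule of thumb. The
# flops_per_dollar constant is an illustrative assumption that
# varies enormously with hardware and utilization.

def training_cost_usd(params, tokens, flops_per_dollar=2e17):
    """Estimated cost = total training FLOPs / FLOPs purchasable per dollar."""
    total_flops = 6 * params * tokens
    return total_flops / flops_per_dollar

# A 1M-parameter toy model on 20M tokens vs. a PaLM-scale
# 540B-parameter model on its reported ~780B training tokens.
small = training_cost_usd(1e6, 20e6)
large = training_cost_usd(540e9, 780e9)
print(f"1M-param model:   ~${small:.4f} of compute")
print(f"540B-param model: ~${large:,.0f} of compute")
```

Under these assumptions the toy model’s compute costs a fraction of a cent (fixed costs dominate at that scale), while the PaLM-scale run lands in the millions of dollars, consistent with the estimates above.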
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI), Google and others with the means and the teams to spend lavishly. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, and the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meagre resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading, as it typically does, to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial-policy handbook and simply dumped chips on the market, just as China has done for years in everything from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals other than economic growth for restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop: the planet’s rising temperatures, particularly in South Asia, increase demand for climate-resilience tools like air conditioning and climate-adapted housing, which in turn drive further climate change, ad infinitum.
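The loop can be sketched as a toy simulation. Every coefficient below is invented purely for illustration — this is a cartoon of the dynamic, not a climate model:

```python
# Toy simulation of the warming <-> cooling-demand feedback loop.
# All coefficients are illustrative assumptions, not measured values.

def simulate(years=25, temp_anomaly=1.0, ac_units=1.0):
    """Each step: hotter -> more AC demand; more AC -> more emissions -> hotter."""
    history = []
    for _ in range(years):
        ac_units += 0.05 * temp_anomaly   # demand rises with heat
        temp_anomaly += 0.002 * ac_units  # AC emissions add warming
        history.append((round(temp_anomaly, 3), round(ac_units, 3)))
    return history

trajectory = simulate()
print(trajectory[-1])  # both values drift upward together
```

Because each variable feeds the other with a positive coefficient, both climb monotonically — the "ad infinitum" in the paragraph above.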
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, covering all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Another recommendation: an investigation into Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” "At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”