Canadian writer Stephen Marche’s new book The Next Civil War was officially published last week, and it made quite a splash among those of us who read it at Lux.
Marche pulls no punches in his opening lines: “The United States is coming to an end. The question is how. Every government, every business, every person alive will be affected by the answer.” Rather than regale readers with the repetitive talking points that animate left- and right-wing partisans, Marche does something rather novel: he explores alternative incendiary scenarios that would lead to a cataclysm, including local takeovers of federal land, climate change, an assassination, and more.
While the book has several core messages and a constellation of fascinating facts, there are three meta-lessons worth pondering.
First, one theme that flows across these different scenarios is the gaping lack of trust and truth between Americans. Benedict Anderson famously described nations as “imagined communities” in his eponymous book, because no one in a country of 330 million people can meet — let alone know — everyone else. So in order to function as a nation, each American has to imagine who those other Americans are and connect with them through culture, or religion, or language, or institutions, or simply through our very borders.
Marche, borrowing from findings in political sociology, notes that “Typically, when democracies confront violent disruptions to the transition of power, they rely most heavily on their national symbols and collective rituals.” What happens when even basic questions like “who is in charge” can no longer be answered and a shared truth can’t be found? The answer, of course, is in the book’s title.
Second, and this is from Josh Wolfe, but the very discussion of civil war opens the Overton window for civil war, accelerating the narrative rather than stifling it. Indeed, one widely discussed recent poll showed that Americans of all political stripes are increasingly willing to consider violence “justified” for policy ends. Polls like this, discussions like this newsletter, and more make it ever more likely that non-democratic means will be used to effect political change. It’s the future-that-shall-not-be-named.
Third, and perhaps most tellingly about the state of American discourse, The Next Civil War will convince approximately zero of its readers to adjust their point of view on the trajectory of American politics. Readers are going to viscerally agree with some points, viscerally disagree with others, but it is hard to believe — even for a book that makes reasonably strenuous efforts to be non-partisan — that someone will walk away thinking anything differently. Which of course is the vicious intellectual cycle that led to the book being written in the first place.
Marche ends not with optimism, but with an anti-hope. “There is one hope, however, that must be rejected outright: the hope that everything will work out by itself, that America will bumble along into better times. It won’t. Americans have believed their country is an exception, a necessary nation. If history has shown us anything, it’s that the world doesn’t have any necessary nations.”
The way forward is clear, and it’s built into that theme of lux: it’s about enlightenment. Renewing values, renewing symbols, renewing a commitment to truth, and rebuilding institutions that have become antiquated in 2022. And doing that all quickly, since the trendlines aren’t going to be friendly to delays.
After that depressing news, time for some ❤️
In case you missed it, an absolutely huge milestone in the area of xenotransplantation: doctors at the University of Maryland Medical Center successfully — for the first time — transplanted a genetically-modified pig heart to a human patient, 57-year-old David Bennett.
One of the last in-depth stories I did for TechCrunch before showing up at Lux was a profile on UNOS, the United Network for Organ Sharing. Xenotransplantation offers one of the best near-to-medium-term opportunities to rapidly increase the supply of organs needed by humans, and this week’s milestone is a testament to decades of work by scientists and medical doctors to set the stage for this achievement.
While much more work is required to make xenotransplantation a safe and consistent procedure (no one wants to beta trial their heart, after all), this early sign of success should continue to bring attention and capital to the xenotransplantation market. There are still longer-term hopes for lab-grown organs, but those remain mostly in the realm of science fiction today.
A revolution in protein structure prediction
Forwarded to me by Adam Goulburn and Zavain Dar: Sriram Subramaniam, founder and CEO of Gandeeva and professor at the University of British Columbia, published a comprehensive overview in Nature Methods of what is happening in structural biology following the success of AlphaFold in 2018 and particularly 2020.
While sufficiently technical to frustrate a skim read while drinking a Santa Cruz grenache like I originally tried, the paper offers clear guidance on what is happening today with regards to protein structure prediction, and where new technologies like cryo-electron microscopy will take it in the next decade.
With AlphaFold and RoseTTAFold, we now have excellent tools for predicting the general structure of hundreds of thousands of proteins, a number that is rapidly increasing. That databank gives life scientists the means to develop computational models of biology and increase our understanding of how our cells work.
There are gaps though. While we can identify the standard formations of proteins given their underlying amino acids, we often don’t know how proteins change configuration when entering different states or interacting with each other. That’s where newer tools like cryo-EM come in handy. As Subramaniam writes:
Fortunately, several of the gaps in the predictions of these powerful new methods correspond almost exactly to the areas where cryo-EM can provide useful information. Cryo-EM methods based on ‘single-particle’ imaging enable the determination of 3D structures of macromolecular assemblies that are frozen by rapid vitrification in cryogens such as liquid ethane. The rapid freezing process allows the preservation of protein structures under near-native conditions. […] Visualizing multiple conformational and functional states that are populated in the samples by 3D classification is a unique strength of the cryo-EM toolbox.
This is an enormously important line of work, so do check out the paper (without the wine, of course).
Middle market, meet science
From our scientist-in-residence Sam Arbesman, a great editorial published in Nature calling for the funding of “focused research organizations” that would bridge the gap between academic science and venture-backable startups:
A type of non-profit start-up could be a better way to support projects that enable research. These would have full-time scientists, engineers and executives, and total funding of about US$20 million to $100 million that would last around 5 years — longer than most grants or venture-capital funding rounds allow. And they would be set up to pursue predefined milestones, such as improving the resolution of a measurement system by tenfold, or gathering a pre-specified amount of data. We call them focused research organizations (FROs).
ICYMI, Sam wrote about new scientific organizations in last week’s edition of the newsletter, available online. Also check out his directory of new research funding organizations dubbed the Overedge Catalog.
The Renaissance Man of Venture Capital
Most of you reading this presumably know Lux, but this profile by Michelle Celarier in Institutional Investor offers a vivid window into how Josh Wolfe, Peter Hébert and the whole Lux crew here think about the world of venture today.
There is a great segment toward the end on Josh’s thoughts around smelly startups:
“Sight, as an extension of our human senses, is technologically capturable and replayable,” he says. “Sound, same thing. I can record a sound. And then you can play that back.” Taking it one step further, he mentions the app Shazam. “I’m in a coffee shop or a restaurant or a hotel, and I like a song, and I press Shazam and I hold it up, and the sound of that song goes into my little pocket computer, and decodes the waveform and tells me exactly what song that is by the artist.” So why, he wonders, can’t people do that with smells? “Sound is just a waveform that’s traveling through the ether — I can’t even see it, and I have a piece of technology that can capture, invisibly, sound, and play it back,” he says. “A smell is just a volatile organic compound. When you have a candle, or you have chocolate, or you have food, or you have wine, or you have a shampoo, or the smell of your grandparents, or a home, there’s a distinct smell.”
RFS (request for startups) on smell, btw.
An Apple a day keeps China at bay
Finally, SupChina posted a combination podcast and transcript of Chris Marquis’s conversation with former Apple University China head Doug Guthrie. Apple is deeply entwined with China economically and industrially, and some of the more esoteric lessons about that arrangement are absolutely fascinating to read (or listen to). Here’s Guthrie:
And what happens then is just with any company, whether it’s electronics or automobiles, you have components, modules, and final assembly. Right? Most people think like, “Oh, Tesla, they have a factory in China. It’s final assembly.” No, actually it’s components, modules, final assembly. And in every one of those levels, the companies that are the most innovative are helping to teach the suppliers how to run their supply chains, and they always teach multiple suppliers. And so what happens is that the suppliers become competitors against each other. And so I can just tell you, I’ve had multiple meetings with CEOs of suppliers of Apple, and they would say, “Why are we pushed to such low margins?” And I’m like, “Well, you’re paying for it because you want access to the expertise and that’s what everybody wants.” Here’s my favorite statistic about Apple in China. So Apple controls about 35% of the smartphone market share in the world, but they take about 95% of the profit. And why does that happen? It happens because they’re able to leverage their expertise to push suppliers to do what they want them to do. And then those suppliers go make their money from Xiaomi, Oppo, Vivo, everybody.
Lux Recommends
We’ll have more to discuss on this subject in the future, but Deena Shakir got to announce her investment into Gameto, which is looking to slow ovarian aging and improve fertility treatments for women.
Adam Goulburn recommends the movie 14 Peaks: Nothing Is Impossible, which chronicles Nirmal Purja’s journey to summit 14 mountains in seven months.
I’m recommending “Virtual boy band Strawberry Prince steps in metaverse spotlight,” a look at the future of music: “Strawberry Prince, or Sutopuri in Japanese, are ‘2.5-D idols’ — half a step short of flesh-and-blood stars. Performing as virtual avatars, they can still draw enough fans to fill stadiums.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s law”). Each new generation is more and more expensive to train, as researchers exponentially increase the number of parameters and overall model complexity. Sam Altman of OpenAI has said that training GPT-4 cost over $100 million, and some AI computation specialists believe the first $1 billion model is being developed now or will be shortly.
As semiconductor chips rise in complexity, costs come down because transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. With AI today, that miracle runs in reverse. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have attempted to pack in more and more parameters, each of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM with 540 billion parameters requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
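To make the scaling concrete, here is a back-of-envelope sketch using the widely cited ~6 × parameters × tokens approximation for training FLOPs. The dollar conversion rate and token counts below are illustrative assumptions, not figures from this newsletter, and real training runs carry substantial overhead beyond raw compute:

```python
# Rough training-cost estimate from the common ~6 * N * D FLOPs
# approximation (N parameters, D training tokens).

def training_cost_usd(params, tokens, flops_per_dollar=3e17):
    """Total training FLOPs divided by FLOPs purchasable per dollar.
    flops_per_dollar is an assumed blended cloud-GPU rate, including
    imperfect hardware utilization."""
    total_flops = 6 * params * tokens
    return total_flops / flops_per_dollar

# A 1M-parameter toy model vs. a 540B-parameter, PaLM-scale model.
# Token counts are illustrative (PaLM reported roughly 780B tokens).
small = training_cost_usd(1e6, 20e6)
large = training_cost_usd(540e9, 780e9)
print(f"toy model:  ~${small:.4f} in raw compute")
print(f"PaLM-scale: ~${large:,.0f} in raw compute")
```

The point of the sketch is the spread: raw compute for the toy model is effectively free (real-world costs are dominated by fixed overhead), while the PaLM-scale run lands in the millions of dollars, consistent with the estimates above.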
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Over the past two years, researchers have discovered better training techniques (as well as recipes for bundling those techniques together), developed best practices for reinforcement learning from human feedback (RLHF), and curated better training data, improving model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI), Google and others that have the means and teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars on the field, while startups are exploring and developing more efficient models given their relatively meagre resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it should have taken a page from the Chinese industrial policy handbook and dumped chips on the market, just as China has done for years in everything from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer, even under Beijing’s direction, could have effectively competed. Instead we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals other than economic growth for restricting China’s access to chips. America is deeply concerned about AI’s integration into the Chinese military, and it wants to slow the evolution of autonomous weaponry and intelligence gathering there. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep expanding before 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
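The dynamics of such a loop can be sketched with a toy model. The numbers below are purely illustrative assumptions, not climate data: the takeaway is that a feedback loop with gain below 1 converges (a geometric series), while gain at or above 1 runs away:

```python
# Toy feedback-loop sketch: warming drives AC demand, whose emissions
# drive further warming. All parameters are illustrative assumptions.

def run_loop(initial_warming=1.0, feedback_gain=0.3, steps=50):
    """Each unit of warming induces AC emissions that add
    feedback_gain additional units of warming in the next round.
    With gain < 1 the series converges; with gain >= 1 it runs away."""
    total = 0.0
    increment = initial_warming
    for _ in range(steps):
        total += increment
        increment *= feedback_gain
    return total

print(run_loop())  # converges toward 1 / (1 - 0.3), about 1.43
```

“Ad infinitum” is thus the worst case: whether the loop stabilizes or spirals depends entirely on how much additional warming each round of new cooling demand feeds back into the system.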
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Also recommended: a deep investigation into Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” "At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”