What does security — national, economic, social, health — mean when the planet itself is creating the threats? Lethal combinations of climate change and human war are cutting harvest yields globally, from rice in California and corn in Iowa to grains in Europe and soybeans in China. With declining harvest projections arriving almost weekly from agricultural statistical agencies, even more intense upward pressure on consumer inflation can be expected this year.
That’s the climate disruption in the short term (skipping, of course, the floods, droughts, wildfires, tornadoes, hurricanes, and other disasters befalling us). In the longer term, cascading effects of climate disruption are likely to further accelerate and reinforce these negative cycles.
In a new paper in Science published this week (preprint also available), David I. Armstrong McKay and his co-authors identify 9 global tipping points and 7 regional tipping points that have the potential to self-reinforce global climate disruption, updating an analysis conducted nearly 15 years ago and published in the Proceedings of the National Academy of Sciences. Using the most up-to-date models and data, they identify 5 tipping points that may already have been crossed by recent anthropogenic global temperature rise, and they show that several more are likely to be tipped even under more optimistic scenarios of global response to climate change.
In short, more insecurity is on the way.
The planet itself is the threat, but it is also the source of life for humanity, a dichotomy we are still learning to grapple with. So what role do the armed services have to play in this new security environment? It’s a question that has been on the minds of U.S. defense planners for about two decades now, as chronicled in Michael T. Klare’s All Hell Breaking Loose. As the demands on the armed services for humanitarian, logistical, and operational support have increased, so too has the necessity of figuring out a planetary insecurity doctrine.
This summer, the Journal of Advanced Military Studies (JAMS), published by Marine Corps University Press, put out a special issue entirely focused on the evolving military response to global climate change. In her editor’s note, Julia F. Irwin asks a provocative question: Should militaries be considered humanitarian actors?
She notes that the humanitarian mission is not a new one for the U.S. military, writing that “By 1959, the U.S. military had established its reputation as not only a global policeman, but also a global firefighter and ambulance driver within U.S. states and territories and across much of the world.” Such missions have endured ever since.
Yet the scale of the need is starting to overwhelm even the world’s best-funded and best-trained military services. The sheer number of disasters, their scale, and their breadth across every continent threaten to subsume all other security missions under the climate security umbrella. In his book, Klare documented the growing concern of senior military strategists about how the need to fight climate change could muddle force structure requirements in many regions, most notably the Asia-Pacific.
Rather than view climate disruption as a distraction from “hard” threats, analysts are increasingly seeing climate disruption as the ur-threat, one that — much like tipping points — can create cascading instability. In her research article in the special issue of JAMS, Elizabeth G. Boulton argues that grand strategy needs to focus on the “hyperthreat” of climate change:
The hyperthreat has warlike destructive capabilities that are so diffuse that it is hard to see the enormity of the destruction coherently nor who is responsible for its hostile actions. It defies existing human thought and institutional constructs.
The entirety of modern military organizational design is built around handling the threats nation-states have traditionally faced from adversarial actors. Those threats do evolve: non-state actors committing terrorism forced nations to grapple with new battlegrounds and avenues of attack, while cyber is once again challenging defense professionals to consider a novel theater requiring new strategies and doctrines.
The hyperthreat concept moves past this pattern of each new theater requiring a specific response. There’s no ‘adversarial actor’ that is conducting operations through tornadoes and hurricanes — it’s the planet itself. While we tirelessly use war-like language to “fight” climate change, the reality is that there isn’t much of a fight, as there is no enemy (well, other than all of us polluting humans). The result of the planet’s “attacks” is still death and destruction, but perhaps more importantly, also a feeling of human impotence in the face of the geological forces arrayed “against” us.
The evolution of military responses to climate change will ultimately rest on adapting to and mitigating that cognitive insecurity. The goal of humanitarian initiatives is to provide help, but it’s also to apply a balm to the wounded human ego that once commanded the planet and now finds it ignoring our best efforts at taming it. We no longer have the surety that our efforts to cultivate food or build shelter will ultimately be fruitful. Even with all of our forecasting and technology, the planet can seemingly shape outcomes to its “will.”
As a hyperthreat, climate change itself must be the prism through which other security threats are evaluated. Large-scale flooding in Pakistan has now displaced or affected more than 32 million people in one of the most geopolitically volatile regions in the world. Whether the concern is terrorism, nuclear proliferation, or criminal money laundering, it’s now the flood and the humanitarian response that will either magnify or diminish those other threats.
Boulton argues that we need to build new “deep frames,” borrowing a concept from psychology. It’s not enough to shift our conscious narrative about security toward the climate. Instead, we need to rebuild our mental structures, from the synapses up to our mental models, to imbue ourselves with understanding. Our collective ability to handle an entirely new security phenomenon rests on our ability to rise above our preconceived notions of these situations. She asks, “Could the concept of enemy be replaced with a range of new ideas and words that match entangled existence?”
Maybe that’s a step too far for most people. The planet isn’t and can’t be perceived as an enemy. And yet the destruction keeps coming, and fields lie fallow from drought and scorching sun. What we’re witnessing is a hyperthreat unlike anything we have faced, and it’s going to require all of us to think about the meaning of security in ways we have never been asked to before.
Announcing the Lux Leaders Summit and our keynote conversation
Next month, we will be hosting our very first Lux Leaders Summit in New York City to bring together our portfolio CEOs and a range of senior executives from industry for an intimate and private gathering. Eschewing boring panels and one-to-many, VC-driven talk fests, we have curated every segment of the Lux Leaders Summit for maximum engagement by every participant, with an eye toward both building peer relationships and generating immediate and long-term business value.
That said, we won’t be eschewing our one and only panel on stage. We’ll be hosting Sam Bankman-Fried, CEO and founder of Lux-backed FTX, in conversation with Derek Thompson of The Atlantic for a discussion on “Effective Altruism and the future of American capitalism.” We hope the conversation in a few weeks sparks an assessment of science, technology and innovation at a pivotal moment for America’s economic future.
New “Securities” podcast: The utopian visions of Stanford’s generations of entrepreneurs
Stanford University sits at the beating heart of Silicon Valley, and attending it has become almost a rite of passage for generations of entrepreneurs. But how does each generation form, and what skills and mindsets should they be equipped with, given our changing world?
No one has thought more about how to shape that entrepreneurial spirit than Dr. Tina Seelig. Seelig is the Executive Director of the prestigious Knight-Hennessy Scholars program at Stanford, among many other leadership roles, and she is also the author of Creativity Rules: Get Ideas Out of Your Head and into the World as well as What I Wish I Knew When I Was 20. Joining the podcast was Lux Capital’s Grace Isford.
We talk about Seelig’s class “Inventing the Future” and how she guides students in considering the utopian and dystopian aspects of the future technologies that are shaping our everyday lives. We also talk about generational differences among students over the past two decades, from the 9/11 generation to the global financial crisis and Covid-19 generations, and how global events influence the approach of budding entrepreneurs. Then we walk through how to teach leadership, how to increase luck, and why there is such an important correlation between optimism and agency.
🔊 Take a listen here
Lux Recommends
While we are on the subject of the climate and also altruism, perhaps no story got greater traction in our networks than that of Patagonia’s founder Yvon Chouinard, who has now placed his entire stake in the outdoors brand into a charitable trust to fight climate change through the company’s profits. Bilal Zuberi recommended the profile of him by David Gelles in The New York Times. Gelles writes, “Even today, he wears raggedy old clothes, drives a beat up Subaru and splits his time between modest homes in Ventura and Jackson, Wyo. Mr. Chouinard does not own a computer or a cellphone.”
Our scientist-in-residence Sam Arbesman was intrigued by Jillian Steinhauer’s essay in The New Republic on the future of the American mall, which is also a review of Alexandra Lange’s book Meet Me by the Fountain: An Inside History of the Mall. “What would it look like if we tried to reclaim some of the space we’ve lost and demanded more from our leaders in the process?”
Sam also recommends the obituary for Frank Drake, the inspiration behind SETI, the search for extraterrestrial intelligence, written by Ramin Skibba in Wired. “So in 1974, while serving as the director of the Arecibo Observatory in Puerto Rico, Drake used the radio telescope to broadcast the first interstellar message deliberately sent from Earth.”
The Merge happened this week for Ethereum (and Grace Isford recommends this helpful explainer from Jakub of Finematics), replacing its energy-guzzling proof-of-work algorithm with a proof-of-stake design. But for the enduring crypto skeptic, I found Ben Munster’s dispatch in Decrypt on “the World’s First No-Coiner Conference” a surprise delight.
Finally, Josh Wolfe recommends this research post from Google’s AI Blog on “Digitizing Smell: Using Molecular Maps to Understand Odor” by Richard C. Gerkin and Alexander B. Wiltschko. It’s deeply intriguing, and we’ll have more to say about the sense of smell in a future issue.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes dubbed “Eroom’s Law”). Each new generation is more and more expensive to train as researchers exponentially increase the number of parameters and the overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is being developed now or soon will be.
As semiconductor chips grow in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. AI today inverts that miracle. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have packed in more and more parameters, each of which demands more computation both in training and in use. A 1-million-parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
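To make that scaling concrete, here is a rough back-of-envelope sketch (my own illustration, not a calculation from any of the work cited here) using the common heuristic that training a transformer takes roughly 6 × parameters × training tokens floating-point operations. The GPU throughput, utilization rate, and hourly price below are assumed placeholder figures, not quoted numbers.

```python
# Back-of-envelope training-cost estimate using the rough heuristic that
# training a transformer consumes ~6 * parameters * tokens FLOPs.
# The constants below are illustrative assumptions, not measured values.

GPU_PEAK_FLOPS = 312e12    # assumed: peak throughput of an A100-class GPU (FLOPs/sec)
UTILIZATION = 0.4          # assumed: fraction of peak throughput actually achieved
PRICE_PER_GPU_HOUR = 2.0   # assumed: rough cloud rental price in dollars

def training_cost_usd(params: float, tokens: float) -> float:
    """Estimate the dollar cost of a single training run."""
    total_flops = 6 * params * tokens                         # heuristic total compute
    gpu_seconds = total_flops / (GPU_PEAK_FLOPS * UTILIZATION)
    return gpu_seconds / 3600 * PRICE_PER_GPU_HOUR

# A tiny 1-million-parameter model on ~20M tokens: effectively pocket change.
print(f"1M params:   ${training_cost_usd(1e6, 20e6):,.4f}")

# A PaLM-scale 540-billion-parameter model on ~780B tokens: millions of dollars.
print(f"540B params: ${training_cost_usd(540e9, 780e9):,.0f}")
```

Real runs vary widely with hardware, utilization, and negotiated pricing, but the multiplicative relationship between parameters, tokens, and dollars is what drives the cost curve described here.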
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s “Megahertz Myth” marketing campaign, which tried to persuade the public that headline megahertz numbers weren’t the right way to judge the performance of a personal computer. Performance in most fields is a complicated thing to judge, and just adding more inputs doesn’t necessarily translate into better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue to and centralize in the big tech companies like Microsoft (through OpenAI), Google, and others with the means and the teams to lavish on them. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, and the leverage of the big tech companies would be much reduced.
The U.S. right now is pursuing both approaches in parallel. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading where it typically does: to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and, in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits the way American companies can. That leaves China with the only other path available: following the parallel course of improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. That means its resources are arrayed toward building the platforms that capture end-user applications, the exact opposite of what American policymakers intend. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, exactly the kind of intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial-policy handbook and simply dumped chips on the market, just as China has done for years in everything from solar panels to electronics. Cheaper chips, faster chips, chips so competitive that no Chinese manufacturer, even under Beijing’s direction, could have competed effectively. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for China’s national champions to usurp market share and profits.
There were, of course, goals other than economic growth behind restricting China’s access to chips. America is deeply concerned about China’s integration of AI into its military, and it wants to slow the evolution of Chinese autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across my six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep growing through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends The Economist’s “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a hard-science-fiction novel about first contact that made quite a splash when it was published back in 2006.
Also recommended: an investigation into Mohammed bin Rashid Al Maktoum and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story, “Earth League International Hunts the Hunters,” about an NGO focused on infiltrating and exposing the networks that allow the trade to continue. “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”