Photo by Martin Brázdil via Flickr / Creative Commons
First up, wow
The first image of the black hole at the center of Messier 87, a galaxy in the Virgo galaxy cluster, has been published by the Event Horizon Telescope team and an international consortium of research groups. It’s not just an amazing achievement of science, but the collaboration itself is an incredible image of the power of cooperation in advancing the pursuit of human knowledge.
Risks and decision-making
We’ve reached an apex of risk — individually, economically, and as a society. Stability has been rent asunder, and the resilience of our support systems from healthcare to supply chains has been all but demolished. Just in this frenzied week of crises, there’s now a nationwide shortage of infant formula thanks to bacterial contamination at a plant in Sturgis, Michigan, which forced owner Abbott to recall some formula and the FDA to investigate. Meanwhile, lockdowns in Shanghai have shut down the only factory making CT contrast agents for GE scanning machines, forcing hospitals to ration these diagnostics for the foreseeable future. I’d play Supply Chain Shortage Bingo, but unfortunately, the cards are out of stock.
With risks intensifying and rapidly mutating, the emphasis shifts to our ability to perceive these risks and make good decisions under uncertainty. Risk is omnipresent — even in the best of times, the future is never 100% clear. That means that improving decision-making has enormous value, a value that only rises as the smaller risks of the past have transformed into the monstrous risks of the present.
Lux recently hosted a risk and decision-making seminar lunch with three lauded thinkers who have spent their careers studying risk, uncertainty, and the human psychology and incentives of decision-making. Our lunch included Nobel laureate Danny Kahneman, whose book Thinking, Fast and Slow has been a major bestseller and summarizes much of his extensive work in the field. We also had Annie Duke, a World Series of Poker champion who researches cognitive psychology at the University of Pennsylvania, as well as Michael Mauboussin, Head of Consilient Research at Counterpoint Global, who has also taught finance for decades at Columbia. Both of them have published influential books on decision sciences.
We’ve edited the conversation and published it as part of the “Securities” podcast, broken up into four segments running just shy of an hour. You can listen to all four episodes on Anchor, Apple Podcasts and Spotify.
The episodes cover a huge amount of ground, but for me, there were three high-level responses I had to the discussion.
The first was in the context of a debate about how easy it is for someone to change their mind once they’ve made a decision. Kahneman bluntly stated that — given his extensive research on human psychology — people just never change their minds. And indeed, it was one of the few statements that all of our participants could agree upon.
Once we have made a decision, we will start to filter and sort all data and feedback we receive into a frame that supports our decision. We consistently seek to reinforce our own opinion and reduce our own dissonance, for the obvious reason that it makes us feel good to be right. Worse, even though economists and psychologists have experimented with a wide variety of strategies and tools to attempt to overcome this bias, we’ve been unable to find a way to make humans more “objective.”
Listening, I was incensed. Liberalism and the Enlightenment values that underpin science, progress and democracy have been under heavy fire in recent years, from a wide variety of actors and for a wide number of reasons. At its core, the ideal of the Enlightenment is rigorous, objective debate that places evidence and proof above human intuition, minimizing and even eliminating our innate biases in pursuit of rational decision-making.
The reality, as shown in experiment after experiment, is that humans have almost no ability to avoid biased framing effects, that there are few to no strategies that can undo them, and that even social models like “adversarial collaboration” (where researchers with opposing hypotheses jointly design studies to test them — think scientific publishing) are barely a salve. Much of this isn’t surprising for those who have read Thomas Kuhn’s The Structure of Scientific Revolutions, which showed that science doesn’t progress continuously, but rather that mounting evidence eventually overcomes human reticence and can lead to a punctuated “paradigm shift.”
It’s a perspective that interconnects many of the challenges facing our society today, from free speech on Twitter, to objectivity in journalism, to the bounded rationality of politicians and business leaders. What’s the point of democracy and voting if no one changes their mind? What’s the point of a business strategy session when everyone in the room already has their mind fixed? The experimental evidence here is overwhelming, and yet there has been no real social reckoning over how to balance these newfound insights with the institutions that govern our society.
That convergence led to my second takeaway, during a wide-ranging conversation around the nature of optimism, a quality that is increasingly scarce in these precarious times. Mauboussin triggered the debate by noting that base rates in statistics are clear (such as the average likelihood of a new startup growing into a unicorn), and yet, entrepreneurs and bettors believe that they can beat the average. Is optimism necessary or even optimal for society?
Perhaps surprisingly, the general agreement was that no, optimism is not necessary. In fact, Duke argued that it’s hard to believe that miscalibrated optimism could possibly be beneficial — ultimately, if you are making bets that are unlikely to succeed, that by definition leads to a poor aggregate outcome.
That said, she emphasized the importance of distinguishing between risk appetite and optimism. Optimism is a type of miscalibration: we believe that things are less risky than they truly are. That leads to poor outcomes, since it means we will take bad risks that we should otherwise avoid. Risk appetite, by contrast, is about faithfully evaluating risks and actually following through on the right bets when the probabilities are in our favor. As she describes herself, she’s a pessimist, but has a large risk appetite.
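Duke’s distinction can be made concrete with a toy Monte Carlo simulation. Everything here is invented for illustration — the bet structure, the bias value, the uniform distribution of odds — but it shows the mechanism: a calibrated risk-taker accepts only genuinely favorable even-money bets, while a miscalibrated optimist also accepts the unfavorable ones that merely *look* favorable, and earns less in aggregate.

```python
# Toy simulation: calibrated risk appetite vs. miscalibrated optimism.
# All probabilities and payoffs are invented for illustration.
import random

random.seed(42)

def simulate(bias: float, n: int = 100_000) -> float:
    """Average profit per offered even-money $1 bet for an agent whose
    believed win probability is the true probability plus `bias`.
    The agent bets whenever its *believed* probability exceeds 0.5."""
    total = 0.0
    for _ in range(n):
        true_p = random.uniform(0.0, 1.0)        # true odds of this bet
        believed_p = min(1.0, true_p + bias)     # optimist inflates the odds
        if believed_p > 0.5:                     # agent thinks it's favorable
            total += 1 if random.random() < true_p else -1
    return total / n

print(f"calibrated risk-taker:  {simulate(bias=0.0):+.3f} per bet")
print(f"miscalibrated optimist: {simulate(bias=0.2):+.3f} per bet")
```

The optimist’s extra losses come entirely from the marginal bets with true odds between 0.3 and 0.5 — bets that are unlikely to succeed but that inflated beliefs make look worth taking, which is exactly Duke’s point.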
Kahneman also made a useful distinction between planning and execution, noting that optimism is bad in planning but useful in execution. You don’t want to be surprised by how a plan goes wrong because you were too positive about its potential, but once a plan is in motion, optimism can amplify a team’s performance and lead to better outcomes.
Third and finally, one of the most interesting observations I heard during the conversation came from Duke and her experience playing poker. Ultimately, all hands are random draws of cards, and the odds that my hand is better than yours are probabilistically fixed before I look at them. The most crucial point about probability in poker isn’t that the odds are fixed, it’s that you can play the game in a way that you never find out what the hands of the other players are.
It’s a philosophical view that perhaps sidesteps the mean-reverting pull of base rates. If we never find out what our draw is, then we never discover its actual probability of success. If a founder sells a startup early, we will never find out whether it would have grown large or collapsed if it had continued on. In fact, one form of optimism is what Duke dubbed the “counterfactual cloud of what ifs” — the fact that in general, humans will want to turn over their cards because they’d rather lose than never know whether they could have won.
And that’s perhaps the best philosophy one can have in a world where many bets are diving off a cliff right now. Our best decision may well be to never have to look at the outcome of our decisions at all, to never actually watch as we return to declining base rates. We need to avoid that counterfactual cloud, and to do that, we might need to change our minds — and we all know how hard that is.
Definitely check out the full conversation, which is chock full of thoughts and ideas.
While babies may not be fed formula for all that much longer, the good news is that one of the major causes of death of infants has been identified. Deena Shakir recommended this piece, “Researchers Pinpoint Reason Infants Die From SIDS” (sudden infant death syndrome), finding that these deaths arise in an infant’s arousal pathway.
Sam Arbesman recommends George Dyson’s book Project Orion: The True Story of the Atomic Spaceship for a somewhat dry but fascinating account of how scientists came together to attempt to use atomic bomb explosions to propel spacecraft to orbit.
Sam also recommends Ben Reinhardt’s When should an idea that smells like research be a startup? It’s a topic near and dear to Lux, and Reinhardt writes that the “more the unpredictability in each of the steps smells like uncertainty instead of risk, the less it’s a good idea for the project to be carried forward by a new for-profit organization.”
Richard Smyth goes on a tear in Aeon about the need for naturalists to write with more realism and less poetry in an essay titled “Nature does not care.” He laments the Romanticism of too much writing about nature, and the lack of rigor that meets the needs of nature itself.
Related to my notes in “Easternization of media” a few weeks ago, Deb Aoki in Publishers Weekly writes about the massive expansion of interest in manga in the United States, noting that overall sales of manga books have skyrocketed more than 2.5 times compared to the last sales peak in 2007.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped “Eroom’s Law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase the number of parameters used and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is already being developed or soon will be.
As semiconductor chips rise in complexity, costs come down because transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. That miracle of performance is inverted in AI today. To increase the complexity (and therefore, hopefully, quality) of an AI model, researchers have attempted to pack in more and more parameters, each one of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM with 540 billion parameters requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
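The cost gap between those two extremes follows from a widely used back-of-the-envelope rule: training a dense model takes roughly 6 × (parameters) × (training tokens) floating-point operations. The sketch below applies that heuristic; the GPU throughput and hourly price are illustrative assumptions, not quotes from any vendor.

```python
# Back-of-the-envelope training cost via the common ~6 * N * D FLOPs heuristic.
# Throughput and price figures are illustrative assumptions, not real quotes.

def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu_per_s: float = 150e12,  # assumed effective GPU throughput
                      usd_per_gpu_hour: float = 2.0) -> float:  # assumed rental price
    """Rough dollar cost to train a dense model of `params` parameters
    on `tokens` tokens of data."""
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / flops_per_gpu_per_s / 3600
    return gpu_hours * usd_per_gpu_hour

tiny = training_cost_usd(params=1e6, tokens=20e6)       # a 1M-parameter model
palm = training_cost_usd(params=540e9, tokens=780e9)    # PaLM-scale (540B params, 780B tokens)
print(f"tiny model: ${tiny:,.4f}   PaLM-scale: ${palm:,.0f}")
```

Under these assumptions the tiny model costs a fraction of a cent while the PaLM-scale run lands in the single-digit millions of dollars — consistent with the article’s framing that each additional parameter compounds both training and inference cost.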
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to the big tech companies like Microsoft (through OpenAI), Google and others who have the means and teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, that means the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars on the field, while startups are exploring and developing more efficient models given their relatively meagre resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and just dumped chips on the market, just as China has done for years from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were of course other goals outside of economic growth for restricting China’s access to chips. America is deeply concerned about the country’s AI integration into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st. It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep expanding through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Another recommendation looks at the ruler of Dubai, Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”