Whoever and wherever you are — thank you for subscribing to “Securities” and reading us week after week. Please have a joyous and restful holiday season; as I wrote last week in “Extreme Epoch,” 2024 is set to be a watershed.
We’ll be on hiatus on Dec. 30th.
Best of the Podcast 2023
2023 was an incredibly busy year, and nowhere was there more fervent attention than on artificial intelligence. OpenAI launched ChatGPT at the very end of 2022, and its implications found purchase this year among more than one hundred million users and the regulators who serve them. Billions of dollars of venture capital flowed into the AI space, with investors funding everything from data infrastructure and better model training to the applications that are already beginning to transform industries across the world.
Our final “Securities” episode this year is a narration of our 9 favorite shows on AI, which we titled 🔊 “WTF Happened in AI in 2023?” It’s the perfect way to end the year — thanks to our producer Chris Gates for compiling all this material. The 9 episodes we drew from include:
May the AI be ever in your favor — our own Grace Isford attended the first AI Film Festival hosted by Lux’s RunwayML and talked about the state of the art in generative film.
Chatphishing, veracity and “two years of chaos and a reset” — Josh Wolfe and I discuss AI and security, specifically the tactic that Josh dubbed “chatphishing”: using AI bots to defraud people by impersonating their loved ones or famous figures.
“Smell can be art, and it also can be science”: AI/ML and digital olfaction — Smell is just as important to AI’s future as text, audio and video. That’s where Lux company Osmo comes in. Founded by Alex Wiltschko, Osmo is building AI models of scent that can help transform industries as disparate as flavor and fragrance as well as mosquito repellent.
In addition to that robust series, we had several excellent episodes that weren’t about AI (shocking!). Four favorites:
Why quitters are heroes with “Quit” author Annie Duke — Annie Duke is amazing, thoughtful, and translates pathbreaking research on the psychology of decision-making into everyday life. I found this to be a profoundly interesting discussion not on when to keep going, but on when we should stop.
The Science of Survival: Adapting Human Life for Other Planets — Christopher Mason is a professor with a bold view: we need to genetically engineer humans to be adaptable to outer space if we are ever going to live off-planet. This is our most speculative show of the year, but it was a fun one.
AI was a recurring theme this year, as were international relations and the rise (and, more specifically, fall) of the United Kingdom and Canada. But if there was one thread tying “Securities” together this year, it was employment, jobs and the meaning of work. From the strikes in Hollywood and the rise of generative AI to tighter immigration restrictions across the developed world and the increasingly nihilistic work life of most jobs, we sit at a critical juncture in how work should be valorized within our identities as people and countries.
My five favorite “Securities” columns from the past year:
“Existential Engineer” — my favorite piece of the year departs from the blind optimism of most of the tech industry to pursue a more deliberate path toward an existentialism of engineering, and why building things matters to us not just materially, but spiritually as well.
“Garrulous Guerrilla” — the essential thesis on AI. I argue that the small fraction of creatives who do truly original work will survive generative AI, but that very few creatives are equipped to do original work. The implication is that millions of people will lose their jobs in the years ahead.
“Professional Prerogatives” — the jobs that will be protected from AI are those professions that hold the power to stop it cold. First up on that list will be doctors, who have already rebranded artificial intelligence as “augmented intelligence” in their pursuit of human autonomy. They will succeed, but other professions with weaker organizing skills will likely lose.
“Brainwash Departures” — governments around the world are putting in place more and more restrictions on workers with expertise, essentially arguing that the thoughts of some workers are so important that they are a national resource that must be secured. It’s a chilling new pattern, and one that should be aggressively fought against.
“Striking Employment” — the superstar effect is decently studied in economics, but its reach has expanded to many more labor markets and even to entire industries. The best are taking a greater share of the returns, and that’s transforming the economics for everybody else.
I’ll highlight two more pieces from our guest writers this year. Our summer associate Ken Bui’s best piece was “IP, IP, Betray,” and our freelance writer Michael Magnani did a great piece on international organizations in “Loose BRICS.” Thank you both for joining “Securities” this year.
Lux Recommends
Grace attended the epicenter of AI that is NeurIPS last week in New Orleans, where our portfolio company Together’s academic partner Dan Fu won best paper for FlashFFTConv alongside his co-authors Hermann Kumbong and others. Check out the paper as well as his more accessible tweet thread. “We show that FlashFFTConv improves quality under a fixed compute budget, enables longer-sequence models, and improves the efficiency of long convolutions.”
Alex Nguyen recommends the Apple TV+ series Drops of God, which pits the daughter of a famed oenologist against a competitor for the inheritance of tens of thousands of bottles of rarefied wine. It’s a French adaptation of a Japanese manga series, demonstrating the globalized nature of both wine and media.
I found Carolyn Dever’s brooding, dark essay “How to Lose a Library” to be prophetic and philosophical. It’s about the cyberattack that has completely knocked out the British Library for weeks now. “How ironic that the most quaintly analog form of research possible, using physical books in a physical library, has been devastated by the hijacking of a digital system. I am experiencing this irony as especially bitter this morning, having arrived at desk 1086 with my list of tasks, hoping against hope that the crisis had resolved. It hadn’t. I hope it will someday soon.”
Shaq Vayda enjoyed Siddhartha Mukherjee’s latest piece in The New Yorker on “All the Carcinogens We Cannot See.” “This is a chilling duality of cancer: each individual cancer comes from a single cell, and yet each cancer contains thousands of clones evolving in time and space. Treating or curing cancer involves tackling this incredible degree of genetic diversity. It’s a clone war.”
Finally, our scientist-in-residence Sam Arbesman has a few fun pieces to note: an interesting fact about a Wikipedia article, why the oldest people are not as old as we think, and the age-old question about the movie Home Alone — is the McCallister family rich? The New York Times investigates.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s Law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase parameter counts and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is being developed now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication and lowering operational costs for energy and heat dissipation. That miracle of performance runs in reverse in AI today. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have packed in more and more parameters, each of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
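For intuition on that scaling, here is a rough back-of-envelope sketch in Python. It leans on the widely cited approximation that training takes about six FLOPs per parameter per training token; the compute price and the token counts below are illustrative assumptions, not reported figures.

```python
# Back-of-envelope training-cost sketch. Uses the widely cited
# approximation: total training compute ~= 6 * params * tokens (FLOPs).
# The compute price and token counts are illustrative assumptions.

def training_cost_usd(params: float, tokens: float,
                      usd_per_pfs_day: float = 150.0) -> float:
    """Estimate training cost from parameter and token counts."""
    total_flops = 6 * params * tokens
    pfs_days = total_flops / (1e15 * 86_400)  # petaflop/s-days
    return pfs_days * usd_per_pfs_day

models = [
    ("1M-param toy model",    1e6,    2e7),     # Raspberry Pi territory
    ("7B-param open model",   7e9,    1e12),
    ("540B-param PaLM scale", 5.4e11, 7.8e11),  # PaLM's reported tokens
]
for name, params, tokens in models:
    print(f"{name:>22}: ~${training_cost_usd(params, tokens):,.0f}")
```

Even this crude arithmetic reproduces the shape of the claims above: the toy model costs effectively nothing, the mid-size open model lands in the tens of thousands of dollars, and the PaLM-scale run lands in the millions.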
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls the “Megahertz Myth” campaign that Steve Jobs marketed to persuade the public that headline megahertz numbers weren’t the right way to judge the performance of a personal computer. Performance in most fields is a complicated thing to judge, and just adding more inputs doesn’t necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
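One way to see why equal performance keeps getting cheaper is smarter allocation of a fixed compute budget. The sketch below illustrates the intuition popularized by DeepMind’s Chinchilla result: for a fixed budget, model quality is better served by balancing parameter count against training tokens (roughly 20 tokens per parameter, per that paper’s estimate) than by maximizing parameters alone. The 20x ratio is an approximation from the literature, not an exact law.

```python
import math

# Compute-optimal sizing sketch, after the Chinchilla heuristic:
# given budget C ~= 6 * N * D FLOPs and a target ratio D ~= 20 * N,
# solve for parameter count N and token count D.

def compute_optimal(flops_budget: float, tokens_per_param: float = 20.0):
    n_params = math.sqrt(flops_budget / (6 * tokens_per_param))
    return n_params, tokens_per_param * n_params

for budget in (1e21, 1e23, 1e25):
    n, d = compute_optimal(budget)
    print(f"{budget:.0e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")
```

Plugging in Chinchilla’s own budget of roughly 5.8e23 FLOPs returns about 70 billion parameters and 1.4 trillion tokens, which is close to how that model was actually trained.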
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI), Google and others that have the means and teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is pursuing both approaches in parallel. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading, as it typically does, to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and, in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits the way American companies can. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial-policy handbook and just dumped chips on the market, as China has done for years in industries from solar panels to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals other than economic growth for restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of China’s autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep expanding through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop: the planet’s rising temperatures, particularly in South Asia, increase demand for climate-resilience tools like air conditioning and climate-adapted housing, which in turn drive further climate change, ad infinitum.
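To see how quickly such a loop compounds, here is a toy simulation. It is emphatically not a climate model: every coefficient is a made-up assumption, chosen only to show the reinforcing shape of the feedback.

```python
# Toy reinforcing-feedback sketch: warming -> more AC demand ->
# more electricity and refrigerant emissions -> more warming.
# Every coefficient below is invented purely for illustration.

warming_c = 1.2    # warming above baseline, degrees C (assumed)
ac_units = 2.0e9   # installed air conditioners worldwide (assumed)

for year in range(2024, 2041):
    ac_units += 0.08e9 * warming_c   # hotter years sell more units
    warming_c += 4e-12 * ac_units    # more units add to emissions
    if year % 4 == 0:
        print(f"{year}: {ac_units / 1e9:.2f}B units, +{warming_c:.2f} C")
```

Because each variable feeds the other, both curves bend upward together instead of settling, which is the signature of the loop described above.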
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a hard science fiction novel about first contact that made quite a splash when it was published back in 2006.
Also worth reading: the story of Mohammed bin Rashid Al Maktoum, the ruler of Dubai, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”