Announcement: Our new “Securities” podcast
I’m thrilled to announce that we’ve launched — in beta, if you will — our new “Securities” podcast. The series will cover the same topics as we do in the newsletter — science, technology, finance and the human condition. In our first week here as we quite literally unpack boxes of equipment, we published two episodes:
Episode 0 explained the thesis of the show and the crises and opportunities that tech will need to target in the 2020s. We also had Deena Shakir on to discuss Lux-backed Alife Health, an AI-driven in-vitro fertilization startup that just raised a Series A.
In Episode 1, I talked with Scott Bade, my former colleague at TechCrunch who is the special series editor of the TechCrunch Global Affairs Project, on topics ranging from the passing of Madeleine Albright and the divide between Silicon Valley and U.S. foreign policy to the need for a U.S. technology doctrine and the future of U.S.-China relations.
Listen in, subscribe, and tell me and our new producer Chris Gates how we’re doing!
The news media has extensively covered the cyberhacking gang Lapsus$ this week, following the group’s successful data extraction hack of Okta, the single sign-on platform for corporate apps, as well as hacks at Microsoft and an extensive extortion scheme against Nvidia. U.K. police have now arrested seven members of the gang, disclosing that the hackers ranged in age from 16 to 21. The head of the group was reported by the BBC and Bloomberg to be the youngest in that range. From the BBC:
The boy’s father told the BBC: “I had never heard about any of this until recently. He’s never talked about any hacking, but he is very good on computers and spends a lot of time on the computer. I always thought he was playing games.” “We’re going to try to stop him from going on computers.”
That always seems to be society’s answer to talent, and particularly technical talent — “We’re going to try to stop him.”
As I was thinking about the ages of these voguish hackers, I was reminded of the pipeline challenges for research scientists looking to build their careers, a topic that our scientist-in-residence Sam Arbesman has focused on. Mike Lauer, writing for Nexus, the National Institutes of Health’s grant news service, compiled data showing the age at which an American research scientist receives their first “R01” grant, the NIH’s workhorse vehicle for funding projects. The median age at first grant has risen from 38 in 1995 to 42 in 2020, with men and women following almost identical trendlines.
In other words, scientists need to wait about two decades after their bachelor’s degree on average to receive funding to conduct independent work (otherwise known as “doing science” or “doing their job”).
Not even the most exceptional people within this already talented group fare much better. In 1995, the youngest tenth of male scientists received their first grant by age 33; by 2020, that threshold had aged to 36, and again, a similar pattern holds for women. If you’re brilliant and ambitious, you might shave a few years off the pipeline march, but not much more.
We are seeing many institutions age. As just one stylized example, Congress is the oldest it has been in 20 years: the average congressman is now 58 years old, up from 54 two decades ago, while senators average 64, up from 59. We’re seeing a similar pattern in business. The age of newly hired CEOs of S&P 500 and Fortune 500 companies rose from 46 in 2005 to 57 in 2019, dipping only briefly during the recession, according to an analysis by the executive search firm Crist Kolder Associates. That’s a jump of 11 years of age in just 15 years. Put another way, essentially the same cohort that was being selected for executive roles in 2005 was still being selected more than a decade later.
The causes of this institutional aging are diverse — and hard to provably pin down — but my view is that it comes down to competition and conservatism. Competition for leadership roles across much of the knowledge economy has become more acute. The number of NIH-backed scientists has been flat for more than two decades, even as more people become scientists. The number of S&P 500 CEO roles is essentially fixed (co-CEO arrangements occasionally crop up, so it isn’t always an even 500), and the size of Congress has been fixed for some time now. Population growth alone would mean more competition; layer in greater numbers of college graduates and a deeper pipeline of talent, and there are ever more people competing for a flat or declining number of executive positions.
That competition leads right into the conservatism of modern institutions. When an institution has so many options to choose from, it’s hard to boldly choose an unconventional candidate when a more experienced candidate is available, a pattern that trickles down from elite executive positions all the way to entry-level roles.
It’s almost as if every institution is saying “We’re going to try to stop him” (or her). We’re going to protect ourselves from any brilliant ideas or new insights. We don’t want that frenetic, bountiful, naive energy that might force change, and we don’t want to hire someone ornery enough to point out that the Kafkaesque organizations we have built can be shredded apart by any bright teenager.
Returning to the leader of Lapsus$: here’s a teenager with an incredible talent for computers who has unfortunately used those skills for extortion, acquiring an alleged $14 million of ill-gotten goods. What could we have done as a society — across all of our institutions — to direct that incredible energy and talent to something productive and socially beneficial?
Understandably, youthful energy is directed to where it can find purchase — computers, hacking, crypto, startups and other entrepreneurial arenas where advancement depends far less on age and tenure than on talent. Much of the anti-institutionalism of crypto comes less from libertarianism or anti-statism, in my view, and much more plainly from the high walls these institutions erect against newcomers. If you can’t evolve banking until three decades into your career, but you can do so via a blockchain startup in three years, which option would you rightly pick?
One of the most persistent themes across “Securities” is that many of our institutions are in a weakened state. Democratic systems are under threat, research laboratories have become stale, American industrial companies are losing their edge. What’s most pernicious is that it’s precisely the group of people who could most renew these institutions who have the least access and agency to do so.
The pressure will only intensify. As I wrote in TechCrunch in late 2020, the no-code generation is arriving. Empowered by platforms like Roblox and Minecraft, kids growing up today are learning digital skills and coding at earlier ages and with greater facility than ever before. If I am optimistic about anything, it is that the next generation is going to have an incredible wealth of talent to use technology to change the world.
The question is where their enthusiasm and energy will be directed. On one side is Lapsus$ and extorting Nvidia to force the company to make its chips faster for blockchain hashing. The alternative is harnessing that breathtaking talent for remaking the frontiers of science: building fusion reactors, developing quantum computing, overhauling our understanding of genetics and medicine, and eviscerating our current nostrums with better science. Our institutions shouldn’t be dams trying to hold back the rising tide of smart ideas and smart people. Instead, we need to open the floodgates, and tap those raging rapids right through to the future.
Universal basic income
I was at Wharton at the University of Pennsylvania this past week to debate the merits of universal basic income, the idea that most welfare programs could be replaced with a single equal income grant to every citizen. I argued the “opposed” side, which, frankly, is the easier one given the number of points against UBI, from its gargantuan economic scale to the deleterious cultural changes it is assumed to trigger.
But I focused on one line of attack that doesn’t get enough attention: catastrophic risk. Our world is filled with disasters that can befall us. A sudden heart attack or medical crisis can quickly cost millions of dollars to treat in the United States. A hurricane or wildfire can instantly wipe out our entire wealth in the form of our home and property. A workplace injury or a mental health episode can suddenly mean we lose the ability to work and earn the income our families need.
Most anti-poverty programs (and arguably all of them) can be thought of as publicly funded insurance. An injury or a sudden medical emergency is unpredictable but generally rare, so we insure against these events with large-scale health insurance programs like Medicare and Medicaid, which together make up the largest share of welfare spending in the United States. Similarly, the National Flood Insurance Program offers subsidized coverage to homeowners stricken by floods.
The goal of most welfare isn’t to help all people all the time, but rather to substantially and rapidly help some people some of the time. It’s meant to reduce the chaos and bad luck of our complex society, and give everyone a more stable footing by which to independently manage their jobs and lives.
UBI doesn’t solve for catastrophe. Far from eliminating most welfare programs, it would leave everyone still needing a full set of catastrophic insurance policies, just as before. It’s a steady flow rate attempting to handle sudden surges — and that’s a recipe for disaster for everyone.
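The flow-versus-surge point can be made concrete with a toy simulation — my own sketch, with entirely invented numbers, not something from the debate. A flat monthly grant accumulates steadily, but a single rare catastrophe overwhelms it, while a pooled insurance premium carved out of the same flow absorbs the shock:

```python
# Toy model of "flow rate vs. surge": every number here is invented.
# Each person gets a flat monthly grant (the "flow"). Catastrophes are
# rare but enormous (the "surge"). Insurance pools the rare risk.
import random

random.seed(0)

MONTHS = 12 * 30         # a 30-year horizon
GRANT = 1_000            # flat monthly grant per person (UBI-style flow)
SHOCK_P = 0.001          # chance per person per month of a catastrophe
SHOCK_COST = 500_000     # e.g. an uninsured medical emergency

def lifetime_balance(insured: bool) -> float:
    """Net savings after 30 years if the whole grant is saved each month."""
    premium = SHOCK_P * SHOCK_COST * 1.1  # actuarially fair premium + 10% load
    balance = 0.0
    for _ in range(MONTHS):
        balance += GRANT - (premium if insured else 0.0)
        if random.random() < SHOCK_P and not insured:
            balance -= SHOCK_COST         # the surge lands on the individual
    return balance

uninsured = [lifetime_balance(False) for _ in range(1_000)]
print(f"insured, guaranteed:          ${lifetime_balance(True):,.0f}")
print(f"uninsured who end underwater: {sum(b < 0 for b in uninsured)} of 1,000")
```

Under these made-up parameters, the insured path ends modestly but safely positive, while a meaningful fraction of the uninsured end deeply negative after a single shock: the flat flow never covers the surge.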
Lux Recommends
Peter Hébert (via Guy Perelmuter) recommends this gorgeous interactive illustration from Reuters marking the 50th anniversary of the release of The Godfather, including how its principal photography came together and why it has had such a lasting influence on filmmakers.
Psychology researchers Nadja Heym and Alexander Sumich have made a splash in their field with work on what they dub “dark empaths” — people with personality disorders like psychopathy who also have a high degree of empathy, characteristics that seem contradictory. They discuss their work in The Conversation and how they are trying to revise our understanding of human psychology.
Cameron McCord recommends the Technology & Security Conference he is organizing in Boston with co-support from the HBS Aerospace and Aviation Club, the MIT Sloan Defense Technology Club, and Silicon Valley Defense Group. We’ll have Josh Wolfe and Bilal Zuberi there talking about the future of defense technology — join live in Boston on Saturday, April 23 or virtually.
I’m excited to read Helen Thompson’s new book Disorder: Hard Times in the 21st Century, published by Oxford University Press this week. In it, she focuses on systemic structural forces that are shaping world conflicts, and its timing couldn’t be more propitious given the war in Ukraine.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s law”): each new generation is more expensive to train than the last, as researchers exponentially increase parameter counts and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe the first $1 billion model is being developed now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor in fabrication and lowering operating costs for energy and heat dissipation. AI today inverts that miracle. To increase the complexity (and therefore, hopefully, the quality) of a model, researchers pack in more and more parameters, each of which demands more computation for both training and inference. A 1-million-parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, while Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
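To see why parameter count drives cost, here is a rough back-of-envelope sketch — my own illustration, not from the article — using the widely cited approximation that transformer training takes about 6 FLOPs per parameter per training token. The token counts and the dollars-per-FLOP figure are assumptions chosen only for order-of-magnitude flavor:

```python
# Back-of-envelope sketch of why parameter count drives training cost.
# Uses the common ~6 * params * tokens approximation for total transformer
# training FLOPs; the assumed price-performance is illustrative, not a quote.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def training_cost_usd(flops: float, flops_per_dollar: float = 4e17) -> float:
    """Convert compute to dollars at an assumed hardware price-performance."""
    return flops / flops_per_dollar

models = [
    ("1M-parameter toy model", 1e6, 1e9),       # tiny model, ~1B tokens
    ("PaLM-scale model (540B)", 540e9, 780e9),  # PaLM trained on ~780B tokens
]
for name, n_params, n_tokens in models:
    flops = training_flops(n_params, n_tokens)
    print(f"{name}: ~{flops:.1e} FLOPs, roughly ${training_cost_usd(flops):,.2f}")
```

At these assumed rates the toy model costs pennies while the PaLM-scale model lands in the millions of dollars — consistent with the rough orders of magnitude above.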
Admittedly, simply having more parameters isn’t a magic recipe for better end performance. One recalls Steve Jobs’s “Megahertz Myth” marketing, which tried to persuade the public that headline megahertz numbers weren’t the right way to judge a personal computer. Performance in most fields is a complicated thing to judge, and adding more inputs doesn’t necessarily translate into better output.
And indeed, an efficiency curve is underway in AI outside the leading-edge foundation models from OpenAI and Google. Over the past two years, researchers have discovered better training techniques (and recipes for bundling those techniques together), developed best practices for reinforcement learning from human feedback (RLHF), and curated better training data, improving model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI) and Google, which have the means and the teams to spend lavishly. But if the performance envelope reaches a significantly better dollar-to-quality ratio, the whole field opens up to startups and novel experiments, and the leverage of the big tech companies is much reduced.
The U.S. right now is parallelizing both approaches to AI. Big tech is hurling billions of dollars at the field, while startups explore and develop more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading where it typically does: to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, as China has done for years in everything from solar panels to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about AI integration into the Chinese military, and it wants to slow the evolution of autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
In six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions of people, and hundreds wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising — populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade. That’s good for public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a hard science fiction novel about first contact that made quite a splash when it was published back in 2006.
A look at Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold — so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”