Exits are the keystone of a robust venture economy. Ambitious entrepreneurs, helpful accelerators and fast-funding venture capitalists are all necessary inputs of course, but what attracts all this talent and capital in the first place is the opportunity to build something incredible and actually make money (and preferably a lot of it) while doing so. Startups are a pipeline from conception and product-market fit to scaling and exit, and it’s that final stage which creates the suction pulling each company from the beginning of the pipeline to the end.
Many cities — and even nations and entire continents — get this mind-bogglingly backwards. Governments often invest in incubators or community workspaces to network founders and investors together. That’s well-intentioned, but entirely the wrong focus. What can instantly catalyze this activity is the alignment provided by massive potential future returns at the lowest possible risk.
The United States and China have been the strongest global markets for startup innovation over the past two decades. Both have offered extraordinary returns on entrepreneurial effort and ambitious risk capital: the two countries are the only ones in the world which have ultimately crowned venture-backed tech giants worth hundreds of billions of dollars. As one generation of entrepreneurs and executives builds its wealth, it has strong incentives to invest in the next generation of founders looking for their own returns.
In a peculiar harmony though, both countries are implementing policies to shrink potential exits, diminishing the rewards of entrepreneurship and ultimately thinning the potential innovation their dynamic markets create.
China’s policy of “common prosperity” has been covered extensively, and if you haven’t followed all the ups and downs, Ryan Hass at the Brookings Institution published a succinct overview last September. In short, the central government is taking ongoing action to redistribute the returns from the country’s leading tech companies like Alibaba and Tencent to the broader population, ostensibly to reduce inequality but also conveniently squelching the growing power of tech companies relative to the Communist Party of China.
China is not alone in harming its local startup ecosystem though. Under the rubric of national security, the United States is pursuing policies that will diminish exit values and will damage the long-term competitiveness of its own technology industry.
The damage started with a 2018 law called FIRRMA, which expanded and strengthened the U.S. government’s oversight of foreign investors buying equity in American companies. The law’s intention is to prevent foreign adversaries (namely China) from purchasing American companies (in whole or in part) and acquiring their technical know-how in critical technologies, such as semiconductors and quantum computing. Given the great power competition between the U.S. and China, it’s understandable that America wants to prevent its best innovations from falling into the hands of a hostile power.
As always with such laws though, its limits are blurry. SoftBank and its Vision Fund, which for a period was the most prodigious investor in global startups, were tangled up in the regulations, since SoftBank is a Japanese-headquartered conglomerate and the Vision Fund also drew heavily from Middle Eastern sovereign wealth. Japan may well be a mutual defense treaty partner, but from the U.S. regulatory perspective, any foreign country from Canada to North Korea could potentially be a threat to American industry.
FIRRMA’s direct effects then are to decrease the supply of capital into America’s critical technology industries and also, by definition, reduce their exit values by partially restricting the number of bidders for a company’s equity. In other words, the law limits the potential upside of any startup venture in the industries with the most technical risk, precisely the companies where venture dollars are often the hardest to raise.
Right now on Capitol Hill, Congress is debating an even tougher piece of legislation that would go beyond restricting foreign investment into the United States to also screen outbound engagement. From the Wall Street Journal article:
The revised screening measure would enable the federal government to restrict certain future transactions in any “country of concern,” defined as “foreign adversary” countries including China, according to the new text. The provisions would apply to greenfield investments, such as the construction of new plants, to deals such as joint ventures that involve the transfer of knowledge or intellectual property and to capital contributions including venture capital and private equity transactions, the text says.
What are some of the sectors covered right now? The Journal continues:
Those sectors and technologies include semiconductors, large-capacity batteries, pharmaceuticals, rare-earth elements, biotechnology, artificial intelligence, quantum computing, hypersonics, financial technologies and autonomous systems such as robots and undersea drones.
So essentially everything but 15-minute delivery apps (or maybe those fall under hypersonics?).
China, of course, is a serious threat to U.S. global hegemony. Its economy remains robust in spite of massive Covid-19 lockdowns, its consumer market has expansive room to grow, and it has an incredibly sophisticated and competitive industrial policy to build capabilities in the most important sectors of future technology. America needs to compete, and compete very effectively, if it wants to hold on to its leadership positions.
But consider the effects the law would have on American startups. More sources of venture capital would be restricted. Potential product sales could be blocked, diminishing revenue growth. Exits are further complicated, particularly given that the law could potentially cover any company that is partially owned by Chinese nationals or sources. At every stage of the venture pipeline, growth and value will be clipped.
That diminishment of potential will have ramifications for both VCs and founders. As more and more semiconductor and quantum companies struggle against American bureaucracy to grow and exit, VCs — already stereotypically wary of investing in these sorts of capital intensive verticals — would be wise to consider whether more unregulated innovation hotspots like crypto are a better bet. Founders similarly will look at the regulatory crackdown and maneuver to where they can build freely. Why deal with the uncertainty and red tape of the Treasury Department, particularly when building a startup takes a decade or more?
Congress is facing incredible pressure from a range of business groups to narrow the scope of the bill, whether it be the countries or industries or the types of transactions covered. Yet, the general tenor of Washington these days is clear: more restrictions and more control.
That’s not surprising given China’s looming and lethal threat, but frankly, it’s also decades too late. At this point, in 2022, the idea that China’s rise can be hindered by export controls and government reviews is highly suspect. Such restrictions would have been incredibly effective in the 1990s and 2000s when China’s nascent internet market and tech industry were just getting off the ground. That window of opportunity has long since closed. Now, the only question should be how to pursue policies that strengthen America’s startup ecosystem relative to all other countries, and ensure that we have the most dynamic, robust, and lucrative innovation economy in the world. This latest bill in Congress is a step in the wrong direction.
Lazy tech analogies
I want to briefly follow up from last week’s “Securities” newsletter on “Marginal Stupidity” (which we also discussed in a new podcast episode). The world is incredibly complicated, and humans necessarily use heuristics and analogies to process that complexity into simpler forms. These mental shortcuts are never perfect, but they should broadly summarize the complexity they represent while affording their user a sense of their limitations.
In that vein, I want to call attention to two lazy tech analogies that I’ve seen lately as examples of the kind of impoverished analogical thinking that the industry needs to actively avoid.
First, there’s the lazy analogy of “sentience.” Philosophers can rightfully debate that topic forever, but an advanced AI language model that is very good at stringing together reasonable responses to conversational input is absolutely not sentient. Borrowing from software programming theory, what we have here is what might be dubbed the logical mistake of duck typing (“If it walks like a duck and it quacks like a duck, then it must be a duck”). Appearing sentient, and actually being sentient, are two different things, regardless of what the Turing test might argue.
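For readers who haven’t bumped into the term, here’s a minimal sketch of duck typing in Python — the class and function names are invented purely for illustration. The point is that a behavioral test only ever checks the interface, never the underlying thing:

```python
class Duck:
    """An actual duck: it both is and behaves like a duck."""
    def quack(self):
        return "quack"

class Chatbot:
    """Not a duck at all -- it merely imitates the interface."""
    def quack(self):
        return "quack"

def sounds_like_a_duck(thing):
    # Duck typing: we only probe behavior, never inspect the type itself.
    return thing.quack() == "quack"

# Both pass the behavioral test...
print(sounds_like_a_duck(Duck()))     # True
print(sounds_like_a_duck(Chatbot()))  # True

# ...but only one of them is actually a Duck.
print(isinstance(Chatbot(), Duck))    # False
```

That last line is the whole critique in miniature: passing the behavioral test (a Turing test, say) tells you nothing about what the thing underneath actually is.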
Second, and this comes via Shaq Vayda, is the continuing usage of the word “cloud” to describe a range of startups in wetlab biotech. The idea of “wetlab clouds” is to allow software engineers to programmatically operate a bio lab, using API calls to string experimental steps together like pipetting, northern blotting a sample, or conducting PCR. Writing code would be easier, faster and more repeatable, drastically increasing the efficiency of the biolab.
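To make the pitch concrete, here’s roughly what that programming model looks like. This is a hypothetical sketch — the `WetlabClient` class, its method names, and the URL are all invented for illustration and don’t correspond to any real vendor’s API:

```python
class WetlabClient:
    """Hypothetical client for a 'wetlab cloud': experiments as code."""

    def __init__(self, lab_url):
        self.lab_url = lab_url
        self.steps = []

    def pipette(self, source, dest, volume_ul):
        # Queue a liquid-handling step rather than executing it locally.
        self.steps.append(("pipette", source, dest, volume_ul))
        return self

    def run_pcr(self, sample, cycles=30):
        self.steps.append(("pcr", sample, cycles))
        return self

    def submit(self):
        # A real system would POST this protocol to remote lab hardware;
        # here we just return a summary of the queued protocol.
        return {"steps": len(self.steps), "status": "queued"}


protocol = (
    WetlabClient("https://lab.example.com")
    .pipette("reagent_A", "well_1", volume_ul=50)
    .run_pcr("well_1", cycles=35)
    .submit()
)
print(protocol)  # {'steps': 2, 'status': 'queued'}
```

Chained calls like these read exactly like infrastructure-as-code, which is precisely why the “cloud” metaphor is so seductive — and, as the next paragraph argues, why it hides so much.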
The wetlab cloud is a nice metaphor, but one that occludes too many complexities. Biological scientists do an incredible amount of work to avoid cross-contamination, to calibrate their instruments, and to adjust their experiments on the fly to account for the regular variabilities in bio research. Those tacit activities are not easily reproducible, and certainly not in the effortless sense of a “cloud.” Wetlab automation is an important category of investment, but analogizing it to software is wrong.
These two examples are just some of the litany of bad analogies roaming out there, particularly in crypto. These are not useful mental shortcuts, but rather dangerous metaphors that can actively blind us to some of the most challenging facets of these complicated industries. We all need to be vigilant of these elisions and dive deeper when we encounter them.
Lux Recommends
Shaq recommends Emily Mullin’s discussion in Wired on new research focused on epigenetic editing. CRISPR allows scientists to precisely snip strands of DNA, but epigenetic editing allows them to leave the DNA in place while modulating its expression, potentially opening up a new domain of therapeutics.
Grace Isford calls attention to Ethereum co-founder Vitalik Buterin’s new piece on the non-financial usages of blockchain. Meanwhile, I’d bring attention to a newish working paper published by him along with E. Glen Weyl of Microsoft Research and Puja Ohlhaver of Flashbots on a blueprint for using Web3 to decentralize society (shortened to “DeSoc”).
Shaq also points out this Nature interview by Chris Woolston on the decade-long rise of the research software engineer, or coders who build out the tooling and infrastructure for cutting-edge laboratory work.
In a useful corrective, Lada Nuzhna writes on the overstated potential for deep learning to change biotech and biochem. “I have been periodically noticing an impression that for ages biology and chemistry have been done with rocks and sticks in caves until deep learning was brought to wet labs by computer scientists, akin to Prometheus bringing fire to mankind.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped “Eroom’s Law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase the number of parameters used and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe the first $1 billion model is in development now or soon will be.
As semiconductor chips rise in complexity, costs come down because transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. AI today inverts that miracle of performance. To increase the complexity (and therefore hopefully quality) of an AI model, researchers have attempted to pack in more and more parameters, each one of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM with 540 billion parameters requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
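A quick back-of-envelope sketch shows why the gap is so enormous. It uses the common rule of thumb of roughly 6 training FLOPs per parameter per token; the token counts below are illustrative assumptions chosen only to show the scaling, not measured figures:

```python
def training_flops(params, tokens):
    # Rule-of-thumb estimate: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

# Assumed (illustrative) training-set sizes for each model scale.
small = training_flops(1e6, 1e9)      # a 1M-parameter toy model
large = training_flops(540e9, 780e9)  # a PaLM-scale parameter count

ratio = large / small
print(f"{ratio:.1e}")  # roughly 4e8 -- eight-plus orders of magnitude apart
```

Even with generous assumptions, the compute bill grows multiplicatively with both parameters and data, which is why the toy model costs pocket change and the frontier model costs millions.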
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to the big tech companies like Microsoft (through OpenAI), Google and others who have the means and teams to lavish on it. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, that means the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and just dumped chips on the market, just as China has done for years from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead we are attempting to decouple from the second largest chips market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were of course other goals outside of economic growth for restricting China’s access to chips. America is deeply concerned about the country’s AI integration into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep growing through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first-contact, hard science fiction novel that made quite a splash when it was published back in 2006.
A harrowing read on Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”