A beach in Hualien, Taiwan. Photo by Danny Crichton.
I’m back from two weeks in Taipei, but no column this week as I catch up. Lots of recommendations from 32 hours of flying though.
Podcast: How Applied Intuition used the Valley’s hardest lessons to upgrade automotive with autonomy
Qasar Younis and Peter Ludwig built Applied Intuition differently from most other startups. At a time of profligate spending at the peak of the tech bubble, they kept expenses low — and the company cash-flow positive for several years now. When every other company was moving toward remote work or a hybrid setup, they doubled down on the in-person, five-days-per-week office (while continuing a no-shoes philosophy). And when it comes to culture, they don’t just post their corporate values on a wall, but encode them right into the very software that runs the company.
The results? Applied reached a milestone $6 billion valuation earlier this year and announced a strategic partnership with automaker Porsche. It’s a moment of success years and even decades in the making, with both Qasar and Peter growing up amid the milieu of America’s auto capital, Detroit. Yet it wasn’t just friends and family working in the auto industry that led them to invent the future of the car, but also a willingness to learn from Silicon Valley’s most thoughtful startup growth practices.
Qasar and Peter join me and Bilal Zuberi for a conversation that weaves between automotive and autonomy while exploring the key decisions that founders must make when building a startup. We talk about the pressure of capitalism on company execution, using software to manage a growing organization, why Google exported so much talent in the early 2010s, how to protect engineering productivity with a customer-centric culture, how to construct a useful board of directors, and finally, why markets just “whomp” any other factor of success for entrepreneurs.
Orthogonal Bet Podcast: A technology vibe shift from utopian Star Trek to absurdist Douglas Adams?
This is the inaugural episode of an ongoing mini-series for the Riskgaming podcast we’re dubbing the Orthogonal Bet. Organized by our scientist-in-residence Sam Arbesman, the goal is to take a step back from the daily machinations I generally cover on the podcast to look at what Sam describes as “…the interesting, the strange, and the weird. Ideas and topics that ignite our curiosity are worthy of our attention, because they might lead to advances and insights that we can’t anticipate.”
To that end, today our guest is Matt Webb, a virtuoso tinkerer and creative whose experiments with interaction design and technology have led to such apps as the Galaxy Compass (an app that features an arrow pointing to the center of the universe) and Poem/1, a hardware clock that offers a rhyming poem devised by AI. He’s also a regular essayist on his blog Interconnected.
We latched onto Matt’s recent essay about a vibe shift that’s underway in the tech world, from the utopian model of progress presented in Star Trek to the absurd whimsy of Douglas Adams and The Hitchhiker’s Guide to the Galaxy. Along the way, we also discuss Neal Stephenson, the genre known as “design fiction,” Stafford Beer and management cybernetics, the 90s sci-fi show Wild Palms, and how artificial intelligence is adding depth to the work of the already multitalented.
Orange juice prices have doubled over the past twelve months, and it’s a fascinating case study of the kinds of complex and combinatorial risks that make riskgaming so interesting. Per a new article in The Financial Times: “Today’s supply squeeze dates back 20 years to when citrus greening — an incurable disease spread by sap-sucking psyllid insects that makes the tree’s fruit bitter before killing it altogether — was first detected in the US.” But it’s not just disease — the orange crisis is also about climate change, evolving consumer preferences, and poor farming techniques unable to meet global demand. The industry’s solution? Don’t make orange juice from oranges anymore.
Having just spent two weeks in Taipei, I witnessed the mass protests underway across the city in opposition to controversial legislation that was ultimately passed by the Legislative Yuan. The New York Times has a good overview of the stakes, which gets at Taiwan’s democratic dilemma: for the first time, the country has a president with a minority in the legislature, promising years of discord and sclerosis until new elections are held. It’s paralysis at the worst possible time.
Sam recommends a new research paper from a trio of writers, “Is it getting harder to make a hit? Evidence from 65 years of US music chart history.” “Here we show that the dynamics of the Billboard Hot 100 chart have changed significantly since the chart's founding in 1958, and in particular in the past 15 years. Whereas most songs spend less time on the chart now than songs did in the past, we show that top-1 songs have tripled their chart lifetime since the 1960s, the highest-ranked songs maintain their positions for far longer than previously, and the lowest-ranked songs are replaced more frequently than ever. At the same time, who occupies the chart has also changed over the years: In recent years, fewer new artists make it into the chart and more positions are occupied by established hit makers.”
A pair of articles on the book industry were enthralling to me. First, a major deep dive by Elle Griffin into the Penguin Random House antitrust lawsuit data, showing the near impossibility of publishing a bestseller in the United States. “The publishing houses may live to see another day, but I don’t think their model is long for this world. Unless you are a celebrity or franchise author, the publishing model won’t provide a whole lot more than a tiny advance and a dozen readers. If you are a celebrity, you’ll still have a much bigger reach on Instagram than you will with your book!” And then a shorter corrective from Lincoln Michel on why the data from the trial can be over-analyzed. “Print book sales have not been decimated by digital sales/streaming. That’s right, despite the introduction of ebooks, various Netflix for books services, and endless cries about the death of publishing…. overall print sales have held pretty steady. And when we add in ebook sales, that means overall book sales are actually increasing.”
Sam got a pre-release copy of John Strausbaugh’s new book “The Wrong Stuff: How the Soviet Space Program Crashed and Burned.” “These achievements were amazing, yes, but they were also PR victories as much as scientific ones. The world saw a Potemkin spaceport; the internal facts were much sloppier, less impressive, more dysfunctional.”
In a major new investigative piece by Matthew Karnitschnig, we are finally starting to stitch together how Jan Marsalek, the fugitive former COO of Germany’s high-flying fintech Wirecard, is now assumed by Western intelligence sources to have been one of the most influential agents of Russian president Vladimir Putin’s intelligence services. His goal? Undermining Austrian institutions. “If Moscow really was behind the effort to take over Austria’s spy service, as Western intelligence officials claim, it’s clear that the Russians regard the country as an important prize and are willing to go to great lengths to influence its politics.”
Ina Deljkic recommends an interesting medical discovery of what appears to be the first removal of a brain tumor by the Egyptians thousands of years ago. “But while this evidence from antiquity was well studied during the 19th and 20th centuries, 21st century technologies, such as those used in the new study, are revealing previously unknown details about ancient Egypt’s medical arts, [Dr. Ibrahem Badr] added. ‘The research provides a new and solid direction for reevaluating the history of medicine and pathology among ancient Egyptians,’ he said. The study authors’ methods ‘transition their results from the realm of uncertainty and archaeological possibilities to the realm of scientific and medical certainty.’”
Sam recommends Anthony Lane’s piece in The New Yorker, “Can You Read a Book in a Quarter of an Hour?,” a meditation on the popular book-summarization platform Blinkist. “The most potent enemy of reading, it goes without saying, is the small, flat box that you carry in your pocket. In terms of addictive properties, it might as well be stuffed with meth. There’s no point in grinding through a whole book—a chewy bunch of words arranged into a narrative or, heaven preserve us, an argument—when you can pick up your iPhone, touch the Times app, skip the news and commentary, head straight to Wordle, and give yourself an instant hit of euphoria and pride by taking just three guesses to reach a triumphant guano.”
Finally, Andy Matuschak has a great lecture with a well-integrated set of captions on the subject of “How Might We Learn?” “These synthesized prompts can vary each time they’re asked, so that Sam gets practice accessing the same idea from different angles. The prompts get deeper and more complex over time, as Sam gets more confident with the material. Notice also that this question isn’t so abstract: it’s really about applying what Sam’s learned, in a bite-sized form factor.”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s law”): each new generation is more and more expensive to train as researchers exponentially increase parameter counts and overall model complexity. Sam Altman of OpenAI has said that training GPT-4 cost over $100 million, and some AI computational specialists believe the first $1 billion model is either being developed right now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. AI today inverts that miracle of performance. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have packed in more and more parameters, each one of which demands more computation both during training and in usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
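To make the scaling intuition concrete, here is a minimal back-of-envelope sketch, assuming the widely cited heuristic that training compute is roughly 6 × parameters × training tokens, along with illustrative (not sourced) figures for GPU throughput and cloud pricing:

```python
# Back-of-envelope training-cost estimate using the common heuristic
# that training FLOPs ~= 6 * parameters * training tokens. The GPU
# throughput and hourly price below are illustrative assumptions.

def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu_per_sec: float = 150e12,  # assumed sustained A100-class throughput
                      usd_per_gpu_hour: float = 2.0) -> float:  # assumed cloud price
    """Estimate raw compute cost from model size and token count."""
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / flops_per_gpu_per_sec / 3600
    return gpu_hours * usd_per_gpu_hour

# A toy 1M-parameter model vs. a PaLM-scale 540B-parameter model,
# each trained on ~20 tokens per parameter (a Chinchilla-style ratio).
for params in (1e6, 540e9):
    tokens = 20 * params
    print(f"{params:.0e} params -> ~${training_cost_usd(params, tokens):,.2f}")
```

Under these invented assumptions, the 540-billion-parameter run lands around $130 million of raw compute, in the same ballpark as Altman’s figure, while the toy model costs effectively nothing; real budgets run higher still once you add failed runs, experimentation, and idle hardware.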
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to persuade the public that headline megahertz numbers weren’t the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn’t necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes for bundling those techniques together), developed best practices for applying reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new, equally performant models might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI) and Google, which have the money and teams to lavish on the problem. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, and the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches to AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent, on balance, is heading where it typically does: to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and, in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits the way American companies can. That leaves China with the only other path available: following the parallel course of improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
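The stakes of that efficiency race are easy to see in the arithmetic of on-device deployment. Here is a minimal sketch of the memory math, assuming only the standard bytes-per-parameter of common numeric formats; the model sizes and the phone RAM budget are illustrative:

```python
# Rough memory footprint of a model's weights at different precisions.
# Bytes-per-parameter values are standard for these formats; the model
# sizes and the 8 GB phone budget are illustrative assumptions.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(num_params: float, precision: str) -> float:
    """Gigabytes needed just to hold the weights at a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for num_params, label in [(7e9, "7B"), (70e9, "70B"), (540e9, "540B")]:
    row = ", ".join(f"{p}: {weights_gb(num_params, p):.1f} GB" for p in BYTES_PER_PARAM)
    print(f"{label}: {row}")

# A 7B model quantized to 4 bits needs ~3.5 GB, plausible on a phone
# with 8 GB of RAM; a 540B model needs data-center hardware regardless.
```

The point of the sketch: efficiency work like quantization and distillation, not raw scale, is what moves a model from the data center into a pocket.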
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can: models focused on efficiency. That means its resources are arrayed toward building the platforms that capture end-user applications, the exact opposite of American policymakers’ goal. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, and that intensified creativity is exactly what typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, just as China has done for years in everything from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer, even under Beijing’s direction, could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of China’s autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of them wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade. The market is projected to before 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
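As a toy illustration of that loop (my own sketch; every coefficient is invented for illustration, not drawn from any climate model), a few lines of code capture the compounding dynamic:

```python
# Cartoon of the cooling feedback loop: warming -> more AC demand ->
# more emissions and waste heat -> more warming. All coefficients are
# invented for illustration; this is not a climate model.

temp_anomaly = 1.2   # degrees C above baseline (illustrative)
ac_units = 2.0       # billions of installed units (illustrative)

for year in range(2024, 2041):
    demand_growth = 0.05 * temp_anomaly      # hotter years sell more ACs
    ac_units *= 1 + demand_growth
    temp_anomaly += 0.002 * ac_units         # installed base feeds warming
    if year % 4 == 0:
        print(f"{year}: {ac_units:.2f}B units, +{temp_anomaly:.2f} C")
```

Even with tiny coupling constants, both curves only bend upward, which is the structural worry: each variable is the other’s accelerant.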
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech, and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first-contact hard science fiction novel that made quite a splash when it was published back in 2006.
Another recommendation: a harrowing account of the ruler of Dubai, Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”