On the “Securities” podcast this week, I posed an audacious question: “How much would you spend to transform the lives of every denizen of one of America’s great cities?” Or even a suburb or a small town? If you could improve the lives of every Angeleno for $100 billion, is that money worth spending? How do we even begin to consider such an expansive (and likely expensive) proposition?
Valuation is the lifeblood of capitalism, as it’s the precondition for free exchange. Improvements in our calculation of valuation can lead to exceptional economic growth, as once unthinkable transactions suddenly become feasible.
Simple bartering allows one to compare the relative value of two objects, but currency allowed us to abstract valuation for all objects simultaneously. As humanity started creating intangible assets, we needed better machinery for valuation to compensate. Many assets, like a share of stock without a dividend, have essentially no “real” value (you can’t sleep on it, and you can’t eat it, or so one hopes). To judge their valuation, we evaluate them relative to “comps” — comparing similar assets and assessing a sort of median price we believe other stock buyers believe the stock is worth, in an endless braid of beliefs.
These decisions still involve trade, but it gets even harder when we invest broadly in society. How much should we collectively pay to install safety barriers that save multiple lives per year? That requires assessing the value of a statistical life, actuarial tables, and a dismal accounting of dollars spent per life-year gained. Scaled up, we can do the same calculus with a new highway or train station: we can estimate future usage, increased fares and tax revenues, and try to quantitatively rationalize a decision.
Then there are megaprojects that are so large, they are impossible to fathom in the context of valuation. Consider the audacity of Georges-Eugène Haussmann, who transformed central Paris in the mid-to-late 1800s into the wondrous urban landscape we know today. It was a plan of exceptional vision (and of extraordinary destruction to the inhabitants caught up in it). Ultimately, it’s renowned as a masterpiece of urban design that is still studied with envy the world over.
How do we account for that transformation? We could calculate real estate values, the increased lifespans of Paris’s inhabitants (hard to causally identify given so many other coeval public health advances), the faster speed of movement across the city through grand new boulevards and the coming of the Paris Métro. But that doesn’t even begin to capture the aesthetic, architectural and cultural benefits of building up the extraordinary beauty and quality of France’s capital.
Economists never suggest that something is impossible to value, and indeed, one could imagine laboriously assigning a quantitative value to every feature and benefit I just suggested. Even as you begin to get a handle on those questions, though, there’s a question about time. Central to economics is the time value of money, so how do we account for the benefits of Haussmann’s accomplishment over the past 150 years and counting? Does all of that count, even as the original inhabitants who suffered and paid for these improvements have long since passed from the Earth?
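To make the time-value point concrete, here is a minimal sketch of standard discounting, assuming a flat annual benefit and a constant discount rate (both numbers are invented for illustration, not estimates for Haussmann’s Paris):

```python
# Present value of a long benefit stream: a fixed annual benefit discounted
# at a constant rate. Which rate to use is a values question, not just math.
def present_value(annual_benefit, rate, years):
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

# The same 150 years of $1M/year benefits shrink dramatically as the
# discount rate rises; distant generations all but vanish at 7%.
for r in (0.01, 0.03, 0.07):
    print(f"r={r:.0%}: ${present_value(1e6, r, 150):,.0f}")
```

At a 1% rate the stream is worth roughly five times what it is at 7%, which is why the choice of discount rate quietly decides whether century-scale projects ever pencil out.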
I keep asking questions because we have slowly transitioned away from the strict valuations that make economics a coherent discipline into the realm of the philosophical. Valuation is quantified, but values are qualified. What we think counts, what we think matters, is ultimately what undergirds our underwriting. The awakening of an environmental consciousness starting in the 1960s led engineers to quantify environmental protection in most construction projects. We didn’t include it before, and now we spend a lot of time (often too much time) calculating and adding it into the sum total of a project’s benefits and costs.
All of this thinking brings us back to this week’s podcast episode, which focuses on Boston’s Big Dig megaproject. By reputation, the Big Dig has gone down as one of the greatest boondoggles in American infrastructure history. Its goal was deceptively simple: take the unsightly six-lane highway known as the Central Artery that scarred central Boston and replace it with a series of well-designed tunnels that would move vehicular traffic below ground and out of sight (think of it as Elon Musk’s The Boring Company, but actually working). Decades later and billions and billions of dollars over budget, it finally opened about 15 years ago. It wasn’t so much a celebration as a sigh of relief for politicians who had long been sapped of capital and willpower by the forever project.
My guest Ian Coss recently recorded a nine-part retrospective series on the Big Dig for Boston’s NPR affiliate GBH News. A local, he had heard the traumatic story of the Big Dig: its cost overruns, the insane construction delays and the herculean efforts by Boston’s leaders to bring the unmitigated disaster of a project to a final close. But what he discovered across more than 100 interviews and assiduous research was a very different narrative: the Big Dig — considered in its totality and context — was a massive success. In fact, it was such a success that we might consider doing such a megaproject again somewhere else, and soon.
On the expense side of the equation, many of the project’s cost overruns were not, in fact, overruns. According to Ian, the federal government’s modeling of construction costs failed to take into account inflation during the 1970s and early 1980s (hint: inflation wasn’t low then), which meant that the massive and lengthy Big Dig would see its costs multiply just to account for already-known inflation. Such positioning no doubt helped the project gain early traction with politicians and the public, but the astonishing increases in cost estimates soured perceptions of competent management.
It’s on the benefits side of the equation though that Ian makes his strongest argument. With the Big Dig now finished for more than a decade, we can see the extent of evolution in Boston’s urban landscape, and it’s astonishing. The Seaport district (where I once lived) has gone from a handful of converted loft apartment buildings to now dozens of skyscrapers holding residences, offices and hotels. Before the Big Dig, the Seaport was all but unreachable without crossing the highway that cut it off from the rest of downtown Boston.
The cut through central Boston that once held the Central Artery freeway has now been replaced with a greenbelt park that flows through dense neighborhoods like the North End, which now have much more fluid access to the rest of the city. Transportation options like Amtrak’s Acela stop at South Station are now easily accessible from the financial district. Perhaps most importantly, traffic in the tunnels is significantly reduced compared to the congestion that the Central Artery offered commuters for years (although ask any so-called MAsshole driver and they will certainly scream at you otherwise).
The Big Dig cost upwards of $15 billion, although no one really knows how to calculate the true cost of the project. On reflection and over time, that price increasingly seems like a bargain for rejuvenating one of America’s founding cities and opening up whole new neighborhoods for residents and office workers.
So how do we measure audacity? Ambitious projects are always going to induce sticker shock; after all, there’s rarely a discount bin for boldness. Announce the true cost of one of these projects, and even if accurate, the immediate revulsion will occlude even the slightest spark of imagination of what is possible. Maybe we’re so inured to failure that we just want these big ideas shot down lest they gain momentum and start to terrorize us.
I think Ian’s key point though is important to remember: sometimes, it really takes the megaproject to make it all work. The waterworks and water wars of California eventually transformed the state into the most productive agricultural region of the United States while also producing that city of quartz, Los Angeles. Pierre L'Enfant’s design for Washington DC brought a grandeur to a young nation’s capital that was sorely lacking amidst the swamps of the Potomac. Energy projects from the Hoover Dam to the expansion of fracking and massive solar installations have brought abundant energy that has powered modernity for all of us. The occasional megaproject — well-designed, well-planned and well-executed — has the ability to transform our lives for the better.
Nitish Pahwa rings the death knell in Slate on Quora, the once ubiquitous Q&A site that has become a vortex of stupidity with the rise of generative AI. “The tragedy of Quora is not just that it crushed the flourishing communities it once built up. It’s that it took all of that goodwill, community, expertise, and curiosity and assumed that it could automate a system that equated it, apparently without much thought to how pale the comparison is.”
Our scientist-in-residence Sam Arbesman and I were both quite excited by the announcement of the success of the Vesuvius Challenge, which uses AI models to interpret high-resolution CT scans of the 1,800 Herculaneum Papyri that were buried during the eruption of Mount Vesuvius in 79 AD. What an incredible fusion of past relics, present technology and future progress.
Sam highlights “Neal Stephenson’s Most Stunning Prediction” in The Atlantic, in an interview with Matteo Wong. “About a year ago, I worked with a start-up that makes AI characters in video games. I found it rewarding and fascinating because of the hallucinations: I could see how new patterns emerged from the soup of inputs being fed to it. The same thing that I consider to be a feature is a bug in most applications.”
Sam knows what I love, and it’s a rogue billionaire building a nuclear weapon. So when he recommended Sharon Weinberger’s new piece, “Could a Rogue Billionaire Make a Nuclear Weapon?,” he hit gold. The answer, of course, is yes. “Not everyone I spoke with about the study agreed that billionaires could—or would want to—operate a nuclear weapons business. After all, Musk’s grandiosity led him to buy Twitter, not a uranium mine in Kazakhstan…”
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s Law”). Each new generation is more and more expensive to train as researchers exponentially increase the number of parameters used and the overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is either already in development or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. With AI today, that miracle of performance runs in reverse. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have attempted to pack in more and more parameters, each one of which demands more computation for both training and usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
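As a back-of-envelope illustration of that reversal (my sketch, not the newsletter’s math), the widely used approximation that training a dense transformer takes about 6 × parameters × tokens floating-point operations shows how costs scale; the GPU throughput and hourly price below are hypothetical round numbers.

```python
# Rough training cost via the common C ≈ 6 * N * D FLOPs approximation
# for dense transformers. Throughput and price are assumed round numbers,
# so treat the outputs as orders of magnitude only.
def training_cost_usd(params, tokens, flops_per_gpu_sec=1e14, usd_per_gpu_hour=2.0):
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / flops_per_gpu_sec / 3600
    return gpu_hours * usd_per_gpu_hour

# A 1M-parameter toy model vs. a PaLM-scale 540B-parameter model trained
# on its reported ~780B tokens: fractions of a cent vs. millions of dollars.
print(f"1M params:   ${training_cost_usd(1e6, 20e6):,.6f}")
print(f"540B params: ${training_cost_usd(540e9, 780e9):,.0f}")
```

Under these assumptions the two models differ in cost by roughly ten orders of magnitude, which is the whole structural problem in miniature.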
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to the big tech companies like Microsoft (through OpenAI), Google and others who have the means and teams to lavish. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, that means the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches to AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading, as it typically does, to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
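A toy sketch of why efficiency unlocks on-device AI (illustrative only; the device memory budgets are invented round numbers): a model’s weight footprint is roughly parameters × bytes per weight, so smaller models and lower-precision weights decide whether a model fits on a phone or needs a data center.

```python
# Weight-only memory footprint: parameters * bytes per weight. Activations
# and caches add more; the device budgets below are illustrative, not specs.
def weight_memory_gb(params, bytes_per_weight):
    return params * bytes_per_weight / 1e9

DEVICE_BUDGET_GB = {"phone": 8, "laptop": 32, "datacenter node": 640}

for params, name in [(1e6, "1M"), (7e9, "7B"), (540e9, "540B")]:
    for bytes_pw, precision in [(2, "fp16"), (0.5, "int4")]:
        gb = weight_memory_gb(params, bytes_pw)
        fits = [d for d, cap in DEVICE_BUDGET_GB.items() if gb <= cap] or ["none"]
        print(f"{name:>4} @ {precision}: {gb:10,.3f} GB -> {fits}")
```

Under these assumptions, a 7-billion-parameter model at 4-bit precision squeezes onto a phone, while a 540-billion-parameter model at fp16 fits nowhere on the list.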
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America was serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and just dumped chips on the market, just as China has done for years, from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were, of course, goals other than economic growth for restricting China’s access to chips. America is deeply concerned about the country’s integration of AI into its military, and it wants to slow the evolution of China’s autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development of the decade so far. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep growing through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
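That loop can be caricatured in a few lines of arithmetic; the “gain” coefficient below is entirely invented and exists only to show that such a loop stabilizes when each round of warming induces less than a degree of further warming, and runs away otherwise.

```python
# Toy feedback loop: warming -> AC demand -> emissions -> more warming.
# `gain` is the invented fraction of each increment that feeds back.
def total_warming(gain, initial=1.0, steps=50):
    total, increment = 0.0, initial
    for _ in range(steps):
        total += increment
        increment *= gain
    return total

print(total_warming(0.5))  # damped: converges toward 2.0 (geometric series)
print(total_warming(1.1))  # runaway: keeps growing with every step
```

The real system is vastly more complicated, but the qualitative point stands: whether adaptation demand amplifies or merely dents emissions determines whether the loop is self-limiting.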
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a hard science fiction novel about first contact that made quite a splash when it was published back in 2006.
Also recommended: the story of Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”