One recommendation in last week’s “Securities” newsletter was a feverishly forwarded article in The New York Times describing how a professor at NYU was let go after students complained that his chemistry class was too hard and had not adapted to the unique challenges of the Covid-19 era. Responses were shrill across the spectrum, from folks lamenting the deplorable state of the nation’s youth to those who feel that education should better accommodate the mental health of students.
Left out of much of the conversation is just how awful STEM (science, technology, engineering, mathematics) education remains in America, particularly at universities but by no means exclusive to them.
It’s a theme that I have personally written about for more than a decade, all the way back to when I was a columnist at The Stanford Daily and caused a minor kerfuffle over an article entitled “Why I Left Science”. Reflecting on my options for study, I wrote in 2009 that:
I looked at the equation before me: large lecture classes with memorization-intensive coursework plus little support plus little opportunity to be involved in science equals bad Stanford education. So I left.
There is a yawning gap between what science education can be and what is actually delivered in most classrooms. The sciences should be endeavors of wondrous discovery and curiosity that inculcate a deep empathy for the laws and theories that hold our tenuous world together.
What’s delivered instead far too often are crowded lecture halls, impenetrable textbooks, and relative bell curve standards that brook no affordances to the talent that’s actually seated in the classroom. These conditions are worsened by the asinine need to “weed out” students, either from majoring in these fields or from pursuing medical school, guaranteeing that the already-prepared are the only ones who make it through the gauntlet.
How many of America’s best potential engineers and scientists have been dissuaded from these fields simply by the abominable morass that is introductory college education in these subjects? The answer is impossibly high, given that all fields but computer science have watched enrollments decline even as America’s student population swells.
A rebuilt education in STEM would start with experiential labs first. It would offer hacker and maker spaces where students can explore, invent and design everything from semiconductor circuits and synthetic biology systems to software apps and chemical reactions. It would guide students to find their passion in the field before their brains are jammed with factual minutiae.
Most importantly, the entire concept of “weeding out” must be damned to its grave. Elitism and high standards are not a problem, but far too often, weeding out is nothing more than a lazy crutch for lecturers who long ago should have been banished from the classroom. It’s well past time that universities stopped America’s anti-science descent and started asking: why did all the students leave? Check them out at midterms and you’ll find out why.
Insulating from the news is like breathing in asbestos
Every week the gusher of news surges just a bit more. Recently, we had the continued UK gilt crisis, currencies like the yen hitting multi-decade extremes, a coup d’état in Burkina Faso (the second this year for those counting), Russia pummeling Ukraine with its heaviest volley of missiles yet, and that’s not even getting into the sulfurous cacophony of the U.S. midterm elections or the baseless speculation around China’s 20th Party Congress.
One response to this volume of information is just to turn it off. It’s anecdata, but an increasing number of my friends have just switched off the news entirely. And when I say entirely, I mean they have unsubscribed from all publications and have, for all intents and purposes, relocated to a cave with very bad 5G reception. Most of them claim it as a form of mental wellness — the news is simultaneously draining, exhausting, and depressing, so the best approach is simply to lance it entirely. Ignorance is bliss, in other words.
This is not a new phenomenon in tech circles of course. In much the same way that Soylent solved the “distraction” of eating, founders and software engineers have regularly told me over the years that reading the news hurts their productivity “vibe”, and so it’s best to just stay rigorously focused on product development and occlude the outside world.
That was easier when multiples expansion allowed Silicon Valley to essentially live in an ethereal bubble outside the normal world order. Economics and politics played no role in the daily life of company building, and with everyone awash with cash, no one needed to know how capital formed or how it actually flowed.
But insulation from the news is a form of deep privilege, an indication that financial resources can solve any arriving problems and that the savageries of life can be held at bay. News is ultimately a necessity to all but the lucky few. Think of the young men of Russia who received news that they would be imminently drafted into Vladimir Putin’s war and had mere hours to run before the government started banning their travel by plane and over land. Hope you had push notifications on.
Ignorance isn’t bliss for those left behind, or really any of us. Reading the news can sometimes be described as a citizen’s duty, a form of civic engagement in the polity. It’s much more individualistic and valuable than that though — it’s really about survival, and for some, a means of thriving. Knowing what’s going on is the key to adaptation, and adaptation — as hard and gruesome as it can be at times — is the path to bliss.
Governance complexity and infrastructure
The other story that whipped around my networks this week — and also from The New York Times — was the paper’s profile of California’s disastrous high-speed rail boondoggle. In particular, one passage was quoted by dozens of my friends:
The state was warned repeatedly that its plans were too complex. SNCF, the French national railroad, was among bullet train operators from Europe and Japan that came to California in the early 2000s with hopes of getting a contract to help develop the system. The company’s recommendations for a direct route out of Los Angeles and a focus on moving people between Los Angeles and San Francisco were cast aside, said Dan McNamara, a career project manager for SNCF. The company pulled out in 2011. “There were so many things that went wrong,” Mr. McNamara said. “SNCF was very angry. They told the state they were leaving for North Africa, which was less politically dysfunctional. They went to Morocco and helped them build a rail system.”
There was much tittering that supposedly-advanced California had been passed over in favor of emerging-market Morocco. The condescension is entirely unwarranted: Morocco has a working high-speed rail line connecting Casablanca to Tangier that opened in 2018, the first on the African continent. With 186 kilometers of dedicated high-speed track, Morocco has a longer bullet-train network than the United States, where Amtrak’s Acela only hits its maximum speed on about 80 kilometers of track. Morocco also plans to expand the line significantly further in the decade ahead.
The wider pattern brought to light in this story though is the balance between complexity and governance that we explored back in “Marginal stupidity”:
[Joseph A. Tainter]’s theory is that there are declining marginal returns to investment in complexity, but he also implies that such complexity can also turn negative. There is a point at which complexity begets further complexity with no increase in productivity, essentially leveraging a collective “tax” on everyone to maintain an ever more complicated bureaucracy for no value.
Nations can clearly have too little government, with an inability to identify and track property rights, to adjudicate disputes, and to provide for general stability and well-being. On the other extreme, it’s entirely possible for governments to have so much useless complexity that they become BANANA republics (“build absolutely nothing anywhere near anything”).
California’s dysfunction is hardly a new story (unless you have decided to tune out the news!), but it’s important to remember how its dysfunction actually renders. High-speed rail isn’t a new invention, but it does require rigorous planning, optimization, and project management. It’s a skillset that’s clearly in a sorry state across the Golden State, and much of America to boot.
“Securities” Podcast: We will observe a battle for the true openness in AI
No technology has as many dual-use challenges as artificial intelligence. The same AI models that conjure vivid illustrations and visual effects for movies are the exact models that can generate democracy-killing algorithmic propaganda. Code may well be code, but more and more AI leaders are considering how to balance the desire for openness with the need for responsible innovation.
One of those leading companies is Hugging Face (a Lux portfolio company), and part of the weight of AI’s safe future lies there with Carlos Muñoz Ferrandis, a Spanish lawyer and PhD researcher at the Max Planck Institute for Innovation and Competition (Munich). Ferrandis is co-lead of the Legal & Ethical Working Group at BigScience and the AI counsel for Hugging Face. He’s been working on Open & Responsible AI licenses (“OpenRAIL”) that fuse the freedom of traditional open-source licenses with the responsible usage that AI leaders wish to see emerge from the community.
Ferrandis and I talk about why code and models require different types of licenses, balancing openness with responsibility, how to keep the community adaptive even as AI models are added to more applications, how these new AI licenses are enforced, and what happens when AI models get ever cheaper to train.
Our scientist-in-residence Sam Arbesman recommends the recent “Aims of Education” lecture at the University of Chicago from Professor Agnes Callard, who discusses civilization, death, and how we should use the time we've been given to build a meaningful life.
Speaking of complexity and governance, ProPublica’s Sebastian Rotella and Kirsten Berg have a dynamite story on how Latin American drug cartels connected with Chinese money launderers to simultaneously evade global drug laws and Beijing’s capital controls. It’s an incredible financial tale, and one that shows just how hard it is to root out criminality in a densely interconnected global economy.
WePresent, a publication of file transfer service WeTransfer, has a great piece from design studio Bompas & Parr that explores the rituals of diplomatic dining. “And for all the planning that goes into such events, diplomats attending as guests may face the odd high-risk dining table challenge. ‘There’s a tradition in Kazakhstan to give a sheep’s head to an honored guest, who must allocate pieces of it around the table,’ says [Paul Brummell, a serving UK ambassador and former head of soft power at the Foreign, Commonwealth and Development Office in London]. ‘The etiquette around the process can be quite elaborate, as you are supposed to assign pieces according to the qualities of the other guests.’”
The antipode of our conversation with NYU professor Jonathan Haidt on the “Securities” podcast back in June, Malcolm Harris offers a thoughtful analysis in “Why Are Kids So Sad?”, focusing on economic factors over technological ones.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s law”). Each new generation is becoming more and more expensive to train as researchers exponentially increase the number of parameters used and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is currently being developed or soon will be.
As semiconductor chips rise in complexity, costs come down because transistors are packed more densely on silicon, cutting the cost per transistor during fabrication as well as lowering operational costs for energy and heat dissipation. That miracle of performance runs in reverse with AI today. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have packed in more and more parameters, each one of which demands more computation both for training and for usage. A 1-million-parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM, with 540 billion parameters, requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
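To make that gap concrete, here is a minimal back-of-the-envelope sketch. It assumes the common rough rule of about 6 FLOPs per parameter per training token, plus illustrative figures for accelerator throughput and cloud pricing; none of these constants are vendor or company disclosures.

```python
# Rough training-cost estimate: ~6 FLOPs per parameter per training token,
# an accelerator sustaining ~1e14 FLOP/s, and $2 per GPU-hour.
# All constants are illustrative assumptions, not published figures.

def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu_s: float = 1e14,
                      usd_per_gpu_hour: float = 2.0) -> float:
    total_flops = 6 * params * tokens                 # standard rough estimate
    gpu_hours = total_flops / flops_per_gpu_s / 3600  # seconds -> hours
    return gpu_hours * usd_per_gpu_hour

# A toy 1M-parameter model on 1B tokens: effectively free.
print(f"1M params, 1B tokens:     ${training_cost_usd(1e6, 1e9):,.2f}")
# A PaLM-scale model (540B parameters, ~780B tokens): millions of dollars.
print(f"540B params, 780B tokens: ${training_cost_usd(540e9, 780e9):,.0f}")
```

Under these assumptions the toy model costs pennies while the PaLM-scale run lands in the low tens of millions of dollars, which is the asymmetry driving the whole debate below.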
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
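As a rough illustration of where that efficiency comes from, the same 6 × parameters × tokens approximation shows how a smaller model trained on more data (the recipe popularized by DeepMind’s Chinchilla) consumes a fraction of the compute of a PaLM-scale run and is far cheaper to serve; whether it matches quality on a given task is exactly the open question.

```python
# Compare rough training compute for a very large model vs. a smaller,
# longer-trained one. Figures are public ballpark numbers, not exact budgets.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens  # common rough approximation

palm_scale = training_flops(540e9, 780e9)        # 540B params, ~780B tokens
chinchilla_scale = training_flops(70e9, 1.4e12)  # 70B params, ~1.4T tokens

print(f"PaLM-scale run:       {palm_scale:.2e} FLOPs")
print(f"Chinchilla-scale run: {chinchilla_scale:.2e} FLOPs "
      f"(~{chinchilla_scale / palm_scale:.0%} of the larger budget)")
# The smaller model is also roughly 8x cheaper per generated token to serve,
# since inference compute scales approximately with parameter count.
```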
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to the big tech companies like Microsoft (through OpenAI), Google and others that have the means and the teams to lavish on them. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent — on balance — is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits the way American companies can. That leaves China with the only other path available: following the parallel course of improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. That means its resources are arrayed toward building the platforms that capture end-user applications — the exact opposite of American policymakers’ goal. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, just as China has done for years in everything from solar panels to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were of course other goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about China integrating AI into its military, and it wants to slow the evolution of autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like “Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st.” It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, and the market is projected to keep expanding through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
A profile of Mohammed bin Rashid Al Maktoum shows just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”