Ukraine President Volodymyr Zelenskyy and NATO Secretary General Jens Stoltenberg at NATO Headquarters in Brussels. Photo by NATO.
Why some international organizations solve problems — and others don’t
This guest column is written by recurring “Securities” guest star Michael Magnani. Michael is a Political Risk Researcher based in NYC and graduated from NYU’s Center for Global Affairs with a concentration in Transnational Security. His first column was Principal Problems and his second was Loose BRICS.
Last month, I excoriated BRICS — the catchy acronym and annual conference for Brazil, Russia, India, China and South Africa — as a Potemkin entity devoid of any real agency. BRICS is hardly alone, for there is an alphabet soup of international pseudo-organizations and confabs (IPEF, APEC, G20, AU, RCEP, SCO, and CSTO come to mind) which do nothing more than have world leaders rub shoulders, engage in a photo op and argue over the wording of a closing memorandum no country will abide by anyway.
No international institution comes closer to nihilism, though, than COP, the “Conference of the Parties” and the formal name for the annual UN Climate Change Conference. The 28th gathering runs this December, hosted by the United Arab Emirates. In those 28 years, climate change has undoubtedly gotten worse, while COP has at best produced hollow agreements that countries regularly shirk (the one big breakthrough in climate negotiations, the Paris Agreement at COP21, has done little to stanch global carbon emissions despite its attempt to create emissions accountability). To add to the usual hypocrisy of attendees sandwiching their doomsday speeches between flights in and out on their private jets, COP this year is being chaired by an OPEC member state that relies on oil and gas for roughly one-third of its GDP. Photo op, indeed.
If we were to dive into these international organizations and conferences and explain why they are largely ineffective in achieving their stated goals, this piece would rival the length of the Maastricht Treaty (look up a picture of that one). Instead, it is fruitful to highlight two that are actually effective and why: NATO and the European Union.
NATO is the most successful military alliance since the Allies in World War II, growing from 12 founding members in 1949 to 31 members now (32 pending Sweden’s accession). Part of this longevity stems from NATO’s ability to repeatedly reinvent and adapt itself in the face of obsolescence and accusations of strategic drift. Following the demise of the Soviet Union in 1991, many analysts considered the purpose of the organization fulfilled and the alliance a deadweight during the so-called peace dividend of the 1990s. Instead, NATO pivoted to a peacekeeping role in the smoldering Yugoslav Wars while also adding former Eastern Bloc states as members (the repercussions of the latter are still being debated today).
Following 9/11, the U.S. invoked NATO’s Article V mutual defense provision — a first for the alliance that pushed it to a new level of coordination and activity. Over the course of the next twenty years, it became involved in counter-terrorism and counter-insurgency missions in places such as Afghanistan, Syria and Iraq. More recently, NATO has pivoted back to great power conflict in response to the Russian invasion of Ukraine and the rise of China. These recent events, particularly the former, have led to the largest shift in NATO policy since the alliance’s inception. Massive increases in defense spending and production, coupled with the invitation of the historically neutral Finland and Sweden, have heightened NATO’s influence.
There are still those who believe NATO is merely a listless bureaucracy in Brussels searching for its purpose (French President Emmanuel Macron described its ‘brain death’ back in 2019). Yet NATO is more vital than ever, particularly against the backdrop of so many feckless international institutions. The reason is as simple as it is hard to acquire: agency. ‘An attack on one is an attack on all’ is not just a catchy marketing line in Article V; it is an actual delegation of power over the use of force from 31 countries to the organization.
That power is backed up by multiple organizations within NATO: an integrated command structure involving the militaries and governments of all member states; a multinational, rapid-response corps that integrates military units from various member states and increases interoperability; extensive battlefield training and cyberwarfare programs; as well as one of the great equalizers in the form of NATO’s Enhanced Forward Presence (commonly dubbed battlegroups).
These eight battlegroups (expanded from four after Russia’s invasion of Ukraine) are stationed in the Baltics and across Eastern Europe, and are meant to augment and enhance the defense of member states bordering Russia with personnel and materiel from member states not situated along the alliance’s eastern flank. They demonstrate that a distant country such as Canada takes the defense of fellow member Latvia seriously; indeed, so seriously that Canadian soldiers have led NATO’s battlegroup in Latvia since 2017.
NATO is a military alliance with real power, which is one reason so many missions keep getting assigned to it. It owes its success and longevity to a variety of factors: its ability to recreate itself and redefine its mission, the hard-power capabilities of the member states that comprise it, and chiefly, the agency those member states have delegated to it. And while NATO’s annual summits between heads of state offer plenty of bluster, politicking and the occasional cringeworthy photo op (the infamous blue carpet rollout comes to mind), principal- and staff-level negotiations have a history of fruitfulness that is entirely lacking in COP, BRICS and other international summits.
There just aren’t that many other international organizations with the treaty scaffolding and delegation of responsibility that allow them to accomplish their objectives. The only other one that comes close — and is arguably more impactful given its wider remit — is the European Union (EU).
Like NATO, the EU has had an uncanny ability to reinvent (and expand) itself, especially in times of crisis, from a six-nation agreement to create a unified market for coal and steel in the wake of the destruction of World War II to inventing new institutions and improving them in response to the Iron Curtain, the Yugoslav Wars, the Greek debt crisis, the COVID-19 pandemic, and Russia’s invasion of Ukraine. The EU, for all its malaise and the hate it receives, is the third largest economy in the world in both nominal and PPP terms. Its system, while bureaucratic given the union’s construction (again, please look at the size of the Maastricht Treaty), is a bold statement built on shared ideals, a free-flowing economic zone and a common currency that incentivizes membership and holds sway in international affairs. It’s an organization (and let’s be frank, a whole family of organizations) that has real power and agency to affect the world.
All of this isn’t to say that these two international organizations are perfect, but both were forged with serious challenges in mind that forced the development of strong institutions. NATO was designed to protect against the risk of Soviet expansion, while what would become the EU was designed to fuse the long-warring states of Europe together into a peaceful confederation.
The world is filled with large global challenges, from climate change and public health crises to international terrorism and transnational crime syndicates. Yet we could spoon through that giant bowl of alphabet soup of international organizations, conferences and economic agreements and rarely find an entity with the real power to address these challenges. The success of NATO and the EU should be a beacon: international organizations with the right delegations of power and resources can effect large improvements for all of humanity.
Lux Recommends
Our scientist-in-residence Sam Arbesman enjoyed influential computer scientist Fei-Fei Li’s essay in The Atlantic on “My North Star for the Future of AI.” “What makes the companies of Silicon Valley so powerful? It’s not simply their billions of dollars, or their billions of users, or even the incomprehensible computational might and stores of data that dwarf the resources of academic labs. They’re powerful because of the many uniquely talented minds working together under their roof. But they can only harness those minds—they don’t shape them. I’d seen the consequences of that over and over: brilliant technologists who could build just about anything but who stared blankly when the question of the ethics of their work was broached.”
I enjoyed an excellent pair of articles on the present state of finance and media (and recommend pairing with a nice Chianti). In Wired, Brendan I. Koerner wrote a dazzling profile of Ursus Magana, a TikTok talent manager who navigates the omnipotent algorithm that rules our culture today. “Magana also acknowledged that, like so many startups without outside investors, 25/7 Media remains just a few blunders away from the abyss: ‘If 50 percent of my talent or 50 percent of my staff doesn’t work out, my daughter doesn’t eat,’ he said.” Meanwhile, Kate Dwyer in Esquire has a realistic read on the economics of the novelist. “[Knopf editor Jenny Jackson] estimates that the full-time authors on her roster spend 20 hours per week writing op-eds, doing panels, communicating with their fans on social media, participating in PR campaigns, and taking meetings with Hollywood execs. That’s why the ones who have day jobs—the industry majority—must treat publishing their book like a second job.”
Sam enjoyed Benjamin Breen’s look at the innovation of the open-stack library. “The open stack library — a library in which books are prominently displayed and publicly browsable, rather than guarded in closed rooms or cabinets — is a specific historical invention, one that developed out of the intellectual idealism of the late eighteenth century.”
In fascinating news: Maine voters passed a constitutional amendment that would restore sections of the constitution back into printed copies of the state’s founding document. From a report on the subject: “The purpose of this Report is to explain from a legal perspective, and to the extent possible in light of the century and a half that has since passed, how and why the Maine Constitution was amended in 1876 to remove from printed copies of that Constitution, but not from the Constitution itself, the original language directing Maine to assume ‘the duties and obligations of this Commonwealth, towards the Indians within said District of Maine.’” [emphasis added]
Sam enjoyed Sarah Wells’s look in IEEE Spectrum on “Generative AI’s Energy Problem Today Is Foundational.” “‘You could say that a single LLM interaction may consume as much power as leaving a low-brightness LED lightbulb on for one hour,’ [Alex de Vries] says.”
Finally, Sam enjoyed reading the classic book on Adobe, Pamela Pfiffner’s “Inside the Publishing Revolution: The Adobe Story.” To me, Adobe’s extraordinary influence on the world has never quite captured the limelight the way its web-focused tech giant peers have, and that’s a terrible loss.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Forcing China’s AI researchers to strive for chip efficiency will ultimately shave America’s lead
Right now, pathbreaking AI foundation models follow an inverse Moore’s law (sometimes quipped as “Eroom’s Law,” Moore spelled backwards). Each new generation is becoming more and more expensive to train as researchers exponentially increase the number of parameters used and overall model complexity. Sam Altman of OpenAI said that the cost of training GPT-4 was over $100 million, and some AI computational specialists believe that the first $1 billion model is in development now or soon will be.
As semiconductor chips rise in complexity, costs come down: transistors are packed more densely on silicon, cutting the cost per transistor during fabrication and lowering operational costs for energy and heat dissipation. That miracle of performance runs in reverse with AI today. To increase the complexity (and therefore, hopefully, the quality) of an AI model, researchers have attempted to pack in more and more parameters, each one of which demands more computation both for training and for usage. A 1 million parameter model can be trained for a few bucks and run on a $15 Raspberry Pi Zero 2 W, but Google’s PaLM with 540 billion parameters requires full-scale data centers to operate and is estimated to have cost millions of dollars to train.
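To make the scaling intuition concrete, here is a back-of-the-envelope sketch using the widely cited approximation that training a dense transformer costs roughly 6 FLOPs per parameter per training token. The model size, token count, GPU throughput and rental price below are illustrative assumptions, not figures from the piece:

```python
# Rough training-cost estimate via the common rule of thumb:
#   training FLOPs ~= 6 * parameters * training tokens
# All numbers below are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

def training_cost_usd(flops: float, flops_per_gpu_hour: float,
                      usd_per_gpu_hour: float) -> float:
    """Convert a FLOP budget into a dollar estimate for rented GPUs."""
    gpu_hours = flops / flops_per_gpu_hour
    return gpu_hours * usd_per_gpu_hour

# Hypothetical 175-billion-parameter model trained on 300 billion tokens:
flops = training_flops(175e9, 300e9)  # ~3.15e23 FLOPs

# Assume an A100-class GPU sustaining ~150 TFLOP/s (~40% utilization),
# i.e. ~5.4e17 FLOPs per hour, rented at ~$2 per GPU-hour.
cost = training_cost_usd(flops, 150e12 * 3600, 2.0)
print(f"{flops:.2e} FLOPs, ~${cost:,.0f}")
```

Under these assumptions the estimate lands around a million dollars of raw compute; a tenfold jump in parameters (with proportionally more data) multiplies the FLOP budget a hundredfold, which is the inverse-Moore dynamic described above.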
Admittedly, simply having more parameters isn’t a magic recipe for better AI end performance. One recalls Steve Jobs’s marketing of the so-called “Megahertz Myth” to attempt to persuade the public that headline megahertz numbers weren't the right way to judge the performance of a personal computer. Performance in most fields is a complicated problem to judge, and just adding more inputs doesn't necessarily translate into a better output.
And indeed, there is an efficiency curve underway in AI outside of the leading-edge foundation models from OpenAI and Google. Researchers over the past two years have discovered better training techniques (as well as recipes to bundle these techniques together), developed best practices for spending on reinforcement learning from human feedback (RLHF), and curated better training data to improve model quality even while shaving parameter counts. Far from surpassing $1 billion, training new models that are equally performant might well cost only tens or hundreds of thousands of dollars.
This AI performance envelope between dollars invested and quality of model trained is a huge area of debate for the trajectory of the field (and was the most important theme to emanate from our AI Summit). And it’s absolutely vital to understand, since where the efficiency story ends up will determine the sustained market structure of the AI industry.
If foundation models cost billions of dollars to train, all the value and leverage of AI will accrue and centralize to big tech companies like Microsoft (through OpenAI), Google and others with the means and the teams to spend lavishly. But if the performance envelope reaches a significantly better dollar-to-quality ratio in the future, the whole field opens up to startups and novel experiments, while the leverage of the big tech companies would be much reduced.
The U.S. right now is parallelizing both approaches toward AI. Big tech is hurling billions of dollars at the field, while startups are exploring and developing more efficient models given their relatively meager resources and limited access to Nvidia’s flagship chip, the H100. Talent, on balance, is heading as it typically does to big tech. Why work on efficiency when a big tech behemoth has money to burn on theoretical ideas emanating from university AI labs?
Without access to the highest-performance chips, China is limited in the work it can do on the cutting-edge frontiers of AI development. Without more chips (and in the future, the next generations of GPUs), it won’t have the competitive compute power to push the AI field to its limits like American companies. That leaves China with the only other path available, which is to follow the parallel course for improving AI through efficiency.
For those looking to prevent the decline of American economic power, this is an alarming development. Model efficiency is what will ultimately allow foundation models to be preloaded onto our devices and open up the consumer market to cheap and rapid AI interactions. Whoever builds an advantage in model efficiency will open up a range of applications that remain impractical or too expensive for the most complex AI models.
Given U.S. export controls, China is now (by assumption, and yes, it’s a big assumption) putting its entire weight behind building the AI models it can, which are focused on efficiency. Which means that its resources are arrayed for building the platforms to capture end-user applications — the exact opposite goal of American policymakers. It’s a classic result: restricting access to technology forces engineers to be more creative in building their products, the exact intensified creativity that typically leads to the next great startup or scientific breakthrough.
If America were serious about slowing the growth of China’s still-nascent semiconductor market, it really should have taken a page from the Chinese industrial policy handbook and simply dumped chips on the market, just as China has done for years in industries from solar panel manufacturing to electronics. Cheaper chips, faster chips, chips so competitive that no domestic manufacturer — even under Beijing’s direction — could have effectively competed. Instead, we are attempting to decouple from the second-largest chip market in the world, turning a competitive field where America is the clear leader into a bountiful green field of opportunity for domestic national champions to usurp market share and profits.
There were of course other goals beyond economic growth for restricting China’s access to chips. America is deeply concerned about the country’s AI integration into its military, and it wants to slow the evolution of its autonomous weaponry and intelligence gathering. Export controls do that, but they are likely to come at an exorbitant long-term cost: the loss of leadership in the most important technological development so far this decade. It’s not a trade-off I would have built trade policy on.
The life and death of air conditioning
Across six years of working at TechCrunch, no article triggered an avalanche of readership or inbox vitriol quite like Air conditioning is one of the greatest inventions of the 20th Century. It’s also killing the 21st. It was an interview with Eric Dean Wilson, the author of After Cooling, about the complex feedback loops between global climate disruption and the increasing need for air conditioning to sustain life on Earth. The article was read by millions and millions of people, and hundreds of people wrote in with hot air about the importance of their cold air.
Demand for air conditioners is surging in markets where both incomes and temperatures are rising, populous places like India, China, Indonesia and the Philippines. By one estimate, the world will add 1 billion ACs before the end of the decade, with the market projected to keep growing through 2040. That’s good for measures of public health and economic productivity; it’s unquestionably bad for the climate, and a global agreement to phase out the most harmful coolants could keep the appliances out of reach of many of the people who need them most.
This is a classic feedback loop, where the increasing temperatures of the planet, particularly in South Asia, lead to increased demand for climate resilience tools like air conditioning and climate-adapted housing, leading to further climate change ad infinitum.
Josh Wolfe gave a talk at Stanford this week as part of the school’s long-running Entrepreneurial Thought Leaders series, talking all things Lux, defense tech and scientific innovation.
Lux Recommends
As Henry Kissinger turns 100, Grace Isford recommends “Henry Kissinger explains how to avoid world war three.” “In his view, the fate of humanity depends on whether America and China can get along. He believes the rapid progress of AI, in particular, leaves them only five-to-ten years to find a way.”
Our scientist-in-residence Sam Arbesman recommends Blindsight by Peter Watts, a first contact, hard science fiction novel that made quite a splash when it was published back in 2006.
Also recommended: an investigation into Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, and just how far he has been willing to go to keep his daughter tranquilized and imprisoned. “When the yacht was located, off the Goa coast, Sheikh Mohammed spoke with the Indian Prime Minister, Narendra Modi, and agreed to extradite a Dubai-based arms dealer in exchange for his daughter’s capture. The Indian government deployed boats, helicopters, and a team of armed commandos to storm Nostromo and carry Latifa away.”
Sam recommends Ada Palmer’s article for Microsoft’s AI Anthology, “We are an information revolution species.” “If we pour a precious new elixir into a leaky cup and it leaks, we need to fix the cup, not fear the elixir.”
I love complex international security stories, and few areas are as complex or wild as the international trade in exotic animals. Tad Friend, who generally covers Silicon Valley for The New Yorker, has a great story about an NGO focused on infiltrating and exposing the networks that allow the trade to continue in “Earth League International Hunts the Hunters.” “At times, rhino horn has been worth more than gold—so South African rhinos are often killed with Czech-made rifles sold by Portuguese arms dealers to poachers from Mozambique, who send the horns by courier to Qatar or Vietnam, or have them bundled with elephant ivory in Maputo or Mombasa or Lagos or Luanda and delivered to China via Malaysia or Hong Kong.”