Securities

Reckless Regulators

Photo by tiero via iStockPhoto / Getty Images

Why are politicians so frightened by AI?

Regulators want to kill the future of artificial intelligence. That’s my only conclusion after a blistering week of global regulatory attacks.

It started on Monday with the long-awaited publication of President Joe Biden’s executive order on “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” At just under 20,000 words, it lays out a gargantuan, whole-of-government effort that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

“And more.” Like Ronald Reagan’s nine most terrifying words in the English language (“I’m from the government, and I’m here to help”), the phrase “and more” from a regulator should always set off alarms. And a lot more is what we got this week.

Across the ever-warming pond in the United Kingdom, Prime Minister Rishi Sunak held an AI Safety Summit at Bletchley Park, the famed research center where Alan Turing and a motley band of early cryptographers and computer scientists broke Nazi Germany’s encryption systems and helped Britain and the Allies win World War II.

The highlight of the summit was the “Bletchley Declaration,” which was signed by 28 countries and the European Union and argued that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.” It was heart-warming to see China, Saudi Arabia, Nigeria and the United Arab Emirates sign the declaration — if only they would address “human rights,” “transparency,” “accountability,” and “ethics” in their own activities.

But I digress. Hopping the Channel, the European Union made progress on its comprehensive AI Act, legislation that has been debated for two years but is inching ever closer to final passage and implementation. The EU’s regulatory model — increasingly popular globally — would divide AI into four risk categories, from minimal to unacceptable, and impose increasingly onerous testing and security requirements as the risk rises. Unacceptable uses are banned outright, leaving “high risk” as the most heavily regulated tier within the system. Problematically, most applications are considered “high risk,” potentially including recommendation systems and social media algorithms.

Finally, former Google CEO Eric Schmidt and DeepMind co-founder Mustafa Suleyman penned an op-ed in The Financial Times two weeks ago calling for an “IPCC for AI.” The goal, in tandem with the United Nations’ High-level Advisory Body on Artificial Intelligence, is to do for AI what the Intergovernmental Panel on Climate Change has done for warming temperatures: scientifically adjudicate exactly what is happening with artificial intelligence and write consensus reports. The implicit promise is that such a body will save us from AI just as effectively as the IPCC has saved us from climate change.

All regulators (so far!) emphasize the positive prospects of AI and argue that they don’t want to proscribe the progress of this technology. From the beginning of Biden’s order: “Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.” That parallels the opening line of the Bletchley Declaration: “Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity.”

Yet all of these regulatory actions have come together in just the past few days. For a novel family of technologies that has barely entered the market (OpenAI’s ChatGPT launched less than a year ago, and Stable Diffusion a little more than a year ago), it is gobsmacking to watch the crazed and craven alacrity with which politicians are attempting to globally regulate a technology we didn’t even know worked a few months ago.

Why the intensity? From what I can surmise, it’s driven by an imaginative fear propelled by fictional Hollywood thrillers. Take this story in Time Magazine this week:

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said [deputy White House chief of staff Bruce Reed], who watched the film with the president.

Mission: Impossible! Emphasis on “impossible”! What’s frustrating about this pervasive fear is that while generative AI technologies offer us new and exciting capabilities, the evolution of AI is synonymous with the evolution of software code and the digitalization of human decision-making. That is the most slippery part of the regulatory regime activated this week: there isn’t even a good definition of what artificial intelligence is.

Take Biden’s executive order, which says “The term ‘AI model’ means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.” Math, in other words.
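Read literally, almost any computation qualifies. As a deliberately mundane sketch of my own (nothing from the order itself), here are a few lines of ordinary least squares, a statistical technique that “produces outputs from a given set of inputs” and is therefore, by this definition, an “AI model”:

```python
# A deliberately mundane “AI model”: ordinary least squares regression,
# which uses a statistical technique to produce outputs from inputs.
from statistics import mean

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Fit y = a*x + b to the data by least squares. Just math."""
    x_bar, y_bar = mean(xs), mean(ys)
    a = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    return a, y_bar - a * x_bar

a, b = fit_line([1, 2, 3, 4], [2.0, 4.1, 5.9, 8.2])
print(f"output for input x=5: {a * 5 + b:.1f}")
```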

One section of the order demands that the government “Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.” First, we know from books like Richard Rothstein’s The Color of Law: A Forgotten History of How Our Government Segregated America that humans have done a pretty good job of creating incredibly discriminatory housing systems, no computers or AI algorithms required.

But let’s worry about the human dimension later and just focus on the digital: where exactly is the line between dumb software code and artificial intelligence? In New York City, it’s common practice for landlords to screen rental applicants with what’s dubbed the 40x formula: documented annual income must be at least 40 times the monthly rent of the apartment. Thus, an entry-level studio at $3,000 per month requires an annual income of $120,000. This formula is built into housing applications as a form of mechanical intelligence.
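To see how thin that line is, here is the 40x formula written out as code (a sketch of mine, not any landlord’s actual screening system). By the executive order’s definition, this too uses computational techniques to produce outputs from a given set of inputs:

```python
# The New York 40x rule as software. Illustrative sketch only,
# not drawn from any real application system.

def passes_40x_rule(annual_income: float, monthly_rent: float) -> bool:
    """Approve only if documented annual income is at least 40x the rent."""
    return annual_income >= 40 * monthly_rent

# An entry-level studio at $3,000 per month requires $120,000 in income.
print(passes_40x_rule(120_000, 3_000))  # True: meets the threshold exactly
print(passes_40x_rule(110_000, 3_000))  # False: $10,000 short
```

Whether a regulator would call that dumb code or an AI model is exactly the ambiguity at issue.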

The irony, of course, is that the use of formulas and very basic “AI models” was pursued by landlords precisely because it warded off human discrimination (and limited lawsuits). Rather than having a biased human rent officer adjudicate an application, every applicant would instead be judged by an objective standard regardless of background. We actively pursued AI as a solution to human problems, and now regulators want to reverse course.

In fact, what’s jarring is just how many of the harms identified across this week’s regulatory actions are already addressed by existing law, and how many have nothing to do with artificial intelligence at all. From the Biden order: “These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.”

What does any of this specifically have to do with artificial intelligence? AI overlords aren’t underpaying employees — human managers and corporate executives are. Unions aren’t busted by Terminator-style Pinkerton agents, but rather by … human Pinkerton agents. If we’re concerned about employee welfare, that goal should itself be the focus of regulatory and legislative action — not this histrionic focus on artificial intelligence.

Even in the national security realm, AI’s supposedly new capabilities are wildly oversold. For instance, there are widespread fears that AI will help to accelerate the development of next-generation bio and chemical weapons, and yet, as I wrote a year and a half ago before all of this hubbub in “AI, dual-use medicine, and bioweapons”:

While there are widespread fears of mad scientists inventing deadly contagions in hidden wetlabs in the caves of Waziristan, the reality is that the world is already familiar with incredibly viral and deadly pathogens. Ebola, as just one choice example, kills roughly half of anyone infected with a relatively high virality rate. As one former presidential advisor on bioweapons explained to me years ago, Mother Nature is quite efficient at producing terrifying bioweapons all on her own, no mad scientists required. Our public health response to a naturally-occurring pandemic and one that is man-made will be exactly the same.

We don’t need better weapons — the best possible weapons are already available today. Humanity was pretty good at finding ways to completely destroy itself well before AI ever arrived.

For such a nascent and unproven technology, it behooves us to tread much more slowly on regulation. We need to encourage widespread experimentation and openness in developing the bleeding edge of AI performance and capabilities. We should be encouraging the distribution of open-source AI models to as many scientists and institutions and users as possible. Everyone should have access to the best AI models humanity has ever crafted.

Why? Look at the incredible human progress made possible by the diffusion of powerful computation over the past few decades. The computer has created trillions of dollars of wealth, improved the lives of billions of people, and given scientists and inventors unparalleled power to chart past the frontiers of humanity’s knowledge into the depths of the unknown.

Critically, we did all of that with minimal global regulation, despite the dangerous uses of the computer. Imagine if regulators had come together in the 1970s, at the dawn of the personal computing revolution and the internet, and attempted to manage the imagined “risks” of these new devices. Imagine if we had locked in IBM, Burroughs, DEC, and other large incumbents by preventing new entrants like Apple and Microsoft from transforming and democratizing computation.

Artificial intelligence is yet another, more advanced form of computation. The existential risks from AI are no different from those of the supercomputers that have been crunching numbers for decades, because AI is just supercomputers crunching numbers. Better numbers, more numbers, but the same computations nonetheless.

Admittedly, there will be bad actors who use AI tools for evil. Some nations will equip their drones with killer AI technology that murders civilians. Terrorists will use AI to scan maps and visual data to identify physical weaknesses and exploit them. Landlords will mold AI to discriminate against the very tenants they don’t want in a building. Employers will use AI to optimize shift schedules and make the lives of employees even more unstable than before.

But here’s the thing: all of that was done before. That’s the permanent promise and peril of artificial intelligence, technology, and humanity itself. We are capable of extraordinary leaps of knowledge and wisdom, of developing capabilities for self-actualization and expression that our ancestors could only dream of. But we are also capable of the most heinous and extraordinary acts of evil imaginable. The curse of humanity is how little our technologies ever seem to really influence our own moral decision-making. Instead of regulation, I want AI to flourish as far as it can — “and more.”

Lux Recommends

  • Ian Urbina, founder of the non-profit The Outlaw Ocean Project, has an enrapturing investigative report in The New Yorker on China’s overseas fishing fleet. “The country now catches more than five billion pounds of seafood a year through distant-water fishing, the biggest portion of it squid. China’s seafood industry, which is estimated to be worth more than thirty-five billion dollars, accounts for a fifth of the international trade, and has helped create fifteen million jobs. … China’s fleet has also expanded the government’s international influence. The country has built scores of ports as part of its Belt and Road Initiative, a global infrastructure program that has, at times, made it the largest financier of development in South America, sub-Saharan Africa, and South Asia.”
  • Our “Securities” producer Chris Gates enjoyed a recent episode of the War on the Rocks podcast featuring Raj Shah of Shield Capital, who talks about his transition from F-16 pilot to the Pentagon and now into defensetech investing. Shield announced a $186 million inaugural fund last month.
  • Alexander Sammon wrote a complex yet human investigation into avocado production in Michoacán, Mexico for Harper’s. “In Cherán, however, there was no such violence. Nor were there any avocados. Twelve years ago, the town’s residents prevented corrupt officials and a local cartel from illegally cutting down native forests to make way for the crop. A group of locals took loggers hostage while others incinerated their trucks. Soon, townspeople had kicked out the police and local government, canceled elections, and locked down the whole area. A revolutionary experiment was under way.”
  • Tess Van Stekelenburg highlights the announcement from Isomorphic Labs about expanding the capabilities of AlphaFold, Google/DeepMind’s protein-folding model. The new updates allow for more precision and better predictions around “ligands (small molecules), proteins, nucleic acids (DNA and RNA), and those containing post-translational modifications (PTMs).”
  • In the China-watcher community, the article to read was Evan Osnos’s “China’s Age of Malaise” in The New Yorker, a plangent follow-up to Osnos’s best-selling 2014 book, Age of Ambition: Chasing Fortune, Truth, and Faith in the New China. “The space for pop culture, high culture, and spontaneous interaction has narrowed to a pinhole. Chinese social media, which once was a chaotic hive, has been tamed, as powerful voices are silenced and discussions closed. Pop concerts and other performances have been cancelled for reasons described only as ‘force majeure.’ Even standup comics are forced to submit videos of jokes for advance approval.”
  • Our scientist-in-residence Sam Arbesman enjoyed Jose M. Gilgado’s piece on “The beauty of finished software.” “Once you get used to the software, once the software works for you, you don’t need to learn anything new; the interface will exactly be the same, and all your files will stay relevant. No migrations, no new payments, no new changes.”
  • Finally, I have been enjoying Santi Ruiz’s newish newsletter, Statecraft, which explores policies that have actually delivered results. His best so far was on PEPFAR, in a piece entitled “How to Save Twenty Million Lives.” “The interagency fights were blood on the floor. Hatchet work. It was the president that stopped all that. When you go meet people in the government one-on-one, they are the most dedicated, committed people. Most people don't go into government to fight over budgets, over turf. They go in and are not paid well and work their tails off because they want to make a difference, but the system doesn't give them much chance to make a difference. They get ground down and ultimately start fighting for budgets, protection, and turf. You get about 20% of people who are so jaded by the whole thing. 80% still wanna do something. If you don't offer the 80% something, the 20% win.”

That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
