Human Martians, autonomous warfare and deepfaking elections
This has been an incredibly busy week over here in New York, so this column is going to be a bit truncated and late as I dash it off on deadline.
One of the most audacious visions for the future of the human race is space travel: releasing ourselves from the shackles of this Blue Orb and flinging ourselves across the solar system and onward to the Milky Way. Elon Musk has made that vision one of his ever-growing personal missions, and researchers have conducted prolific work on what it would take to live in space, settle the moon and Mars, and undertake interstellar space travel (as an example this week, my friend Anna-Sofia Lesiv at Contrary Capital put together what she dubbed The Mars Colonization Tech Stack).
Last year on the podcast, Chris Mason described one of his key solutions to human settlement in space: genetically reengineering Homo sapiens to be native to planets outside Earth (just one of many ideas he presents in his book, “The Next 500 Years: Engineering Life to Reach New Worlds”).
All of this optimistic speculation can make one delirious with possibility, and so it’s helpful every once in a while to take a step back from the speculative frontiers and instead examine our known reality.
That’s why we just published a brand-new, two-part podcast episode this week on the future of human settlement on Mars. I interviewed Zach Weinersmith, the popular artist behind the comic Saturday Morning Breakfast Cereal, who co-wrote A City on Mars alongside his wife, Kelly Weinersmith.
Download the “Securities” podcast episodes:
- 🔊 Astronauts all lie, but the biggest lie is that we will colonize Mars (Zach Weinersmith, Part 1 of 2)
- 🔊 Why a Mars settlement could never be a libertarian paradise (Zach Weinersmith, Part 2 of 2)
The two aren’t cynical, and in fact are quite the optimistic duo. Their goal with the book was to blueprint the next steps required for space travel, both from a scientific lens (Kelly has a PhD in ecology and researches how parasites manipulate host behavior) as well as a policy lens (what does space law look like when you exit the nation-state paradigm of terra firma?).
As they accumulated research and consulted with more and more experts, though, their hopes for space settlement dramatically dimmed. The synopsis: it’s all but impossible to realistically imagine a Martian settlement.
That doesn’t mean scientists aren’t figuring out some pieces of the puzzle. Ecologists and environmental scientists — aggressively searching for more efficient means of recycling water and other materials on planet Earth in the face of climate disruption — are discovering ever more sustainable cycles for reusing resources. That’s know-how that could be just as useful in space as it is on Earth.
Yet the gap between what we know and what we would need to know to settle Mars remains practically as vast as ever. For example, Zach and Kelly emphasize that our knowledge of how the human mind degrades in space is extraordinarily limited. One key reason? Although we have sent dozens of astronauts off Earth, they all lie during their medical interviews in order to return to space as quickly as possible. Zach emphasizes that most memoirs from both American and Russian astronauts eventually reveal this fact, and so we in fact have little accurate data to analyze on the psychological impacts of space.
That’s just the science and health aspects. The policy and ethical implications are even more challenging. In an extensive passage, the couple write about sex in space, and ultimately their concerns about pregnancy. What would it mean to be a second-generation moon settler? Given how the human body develops, it’s entirely possible that being born and maturing in outer space would preclude a child’s ever returning to Earth. Is this a form of human experimentation on children, and do we as a society approve of that?
In part two of the episode, I talk with Zach less about scientific principles and more about the existentialism of settlement: does having the goal of Mars act as an accelerant of innovation and progress? Zach, who described himself as a “professional dream killer,” said:
I don't think it's bad for people to have goals, but I do think if at some point we're getting serious, and as you mentioned, there really is a revolution in the ability to put stuff in space, we have to move past the fantasy. If all the engineers want to believe whatever they want and it gets them to produce internet, like I'm talking to you on Starlink right now, that's great. But at some point I'd like the lawyers to show up.
He also noted that becoming a multi-planetary species may not reduce existential risk, but could actually increase it by making war between planets possible, noting that “there's still a risk of people fighting over [the Moon or Mars] because it has salience to people. So, I would rather reduce that salience, let the lawyers remove all the fun.”
Take a listen to the show — it was great. That leads me into three other events I’ve been hosting this week.
I have been heads down designing a new riskgaming scenario on the future of AI in national security, specifically focused on the challenging tradeoffs that the Pentagon must make around autonomous warfare. Unlike the speculation that flows rampantly out of the Mars settlement community, AI in defense is real, it’s already here, and the Pentagon is deeply worried about its position, as the Wall Street Journal covered this week.
What are some of those tradeoffs? For one, the balance between safety and speed of development. Is it better from a security perspective to have less secure engineering around AI with the goal of accelerating its development, or is a safer approach ultimately likely to lead to a more durable peace? Another tradeoff is between open and proprietary work. Open-source engineering around AI is great for consumers and developers, but it also opens the door for any actor globally to take advantage of cutting-edge machine learning and use it for evil.
But the tradeoffs keep coming. How much should the Pentagon give up on the “Defense Fordism” of the past (egregiously expensive aircraft carriers and fighter jets) and fund the emerging tech of the future? How should the Defense Department handle the dual-use nature of commercially developed AI as those technologies flow into the defense world?
It’s a lot to process, which is why the riskgaming scenario I’ve been developing (currently dubbed “No Man’s Land”) is so vital. I had the chance this week to run the first three sets of trials of the game for interested parties, with a more public release hopefully coming up here in the months ahead.
The biggest lesson learned so far might be that few people seem to understand how tech companies strategically release open-source software either to bolster their own position in the market (think Google releasing Kubernetes as a way to grow its developer ecosystem) or to undercut competitors who are ahead (think Meta releasing LLaMA to undermine the market advantages of the Microsoft / OpenAI tie-up). That might be a game design problem, or it could just be a gap in understanding how tech companies strategically maneuver in competitive markets.
Finally, alongside Josh Wolfe and Grace Isford from Lux, Sam Englebardt and Cole Mora at Galaxy Interactive, and Miles Taylor, Evan Burfield and Xander Schultz at TheFuture.us, I co-hosted a day-long policy lab on election security in the age of AI. Deepfakes are drawing inordinate attention from the press and critics, who are properly alarmist about the potential future of these technologies, but who also at times miscalibrate what today’s cutting-edge software can actually accomplish.
With 54 people in the room, it was a frenetic conversation that included a keynote from a prominent government official, a riskgaming scenario I designed, several “provocations” from tech CEOs building products in this space, plus a deeper dive on options around mitigation.
The biggest realization for some in the room was comprehending the sheer number of people who are involved in securing American elections. We offered individual roles during the riskgaming scenario to each of the 54 participants — and we still had roles left over, because so many government agencies, private companies and civil society leaders are part of the solution. Communications, as one might expect, got messy. Information was dropped, forgotten or lost. Some people went to the bathroom and missed their moment to offer critical information at just the right time — exactly duplicating the happenstance chaos of a real crisis.
AI manipulation of elections is already here, while the digital tools to fend it off are in their infancy. The policy processes we have in place are not up to the task of responding to such a complex phenomenon. One consensus that emerged from the room was the need to build up trust around elections at the most local level, focused on a bottom-up strategy rather than top-down communications. But that’s far easier said than done.
I dubbed this scenario “DeepFaked and DeepSixed?” and the answer so far appears to be, “Yes.” Maybe we should double down on Martian settlement no matter how impossible it is.
- Thanks to a referral from Shaq Vayda, I was intrigued by this startup Shareholder Vote Exchange profiled in The Wall Street Journal that allows shareholders to sell their proxy votes to the highest bidder. Corporate governance has become something of a sham as exchange-traded funds (ETFs) increasingly own large percentages of all large public American companies. Are there incentive alignments that work with selling proxy votes, or is it just adding another layer of chaos to an already tough public market landscape?
- Our scientist-in-residence Sam Arbesman highlighted Jonathan Zufi’s fan site memorializing the 40th anniversary of Apple’s Macintosh computer, which launched in January of 1984. Even decades later, it remains a design icon and a technical landmark, an extraordinary disjuncture in the progress of computing.
- This week, my friend Dan Wang posted his annual letter for 2023, which has become an institution for many tech, finance and other readers all around the world. “Can America’s headstart in AI make up for its manufacturing deficiencies? Perhaps. I worry however that one of America’s superpowers is to spin up yarns to reduce the urgency for action. The United States can relax either because China will be pulled out to sea by the receding tide of demographic decline, or Silicon Valley will produce superintelligence — and it will be on America’s side.”
- Should you read all of your books in print? (Yes.) Does it make a difference? Researchers exploring the question are finding that reading in print does indeed outperform digital reading in developing cognitive function. There are a lot of caveats of course, but “The researchers suggest that more ‘shallow’ cognitive processes might be invoked by digital text characteristics such as ‘short length and fast-paced stimuli,’ in contrast to print reading materials.”
- Finally, if you haven’t had time to watch Anatomy of a Fall, you are missing out on an extraordinary film. The Palme d’Or winner at Cannes last year and now a nominee for best picture at the Oscars, the film chronicles in limpid human detail the complexities of family life withering under the excruciating microscope of a courtroom examination. What truths are admissible as evidence, versus what truths are, well, true? How do we express our meaning of existence to others, when we don’t even really understand it ourselves? An absolutely superb work of art.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.