Securities

WTF Happened in AI in 2023?

Description

Hey, it's Danny Crichton. 2023 was an incredibly busy year, and nowhere was there more fervent attention than on artificial intelligence. OpenAI launched ChatGPT at the very end of 2022, and its implications found purchase this year among more than one hundred million users and the regulators who serve them. Those product developments don't even get into the crazy governance crisis at OpenAI a few weeks ago, which saw Sam Altman and then the board of directors toppled in a story that likely outshone the collapse of Silicon Valley Bank as the most important tech crisis of the year.

Billions of dollars of venture capital flowed into the AI space, with investors funding everything from data infrastructure and better model training to the applications that are already beginning to transform industries across the world. Governments have moved with alacrity to regulate this new technology, but progress is unabated and unstoppable.

The "Securities" podcast has aggressively covered these developments throughout 2023, with interviews with more than a dozen experts in all facets of this new technology: from the corporate executives building these products and the generals using these new features for American defense, to the critics who caustically analyze AI's supposed truthfulness and the philosophers debating the theory of mind and consciousness of these systems.

So as the final episode of the podcast this year, I wanted to connect all of these separate discourses around artificial intelligence together into one cohesive package. We clipped nine of the best segments from episodes across 2023 — special thanks to our producer Chris Gates for finding these treasures. A retrospective, an incitement to innovation, a warning — it's all here, so let's get started.

This episode was produced, recorded and edited by Chris Gates

Music by George Ko

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:                                      

Hey, it's Danny Crichton. 2023 was an incredibly busy year, and nowhere was there more fervent attention than in artificial intelligence. OpenAI launched ChatGPT at the very end of 2022 and its implications found purchase this year among more than 100 million users and the regulators who serve them. Those product developments don't even get into the crazy governance crisis at OpenAI a few weeks ago, which saw Sam Altman and then the board of directors toppled in a story that likely outshone the collapse of Silicon Valley Bank as the most important tech crisis of the year. Billions of dollars of venture capital flowed into the AI space with investors funding everything from data infrastructure and better model training to the applications that are already beginning to transform industries across the world. Governments have moved with alacrity to regulate this new technology, but progress is unabated and unstoppable.

The "Securities" podcast has aggressively covered these developments throughout 2023 with interviews with more than a dozen experts in all facets of this new technology. From the corporate executives building these products and the generals using these new features for American defense, to the critics who caustically analyze AI's supposed truthfulness and the philosophers debating the theory of mind and consciousness of these systems. So as the final episode of the podcast this year, I wanted to connect all these separate discourses around artificial intelligence together into one cohesive package. We clipped nine of the best segments from episodes across 2023. Special thanks to our producer, Chris Gates, for finding these treasures. A retrospective, an incitement to innovation, a warning: it's all here, so let's get started.
                                                     

We started the year off with a bang with the persistent and prescient AI critic Gary Marcus, who through his writings and his tweets has excoriated the weak foundations of truth in AI foundation models. It was a thoughtful conversation, but also a landmark one: despite having answered thousands of questions on AI over his career, Gary had never been asked an AI-generated question until we did just that. Here's me, Gary, and my Lux partner, Josh Wolfe.

Josh Wolfe:                                            

This is one of my big burning questions, and that question is, you go back 20 years. With the democratization of the ability to produce content, Danny could write a blog, he could launch a million articles on TechCrunch, I could do a blog. Everybody was publishing content and the thing that became abundant was the text that was being produced and the thing that was scarce was the ability to find that needle in the haystack, ergo, search. And whether it was going to be Yahoo or LookSmart or AltaVista or ultimately Google, that was the value. Today, the ability to produce content of all kinds has been democratized, and not just true content or opinionated content, but fake content. And it feels to me like veracity is the scarce thing, the ability to detect truth.

Gary Marcus:

That is 100% of what keeps me up at night right now.

Josh Wolfe:                                            

How do I know you're telling the truth?

Gary Marcus:                                          

That's right, you don't. You are guessing that I'm not a simulacrum. Maybe it doesn't matter whether I'm real or not. What I think you should be worried about if you care about democracy is that the world is about to be inundated in high volume bullshit, some of which will be made by individuals, but you should especially be worried about the wholesale thing. So we've always had retail bullshit. Somebody writes a blog post that's just not true. But now these tools allow you to write as many copies and variations on themes as you want as fast as you want.
                                                     

So let's say that you would like to persuade people that vaccines are bad and you shouldn't take them. Well, you can ask one of these things. You might want to evade the guardrails, and we can talk about that, in ChatGPT per se, but you use a large language model, maybe a different one like Galactica that you could get on the dark web now, and you can say, write me stories about the negative consequences of vaccines and include references, and it will go ahead and it will make a story about something that was published in the Journal of the American Medical Association that said that only 2% of people saw a benefit, which is not true. There's no article, there's no 2%. And then you hit the refresh button and now you've got another article and it says it was the Lancet and it was 4%. And you can write a little macro to do this and in a few minutes you can have 20 or 30 or 50 or 100 stories.
                                                     

And yes, it's true that if you're not Joe Rogan and you disseminate this, not as many people are going to listen. But if you can disseminate millions of copies, some of them will stick. It's just going to change the dynamics of veracity, and I think this is going to play into the hands of fascism. It's also going to make search engines less useful, because the chances that when you look for something you actually get what's real and not garbage are going to change, and not in a good way.

Josh Wolfe:                                            

I asked ChatGPT what is the most controversial question you would ask Gary Marcus on a podcast? And it said, Gary, in your work as a researcher and entrepreneur in the field of AI and machine learning, you've been known for being critical of deep learning and advocating for more symbolic and rule-based approaches. Given the recent advancements and success of deep learning in various applications, how do you reconcile your stance with the current trend in the industry?

Gary Marcus:                                          

It's not a bad question. I would say that it misrepresents me in a way that I'm frequently misrepresented, it's not unique to Chat. It's, by the way, the first time Chat's ever proposed a question to me, so there's some landmark in all of that. What I have actually proposed are hybrid systems that are a mix of symbolic and pure deep learning and it doesn't get that nuance. It thinks that I'm purely advocating for symbolic. Then again, Yann LeCun has made that mistake a few times even after I've corrected him, so it's not unique in the history of society that such a question would be made. How would I reconcile...

Danny Crichton:                                        

2023 was a year of transformations, and not just because AI transformers have become mainstream. Across industries, AI combined with long-horizon changes to business models ushered in a new world for many. Nowhere was that more obvious than in video and movies, where the Hollywood writers' and actors' strikes dominated the headlines for months. But the new future of Tinseltown is already being written, with AI at its core, working in tandem with human creatives. In this clip, my partner, Grace, reports from New York City's Metrograph Theater on the first AI film festival, hosted by Lux portfolio company Runway.
                                                     

Last week you were in New York City and you went to the first ever, I believe, AI film festival. What was that like? What did you see?

Grace Isford:                                        

It was amazing. So for context, it was actually hosted and put on by Lux portfolio company Runway. There were 10 films that were presented at the film festival, all with AI as a major component of the film. There were over 500 submissions, and really two core observations I had. One was how sophisticated the films were in terms of just the editing, the immersion; it really felt like we were at any other film festival. The second point is how interspersed AI was. So I couldn't tell you or distinguish, oh, that was AI or that wasn't, in many cases. It was just kind of this really cool, quite literally generative experience where the films were naturally leading into each other.
                                                     

So a good example: there was an amazing film, actually the one that won at the New York film festival, which was a woman dancing. And it started as a film version of a woman dancing, and then all of a sudden she almost transmuted into different patterns, designs, and was in different locations. And it was a really cool example where it almost felt like we were in this virtual world that still was tied to reality. Another big takeaway I had was talking to Chris.

Danny Crichton:                                        

And this is the CEO of Runway?

Grace Isford:                                          

Yes, and co-founder. He wants every major motion picture to have AI. It's not, oh, that's an AI film or this is an AI film festival. He just wants it to become a film festival where every major motion picture you see presenting at the Academy Awards has AI as part of it.

Danny Crichton:                                        

So we had this AI film festival at Metrograph, sort of a famous, iconic theater down in south Manhattan. Well, let me ask you something. Obviously special effects have been around for decades: original Star Wars, you're going down the trench in the Death Star, explosions around it, et cetera. What is the distinction in your mind between a special effect, something that we've seen in Marvel films, et cetera, and this new wave of generative AI tools?

Grace Isford:                                          

Yeah, I really think one of the coolest parts about it was the generative angle of it, meaning I could prompt and write down, "group of people walking in prehistoric times in this way in claymation form." And so it's the precision and also creativity you can create just from words. In many ways it's actually some of the same things you've already seen, but taking much less time and produced much more efficiently. Because often a lot of the old animation films you've seen from Disney or Pixar have taken hours and months to produce. In Runway, you can probably do a lot of those things in a fraction of the time at a fraction of the cost. And so I think it's even less about what's actually different when you see the film and more about what's going on behind the scenes and the technology powering that.

Danny Crichton:                                        

Well, I'm thinking about the expense of claymation and some of the like Wallace and Gromit and some of these old films, and they were always really short because claymation in and of itself is an extraordinarily labor-intensive art form. You're constantly moving all these pieces, so the movies are only 80, 90 minutes each. And now from what I've been able to see and experiment with, you can literally just type in like "make this claymation", it happens, it actually looks amazing. And so you have this amazing effect, and then this sort of adds into this whole prompt engineering category of exactly how you write these kind of textual prompts. Was there any discussion about that at the film festival?

Grace Isford:                                        

Well, one of the ironic aspects was a few of the films incorporated prompt engineering into the film. So for example, it was like film AI or artist AI where you could actually see the user interfacing with the technology and generating it live. I thought that was indicative of where we're at on the technology curve today. The fact that we're even having to point out to the end user, okay, this is how AI is kind of being incorporated, is indicative we're still very early. One more thing I'll add is just the accessibility factor. So if it took you hours, months, hundreds of thousands of dollars to produce that claymation, now you could be a user at home going on Runway online for example and making it yourself. And so that accessibility dynamic I think folks aren't probably talking enough about and it's pretty exciting.

Danny Crichton:                                        

Well, I think that's the beauty of a lot of this generative AI is obviously it's opening a palette. So in special effects, you're sort of locked to the software you have, LightWave, Maya, a bunch of major tools in the 3D animation space. And you can do an immense amount, but sometimes you really have to fight the tool. Same with Photoshop, if you've ever edited images. You have to create filters, you combine filters together to create unique styles. And what I think is so interesting is if you can see in your head and you can describe what you're envisioning, you don't necessarily have to learn this tool from the ground up where you're adding, okay, there's 25 filters, they have to connect in this certain way and if I do all this perfectly, I'm going to get the effect I'm looking for. Now you can just describe it.
                                                     

I want to see this, I want to see this kind of style, I want an Afrofuturistic style with a claymation with some other piece to it. And you've got... I was going to say Black Widow, but Black Panther. Too many Marvel films. Black Panther mixed with Wallace and Gromit all together, and no one's ever seen that before, and you can do that in minutes. And that to me is what makes this so different from the special effects stuff that we've seen going on before.
                                                     

OpenAI was clearly one of the most dominant stories of 2023, and no product launch was as hotly anticipated this year as the launch of GPT-4, which happened in March. Filled with brand new features and greater fidelity, the new model wowed with its increased precision and facility with languages, images, and other forms of media. Here's Grace and I on our first impressions.
                                                     

The speed at which both every single model is getting better and they can start to be combined combinatorially. Like they're actually able to take that voice, or as you said, and this was in the live stream, a picture of a website where you have a graphic at the top, a button here, and it's able to translate that straight to code. That's happened in just a couple of months, and I don't think we've ever seen that speed of technology progress in any field in the last couple years.

Grace Isford:                                          

And one more thing I'd add to that is just the feedback loop is compounding, right? The number of data points that OpenAI is getting every time you and I use ChatGPT to feed into that model, or Runway is getting every time someone is leveraging the technology, is compounding those models' strengths as they fine-tune. So that's important. I would say and push back, we're still not quite there on full reasoning. And if you actually look at the AP scores, English literature and composition and reasoning were the ones it tested poorly on, I think it got like a two.
                                                     

So I even was trying to draft something this morning, a blog post based on past blog posts I had written on the API economy and how that's reflected in the current Python SDK world and the Anthropic API and OpenAI API, and it was okay. Like I don't think it was anything that profound or analytical or unique, but it was very good at spitting out a task and doing what I said. And so that second order of depth of human complexity, we're moving towards it for sure and it's getting better, but I still don't think we're there. What I'm watching for, as GPT-4 continues to improve, is that second- or third-order thinking, particularly based on a given prompt or text or image input.

Danny Crichton:                                        

For every positive development in AI, there will be nefarious users who use these newly found powers for evil. One version we saw growing over the course of 2023 was chat phishing, the term coined by Josh to describe using AI bots to defraud an individual by pretending to be their loved ones or perhaps a famous individual. Coupling excellent text and audio transformers, chat phishing and the wider market of deepfakes are only just beginning. Here's Josh and I discussing how chat phishing came to be and what happens next.

Josh Wolfe:                                            

Okay, so let's start with the premise, which is all of these technologies as they progress, democratize, meaning something that was once very expensive and only in the hands of a very few suddenly becomes very cheap and widely accessible in the hands of nearly everybody. And so Hollywood effect studios, you had to be a Weta or a George Lucas or a James Cameron and you had the ability to commandeer a $50 or $100 million budget and you would have the best designers in technology and the cutting edge stuff and you'd take the guys and put them in the green room and you put them in the black suits with the white dots and you'd do the tracking and all this crazy technology. All of that is becoming obsolesced. You can on your laptop today do special effects that Avatar was doing 20 years ago. Was that the first Avatar?

Danny Crichton:                                        

14.

Josh Wolfe:                                            

And Terminator before that and the morphing that you saw in those early Michael Jackson videos and then in the Abyss and then in Terminator, all of that today is like child's play. And so the rotoscoping techniques that used to be extremely expensive are now virtually simplistic at the touch of a button from a browser in something like RunwayML. So all of that is democratizing the ability to produce content. If you extend that further and you think about all the different modalities of producing content: text, image, video, voice, pretty much anything that is an output today, it is all becoming democratized and there's many competing programs that are allowing you to do this.
                                                     

So let's look at a natural progression. We always like to spot here at Lux the directional arrows of progress, and I'm just going to give you some anecdotal ones. You can find other adjacent ones. You go back a few years, Google introduced, for those that were using Gmail, simple auto-replies. Things that were statistically the most probable responses for a good reply, given the context of the email, without fully reading all of the content. You write me a note, "Looking forward to seeing you on Tuesday," it gives me three choices: okay, great, looking forward to it. I pick the third one, looking forward to it. It's a little bit more human. And some percentage of your emails today can be very quickly dismissed with these auto-replies. Okay, now fast-forward to today. You are on the cusp of Microsoft releasing something called Copilot, which is a different thing, not just for code generation, but introducing ChatGPT into a variety of Microsoft products.
                                                     

What do the vast majority of enterprise people use Microsoft for? Not just obviously to conduct daily work, but marketers and salespeople. So it's easy to imagine that we will be flooded not by the Nigerian princes from whom we might wistfully wish we were once again receiving emails, but just troves of marketers who are now suddenly way more productive and efficient. Their ability to send out not just hundreds or thousands of emails using a CSV, meaning an Excel spreadsheet equivalent with lots of people's names and contact info that's uploaded into a CRM, a customer relationship management tool like a Salesforce or something in Microsoft, but now they can send it to tens of thousands of people. And people might reply or delete or they can follow up, and all of that can be basically actuated and activated automatically.
                                                     

Okay, so we're going to get a lot more spam, but we won't be able to tell it's spam because it might seem like it's a very customized, personalized message. It might be taking stuff from my Twitter feed and be like, hey, saw your thing, really liked your comment on this, I also blah, blah, blah, and all that could be fake. Okay, so over time you start with these auto-reply messages from Gmail. You go into basically enterprise blasts of what I coined as a neologism this week, what I call chat phishing. I looked online, there was no mention of chat phish, so it may be the first coinage of it. But the idea is that these things are basically going to be phishing for people and trying to reel them in.
                                                     

Okay, well the next thing, you see these early incarnations, whether it was the Steve Jobs podcast with Joe Rogan, which wasn't a real one. You could sort of tell, it was tinny, it was low quality. There was another one recently of a video of Leonardo DiCaprio that was giving a speech at the UN. There were five or six different voices that were put into him. You had Bill Gates, you had Elon Musk, you had Steve Jobs. It became more believable and it was quite impressive. Now if you segment that by demographics, if you're an old person that receives a call, and we already have indications that this is happening, those robocallers are not the classical pause, "hello? Hello?" and then, "if you're calling about your insurance..." People are going to be exploited by these systems.
                                                     

Okay, so where does that lead you? You start with these auto-generated emails. You go to widespread marketing spam and chat phishing. You get voice calls where you might actually even be conversing with the other person on the other end thinking it's an actual human, and it's believable enough and conversational enough that it passes the equivalent of a relatively low-hanging-fruit Turing test. You might even be able to program some of these things for enterprises to have some parameters in which they can negotiate some simple things of, we'll take up to this price or below this price, and let it actually negotiate on your behalf, and it probably will, effectively. I see that happening within a few years, where bots are negotiating with people and then ultimately bots are negotiating with bots.
                                                     

The logical extension of that to me is two things: thinking about abundance and thinking about scarcity. If all of that stuff is abundant, go back 20 years with the democratization of text and blogs and ultimately Twitter and Facebook posts and all this stuff. The thing that became abundant was the information on the internet. What became scarce was the ability to find the thing, ergo, search. Now whether that was Google or LookSmart or AltaVista or Yahoo, I don't know. You know who the winner was in hindsight, obviously we know it was Google, but you knew that search was going to be the scarce and therefore valuable thing. Today with the abundance not just of information but misinformation and fake things, what becomes scarce is veracity, is truth.

Danny Crichton:                                        

While writers and analysts emphasize coverage of text, audio, and video as AI's critical domains, artificial intelligence is a general purpose technology with immense implications for all fields of human experience. And one of those experiences is arguably our most important, and that is our sense of smell. Today, we have the ability to encode vision and hearing into our computers, transmit them, and receive them somewhere else, but we completely lack that capability when it comes to scent. That's where Lux's own portfolio company, Osmo, is taking the lead. Here's Osmo's CEO, Alex Wiltschko, and I talking about how AI is transforming the science of smell.

Alex Wiltschko:                                        

The core problem, the thing that was in the way, and this is a century-old problem, the thing that was in the way of understanding why things smell the way that they do is what's called the structure-odor relation problem. If you can draw a molecule as a structure, you know, the ball and stick model where the atoms are dots and the bonds are lines, just like you draw in chemistry class on the whiteboard. If you can draw that structure, can you look at it and can you predict what it's going to smell like? And the answer, except for some really basic smells, is no. The answer is if you move one bond over, one pair of electrons somewhere else, you can go from roses to rotten eggs.
                                                     

And that starkness, that non-linearity, in medicinal chemistry is called an activity cliff. And that's what makes the problem hard: little tiny changes can take you from a molecule that is so popular that it actually forms memories of what cleanliness is, like the smell of dryer sheets, that kind of powdery smell. It's the smell of lily of the valley, which in some countries and in some cases is one molecule, lyral. But if you just move one bond over, it's totally odorless. Totally useless from a commercial perspective. So very small changes make huge differences to the smell of a molecule. That was the mystery.
                                                     

Now, the thing that had come to pass a little bit before, maybe two, three, four years before I started the group, was that machine learning in the field of chemistry started to really work. It started to be something that you could build a research program off of. And that was because of a lot of very hard and very deep thinking by some folks from Harvard, a former mentor of mine, Ryan Adams, Alán Aspuru-Guzik, and then also a lot of Googlers, Steve Kearns, Pat Riley, and then David Duvenaud, who spent some time at Google as well. And they made machine learning work for chemistry, and that was actually a really big thing.
                                                     

And the big thing, this is a very strange thing to have to be obsessed about, but machine learning didn't work on molecules, because machine learning really was only working on things shaped like rectangles. So machine learning was really good at things that were shaped like a grid, images; really good at things shaped like beads on a string, which is how we think of language; and really bad, or just didn't work, on things that had arbitrary shape, and molecules aren't grids. The general way that we talk about objects that are structured like molecules is graphs. Maybe a couple years before we started working on smell, graph neural networks started to work really well.
                                                     

So that was the breakthrough that we capitalized on, that we built on top of in the group, which is to say, let's use graph neural networks, which have already been shown to work in areas of drug discovery and in chemistry, let's try that for smell. We were able to train neural networks that accurately predict what things smell like. And we could do so at state-of-the-art levels, and actually a paper that is in review right now, it turns out we could do it at superhuman levels.
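To make the graph idea concrete in code: a molecule is a set of atoms (nodes) connected by bonds (edges), and a graph neural network works by repeatedly letting each atom exchange information with its bonded neighbors. The sketch below is a deliberately minimal, hypothetical illustration of that message-passing step in plain Python; the toy molecule, feature values, and function names are invented for illustration and are not Osmo's actual model or data.

    # Toy molecule as a graph: atoms are nodes, bonds are edges.
    # (Hypothetical example for illustration -- not Osmo's model or data.)
    node_features = {
        0: [1.0, 0.0],  # e.g., a carbon atom
        1: [1.0, 0.0],  # another carbon
        2: [0.0, 1.0],  # an oxygen
    }
    edges = [(0, 1), (1, 2)]  # bonds, treated as undirected

    def neighbors(node):
        """Atoms sharing a bond with `node`."""
        return [b for a, b in edges if a == node] + [a for a, b in edges if b == node]

    def message_passing_step(feats):
        """One round of the core GNN operation: each atom updates its
        feature vector by summing its own features with its neighbors'.
        Stacking several rounds lets information flow across the whole
        molecular graph, whatever its shape -- unlike grid- or
        sequence-shaped models."""
        updated = {}
        for node, vec in feats.items():
            agg = list(vec)
            for nb in neighbors(node):
                agg = [x + y for x, y in zip(agg, feats[nb])]
            updated[node] = agg
        return updated

    h = message_passing_step(message_passing_step(node_features))
    print(h)
    # A real model adds learned weights and nonlinearities at each round,
    # then pools the atom states into one vector that a classifier maps
    # to odor descriptors ("rose", "sulfurous", and so on).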

Danny Crichton:                                        

So covering more smells, more domains than our 350 or so nasal channels.

Alex Wiltschko:                                        

More reliably. So what we did actually, we predicted the smells of molecules that had never been smelled before. So completely new, some of them had never been made before. And we kept our predictions secret, the neural network predictions, and then we had a panel of people that we trained to rate these odors reliably smell the molecules and tell us what they thought they smelled like. Now, each individual person in the panel can be good or bad at that task, but the average, the collaboration of those people, is much better. So the question is: is our neural network worse than the worst person in the panel, or is it better on average? And it turns out our software is better than the average panelist most of the time. Which means if you're going to add a new person to the panel, you'd actually prefer to add our software program than to train up a new panelist. And we call that superhuman odor prediction performance.

Danny Crichton:                                        

The history of Silicon Valley is deeply entwined with the development of America's defense, from the transistor and semiconductors of the 1950s and 1960s all the way to the artificial intelligence of the present day. No controversy in that modern history garnered more headlines, though, than Project Maven, the ill-fated project to use Google's best engineering to empower the Pentagon's global defense mission. In this clip, Josh and I chat with retired Lieutenant General Jack Shanahan, the founding director of the Pentagon's Joint Artificial Intelligence Center, or JAIC, about Project Maven, what went wrong, and why the outcome could have been so very different.

Lieutenant General Jack Shanahan:

So when we started Project Maven, as I said, we got about four startup companies on contract pretty quickly, but then people were really surprised that Google ended up on contract with us. Like, whoa, who? People were a little dismissive of this little Maven thing they were hearing about, and then all of a sudden we got the biggest company in the United States in tech on contract with us. And the reason we wanted Google, the reason we needed Google, is because of the wicked problems that we were dealing with. It was this particular sensor that went on one of these unmanned aerial vehicles, commonly known as a drone. Basically this thing could park over an entire city and look at the entire city 24/7. No analyst on the face of the planet would be able to analyze that information. Just couldn't do it. Honestly, the numbers-

Josh Wolfe:                                            

Was this what people were referring to as the Gorgon sensor?

Lieutenant General Jack Shanahan:

Yes, MQ-9 Gorgon Stare. Very specific sensor, and this is an important distinction here, that sensor, when it was on a drone, there were no weapons on that drone. It was just an intelligence surveillance reconnaissance sensor. You didn't put weapons on that MQ-9. That was not understood by some people in Google, unfortunately, and I'll get to all kind of-

Josh Wolfe:                                            

And for people's benefit, the MQ-9 is an adjacent platform to what is commonly known as the Predator.

Lieutenant General Jack Shanahan:

Yeah, Predator and Reaper. So this is a Reaper with an MQ-9 Gorgon Stare, and it'd look at just an entire city block. And humans, we ran the numbers, we knew the numbers. A human could get through 5 to 6% of that scene. They just couldn't do it. This could do it in theory instantly, but it was an extraordinarily complex problem set, which is why we needed Google software engineers. We're talking down to the pixel level, trying to determine one vehicle in the middle of an entire city in this big scene. And Google was very, very helpful in that process. Now, in doing this, they made a decision internal to the company. They had been doing work with the Department of Defense, but nothing related to sort of drones. They wanted to keep it quiet. They understood that some people in the company probably wouldn't be excited about this project with the department, so they elected to not be very transparent.
                                                     

And that was their decision. We supported them. I said, we'll do whatever you want. We'll talk about it if you want us to talk about it, we won't if you won't. So we didn't talk much about it. And that led, unfortunately, to a lot of assumptions that I think were flawed about what this project was, what it was not, and it was just building to a protest internal to the company. Overall numbers, very small. I would say it was about 2000 people plus or minus a little bit. In the size of Google, it's actually very small, but it sort of had led to this internal turmoil to the point where at the end of the contract, they didn't leave us early, they decided not to extend the contract. We saw it coming. I say, I kind of use different words at times, but I say I was extremely disappointed, I was not shocked. We knew it was coming because they just couldn't deal with the internal turmoil. They had to stop it and sort of do a reset and decide what the company really wanted to do and have a conversation inside the company.

Danny Crichton:                                        

ChatGPT may only have been launched a little more than a year ago, but its unprecedented speed of distribution has raised the ire of regulators throughout the world. Earlier this year, within the span of just two weeks, the European Union made progress and would eventually sign an understanding on its long-delayed AI Act. The United Kingdom would host AI notables from around the world at the famed headquarters of the World War II codebreakers at Bletchley Park, eventually leading to the so-called Bletchley Park Statement. And in the United States, President Joe Biden would publish an exhaustive executive order targeting the regulation of AI. The speed and comprehensiveness of these actions are worrying innovators, and that's what I talked about in this clip with Techmeme's Brian McCullough and the editor of the Supervised newsletter, Matthew Lynley.
                                                     

Because one of the most notable stories that I saw over the last couple of weeks was this anecdote in Time Magazine about Joe Biden, who was at Camp David, watched Mission: Impossible - Dead Reckoning Part One, which is about an AI takeover from a sort of AGI agent that sort of takes over the planet, and was like, oh my God, that's so dangerous, we've got to regulate this. And that's actually what sped up the executive order. And so part of me is questioning: is the challenge with this black box that we're sort of filling in that black box with our own fears and our own science fiction of what artificial intelligence could be capable of? Because at the same time we're trying to regulate and block everyone from using it, I can't get it to order food for me.
                                                     

And so there seems to be a huge gap between the perceived capability and what it's actually capable of doing. And to me, it seems unbelievable that the US, the UK, the European Union, and private industry are all coming together in like two weeks on a topic, when on other issues, where we could have absolutely used more help, they've been absent for decades.

Brian McCullough:

I mean, my history hat would say that, again, after 30 years of feeling like they were behind the eight ball in terms of technological innovation, regulators want to at least present it like they are ahead of the technology this time. The sci-fi analogy that I would use in this debate, because obviously, given when the movie robot first came out and the term was coined, that's like almost 100 years ago. There's been 100 years in fiction of fear of making machines that are smarter than us or more capable than us. The science fiction analogy that I would use for the opposite side of this debate is that this is the compute and the computers that, in our best imaginations for the future, we always imagined would happen. I'm talking about what we imagined in the 1950s in that sort of Jetsons version of computing. But the real science fiction analogy would be the Star Trek computer.
                                                     

So on the one hand you have the fear that the robots and the computers are going to take over the world, enslave us or whatever. The other side of it is what Captain Picard does on the Enterprise: computing that is just, make it so, right? It's I want this done. I don't have to click an icon, I don't have to enter commands, I don't have to even deal with a file menu or anything. So in a way, you've got the debate that this is technology that could blow up the world. On the other side, this is the computing sort of utopia that we have always wanted to have. Humane, was it this week or last week, came out with that AI pin. And so we've gone through everything. We've had the pads that became the iPad, ever since we had flip phones we had the communicator, and now they have the combadge.
                                                     

The combadge requires the Star Trek computer to make that a reality. And so that is the computing utopia of, I don't need to know anything underlying. I don't need to know various apps, I don't need to juggle any balls in the air, I just need something done, I ask the computer to do it, and it gets done. And so in a lot of ways, I feel like this debate is what everyone's deepest primal fears have always been about computers versus the sort of promise of computing that we always were sold but never actually delivered. We've been dealing with file menus for 40 years, going back to the command line. This is the sort of computing that... Because no one's talking yet about AGI, which would be computers that are smarter than us. What we're dealing with right now, for probably the foreseeable future, is computers that aren't smarter than us in the sense that they make decisions for us; they just obviate all of the messiness of us making the decisions.

Matthew Lynley:                                        

Yeah, well first off, Midjourney is a command line, so let's not forget about that. To get into the infrastructure side, because I'm an infrastructure geek, one of the things in analytics is there's this idea that if I can query my data with natural language, you hear that described as the Holy Grail, right? Like can I communicate with the data that I have, whether I'm an enterprise company that's trying to figure out, oh my God, am I going to lose this customer, or I'm just a normal user? You consistently hear that described as the Holy Grail.

Danny Crichton:                                        

Techno-optimism and the effective accelerationist movement were both major topics in the tech industry this year, with bold pamphlets circulated from the likes of Marc Andreessen and Vitalik Buterin, the inventor of Ethereum. Yet it seems clear, and even highlighted in the context of this summer's Oppenheimer film by Chris Nolan, that we can no longer pretend that the technologies we build as engineers have no consequences and that their dual-use natures should just be ignored. Josh and I talk in this clip about the need for what we dub a techno-pragmatism that leaves technological progress unchained, but which also confronts the negative effects of these technologies head-on.

Josh Wolfe:                                            

I think the second whole page personally, or starting on "let's get real," where you've got this bifurcation of the unrealistic euphoria of techno-utopianism, which is pedal to the metal, full speed ahead, don't stop, anybody that is in our way is an obstacle, run them over, shame them. And then you've got people that are like, whoa, whoa, whoa, precautionary principle, let's put on the brakes, let's slow it down. Don't you realize that we're headed towards disaster? And we're taking an approach of, no, neither is practical. Both are absurd. The future is a destination.
                                                     

You take these two polar opposite views: you've got apocalypse on one end, that maybe people are going in reverse to. And you could argue that there are geopolitical forces and certain sensibilities and values that want to take us back to the past. And then you've got this pro-social, pro-techno-utopia. If you are rushing towards the darkness of the former, what ends up coming into focus in our eyes is futility and nihilism and resignation, which basically leads to the strongmen and the populists and the Trumps, and they're focused on destroying institutions, and it ends up invoking this bleak imagery of bombed-out, rubble-razed, sintered cinder blocks that we unfortunately see in the news. The crumbled infrastructure, the erosion, the erased vitality of all color, all life, just this essential blah, this grayness, to which we in our letter say, no, thank you. No, thank you. We don't want that.
                                                     

And then lying at the other extreme, you've got the Pollyannas who are promoting the promise of this techno-optimist utopia, and that too, in its euphoric exuberance, is just utterly unrealistic, and it always has been. And you have the cliches there that you could put into Midjourney and generate, which are these sci-fi, sleek chrome metal, reflective bubble-glassed architectures, the biodomes that Bezos is actually building, the air trams, the flying cars, these manmade marvels all springing forth amidst these fountains of flora and greenery and technological progress.
                                                     

But the reality is that that kind of free proliferation of technology is no panacea. It ignores all of the human consequences. And true progress comes from this restraining, inhibitory focus and controls, not of regulation, but of what we call criticism and error correction. It's the equivalent of having maybe a seatbelt, maybe the side view mirrors, maybe a rearview mirror, but just proceeding with intention and with pragmatism. And the call for pragmatism is totally lost with the Pollyanna calls for just full speed ahead.

Danny Crichton:                                        

Well, I think obviously we're recording this post the OpenAI saga, right? So I mean this is maybe the greatest example of this tension between promise and peril. You look at OpenAI, the concern was from the stability and the safety folks, which is we're getting closer and closer to artificial general intelligence. At some point it could take over the whole planet. We need to stop this before we get that far. It's an existential risk to the survival of the species. Versus AGI could actually help so many people. It could help us in healthcare, it could help us with education. We were just talking right before the show about how do we give more one-on-one tutoring? Well, imagine AGI, everyone has the best professor in the world teaching them every single day for 8 billion people. What would that change in society?
                                                     

And so we see these two extremes literally at the board level, but just in general around AI because it's focused on existential risk. So I think one of the questions I would have to you is it's great to talk about housing or apartments or a lot of this infrastructure where, look, the highs and the lows, there's not a huge variation. But when you get into these sort of existential questions, pandemics is usually one that people focus on with biology, they focus on AI. There's this concern that we're potentially building technologies either that could potentially save everyone or could kill everyone and they're just completely at the asymptotes on both sides.

Josh Wolfe:                                            

The fear of the Pandora's box. The fear of the Pandora's box is what leads people to rationally say that the precautionary principle is warranted. The fear that you are unleashing the genie from the bottle and it can't be put back in. You heard Elon just yesterday when he was with our friend Andrew Sorkin on the DealBook stage. Where are we in this? Are we facing existential risk? Is the genie out of the bottle, Andrew asked, and Elon said its head is definitely peeking out.

Danny Crichton:                                        

Finally, this year AI triggered many philosophical and empirical questions about the meaning of consciousness and intelligence. Does AI know anything? Does it think? Does it have a mind? These are deeply engrossing questions, and it was a delight to wade through them on the show with Erik Hoel, the writer of the popular Substack newsletter The Intrinsic Perspective. In this clip, Josh and Erik ask what the purpose of dreams is and whether human minds and AI minds dream for the same reason.

Josh Wolfe:                                            

A thought I had, which I wanted to ask you is about these two winters. There was a winter of AI. And there's of course, in the presumption of the analogy of winter, that there's some cyclicality, some seasons, that there could be a spring that follows. A winter of AI, a winter of consciousness, I've always had this instinct, this suspicion that some of the things we will learn or posit about consciousness and how our brains work will come from computer science and maybe vice versa. A lot of the analogies that we use for neural networks came from the actual biological structure of brains that we then applied in both software and algorithms and circuits. And then conversely, I was totally caught, and I haven't found another example of this yet, by a theory that you posited, which was this overfitting brain hypothesis. Can you explain that and can you also say if there are other analogies you're finding in the current state of AI research that are informing the analog and biological brains that we have?

Erik Hoel:                                            

Sure, and I think it's a great observation that we do. It is sort of a more specific form of the notion of taking technologies and trying to understand ourselves through them. That's not always a bad thing, right? Because sometimes they can open up new metaphors. I mean, one thing is that when I got my PhD, as I said, I worked on integrated information theory. But the originator, Giulio Tononi, is also one of the most famous sleep neuroscientists. So he's very interested in why we sleep. There's this sort of big unanswered question of why do animals spend eight hours a day, I mean, not all animals obviously, but why do you spend such a chunk of time sitting around defenseless when you could be out foraging or something? So that's a real serious scientific question. And I became very interested in this question of, well, why do we dream?
                                                     

And as I was exploring the dream literature, I was extremely dissatisfied with what was in the dreaming literature. You had hypotheses like maybe dreaming is for emotional regulation. But then if you look at the majority of dreams, they're sort of emotionally neutral. You have the theory that maybe dreaming is for memory replay. Okay, so you're replaying the memories and there's all sorts of exciting papers that get put into Nature and Science about memory replay and sleep, except that you don't replay any of your memories during sleep and everyone knows that. It just does not at all fit with the phenomenology of dreaming, by which I mean if you were to describe the conscious experience of dreaming, what is that like? I wanted to see if I could put together a proposal that would shift the focus of the field away from these hypotheses that seem to take dreams as these epiphenomena.
                                                     

So like basically with memory replay, people would say, well, you are replaying memories, but that neural activity is not the neural activity that's behind your dreaming. Your dreaming is sort of this downstream epiphenomenal effect of it or something. So they come up with stuff like this. So I thought, well, what if you just took dreaming very seriously and you said that the biological purpose of dreams is the dreams themselves? It's to experience this dreaming in the same way that you go around the world and you experience things when you're awake. And if you look at dreaming, it has all sorts of properties in its conscious phenomenology. So dreaming is sparse. It's much sparser. You don't see your full visual scene in a dream most of the time. You have sort of this abstract, sketchy sort of conscious experience that you could describe as lossy. Dreaming is hallucinatory, right? So extremely strange, category-breaking things happen in dreams.
                                                     

And all those things struck me as very similar to the manipulations that machine learning theorists do to their data to get their systems to generalize. So this is this very common problem in statistical modeling, which is called overfitting. And it's basically like, if you have a bunch of points, you know, along a line, you can always draw some line that just goes randomly and wildly and matches every single point, and so you can sort of overfit your model, and there's always this drive to do that when learning. And the way to avoid that is generally to sort of corrupt the data that you're feeding it and make it weird. And so it just seemed very obvious to me that that is very possibly what dreaming is doing. It's just this corruption of data, and the corruption is the point. The corruption is healthy.

Danny Crichton:                                        

Which is so counterintuitive because you would've thought you want to train a model precisely, you have to train it more and more and narrower and narrower. But the counterintuitive conclusion is, no, you want to introduce noise into the system so that when the machine or the algorithm or the program or the prompt is encountering reality, it's more adaptive to actually find the thing as opposed to being very narrowly trained and making errors of omission.
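To put the overfitting point in concrete terms, here is a minimal sketch in Python, using numpy and a deliberately over-flexible polynomial fit as a stand-in model; this is an illustrative setup, not Erik's actual experiment. Training on noise-corrupted, jittered copies of the same data typically generalizes better than training on the pristine points alone, which is the machine-learning analogue of the "corruption is the point" idea.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten training points from a simple underlying curve, plus measurement noise.
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

    # Held-out points from the same curve, used to measure generalization.
    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)

    def test_error(x, y, degree=9):
        """Fit an over-flexible degree-9 polynomial and return its
        mean squared error on the held-out test points."""
        coeffs = np.polyfit(x, y, degree)
        pred = np.polyval(coeffs, x_test)
        return float(np.mean((pred - y_test) ** 2))

    # 1) Fit the clean data: the curve threads every training point,
    #    i.e., it overfits the noise.
    err_clean = test_error(x_train, y_train)

    # 2) "Dream": fit many jittered, noise-corrupted copies of the same
    #    data, so the model can no longer memorize individual points.
    x_aug = np.concatenate([x_train + 0.03 * rng.standard_normal(10) for _ in range(20)])
    y_aug = np.concatenate([y_train + 0.2 * rng.standard_normal(10) for _ in range(20)])
    err_corrupted = test_error(x_aug, y_aug)

    # The corrupted-data fit usually generalizes better (lower test error).
    print(f"test error, clean data only:        {err_clean:.3f}")
    print(f"test error, noise-corrupted copies: {err_corrupted:.3f}")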
                                                     

2023 was a whirlwind, and 2024 looks set to be even more revolutionary. As I wrote last week in Securities, it's increasingly looking like 2024 might be one of those watershed years that go down in the history books: a 1989, a 1968, or a 1945. Politically, countries around the world, including the United States, are going to hold major elections. Technologically, we're already primed for several important developments, from the launch of the Apple Vision Pro to several new AI products that we've already gotten early looks at here at Lux. And then scientifically, there's pathbreaking work in biology in areas like xenotransplantation that looks set to transform the life of every person on the planet. As they say, if you thought 2023 was bewildering and bedazzling with its frenetic energy, you ain't seen nothing yet.
