Securities

May the AI be ever in your favor

Description

While much of the venture world has hit a reset in 2023, you’d never know that in artificial intelligence, where fire marshals are shutting down crammed engineering meetups and startups are once again raising at eye-watering valuations. Why the excitement? Because for founders, technologists and VCs, it feels like the everlasting promise of AI dating back to the 1950s and 1960s is finally on the cusp of being realized with the training and deployment of large language models like GPT-3.

To hear about what’s happening on the frontlines of this frenetic field, Lux Capital partner Grace Isford joins “Securities” host Danny Crichton to talk about what she’s seeing in 2023 across the AI tech landscape.

We talk about her impressions of the recent AI Film Festival in New York City hosted by Lux's portfolio company Runway, how developers are being empowered with new technologies in Python and TypeScript and why that matters, and finally, how the big tech giants like Microsoft, Google and Amazon are carefully playing their cards in the ferocious competition to lead the next generation of AI cloud infrastructure.

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Should we give it a shot or just go with the normal one? We're updating the tagline.

Grace Isford:
I want to hear the new one. Yeah.

Danny Crichton:
So the regular one is, "Hello. This is Securities, a podcast newsletter devoted to science, technology, finance, and the human condition." Our new one is, "Securities is a global community of engineers, scientists, and capitalists who are empowered by Lux Capital to construct and defend the future of humanity's prosperity."

Grace Isford:
Wow.

Danny Crichton:
It's a lot more intense. It's a mouthful. Maybe I should come up with something less mouthy.

Hello. This is Securities, a global community of engineers, scientists, and capitalists who are empowered by Lux Capital to construct and defend the future of humanity's prosperity. I'm your host, Danny Crichton. Man, that was a mouthful. We're working on a new tagline, and that was a little thick. But more important than that and our AI alphabet soup generative messaging tagline, is our guest today, Grace Isford, my partner here at Lux Capital. Grace, welcome to the program.

Grace Isford:
Thanks, Danny. I like the tagline. I think we could leverage generative AI to spin a few different options up there.

Danny Crichton:
Unlike my writing, it actually would've been better; probably more boring, but better. So obviously, the theme today is generative AI. We just did a newsletter last week on the Garrulous Guerrilla, as I called it, of generative AI, and how it's potentially going to wipe out the creative class. But you've been focused on generative AI for the last couple of months as it's surged since OpenAI's launch of ChatGPT in November, with a ton of different companies underway. I just want to dive in. Last week you were in New York City, and you went to the first ever, I believe, AI film festival. What was that like? What did you see?

Grace Isford:
It was amazing. So for context, it was actually hosted and put on by Lux portfolio company Runway. There were 10 films presented at the film festival, all with AI as a major component of the film. There were over 500 submissions, and I had really two core observations. One was how sophisticated the films were in terms of just the editing; the immersion really felt like we were at any other film festival.

The second point is how interspersed AI was. So I couldn't tell you, or distinguish, oh, that was AI or that wasn't. In many cases it was just this really cool, quite literally generative experience, where the films naturally led into each other. A good example: there was an amazing film, actually the one that won at the festival, of a woman dancing. It started as a film version of a woman dancing, and then all of a sudden she almost transmuted into different patterns and designs, and it was in different locations. It was a really cool example where it almost felt like we were in a virtual world that was still tied to reality. Another big takeaway I had was talking to Chris.

Danny Crichton:
And this is the CEO of Runway.

Grace Isford:
Yes. And he wants every major motion picture to have AI. It's not, oh, that's an AI film, or this is an AI film festival. He just wants it to become a film festival where every major motion picture you see presenting at the Academy Awards has AI as part of it.

Danny Crichton:
Yeah. So this AI film festival was at Metrograph, sort of a famous, iconic theater down in Lower Manhattan. Well, let me ask you something. Obviously special effects have been around for decades: the original Star Wars, you're going down the trench and the Death Star explosions are added, et cetera. What is the distinction in your mind between special effects, something that we've seen in Marvel films and so on, and this new wave of generative AI tools?

Grace Isford:
Yeah, I'd really say one of the coolest parts about it was the generative angle of it, meaning I could prompt and write down "a group of people walking in prehistoric times, in claymation form." And so it's the precision and also the creativity. You can create with just words. In many ways it's actually some of the same things you've already seen, but taking much less time and produced much more efficiently. Because often a lot of the old animation films you've seen from Disney or Pixar have taken hours and months to produce. And with Runway, you can probably do a lot of those things in a fraction of the time with a fraction of the cost. And so I think it's even less about what's actually different when you see the film and more about what's going on behind the scenes and the technology powering that.

Danny Crichton:
Well, I'm thinking about the expense of claymation and some of the Wallace and Gromit films, some of these old films. And they were always really short because claymation in and of itself is an extraordinarily labor-intensive art form. You're constantly moving all these pieces. So the movies are only 80, 90 minutes each. And now, from what I've been able to see and experiment with, you can literally just type in "make this claymation," and it happens. It actually looks amazing and exactly like claymation. And so you have this amazing effect. And then this sort of feeds into this whole prompt engineering category of exactly how you write these kinds of textual prompts. Was there any discussion about that at the film festival?

Grace Isford:
Well, one of the ironic aspects was a few of the films incorporated prompt engineering into the film.

Danny Crichton:
Oh, wow.

Grace Isford:
So for example, it was like a film AI or artist AI, where you could actually see the user interfacing with the technology and generating it live. I thought that was indicative of where we're at on the technology curve today. The fact that we're even having to point out to the end user, this is how AI is being incorporated, is indicative that we're still very early. One more thing I'll add is just the accessibility factor. So if it took you hours, months, hundreds of thousands of dollars to produce that claymation, now you could be a user at home, going on Runway online, for example, and making it yourself. And so that accessibility dynamic, I think, is something folks probably aren't talking enough about, and it's pretty exciting.

Danny Crichton:
Well, I think that's the beauty of a lot of generative AI: obviously it's opening up the palette. So in special effects you're sort of locked to the software you have, LightWave, Maya, a bunch of major tools in the 3D animation space, and you can do an immense amount, but sometimes you really have to fight the tool. Same with Photoshop, if you've ever edited images. You have to create filters, you combine filters together to create unique styles.

And what I think is so interesting is, if you can see it in your head and you can describe what you're envisioning, you don't have to learn this tool from the ground up, where you're adding, okay, there are 25 filters, they have to connect in a certain way, and if I do all this perfectly, I'm going to get the effect I'm looking for. Now you can just describe it. "I want to see this kind of style, an Afrofuturistic style, with claymation, with some other piece to it." And you've got, I was going to say Black Widow, but Black Panther, too many Marvel films, Black Panther mixed with Wallace and Gromit, all together, and no one's ever seen that before. And you can do that in minutes. And that, to me, is what makes this so different from the special effects stuff that we've seen going on before.

Grace Isford:
Right. And you mentioned Marvel. Think of a large language model that is trained on Marvel movies and so, hey, I want to go online and create a movie in the style of Marvel. You could do that theoretically. You could see a world where that could happen. And that is pretty exciting just to think about. I could create my own Marvel film, we could create one tomorrow maybe.

Danny Crichton:
Yes, the Lux Marvel film, which hopefully would do a little bit better than Iron Man. And then Quantumania, I think we have way more stuff in the laboratory here than the movie did. So that means we can go from, I guess what, we're on Phase Five or Six of Marvel, I've sort of lost track, but up to Phase Infinity. Which does get at this point I was writing about this week in the Securities newsletter around the Garrulous Guerrilla. And the joke here was that it's garrulous because it's these prompt engineers; you're typing words into the generative AI prompts to get the work done. It's guerrilla because it's almost guerrilla warfare on the creative class, as you just described.

Look, there are thousands of special effects studios. They're actually fairly heavily concentrated overseas. Most of them are not in the United States, although there are some, but a lot of them are in Asia: South Korea, India has a lot, Vietnam, these hubs where people have been able to do this work on the kind of deadlines and schedules that big movie companies like Marvel need. With these new generative AI tools, you really have this question of, what happens to a lot of these jobs when that effect that used to take 50 people a month to do now takes one person 12 minutes?

Grace Isford:
I would say I'm a techno optimist. So maybe we could argue and debate a little bit about this. I think I generally see a lot of the opportunities for AI to make everyone 10x more productive. So not just the film editor, but actually someone who's doing outsourced automation or labeling, in a foreign country or elsewhere. And so I think across the entire stack you're going to see the opportunities for AI to make you 10x more productive and really be interspersed into any product across any aspect of the stack. And so the hope would be rather than replacing jobs per se, it could just make you 10x or 20x more efficient.

Danny Crichton:
And obviously there have been a couple of reports on, we were talking about the Garrulous Guerrilla, groups of professions that might be affected in the coming years, one of which is customer support and customer success. Where instead of actually calling someone to figure out why my laptop doesn't turn on, ChatGPT or an equivalent kind of generative AI large language model could actually come up and say, "Hey, okay, here are all the solutions that have been tried. There's a whole universe of stuff that we can do." It's completely conversational. It might actually be more qualified because it has this massive repository and database it's working on. And then, of course, if it doesn't work, you can still go to a human. So your techno optimism is that companies will still invest in the humans at the end of that chain. My techno pessimism is, at some point it'll just hang up on you and tell you to F-off.

Grace Isford:
I think there's a bit of truth to both of those things. But I would say, for those who are in support role functions or customer support like you just mentioned, you're going to see a lot more of tech working with them, and in the short term, at least, being an additive tool to help them do their job. What remains constant in an age of full AI is actually relationships and reputation, so that customer support or sales agent's reputation with the account executive and the company, and/or creativity. And so you almost think of the second order analytical skills. If that customer support agent has every single data point or option to tell you at their fingertips, they're going to need almost more second order thinking. And so this is a whole broader conversation around how do you actually re-skill and help folks think about those thinking toolboxes, rather than the rote Q&A and/or support roles that maybe we have trained for.

Danny Crichton:
Well, I think of another one of our portfolio companies here, Formic, which is building actual physical robots, not online robots and chatbots, but actual physical robots on the manufacturing line designed to work in tandem with humans next to each other on the line, which has always been a really large problem, because robots don't have a sense of where their space is or who's around them. And so you have to work on safety and you have to work on how people work together. But in many of these cases it's about upskilling and re-skilling. So you are now working in tandem with the robot. The robot's doing a lot of repetitive tasks that are really painful for human muscles and the human musculoskeletal system, but then, you know, you can actually focus on the stuff that the robot does not do well. So I can see a path where the software side, all the chat, all the intellectual side, follows roughly the same pattern.

But this has happened so fast, in a couple of months. I keep having to emphasize, five months ago there was no ChatGPT; OpenAI hadn't launched this stuff yet. They had some interesting papers, DALL·E had been out, and people were experimenting and it seemed open-ended, and now it has 100 million users. So it went from nothing to 100 million people in a matter of weeks. We've never seen anything like that in history. But now people are curious: how do I integrate that into my own software? How do I actually build upon it? What do I need to do? So what are we seeing at the infrastructure layer?

Grace Isford:
So this gets me really excited, 'cause I like thinking a lot about where value actually accrues, and I like focusing more on those picks and shovels, regardless of industry, across the computational sciences. So where I've been spending a lot of time is actually thinking about how you build applications with large language models, or LLMs. These large language models, as you just mentioned, are super powerful, but they're actually pretty hard to build an application with. They're also quite general relative to your specific function. And so, if you believe LLMs will be leveraged and built into sophisticated applications and used in enterprise production at scale, these prompting mechanisms need a way to be knitted together. How do we actually plug them into your data sources, integrate them with your different applications, and then also tell them what to do in the right way? So think of it almost like this middleware software layer between the large language models, like OpenAI's GPT-3, or Anthropic's model, or even Google, which just launched their LaMDA, to flexibly build that infrastructure and connect the apps.
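To make that middleware idea concrete, here is a minimal Python sketch of the pattern Grace describes: pull context out of a data source, knit it into a prompt, and hand the result to the shared model. The `search_documents` and `call_llm` callables are hypothetical stand-ins for your own retrieval system and whichever provider client you use, not any particular library's API.

```python
# A minimal sketch of the "middleware" layer between an app and a large
# language model: fetch relevant context, knit it into a prompt template,
# and call the model. `search_documents` and `call_llm` are hypothetical
# stand-ins for your own data source and model provider client.
from typing import Callable, List

PROMPT_TEMPLATE = """You are a support assistant for {product}.
Use only the context below to answer the question.

Context:
{context}

Question: {question}
Answer:"""


def build_prompt(product: str, question: str, documents: List[str]) -> str:
    """Knit retrieved documents and the user's question into one prompt."""
    context = "\n---\n".join(documents)
    return PROMPT_TEMPLATE.format(product=product, context=context, question=question)


def answer_question(
    question: str,
    product: str,
    search_documents: Callable[[str], List[str]],  # your data source (hypothetical)
    call_llm: Callable[[str], str],                # your provider client (hypothetical)
) -> str:
    docs = search_documents(question)                # 1. plug into your data
    prompt = build_prompt(product, question, docs)   # 2. orchestrate the prompt
    return call_llm(prompt)                          # 3. hand it to the shared model
```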

Danny Crichton:
Well, what's interesting here, I mean, we've spent decades developing all these pieces of infrastructure in the computing world. So we built out databases, the original SQL databases, back in the 1970s and '80s. Oracle became one of the largest tech companies building upon that, as did Microsoft. And then we built all these different layers and tooling around that. So you had storage, you had compute, you had networking to connect it all together. And then we entered this AI world. And because there's this LLM, this large language model, and to get a sense of this, these are huge. I might be exaggerating a little bit with petabyte scale; compressed, it's more like 80 gigabytes. But there are 100 billion parameters, going on a trillion parameters, in these models, which means they're massively, massively large. We can't store them on our own computers, we can't have them on our phones.

And so fundamentally we sort of have this shared resource, almost like a 1960s mainframe, stretching the analogy here a little bit, but we have this shared resource. We can all use this model at the same time, but how I want to use it and how you want to use it is different. The way I think about this is, each of us has our own way of using it. I want my own little workspace, my own little way of using the model. I need to adapt it to fit me, but the model can't be changed in that way because it's too expensive to retrain. And so you're creating these layers, basically instructions to your computer, like, "How do I generate this video based on a text input from a customer that's coming in from six different offices simultaneously, and here's the decision criteria and here's how it connects together and here's the output and where it should go."

And connecting that all together is what you call orchestration; this is sort of the maestro connecting all the pieces and parts. And it's very exciting because, again, this is so early, only a couple of months in, but to me that is the first step toward being able to use it in a production system.
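As a rough illustration of that "shared model, personal workspace" point, the sketch below layers per-workspace instructions and routing rules on top of a single shared model call, with no retraining involved. `call_shared_model` and `deliver` are hypothetical placeholders, and the routing logic is deliberately simplistic.

```python
# A sketch of orchestration on top of one shared model: per-workspace
# instructions are layered into the prompt (no retraining), and simple
# decision criteria route the model's output to the right destination.
# `call_shared_model` and `deliver` are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Workspace:
    name: str
    instructions: str       # how *this* user wants the shared model to behave
    routes: Dict[str, str]  # lowercase keyword in output -> destination


def orchestrate(
    request: str,
    workspace: Workspace,
    call_shared_model: Callable[[str], str],
    deliver: Callable[[str, str], None],
) -> None:
    # Layer the workspace's own instructions on top of the shared model.
    prompt = f"{workspace.instructions}\n\nRequest:\n{request}"
    output = call_shared_model(prompt)

    # Route the output based on the workspace's decision criteria.
    destination = next(
        (dest for keyword, dest in workspace.routes.items() if keyword in output.lower()),
        "default-queue",
    )
    deliver(destination, output)
```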

Grace Isford:
Exactly right. It goes back to what we just talked about: why are there not more enterprise use cases in production for large language models today? And it's exactly because of that; we don't have the tooling right now to do that. One other thing that I think is quite relevant is just increased exposure from the developer perspective. There really hasn't been an open source Python library before for large language models, besides these more open source foundational models. And so I've been thinking a lot about that, and actually, ironically, LangChain just launched a TypeScript library. LangChain historically has been in Python; most data science tooling has been in the Python language. And we're actually seeing, how do we increase the number of developers actually adopting ML tooling, or large language model tooling in this case?

And I think LangChain's pushing into TypeScript. TypeScript can run anywhere, compiles once, and then you can use it for parallel programming and async programming; all the libraries run asynchronously. That's just one more small example, but indicative of a broader trend: we're going to need to see more ways for these large language models to be used by a mass audience, whether it's in terms of these open source libraries, developer tooling and/or infrastructure, to integrate that API, create a feedback loop and fine-tune that model based on your specific use case.
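One way to picture why the async point matters: most of the wall-clock time in an LLM-backed app is spent waiting on the model API, so serving a mass audience means issuing requests concurrently rather than one at a time. Below is a minimal Python sketch of that pattern, with `call_llm_async` standing in for whatever asynchronous client your provider or library actually exposes.

```python
# Minimal sketch of the async pattern: fire many LLM requests concurrently
# instead of serially, since most of the latency is waiting on the model API.
# `call_llm_async` is a placeholder for a real provider's async client.
import asyncio
from typing import List


async def call_llm_async(prompt: str) -> str:
    await asyncio.sleep(0.5)  # stand-in for network latency to the model API
    return f"(model output for: {prompt!r})"


async def answer_all(prompts: List[str]) -> List[str]:
    # All requests are in flight at once; total time is roughly one call,
    # not len(prompts) calls.
    return await asyncio.gather(*(call_llm_async(p) for p in prompts))


if __name__ == "__main__":
    questions = ["Summarize this ticket", "Draft a reply", "Tag the sentiment"]
    print(asyncio.run(answer_all(questions)))
```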

Danny Crichton:
Yeah, my takeaway from this is that Python became sort of the default vernacular, if you will, for the data science community. So it became very popular. It's been around actually for years, I want to say it's an early-'90s product, like it's been around for decades, but it really came into its own in the 2010s because it became the core tool for data scientists. So NumPy, SciPy, a bunch of others, ML models, et cetera. Anything that was done statistically with startups, and the tech community, and more broadly, really was done mostly in Python, unless you needed something high performance, in which case you probably went to Julia. And meanwhile, TypeScript or JavaScript was really popular in the web development community.

So if you wanted to build an online website, an online web app, you oftentimes went into that ecosystem, and the two oftentimes worked in tandem with each other, so that your Python might run on the server and JavaScript in the browser, or maybe TypeScript on the server. And part of the goal of that language was to create a safer language that was a complement to what was running in the browser. But what I find interesting is, obviously, with so much of the data science community on Python, it's just been the default for everything in AI over the last five to six years. So PyTorch and a bunch of the other libraries built around Python, all the large language models also focus there. And so the idea of just being able to expand that to whole categories of developers, people who have no access to it today, don't know the background, don't really want to get into it, and suddenly they have either the bindings or just the wider ability to connect into those libraries, to me really opens up the floodgates to new apps and new possibilities.

Grace Isford:
Totally right.

Danny Crichton:
So, let's pivot the conversation a little bit. You've hosted a couple of meetups, you've obviously gone to way more, you send photos around internally here, and it looks very vibrant. Does this feel like, I don't want to make a bad analogy, but crypto two, three years ago? I never thought so many people would be excited about LLMs, which when I was growing up was a Master of Laws, usually around tax, and now is the most exciting tech topic going on in the world today. But I want to flip from the startup world into the big tech world, because obviously there's been a lot of news from the big tech companies, from Facebook to Microsoft and Google. What's happening there today and what do you think is happening next?

Grace Isford:
But before we get to that, I do want to address the crypto point you mentioned. The energy is certainly tangible. I mean, at the last meetup I was at, there were over 200 people there. We didn't even have room for everyone to sit down.

Danny Crichton:
It's always good when the fire marshals come in and start screaming.

Grace Isford:
Yeah. But I do think, and this is kind of one of the reasons why I love my job, in any tech trend there are always going to be tourists. And I actually think there are always going to be speculators, and technologies, and things wrapping technology, claiming to have differentiation when they really don't. And so I actually think there are many parallels between AI and crypto right now, where there are a lot of fair-weather fans excited by the hype and community and trying out a lot of these cool things. And so it's really important for any investor, LP or even company builder out there to really think about, what is my defensibility? One thing I like about Runway and our portfolio is they've built their whole tech infrastructure stack, for example. So yes, they're an application serving videographers, but they're really focused on how they're differentiating their tech infrastructure from the ground up, and their horizontal suite of products for said user. I get worried when companies are just quickly wrapping an API; you or I could probably do it pretty easily in a day.

Danny Crichton:
Exactly.

Grace Isford:
And then calling it a venture-backed company. So that's just one thing I like to dog ear. Transitioning over though to big tech, I think-

Danny Crichton:
Are they tourists?

Grace Isford:
I think big tech, as you think about where they sit in the stack, they're really focused on that storage and compute aspect, and compute is critical if you're running any of these large language models at scale today. We unfortunately don't really have a good alternative. And so you've seen some allegiances come up, and pretty much any AI company has to have a contract with a Microsoft Azure or a GCP, take your pick. I don't think we've solved that problem, even though costs have come down dramatically to train these models. And while I'm optimistic that it will increasingly become less expensive to train and deploy these models, it still is a crucial part of it, and one of the reasons why OpenAI and Microsoft have such a close relationship.

Danny Crichton:
And when I think about this, several of the large tech companies, though not all of them, built cloud platforms; weirdly enough, Facebook specifically never went down the cloud route. But Amazon, Google and Microsoft obviously have massive cloud platforms offering compute, storage and networking, which are the core ingredients you need for an AI model. And we mentioned 100 billion parameters. The costs are going down to train these models. Unfortunately, we keep increasing the number of parameters exponentially. So we want to go to that trillion parameter model. Some folks have suggested that could be hundreds of millions of dollars just to train it one time. Not even updating it or refreshing it on a regular basis, but just a single run through of the training, several hundred million dollars. All that actually ends up as power, as actual energy going into these data centers. And so some people have done some math of, you know, you'd need almost a small fusion reactor just to keep one of these models alive.

But what's interesting is those three companies in particular, Google, Microsoft and Amazon, built those platforms over the last 10, 15 years, and are now reaping this huge benefit from the fact that they have the infrastructure needed to do all the compute and all the storage necessary for these big models, and then serve them. And so even after they get the money for building the models themselves, they want to get into that revenue stream on the application side of, okay, now we're delivering an interesting application, how do we go do that? And so my understanding, as you mentioned: Microsoft, OpenAI... Google has a horse in the race.

Grace Isford:
Anthropic, I would say, is their allegiance right now; they just invested a sizable chunk in the company. Think of them as an alternative to OpenAI. Google themselves also have their own LaMDA model, which they've been using for Bard, their chatbot. And then Facebook also recently launched a model as well. So you are seeing a few different large language models be launched. The key limiting factor there is that it's expensive to do. So I don't think you're going to see 20, 30 large language foundational models here.

Danny Crichton:
And let's focus on Amazon and Facebook. Does Amazon have a horse in the race?

Grace Isford:
No. I would view them almost as, and Josh and I on our team have talked about this, kind of the weapons provider today, where they are helping deal and power the infrastructure, but they're okay letting a few others step into the ring a little bit more aggressively there.

Danny Crichton:
I don't know if you've seen Jeff Bezos's profile photos in more recent years; he definitely looks like kind of an arms dealer of AI models and LLMs. And then Facebook. So Facebook never developed a cloud platform, has never offered that as a service, unlike the other three that we've been talking about. How are they playing in this AI world?

Grace Isford:
I would say Facebook is pretty cool in many ways because they have data centers that could theoretically be harnessed. They have a really strong PyTorch team and AI infrastructure there that is pretty well known, especially for distributed technology, so, how do you run these GPUs at scale? They've focused less on going head to head in this battle of the compute providers, I would say, but I could see them continuing to be a meaningful player, either through acquisitions or maybe for specific consumer use cases, whether they branch that more into their AR/VR metaverse department and/or just continue to power their hardware infrastructure stack. So I wouldn't sleep on Facebook yet.

Danny Crichton:
Wouldn't sleep on Facebook yet. I will say, if you believe in the Metaverse, we may not have pants, but at least we'll be able to talk with good generative AI. And then, before we close out: on our last episode here on the Securities podcast, Gary Marcus, the AI critic, argued that large language models are, to put it frankly, kind of bullshit. He thinks that the first death attributable to AI will happen this year, mostly because AI does not know any level of truth. So therefore it could make something up, it could threaten you in some way. You could ask it for medical advice and it tells you something that will kill you. You follow through thinking it's smart and you end up realizing it's not. How do you feel about the large language model focus in the AI community? Do you sort of agree with him that there are huge flaws? Is it empowering? Somewhere in between?

Grace Isford:
I'd probably both agree and disagree with him. I do think AI and large language models will become a commodity in themselves, so I do not think it's like, oh my gosh, this is so unique and so game changing. I think every company is just going to have large language models, or AI, as a part of it, and eventually we're just not even going to know, just as with the AI Film Festival, where AI starts and stops. I do think the ethical questions and the way in which AI is used are really important ones, and I think more attention needs to be given to that.

Where I would feel comfortable today is having AI be more assistive, so feeding doctors information. But I think we have a lot more to do around where the information is coming from. What percent accuracy do we have? What are our confidence intervals? Where is that data coming from, if that doctor wants to go look at the footnotes of where it's coming from? We're seeing early innovations in that space, but I do think that is going to have to become synonymous with anything we're using in AI, to make sure we're using it responsibly.
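To sketch what "assistive with footnotes" might look like in code, the answer object below carries its sources and a rough confidence signal so the end user can check where the information came from before acting on it. This is a hypothetical structure, not any particular product's API, and the confidence heuristic is a naive placeholder rather than a real confidence interval.

```python
# A hedged sketch of an "assistive" answer that carries its own provenance:
# the sources used and a rough confidence signal travel with the text, so the
# end user can check the footnotes before acting. Purely illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Source:
    title: str
    url: str
    relevance: float  # e.g. a retrieval similarity score in [0, 1]


@dataclass
class AssistiveAnswer:
    text: str
    sources: List[Source]
    confidence: float  # crude signal, NOT a clinical-grade confidence interval

    def footnotes(self) -> str:
        return "\n".join(f"[{i + 1}] {s.title} ({s.url})" for i, s in enumerate(self.sources))


def answer_with_provenance(
    question: str,
    retrieve: Callable[[str], List[Source]],  # your retrieval system (hypothetical)
    call_llm: Callable[[str], str],           # your model client (hypothetical)
) -> AssistiveAnswer:
    sources = retrieve(question)
    context = "\n".join(f"{s.title}: {s.url}" for s in sources)
    text = call_llm(f"Answer using only these sources:\n{context}\n\nQuestion: {question}")
    # Naive heuristic: average retrieval relevance of the cited sources, capped at 1.
    confidence = min(1.0, sum(s.relevance for s in sources) / max(len(sources), 1))
    return AssistiveAnswer(text=text, sources=sources, confidence=confidence)
```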

Danny Crichton:
Well, Grace Isford, thank you so much for joining us.

Grace Isford:
Thanks, Danny.
