"Securities" Podcast: May the AI be ever in your favor

Danny Crichton
Should we give it a shot or just go with the normal?
Yeah, so the normal, the regular one is: "Hello, this is the 'Securities' podcast, a newsletter devoted to science, technology, finance and the human condition." Our new one is: "'Securities' is a global community of engineers, scientists and capitalists who are empowered by Lux Capital to construct and defend the future of humanity's prosperity." Wow. It's a lot more intense. It's a lot. It's a mouthful; maybe I should come up with something less mouthy.
Hello, this is "Securities," a global community of engineers, scientists and capitalists who are empowered by Lux Capital to construct and defend the future of humanity's prosperity. I'm your host, Danny Crichton, and man, that was a mouthful. We're working on a new tagline, and that was a little thick. But more important than that, and our AI alphabet-soup generative messaging tagline, is our guest today: Grace Isford, my partner here at Lux Capital. Grace, welcome to the program.
Grace Isford
Thanks, Danny. I like the tagline. I think we could leverage generative AI to spin up a few different options there.
Danny Crichton
Better than my writing, actually; probably more boring, but better. So obviously the theme today is generative AI. We just did a newsletter last week on the "gorilla's guerrilla," as I called it, of generative AI and how it's potentially going to wipe out the creative class. But you've been focused on generative AI for the last couple of months, as it surged with OpenAI's ChatGPT launch in November and a ton of different companies underway. And I just want to start: last week you were in New York City, and you went to the first-ever, I believe, AI Film Festival. What was that like? What did you see?
Grace Isford
It was amazing. So for context, it was actually hosted and put on by Lux portfolio company Runway. There were 10 films presented at the festival, all with AI as a major component of the film, out of over 500 submissions. And I had two core observations. One was how sophisticated the films were, in terms of just, you know, the editing and the immersion; it really felt like we were at any other film festival. The second point is how interspersed the AI was. I couldn't tell you or distinguish, oh, that was AI, or that wasn't. In many cases, it was just this really cool, quite literally generative experience, where the films naturally led into each other. A good example: the film that won at the festival was a woman dancing, and it started as a film version of a woman dancing, and then all of a sudden she almost transmuted into different, you know, patterns and designs, and it was in different locations. It was a really cool example where it almost felt like we were in this virtual world that still was tied to reality. And another big takeaway I had was talking to Cris Valenzuela, the CEO and co-founder: he wants every major motion picture to have AI. It's not, oh, that's an AI film, or this is an AI film festival. He just wants it to become a film festival where every major motion picture you see, the ones up at the Academy Awards, has AI as part of it.
Danny Crichton
Yeah. So we had this AI Film Festival at Metrograph, sort of a famous, iconic theater down in lower Manhattan. I guess, you know, obviously, special effects have been around for decades; in the original Star Wars, going down the trench, the Death Star explosions are added, etc. What is the distinction in your mind between special effects, something that we've seen in Marvel films and so on, and this new wave of generative AI tools?
Grace Isford
Yeah, I really think one of the coolest parts about it was the generative angle, meaning I could prompt Runway, writing down, you know, "a group of people walking in prehistoric times, in this way, in claymation form." And so it's the precision, and also the creativity you can create with just words. In many ways, it's actually some of the same things you've already seen, but taking much less time and produced much more efficiently, because a lot of the old animation films you've seen from Disney or Pixar have taken months to produce, and with Runway you can probably do a lot of those things in a fraction of the time, at a fraction of the cost. And so I think it's even less about what's actually different when you see the film, and more about what's going on behind the scenes and the technology powering that.
Danny Crichton
Well, I'm thinking about the expense of claymation, you know, in some of them like Wallace and Gromit and some of these old films, and they were always really short, because claymation in and of itself is an extraordinarily labor-intensive art form. You're constantly moving all these pieces, so the movies are only about 90 minutes each. And now, from what I've been able to see and experiment with, you can literally just type in, like, "make this claymation," and it happens. It actually looks amazing, and exactly like claymation. And so you have this amazing effect. And then this sort of adds into this whole prompt engineering category of exactly how you write these kinds of textual prompts. Was there any discussion about that at the film festival?
Grace Isford
Well, one of the ironic aspects was that a few of the films incorporated prompt engineering into the film itself. There was a film where you could actually see the artist interfacing with the AI and generating it live. I thought that was indicative of where we're at on the technology curve today. You know, the fact that we're even having to point out to the end user, "Okay, this is how AI has been incorporated," is indicative that we're still very early. One more thing I'll add is just the accessibility factor. So if it took you, you know, hours, months, hundreds of thousands of dollars to produce that claymation, now you could be, you know, a user at home, going on Runway online, for example, and making it yourself. And so that accessibility dynamic, I think, is something folks probably aren't talking enough about, and it's pretty exciting.
Danny Crichton
Well, I think that's the beauty of a lot of generative AI: obviously it's opening a palette, right? With special effects, you're sort of locked to the software. You have LightWave, Maya, a bunch of major tools in the 3D animation space, and you can do an immense amount, but sometimes you really have to fight the tool. You know, same with Photoshop: if you've ever edited images, you have to create filters, and you combine filters together to create unique styles. And what I think is so interesting is, if you can see it in your head and you can describe what you're envisioning, you don't have to learn the tool from the ground up, where, okay, there are 25 filters, they have to connect in a certain way, and if I do all this perfectly, I'm going to get the effect I'm looking for. Now you can just describe it: I want to see this kind of style, you know, an Afrofuturistic style, with claymation, with some other piece to it. And you've got, I was going to say Black Widow, but Black Panther, too many Marvel films, Black Panther mixed with Wallace and Gromit all together. And no one's ever seen that before, and you can do that in minutes. That, to me, is what makes this so different from the special effects stuff we've seen before.
Grace Isford
Right. And you mentioned Marvel. Think of a large language model that is trained on Marvel movies. So, hey, I want to go online and create a movie in the style of Marvel. You could do that, theoretically, and see a world where that could happen. And that is pretty exciting. A Lux Marvel film, right? We could create one tomorrow, maybe.
Danny Crichton
Yes, the Lux Marvel film, which hopefully would do a little bit better than Ant-Man and the Wasp: Quantumania. I think we have way more stuff in the laboratory here; the movie certainly wouldn't be rotten. So that means we can go from, I guess we're in Phase Five or Six of Marvel, I've sort of lost track, up to Phase Infinity. Which gets at the point I was writing about this weekend in the "Securities" newsletter, around the "gorilla's guerrilla." And the joke there was that it's grueling, because these prompt engineers are typing words into the generative AI prompts to get the work done, and it's guerrilla because it's almost guerrilla warfare on the creative class, as you just described. Look, there are thousands of special effects studios. The industry is actually fairly heavily concentrated overseas; most of them are not in the United States, although there are some, but a lot of them are in Asia: South Korea, India, Vietnam, these hubs where people have been able to do this work on the kind of deadlines and schedules that big movie companies like Marvel need. With these new generative AI tools, you really have this question of what happens to a lot of these jobs, where, you know, an effect that used to take 50 people a month to do now takes one person 12 minutes.
Grace Isford
I would say I'm a techno optimist. So maybe we could, we could argue and debate a little bit about this, I think I generally see a lot of the opportunities for AI to make everyone 10x more productive. So not just the film editor, but actually someone who's doing outsourced automation or labeling in a foreign country or elsewhere. And so I think across the entire stack, you're gonna see the opportunities for AI to make you 10x more productive, and really be interspersed into any product across any aspect of the stack. And so the hope would be rather than replacing jobs per se, it could just make you 10x or 20x, more efficient.
Danny Crichton
And obviously there have been a couple of reports, we were talking about the "gorilla's guerrilla," on groups of professions that might be affected in the coming years, one of which is customer support and customer success. Instead of actually calling someone to figure out why my laptop doesn't turn on, ChatGPT or an equivalent, you know, generative AI large language model could actually come up and say, hey, okay, here are all the solutions that have been tried, here's a whole universe of stuff that we can do. It's completely conversational, and it might actually be more qualified, because it has this massive repository database it's working on. And then, of course, if it doesn't work, you can still go to a human. So your techno-optimism is that companies will still invest in the humans at the end of that chain. My techno-pessimism is that at some point, it'll just hang up on you and tell you to eff off.
Grace Isford
I think there's a bit of truth to both of those things. But I would say, for those who are in, you know, support-role functions or the customer support you just mentioned, you're going to see a lot more of tech working with them, and in the short term, at least, being an additive tool to help them do their job. What remains constant in an age of full AI is actually relationships and reputation, so that customer support or sales agent's reputation with the account executive and the company, and/or creativity, right. And so you almost need second-order analytical skills. That customer support agent has every single data point or option to tell you at their fingertips; they're going to need almost more second-order thinking. And so there's a whole broader conversation on how you actually reskill and help folks build those thinking toolboxes, rather than, you know, the rote Q&A or support roles that maybe we have trained them for.
Danny Crichton
The way I think of it, you know, another one of our portfolio companies here, Formic, is building actual physical robots, not online robots and chatbots, but actual physical robots on the manufacturing line, designed to work in tandem with humans on the line, next to each other. That has always been a really large problem, because robots don't have a sense of where their space is and who's around them, and so you have to work on safety, and you have to work on how people work together. But in many of these cases, it's about upskilling and reskilling, so you are now working in tandem with the robot. The robot is doing a lot of the repetitive tasks that are really painful for human muscles and the human musculoskeletal system, but then, you know, you can actually focus on the stuff that the robot does not do well. So I can see a path where the software side, the chat and intellectual side, follows roughly the same pattern. But we talked a little bit about these apps and the application layer. So we're talking about Microsoft launching a HIPAA-compliant chatbot; that's super interesting. But what you're getting at there is really a question of infrastructure. So how do you go build all these apps? And so I'm curious. I mean, this has happened so fast, in a couple of months. I keep having to emphasize: five months ago, there was no ChatGPT, and there was no OpenAI launching this stuff. They had some interesting papers, DALL-E had been out and people were experimenting, but it seemed open-ended. And now it has 100 million users. So it went from nothing to 100 million people in weeks; we've never seen anything like that in history. But now people are curious: how do I integrate that into my own software? How do I actually build upon it? What do I need to do? So what are we seeing at the infrastructure layer?
Grace Isford
So this gets me really excited, because I like thinking a lot about where value actually accrues, and I like focusing, you know, more on those picks and shovels, regardless of industry, in the computational sciences. So where I've been spending a lot of time is actually thinking about how you build applications with large language models, or LLMs. These large language models, as you just mentioned, are super powerful, but they're actually pretty hard to build applications with; they're quite general as well, depending on your function. And so, if you believe LLMs will be leveraged and built into sophisticated applications and used in enterprise production at scale, these prompting mechanisms need a way to be knitted together: how do we actually plug them into your data sources, integrate them with your different applications, and then also tell them what to do in the right way? So think of it almost like this middleware software layer between the large language models, like OpenAI's GPT-3, or Anthropic's model, or even, you know, Google just launched their LaMDA, that lets you flexibly build that infrastructure and connect the apps.
Danny Crichton
Well, what's interesting here, I mean, we've spent decades developing all these pieces of infrastructure in the computing world. So we built out databases, the original SQL databases, back in the 1970s and '80s; Oracle becomes one of the largest tech companies building upon that, as does Microsoft. And then we built all these different layers and tooling around that: you had storage, you had compute, you had networking to connect it all together. And then we enter this AI world, and because there's this LLM, this large language model, and to get a sense of this, these are petabyte scale, I think I might be exaggerating a little bit, I think it's like a compressed 80 gigabytes, but there are 100 billion parameters, going on a trillion parameters, in these models, which means they're massively, massively large. We can't store them on our own computers; we can't have them on our phones. And so fundamentally, you sort of have a shared resource, almost like a 1960s mainframe, and I'm stretching the analogy here a little bit, but we have this shared resource we can all use at the same time, though how I want to use it and how you want to use it is different. And so when I see a company like LangChain, the way I think about this is that each of us has our own part of how to use it. I want my own little workspace, my own little way of using the model; I need to adapt it to fit me, but the model can't be changed in that way, because it's too expensive to retrain. And so you're creating these layers, basically instructions to your computer of, you know, how do I generate this video based on a text input from a customer that's coming in from six different offices simultaneously, and here's the decision criteria, here's how it connects together, and here's the output and where it should go.
And by connecting that all together, you call it orchestration: this is sort of the maestro connecting all the pieces to put them together. And it's very exciting, because again, this is so early, only a couple of months in, but to me that is the first step to being able to use this in a production system.
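To make the orchestration idea concrete, here is a minimal, self-contained sketch of what this kind of middleware does: template a prompt, call a model, and feed each step's output into the next step's prompt. This is not LangChain's actual API; the class names and the `fake_llm` stand-in are hypothetical, and a real version would call a hosted model like GPT-3 in place of `fake_llm`.

```python
def fake_llm(prompt: str) -> str:
    # Placeholder for an API call to a hosted large language model.
    return f"[model output for: {prompt}]"

class PromptTemplate:
    """A reusable prompt with a slot for the previous step's output."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class Chain:
    """Knit steps together: each step's output feeds the next prompt."""
    def __init__(self, llm, templates):
        self.llm = llm
        self.templates = templates

    def run(self, user_input: str) -> str:
        text = user_input
        for template in self.templates:
            # Orchestration: format the prompt, call the model,
            # and pass the result along to the next step.
            text = self.llm(template.format(input=text))
        return text

chain = Chain(fake_llm, [
    PromptTemplate("Summarize the customer request: {input}"),
    PromptTemplate("Draft a storyboard from this summary: {input}"),
])
result = chain.run("a claymation scene of people walking in prehistoric times")
```

The point of the sketch is the shape, not the specifics: the shared model stays fixed, while each user's "little workspace" lives in the templates and routing logic layered on top of it.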
Grace Isford
Exactly right. It goes back to what we just talked about: why are there not more enterprise use cases in production for large language models today? And it's exactly because of that; we don't have the tooling right now to do that. One other thing that I think is quite relevant is just increased exposure from the developer perspective, right? There hasn't really been an open source Python library for large language models before, besides these more open source foundation models. And so I've been thinking a lot, and actually, ironically, LangChain just launched a TypeScript library. LangChain historically has been in Python; most data science tooling has been in the Python language. And we're actually, you know, looking at how we increase the number of developers adopting ML tooling, or large language model tooling in this case, and I think LangChain is pushing that with TypeScript. TypeScript can run anywhere, you compile once, and you can use it for parallel programming and async programming; the libraries run asynchronously. That's just one more small example, but it's indicative of a broader trend: we're going to need to see more ways for these large language models to be used by a mass audience, whether it's in terms of these open source libraries, whether it's in terms of, you know, developer tooling, and/or infrastructure to integrate that API, create a feedback loop, and fine-tune that model based on your specific use case.
Danny Crichton
Yeah, my takeaway from this is, you know, Python became sort of the default vernacular, if you will, for the data science community. So it became very popular. It's been around for years, actually; I want to say it's a '95 product, like it's been around for decades. But it really came into its own in the 2010s, because it became the core tool for data scientists: NumPy, SciPy, a bunch of others, ML models, etc. Anything that was done statistically, with startups in the tech community and more broadly, really was done mostly in Python, unless you needed serious high performance, in which case you probably went to Julia. And meanwhile, TypeScript or JavaScript was really popular in the web development community. So if you wanted to build an online website or web app, you oftentimes went into that ecosystem. And the two oftentimes worked in tandem with each other: Python might run on the server and JavaScript in the browser, or maybe TypeScript on the server, and that was part of the goal of that language, to create a safer server language that was a complement to what was in the browser. But what I find interesting is, obviously, with so much of the data science community on Python, that has just been the default for everything in AI over the last five to six years. So PyTorch and a bunch of the other libraries were built in Python, and large language models also focused there. And so the idea of just being able to expand that to whole categories of developers, people who have no access to it today, don't know the background, don't really want to get into it, and suddenly they have either the bindings or just a wider ability to connect into those libraries, to me really opens up the floodgates to new apps and new possibilities.
Grace Isford
Totally right.
Danny Crichton
So let's pivot the conversation a little bit. We've been talking about startups and the meetups; you've hosted a couple and obviously gone to way more in the community.
And to me, you send photos around internally here, but it looks very vibrant. This feels like, I don't want to make a bad analogy, but like crypto two or three years ago, where I never thought so many people would be excited about LLMs, which when I was growing up was a master's in law, usually around tax, and is now the most exciting tech topic going on in the world today. But I want to flip from the startup world into the big tech world, because obviously there's been a lot of news from the big tech companies, from Facebook to Microsoft and Google. What's happening there today, and what do you think's happening next?
Grace Isford
But before we get to that, I do want to address the crypto point you mentioned. The energy is certainly tangible. I mean, at the last meetup I was at, there were over 200 people there, and we've had around that for every one of those.
Danny Crichton
It's always good when the fire marshal comes in and starts screaming.
Grace Isford
Yeah, but I do think, and this is kind of one of the reasons why I love my job, right, is that in any tech trend, there are always going to be tourists. And I actually think there are always going to be speculators in technology, and things wrapping technology that are claiming to have differentiation but really don't. And so I actually think there are many parallels between AI and crypto right now, where there are a lot of fair-weather fans excited by the hype and community and trying out a lot of these cool things. And so it's really important for any investor, LP, or, you know, even company builder out there to really think about: what is my defensibility? One thing I like about Runway in our portfolio is they've built their whole tech infrastructure stack, for example. So yes, they're an application serving videographers, but they're really focused on how they're differentiating their tech infrastructure from the ground up in their horizontal suite of products for that user. I get worried when companies are just, you know, quickly wrapping an API that you or I could probably build pretty easily, and calling it, you know, a venture-backed company. So that's just one thing I like to flag here. Transitioning over, though, to big tech, I think...
Danny Crichton
The tourists.
Grace Isford
I think big tech, as you think about where they sit in this stack, they're really focused on that storage and compute aspect. And compute is critical if you're running any of these large language models at scale today; we unfortunately don't have, you know, a good alternative, really. And so you've seen some allegiances come up, and pretty much any AI company has to have a contract with Microsoft Azure or with GCP, you know, take your pick. I don't think we've solved that problem. Even though costs have come down dramatically to train these models, and I'm optimistic that it will become increasingly less expensive to train and deploy them, compute still is, you know, a crucial part of it, and one of the reasons why OpenAI and Microsoft have such a close relationship.
Danny Crichton
Yeah, and when I think about this, you know, several other large tech companies, not all of them, and weirdly enough, specifically Facebook, never went down the cloud route. But Amazon, Google, and Microsoft obviously have massive cloud platforms offering compute, storage, and networking, which are the core ingredients you need for an AI model. And we mentioned 100 billion parameters. The costs are going down to train these models; unfortunately, we keep increasing the number of parameters exponentially. So we want to go to that trillion-parameter model. Some folks have suggested that could be hundreds of millions of dollars just to train one time, not even updating it or refreshing it on a regular basis, but just a single run-through of the training, several hundred million dollars. All of that actually ends up as power, as actual, like, energy going into these data centers. And so some people have done some math of, like, you know, you need almost a small fusion reactor just to keep one of these models alive. But what's interesting is, like, you know, those three companies in particular, Google, Microsoft and Amazon, built those platforms over the last 10 to 15 years, and are now reaping this huge benefit from the fact that they have the infrastructure needed to do all the compute and all the storage necessary for these big models, and then serve them. And so even after they get the money for hosting the models themselves, they want to get into a revenue stream on the application side: okay, now we're delivering an interesting application, how do we go do that? And so, my understanding is, as you mentioned, Microsoft has OpenAI, and Google has a horse in the race...
Grace Isford
Anthropic, I would say, is their allegiance right now, where they just invested, you know, a sizable chunk in the company. Think of them as an alternative to OpenAI. Google themselves also has their own LaMDA model, which they've been using for Bard, their kind of chatbot. And then Facebook also recently launched a model as well. So you are seeing a few different large language models be launched. The key limiting factor there is that it's expensive to do so; I don't think you're going to see, you know, 20 or 30 large language foundation models here.
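The expense point can be sanity-checked with a back-of-the-envelope calculation. A common rule of thumb is that training costs roughly 6 FLOPs per parameter per token; every input below (token count, effective GPU throughput, rental price) is an illustrative assumption rather than a measured figure, so treat the result as an order-of-magnitude sketch only.

```python
# Rough order-of-magnitude training cost for a hypothetical
# trillion-parameter model. Rule of thumb: ~6 FLOPs per parameter
# per training token. All inputs are assumptions.

params = 1e12            # 1 trillion parameters (hypothetical)
tokens = 2e12            # assumed number of training tokens
total_flops = 6 * params * tokens

gpu_eff_flops = 50e12    # assumed effective throughput per GPU (50 TFLOP/s)
gpu_hourly_cost = 2.0    # assumed cloud rental price per GPU-hour, USD

gpu_hours = total_flops / gpu_eff_flops / 3600
cost_usd = gpu_hours * gpu_hourly_cost

print(f"~{gpu_hours / 1e6:.0f}M GPU-hours, ~${cost_usd / 1e6:.0f}M")
```

With these assumptions the sketch lands at roughly $130M for a single training run, consistent with the "hundreds of millions of dollars" figure mentioned above; change any assumption and the total moves proportionally.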
Danny Crichton
And let's focus on Amazon and Facebook. Does Amazon have a horse in the race?
Grace Isford
No, no, kind of. I would view them almost, and Josh and I on our team have talked about this, as the weapons provider today. They are kind of helping fuel and power the infrastructure, but they're okay letting a few others step into the ring a little bit more aggressively there. I don't know if you've seen Jeff Bezos's profile photos in more recent years, but he definitely looks like kind of an arms dealer of AI models and LLMs. And then Facebook. So Facebook never developed a cloud platform and has never offered that as a service, unlike the other three we've been talking about. How are they playing in this AI world? I would say Facebook is pretty cool, in many ways, because they have, you know, data centers that could theoretically be harnessed. They have a really strong PyTorch team, and an AI group emerged there that is pretty well known, especially for distributed technologies and how you're running these GPUs at scale. They've focused less on going head-to-head in this battle of the compute providers, I would say, but I could see them continuing to be a meaningful player, either through, you know, acquisitions, or maybe for specific consumer use cases, whether they brand that more into their AR/VR, you know, metaverse department, and/or just continuing to power their hardware infrastructure stack. So I wouldn't sleep on Facebook yet.
Danny Crichton
We won't sleep on Facebook. I will say, if you believe in the metaverse, we may not have pants, but at least we'll be able to talk with good generative AI. And then, before we close out here on the "Securities" podcast: Gary Marcus, the AI critic, believes large language models are kind of bullshit. I mean, frankly, he thinks that the first death attributable to AI will happen this year, mostly because AI does not know any level of truth. So therefore it could make something up, it could threaten you in some way, you could ask it for medical advice and it tells you something that will kill you, and you follow through thinking it's smart, and you end up realizing it's not. How do you feel about the large language model focus in the AI community? Do you sort of agree with him that there are huge flaws? Is it empowering? Somewhere in between?
Grace Isford
I'd probably both agree and disagree with him. I do think AI and large language models will become a commodity in themselves. And so I do not think it's like, oh my gosh, you know, this is so unique and so game-changing. I think every company is just going to have large language models, or AI, as a part of it, and eventually we're just not even going to know, just as with the AI Film Festival, we're not even going to know where AI starts and stops. I do think the ethical questions and the way in which AI is used are really important ones, and I think more attention needs to be given to that. Where I would feel comfortable today is having AI be more assistive, right, so feeding doctors information. But I think we have a lot more to do around where the information is coming from. What percent accuracy do we have? What are our confidence intervals? Where is that data coming from, if that doctor wants to go look at the footnotes of where it's coming from? We're seeing early innovations in that space, but I do think that is also going to have to become synonymous with anything we're using AI for, to make sure we're using it responsibly.
Danny Crichton
Grace Isford, thank you so much for joining us.
Grace Isford
Thanks, Danny.
While much of the venture world has hit a reset in 2023, you’d never know that in artificial intelligence, where fire marshals are shutting down crammed engineering meetups and startups are once again raising at eye-watering valuations. Why the excitement? Because for founders, technologists and VCs, it feels like the everlasting promise of AI dating back to the 1950s and 1960s is finally on the cusp of being realized with the training and deployment of large language models like GPT-3.
To hear about what’s happening on the frontlines of this frenetic field, Lux Capital partner Grace Isford joins “Securities” host Danny Crichton to talk about what she’s seeing in 2023 across the AI tech landscape.
We talk about her impressions at the recent AI Film Festival in New York City hosted by Lux’s portfolio company Runway, how developers are being empowered with new technologies in Python and TypeScript and why that matters, and finally, how the big tech giants like Microsoft, Google and Amazon are carefully playing their cards in the ferocious competition to lead the next generation of AI cloud infrastructure.