Riskgaming

The Orthogonal Bet: Building a Fractal Combinatorial Trope Machine

Description

Welcome to the ongoing mini-series The Orthogonal Bet, hosted by Samuel Arbesman, a complexity scientist, author, and scientist in residence at Lux Capital.

In this episode, he speaks with Hilary Mason, co-founder and CEO of Hidden Door, a startup creating a platform for interactive storytelling experiences within works of fiction. Hilary has also worked in machine learning and data science, having built a machine learning R&D company called Fast Forward Labs, which she sold to Cloudera. She was the chief scientist at Bitly and even a computer science professor.

Samuel wanted to talk to Hilary not only because of her varied experiences but also because she has thought deeply about how to use AI productively—and far from naively—in games and other applications. She believes that artificial intelligence, including the current crop of generative AI, should be incorporated thoughtfully into software, rather than used without careful examination of its strengths and weaknesses. Additionally, Samuel, who often considers non-traditional research organizations, was eager to get Hilary’s thoughts on this space, given her experience building such an organization.

Produced by Christopher Gates

Music by George Ko & Suno

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Hilary Mason:                                          

I'm building a company called Hidden Door. We are about four and a half years into it, and what we're doing at Hidden Door is creating a place for fans to be able to come and be immersed in the worlds of their favorite fictional universes, whether those are from books, movies, TV shows, other places.
                                                     

And we've drawn a ton of inspiration from tabletop role-playing games and from some of that fan fiction energy, to really let fans come in and tell a story where there's no pre-written middle and end. So they create the beginning, then they create their character, and off they go, and every version of it is completely different from everyone else's version of it.

Samuel Arbesman:                                      

Tell me more a little bit about what led you to found that company as well as some of the things that you were doing before that that got you to this point.

Hilary Mason:                                          

Sure. I mean, the short story is that I have always read fiction. I love fantasy. I love sci-fi. I was always that kid who would shove a paperback book in my pocket. And when I traveled or went to sleepaway camp, my major concern was how I was going to pack enough books to not run out on that trip. And so that was me. And I also played a ton of tabletop games, a little bit in high school, a lot in college and grad school.
                                                     

And at the same time I studied machine learning, and I've been working in machine learning and data science for the last 20 years, particularly around startups, including in social media, thinking about questions of how ideas spread online and how we build products that can surface things that are meaningful and that create great experiences. For me it's always been about that space where there's some technical capability, some data asset, something that has changed, and now what is the product, experience, and business surface area to explore around it?
                                                     

Given this longstanding interest, again, four and a half years ago when I started Hidden Door, my co-founder and I, we had done a ton of work in technology around language generation, around embeddings, had built a bunch of production products and systems in things like customer service and things like news summarization for understanding financial markets, things that were running at scale.
                                                     

But we were convinced that there was still a bunch of territory to explore around what was possible with the tech, and that by far the most interesting place to play was in this area of gaming and immersive, fictional worlds.

Samuel Arbesman:                                      

When you were talking about new technologies, figuring out how new technologies expand the space of possibilities, one of the things with games though is, at least in my understanding, at least when it comes to the video game and computer game world, it was always driven by what is the new technology and what does then that allow us to do, whether it's we can now have first-person 3D shooters or whatever it is.
                                                     

I feel like that intersection of, okay, let's try to do something new with games and let's try to think about the frontier of computer science and machine learning and technology, these things really do connect. Was that one of the things where you're like, okay, I'm already deeply interested in games and games are the perfect platform for experimenting with new technologies, so let's just combine all these things together?

Hilary Mason:                                          

I think you're right that games have traditionally been that platform for playing with new technologies. And I want to pull on that a little bit, because one of the tensions I have loved dancing around in everything I've worked on in the last 15 or 20 years is that, as a machine learning or data science minded person, you are typically trying to build a system that optimizes for some quantitative function, some objective function.
                                                     

But as somebody designing product experiences, you are often in a space where there isn't one. And what I love about working in games and entertainment is that you cannot even really pretend there is anything that is purely quantitative about the experience we're trying to create. And it requires an entirely different way of thinking about building products. But I've worked on things like this going all the way back.
                                                     

I used to be the chief scientist at a company called Bitly that did short links on the internet. I joined there in 2009-ish. And my job there was very much to take the data asset and the algorithms we had and try to build products, and we did. We built real-time search and discovery. We built content discovery recommendations. These are all also product experiences where it is not purely quantitative.
                                                     

If you're trying to rank 100 million things by some criterion of what feels best when I put in this phrase, that's the kind of fuzzy product problem I'm talking about, and games and entertainment is a perfect place to work in that way.

Samuel Arbesman:                                      

But in the world of gaming and entertainment, people still try to make it quantitative. They'll say, we have metrics for engagement or things like that. People try to do that. But what you're saying is fundamentally you have to focus more on, okay, does the story grab you? Or do you feel engaged? Do you feel connected to the characters? These things are much more important than some of these other kinds of metrics. Am I misunderstanding that?

Hilary Mason:                                          

There are different ways to think through where we use those metrics, and what you're talking about are product usage metrics. So what is engagement? We look at all this stuff. How many minutes are people staying? Are they coming back? Did they come back if they played this way versus that way? Absolutely. But that only tells you what is good enough or what is better when you already have... It tells you if A is better or B is better, but it doesn't tell you what A and B could possibly be in the first place. And so that is the difference here.

Samuel Arbesman:                                      

And so if you don't have this larger guiding vision, then in the same way that the movie industry has tons and tons of sequels and superhero movies, you end up with this massive shrinking of the possibility space, because you're not actually trying to think in a very disciplined way about what these experiences should really look like. Is that the right way to think about it?

Hilary Mason:                                          

That's how I think about it.

Samuel Arbesman:                                      

Awesome. Okay. Well, I'm glad to hear that I understood. I would love to hear more about the kinds of experiences you're trying to build at Hidden Door, as well as the tools and technologies that you're using. I think right now especially when it comes to text and AI, there's lots of different ways to do this.
                                                     

It could just be as simple as here's a prompt for having a collaborative storytelling experience in ChatGPT. What you're doing is a lot more sophisticated than that. And so yeah, I'd love to hear a little bit more about what you're doing, how it differs from some of these more naive approaches.

Hilary Mason:                                          

Well, I'm not sure that it's fair to say it's more sophisticated; rather, it's different and it's purpose-driven. So what we do at Hidden Door is think a lot about what is the experience of creating a character, and what makes it believable that my character is in this fictional world that I've come to know through, let's say, reading a novel? Who can I be in this world? So you can't just be anyone, and you can't be someone that doesn't make sense.
                                                     

And then once you're in the world, what makes it feel like you're really there? What are the little details? We can pick on Star Wars: somebody is going to mention the Force at some point within an hour. There are certain linguistic signifiers of that immersion. There are certain cultural signifiers within the fictional world: the way people speak, the style of what you're looking at.
                                                     

All of those things, as well as the kinds of plots and interactions and characters you'll meet, and what's possible and what's not possible. And then when you think about it, if you were just going to have a story experience with ChatGPT or any large language model, you can just put in, "and now I go up a level and I get the thing I want and I'm the king of the world," and I'm done now, and that is not fun.
                                                     

A game needs to push back. It needs to set limits. It needs to show you how you can move forward, and it needs to reward that progress, but not all at once. And so we've been very thoughtful about building a system that creates meaningful narrative that is driven by players' choices and players' creativity in those choices and how they push against that system.
                                                     

But it also enforces certain gaming structures. You can't just go in and type in, "I win. The end." That does nothing. And what should that do? You also can't come in and be like, okay, I'm in a Regency World at a dinner party and I'm going to pull out a laser gun. It won't allow you to do that because that's not within what you should be able to do within the rules of the world.
                                                     

So it is as much about how you enforce the limitations and communicate those to players as it is how you improv "yes, and" their directions to do something that surprises and delights them, but is also consistent with what they're asking for, without making it feel like it's an authoring tool. One of the things we sat down many years ago and laid out is: is this basically an add-on to a word processor?
                                                     

Is it a tool for writers? And the answer is no, not what we're doing. Not at all. There are other folks pursuing that and doing so with a bunch of success I think. But we're doing another thing, which is to say that the authors, the original creators of the world, have done an amazing thing. They made this thing. We all love it. How do we capture that essence of it and then let players create their own stories within its constraints?
                                                     

And that means we also have a significant amount of controllability around the rules of the world, the way characters behave in the world, NPCs, the way certain plots may or may not unfold because we need that ability to preserve the original creator's vision. And the last thing I want to pull out of your question is that I don't believe that LLMs are themselves creative, and we can debate this all day.
                                                     

But also in the game experience we create, the original authors of the world are creative, the players are creative. That's where that spark comes from. The system exists to facilitate that creation and to push back in a way that feels good overall. So it is also thinking about what is the tech really good for here? How does it work? And then how do we create that overall experience in a way that it's not about the artifact you get at the end, so much as it is about the artifact being something that happened because of you and your choices.
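The pushback Hilary describes, where "I win. The end." does nothing and a laser gun is refused at a Regency dinner party, amounts to validating each player action against the rules of the world before the story accepts it. A minimal sketch of that idea, with entirely hypothetical world and item names (this is not Hidden Door's actual system):

```python
# Hypothetical technology levels per world; a real system would encode far
# richer rules (magic, tone, death), but the shape of the check is the same.
ALLOWED_TECH = {
    "regency": {"carriage", "letter", "pianoforte"},
    "space_opera": {"laser gun", "starship", "droid"},
}

def validate_action(world: str, item: str) -> bool:
    """Return True only if the item fits the world's technology level."""
    return item in ALLOWED_TECH.get(world, set())

# A laser gun is rejected at the Regency dinner party...
assert not validate_action("regency", "laser gun")
# ...but is perfectly at home in a space opera.
assert validate_action("space_opera", "laser gun")
```

A real engine would check far fuzzier constraints than set membership, but the principle is the one she states: the world model, not the language model, decides what is allowed.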

Samuel Arbesman:                                      

I love this. This is a great provocation. It's almost like the system you're building is that interstitial glue that makes sure the creativity of the world creator and the player, they actually all truly blossom in their most true to themselves kind of way. This is really interesting. And I guess also related to the world building and making sure that the logic of the world is embodied in these stories, you mentioned with these various LLMs, you can just say, "Oh, I win," or whatever.
                                                     

I had this experience where I found this great prompt, it was like a D&D Dungeon Master prompt, and I was doing it with my kids. And it was really good, but at one point we're like, oh, how many pieces of gold do we have, and it gave us some answer. And I said, well, actually I am the heir to some nobleman's fortune or whatever, and then we actually have much more.
                                                     

And then, of course, the chatbot DM just like fluidly said, "Oh, I'm so sorry. You are correct," and then we could then buy whatever we wanted. There was no pushback along the rules of the world. And so actually related to that, the very old text-based games like Zork and things like that, there were very clear rules. And it was almost like too rigid because it was like you could only do very, very simple things.
                                                     

And then the other extreme is the world of LLMs where you can basically do anything, but there is no sense of worldness. And when you're actually constructing that, and it sounded to me from the way you were talking about it there was almost this element of hard coding or building in story logic and understanding how worlds operate, which to my mind almost felt like to a certain degree a return to the whole good old-fashioned AI stuff of actually building in some of these symbolic rules.
                                                     

And I imagine it's not quite like that, but I assume you're probably blending a little bit of these things together to make it actually true to worlds. How do you think about all that?

Hilary Mason:                                          

That's absolutely how we think about it. We consider our system to be a fractal combinatorial trope machine, in the sense that the system has a written understanding of what is a bar brawl, what is a dramatic dinner party, what is a himbo. That's my favorite character trope. It keeps coming up.

Samuel Arbesman:                                      

What is that?

Hilary Mason:                                          

It's like Ken from the Barbie movie. It's a bimbo, but a him. Generally very warmhearted. We have a crime boss trope. We have "trapped in an escalating natural disaster," which might be a rock slide or a burning building. All of these exist in our system in a way that is handwritten in its most generic form. And when I say fractal, we have things encoded like, what is the narrative story structure of an overall story?
                                                     

We very much draw from your English 101 like, oh, it builds up. There's tension. There's a release. There's more tension, and then we have a lovely conclusion. And then depending on how the player plays, you might end up doing something like, cool, we're beginning. The system's great at setting up beginnings. Like, hey, we're beginning. We're in a dive bar. This is a real game I was playing the other day.
                                                     

I was hacking a jukebox, and that jukebox revealed that there was some information about a military device. And then the system says, "Cool. Do you want to go on a heist? Do you want to go do that thing?" And we're like, yeah, okay. So off we go. We do our heist. We find out where it's hidden. We figure out how we're going to get to it. We managed to break in.
                                                     

We steal the thing. Everything's going great. And then our system will be like, cool, we've had this much story of this type. Do we want to have more story? How much? What characters should it be? Do you want to go fence the item you stole? Do you want to try to use it for some other larger purpose? Do we end you at a family barbecue to celebrate our success and our relationships with each other like Fast and the Furious style?
                                                     

Do we give you some other... You succeeded, but not quite, so we're going to let you win a bit, but then you're going to see the bad guy off in the distance be like, "I'll get you next time." It is really great at pulling in these tropes in their generic forms at different levels of the story all the way down to if a player's like, I cook an omelet, we'll be like, cool, we have a crafting structure.
                                                     

We have a cooking trope. You need ingredients, things can go wrong. Are we at a point in the story where we want something to go wrong for you? We do put our finger on some of those. And then if so, what? Well, what goes wrong when you're cooking? You don't have a recipe. Your ingredients have gone bad. Your stove catches on fire. It'll come up with an appropriate complication to throw in front of you.
                                                     

And then at the end it'll be like, cool, you accomplished these things. So here we're going to give you the omelet and describe it because of the way you played through that little bit of structure. And this might be an omelet that also is like, oh, this is the gift you're going to give to your friend because you need to console them if you're playing a more social plot.
                                                     

As I'm talking, this probably sounds like a bunch of... I don't know. It is combinatorial tropes here and there, but that is the structure our system pulls in and then presents to players, and then players get to decide. And then to your point about Zork and those sorts of text adventure games, we let you type in whatever you want your character to do or the system will suggest things, because we have some players who don't like to type.
                                                     

That's great. In AI UX design, it's really helpful to show people a range of possibilities. So even if they're going to type something, letting them know what kind of thing they should be thinking about is very helpful. But they can type anything they want and then it'll riff off that. And so it is a little bit like Zork, except in Zork, if you didn't use the correct verb and the exact name of the noun, things didn't go so well.
                                                     

You can use colloquial language, a very fuzzy language to say like, oh yeah, I pick up the coffee, or I take the coffee, or I grab that coffee, or whatever version of it works for you and it should more or less work. I think there's one more thing to mention, which is that we do a lot of work on our world models. What is easy in a world like our reality? If I'm playing a story here and I say, "I take a sip of my coffee," I should not have to roll dice for that.
                                                     

That's no special skill. I'm an adult. I've probably done that every day. But if I say something like I leap into the air, do a backflip off this pipe on the ceiling, and then come up with a witty rejoinder to whatever it is you've just said, probably I have to roll some dice. That's a little risky. My character stats are not always that great in a couple of those dimensions. So we do a lot of work to encode the default world model.
                                                     

And then for each fictional world that's on the platform, we think a lot about how do we make this one feel like itself? How do the rules change here? What do we have to encode? And this is honestly a mix of code and data and even natural language.
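The default world model Hilary describes, where sipping coffee needs no dice roll but a backflip does, can be sketched as a difficulty table plus a roll, in the spirit of the tabletop games she mentions. All names and numbers here are illustrative assumptions, not Hidden Door's real mechanics:

```python
import random

# Hypothetical difficulty table: 0 means an everyday action that always
# succeeds; higher values must be beaten on a d20 plus a character stat.
DIFFICULTY = {
    "sip coffee": 0,            # no special skill required
    "backflip off a pipe": 15,  # risky: needs a good roll or good stats
}

def attempt(action: str, stat_bonus: int = 0) -> bool:
    """Decide whether an action succeeds under the world model."""
    difficulty = DIFFICULTY.get(action, 10)  # unknown actions: middling risk
    if difficulty == 0:
        return True  # trivial actions never go to the dice
    roll = random.randint(1, 20)  # inclusive on both ends, like a d20
    return roll + stat_bonus >= difficulty

# Everyday actions just succeed; risky ones depend on the roll and stats.
assert attempt("sip coffee")
```

Per-world tuning then becomes a matter of overriding entries in that table: in one world a backflip might be routine, in another the coffee might be cursed.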

Samuel Arbesman:                                      

So there's this generic world model. Even across different stories, there is this unified world and then we just tweak it in different ways depending on the universe of each story. That's super. What are the different dimensions of these worlds?

Hilary Mason:                                          

It's everything from what is normal in the sense of what level of technology. We had a bug where, this was almost a year ago, a player got a PlayStation in The Wizard of Oz. No, that's not supposed to happen.

Samuel Arbesman:                                      

Different worlds where animals can talk. There are some worlds where that's very normal. Other worlds, that would be considered particularly strange.

Hilary Mason:                                          

Exactly that. So in the world of The Wizard of Oz, every animal can talk, not all choose to, but that is a rule of the world. Other things we look at are honestly how much violence is in this world? Where do we place this particular world in what we call our genre mix? So that is the kinds of plots that you are going to get as well as the vocabulary of the world. What are the tricks to making the style render?
                                                     

And even though we do work with authors and adapt their novels, we're not trying to mimic their writing style in most cases. We're trying to create language in an expressive style that fits. Just to make that very concrete: you can play in the world of Pride and Prejudice, but if we tried to render in Jane Austen-style English, that would be a very ponderous and probably laborious role-playing experience. So we try to use language that is of the style of that world without trying to exactly mimic her prose.

Samuel Arbesman:                                      

That's a Jane Austen light kind of thing to make it readable.

Hilary Mason:                                          

Yes. So readable, good pacing.

Samuel Arbesman:                                      

But has a few anachronism kind of things to make you feel like you're in that world.

Hilary Mason:                                          

Right. We've been joking all week because we're working on something that's a very Gothic comic book graphic novel world, and every scene has somebody brooding. Every time a character is brooding, we're going to take a drink or put something up on the wall, because it's just too much. It's a very Gothic bit of language. So it's that stuff. And then it's particular rules around: is there magic? How does magic behave? Is there death? Is there a lot of death?

Samuel Arbesman:                                      

And related to these tropes and this fractal combinatorial space of tropes, combined in this very principled and deliberate way: when I think about not traditional AI but the current crop of generative AI when it comes to language models, one of the things that I've seen written about, and I've actually done a little bit of writing about this myself, is the fact that because these models have imbibed so much text, they actually have certain of these tropes or narrative devices embodied within them, but in a very implicit way.
                                                     

And so understanding them is very important to, I don't know, understand the Waluigi effect or some of these other kinds of things that people talk about, where Chekhov's gun might actually be very relevant when you're engaging with these LLMs. But what you're saying is that if you're actually very interested in building a storytelling engine, you have to actually have these things deliberately coded in, as opposed to just part of the vast stew of text that is in there. Because otherwise sometimes you'll get them, sometimes you won't, and the stories won't actually work.

Hilary Mason:                                          

That's exactly right. So we made a decision to encode the tropes manually because, and this is my like, oh, I'm an old machine learning person approach, which is to think about how many do we think we need in the system for this to work for all the variety of stories we might want to tell. And the answer is on the order of thousands.
                                                     

So we can do that over the course of a year or two. That can be handwritten. Sometimes we play with generating them as a shitty first draft, but it's not even good enough. And I think this comes back around to asking, what is an LLM actually really great at in this kind of system?
                                                     

For one thing, understand what a trope is in our game, make sure it is represented consistently, make sure we have the ability and the game design and narrative design folks on our team have the ability to be like, "Oh, that story didn't play the way I wanted it to. I'm going to go edit the core system content, and then play it again and see," because they need the ability to actually create something rather than just hope the LLM gods give them the interpretation of some trope that they want.
                                                     

And then the last bit is realizing that what we do with the system is take that generic representation of the set of tropes in a moment, plus the particular characters, the setting, the world rules, and we smoosh it all together. And here we do use LLMs, in that one step, to translate it into a moment of rendered story. At an intuitive level, we always have unstructured input coming in.
                                                     

We have a very sophisticated Postgres database that represents essentially our game engine, with every character, item, and location and its stats at a programmatic level. And we use that structure to generate the text and to pull the art that folks see in the game in that moment. But for us, it's about realizing and making very thoughtful choices around where we want to have creative control and where we're happy just saying, okay, an LLM, whether we're going to train our own or it's going to be trained...
                                                     

We're going to use, let's say, ChatGPT or whatever it may be, and we're just hoping that the zeitgeist of the data it's trained on encodes the tropes and the stories the way we want, which is not... That's too much risk and variability and, honestly, too much mediocrity for what we're trying to do over here. And I want to say, on the other side, we are not trying to build an AGI chat system that can do any task in the universe. We are trying to build a really incredible fictional, immersive role-play story experience. So for this very narrow task, the approaches we're taking are much more robust.
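The division of labor described here, with structured game state assembled by code and an LLM used only for the final rendering step, could be sketched roughly like this. The names, template, and schema are hypothetical, not Hidden Door's actual engine:

```python
from dataclasses import dataclass

# Structured game state, of the kind that would live in the Postgres-backed
# engine: who is acting, where, and which handwritten trope is in play.
@dataclass
class StoryMoment:
    character: str
    location: str
    trope: str

# Handwritten trope templates in their most generic form; an LLM never
# invents these, it only renders them.
TROPE_TEMPLATES = {
    "bar_brawl": "A fight breaks out around {character} in {location}.",
}

def build_render_prompt(moment: StoryMoment) -> str:
    """Combine structured state with a handwritten trope into the one
    prompt that would be handed to an LLM for prose rendering."""
    beat = TROPE_TEMPLATES[moment.trope].format(
        character=moment.character, location=moment.location
    )
    return "Narrate this story beat in the world's style: " + beat

prompt = build_render_prompt(StoryMoment("Moxie", "a dive bar", "bar_brawl"))
```

The design point is that the trope is handwritten and the state is programmatic; the language model never decides what happens, only how the already-decided beat reads.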

Samuel Arbesman:                                      

Oh, that's great. And I feel like in this current moment with AI, there are many people who act as if the current LLMs and generative AI are the solution for everything, and there are people who are just skeptical across the board. And I feel like what you're saying is: no, each one of these techniques has its strengths and weaknesses, so let's combine them in a way that allows each of those features to shine.
                                                     

I mean, so related to that then, how do you think about the current AI excitement and hype? Is it useful, but complementary to what you're doing? Do you even just shut it out while you're just continuing to build the thing that actually makes sense for you? Are you highly skeptical? How do you think about all that?

Hilary Mason:                                        

It's a good question. I am personally both optimistic and incredibly pragmatic about building machine learning and AI products. And I think that, having been doing this for a very long time, there is a lot of fairly obvious stuff that we as a community should know by now. I just see people making the same mistakes people made 15 years ago, over and over and over again.

Samuel Arbesman:                                      

Is that a combination of youth and historical ignorance, or is it just people are just so excited they've forgotten what we should know?

Hilary Mason:                                          

I mean, I honestly have always been of the opinion that people should be excited and they should be experimental. And I love that. One of the areas I'm personally most excited about is seeing how artists are beginning to take and turn the tech to their own purposes in a way that works for them or to create things that are just really interesting or creepy or whatever that feeling they're trying to create is.
                                                     

I'm not one of those people who says, "Oh, they're going to do it wrong, therefore they shouldn't be allowed." No, everyone should get to play with it. Excitement is great. Experimentation is great. What's frustrating to me is the intense, let's say, waste of resources and capital and human energy and creativity going into trying to do stuff in a way that, if we'd taken a small breath and thought about it, we'd realize might not work out.
                                                     

And I'll give you an example here, because I don't like speaking in vague and negative ways. Google launched an AI summary on their search product that was so bad that it was telling people to put glue on their pizza and all this other ridiculous stuff. They have the talent and the wisdom in that organization to have not done that.
                                                     

So I'm not going to speculate as to why they did, but let's just say that I assume there were certain market and leadership pressures making people go really quick and throwing a lot of that wisdom out the window to just try and get something in the market, and that is not good. And I don't work for Google. I never have. I don't know anyone who's working on that.
                                                     

So I'm sorry if I do and I've hurt your feelings, but y'all could do better. That is one example of an area where if you give it 30 seconds of thought, which we can do right now, you could build something much better. Let's give it a little bit of thought, which is to say, what is the problem we are trying to solve when we're searching for information? Well, for me, I like to think about it as there's a tremendous amount of information out there in the world.
                                                     

I want a system that is not going to give me one right answer, but is going to help me, an intelligent human, understand that landscape of information and incorporate it into what I know already. So I would much rather see, say, a visualization of: oh, you're looking for pizza recipes. Here are different clusters of the way people all around the world think about pizza. Do you want to go into any one of these? I see you're from New York.
                                                     

People in New York tend to have these very strong opinions about pizza. Maybe you want to make it this way. But it is thinking about how do you present and visualize for a person the landscape of information for them to come to some understanding and make a good decision. Not how do you summarize one answer which is going to be wrong because we know hallucinations are not a thing we're going to fix. It's part of the system. It's that thinking.
                                                     

And I think it comes down to, and we were getting at this earlier a little bit, building products around this kind of tech is different than it is around purely deterministic tech. Designing is different. And collectively, everything down to how we hire people and manage them has not been designed for this. So now you're trying to do this in a system designed for building something else. And you layer in all that hype and the market pressure, and then you get a lot of stuff that maybe we collectively could have done better.

Samuel Arbesman:                                      

So, related to that, do you think it's going to be very difficult or impossible for some of these large companies to adapt, and that it's the role of smaller companies to think through, from the ground up, better ways of incorporating these technologies into products, or even just to build organizations better able to handle them? Which is not to say that every startup is going to be good and successful; many, in their speed to action, are going to make the exact same mistakes. Or are there other ways we should be thinking about that?

Hilary Mason:                                        

I think it's multifold. I actually do think larger companies are in an excellent position to do this well, and I say this because of the work I did at Fast Forward Labs, which was acquired by Cloudera. We worked with many folks on the enterprise side in building machine learning and AI products. The one thing that is required there is leadership, and the ability to have a clarity of vision and an understanding of where the value actually is.
                                                     

When the bits hit the business, how is that going to work out for you? What is actually useful? And then a little bit of that creativity and willingness to change things in an organization to get there in a way that works. I don't see a way for anyone to ever do that without really strong leadership, leadership that is itself open to having some technologist involved, or trusted, or at least a voice in the room.
                                                     

So I'll say that. On the other side, I think startups are beautiful because they are diverse, and there are, let's say, even more bad ideas in startups (in my opinion, which could be wrong) and even more good ideas in startups. They're just further out on the distribution of weirdness.

Samuel Arbesman:                                      

There's much higher variance when it comes to the startup world.

Hilary Mason:                                          

Exactly that. So startups will figure this out, but not all of them by any means. There'll be a lot of silly stuff along the way, but that's also okay. Because maybe sometimes something really brilliant does come out of the stuff that if someone knows too much what they're doing, they would never have done it.

Samuel Arbesman:                                      

Sure. Yeah, yeah. No, I like that. And going back to what you were talking about with artists and creators and authors, encouraging them to experiment with these technologies and get them to be excited about these kinds of things, what has been your experience with working with authors or world creators in terms of partnering with your company?
                                                     

I imagine the ones that you're talking to are already predisposed to be excited by this kind of thing, but do you feel as if authors are excited by this possibility of taking the worlds that they're creating and turning them into this new medium? How are these world creators thinking about these kinds of things?

Hilary Mason:                                          

The people we talk to and collaborate with are tremendously excited about it, but let me tell you why, because it comes back to understanding the current landscape in the market, and how it feels like in many communities you are either pro AI, and that means you are pro taking artists' work without their permission and generating stuff in their style. You are pro ripping off writers' work without their permission, with no data provenance, and using it without giving them any recognition or part of the reward.
                                                     

So there's that side. And then there's the other side, which is just broadly anti AI at all because it stands for that kind of very clear, explicit exploitation. And we have heard from people on all sides. When the writers' strike happened, we started getting outreach from some folks who were like, "Hey, I want to exploit some writers over here. Can you help us?" It is not... The answer was no. I have to be very clear about that.
                                                     

But there is not pragmatic rationality in that part of the market, and that is because the business choices that are being made are not being made in a way that reflects essentially the value of the creative contribution. So I want to lay that out as the landscape on which we work. And now I get to talk to people who are creating movies, who are writing books, who are thinking about what they might do. And most folks are not rejecting things out of hand, but they are very skeptical.
                                                     

And this is on us as people building businesses and experiences in the space to really show that there is a way forward that is respectful of that creative work and creative vision. We are at no point and nor have we ever tried to replace writers with machines. That is not the point, and I don't believe it's possible.
                                                     

What we are trying to do is to give fans a way to celebrate and be immersed in the things they love and for those fans to essentially bring what I would call their fanfic energy in and say, "But what if in this world? What is it like to run the shoe store that was briefly mentioned in one sentence of this much broader story? What kinds of shoes would be there?" And maybe they have a whole vision for experiencing that space. That is not something the author would really dwell on.
                                                     

The thing we aspire to create is that feeling for the fan. If you could sit down with that author and you could improv riff together on what you might find in a corner of that world or an adventure they'd never thought about, that's the feeling we're trying to create for them. So coming all the way back to how we talk to folks who are creative and show them what we're doing, it's really saying, look, we love and respect your work. We want to pay you for it. So we license their work.
                                                     

We sign contracts. We give them money and rev share. And we want their fans to be able to say, "Look, I finished this book. It was an amazing 12 hours of my life. And now I want to go spend another 12 or 20 hours in this world thinking about it." And it is not in any way a replacement for the next book. It is just taking that energy and channeling it into a place where they can get excited about the next book. And that is really fun. We're here in July 2024.

Over this next year, what we are going to see, I hope, is people aligning their technical, their product, and their business approaches with that respect for the people who are creating things, who are authors, who are writers, who are making films and media experiences, because there is no other way forward. You war-game out how this plays out.
                                                     

We go full on exploitation. We don't pay writers. We don't pay artists. All we get is the mediocre output of some LLM, and nobody wants that. I'm making a broader statement, but it's not a world I want to live in. And on the other hand, if we say no AI whatsoever, okay, we miss out on this opportunity we have to create and facilitate some of these experiences where fans can really be immersed in a way that just hasn't been possible before.
                                                     

And so yeah, I think this is all about finding that path, designing our business incentives along with our product and our tech to support it, and then just trying to do that really well. And I'm the CEO, so I get to try to do all of those things.

Samuel Arbesman:                                      

And I love the way you put all this, where it's not just AI pablum being generated or whatever it is, but it's this human-machine partnership. We're getting the humans and the computers to work together and making everyone the best versions of themselves. But there's also the partnership between the creator and the fans, and saying that is also deeply important. And if we can use technology to help facilitate that, and allow the fans to feel even more involved in certain things without taking away the ownership of the creators, that's even better. I love that.

Hilary Mason:                                          

This is about that fan energy.

Samuel Arbesman:                                      

Last, I want to change gears to discuss Fast Forward Labs, and you and I have talked about this for many years. One of my other deep and abiding interests is research happening outside of traditional research organizations. Obviously research can happen in big corporate industry labs or university settings or sometimes even startups, but those needn't be the only types of organizational structures.
                                                     

And I feel like Fast Forward Labs was trying to build a slightly different structure to allow people to do research and R&D in a different way. Yeah, I'd love to hear more about how you came up with the structure and how you felt that Fast Forward Labs fit into this now actually a pretty strong and growing ecosystem of non-traditional organizations.

Hilary Mason:                                          

So I founded Fast Forward Labs as a company in 2014. I was looking to create a new mechanism for applied research. Machine learning is my field, so I started there. I did have grander ambitions that the model could apply beyond just that field, but it was realizing that the power of an organization like Fast Forward is that it was essentially a bridge between several communities.
                                                     

And that a lot of the creative energy came at the intersections between different fields, different communities, different people thinking about related problems and being able to bounce them off each other in a structured way. And so with Fast Forward, and I mentioned this earlier, it was applied research, so things we thought could be useful in production the next two to five years.
                                                     

But largely it was designed to be an outsourced research organization with independent taste who could help our customers benefit from that taste to enrich whatever it was they were doing. And so to say that in business terms, the way I designed the business model is that we got paid upfront for access to 12 months of our research. So we would write every quarter a report, which was a technical document. Most of them are still online today for free.
                                                     

And it was written in a way where the first chapter was what even is this thing? These were topics like causal inference or natural language generation, deep learning for image object recognition, very specific capabilities, and then how does it work conceptually? So the idea being you could hand it to your CEO and they could read it and get something out of it. How does it work technically?
                                                     

We assumed you had, if not a PhD in mathematics, at least some technical background. And then: what can you do with it? What are the ethics of it? We hired sci-fi authors to write what could happen if this goes to the absolute extreme, just to help people think about it. And where do we think it's going next? We would make our own prediction. I think what was valuable about Fast Forward was a couple of things.
                                                     

One of which was that it was a shortcut for people to access some of that research capability when they themselves didn't have it in-house, or if they did have it in-house, any larger company just gets so sort of... They stay within their own walls a little too much. So we were good friends to have in the sense of bringing some different ideas and a different way of thinking about things. And we also could take on a lot of risk that even corporate research groups couldn't.
                                                     

And this was one thing I didn't realize when I started the business, but three to four years in was very clear. I had built a business into larger companies' inability to invest in research projects. And by that I mean that in-house they couldn't take on the risk of doing something for six months and having it not pay off, but they could hire us outside to do that initial exploration for them and then decide whether to bring it in-house or not.
                                                     

So it was a lovely business in that too. It was a lot of fun. And the other thing I realized, the team at Fast Forward was what really made the company work so well. And it was a team of folks who came from very different academic backgrounds and came to machine learning and data science. So folks from neuroscience. We had an IP lawyer on the team.
                                                     

We had folks from economics, and tons of physicists. I kept hiring physicists. They make great data and machine learning people. There were a few of us computer scientists around, but a lot of people who really had that quantitative background and thrived in an environment of new ideas, new problems, and lots of people to bounce those ideas around with.

Samuel Arbesman:                                      

I definitely think we need more of those structures and people from diverse backgrounds, as well as people who think about ideas almost agnostic to the disciplinary silos and fields and domains and things like that.

Hilary Mason:                                          

It was obvious to me in 2014 that research in machine learning and data had hit issues, both in academia and on the corporate side, so I was thinking about a structure that could work. Those issues have only gotten ten times worse. And so now is a moment for this flowering of alternative research organizations.
                                                     

And it is really interesting to think about how you fund it in a way that ties that funding to value, but it can't be a quarterly profit metric because that's too short term and it can't be like do whatever you want for 10 years with no measurement of it because it's very, very... I like money as a way to keep score. Are we doing something useful or not? That design, I think we're in a really interesting moment for figuring that out.

Samuel Arbesman:                                      

That's awesome. And maybe on that optimistic note of all these potentials, that could be a great place to end. Thank you, Hilary, so much. This is fantastic to talk about all these different ideas. So I really appreciate it. Thank you.

Hilary Mason:                                          

Thank you.
