Lux Capital's 3rd Annual AI Summit: A Canvas for Creativity
Last week, Lux Capital brought together nearly 300 of AI’s builders and dreamers—CEOs and coders and researchers alongside musicians and filmmakers and visual artists—for the firm’s third AI Summit. If last year was about intelligence as the new critical infrastructure, this year was about intelligence as expression—AI not just as a system that thinks alongside us, but one that creates with us too.
That idea was embodied by the day’s opening act, when a violinist walked out and played Vivaldi’s Winter, cleverly remixed by AI music platform Suno. Lux Partner Grace Isford then welcomed everyone and announced the conference’s theme: the AI Canvas. “With the AI foundations in place,” she said, “the question shifts from can we build to what should we build?”
To kick off that exploration, Lux Partner Brandon Reeves talked to Cognition CEO Scott Wu about how the company went from a Friday 11:59pm email blast sent out to a researcher listserv after Sam Altman was (very temporarily) ousted from OpenAI, to a premier applied lab building AI software engineers. “A lot of labs focus on code, but we focus on software engineering,” Wu said, describing how their model Devin is a true collaborator, akin to a small army of junior software engineers that works with your team.
Check out Brandon and Scott’s conversation below, where they go on to discuss how reinforcement learning is “horrible but the best thing we have.”
In the next session, Sakana AI CEO David Ha argued that the AI field is too important to be so concentrated in just the Bay Area and Beijing, and that Japan, with its proud history of cutting-edge consumer technology, needed its own model with its own collaborative sensibilities. He pointed to a tweet of his from June 2023 where he made this point and said, simply, “I will make this happen.”
Now Sakana is working directly with many Japanese companies and conglomerates to help address the challenges of an aging workforce and declining productivity. Check out David Ha’s “Stroke of Genius” talk below:
To continue the theme of “how broad is the AI Canvas?”, we heard a fascinating conversation among several of the world’s top frontier AI researchers: Professor Kyunghyun Cho from NYU & Genentech, Professor Shirley Ho from the Simons Foundation, Polymathic, and NYU, and Professor Sasha Rush from Cornell Tech and Anysphere.
Rush opened the panel by remarking on the breadth and depth of the people at the summit (“like NeurIPS but fewer people”) before steering the discussion toward grounding LLMs in physical reality, like modeling exploding stars or nuclear reactors, problems Ho believes AI will soon be able to tackle alongside researchers, given the right data and a few small breakthroughs. Cho believes the big breakthrough will come when we make the process of discovery itself more systematic, so AI can actually make those discoveries for us. You can see their full conversation below:
Next up was a talk that tldraw’s Steve Ruiz described as “kind of on the nose” for the AI Canvas theme. He described how tldraw was “the first vibe coding tool,” one that lets you build and iterate on whatever visual you can draw or dream up. Come for diagrams of Jevons paradox as it relates to TikTok, stay for an explanation of why AI agents are best described as fairies.
Moving from drawing to cinema and sound, the following panel on AI Expression featured Ale Matamala, co-founder of Runway, Rebecca Hu of Suno, and the filmmaker Kirby Ferguson, in a conversation moderated by Alex Konrad of Upstarts Media. These tools, Hu pointed out, are democratizing the arts: many people trying out Suno have never played an instrument before, and matching personal lyrics to new sounds gives birth to a song that was previously trapped in their head. The panel discussed how these tools make iterative experimentation easier, and how they’re trying to strike a balance between serving a broad base of users and ensuring the tools offer fine-grained output controls for advanced practitioners and artists.
Check out the full discussion below, including whether Kirby sees himself as a “black sheep” in the filmmaking community for using AI tools:
Afterward, Justin Barber from Glenwood came out to introduce four virtuoso visual artists: Rita Agafonova, Valentina Calore, Calvary Rogers, and Wendi Yan. Their works, all collaborations between artists and algorithms, explored the many ways AI can serve as a new kind of canvas. During the break, the conference guests scoped out the gallery while debating and connecting over where their worlds of art and intelligence are starting to converge.
When we returned, Vipul Ved Prakash and Tri Dao from Together AI came on stage for a conversation moderated by Ankit Mathur from Databricks on the technical foundations that make all this innovation possible. After all, if AI is the new canvas, Together AI’s work is the easel—the infrastructure which everyone needs to paint.
They discussed the trending costs of AI and compute, the popularity of open-source tools, Nvidia’s one-year release cycle and the opportunities left in hardware, and why we currently have only one or two percent of the compute we’ll need over the next ten years. Check out the full video below to find out why Together AI believes we’re entering “the largest infrastructure construction cycle in human history.”
Google DeepMind’s Matt Johnson then came up to do a live demonstration of JAX, which he described as “turpentine” in the metaphor of the day, or, perhaps more accurately, as a “numerical computing library for high-performance large scale machine learning.” JAX powers not only Gemini but everything from AlphaFold to Anthropic to Apple products. JAX, Johnson argued, gives you AGI, by which he means, of course, awesome scalability, great expressiveness, and incremental composable control. His fast-paced talk below gets into all the technical details.
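To give a flavor of that composability, here is a minimal sketch using JAX’s public API (an illustration, not code from Johnson’s talk): transformations like jax.grad and jax.jit are functions that take functions, so they stack on top of ordinary numerical code.

```python
import jax
import jax.numpy as jnp

# A plain numerical function: mean squared error of a linear model.
def loss(params, x, y):
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)

# Compose transformations: differentiate w.r.t. params, then JIT-compile.
grad_fn = jax.jit(jax.grad(loss))

# Toy data, illustrative shapes only.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))
y = jax.random.normal(key, (32,))
params = (jnp.zeros(3), 0.0)

grads = grad_fn(params, x, y)  # gradients for (w, b)
print(grads)
```

Each transformation returns a new function you can wrap further (vectorize with jax.vmap, shard across devices, and so on), which is one way to read the “incremental composable control” he described.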
To explore how AI is shaping the “living canvas,” Lux Capital Partner Tess van Stekelenburg moderated a panel with Kathleen McMahon, CEO of Valthos, Sam Sinai, co-founder of Dyno Therapeutics, and Kenny Workman, co-founder of LatchBio. They discussed open source and viral genomes, how AlphaFold has saved scientists “billions of years of time,” and how AI technology can help turn “five to seven dials at once,” potentially enabling custom, individualized medicines based on a patient’s specific biology and needs.
That led to McMahon previewing the next day’s announcement of Valthos and their mission: to build next-generation biodefense that will counteract the inherent asymmetric advantages of our adversaries. After all, LLMs make it easier for novice terrorists to come up with novel pathogens, and while bad side effects don’t deter a pathogen, they do prevent a cure. You can check out the full explanation and discussion below.
From flesh to steel, the next talk featured Lachy Groom and Karol Hausman from Physical Intelligence in conversation with Will Knight from Wired. A live video feed of robots folding laundry, unlocking chests, and lighting matches played as the two discussed the major breakthroughs they’ve made in generalization (performing tasks in completely new environments) and the many still to come in reliability, speed, and inference-time reasoning. Scaling laws are hard to pin down in robotics (there’s no clear performance metric, unlike with language models, and there are many types of robots to boot), but when asked about commercialization, Lachy was quite confident. “If you build scalable general intelligence,” he argued, “there’s no product-market fit question.”
Next up was Naveen Rao, the serial entrepreneur and CEO of the brand-new company Unconventional AI. Naveen took us on a journey back through the history of computing, from the mechanical “Babbage machines” to the analog “Vannevar Bush Differential Analyzers” to the ENIACs that evolved into today’s digital computers. Naveen made the controversial, “unconventional” case that as we approach an energy wall, we should consider returning to analog computing to build a modern, more efficient kind of computer architecture.
The last panel of the day featured Lux Capital Partner and Co-Founder Josh Wolfe and Dan Grossman from Amazon, moderated by Dan Primack of Axios. Primack couldn’t resist asking Grossman about the recent AWS outage and whether Amazon was too big to fail; Grossman replied that one of the interesting stories there was about complexity, resiliency, and redundancy, given how differently the outage hit different customers. They went on to discuss the partnership between Amazon and Anthropic (Wolfe speculated: “If I’m a betting man, which I am, I would bet that Anthropic ends up owned by Amazon at some point”) and how the history of capital expenditure in tech shows that users benefit in the end, but only after enormous amounts of capital destruction along the way, as “everyone individually rationally does something that is collectively irrational.”
Check out the full conversation, including a breakdown of the incentives behind the poachapalooza AI talent wars, below.
The day closed out with a series of lightning talks from a variety of researchers and experts, including Ankit Mathur from Databricks, Ryan Daniels from Crosby, Karina Nguyen (an ex-OpenAI researcher), Rahul Sengottuvelu from Ramp, Zongheng Yang from SkyPilot, and Tyler Angert from Patina. The range of the talks, from legal briefs to storyboards to spontaneous software, was representative of the wide variety of people attending the conference, who then went on to connect and collaborate over drinks and bites at the concluding happy hour.
For Grace Isford, who conceived the entire event, that was the point. “The most valuable innovations emerge at intersections,” she argued, “when researchers meet operators, engineers meet artists, and industry leaders meet scientists.” The summit proved that the space we make for those intersections is itself a canvas—one wide enough for all of us to paint on together.
You can also watch the videos on the AI Canvas 2025 YouTube playlist.