"Securities" Podcast: We will observe a battle for the true openness in AI
Few technologies pose as many dual-use challenges as artificial intelligence. The same AI models that create vibrant illustrations and visual effects for movies are the very models that can generate democracy-killing algorithmic propaganda. Code may well be code, but more and more AI leaders are considering how to balance the desire for openness with the need for responsible innovation.
One of those leading companies is Hugging Face (a Lux portfolio company), and part of the weight of AI's safe future rests with Carlos Muñoz Ferrandis, a Spanish lawyer and PhD researcher at the Max Planck Institute for Innovation and Competition in Munich. Ferrandis is co-lead of the Legal & Ethical Working Group at BigScience and AI counsel for Hugging Face. He has been working on Open & Responsible AI licenses ("OpenRAIL"), which fuse the freedoms of traditional open-source licenses with the responsible-use provisions that AI leaders want to see the community adopt.
In today's episode, Ferrandis joins host Danny Crichton to talk about why code and models require different types of licenses, how to balance openness with responsibility, how to keep the community adaptive as AI models make their way into more applications, how these new AI licenses are enforced, and what happens when AI models become ever cheaper to train.