Securities

“I have three girls; the second one is bionic”

Description

Technology’s prime and still growing role in society has led to a crescendo of criticism that it has exacerbated inequality. Critics say that the economic models and algorithms underpinning our apps and platforms are tearing apart our social fabric, fracturing the economy, casualizing labor, and increasing hostility between nations.

But for all the negativity around technology, there is a parallel positive story of how technology can empower people to achieve their best lives. Whether it’s dynamically adjusting insulin pumps that allow diabetics greater freedom to pursue their dreams, or reliable algorithms that can reduce human bias in everything from hiring to dating, technology has also added tremendous value to society.

That’s the theme of “The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future,” a new book by Orly Lobel, the Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy at the University of San Diego.

Lobel joins host Danny Crichton to talk about how her daughter became bionic, why alarmist titles of recent critical tech books belie the comparative advantage of algorithms, the actual black box of human minds, feedback loops in doctor’s offices and the medical professions, and finally … sex robots. Because they have feelings (and algorithms) too.

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
No, I love it, and I'm now kind of getting into the show itself, but you do end with this line that I liked, which was this, "Conjuring a contrarian, constructive vision about the potential of technology," and I love this kind of contrarian, constructive vision was sort of a nice fixed phrase. Chris, are you ready?

Chris Gates:
I'm ready. Three, two, one.

Danny Crichton:
Hello and welcome to Securities, a podcast and newsletter devoted to science, technology, finance, and the human condition. I'm your host, Danny Crichton. This show is all about securities: national security, health security, economic security to name a few, and how those securities are attained and how they are lost. Technology has a particularly outsized role in determining who has security and who does not, whether it's access to cheap drones in Ukraine or the availability of an MRI machine in a hospital. That prime role has led to a growing crescendo of criticism that technology has exacerbated inequality, that the economic models and algorithms that underpin the apps and platforms we use every day are tearing apart our social fabric, fracturing the economy, casualizing labor, and increasing hostility between nations.

Maybe it's doing some of that, but today's guest argues that it doesn't have to be that way, and in fact, technology is the route to a better, more equal world. Joining me today is Orly Lobel, the Warren Distinguished Professor of law and director of the Center for Employment & Labor Policy at the University of San Diego. Her new book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future was just published by Public Affairs. Orly, welcome to Securities.

Orly Lobel:
Thank you. Thank you, Danny, for having me.

Danny Crichton:
You start your book on what is a huge and technical policy question with a very personal anecdote, with a great first line, one of my favorite first lines I've read in quite some time. Quote, "I have three girls; the second one is bionic." How did your own experiences help shape your thesis for the book?

Orly Lobel:
So really throughout my career and growing up in Israel and coming to the United States, there has always been an aspect of technology that has been energizing, fueling equality, creating opportunities. With this line, "I have three girls; the middle one is bionic," it's really about how two years ago the FDA approved a closed-loop insulin pump. My daughter is a type one diabetic. She is now using artificial intelligence, machine learning to be lifesaving. She wears an insulin pump. She wears a continuous glucose monitor, a CGM. Her blood sugars are much safer. There's a lot of this alarmist question of, what are these applications, these digital technologies that are wearable that we interact with all the time? What do they do? They extract our data. They monitor us.

I kept seeing in the debates, and not only in the public debates and the media reporting, but also in the way that policy is being shaped, that the more alarmist focus on harms and risks and wrongs has been shaping our focus and our attention and the solutions that we're creating. And we're really missing and even slowing down this great potential that, again, is lifesaving in some contexts of health, but in other contexts it's just fueling equality and creating more wellbeing and access and financial opportunities.

Danny Crichton:
Well, I think you get at something really specific, which is there's been this surfeit of books published in the last decade in particular. I think of Shoshana Zuboff's Surveillance Capitalism, Frank Pasquale's The Black Box Society, Weapons of Math Destruction. There's a whole number of them which give very clear criticisms of technology, AI and machine learning and algorithms and bias, and the fact that a lot of this is just not transparent, that we don't have control over it. We don't have agency over our technologies or this ambient collection of data that we have going on in our world.

But you kind of argue the opposite, and I think your daughter is a perfect example where, as someone who has type one diabetes, just a couple of years ago, you had to prick your finger multiple times a day, up to 12 or 15 times to monitor your glucose. You had to shoot yourself with insulin multiple times a day: prior to meals, morning and evening. And that meant that you sort of had to devote an immense amount of time to focusing on your own health and protecting yourself, as well as having this sort of challenge socially of you're getting a lunch at school or whatever the case may be. And, okay, I got to go walk over here and just take out these shots and handle it in that context. And with a continuous glucose monitor, and now with these new technologies, smart insulin, and you talk about it in the book, it's now actually adapting. So it's actually learning from your daughter's experience.

It can actually inject insulin just as it's needed in exactly the right amount of dose. And so it's actually very empowering in the fact that your daughter's now able to integrate better socially, is able to spend more time on the stuff that she enjoys, on creativity as opposed to her own health. And so I feel like that perspective is sometimes lost in a lot of these conversations.

Orly Lobel:
Very much so, exactly like you described. It's not just that it takes away some of the time and energy that's devoted to managing things that require calculation. There is really that learning component and the rationality and the accuracy that leverages massive amounts of data; even if we devoted all this time to it, there are patterns that we can't see. And that would be true not just for these health applications, but for so many other areas we're trying to get right. The ability to compute and to see the patterns in so much scattered data that's out there is just something that humans can't do.

You mentioned how all these books are worried about algorithmic transparency, and you mentioned some of these titles, Weapons of Math Destruction, that are alarmist. There are many more, like Automating Inequality and The New Jim Code, kind of conveying that algorithms are biased and technically wrong, and a lot of that is going on. But what we have to ask is about the comparative advantage and the comparative transparencies. It's not enough to just point to something that went wrong. So take the example of self-driving cars, autonomous vehicles. There are these reports of, "Oh, there was a robo-taxi that got into an accident. That means they're unsafe."

And I think what a lot of the debate has been missing is this comparative question. We're not asking if they're a hundred percent accurate or a hundred percent safe or a hundred percent unbiased. I mean, that's what a lot of people are asking, but that's the wrong question. The right question is compared to what we have right now, compared to the status quo of human abilities and human transparency. So humans, and I talk about this in the book, humans are black boxes.

Danny Crichton:
Well, I love this idea of there's the black box of the algorithm in the computer, but also the black box in our head because you make reference in your book to a famous landmark paper by Sendhil Mullainathan. His famous resume experiment, randomized trial in which he algorithmically randomized names, backgrounds, resume line items with a little bit of valence around gender and race, and basically showed that identical resumes with just slightly different sounding names actually leads to very disparate response rates, both from just a cold email and whether you get a call back, whether you actually get a job. And that was, I almost say, what? Almost 20 years ago. I think it's almost-

Orly Lobel:
I mean, it's 20 years ago, but it has been replicated all over the country and all over the world by so many other researchers. And, frustratingly, it's so stagnant. We continue to show all this bias.

Danny Crichton:
And what I think is interesting is, so what we found is there's an immense amount of bias. We've also seen this with the Lilly Ledbetter case at Goodyear, in which women, statistically and also anecdotally, are often and consistently underpaid compared to equally qualified men. What's interesting, and what your argument is in The Equality Machine, is rather than sort of say, "Well, we're just replacing one black box with another," we're actually replacing the human organic black box that can't really be solved with a more transparent, open black box, which is the algorithm, something that we could actually monitor, evaluate. We could collect the right kind of data. We can actually plan what kind of data we need and actually prove that something is unbiased, which we can't really do in the human context.

And, in particular, and I think because of your employment background, you sort of center the early part of the book on this topic. You talk about a company called Textio, which has been doing this around job postings and evaluating the language used in job postings. So it finds, obviously, that jobs that mention strength or competition tend to attract men more than women. But for instance, having more bullet points will lead to more men and fewer women applying for the same job. And part of my question to you is, how much of this is sort of reproducible? I think one of the things you're sort of getting at is you almost need a system to improve all this as opposed to the one-off experiments we've seen in economics and a lot of the social sciences.

Orly Lobel:
The great advantage with these algorithms that are being developed and designed for particular contexts like diversifying the workplace, they're adapting, and they can be tested. There's this digital paper trail that can be examined. Another great example that I think about a lot is Regina Barzilay who's at MIT, and she's one of the people who first developed these algorithms that can screen for breast cancer. They're kind of replacing radiologists.

Danny Crichton:
I believe it was Regina around the cancer diagnostics, which was, she had multiple doctors look at these mammograms. They missed in one case, then they had an MRI, and it was like, "Well, the cancer's everywhere. Emergency." Then they did another follow-up test and it was like, "Nope, that was a false positive." And so you're sort of bounced around the system in which there's no learning. And what you really emphasized, I think, in the book is to describe a dichotomy between Google and Meta, which knows more about your preferences than you do yourself, to going into the doctor's office. And it's like every MRI, every test, every diagnostic is completely independent of each other. None of them are going back into an algorithm. None of them are going into the black box to actually improve it.

And so your doctor's feedback loop, assuming you have a great doctor who's learning and trying to always improve their craft... I was always told your doctor's only as good as they come out of medical school. And it sort of gets locked in. And so, to me, adding in a lot of that AI and algorithm in there is super important to improve medical care. And then you get to something really important, which is, I see this criticism all the time of like, "Well, the top experts can beat a computer." We see this in poker. We had Annie Duke on earlier this year. We see this in a lot of fields where the top specialists, the top expertise does beat technology.

The problem is, as you point out, rarely does anyone have access to that. They're rare people. They're standard deviations above the norm. And particularly if you look at medical care across, I think we're up to 7-8 billion people now, heading to 10 billion by the end of the century. All of them need great medical care. And that's something you can actually reproduce with algorithms, and I think it's particularly opportune because we have a radiologist shortage in the United States right now. Tests are now weeks behind schedule. We can't even get a look at our tests, let alone two.

And you sort of reserve the last few chapters of your book for the intimacy side, love and romance, and it's twofold. One is on this piece around dating apps, which obviously are maybe the most canonical sociotechnical systems, where algorithmic design becomes really crucial. You have all these biased humans who have very strong preferences and tastes that they're looking through on these apps. I'd be curious your thoughts there. And then you have a whole section on sex robots, which is something that we have not been able to talk about on the program somehow. We've missed that in our programming, but-

Orly Lobel:
It's about time. Yeah.

Danny Crichton:
It's about time. Right here, right now. How does The Equality Machine influence both dating apps as well as this rise of sex robots, intimate bot friends and this kind of AI-driven romance?

Orly Lobel:
These are complicated philosophical, moral dilemmas that we face all the time in our relationships. Again, because the book is looking for opportunities, I really look for these ways that companies can think about what they're doing in a more thoughtful way. When we think in a nuanced way about design and opportunities, it turns out that having this access to such a greater pool of people that you wouldn't otherwise encounter by design... I kind of compare the different ways that different apps make race more salient versus saving that for later encounters. And it just turns out the evidence is very clear that when people give themselves the opportunity to look more broadly, they actually can overcome some of their biases and diversify their preferences, shape their preferences differently. So there's kind of that nudging happening.

Now with sex robots, it's again so interesting to see how skeptical we are. I interview, and I look at a pattern of, female reporters, women reporters who identify themselves as feminists and are very skeptical. And there's a pattern in the stories in the media in the past couple of years: there's a reporter who says, "Oh, I'm not going to enjoy this experience." And then she travels to... Actually, here in Southern California, we have some of these sex robots that are being designed and are getting better at using facial recognition and emotional recognition to learn their partners and to feel more realistic. The hardware is getting better and more realistic.

And there's suddenly this eureka moment where, "Oh, maybe I can enjoy it." So really I'm kind of calling for much more openness and much more imagination and opening ourselves up to those opportunities. And, in general, in technology, I talk about how the train has left the station. The technology is there, so we might as well create it in ways that benefit us, that enhance our wellbeing, our pleasure, our enjoyment, our opportunities, our inclusiveness, all our goals and values that we really care about.

Danny Crichton:
I will say that, while it's not the headline, there are so many deep insights beyond sex robots and the engineering of hookup culture, from medicine to economics and everything in between. And I would just summarize this from your last two pages, but your goal is to sort of conjure a contrarian, constructive vision about the potential of technology, and I just think that's really exciting. So Orly Lobel, new book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future. Orly, thank you so much for joining us.

Orly Lobel:
Thank you for having me.
