Securities

Lt. Gen. Jack Shanahan on rebuilding trust between Silicon Valley and the Pentagon

Description

As the birthplace of semiconductors and computers, Silicon Valley has historically been a major center of the defense industry. That changed with the Vietnam War, when antiwar protesters burned computing centers at multiple universities to oppose the effort in Southeast Asia, and with the rise of countercultural entrepreneurs who largely determined the direction of the internet age.

Today, there are once again growing ties between tech companies and the Pentagon as the need for more sophisticated AI tools for defense becomes paramount. But as controversies like Google's involvement in Project Maven attest, there remains a wide chasm of distrust between many software engineers and the Pentagon's goals for a robust defense of the American homeland.

In this episode of “Securities”, host Danny Crichton and Lux founder and managing partner Josh Wolfe sit down with retired lieutenant general Jack Shanahan to talk about rebuilding the trust needed between these two sides. Before retirement, Shanahan was the inaugural director of the Pentagon’s Joint Artificial Intelligence Center, a hub for integrating frontier AI tech into all aspects of the Defense Department’s operations.


We talk about the case of Project Maven and its longer-term implications, the ethical issues that lie at the heart of AI technologies in war and defense, as well as some of the lessons learned from Russia’s invasion of Ukraine over the past year.

Transcript

This is a human-generated transcript, however, it has not been verified for accuracy.

Danny Crichton:

Hello and welcome to Securities, a podcast and newsletter devoted to science, technology, finance, and the human condition. I'm your co-host, Danny Crichton. And today, we have Lieutenant General Jack Shanahan, retired after a 36-year military career, which he finished as the inaugural director of the US Department of Defense Joint Artificial Intelligence Center from 2018 to 2020.

Jack, thank you so much for joining us.

Jack Shanahan:

Thank you, Danny, and really happy to be here with you and Josh for this conversation.

Danny Crichton:

Absolutely. And we also have, joining us from New York, Josh Wolfe, founder and managing partner of Lux Capital.

Josh, welcome as well.

Josh Wolfe:

Great to be with everybody.

Jack, I think it was on the tail end of a whirlwind tour of Southeast Asia, care of Tony Thomas, T2, who's since become a venture partner here at Lux, and I was doing a read-out in the Pentagon and got to meet some incredible folks, and you were one of them. And you were really prescient about the role of artificial intelligence and being able to do signal detection and look at patterns of life.

You were instrumental in Project Maven, which bore with it, in the commercial world and arguably in the cultural zeitgeist, a degree of controversy. So maybe we can dig into that. And then critically important in helping to stand up and run the JAIC, which is the Joint AI Center. So I'm not sure where we start on that, but what were the earliest kernels for you of recognizing that artificial intelligence in modern warfare was really important?

Jack Shanahan:

Yeah. So a couple of things. First, Josh, I think back to that meeting, and I've never forgotten it for a lot of reasons. One because of-

Josh Wolfe:

Was I wearing something ridiculous?

Jack Shanahan:

No, just you were somebody that I would say was way ahead of the times in terms of where the Defense Department was in these technologies. And Dave Spirk, who's this uber connector, brought you in and said, "Okay, you need to hear this."

And what I would say, if I remember this right, and I don't want to put words in your mouth, but Josh said something to the effect of, you were stunned by your visit. And when I say stunned, I don't mean stunned in a good way. I mean, stunned by the lack of technology that was evident in these combat situations. Because we did have ongoing combat operations in the Philippines at that time, and everybody recognized the need, this thirst for this technology. It just wasn't available to them.

So you had some critical comments about what we needed to do and it was music to our ears. Because we felt, at that time, we were a little bit insular and not getting this wide reception in the Department of Defense, but that's okay. We were proud to be, what I would say is, this small, elite, fast-moving team.

And sort of the kernels that you go back to, Josh, for me it was just the recognition that we couldn't do business any other way. Those early days of Maven, before we even had the Joint AI Center as a vision, were all about intelligence, surveillance, reconnaissance. How do we take these just incredibly massive amounts of data that every intel analyst had to look at, mind-numbing work, hours and hours and hours a day, and do something different with it? To allow them to, I say, augment, accelerate, automate that collection, that exploitation, that analysis.

And it was just so clear to us there was nothing in the Department of Defense that was ready for fielding. And I don't disparage what was going on in the research laboratory. Some of the best research, still today on AI, but it was in the research phase. It was what we call low technology readiness level, just not ready to be put into the hands of operators. You saw this, you saw the need for this on the ground, where we needed to put something in somebody's hand as quickly as possible. So really, that was the genesis for Project Maven.

It was, we can't find this in the Department of Defense, so where is it? Immediately obvious to us: it was available in commercial industry and Silicon Valley, and that's where we went. And we went very fast. As you know, Colonel Drew Cukor ran that team. There was nobody like him, I would say, in those days, and I really haven't changed my assessment. A classic disruptor and innovator who said, "We're going to get this done, one way or another."

He was passionate about it. He had time in combat, downrange in Iraq, and he said, "We just cannot allow this to happen." And that's how we got our jumpstart on Project Maven.

Josh Wolfe:

I would've hired him in a heartbeat to any one of our high-tech companies. He was a very shrewd, very smart, technologically sophisticated systems analyst, able to look at the entire picture.

One of the things, and this is unclassified, was looking at the patterns of life. So you have an enormous amount of information coming off of aerial imagery platforms, whether that's satellite companies like ours in the past, like Planet Labs, that were able to do 50-centimeter resolution and get roughly 30 frames a second of video. And then, others that were taking real-time video on more sophisticated government-only platforms.

There's a question of what this idea of pattern of life is, and where you're looking for these aberrations. So in an unclassified way, can you explain to the audience what kind of things you would be looking for?

Jack Shanahan:

Yeah, that's a really important point too because this is one of those things we learned in Project Maven. We weren't surprised by it, although I would say maybe we were pleasantly surprised because we hoped this would happen, is we developed these technologies. I mean, we didn't develop them. We went out to commercial industry, we got these startup companies, we brought them in, we got Google on contract and said, "Here's the problem we need to solve."

And at that time, it was two things, computer vision and natural language processing, processing these enormous amounts of information. But we did it for the intelligence community. What we rapidly found out is we put these technologies in the hands of the special operators who were using the intelligence, but they were not intelligence analysts, and they found different uses for it. And one of those uses, so now you had, I would call this, a nexus between the operators and the intel analysts about pattern of life. What do we mean by pattern of life?

Well, I want to look at an area of interest, and we'll call it just a named area of interest. There's something happening there that we're very interested in. And maybe there was an IED the previous day, and we want to be able to trace that explosion. I say, IED, an improvised explosive device. Something blew up, something maybe killed innocent people. We want to trace that back all the way to where that vehicle came from.

What village was it in? Who was there with them? What weapons do they have? What is the pattern of life that tells us more about what we're dealing with? Rather than what they had to deal with at that time, which was one specific incident at one specific point in time and space, with no understanding of the bigger context. This was about understanding that larger context. So critical when you're talking counter-terrorism, counter-insurgency operations. So the operators and the analysts started working together. Each sort of tailoring their own screens, so to speak, with these Maven capabilities, but each being able to share what they saw on their respective screens.

It's a powerful way of doing business. And I don't have to tell you, you understand the importance of putting minimum viable products in the hands of people as soon as possible, and letting them figure out how to use them. We didn't pretend that we knew the best ways to use these technologies. What we did know is that they knew how to do it, and if we gave it to them, they would figure it out faster than we ever could a couple of thousand miles away in the Pentagon.

Josh Wolfe:

So for people's benefit, if you imagine a digital artifact or a digital asset, which could be some particular marking in a video, it could be a white van, a blue van, a motorcycle, and you see that going a certain way, and every day from 7:00 AM to 8:00 AM somebody's doing a commute. And all of a sudden, there's an aberration in that pattern of life. Instead of going to work seven days a week or five days a week as they do, they suddenly are veering off.

And then they go into a building, and they stop and they park there, and they're met by another vehicle. And if that vehicle also deviated in its path from its pattern of life, you might suggest that subject A and subject B were meeting and that something was being exchanged. And maybe it was benign and innocent, or maybe, to the point here, you knew that there was an event like an IED explosion.

A term that T2 and many others use, this idea of left of bang. How do you get before the event, if you were to rewind the DVR of life, and understand what were the patterns and the sequences of events that led to that? And how might you disrupt that to help prevent human suffering and loss of life in the future? So yeah, very profound.

Jack Shanahan:

And I was just going to say, Josh, to your very point, this is what... When you say, what do AI and ML, what do machine learning and artificial intelligence, do extraordinarily well? Things like pattern analysis, things like finding signal in noise, and most importantly, as you just said, aberrations. What's different?

As a human, I just get exhausted. An analyst can only look at a screen for so long, I'm going to miss something. It's just going to happen. Whereas, the machine does what the machine does very well. And it says, "You want me to look at this. When I find that thing you wanted me to look for, I will let you know."

And as we got further and further into this journey, and we never quite got there, because this becomes quite a bit more difficult, is to do what you were suggesting: now can we put context on top of that? And even more than that, can we get to this idea of reasoning? Who's doing what, for what reasons, and where? This is where you say left of bang. I want to take this as far back as who's funding these people? Where are their weapons coming from? What is Pakistan's involvement in this? And back and back and back, almost to the Stan McChrystal way of looking at the world. And Tony Thomas was also leading the way in understanding this pattern-of-life development.

And that's what we found, is these capabilities, these technologies could do that, as well as analyze enormous amounts of just data coming off of... It could be scraps of paper in somebody's pocket. It could be digital data. It could be CDs, whatever it was. You needed something to be able to analyze that as closely to real time as you could, because it's very possible there was another imminent threat coming to someplace in the country.
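
To make the pattern-of-life discussion concrete, here is a minimal sketch of one way aberration detection can work: learn an entity's routine from historical tracks, then flag the hours where today's movement deviates from it. The coordinates, the hourly-mean baseline, and the distance threshold are all illustrative assumptions, not Project Maven's actual method.

```python
# Minimal pattern-of-life sketch: all data, names, and thresholds here are
# illustrative assumptions, not any actual fielded system.
from collections import defaultdict
import math

def build_baseline(tracks):
    """tracks: list of (hour, (x, y)) sightings of one entity over many days.
    Returns the mean observed position for each hour of the day."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for hour, (x, y) in tracks:
        sums[hour][0] += x
        sums[hour][1] += y
        sums[hour][2] += 1
    return {h: (sx / n, sy / n) for h, (sx, sy, n) in sums.items()}

def aberrations(baseline, today, threshold=5.0):
    """Flag the hours where today's position deviates from the learned routine."""
    flags = []
    for hour, (x, y) in today:
        if hour not in baseline:
            flags.append((hour, "no baseline for this hour"))
            continue
        bx, by = baseline[hour]
        dist = math.hypot(x - bx, y - by)
        if dist > threshold:
            flags.append((hour, f"deviates {dist:.1f} units from routine"))
    return flags

# A commuter who drives the same route at 7-8 AM, then one day veers off:
history = [(7, (0.0, 0.0)), (8, (10.0, 0.0))] * 30  # thirty ordinary days
today = [(7, (0.0, 0.0)), (8, (10.0, 12.0))]        # sudden detour at 8 AM
print(aberrations(build_baseline(history), today))  # flags the 8 AM deviation
```

A real system would model whole routes and learn its thresholds from data, but as the conversation notes, the machine only surfaces the "what's different"; the analyst still supplies the why.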

Josh Wolfe:

I was super impressed with the creativity, as you were just describing the sort of panoply of signals. It could be something that's analog, it could be something that's digital, but it really takes an act of imagination, of creativity, to think about how these things are interconnected, moving the puzzle pieces together.

And that was something where the human in the loop, the human intelligence piece of this, was not going to basically be coming from the machine. The machine might be able to process and analyze and find the aberrations, but it was really the human that was saying, "I wonder if there's some link between this and this," and then turn to the machine to be able to prosecute and interrogate the data that's coming off of that.

Jack Shanahan:

What do humans do so well? This is why we pay people to do this rather than machines: the deductive, inductive, abductive reasoning. But they just didn't have time to do those things. Humans are good at logic and getting through a problem and understanding what the bigger context may be. So this idea, and this gets into a much, much bigger concept of human-machine teaming and how you do this right, we're not even close to figuring that out right now.

I'd say commercial industry is doing a little bit better at that in many respects than the military is, just because we're still far behind. But get through that data as quickly as possible, and as you said, let's put minds at work on that extra piece. So how do we provide the context to that?

Josh Wolfe:

Let's talk a little bit about that human and machine interaction. This goes back all the way to Kasparov and IBM's Deep Blue, where we had IBM's chess machine and humans. And basically, the conclusion was machine might beat human, but human plus machine will beat machine. That there was sort of this pairing, that man and machine together were going to be better.

There's all kinds of different dimensions of this. There's the analytical, there's the intelligence piece, and then there's the ethical piece, which is one that is really the source of profound debates in society and one that was of course involved in Maven as well.

So let's talk about the collaborations between man and machine, and the ethical layer on top of that.

Jack Shanahan:

Yeah. So important to go down this path. First of all, I have become the strongest possible proponent of saying something to the effect of, we have to completely redesign how humans and machines interact in the future environment.

We have gotten away with it in the past because, whatever the machines' flaws, humans could fix those flaws. And humans and machines weren't necessarily interacting in the way that they're going to interact in an AI-driven world. It was: the machine reports the result, the human interprets those results, the human takes action. And a lot of times, we built systems in the military that were not, say, optimized for user interface and user experience. They were very good at what they did, but they were not very sophisticated in UI/UX.

It's going to change, and I believe will radically change in the future. So we do have to think about how we redesign these systems so that humans and machines are truly in some sort of partnership. And I'm not saying that to make it an anthropomorphic idea. It is just that humans and machines are going to be constantly interacting.

There'll be different roles, responsibilities, and interdependencies between them. In some cases, the human will do almost everything. In other cases, the machine has to do almost everything, and human intervention can actually lead to a worse result. But in many cases, I think this is a bell curve, right? Most of the bell curve, the middle, will be humans and machines constantly interacting, and we have to think about what that interaction looks like.

And then this other point, Josh, which I put just at the top of my list always, is this ethical piece of it. How do we redesign these systems to be responsible? And what do I mean by responsible? Safe, lawful, and ethical. Now, you could get into a whole philosophical discussion of what we mean by ethics. But I would tell you, in the Department of Defense, one of the things I'm most proud of is pushing those responsible AI tenets out the door. The Secretary of Defense signed them because we understood.

First of all, people don't generally want to trust the United States military with AI, unfair as it may be. There's a lot of lingering distrust there about what we're going to do with these. So we had to prove we're serious about this. We had to prove we were being transparent about this. We were trying to prove that we were going to adhere to international humanitarian law no matter what kind of machine it was, AI-enabled or otherwise.

So the idea was to set a foundation, and a pretty high bar: if we're going to use these in combat, we want them to meet this bar before we ever consider fielding them. And that brings in other things like test and evaluation, and how seriously we have to take test and evaluation. And I think there have been some shortcuts in commercial industry in T&E, test and evaluation, over the years that we couldn't afford because of the gravity of some of these systems.

Now, there's a difference between a business function machine and one that is involved in making life and death decisions by humans. They're very different there. But we have to put that bar high to begin with, and I thought we did that very well. And from what I can tell, being on the outside now, the department is continuing to take that very serious approach to these technologies.

Josh Wolfe:

I have to tell you, on the distinction you just made in commercial: some companies are testing autonomous vehicles and it's causing loss of life, as people trust these things or are induced to trust them. And it's just not ready for primetime on the highways. And you haven't seen the regulatory apparatus crack down on that.

Whereas, I was super impressed inside one of these operating centers, out in an undisclosed location, and I'm watching drone footage and I'm watching drone pilots. And you've got somebody there that is flying and somebody there that's operating the potential munitions and kinetics. And there are these two, three other people on top of them.

And I'm like, "Are those engineers? Are they visual?" No. They were lawyers. And they were there to make sure that there was a different technology, which was the Code of Law and the Code of Ethics, on how they would engage or not engage, and the hierarchy of those decisions and how they got made. And I was just blown away because I just assumed that you had somebody there that was effectively playing a modern video game, and when it was their time to make a call that they would make it.

But no. There was this legal apparatus, this ethical apparatus, which gave me enormous inspiration and comfort. And I don't think the public really realizes that. So maybe talk about those rules that you helped to define, and that the DOD signed off on.

Jack Shanahan:

Yeah. And the audience, of course, can't see me, but I was nodding my head. Because I knew exactly what you were going to say. They're lawyers, and nobody understands the role that these lawyers play. And the lawyers' role goes far back in the development of a system.

And I've been involved in a couple of different projects writing about this: for these AI-enabled systems, lawyers, just like they'll be on the floor of operations helping commanders make those final decisions about whether or not to launch a weapon, will also be involved in the design phase and the development phase and the testing phase.

Now, it won't necessarily be a heavy thumbprint during those phases. But somebody has to have some degree of oversight over what these systems are going to do, how they're going to operate. There are many, many similarities between an AI-enabled system and other systems, but there are some crucial, and I would say fundamental, differences, especially when you start looking at future online learning systems.

I mean, these are still fairly deterministic today. Other than some behavior under stochastic conditions, where you may get some weird results, you still get, for the most part, largely deterministic results with some of these machine learning systems. But we're going to get to a future where it's going to be more than that. So we have to take these into account from the very beginning.

And to make a long story very short, the Defense Innovation Board, who had been our mentors from the very beginning, who had been very critical of the department for not recognizing the revolution in AI and ML that was happening in commercial industry, got us going, helped us get moving with Maven, and was sort of right by our side helping us out. And they did an 18-month project to say, what should be the principles of responsible artificial intelligence?

They came up with five. They turned them over to us. We didn't do a whole lot to change them; we just made them a little bit more specific to military operations. Turned them over to the Secretary of Defense, and at that time, he signed them. And there are five: responsible, equitable, traceable, governable, reliable, if I remember right. I've never forgotten them. And they've gone a step further, since I retired, to take those forward.

As I always used to say when I was interviewed about this, as hard as it was to get to those five principles, it was easy compared to how do you actually implement them. And we started working on that. We started looking at how do we put contract language in to say, "I can't hold you accountable in your contract, but what we'd like to do is say, can you get us to the governable principle?" How would you do that? What about reliable? What about accountability?

And so, we were in this sort of iterative process that continues to this day. And this is where, for those of us who have been in the military a long time, like, as you said earlier, Josh, 36 years for me, there seems to be a pretty deep divide at times between those who have never seen the military operate and those who have.

And Josh, you saw it on the operations floor. Someone will be held accountable when things go awry in a battlefield or a battle space. Could be in cyberspace, could be in space above us. Someone will be held responsible and accountable. A lot of people seem not to trust the military and believe that will happen. They say, "Well, the machine did it. Are you going to sue the software maker, the developer?" No, of course not. Commanders get held accountable. Have we always done that perfectly in the United States military? No, but that's a separate problem of holding people accountable.

We will not have this idea of a responsibility gap where machines get a break because they're AI-driven and no human would know how those operate. That's unacceptable for us, especially in this area of life and death decision making. Humans make those decisions, machines do not make those decisions. And when something goes wrong, somebody's going to be held accountable. And that's the way it should be.

Josh Wolfe:

An extreme and profound ethical position, one that I don't think the public appreciated. And I definitely don't think they appreciated it at the time of the controversy within Google, let's say, around Project Maven. If you can talk about that, if you were there and sort of witnessing some of the debates. We want protests, we want public debate.

That was an interesting situation, where a US company was really, in a sense, overrun or overruled by employees, to say, "We don't want to work on this thing because we believe it's unethical." And Microsoft and other people sort of took a different stance to say, "No, we have a duty to help reduce human suffering and serve our government, because that's what allows us as a company here in the United States and our free capital markets to exist."

What was it like inside the Pentagon around the time of those debates, and how were they being shaped? And is there something that you would have, in hindsight, done differently to shape that public debate? Or do you think that it sort of all worked out in the end?

Jack Shanahan:

No. I would do a couple of things differently on both ends. But to sort of put this into, to synthesize-

Josh Wolfe:

And if you can-

Jack Shanahan:

Yeah.

Josh Wolfe:

And sorry, if you can, just for those that aren't familiar with that debate around Project Maven at that time, maybe also just give the historic context.

Jack Shanahan:

Yeah, that's what I wanted to do. So when we started Project Maven, as I said, we had about four companies, startup companies, on contract pretty quickly. But then, people were really surprised that Google ended up on contract with us. Like, "Whoa." People were a little dismissive of this little Maven thing they were hearing about. And then, all of a sudden, we got the biggest tech company in the United States on contract with us.

And the reason we wanted Google, the reason we needed Google, is because of a wicked problem that we were dealing with. There was this particular sensor that went on one of these unmanned aerial vehicles, commonly known as a drone. Basically, this thing could park over an entire city and look at the entire city 24/7. No analyst on the face of the planet would be able to analyze that information, just couldn't do it. Honestly, the numbers-

Josh Wolfe:

Was this what people were referring to as the Gorgon sensor?

Jack Shanahan:

Yes, Gorgon Stare. MQ-9 Gorgon Stare, very specific sensor. And this is an important distinction here. That sensor, when it was on a drone, there were no weapons on that drone. It was just an intelligence surveillance reconnaissance sensor. You didn't put weapons on that MQ-9. That was not understood by some people in Google, unfortunately. And I get to-

Josh Wolfe:

And for people's benefit, the MQ-9 is an adjacent platform to what is commonly known as the Predator.

Jack Shanahan:

Yeah, Predator and Reaper. So this is a Reaper, an MQ-9, with Gorgon Stare. And it looked at just an entire city block. And humans, we ran the numbers, we knew the numbers, a human could get through 5 to 6% of that scene. They just couldn't do it. This could do it, in theory, instantly. But it was an extraordinarily complex problem, which is why we needed Google software engineers. We're talking down to the pixel level, trying to determine one vehicle in the middle of an entire city in this big scene. And Google was very, very helpful in that process.

Now, in doing this, they made a decision internal to the company. They had been doing work with the Department of Defense, but nothing related to drones. They wanted to keep it quiet. They understood that some people in the company probably wouldn't be excited about this project with the department, so they elected to not be very transparent. And that was their decision. We supported them. I said, "We'll do whatever you want. We'll talk about it if you want us to talk about it. We won't if you don't." So we didn't talk much about it.

And that led, unfortunately, to a lot of assumptions, which I think were flawed, about what this project was and what it was not. And it was just building to a protest internal to the company. The overall numbers were very small. I would say it was about 2,000 people, plus or minus a little bit, which against the size of Google is actually very small. But it sort of led to this internal turmoil, to the point where at the end of the contract, they didn't leave us early, they decided not to extend the contract.

We saw it coming. I say, I kind of use different words at times, but I say, I was extremely disappointed. I was not shocked. We knew it was coming because they just couldn't deal with the internal turmoil. They had to stop it and sort of do a reset and decide what the company really wanted to do, and have a conversation inside the company.

Now, you two are in business, you understand. To us, it looked like people were telling the senior leaders at the company how to run the company. I mean, there's a board of directors, you have a CEO, all these other things. And yet, this small protest was going on inside the company. It felt very odd to us that one of the biggest and most successful and greatest technology companies in the United States was not going to work with the Department of Defense on intelligence, surveillance, reconnaissance, to help protect US troops, allied and partner troops, to minimize collateral damage, to do all these other things that had nothing to do with weapons. But immediately, it became a killer-drones discussion, fomented a little bit by some people that were interested in making that the center of attention.

Josh Wolfe:

So I want to pause on that part because I, at times, feel like I have conspiratorial views. Organic protest, of course, is something that is entirely reasonable. People are expressing their values, and even if they don't fully understand the situation, they can and should protest, and protest loudly. And management and the public and the press should cover that.

But the potential influence of foreign information operations fomenting dissent, something that we've seen in elections and in other issues that can create divisiveness amongst Americans and amplify some of the worst angels of our nature: was there evidence of that, of foreign information operations or others, fomenting some of this dissent?

Jack Shanahan:

Without getting into details I can't get into, I say there was evidence. Somebody said I made accusations. I said they're not accusations, they're assertions. They're assertions based on evidence.

And just look at it this way: if you're in China or Russia, do you really want Google to work with the Department of Defense? It's no effort on their part to do some influence and information operations, stir the pot a little bit and hope it works. And in this case, it actually worked. It was free. Well, what a return on investment that is, stopping Google from working with the Department of Defense.

So yes, there was evidence. I'm not going to get into details because I can't, but it doesn't surprise me. It shouldn't surprise anybody else. And these things happen in life. But unfortunately, I think maybe there was some naivete of people who believed that this was all just purely an organic matter, and there was no outside influence, but we did have evidence to the contrary.

Danny Crichton:

Well, General Shanahan, I appreciate all this. I mean, I want to talk about the transition between threats, because you were there at the JAIC from 2018 to 2020. You announced your retirement from the service after 36 years in January of 2020, one of those very propitious, depending on your point of view, turning points in history. COVID becomes a huge story just two months later. But at that point it was sort of an inkling, just a couple of people sick in Wuhan, and starting to spread into Milan and Italy.

And I think about when you were there: counter-insurgency, Afghanistan, Pakistan, tracking the movements and social networks across these groups. And we pivot two years forward, and we're talking about East Asia, we're talking about new threats, nation-state actors, you accelerate into Ukraine and Russia in the last couple of years. It feels like the entire battlefield has shifted. We've had totally different domains, a totally different threat environment. And yet, artificial intelligence to me seems like the fundamental thread across all of that.

So I'm curious, as the inaugural director who set this place up: how much has that world changed because of the different threats, versus how much was sort of continuity between the two?

Jack Shanahan:

Yeah. I would like to say and claim that we saw this coming, but we had to focus on the Middle East first. And people say, "Well, why didn't you just start with China?"

Well, first of all, that's where we did not have people on the ground dying. And two, we didn't have full-motion video of battlefields like we had in the Middle East. We didn't have that in China. We didn't have it in North Korea. So it was a matter of where is the data and where is the operational imperative. But we knew it was coming. I was talking about this at that time, that there was something else we've got to get ready for, and that is a peer competitor. And the magnitude of this will surprise us.

But it's interesting, Danny, you mentioned COVID. I saw COVID almost as a bridge project. Because right before I retired, about six months before, COVID had just hit in January of 2020, and we came up with a project called Project Salus, and I was so proud of our team. We were still very young and forming, and trying to hire people and get a budget and do all those other things you have to do as a startup. And all of a sudden, we kind of went into lockdown mode. But we had a team of people that said, you know what, Northern Command, which is our big command out in Colorado that's responsible for homeland defense and other things, and the National Guard, who of course is involved in state matters every day, helping governors control pandemics and handle disaster relief, needed help. And we formed Project Salus to do just that.

And it was really, if I look at it in economic terms, a supply and demand problem. Whether it was foodstuffs or vaccines or something else, you have some surplus over here, you have demand over here. How do you make that work? And we had access to incredible amounts of data, including from the CEO of a company I won't name, one of the biggest in the country. He said, "You want our data? You got it. Just do it." And so, everybody just came together and said, "We're going to go after this." We made a lot of progress with that.
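
As a rough illustration of the supply-and-demand matching Shanahan describes, here is a toy sketch that greedily fills each demand from the nearest source that still has stock. The site names, quantities, distances, and the greedy rule itself are assumptions for illustration; Project Salus's actual models are not public.

```python
# Toy supply/demand matcher in the spirit of Project Salus.
# All sites, quantities, and the greedy nearest-source rule are assumptions.

def allocate(supply, demand, dist):
    """supply/demand: {site: units}. dist: {(src, dst): miles}.
    Greedily fill each demand, largest first, from the nearest stocked source."""
    supply = dict(supply)  # copy so the caller's data is untouched
    shipments = []
    for dst, need in sorted(demand.items(), key=lambda kv: -kv[1]):
        while need > 0:
            sources = [s for s, q in supply.items() if q > 0]
            if not sources:
                break  # total demand exceeds total surplus
            src = min(sources, key=lambda s: dist[(s, dst)])
            sent = min(need, supply[src])
            supply[src] -= sent
            need -= sent
            shipments.append((src, dst, sent))
    return shipments

supply = {"warehouse_A": 800, "warehouse_B": 500}
demand = {"state_1": 600, "state_2": 550}
dist = {("warehouse_A", "state_1"): 120, ("warehouse_A", "state_2"): 300,
        ("warehouse_B", "state_1"): 250, ("warehouse_B", "state_2"): 90}
print(allocate(supply, demand, dist))
# [('warehouse_A', 'state_1', 600), ('warehouse_B', 'state_2', 500),
#  ('warehouse_A', 'state_2', 50)]
```

A production system would treat this as a proper transportation optimization with costs and constraints; the greedy version just makes the surplus-here, demand-there framing tangible.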

But at the same time, it was like we were watching COVID with one eye, while the other eye, and this rapidly became our whole body turning in the other direction, said, "But we have got to focus on what's happening in two different places: the Indo-Pacific, and then of course what's happening now, Ukraine and Russia."

And what you said, I could not agree more that the common thread across all of this will be AI/ML and other technologies, quantum at some point and other things. And it's how these technologies will be put together in new and different, creative ways that I think ultimately will make the difference. I think Josh, in his travels around visiting different organizations in the military, sees that it's really the operating concepts that matter. It's not the individual technologies; it is the diffusion of those and how you put them together on a battlefield that will really make the difference in which side has a competitive advantage.

And the good news is, again, I don't have the details because I'm not privy to the classified, but I know that Project Maven's being used very successfully in support of Ukrainian operations right now. Because that's what we always expected. This is going to be a data-driven fight. You've got this weird juxtaposition of trench warfare that looks like World War I, with bleeding edge drone technology, that at some point will have more AI on it.

And so, how do you make all that work together? One of the ways you do that is bring in a lot more technology in the form of AI and ML. So, the world is changing and the Department of Defense recognizes the change. They still have a lot of work to do to accelerate it.

Danny Crichton:

And when you talk about accelerating and continuing on that change, I mean, obviously, you were at the vanguard, building out the center, trying to connect the dots across the different services, the combatant commands, where everyone's sort of on their own, with their own budgets, their own purposes and missions. You were trying to connect the dots and say, "Look, with artificial intelligence it doesn't matter if you're in the Middle East, doesn't matter if you're in China. The techniques are the same; the scientists, the research, the products oftentimes can overlap."

Do you think the Pentagon has gotten better about thinking about AI as a fungible resource, fungible products that can be used in different contexts? Or is it still a struggle to get AI into the loop?

Jack Shanahan:

It's still a little bit of a struggle, with signs of optimism. And I know I mentioned Dave Spirk's name a little bit earlier. Dave was the first Chief Data Officer of the department. And he wrote, and got published, a DOD data strategy. That, to me, is sort of the starting point. You've got to have a data vision and a data strategy.

Too many times, the department has tried to jump into AI/ML without understanding what's going on underneath the hood. If I were to say what worries me most about the Defense Department, and many other places in the federal government, it's this idea that the underlying infrastructure and architecture were built in a hardware age, in an industrial age. They need to be digital, modernized for an information age and a software-driven age.

And my former CTO at the JAIC, the first CTO I had, Nand Mulchandani, a commercial industry guy, came from cybersecurity in Silicon Valley. He's now the CIA's first CTO, interestingly enough. And he wrote a report, we co-wrote it, but he really wrote it and I edited it, on this idea of software-defined warfare.

And our point is, if you're in commercial industry, you read this and say, "What do you mean the department doesn't already do this?" You're shocked by the fact that-

Danny Crichton:

Right.

Jack Shanahan:

... there are not software best practices other than in these little pockets maybe around the department. But you've got to get the underlying infrastructure right, which includes, of course, the data piece, and then you can actually start moving much faster.

You can get away with doing AI/ML in what I call boutique pilot projects. They just don't scale. This is what industry has learned so many times, so much faster than the department has: this idea of not just speed. The department's getting a little bit better at speed, but they've got to get much better at scale.

Now, is it easier for an Amazon or Microsoft or Google, who were born as digital companies, and the more data they get, the better they are and the more powerful those algorithms become? Of course. The department has to accelerate that part of it. Everybody wants to do AI/ML, but nobody's quite willing to make the investments underneath, to say, "There's only one way to do this, and we've got to fix how we're doing things underneath the hood."

Danny Crichton:

I think particularly in commercial-

Josh Wolfe:

I have-

Danny Crichton:

Yeah. Please, Josh.

Josh Wolfe:

Sorry. I had one quick question.

I was amazed. I think it was in the Philippines, you've got the Chinese 50 Cent Army. People are basically paid 50 cents for every tweet and piece of information propaganda that they're putting out there to advance some of the interests of the Chinese Communist Party.

We had, I think, one woman in her mid-50s in Central Florida, or South Florida or Tampa, basically putting out tweets that said on the bottom, "This was approved by the State Department," basically voiding them of all efficacy. So we're fighting, in a sense, a bureaucratic war against an asymmetric digital army that is putting out propaganda.

What is your take on this information space, and where AI and ML are going to play a role in that?

Jack Shanahan:

Well, I tell you, Josh, I couldn't agree more with how you just said that. I do have a background in this. At one point, as a one-star general, I was in charge of information operations for the Department of Defense, on the Joint Staff. It's kind of a special organization within the Joint Staff. And I had a huge portfolio of all sorts of strange and crazy things, including this thing I was handed in 2011 called Cyber, where I went, "Okay, what the hell is this?" But that's the difference.

Josh Wolfe:

If I'm not mistaken, and I think it was a half joke, but one of the doors in one of these areas was labeled something like the "dangerous ideas" room?

Jack Shanahan:

Yeah, yeah, that's right. And we spent a lot of time thinking about influence and information operations. Some of the leading thinkers on this include, by the way, the Chinese and the PLA, the People's Liberation Army, with this idea of the cognitive domain. I don't look at it as a domain necessarily, but we're really still targeting people.

And so this idea that we are in an asymmetric fight, that we do have our hands tied behind our back when it comes to influence and information operations, the idea of having to go up seven layers to get approval to send a tweet, as absurd as it sounds, has truth to it, as you know from your experience there. We have got to move at the speed of relevance. And if that means you're going to accept a higher degree of risk sometimes, I say, so be it. Why? Because there's a temporal dimension to this. You'll move on, right? Just like in the social media world, 24 hours later, it's a different conversation. So you've got to be willing to take a little bit higher risk.

I think what you saw is probably an appetite at the lowest tactical level, saying, "Please let us do this. We know what we're doing at this level. We know our adversary, we know what they're doing. I want to react to them right now. I don't want to wait 72 hours because it's too late at that point." They're onto a different game and a different message, and they're going to succeed because they're always going to do it faster than we can do it.

So, I would like to believe that despite its fits and starts over the past probably five to eight years, the Global Engagement Center at State understands the problem. Can they necessarily move fast enough to keep up with this? And this is, again, back to almost a Maven analogy. People start conflating the idea of influence operations in the United States with what we're trying to do overseas. We stayed away from anything having to do with the American public. It was verboten. We just would never go there.

But, by God, if we're slowing down in what we're trying to do overseas, we're just going to lose. Because with these technologies, you can guess that China and Russia are adopting AI and ML for information purposes. We've seen that in election interference. We're going to see it in spades in 2024. We've got to be prepared for it, and we have to move just as fast.

Now, there are other things that are germane to that conversation as well, as we need people who understand cultural context and understand the sophistication of these messages. We don't want to do just sort of dumb messaging just because we can. You get that wrong and we're in trouble. So as I used to say, we're better off doing no information operations than bad information operations. But with these technologies, we have a chance to do very good and very sophisticated and very quick operations, and we should be moving a lot faster.

Josh Wolfe:

I'm watching something of geopolitical importance today, which is our reliance on semiconductors, of course, and the world's reliance on semiconductors made in Taiwan, that being a key reason for its import in a contested geopolitical space.

We're standing up a TSMC fab in Arizona. And I'm watching as the protests are starting between the labor unions on the ground at TSMC, and some of the rhetoric that is being created externally about racism and how American workers are lazy. And I'm just looking at this and thinking, this is just such an easy vector. It's such an easy vector to exploit the emotions of people and get them riled up, and suddenly this thing that is important for US domestic production of semiconductors gets slowed down and ground to a halt in the gears of human animus.

And we do need that domestic defensive capability against those external operations. But I do know, to your point, that DOD is not focused on that yet.

Jack Shanahan:

Yeah. And was it the CEO of TSMC who said, "Look, I am all in favor of putting a plant there in Arizona. But you're in the United States, and you have some issues you're going to have to work through"? They have to deal with everything, as you just said: the union aspect of it. Where do I get the water from? Because these things really require one hell of a lot of water. But also just the idea of, do you have the worker base required?

When they started TSMC, maybe it was a little bit more of an authoritarian environment at that time, where you said, "We're going to do this. This is a national priority and we're going to go and we're never going to stop." And are we prepared to do that in the United States, in Arizona and other places where we put these fabs? I don't know yet.

But we can't afford the delay of... Okay, I don't even want to be dismissive of the red-cockaded woodpecker, but we're going to run into those sorts of issues. You can't move any further until you've dealt with this individual concern. Okay, but deal with it. You've got to keep moving or we're going to be in trouble.

Danny Crichton:

I know we only have a couple minutes left. I want to close this out, because we're up to 2023. We were just talking about foreign operations, and obviously there's a new set of enabling technologies around generative AI.

We suddenly have the ability to not just create authentic-sounding messages, but to do it at scale. And when we talk about scale, we're talking millions of messages, customized per person potentially. We can ingest millions of data points about individual voters or individual people. We can custom-write those scripts back to you.

General Shanahan, I know you've commented a little bit on the implications of this. And so, just as we close out, on the current threat environment and the technologies that are here, how do you see generative AI playing a role in all this, from your perspective?

Jack Shanahan:

Yeah. Your point, one I will emphasize, was the idea of micro-targeting. We are already seeing signs of it. I think it's going to get even more lucrative for the adversary to do it, because I have so much information about you individually. This idea in the military of, now could I go after a military member's family? Can I do all these other things to sort of catfish them, spear-phish them all? Yes, you can.

Now, is it going to be productive to do that? I don't know yet, but we're not prepared for it. This is a conversation we have to get very serious about because it's going to accelerate. It won't be hard to come up with these individualized messages, and to do them at a volume where you could ignore one or two of them, but you can't ignore a million of them. And then what's the impact cumulatively on our society? That's a very serious and a different discussion, but one that we also have to have: the societal effects of these, and whether we're going to drive ourselves further apart. So we need to have a serious conversation.

I also think we're going to be at the point very soon, we're probably there already, where you need AI and ML to detect AI and ML. Because you just don't have the human capacity to be able to say, was that generated by generative AI? You're going to need generative AI to do that, in the sort of classic generative adversarial network approach to doing business. You're going to have to do that. So it will be sort of this continuous, like it always has been, cat-and-mouse game: one side gets better, the other side gets better detection, and so on and so forth. But we have to be ready for it and we're not ready for it right now.
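
As a toy illustration of the "AI to detect AI" idea, here is a minimal sketch of a naive Bayes text classifier trained on examples labeled human versus machine. The four training snippets and the word-count features are illustrative assumptions; a serious detector would be a learned model over vastly more data, sitting inside the adversarial loop Shanahan describes.

```python
# Tiny "detector" sketch: naive Bayes over word counts. Training data and
# features are toy assumptions, not a real machine-text detector.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label). Returns per-label word statistics."""
    counts, totals, priors, vocab = {}, Counter(), Counter(), set()
    for text, label in examples:
        priors[label] += 1
        for word in text.lower().split():
            counts.setdefault(label, Counter())[word] += 1
            totals[label] += 1
            vocab.add(word)
    return counts, totals, priors, vocab

def classify(model, text):
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    best, best_score = None, -math.inf
    for label in priors:
        score = math.log(priors[label] / n)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

model = train([
    ("as an ai language model i cannot", "machine"),
    ("delve into the multifaceted landscape of", "machine"),
    ("lol that game last night was insane", "human"),
    ("running late grab me a coffee pls", "human"),
])
print(classify(model, "let us delve into the landscape of ai"))  # -> "machine"
```

In the cat-and-mouse dynamic above, the generator adapts to fool exactly this kind of detector, which is why the detection side has to keep retraining rather than ship one fixed model.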

I do have my reservations about generative AI, but I also am an incurable optimist about the potential for these, to do everything from intelligence analysis to force planning, to looking at how we fight against a particular adversary based on the people that are involved in that battle, or whatever. So there is enormous potential.

It needs a lot of work still, and I was happy to see that Craig Martell, the director of the CDAO, the Chief Digital and Artificial Intelligence Office in the Pentagon, has just agreed to stand up a generative AI task force, which is called Task Force Lima. And it can't just be about how we use these for battle space purposes or back office functions, though they probably have more immediate impact on the back office than anything else. It's also about this information and influence piece, things that we have to be concerned about.

It's coming fast at us, and it is a tsunami. And we're not quite as ready as we need to be for it yet.

Danny Crichton:

I'm optimistic as well. But with that, General Shanahan, thank you so much for joining us.

Josh, thank you for joining as well.

Jack Shanahan:

Thank you, Danny. Thank you, Josh. Real pleasure to talk with you.

Danny Crichton:

Thank you.
