
Transcript: The Futurist Summit: The New Age of Tech: Intro & The Next Frontiers

March 21, 2024 at 2:32 p.m. EDT

MR. KHOSLA: Good morning, everyone. Welcome to The Washington Post. I’m Vineet Khosla. I’m delighted to welcome all of you to our Futurist Summit.

What a time to be alive, right? Can you guys believe that on this day, 18 years ago, Jack Dorsey sent out the first tweet? A year after that, we had the iPhone. A year after that, we had Facebook surpass Myspace. Also, I can't believe we are still talking about Myspace. I am still mad about it. But I still feel sorry for Tom a little bit.

[Laughter]

MR. KHOSLA: From that era to now, we have come a long way, right? Covid really accelerated a lot of these digital transformations. Our workforces went hybrid. Artificial intelligence came in. We had the unfortunate war break out in Ukraine, and you had AI drones fighting in the war over there. And now we are living in this wave of LLM AI, where any text can be turned into video, where you can go and talk to these assistants, and you can get really good answers.

So today we're going to focus on where we go next, right? We have a really great panel of people coming here to talk to us. We have Intel CEO Pat Gelsinger, OpenAI's Anna Makanju, and we have famous venture capitalists like Hemant Taneja and a lot of other innovators joining us in the few hours we have today. We will also talk to a few Washington power players. We have Senator Todd Young, Senator Mark Warner, and we will talk to them about their efforts on regulating big tech. We'll talk about TikTok, and we will talk about how policy and law are shifting in light of all the big changes that are coming to our lives.

So whether you're joining us here in this room or you're out there on the Zoom, I hope that at the end of the day today, when we leave, we really come away with a realization of the paradigm shift that we are now living through. And I believe this paradigm shift is as important as the paradigm shifts we lived through with the birth of the internet and the invention of the cell phone.

Before we go further, I do want to shout out to Mozilla for sponsoring this event. So Mozilla Open Source, whoo!

[Applause]

MR. KHOSLA: And we will start our day in a conversation with the head of DARPA, Stefanie Tompkins, and my colleague, Bina Venkataraman.

Quick story about DARPA. You guys know they invented the internet and GPS? All right. Everyone's heard of that. Did you also know DARPA funded the initial research at Stanford Research Institute to build CALO, the Cognitive Assistant that Learns and Organizes, which became the basis of Siri, which was the company where I got hired as the first engineer to start building Siri out? And Siri led to Google Assistant, Alexa, ChatGPT, open LLMs--full circle.

We will watch a quick video, and then we will talk to the head of DARPA. Thank you.

[Applause]

[Video plays]

MS. VENKATARAMAN: Good morning, everyone, and welcome--it's my delight. I'm Bina Venkataraman. I am the Post columnist covering the future and the innovations transforming the world, and it's my pleasure to welcome to the stage this morning Dr. Stefanie Tompkins, director of DARPA. Welcome.

DR. TOMPKINS: Thank you so much.

MS. VENKATARAMAN: Well, this is going to be fun. I want to start with how you view our moment of technological change and innovation. The former CEO of Google, Eric Schmidt, was recently interviewed by Time and said that he sees the current pace of innovation as outstripping by orders of magnitude the innovation he's seen in his lifetime. You've been around the block for a long time yourself, driving technological change. Do you agree with that, or do you think it's a bit of hype? Are we just in another chapter of the digital revolution? Where are we?

DR. TOMPKINS: You know, I'm not sure if the story is fully written yet. In fact, I know it's not written yet. So we might have a different answer as we look back at different points in the future. But right now it absolutely feels like things are accelerating, and it's for a variety of reasons, right? Some of that is simply because you have a whole planet full of people with a lot of technology that has been democratized and made available at many different levels. So innovation is happening everywhere, not just in large government labs, but also with things like artificial intelligence and robotics. The ability to just do research and science is moving much more quickly as well. What the impact of that is, you know, we'll find out.

MS. VENKATARAMAN: Has that pace of breakthrough with AI surprised you? Were you expecting it?

DR. TOMPKINS: You know, it's a good question. DARPA has been engaged in AI since the very, very beginning, and if you'd asked me a few years ago--well, let me take a step back and say the technology itself, not that surprised, but I think the implications of the technology on society and how quickly it's spreading and where it's sort of being used and where it can be used, that has an element of surprise that I don't think we have prepared ourselves for.

MS. VENKATARAMAN: So Vineet reminded us that DARPA is the agency--with U.S. taxpayer money behind it--that we can thank for such innovations as the internet and foundational parts of GPS. Obviously, over the decades, DARPA has invested a fair amount in machine learning, and this precedent, or history, of investing in innovation is impressive. And I think people are wondering, from your vantage point, what is going to be that next generation of breakthrough? What are you working on now that is going to transform society in the future?

DR. TOMPKINS: That's a--that's a fun question, and I'm going to have to narrow it down, right? So just for context, for everybody, you know, DARPA starts about 50--five, zero--new technology programs every year. Obviously, not all of them pan out, and so we're constantly pruning and, you know, gardening those different programs, and we work across all disciplines.

Certainly, there are things going on in microelectronics and in computer science that I think are part of a really broad cultural conversation right now. But maybe one of the areas that we're most intrigued by, because of the applications everywhere, is going to be synthetic biology and its use not just in the obvious areas like medicine, but also sort of everywhere, right--so biological materials, concrete, ways of generating food and materials wherever you need them, things like that. So we've got things going on, particularly depending on biology, where you're going to be able to make what you need where you actually need it.

MS. VENKATARAMAN: By growing cells or growing biological organisms that would replace materials or--

DR. TOMPKINS: No, by using the biological organisms themselves as the factory to create the things that you want. So we have a program that actually uses water and air and then microbes to pull molecules together to actually generate food.

MS. VENKATARAMAN: Really fascinating. What kind of food?

DR. TOMPKINS: According to my program manager, very tasty food.

MS. VENKATARAMAN: Oh, okay. It's not like Impossible Burger. I'm sorry, for anyone who loves it.

[Laughter]

DR. TOMPKINS: Yeah. Some of our previous programs focused only on nutrition, and we realized as we were working with some of the military servicemembers who might have to eat this that we should think about flavor as well.

[Laughter]

MS. VENKATARAMAN: Yes. Please think of the troops and the rest of us when it comes--so I know we have a clip to show of one of the areas where DARPA is working on technology that is for the military and for our returning troops but also could potentially be transformative for the world. So I'm wondering if we can show that clip and if you can tell us what we're looking at. This is your advanced prosthetics work.

[Video plays]

DR. TOMPKINS: Oh, absolutely.

So a number of years ago, DARPA postulated a really interesting question, which is whether or not you could fundamentally change the state-of-the-art in prosthetics. One of the great blessings of advancements in medicine is that many, many fewer troops were dying on the battlefield, but a lot of them were coming back with really debilitating injuries.

And we had a DARPA program manager--these are the people who conceive and create our programs--who was an Army medical doctor, and he threw out a challenge. He said if you lost your arm--if you could play the piano before you lost your arm, I want you to be able to play the piano afterwards. And that call to arms brought together neuroscientists and roboticists and sensor people, and through a series of challenges, we've gotten to a point where you actually have that. The technology now exists to have that type of capability, where people can feel with a robotic hand, and they have the dexterity and the range of motion that they need to be as they were before they lost that.

MS. VENKATARAMAN: Wow. So in that clip, at the end, I saw that the man is touching--I don't know if it's a partner's hand. So is it right to think that he can actually feel her hand?

DR. TOMPKINS: Yes, yes. Yeah. I will say--I mean, I'm kind of glad we don't have the audio. We would all be weeping and hunting for Kleenex boxes because they're feeling it. He's feeling it for the first time. It's his wife.

MS. VENKATARAMAN: Wow. Well, that's moving. I hate to make a hard turn here, but I'm going to.

[Laughter]

DR. TOMPKINS: Okay. Yeah.

MS. VENKATARAMAN: When people think about DARPA these days--or at least often when I think about it--I'm a little worried about the idea of autonomous weapon systems and sort of runaway robots that could be out there, you know, inflicting violence without our control. And I know that there's some amount of investment that DARPA is doing in, of course, creating those autonomous systems, and it has been creating and thinking about AI for military applications. How much of your work is focused on that--like ceding more control over to autonomous systems, so we have fewer humans involved--versus figuring out how we actually contain and control or get better outcomes from autonomous systems?

DR. TOMPKINS: So I would actually say a huge part of what we think about is more on the latter half. So, one, our very starting point is always that it is about a partnership--a partnership between machines and humans--not about replacement of humans, but maybe the shifting of, you know, the balance of workload and the amount of attention that humans have to pay when they're being assaulted in all directions, you know, by too many things.

Two, we have a philosophy, when we are dealing with new technology, of really trying to build in from the very beginning thinking about the ethical, the legal, and the societal implications of the technology if we are successful. And we work very closely with ethicists and legal scholars and behavioral scientists to help us project and predict what kinds of things could happen, so that in the course of any technology development, we are also trying to gather the data that, you know, policymakers would need in order to understand how it could be used and what kinds of regulations would need to be put in place.

It's not perfect, right? You can never imagine everything, but it is our job to try.

MS. VENKATARAMAN: So I understand you have the first ethicist, a scholar in residence at DARPA, who's working on these issues of what the societal implications of a technology are before we roll it out. How does that work exactly? And tell me how it works in relationship to the private sector, because we see companies rolling out technology now, you know--even if we assume the best of intentions--without necessarily thinking about those societal implications down the road. So DARPA drives some degree of tech development. It doesn't drive all of it, right?

DR. TOMPKINS: Right.

MS. VENKATARAMAN: Some of this is happening in Silicon Valley and elsewhere in the world.

DR. TOMPKINS: Yeah. So for context, in terms of like who drives what, I think when you imagine what happens in the commercial world, obviously there's a lot of money going into technology development. But it is always going to be aligned primarily with a business motive, right? You have to be able to make money. And then the kinds of things that DARPA and other government organizations tend to think about, we're often thinking about the problems that don't align in that space.

So, you know, in the commercial world, if something goes wrong with AI and you make a bad recommendation for a movie to watch, the consequences are very low. If we make a bad recommendation on a battlefield, the consequences are tragic and really immense, and we often don't have, you know, vast amounts of consumer data. In fact, we never have vast amounts of consumer data to train on anyway. So we have to have fundamentally different ways of thinking about how to use and how to actually develop artificial intelligence and autonomous systems.

But to your question about how that--how our thought process flows out, I actually think that a lot of the commercial organizations are eager for solutions to some of the problems that they face, but they're in that tragedy of the commons where it's not the highest priority of any one organization. It is our highest priority. So if we do it and we have solutions that work, we expect that they would very quickly proliferate, and we are in constant conversations with a lot of the companies that are driving these. And they even partner with us on a number of things.

MS. VENKATARAMAN: Do you have any examples of what you mean by working through these ethical and societal implications for technology? Can you share with us how you're doing that?

DR. TOMPKINS: Yeah, absolutely. I brought up synthetic biology earlier, and that's an area, I think, where everybody can sort of use their imagination and worry about what could go wrong if people aren't careful as you're editing genes and adapting microbes and things like that.

So we had a program called Safe Genes a number of years ago, and it was looking at the development of something called "gene drives." Gene drives are where, if you make an adaptation, it doesn't sort of work itself out through inheritance, like over generations of an organism. Normally, anything that you add in might actually then sort of come back out again, right, through breeding and things like that. But gene drives are designed to be permanent, and we were really worried about the implications of that. So we had a program where we were focusing on how you would undo the effects of gene drives if things went wrong.

MS. VENKATARAMAN: And this is things like when you release a swarm of mosquitoes--

DR. TOMPKINS: Right.

MS. VENKATARAMAN: --that have been genetically engineered to perhaps not spread malaria.

DR. TOMPKINS: Right.

MS. VENKATARAMAN: You're worried about maybe how that propagates in the population, and you want to be able to pull it back--just in case anyone didn't know.

DR. TOMPKINS: Yes, absolutely.

And so one of the things--so that program had an ethical, legal, and societal implications component. So I'm going to just use the term ELSI. Sorry. It's--we love our acronyms, right? So we had an ELSI panel: ethicists and biological scholars and folks who could really help all of the teams work through the thought process as they were developing technology and say, is this the direction we want to go in, or if we do go in this direction, what kinds of things will we need to understand, again, from a regulation perspective?

And a really fascinating thing happened within the teams that were working on the program: independently, with the advice of the scholars, they came together and came up with a code of ethics that they said they would be willing to abide by, that they thought would be really, really important. And that kind of independent code of ethics then forms a really natural baseline if you wanted to regulate a technology--instead of a clean sheet of paper and trying to impose things, we actually already understand the voice of the people who are doing this work. And there may be other considerations that you'd have to bring in, but it's an amazing starting point. And it was done by people who were deeply steeped in all aspects of the technology.

MS. VENKATARAMAN: And are folks like the Gates Foundation and others who are working on gene drive technologies taking up any of that work?

DR. TOMPKINS: So I believe so. I believe that it has very organically spread into a variety of different communities. I don't want to specifically put words in the mouth of any one organization.

MS. VENKATARAMAN: I want to ask you about China. So depending on who you talk to these days, China is either outpacing the U.S. in terms of its military investment in AI and its development and investment in biotechnology, or it's lagging way behind the U.S. in terms of LLMs and other forms of AI. How do you assess where we are relative to China in terms of investment in emerging technologies and whether there should be a cause for worry?

DR. TOMPKINS: You know, I think like everything else, the answer is always going to be much, much more nuanced, and everybody wants to have an easy answer. We're behind. We're ahead. And it's never that simple. And even in--so in something like AI, there are many, many different aspects of the technology, and there are some areas where they are working faster, and there are areas where we're working faster. Some of that comes down to priorities.

As you know, for example, China has vast access to data about their citizens that we choose not to collect, right? And so that means that we're going to go down different paths and work in different directions.

So at any moment in time, you probably need a hundred different questions to really try to assess, at a more subtle and nuanced level, who might be stronger in one area or another.

MS. VENKATARAMAN: Do you think that that data resource--you know, the Chinese asset of data collection--is going to be a driver of huge net gains in this?

DR. TOMPKINS: I don't know. I mean, I think it's an interesting thing that they have, and it's a resource that we don't work with, but we have other resources, and we have other ways of thinking about technology. In some ways, it drives us to be more innovative and more creative in how to solve problems while adhering fundamentally to our values about protecting privacy.

MS. VENKATARAMAN: Say more about that because I think--well, Vineet drew this line between the work that led to Siri that then led to the LLMs, but there are people who say that the LLM development and the fact that the U.S. kind of got there first--or U.S.-based companies got there first says something about the U.S. model of innovation. Do you buy that?

DR. TOMPKINS: Yeah. You know, I think a lot of things happening in our country say a lot about our model of innovation and the very distributed model and that ideas come from anywhere. They are not centrally driven, and they are not centrally prioritized.

That actually really reflects the DARPA model. We are an organization that is entirely bottom-up in terms of where the ideas come from. We bring in brilliant scientists and engineers from across the ecosystem and then tell them to change the world, and when they get comfortable, we kick them out, right?

[Laughter]

DR. TOMPKINS: So we take that U.S. model, and we sort of accelerate it vastly, and so clearly, we're believers in it. So accept the fact that I have a bias, but I do believe in that model, and I believe that the fact that amazing ideas and opportunities come from many, many different sectors across the country is a huge strength. And that's a strength that we should be playing to.

MS. VENKATARAMAN: I am curious about whether you see that role changing. So government and government investment have been foundational to a lot of the technologies that have changed the world in the digital age, whether it's the internet or the fact that Google came from NSF-funded scholars who became the founders of the company. But now I think there are some people questioning whether the government and the public sector are playing that same level of role. Are they investing at a sufficient level to be key players in innovation? Has the private sector just completely outpaced the government role? What do you see as the role of government in the innovation ecosystem going forward as we see these huge technologies take hold?

DR. TOMPKINS: So for government in general, I really do see that the role of government is to focus on those kinds of problems that are often going to be longer term, and they are going to be the problems that industry won't concentrate on.

And it's just like your question about China, right? It is very nuanced. There are things that industry is simply designed to do much better and much faster, and we are more than happy to let them do that. But there are problems they will never look at, because they are not tied to business models and profit motives--problems that someone has to take on if we really want to make sure that we're solving technological problems broadly and in a way that actually advances society, not just certain sectors of society.

And then for DARPA in particular, our role is to take the risks and to do the really big, crazy things, and as soon as we have demonstrated something is possible, we get out of the way as quickly as we can to let others move forward with that technology. But, in general, big risks, crazy risks, things that no one even imagines are possible are not aligned with the way industry typically thinks or works, and so I think it's an ecosystem, and it always will be. The balance might shift a little bit, but I don't--I'm not seeing an end to the need for the kind of government investment that has happened all along.

MS. VENKATARAMAN: Well, I'm told I only have time for one more question, but because this has been so fascinating, I'm going to cheat and ask it as a two-parter.

[Laughter]

MS. VENKATARAMAN: And so the question is, what concerns you most when you think about technology and how it's transforming the future, and then what makes you the most optimistic when you think about technology and the future?

DR. TOMPKINS: Oh. Well, you know what? It's a two-part question, but I think it's probably--it's a one-part answer.

MS. VENKATARAMAN: And that is efficiency.

DR. TOMPKINS: It is--that with all of the technological advancements that are happening, it actually means that the pace of science itself and our ability to answer questions are moving much more quickly, as is the scope of what we can look at.

So we've been talking about AI and large language models. The ability to harness that to allow you to explore multiple research pathways much more quickly means that everything is moving faster. So there's a huge risk of it being abused or of things going wrong, but there are incredible upside opportunities to solve immense problems and to defend against those risks. So it's an exciting time. It's a fascinating time, and of course, everything that we're doing is truly trying to focus on some of those risks and potentials for surprise that we want to guard against from a national security perspective.

MS. VENKATARAMAN: This has been absolutely wonderful. Thanks so much for joining us today, Dr. Stefanie Tompkins, director of DARPA.

DR. TOMPKINS: Thank you for having me.

[Applause]

MS. VENKATARAMAN: Don't go anywhere. My colleague Cat Zakrzewski is going to be out here shortly with Anna Makanju from the company everyone's talking about, OpenAI. Thanks so much, everyone.

[Video plays]