Sponsor Content from Google

Issue 1
Chapter: What does it mean to get AI right?

Much of the current conversation around the rise of artificial intelligence falls into one of two categories: uncritical optimism or dystopian fear. The truth tends to land somewhere in the middle—and the truth is much more interesting. These stories are meant to help you explore, understand, and get even more curious about AI, and to remind you that as long as we’re willing to confront the complexities, there will always be something new to discover.

Q&A

Doing the Most Good

AI has immense potential to help humanity. At the same time, it also poses complex ethical questions. James Manyika, Google’s Senior Vice President of Research, Technology, and Society, believes that achieving progress requires building powerful systems that benefit people and society—systems that help cure disease and expand access to both information and opportunities—and addressing risks around serious challenges like bias, misuse, and safety.

Photography by Cayce Clifford

James Manyika likes to think about how to responsibly steer AI’s development for the benefit of society. That’s one of the many reasons Google hired him as its Senior Vice President of Research, Technology, and Society, a role in which he leads the company’s efforts to ensure its AI innovations positively impact humanity. Manyika focuses on helping advance AI’s development and on guiding Google to build AI that benefits people, helps solve pressing societal problems, creates inclusion and access for all, and addresses the risks that come with intelligent systems. He advocates nuance, humility, and looking at topics from multiple perspectives when advising on AI’s development. As Manyika sees it, there’s no such thing as a one-sided problem—especially when it comes to AI.

Question Let’s start with the big picture. There’s a lot of discussion about what it means to “get AI right.” What does that mean to you?

James Manyika At the highest level, getting AI right has two sides to it. On the one hand, it’s about making sure we build AI that will benefit people through its capacity to assist, complement, empower, and inspire them in every field of human endeavor—from the everyday to the ambitious and imaginative. This also includes AI that helps advance scientific breakthroughs and discoveries, helps solve pressing societal problems, and creates access and opportunity for everyone. The other side is just as important: making sure we build AI in a way that addresses the risks and complexities that come with having powerful and highly capable systems. We have to get both sides right.

Question To do good and not do harm simultaneously.

Manyika Yes, to do the most good. Not just do good, but do the most good—and do all the things that without the help of AI we can’t do at all, at scale, or fast enough. Things that benefit people everywhere, and improve lives. This is critically important and it’s the thing that motivates us.

Question Right. And that highlights why the stakes are so high. It’s not just because of the risks, but also because of the real potential benefits for humanity. Those are on the line, too. Can you paint that picture, as you see it, of the ways we might experience those benefits across society if we do get it right?

Manyika Let’s break this down into a few categories. First, we actually have to build AI that is powerful and capable enough not only to assist and help people, but also to help us solve the seemingly impossible problems we want to solve for society. This could include discoveries and breakthroughs in science so we can cure cancer and create new drugs and therapies. For example, we’ve been working with a consortium of researchers to create a “human pangenome,” a new resource that better represents human genetic diversity, allowing scientists and doctors to more accurately diagnose and treat diseases with AI. You’ve seen what we have achieved with AlphaFold, which solved a 50-year grand challenge: predicting the structure of proteins. It has predicted the structure of all proteins known to science, all 200 million of them, opening up wider possibilities to help researchers understand diseases, discover new drugs and therapies, and tackle many neglected diseases.

The breakthroughs from AI also allow us to address pressing societal crises that people are experiencing today, like the impact of climate change and increasingly extreme weather. Powered by AI, our flood forecasting program began in just two countries; it now covers more than 80 and provides forecasts up to seven days in advance of a flood to 460 million people in harm’s way. I should also add that one of the ways AI will contribute to society is by helping to power the economy, especially through its potential to drive productivity growth, which has been sluggish for a while and will be even more critical to prosperity as society ages.

Next, once the systems are capable enough, we need to make sure we actually apply them in such a way that everyone benefits from their development and from the things we will solve using them. It is really important to make sure that everybody benefits.

Question And what about the risks involved in creating these systems?

Manyika Right. That’s the other side, and there you’ve got a range of complexities. We have to make sure we’re building trustworthy systems that people can actually believe in and that will not cause harm—that they’re not going to amplify societal biases or generate toxic or dangerous outputs. Then you’ve got questions about misapplication and misuse—are these systems used for the right things? Using AI systems to create misinformation is not an ethical use. Using AI systems to pursue criminal activities, cyber hacking, terrorist acts, surveillance—those are examples of misuse. There are also complexities in how AI will impact other aspects of our economy beyond enabling productivity, such as work. There is both the possibility of augmenting what people do and the risk of substituting for what people do. The development of this technology, and how it’s used, will likely result in both happening—this is what Erik Brynjolfsson has written about in his “Turing Trap” paper. As he points out, the choices we make in AI’s development and use, as well as the policies and incentives around all of this, will affect the outcomes. And according to the most recent research, over the next decade or more, assuming continued economic growth, more jobs will be created than lost, but the majority of jobs will change—all of which raises the stakes for skill development and adaptation.

There’s also a question of alignment, which many in the field have thought about for a long time, going all the way back to Alan Turing. When we say alignment, we mean: How does society make sure that these systems do what we want? That they are aligned with our goals, with our preferences, with our values? The issue is as much about us as it is about how we build these systems. For example, you and I can say we want a system aligned with our values. But which ones? Especially in a complex and global world, involving many people and places with varying cultures and views and so on—these are the classic problems of normativity. These are questions about us.

Question Going back more specifically to that narrow idea of alignment for a moment—as you said, it’s defined as this idea of ensuring that AI actually does what’s intended by a given directive, is that right?

Manyika But even that is unclear! For example, do you want alignment with the specific instructions you give it? In other words, do you want it to solve a problem as stated and follow a stated goal precisely? Or do you want it to figure out your actual intention—despite what you say—and solve that, which may not be exactly how you’ve stated the goal? For example, maybe you say you want to exercise every day, but in the end you’re not really exercising every day. So does the system align with what you’ve said you want to do, or what you’re actually doing? Or should it align with what is “best” for you?

Then you’ve got an additional complexity, which is: Should it align with you individually, the majority, a particular group, or with the union of everyone’s goals? Or does it align with what’s good for society, despite what society itself says? Those questions become even more complex if you consider a world in which we each have our own AI agent. Presumably we would want each AI agent to be aligned with its “owner”, but what if my AI is technically more powerful than yours? To me, all these questions have less to do with the particular technology, per se. They’re not technical questions. They’re questions for us as society. Many of them are as old as society itself.

The technical questions, to me, seem solvable over time. It’s a bit like saying, “I want the error rate of a well-defined error to go down.” That’s a very specific problem. That’s an engineering problem, a technical problem. Or if you said, “I don’t want bias,” and you define what the bias looks like, one can try to solve for that problem. Now, that does not guarantee that the engineers will succeed in solving for the error rate or for the bias, but at least it’s a well-specified problem. I worry more about the second kind of problem, the human one: the question of defining bias in the first place. There are many such questions, questions of us and what we want, because we’ve been grappling with them for thousands of years.

Question Any discussion of human values quickly becomes, as you said, a question of whose values, and that question is both philosophical and deeply political. How might we begin to negotiate those differences and potentially competing goals?

Manyika AI is forcing us to look at ourselves in the mirror. Because we now actually have to answer these questions! Before, they were theoretical, normative, and philosophical issues that humanity was grappling with. AI is presenting us with an opportunity, and perhaps raising the stakes, for us to deal with them. For example, we’ve always had bias in society. On many of these questions, like bias, fairness, even safety, the current shortcomings of AI are measured against some normative sense of the perfect society or person, a set of ideals for the world we want but continue to struggle to achieve, hopefully without giving up. I would certainly not want AI to worsen the harms or shortcomings of society, so we should fix that; but perhaps it may also help us address those shortcomings and move toward the society we want. So in a way, AI is putting a mirror in our face and saying, “Okay, humanity, this is what you look like. How do you want to deal with this?”

Question Which is a pretty big question.

Manyika Yes. And I’m not a philosopher, but I think the work that philosophers and researchers are doing in AI typically gives us some frames to think through. One frame is about focusing on the things we all mostly have come to agree on, such as a universal human rights framework. But that’s a bit of a floor as opposed to a ceiling on values. The second approach is what in philosophy is often called “the veil of ignorance.” This is based on some of the work of John Rawls and other philosophers, and the idea is to come up with values or principles that you’d live with if you didn’t know who you were going to be in that society or what your station was going to be, or what your endowments would be. What principles would you be comfortable living with?

The third is more of a bottom-up approach, which is to aggregate what everybody seems to be doing and base everything on that. But you could end up with a tyranny of the majority, or the sort of problems and approaches that researchers in social choice theory think about.

We could complicate this even further. What we’ve been discussing is what’s typically talked about as the normative challenge, which is that your values may be different from mine, or may vary between this group and that group. There’s also what’s typically referred to as the plasticity challenge, which is that what you and I might have agreed on twenty years ago may be different today.

Question And our answer to that question, just like our values, changes over time.

Manyika Exactly. Imagine if we’d invented AI in 1950 and locked in whatever values we wanted at that point in time, forever. We’d probably all look back on them now and say, “How the heck did we agree to do that? We don’t think that anymore.”

Question It feels like we’re being presented with an opportunity for reformation on some level, for possible societal transformation. But there’s also a chance we don’t do that and simply cement the values and inequities of the society we already live in, or worse, create new issues of inequality. How can we work to do the most good, while avoiding those kinds of outcomes?

Manyika The inequality question is important. That’s why I emphasize that part of “getting it right.” I’m convinced that AI will create an incredible bounty of opportunities, in the economy and in scientific discoveries, but part of getting it right is making sure everybody participates and benefits—and by everybody, I mean all people, communities, small businesses, organizations of all kinds, countries, and regions of the world. That will not happen on its own; we have to work to make sure everyone is able to participate and to benefit. It won’t be automatic.

I love the fact that we’re working on moonshots that try to do that—to solve problems with solutions that benefit everyone. Let me give an example: Google Translate originally launched in 2006 supporting 11 languages, using statistical machine translation. Once we introduced deep neural networks in 2016, we were able to improve translation quality and reach our current number of supported languages—134—which is extraordinary. There are more than 7,000 languages spoken around the world—if you cared only about language translation for wealthy groups of people, you might’ve stopped at twenty. But we went to 134. Our research team has a moonshot goal: to build an AI model that supports language and speech tools for more than 1,000 languages.

Of course, speaking and writing aren’t the only ways that people communicate, and so we’re building our models to be multimodal, meaning they’re capable of unlocking information across different formats like images and video, and the many other ways people communicate.

Question Wow.

Manyika The fact that we’re aiming for that is a good thing, because language translation helps people get access to the world’s information, knowledge, insights, and opportunities. But how do you make sure that happens everywhere, on everything, and for everyone?

Of the 7,000+ spoken languages, only a few are well represented online, which means our traditional approaches to training language models on text from the web fail to capture the full breadth of the world’s languages. Our moonshot goal aims to solve that by working with people and communities around the world to source representative speech data.

This idea of collaboration with communities is also central to many of our projects, for example our Project Elevate Black Voices initiative. Research shows that Black people in the United States often have a worse experience than white speakers when using automatic speech recognition technology. We’ve established an incredible partnership with Howard University that is working to create an African-American English speech dataset to help Black people have a better experience with voice products, and not feel the need to “code switch” in order to be understood by technology.

Question I want to ask you about regulation. As you said earlier, these systems and their outputs have to be trustworthy. That’s a baseline. But we also have to consider the potential for clashes of divergent value systems or competing goals. Someone who has a profit motive won’t necessarily care about doing the most social good. How do we negotiate that? Is it okay to not all be aligned in the same way? Can we accommodate competing goals across different sectors? Can we agree to disagree? And if not, what kind of arbiter do we need?

Manyika Yes, I think regulation is part of the answer. But when I think about the role of regulation with regard to AI, again, I come back to it having two sides. Yes, I want regulation to limit all the bad things we don’t want, the risks and other downsides. But I also want to enable an ecosystem that can work toward the things we want and create incentives for those things.

Regulation can and should do both things. It can help stop the bad stuff and enable the good stuff. I think all these things are two-sided. Even take the alignment question. Sure, we have to solve the technical side to make sure the systems do what we’ve agreed on. But we have to agree first on what we as society want. We have to work on both problems. It’s not one or the other. It’s hard work, but let’s do both. When we make it only a technical problem, I think humanity is taking a pass. It’s throwing the problem back to the technology and saying: Solve for what we want. Even though we haven’t told you what we want.

Question It sounds like folks who don’t usually collaborate will find themselves needing to.

Manyika Well, we have to do all this work together. We need to decide what we want, what benefits we want, what problems we want to solve, what issues we want to avoid. Whether you are a scientist, a technologist, a humanist, a government regulator, part of civil society, or a citizen—we need to solve this stuff together.

Question To what extent are those collective conversations already happening?

Manyika That’s one thing I’m excited about. We’re having these conversations now, this early in AI’s development, as opposed to after the fact or too late, and I think that’s great. In the case of social media, that conversation happened way too late. I think the collective conversations we need are starting.

For example, Stanford University has set up its Human-Centered AI Institute, a multidisciplinary institute that brings together computer scientists, economists and other social scientists, philosophers, and more. In another example, I’m serving as the vice chair of the National AI Advisory Committee, which was established by Congress to advise the president. That committee includes technologists, computer scientists, academics, people in civil society, and labor union leaders: very different people from very different vantage points, with a wide range of perspectives and concerns. And in spring 2022, the American Academy of Arts and Sciences published a volume of its journal Daedalus on “AI & Society” that I guest-edited; in it I included perspectives from many prominent computer scientists, economists, legal scholars, philosophers, public servants, and others, all with diverse perspectives, all grappling with the possibilities of AI for society.

At Google, we have set up the Digital Futures Fund, through which we’re providing $20 million in grant funding from Google.org to think tanks and academic institutions to foster debate about AI policy and support responsible approaches. And in an industry example, we were involved in setting up the Partnership on AI, which now involves hundreds of organizations, from companies and universities to civil society. We’re also working closely with Anthropic, Microsoft, and OpenAI to set up a new industry body—the Frontier Model Forum—focused on ensuring safe and responsible development of frontier AI models.

I think we need to have more of these kinds of collaborations. My view is that, in any of those conversations, we need to make sure we’re bringing different perspectives together, as opposed to reflecting only one side or the other. That’s the collective challenge in my view—to focus on and solve for both the beneficial impact and the challenges and risks. That, to me, at the end of the day, is what we have to get right.

Question As humans, we have immense limitations in understanding the entirety of any scenario. We are really bad at understanding counterfactuals, in particular. Is there any way we can use that knowledge of our own limitations to our advantage here? Could it help us identify blind spots, perhaps, or at the least somehow accommodate them?

Manyika Let me give you a technical answer first. There’s an idea that’s been proposed by people like Stuart Russell at Berkeley and others, which is about using our own incomplete understanding and the uncertainty in our information to create some wiggle room in how we specify goals for these systems, so that they’re not trying to optimize some precisely or badly stated goal but can instead generally move in the right direction, with human guidance. This is an example of turning our own blind spots, so to speak, into a feature, so that we don’t have overly prescribed, precise goals that could be harmful. In that, there is the possibility of a technical answer.
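
To make that idea a bit more concrete, here is a minimal, hypothetical sketch of the kind of approach Manyika is gesturing at. It is not Google’s or Stuart Russell’s actual implementation, and the goal names, rewards, and threshold below are invented purely for illustration: the agent keeps a probability distribution over several candidate goals rather than one precisely stated objective, averages over that uncertainty when choosing an action, and defers to the human when its belief is too uncertain.

```python
import math

# Hypothetical candidate goals the user *might* mean, with prior beliefs.
# (Echoes the exercise example from earlier in the conversation.)
candidate_goals = {
    "exercise_every_day": 0.4,
    "exercise_when_it_fits_schedule": 0.4,
    "just_track_activity": 0.2,
}

def entropy(belief):
    """Shannon entropy of the belief over goals: higher means less certain."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def choose_action(belief, actions, reward_fn):
    """Pick the action with the best reward averaged over all candidate goals,
    instead of optimizing a single, possibly mis-stated goal."""
    def expected_reward(action):
        return sum(p * reward_fn(goal, action) for goal, p in belief.items())
    return max(actions, key=expected_reward)

def act_or_ask(belief, actions, reward_fn, uncertainty_threshold=1.0):
    """Defer to the human when the belief over goals is too uncertain."""
    if entropy(belief) > uncertainty_threshold:
        return "ask_human_for_clarification"
    return choose_action(belief, actions, reward_fn)

def reward(goal, action):
    """Toy reward table, invented for illustration only."""
    table = {
        ("exercise_every_day", "schedule_daily_workout"): 1.0,
        ("exercise_when_it_fits_schedule", "suggest_flexible_workout"): 1.0,
        ("just_track_activity", "log_steps_only"): 1.0,
    }
    return table.get((goal, action), 0.0)

actions = ["schedule_daily_workout", "suggest_flexible_workout", "log_steps_only"]
# With the nearly even belief above, the entropy exceeds the threshold,
# so this prints "ask_human_for_clarification" rather than acting.
print(act_or_ask(candidate_goals, actions, reward))
```

With the belief spread almost evenly across the candidate goals, the sketch asks for guidance rather than acting; concentrate the belief on one goal (say, 0.9 on “exercise_every_day”) and it instead picks the action that best serves the goals it believes are most likely. That is the “wiggle room” idea in miniature: uncertainty about the goal is treated as a feature that keeps the system from over-optimizing a badly stated objective.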

I think the other part of it is to focus on the outcomes we want rather than the methods of achieving them, given how fast the technical and scientific advances are progressing and how the uses of AI are evolving. For example, the way we thought about solving for bias in AI systems eight years ago is totally different today. Eight years ago, we said, “Well, if you want to solve bias, then clean up the data.” That might’ve been correct eight years ago, but the capabilities have moved on. Today, you may want to train a system on everything, because in several cases, systems that are trained on everything have been shown to be better able to actually detect biases and do something about them. That’s an example of moving lightly when it comes to prescribing solutions, because the technical capabilities are moving so quickly. You don’t want to outrun these capabilities with overly prescriptive ways of solving things, especially if those ways are soon surpassed.

Question It’s all very complex, indeed.

Manyika Yes, but we can and must work through it. I have confidence in humanity’s resilience and ingenuity, but we have to do the work, together. This involves questions such as: What should our institutions look like in the age of AI? What would it mean to be human in the age of AI? What does it mean to be intelligent or educated? For example, in 1970, to be considered a smart kid, you’d have to be able to do math in your head. We got past that; calculators forced us to get past that. Now you have kids who are brilliant mathematicians who may be terrible at doing math in their heads, but they’re still brilliant mathematicians. One could say the same thing about the ability to recite facts, dates, and such. We’ve evolved our thinking about what it means to learn and be educated. The same might be said of creativity. There was a time when we thought that because hip-hop deejays were sampling, they must not be great musicians. Some of these questions of intelligence, of creativity, of what it means to be human in the age of AI will be unsettling, but also exciting and perhaps liberating. But it will take all of us working together, with bold ambitions for society, serious consideration of the challenges and risks, and a healthy dose of humility and a willingness to learn and course-correct as we move forward. I think that’s something we’re going to have to get right, too.