
Transcript: Future of Work: A New Innovation Playbook

March 28, 2024 at 2:31 p.m. EDT

MS. ABRIL: Hello, and welcome to Washington Post Live. I'm Danielle Abril, Tech at Work writer here at The Post.

Today I'm joined by Microsoft's corporate vice president of AI at Work, Jared Spataro, to talk about how technology is changing the workplace. Jared, it's so good to see you again, and welcome to Washington Post Live.

MR. SPATARO: Likewise. Great to be with you. Thanks for having me.

MS. ABRIL: Absolutely. Let's get started. So I want to start with obviously the big thing that's going on at Microsoft, which is Microsoft Copilot. You guys launched it last February. For those of you who don't know, that's Microsoft's AI assistant. And as you know, I tested it. But for our viewers, please explain how Copilot works and how you believe using it, or AI in general, impacts worker productivity.

MR. SPATARO: Oh boy. You could get me started here. Well, let me just start by saying AI can do some amazing things, but it needs to be in the right places. So with Copilot, what we've done is we have created an assistant that takes the magic of generative AI and puts it in the places that people work today. So that means places like your email in Outlook. It means Word, Excel, and PowerPoint. It also means new interfaces where you can chat about all of your work data, everything related to your job. So you can ask questions like, "Hey, look at my calendar over the last month, and tell me how I spent my time, and give me suggestions for how I could improve." So it both helps people in what they do today, in the apps that they're familiar with, and kind of opens new vistas for them.

And then if I just get to the heart of what we see, we think of it not just as a tool, some sort of incremental improvement, but as a whole new way to work. It takes different habits, takes different skills, but it also gives you outsized rewards. And we're really excited about that and some of the studies that we've done from Copilot users.

MS. ABRIL: So we did try out Copilot, as well as Gemini on Google Workspace, which is their AI assistant. At the Help Desk, we wanted to see, really, you know, are these things easy to use? And we found that although, you know, the AI tools really help complete tasks, you probably shouldn't rely on them entirely to do your job. How should workers think about when to use AI, and what should they consider or keep in mind if they're considering turning to these tools for work?

MR. SPATARO: Well, it's a good observation. We used the name "Copilot" on purpose to indicate that the Copilot shouldn't do your job for you, that you still are the pilot. You're in the driver's seat, and it's your job to do your job. But we recommend that if you get the right skills, you can use Copilot to advance, move the ball down the field faster than ever before. Now, one really important thing that I'll say, Danielle, that's really just key for people to understand about this new generation of technology is it's unlike anything that they've experienced with computers before. Most people think of a computer the way they do a calculator. I'm going to punch in a question in a calculator. It's in the form of numbers, and I'm going to get the right answer back. That's not the way generative AI works. In fact, sometimes it's not even right. It gets things wrong, almost like a person can get things wrong. But when it gets things wrong, we tend to find that it's what we call "usefully wrong." It actually helps you move, again, your work forward. But you have to trust and at the same time verify. So you have to learn a new way of working. In many ways, it's kind of like working with a colleague that is learning your job and trying to help you do it better.

MS. ABRIL: So I want to pick up on something you actually just talked about. You know, AI doesn't always get it right. I mean, it tries to give you what it thinks is probably the answer based on--also, we learned, based on how you ask the question; if you turn a question around three different ways, you might actually get the right answer. But when we think about AI, you know, a lot of users are really worried about, you know, those times it gets it wrong, it hallucinates, which is, again, kind of making up things, or just misinterprets your question or what you're trying to get at. And I wonder, you know, in terms of being able to identify the errors, my understanding from experts I've spoken to is that as generative AI continues to advance, it gets harder and harder to identify where it's wrong, because it's so believable. The answers come out, they look wonderful, and you're like, yeah, that sounds right. And you go ahead and push it through. What are your thoughts around these issues, especially as they relate to higher-risk work, you know, work that could have medical, financial, or legal consequences?

MR. SPATARO: It's a great question, and you have to understand the technology, at least fundamentally, to get that what it's doing is reasoning statistically--the actual term is "stochastically." Much like a human, we don't reason perfectly about questions. We take the available facts. Sometimes the facts aren't enough. Sometimes our judgment isn't quite correct. And then we kind of reason, and you're exactly right. I think people are worried that when the answers get presented, it feels so reasonable that it's easy to believe.

There are kind of two things that I have seen in my own usage that have made a really big difference. Number one, there is a new technique that we call "grounding" that allows you to make sure you give the AI tool the latest information related to a question or prompt that someone asks. And that grounding technique actually reduces hallucination significantly. So if you're asking, for instance, a question about a recent event, the Copilot will actually go out to the internet, collect information on that event from reliable sources, and then use that as its fact base to reason over when it gives you the answer.

And then the second thing, which seems small but is incredibly important, is we actually provide references, literal explicit references in the Copilot answer to the source data that the Copilot used to provide the answer to you. And it prompts a new skill. You have to read what's there and then spend a little bit of time making sure that you're looking at those sources so that you have a fuller understanding of them.
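The grounding-plus-references pattern he describes is often called retrieval-augmented generation, and its basic shape is simple. Here is a minimal sketch in Python; `search_web` and `llm_complete` are hypothetical stand-ins for a retrieval step and a model call, not Copilot's actual internals.

```python
# Minimal sketch of "grounding": retrieve fresh sources first, then ask
# the model to answer only from those sources and cite them by number.
# `search_web` and `llm_complete` are hypothetical stand-ins, not any
# real Copilot API.

def ground_and_answer(question: str) -> str:
    # 1. Retrieve recent documents related to the question.
    sources = search_web(question, max_results=5)  # -> [(url, text), ...]

    # 2. Make the retrieved text the model's fact base, so it reasons
    #    over current information instead of stale training data; this
    #    is the step that reduces hallucination.
    fact_base = "\n\n".join(
        f"[{i}] {url}\n{text}" for i, (url, text) in enumerate(sources, 1)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them inline like [1].\n\n"
        f"Sources:\n{fact_base}\n\nQuestion: {question}"
    )

    # 3. The answer carries explicit references the reader can verify.
    return llm_complete(prompt)
```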

Now, some people will ask, "Well, Jared, shouldn't you just do it yourself?" My own experience has been, no, not really. It's like having a very competent research assistant that is pulling together relevant information. At the end of the day, though, that name is so important. It's the Copilot. You are meant to do your job.

So it's a new way of working. I couldn't stress that enough. As you learn it, we are finding that users are faster. In our battery of tests, for instance, people were almost 30 percent faster on common information worker tasks, with no change in the accuracy of their responses on the tests that we ran.

MS. ABRIL: So don't get lazy. You got to go look and make sure everything's exactly where it should be. Got it.

Well, you know, obviously a lot of workers see the value here in expediting mundane tasks with AI, things like drafting emails or organizing their inbox. But what do you think about AI for more complex tasks, and in what scenarios might it be helpful, and what scenarios might we want to steer clear of when it comes to those complex tasks?

MR. SPATARO: Sure thing. Let's start from where it's not good yet. It's not good at math, it turns out. We can augment it with mathematical skills. We are, for instance, wiring it up to Microsoft Excel, and there it's learning how to use a calculation engine together with its large language model. And it's still learning. We're still learning how the tool can be used. So there are some domains where I would say make sure you understand its strengths and weaknesses.

When it comes to complicated tasks, things that I would call "long-running chains" or sequences of tasks that need to be done, oftentimes what's best is to use the tool in its form today to help you complete portions of that, but for the human to be the one that is putting them all together.

So if you're working on a budgeting process, for instance, pulling together the latest information, looking at that information and analyzing, perhaps even looking at it from different angles, all of that is fantastic. But make sure you're the one that is going from end to end to have the budget make sense. And that's true of almost all long-running kind of complicated, sophisticated tasks today.

But the great news is, Danielle, it's getting better every week. Week in and week out, as we are learning more about how people use it and as the technology is improving, I can really see the improvements in my own usage.

MS. ABRIL: And in terms of where it's kind of doing well in those complex tasks?

MR. SPATARO: Sure thing. The types of things that it's particularly good at are summarizing. So it does a great job when you have lots of information. You need to get the key points. Does a really nice job, for instance, in meeting settings. It can be a meeting you attended where it's providing notes, or it can even be a meeting, in my case, that I don't attend anymore. Lots of meetings I don't attend and I just ask it to take notes for me so I understand what the key points are. I can even query that after the meeting and ask questions about what happened.

It does particularly well in email, as a simple example. So I use it every day to summarize long email threads. We all get them. You know, I hate to read from the bottom up. I don't have to any longer. It does great in drafting replies. In fact, saves me a lot of time there.

And it's particularly good in a set of what I'd call "sophisticated information retrieval scenarios," things like, "Hey, it looks like I'm going to meet with this customer on Thursday. I remember that there's been lots of emails. We even had a meeting. Someone wrote me a document to get me ready. Can you pull all that together for me and give me essentially an information pack?" It's incredible at that type of work. And again, if you think about what that can save you, it's not just minutes, but for me, oftentimes hours.

MS. ABRIL: I'm going to squeeze in an unexpected question here because you just mentioned something, and I just recently remembered that, you know, Zoom is also kind of getting into the game and trying to release some AI capabilities that go across different apps that they've started to release. And you mentioned, obviously, one of the things that I found really helpful in our Copilot test was that, you know, I'm going to have a meeting with my boss. What are the last conversations and emails and documents we've collaborated on? And it could quickly kind of give me an update, "Oh, yeah, this is what we need to talk about in the meeting." But I do wonder, you know, a lot of times a lot of workers work across apps, right? And we don't necessarily work on solely Microsoft or solely Google or solely Zoom. We're kind of using probably a mix of things. People just like different features from different providers. Would Microsoft ever consider, you know, opening up its AI to work cross-function across apps?

MR. SPATARO: We not only consider it, we've done it. It turns out that out of the box, so without any configuration, you certainly can reach into your email inbox and into your documents and the places that are already a part of what we'd call Microsoft 365.

But the product itself has the ability to use connectors, in fact, over 1,200 connectors that we provide that plug into everything from SAP to any other system you can imagine, HR systems, workflow systems, any type of system that you have out there to allow you to grab that information and what we say is "to reason over it." And that's incredibly powerful.
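To make the connector pattern concrete: a connector is essentially an adapter that pulls records out of an external system and normalizes them into a common shape the assistant can reason over alongside email and documents. A minimal sketch with hypothetical types, not Microsoft's actual connector API:

```python
# Sketch of the connector pattern: adapters normalize records from
# external systems (HR, ERP, CRM, ...) into one shape that can be fed
# to the model as grounding data. Hypothetical code, not Microsoft's API.

from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Record:
    source: str  # e.g., "hr", "crm", "erp"
    title: str
    body: str


class Connector(Protocol):
    def fetch(self, query: str) -> Iterable[Record]: ...


def gather_context(query: str, connectors: list[Connector]) -> list[Record]:
    """Pull matching records from every connected system."""
    results: list[Record] = []
    for connector in connectors:
        results.extend(connector.fetch(query))
    return results  # becomes the fact base the assistant reasons over
```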

So when you combine the ability to pull financial data together, for instance, with unstructured data and have that give you a full view, maybe even with your CRM data of a customer to get you ready for a meeting, there's just nothing out there like it.

And you hit on a very key point. You know, people live in what we would call very heterogeneous environments that will persist for a very long time. We recognize that we're not the center of the world, but the advantage we think we have for individual users is we can be where they do spend a lot of their time every day, in the tools that they're really familiar with and that they choose.

MS. ABRIL: Well, Jared, we're kind of running low on time, and I have so many questions I want to get in, so I'm going to kind of bounce around here. But late last year, Microsoft found that digital natives, basically Generation Z, the youngest workers in the workforce right now, are falling behind in adopting AI at work. That's kind of shocking to me, given how often we see Gen Z using AI in their personal lives, for breakup messages and things that seem a little bit frivolous as uses of AI. What's happening here?

MR. SPATARO: Well, I think it's a combination of a bunch of factors, but one that I've spoken of previously that really, you know, for me has touched my imagination, I've realized, wow, there's something there, is that people who are doing best with AI today are those who have managerial experience. And the reason that that's true, if we go back to what we've already talked about, is that you really do best with the technology when you interact with it the way that you'd interact with a direct report. When you work with someone who reports to you, who you're responsible for, you have to give them a lot of context. You get to know kind of their strengths and weaknesses. You don't settle for what we call kind of a one-shot type of approach, "Hey, I'm going to give you this assignment. I hope you get it right." No, instead, they come back to you. You give them some coaching. You guide them in different ways. All of those types of skills are really, really important for using AI.

And what I've observed in our tests, even anecdotally, is that often when you're just new in the workforce, you're still learning those things. It's not the first set of things you learn. So managers, people who've been managing real people, are doing very well, and that's a really interesting thing to think about. I think of it this way: what types of skills will new graduates need in the near future? Well, I actually think we're going to need to help new graduates know how to manage people and manage AI. And that's a really interesting thing to think about for education and entering the workforce.

MS. ABRIL: Well, and that also tells us a little bit about how organizations should think about maybe their workforce and getting people ready to use AI if they're already starting to adopt these things. Is that something we should be thinking about from a management level?

MR. SPATARO: Absolutely. Skilling and entering into a new era requires kind of a new way of thinking about what people need to know, and I think it's important, if I could frame this up, not to think of this technology as just incremental. If you think of it as like, oh, yeah, this is, you know, kind of like a new incremental productivity improvement, you're going to approach it the wrong way, or at least that's my observation with customers. Those who realize at the individual, the personal level, and at the organizational level, this really is a brand-new era as we approach work, those people are doing the best because they have a lot of imagination. And they quickly lead, as you indicate, to new skills. They realize, oh, I'm going to have to teach my people. No one's born knowing how to use a Copilot, but you can learn it fairly quickly, and then it can make a really big difference.

MS. ABRIL: So we only have a few minutes left, but this is a question that keeps coming up in my work, and no matter how much I talk about AI, people are very concerned about the future of AI and where it brings us in terms of jobs and the workplace. You know, generative AI is only expected to get better. As you mentioned, it's getting better every day. And I know right now, as you started off this conversation, Copilot is exactly that. It really needs a person in the driver's seat to really not only tell it what to do, but make sure it's doing the thing that we asked it to do properly. But, you know, as this advances, what does this mean for jobs in the future? Could we see a future--we're talking to a lot of young people who are worried about entry-level jobs disappearing, or even the case where AI isn't necessarily, you know, taking a job away per se, like a full job, but it's taking away so many tasks, so many repetitive tasks from people, that instead of needing five people to do a job, maybe you only need three because they are a lot more productive now because AI is taking away some of the stuff that's been bogging them down for so long. So in that respect, you would still see some sort of decreasing of jobs because it's making people so much more productive. What is your take on where this heads and what this means for jobs?

MR. SPATARO: And I'm incredibly hopeful and optimistic, but let me explain why. Whether it's electricity, whether it's a steam engine, it could be a word processor or a PC, we always have this immediate reaction to new technology. Uh-oh, like, this is going to do stuff that people do today. What are the people going to do? Does that mean they'll be out on the streets?

It is true that there's displacement, meaning that jobs shift, that things that people used to do may now be done by machines. There's no doubt about that. As you look at the old typists of yesteryear, you saw people whose job was to literally type on behalf of other people, and then the introduction of the PC changed all of that. But what we always see, what we've seen since the beginning of the Industrial Revolution--and, really, through recorded history, when you get into what science and innovation do--is that it creates new opportunities. So my advice for people who get worried about that is, look, it's very natural to be worried, but instead of taking that energy and funneling it into anxiety, funnel it into innovation, funnel it into the future, and learn about the technology. Embrace the technology. You'll see that it creates more opportunity, both on the grand scale, the macro scale, as well as on the individual scale.

And we're seeing that already. People who have AI skills, they are already getting ahead, because in many ways, they kind of have the equivalent of a backhoe when many of the rest of us are still using shovels. It's exciting that there's a bright future, and we just have to embrace it.

MS. ABRIL: We only really have one minute left, but I want to squeeze in this question, so if we could get a brief answer. You know, obviously, we're seeing the development of new collaboration tools and new ways to work. AI is really changing the modern workplace. What do you expect to happen next? Where do we go from here?

MR. SPATARO: Well, I expect AI to be so woven into the way that we do things that it will sit alongside humans, and going forward, our teams, the people we work with will include not just humans, but also these AI assistants. It will become a natural part of the way we do things, and that skillset will be incredibly important for every worker, no matter where they sit in an organization.

MS. ABRIL: Wonderful. Jared, thank you so much for your time. Unfortunately, we are out of time, so we'll have to leave it there. Jared Spataro, thank you so much again for joining us, and we'll be right back in just a few minutes with our next guest. Please stay with us.

[Video plays]

MS. KOCH: Hi. I'm Kathleen Koch, a longtime Washington correspondent.

Every business and individual wants to protect their most sensitive data, right? But that's becoming more challenging by the day as cybercrime increases, leading often to both loss of revenue and customers. But AI is giving companies a new incredibly effective tool.

Well, here to talk with me about that today is Todd Cramer. Todd is director of Security Ecosystem Business Development at Intel. Thanks for joining me, Todd.

MR. CRAMER: Great to be here, Kathleen.

MS. KOCH: Todd, how would you describe the threat landscape that businesses face today? We know that cyberattacks are more frequent, but how have they evolved?

MR. CRAMER: Yeah, sure. I mean, obviously people have seen the news and the headlines, right? There's no shortage of factoids that show that this is having an impact. Take ransomware: health care, school districts. Even private individuals are getting devastated by these types of attacks, and so we know it's top of mind. And so you've seen the security industry elevate in importance and increase in budget spend by companies of all sizes to better prepare to protect their workers. And so that's the first part.

The second part is these attackers are constantly evolving. They know how security software like antivirus that we run at home or endpoint detection and response that enterprises run, how they look for these threats. So they're constantly obfuscating and trying to hide, and so I think some of the things we can talk about here today is how AI is going to help uncover those hidden areas that help bolster protection overall.

MS. KOCH: So are the criminals using new tools?

MR. CRAMER: They are. You have these crime syndicates actually syndicating ransomware-as-a-service tools, other things. You know, the cost to actually run a ransomware attack, I think, is down to a thousand dollars. So anybody--it's not just nation-states--these hackers, kids, can get ahold of these tools, learn how to get access to a credential, learn how to launch their favorite attack, and get paid in Bitcoin behind the scenes. So there's lots going on showing that these attackers are moving more rapidly and having success.

MS. KOCH: So how is AI so important in trying to--in thwarting these types of attacks?

MR. CRAMER: Yeah, sure. So you think about all these alerts that are generated in the security software. It's approaching a point--or already has reached it--where humans can't process all this. What's a false positive? What's a real attack? Do I respond to this alert from this user's desktop? So AI, from a digital assistant standpoint, will enable the security analysts to better triage all of these alerts.

The second part is what I'll call "AI for security," and these are unique things that we're working on today to use AI itself to help uncover these attacks so they can't hide, like I talked about previously.

MS. KOCH: So how has Intel then been incorporating AI into your products to boost security for businesses and individuals?

MR. CRAMER: Sure. On the server side, you know, the security software has been using cloud-side AI and deep learning. So everything's gathered off an endpoint agent. Your workers--all that data is sent to a cloud, and the AI is done in the cloud itself to discover the threats.

This year, we launched Intel Core Ultra, and it has an Intel NPU, a neural processing unit, alongside a GPU and the CPU. All three of those do AI optimization. So now what you have is the ability for the security software industry to apply AI and run it directly on your machine, using your laptop, and when you put that AI closer to the detection source, you can do interesting things, right? You can't send all that user data to the cloud--it's impractical from a cost standpoint--but if you put the AI there to do its job, now we're seeing these novel use cases of increased ability to classify malware on the fly and all sorts of pieces. So we're not only accelerating the ability of these vendors to find threats, but we're actually detecting more of those threats with this AI.
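To make the cloud-versus-client distinction concrete, here is a hedged sketch of on-device inference using ONNX Runtime, one common way to run a model locally and target different local accelerators. The model file, input name, and feature vector are illustrative assumptions, not Intel's shipping security stack.

```python
# Hedged sketch: score telemetry on-device instead of shipping raw user
# data to the cloud. The model file and input name are illustrative.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "malware_classifier.onnx",           # hypothetical local model
    providers=["CPUExecutionProvider"],  # swap in an NPU/GPU provider where available
)

features = np.random.rand(1, 256).astype(np.float32)  # stand-in telemetry
outputs = session.run(None, {"input": features})      # "input" name is assumed
print("malware score:", outputs[0])
```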

MS. KOCH: And I take it this is faster and more scalable than anything that humans can do.

MR. CRAMER: That's right. And so you're going to see not only the ability of the bad guys to launch these attacks at scale, but, you know, this AI for good: the ability of hardware to enable security software to do AI, to do it faster and better, to scale. And those types of things are possible here. So that's where we're headed.

MS. KOCH: I know that we're just, you know, really--it's the tip of the iceberg. We're just beginning to see what AI can do. What is on the horizon? And from a security perspective, what would you say is the next big innovation that we can expect?

MR. CRAMER: Sure. Before I answer where we're headed, I'll just point out that for the last five years, with Intel's threat detection on our vPro laptops, we have had AI for security already working. It offloads to our GPU. It does a CPU detection assist for ransomware, for crypto-jacking. That's rolled out on a billion PCs today, and you may not know that on your personal computer there's an extra assist from the hardware that you get, right? So that's here today.

And then this year, it's all about the year of the AI PC. We're working with the likes of CrowdStrike, Defender, all of those types of vendors to unleash these novel use cases with this new horsepower down on the client.

So you're seeing ISVs release software, put out blogs on new techniques, and then in the longer term, you're going to see these security analysts--the tools for the analysts--be able to triage things, heal infrastructure on the fly. So it's their ability to respond that's going to be automated, and that's where we're going to be a couple of years out.

MS. KOCH: That's really exciting, especially if it keeps us and our data safer.

MR. CRAMER: We hope so.

MS. KOCH: Thank you so much, Todd Cramer, director of Security Ecosystem Business Development at Intel. Really enjoyed the fascinating conversation.

MR. CRAMER: Thank you.

MS. KOCH: All right. And now back to my friends at The Washington Post.

[Video plays]

MS. ABRIL: Welcome back. To those of you who are just joining us, I'm Danielle Abril, Tech at Work writer for The Washington Post.

My next conversation will examine how the workforce will need to be educated and trained in new ways with the rise of technologies like AI. I'm joined now by Dr. Ayanna Howard. She's the dean of Ohio State University's College of Engineering. Dean Howard, welcome to Washington Post Live.

DR. HOWARD: Thank you. Thank you.

MS. ABRIL: So you are a roboticist, an entrepreneur, and an educator, and you use an interesting term called "humanized intelligence" instead of "artificial intelligence." Can you tell us, you know, what's the distinction there?

DR. HOWARD: Well, when we think about intelligence in general, when we put it in computing, or artificial intelligence, really what we're thinking about is how we use AI to augment our human functions. And so humans are fundamental to thinking about the data, thinking about the outcomes, thinking about what it is that we want to do. So when I think about humanized intelligence, it's really about the next generation of intelligence, of AI, but coupled tightly to engage and enable us, improve our quality of life, and incorporate things around workforce development. And so that's why I've used "humanized intelligence," really, for the last 15 years.

MS. ABRIL: That makes a lot of sense.

In assessing the impact of AI on jobs, you have said that while the technology will change existing jobs, there could be whole new fields as well as new opportunities. How so? Can you explain that a little bit, and what role will the human emotional quotient you've spoken of play?

DR. HOWARD: Yeah. So one of the things we think about is computer science--and everyone's like, oh, I want to be a computer scientist, and I want to go into coding--but computer science is a discipline: I can enroll and major in computer science. If you look at engineering, which existed in the, you know, 1850s, 1860s, there was no discipline called computer science, and yet computer science is one of the fastest-growing jobs, even today.

Jobs change as we expand and increase our technology footprint, and so when I say we don't even know what jobs are for the next year, for the next 10 and next 20, it's because as we grow, as we advance, we are required to think differently about how we train our students, how we train our next generation.

I like to think about robotics. You know, when we do have self-driving cars on the road--who knows when that will happen? But it will. Do we need robotic mechanics? What about the gas station attendants? Are there going to be a new breed of gas station attendants that can deal with robotic cars? We don't know, but we really need to train the next generation so that they can adapt to the new requirements in the new jobs.

MS. ABRIL: You know, it's funny. We're already seeing that in San Francisco where I'm based. You know, we're seeing the self-driving cars run around and trying to deal with the issues there, so, yeah, a reality already coming to life.

You know, that being said, you mentioned this--we saw it a little bit in the intro--about how, you know, students are going to need to be educated in new ways to adapt to this new workplace and economy. I want you to expand on that. You know, how do we need--you talked a little bit about prompt engineering and things like that, but what are ways that we need to think about how, you know, students are going to be educated, especially when you're talking about jobs that we don't even know are going to exist? How do you educate that workforce, and are you already seeing these changes happen?

DR. HOWARD: I am seeing the changes happen, and I am a proponent of college education. And the reason is it's not necessarily the discipline, but when you go to college, even if it's a two-year college, you are given the tools to think, ask questions, and figure out things. And so when we provide the tools--and I always think about, can we create a computer scientist that really is fundamentally a humanities student? Can we add those together? So that I love English, I love language, but can I figure out how to use large language models to create better writing styles, to tease out editorials? Those are the kinds of things we have to think about as the next generation of the tools.

And so in education, what we do is teach students how to learn so that when they change their jobs in 20, 30, 40 years, they are comfortable with it--"Oh wait, it's a new job, new requirements. Oh wait, I know how to learn. I know how to figure this out. I can adapt as the jobs adapt."

MS. ABRIL: So it's really about the skill set, and less about, you know, training specifically to be a computer scientist or a coder or a software engineer--it's just knowing how to learn and knowing the skills you might need going in, so you can adapt to those new jobs.

DR. HOWARD: Exactly, exactly. And that, believe it or not, is difficult. Learning how to learn, learning how to ask questions, learning to be curious always and all the time, even learning to question the things that come out in terms of AI and artificial intelligence, those are hard skills.

Traditionally, if you think about it as students: a teacher comes, they say something, and you believe them. So now, when you're in the workforce, when you're interacting with AI as your copilot, for example, it's like, "Oh, I'm just going to take the guidance. I'm just going to understand." And so that whole skill set of being curious and questioning and expanding our own knowledge, liking to learn, is actually not always natural when we think about it.

MS. ABRIL: I want to read an audience question we have from Indra Klein in Washington, D.C. Indra asks, "Given the perceived gap with respect to technical skills needed for the future of work, any thoughts on better integration of tech-related skills in K-12 curriculum with traditional basic subjects?"

DR. HOWARD: Yes. I truly believe that--and I'll call it "computational skills" versus "computer science," but computational skills, which also links to AI, should be one of the elements.

So you think about K-12. We just assume that you will learn how to read. We just assume that you will learn how to do basic math skills. And we've designed our curriculum from K-12 so that at the end of high school, you are able to read at some grade level. You're able to do some basic math. We need to do the same thing around computational skills. When you have a student in kindergarten and they are using their apps, which they are, actually ask them: okay, let's think about the data. When you're playing your game, do you know what data is? Let's define data. What do we think about when we're collecting your data? We can start as simply as kindergarten and continue going through that, and so by the time you're in high school and graduate, you will have a basic understanding and the skill sets of computation, whether it's early, very basic Python coding, or maybe it's just, "I know how to prompt my chat agent, my chatbot, so that I can get better answers for my homework." We have to integrate that deeply. I think of it as AI literacy, computer science literacy.

MS. ABRIL: Now, that's a total, total change from the way that probably you or I were educated growing up. So that would be amazing to see what comes out of that. If you're educated at kindergarten with that knowledge and then you go through college, I bet you have a totally different mindset.

I want to move to another question. We saw in the intro video, you know, you emphasized coding will be a key skill for future generations. I want to talk about who specifically will benefit most from this, and how should we think about coding for future generations? I mean, you talked a little bit about it in this last question. Can you expand on that a little bit?

DR. HOWARD: Yeah. So when I think about coding, it really is how do we give students, the next generation of workers, the future workforce, the ability to think in a logical fashion, identify a problem, figure out the steps to solve that problem, and then come up with a solution. I call it "engineering know-how," but whenever one says "engineering," it's like, "oh my gosh, that's like too hard." But when you say coding, people are like, "Oh, okay. I can learn how to code." But coding really gives you that ability to process in that very logical, sequential, problem-solving way. So when I say "coding skills," I mean the ability to think in that way, think in that logical sequence, problem-solve; it really doesn't matter what the code is. I need to figure out how to make you think differently, and that's why that skill is so important when you go out.

I know people who learned how to code in one language umpteen years ago--say, you know, BASIC or Pascal or Assembly. Now they can program in any language that comes up. I say the coding language of the day is very easy to pick up because you have that basic understanding of how to think about the basic building blocks, the sequences, and what outcomes are expected.

MS. ABRIL: So we're already seeing generative AI tools gradually being incorporated into the workplace, you know, from ChatGPT to--you know, our previous speaker talked about Copilot. You know, what are the basic skills that employees need to know as they engage with these AI tools, and how do we go about learning them?

DR. HOWARD: So I believe in trial and error, so just use it. Just think about what it is that you want to solve. What is it that you want to do? If you are a journalist and you would like to write an article, ask your agent, whether it's Copilot or Gemini or Anthropic's Claude, you know, "Oh, please write an article about the future of work for The Washington Post." Ask it three different ways, and you as a person can see, "Oh, wait, I get a better response if I ask or prompt it one way versus the other." And that allows you to basically learn by doing. I think that's really the only way that most people can become comfortable using these tools and not be afraid that they're using them wrong. It's just like when you ask a parent--or a child asks you--and you don't like the answer: you ask again and you ask again until you get the answer that you want. You should do the same thing when you're using these tools that are augmenting our work and making us more efficient. But we also have to learn how to get them to make us more efficient.
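Her "ask it three different ways" advice is easy to turn into a repeatable habit. A minimal sketch of learning by doing, where `ask_model` is a hypothetical stand-in for whichever assistant's API you happen to use:

```python
# Pose the same task three different ways and compare the answers side
# by side. `ask_model` is a hypothetical stand-in for an assistant API.

prompts = [
    "Write an article about the future of work for The Washington Post.",
    "You are a veteran tech reporter. Draft a 600-word article on how "
    "AI is changing the workplace, with three concrete examples.",
    "First outline an article on the future of work, then expand each "
    "bullet into a short paragraph.",
]

for i, prompt in enumerate(prompts, start=1):
    answer = ask_model(prompt)  # hypothetical model call
    print(f"--- Variant {i} ---\n{prompt}\n\n{answer[:300]}\n")
```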

MS. ABRIL: So obviously, one of the big barriers to adoption is trust, people trusting these tools for various reasons. Maybe they're worried about it, maybe they're scared of it, or maybe they've just had some experiences with hallucinations and errors. What role do the private and public sectors play in building trust in a technology that could disrupt jobs and society in other ways?

DR. HOWARD: So I think of trust in really two tranches. One, a lot of individuals say, oh, I don't trust AI, or I don't trust the companies. And you look at the surveys, and, oh, yeah, okay. But then if you look at the behavior, the behavior doesn't match what people say. People will still use the tools. People will still use the AI. When it hallucinates, it's not like they cut it off--oh, I'm never going to use it again. It's like, oh, okay, I'm going to try it again and see what it says. And so what we call the behavior is so much different. The behavior of people shows that we actually trust the use of these tools, whereas in surveys it's like, no, I don't trust it.

I think one of the things is that we are not being truthful with ourselves. We are not really thinking about the fact that we like a lot of these AI tools, because when it works, it does make us more efficient. It does make us better. It does improve our learning capabilities. And when it's wrong, that's when we're like, oh, we don't trust it--but we're going to still use it. And so that's how I think about trust, in those two tranches.

And so for companies, I think that, one, companies should take a little more responsibility in thinking about how do I build in, really, distrust of the behaviors of the AI, so that people won't start over-trusting it in terms of its use.

I always think about, you know, maybe we should have denial of service at some point. Like, today we're going to have an off day for AI. I mean, could we have companies actually think about that as a possible way of building distrust but then also trust in that aspect?

MS. ABRIL: You're one of 27 experts appointed to the National Artificial Intelligence Advisory Committee. Recently, prominent academics have called on the Biden administration to fund AI researchers to help them keep up with the tech giants. Do you also share this concern?

DR. HOWARD: I do. So one of the problems with current versions of AI--and really, it's generative AI--is that you need a lot of compute, which basically means you need back ends. You need to figure out how you have these large servers, which means it is very, very costly.

As an academic researcher--most researchers aren't able to afford to train a billion parameters, or learn over a billion types of input data, to come up with a solution, whereas companies can.

But if you think about the fundamental things that have moved our entire society forward, that has come from basic research. That has come from researchers working together, trying to figure things out, sometimes working 20 years, and then, "Oh my gosh. We have an mRNA vaccine." This is what's required, and so really thinking about what we can do when we have government support in terms of open-source resources, in terms of compute that is freely available to researchers and basic researchers, both in academia and in the K-12 system. It allows us to work on problems that might take 20 or 30 years to solve. There is not necessarily commercial benefit in the now, but it can solve things that are really important in terms of education, health care, water scarcity, manufacturing, mobility--all the things that we care about as a society but are not necessarily financially profitable in the now.

MS. ABRIL: So as I mentioned in your intro, you're also an author, along with everything else--being an entrepreneur, being an educator--and you wrote the book "Sex, Race, and Robots: How to Be Human in the Age of AI." How does AI inherit human biases, and can that be circumvented? I kind of wonder, you know, is the genie out of the bottle? We've seen so many of these cases and situations where AI has gone wrong and just been really biased in its answers. Where are we at? Can we pull some of that back? Would love to hear your opinion on that.

DR. HOWARD: Yeah. So the reason why AI in its current rendition has bias is this element of what I call "humanized intelligence." AI is learning from our data, and the fact is, as people, we are biased. We have historical biases. And if you think about collecting data over the last year versus the last 10 years versus the last 20 or 30, there are extreme cases of bias against pretty much any group you can think of. And so AI inherits this bias.

Now, I will say that AI is better than the biases in people. But the problem is, because we over-trust AI, when it says something, we very rarely question it. And so that bias gets amplified. If it's making decisions around, again, housing, policing, surveillance, it's inheriting that bias. So how can we circumvent it? The genie is out of the bottle. It's out there. It's being deployed.

I think one of the things we can do to deal with this is, one, putting in more safeguards for the things that are around our liberties, around things that really can cause harm to us as a society. And when I say safeguards, safeguards are about human oversight. Safeguards could be about rules and regulations designed not to impede progress but to ensure there's no harm to society. And we can look to the UN to say, okay, what are the things that are really, really dangerous for AI to be in, and make sure that we have human oversight until we get to the point that the bias in AI is mitigated. It will never be eradicated, because we as people will always have some biases, but at least we can mitigate the bias in AI's decisions.

MS. ABRIL: And you recently wrote: "Technologists aren't trained to be social scientists or historians. We're in this field because we love it, and we're typically positive about technology because it's our field. That's a problem. We're not good at building bridges with others who can translate what we see as positives and what we know are some of the negatives as well."

So, you know, as we kind of wrap up this segment, where do you think those bridges need to be built better in terms of where we are today?

DR. HOWARD: Well, one, I think that as technologists, we should be learning about things such as ethics and social science. Even though, you know, I got into engineering and coding because that's what I want to do--I'm not really a great reader. And so it's like, oh my gosh, I have to read more? No, I want to do math. I want to code. So it's also counter to what I enjoy.

But I think what it is--we are becoming so much more reliant on technology, and in order to contribute to the good, we also need to think about what harm we are instituting and what harm we are perpetuating. I can think about coding. I can think about my tools. I can think about AI. And I'm able to put in guardrails in my own code when I think about it. It's so much harder after the fact, and so I think it's not just technologists--we should be self-aware and start learning ourselves--but I think companies should also help their technology workforce become more ethical. And it's not good enough just to put an ethicist on a team, because as technologists, we tend to respect other technologists. I'll just say that. And so it really is about helping us become better--better humans, better in society. It's hard because it's based on a human problem, not a technical problem. But I think it's one of the ways to get to the other side.

MS. ABRIL: Got it. Yeah, that makes a lot of sense.

Unfortunately, we're actually out of time. So we're going to have to leave it there. Dean Ayanna Howard, thank you so much for joining us today.

DR. HOWARD: Thank you. Thank you for having me.

MS. ABRIL: And thanks to all of you for watching. For more of these important conversations, sign up for a Washington Post subscription. Get a free trial by visiting WashingtonPost.com/live.

I'm Danielle Abril. Thanks again for joining us today.

[End recorded session]