
Commentary

Building trust in human-machine teams

A U-2 spy plane being used to test AI applications lands at an airfield.

From the factory floor to air-to-air combat, artificial intelligence will soon replace humans not only in jobs that involve basic, repetitive tasks but also in work that demands advanced analytical and decision-making skills. AI can play sophisticated strategy games like chess and Go, defeating world champions who are left dumbfounded by the unexpected moves the computer makes. And now, AI can beat experienced fighter pilots in simulated aerial combat and fly co-pilot aboard actual military aircraft. In the popular imagination, the future of human civilization is one in which gamers, drivers, and factory workers are replaced by machines. In this line of thinking, the future of war is marked by fully autonomous killer robots that make independent decisions on a battlefield from which human warfighters are increasingly absent.

Although not unfathomable, such scenarios are largely divorced both from the state of the art in AI technology and from the way in which the U.S. military is thinking about using autonomy and AI in military systems and missions. As the former director of the Joint Artificial Intelligence Center, Lt. Gen. Jack Shanahan, has made clear, “A.I.’s most valuable contributions will come from how we use it to make better and faster decisions,” including “gaining a deeper understanding of how to optimize human-machine teaming.”

Human-machine teaming is at the core of the Department of Defense’s vision of future warfare, but successful collaboration between humans and intelligent machines—like the performance of great teams in business or sports—depends in large part on trust. Indeed, national security leaders, military professionals, and academics largely agree that trust is key to effective human-machine teaming. Yet research by the Center for Security and Emerging Technology (CSET) on U.S. military investments in science and technology programs related to autonomy and AI has found that a mere 18 of the 789 research components related to autonomy and 11 of the 287 research components related to AI mentioned the word “trust.”

Rather than studying trust directly, defense researchers and developers have prioritized technology-centric solutions that “build trust into the system” by making AI more transparent, explainable, and reliable. These efforts are necessary for cultivating trust in human-machine teams, but technology-centric solutions may not fully account for the human element in this teaming equation. A holistic understanding of trust—one that pays attention to the human, the machine, and the interactions and interdependencies between them—can help the U.S. military move forward with its vision of using intelligent machines as trusted partners to human operators, advancing the Department of Defense’s strategy for AI as a whole.

Conceptualizing trust

In recent years, the U.S. military has developed many programs and prototypes pairing humans with intelligent machines, from robotic mules that can help infantry units carry ammunition and equipment to AI-enabled autonomous drones that partner with fighter jets, providing support for intelligence collection missions and air strikes. As AI becomes smarter and more reliable, the potential ways in which humans can team with unmanned systems, robots, virtual assistants, algorithms, and other non-human intelligent agents seem limitless. But ensuring that the autonomous and AI-enabled systems the Department of Defense is developing are used in safe, secure, effective, and ethical ways will depend in large part on soldiers having the proper degree of trust in their machine teammates.

Trust is a complex and multilayered concept, but in the context of human-machine teaming, it speaks to an individual’s confidence in the reliability of the technology’s conclusions and its ability to accomplish defined goals. Trust is critical to effective human-machine teaming because it affects the willingness of people to use intelligent machines and to accept their recommendations. Having too little trust in highly capable technology can lead to underutilization or disuse of AI systems, while too much trust in limited or untested systems can lead to overreliance on AI. Both present unique risks in military settings, from accidents and friendly fire to unintentional harm to civilians and collateral damage.

Building trustworthy AI

As previously noted, research from CSET on U.S. military investments in autonomy and AI has found that the word “trust” is scarcely mentioned in the descriptions of various autonomy and AI-related research initiatives under the Department of Defense’s science and technology program. In our most recent report, we identified at least two explanations for this gap.

First, technology has outpaced research on human-machine teaming and trust. There is a great deal of research on human trust in automation, such as autopilot functions in planes and cars, or industrial robots that perform scripted tasks based on specified rules. Fewer studies, however, focus explicitly on human collaboration and trust in the more advanced machine learning systems we have today, which can not only detect patterns but also learn and make predictions from data without being explicitly programmed to do so.

It is difficult to estimate exactly how such technological advances could affect trust in human-machine teams. On the one hand, people already tend to overtrust highly complex systems, and the increased sophistication of these intelligent technologies could reinforce such inclinations. On the other hand, as Heather Roff and David Danks point out, the very ability of advanced AI systems to learn and adapt to their environment—and in turn, change and behave in unexpected or incomprehensible ways—could undermine the human team members’ trust. Clearly, additional research is needed to contextualize and help reconcile some of these tensions.

Second, trust can seem like an abstract, subjective concept that is hard to define and even harder to measure. Defense researchers and developers, therefore, focus less on studying trust directly and more on systems engineering approaches that “build trust into the system” by making AI systems more transparent, explainable, and reliable.

The U.S. Army, for instance, is interested in autonomous vehicle technology to run resupply convoys in contested environments and conflict zones. While fully autonomous trucks don’t yet exist, there are promising concepts for a mix of manned and unmanned trucks that could put fewer soldiers at risk on such dangerous missions. For soldiers to have trust in these unmanned vehicles, however, they need to know things like what the truck will do if it encounters an obstacle. From a systems engineering perspective, this means specifying and implementing capabilities such as information extraction through “what-if” type queries and information communication, so that the system can explain its reasoning and behavior in a way that the human operator can easily understand. “Building trust into the system,” in other words, is a technology-centric approach to engendering trust in human-machine teams by enhancing system features and capabilities closely related to trust, such as transparency, explainability, and reliability.
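To make the idea of a “what-if” query a bit more concrete, the sketch below shows, in Python, one way such an interface might be exposed to an operator. It is a toy illustration only: the names (ConvoyPlanner, Obstacle, what_if, Action) and the decision rules are hypothetical and are not drawn from any actual Army or DARPA system.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Possible responses an unmanned truck might take; illustrative only."""
    PROCEED = auto()
    REROUTE = auto()
    REQUEST_OPERATOR_INPUT = auto()


@dataclass
class Obstacle:
    """A hypothetical obstacle description the operator can ask about."""
    kind: str           # e.g., "debris", "civilian vehicle", "crater"
    blocks_route: bool  # whether it fully blocks the planned path


class ConvoyPlanner:
    """Toy planner that can answer 'what-if' queries about its own behavior."""

    def decide(self, obstacle: Obstacle) -> Action:
        # Simple, inspectable rules stand in for a real planning stack.
        if not obstacle.blocks_route:
            return Action.PROCEED
        if obstacle.kind == "civilian vehicle":
            return Action.REQUEST_OPERATOR_INPUT
        return Action.REROUTE

    def what_if(self, obstacle: Obstacle) -> str:
        # Pair the decision with the rule that produced it, in plain language.
        action = self.decide(obstacle)
        reasons = {
            Action.PROCEED: "the obstacle does not block the planned path",
            Action.REQUEST_OPERATOR_INPUT: "civilians may be present",
            Action.REROUTE: "the path is blocked and an alternate route is available",
        }
        verb = action.name.lower().replace("_", " ")
        return f"If it encounters {obstacle.kind}, the truck will {verb} because {reasons[action]}."


if __name__ == "__main__":
    planner = ConvoyPlanner()
    print(planner.what_if(Obstacle(kind="debris", blocks_route=True)))
    print(planner.what_if(Obstacle(kind="civilian vehicle", blocks_route=True)))

Even in this stripped-down form, the point carries over: the operator can interrogate the system’s intended behavior before a mission rather than discovering it on the road, which is one way system design can support calibrated trust.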

Testing and experimentation with new technologies are also an important part of building trustworthy AI partners that could soon be deployed alongside human warfighters. One example is DARPA’s Squad X program, which partners U.S. Army and U.S. Marine Corps infantry squads with unmanned ground and aerial vehicles equipped with advanced sensing gear to improve the warfighters’ situational awareness and decision-making in hostile environments. One of the key lessons from a series of experiments that Squad X ran in early 2019 is the importance of including the AI in the mission planning and rehearsal stages. Doing so allows soldiers to “wrestle with how to trust AI.” Ultimately, the goal is for human warfighters to gain a better understanding of how these autonomous systems will behave on the battlefield and greater confidence in them as partners in future missions.

Interestingly, the emphasis on trust by defense researchers and developers sometimes gets overlooked in media coverage of DoD’s AI programs. DARPA’s Air Combat Evolution (ACE) program, for instance, drew a great deal of attention when an AI system beat one of the Air Force’s top F-16 fighter pilots in a simulated aerial dogfight contest. ACE’s program manager, however, emphasized that “regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI.”

Although only a small share of DoD’s science and technology research efforts related to autonomy and AI focuses directly on studying trust in human-machine teams, it is clear that scientists and developers working on future defense technologies are committed to building trustworthy and reliable AI systems. Testing and experimentation, even early in the technology development cycle, also help developers build better systems by providing insights into the factors that influence trust in human-machine teams as well as by directly contributing to the development and proper calibration of trust between operators and their machine partners. That said, these are predominantly technology-focused approaches and, as such, are unlikely to be sufficient to ensure that the U.S. military has a holistic understanding of trust in human-machine teams.

A holistic understanding of trust

Human-machine teaming is a relationship—one made up of at least three equally important elements: the human, the machine, and the interactions and interdependencies between them. Building trustworthy AI that is transparent, interpretable, reliable, and exhibits other characteristics and capabilities that enable trust is an essential part of creating effective human-machine teams. But so is having a good understanding of the human element in this relationship.

What does it take for people to trust technology? Are some individuals or groups more likely to feel confident about using advanced systems, while others are more reluctant? How does the environment within which human-machine teams are deployed affect trust? Research from cognitive science, neuroscience, psychology, communications, the social sciences, and other fields that examine human attitudes toward and experiences with technology offers rich insights into these questions.

For example, research shows that demographic factors such as age, gender, and cultural background affect how people interact with technology, including issues of trust. These factors and variations are highly relevant when thinking about the future of multinational coalitions like NATO. If some U.S. allies are less comfortable with using AI-enabled military systems, gaps in trust could undermine coalition-wide coordination, interoperability, and overall effectiveness. 

Stressful conditions and the mental and cognitive pressures of performing complex tasks also influence trust, with research showing that people generally tend to overtrust a machine’s recommendations in high-stress situations. Broader societal structures also play a role, with organizational and workplace culture conditioning how people relate to technology. For instance, different branches within the military and even individual units each have a unique organizational culture, including different postures toward technology that are reinforced through training and exercises. These different cultures and postures affect how soldiers, sailors, airmen, and Marines make decisions and behave, including their trust in and reliance on technology—whether they accept an algorithm’s conclusions uncritically, reject its assessment and follow their own intuition, or seek further input from others higher up the chain of command.

A focus on the human element is therefore a necessary complement to technology-centric solutions. Without it, trust requirements built into a given system may cultivate appropriate trust in a particular human-machine team, but there is no guarantee this “built in” trust will hold for new human team members or across different mission environments. Insights from research on the different cognitive, demographic, emotional, and situational factors that shape how people interact with technology as well as the broader institutional and societal structures that influence human behavior can therefore help augment and refine systems engineering approaches to building trustworthy AI systems. Such a holistic approach to trust would best advance the U.S. military’s vision of using intelligent machines not only as tools that facilitate human action but as trusted partners to human operators.

Margarita Konaev, Ph.D., is a research fellow with the Center for Security and Emerging Technology, focusing on military applications of AI and Russian military innovation.
Husanjot Chahal is a research analyst with the Center for Security and Emerging Technology, focusing on AI competitiveness, data, and military applications of AI.
