Preparing the Military for a Role on an Artificial Intelligence Battlefield

November 19, 2019 | Topic: Security | Region: Americas | Tags: Artificial Intelligence, Military, Technology, Robots, War


The Pentagon could emerge as a leader and a model for how to ensure ethics are embedded into artificial intelligence systems.

The Defense Innovation Board—an advisory committee of tech executives, scholars, and technologists—has unveiled its list of ethical principles for artificial intelligence (AI). If adopted by the Defense Department, the recommendations will help shape the Pentagon’s use of AI in both combat and non-combat systems. The board’s principles are an important milestone that should be celebrated, but the real challenge of adoption and implementation is just beginning. For the principles to have an impact, the department will need strong leadership from the Joint AI Center (JAIC), buy-in from senior military leadership and outside groups, and additional technical expertise within the Defense Department.

In its white paper, the board recognizes that the AI field is constantly evolving and that the principles it proposes are guidelines the department should aim for as it continues to design and field AI-enabled technologies. The board recommends that the Defense Department aspire to develop and deploy AI systems that are:

1. Responsible. The first principle establishes accountability, putting the onus on the human being not only for the “development, deployment, [and] use” of an AI system, but, most importantly, for any “outcomes” that system produces. The burden rests on the human, not the AI.

2. Equitable. The second principle calls on the DoD to take “deliberate steps” to minimize “unintended bias” in AI systems. The rise of facial recognition technology and the attendant problems of algorithmic bias show that the board is right to prioritize mitigating potential biases, particularly as the DoD continues to develop AI systems with national security applications. (A minimal sketch of what such a bias check might look like follows this list.)

3. Traceable. The third principle addresses the need for technical expertise within the Defense Department to ensure that AI engineers have an “appropriate understanding of the technology” and insight into how a system arrives at its outputs.

4. Reliable. The board’s fourth principle holds that an AI system should do what it was designed to do, within the domain it was designed to operate in. AI engineers should then conduct tests to ensure the “safety, security, and robustness” of the system across its “entire life cycle.”

5. Governable. The fifth principle tackles the need for fail-safes in situations where an AI system acts unexpectedly. The AI system should be able to “detect and avoid unintended harm,” and mechanisms should exist that allow “human or automated disengagement” for systems demonstrating “unintended escalatory” behavior. (A sketch of one such disengagement mechanism also follows this list.)
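To make the “Equitable” principle concrete, consider a minimal sketch in Python of one “deliberate step” a development team might take: checking whether a model’s positive-prediction rate differs across groups before a system is fielded. Everything here, the demographic-parity metric, the threshold, and the function names, is an illustrative assumption rather than anything prescribed in the board’s white paper.

```python
# Hypothetical sketch of a pre-fielding bias check; not from the board's paper.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.

    A gap near zero suggests the model treats groups similarly on this
    one metric; a large gap is a flag for further human review.
    """
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: flag the model for review if the gap exceeds a chosen threshold.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # the threshold is an assumption; it would be set by policy
    print("flag: outcome rates differ across groups; review before fielding")
```

A check like this is deliberately crude; its value is that it forces a measurable question (“do outcomes differ across groups, and by how much?”) into the development pipeline rather than leaving bias mitigation as an aspiration.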
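The “Governable” principle lends itself to a similar illustration. The sketch below, again a hypothetical rather than a design drawn from the board’s paper, wraps an AI decision function so that an action outside an approved envelope triggers automated disengagement and operator notification, the kind of “human or automated disengagement” mechanism the principle calls for.

```python
# Hypothetical sketch of a governability fail-safe; the board's paper
# specifies no particular design.

class DisengagedError(Exception):
    """Raised when the safety monitor takes the system offline."""

class GovernableWrapper:
    def __init__(self, policy, max_action, on_disengage=None):
        self.policy = policy              # the underlying AI decision function
        self.max_action = max_action      # approved operating envelope
        self.on_disengage = on_disengage  # hook for notifying a human operator
        self.engaged = True

    def act(self, observation):
        if not self.engaged:
            raise DisengagedError("system is disengaged")
        action = self.policy(observation)
        # Automated disengagement: an out-of-envelope action is treated as
        # potential "unintended escalatory" behavior and halts the system.
        if abs(action) > self.max_action:
            self.disengage(reason=f"action {action} exceeds envelope")
            raise DisengagedError("automated disengagement triggered")
        return action

    def disengage(self, reason=""):
        # Human or automated disengagement path: stop acting, then notify.
        self.engaged = False
        if self.on_disengage:
            self.on_disengage(reason)

# Usage: a toy policy that eventually proposes an out-of-bounds action.
wrapper = GovernableWrapper(
    policy=lambda obs: obs * 2,
    max_action=10,
    on_disengage=lambda reason: print(f"operator notified: {reason}"),
)
print(wrapper.act(3))   # 6, inside the envelope
try:
    wrapper.act(8)      # proposes 16, outside the envelope -> disengage
except DisengagedError as err:
    print(err)
```

The design choice worth noting is that the monitor sits outside the policy: the underlying model never has to cooperate with its own shutdown.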

These ethical principles are a worthwhile and necessary step in a series of actions the Defense Department has recently taken on AI. The department stood up its Joint AI Center nearly eighteen months ago to act as the central hub for AI deployment across the department, and it released its AI strategy this past February, prioritizing the concept of a “human-centered adoption of AI.”

Now that the board has proposed these ethical principles, it will fall to the JAIC to advocate for their adoption and make them actionable. Adoption and implementation are important because they indicate to the U.S. public and other nations that the U.S. defense community is taking AI ethics and risk mitigation seriously. In the past, international organizations like the OECD, as well as companies like Microsoft and Google, have introduced AI ethical principles or established ethics boards without always defining mechanisms for implementation or accountability. The Pentagon has the opportunity to be forward-thinking by not only adopting these principles, but also establishing mechanisms to abide by them.

Additionally, these principles may relieve some of the concerns that tech employees have voiced about working on Defense Department projects and provide some top cover for tech executives looking to partner with the department on AI-related projects. While these principles will not solve every issue in the relationship between the department and the tech community, their adoption should signal that the Pentagon is serious about embedding safety and mitigating risk in its AI systems.

Assuming the board’s principles are adopted, the Defense Department will then have to turn its efforts toward implementation. The Pentagon will need long-term leadership, buy-in from department leadership and outside groups, and increased technical expertise to apply these principles moving forward. The leadership of the JAIC will be instrumental in encouraging the department’s many components to incorporate this guidance into the design and deployment of their AI systems. In addition to the JAIC’s leadership, the principles will need long-term support from the highest levels of the Pentagon, regardless of who holds the office of Secretary of Defense.

The Defense Department should also seek support and buy-in from outside groups, including private sector partners and AI researchers in the tech community. The board engaged numerous voices from the private sector, academia, and the AI research community as it developed these principles. As the department begins to formulate policies for implementation, it should collaborate with AI technologists who are at the forefront of research on safety, risk, and unintended consequences in AI.

Finally, the department will need additional technical expertise to put these principles into practice. The Pentagon should follow the board’s recommendation and develop a strategy both for recruiting additional AI engineers and for creating programs to train existing department personnel in “AI-related skills and knowledge.”

The board has accomplished its task, and it is now up to the Defense Department to undertake the challenge of adoption and implementation. The road ahead will certainly have its hurdles, but with the right support and expertise, the Pentagon could emerge as a leader and a model for how to ensure ethics are embedded into AI systems. And in a field marked by so much uncertainty, that would be a major victory.

Megan Lamberth is a research assistant for the Technology and National Security program at the Center for a New American Security.

Image: Reuters