How AI Principles Help Shape AI Globally


When it comes to the adoption of artificial intelligence, the US federal government is moving at a rapid pace. On February 11, 2019, President Trump signed Executive Order 13859 announcing the American AI Initiative, the United States’ national strategy on artificial intelligence. As part of this strategy, the US took into consideration the principles on artificial intelligence published by the Organization for Economic Cooperation and Development (OECD). The AI Today podcast interviewed Adam Murray, an international relations officer at the US Department of State, to discuss these principles in more detail and why it is important for principles around responsible and trustworthy AI to be discussed and adopted at an international level.


OECD AI Principles

Adam Murray is an international relations officer and US diplomat who has been working with the Department of State for over 13 years. His background includes postings around the world, from Burma to Paris. As the US delegate to the OECD Committee on Digital Economy Policy (CDEP) and chair of the OECD Network of Experts on AI (ONE AI), Adam helped craft the OECD Principles on AI.

Officially launched in early 2020, building on the work of the OECD Expert Group on AI (AIGO), the OECD Network of Experts on AI (ONE AI) is made up of members from across the globe. ONE AI is an advisory group providing expert input to the OECD’s analytical work on AI and identifying emerging trends and topics around AI. Its members contribute AI-specific expertise on policy, technical, and business topics. The network meets regularly throughout the year and aims to produce deliverables that help push global AI policy forward.

The OECD Principles on AI center on five complementary values-based principles for the responsible stewardship of trustworthy AI, as excerpted from the OECD website:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. 

The OECD’s recommendations on AI focus on responsible and trustworthy AI. They were adopted in May 2019, and their most notable aspect is that this is the first time a large number of member countries came together to acknowledge shared goals for the possibilities and future of AI in the world. While the principles are only guidelines and do not have the force of law, they carry ethical weight for those looking to adopt AI. A peer review process is in place to help keep people and countries on track in implementing the concepts espoused by the principles.

On the podcast, Adam shared that at first there were not many governmental voices involved in shaping these principles, but participants from many backgrounds helped bring them together. Jurisdictions of many sizes were represented, including the United States, Singapore, Russia, and Dubai. Even with that global makeup, the group was able to reach consensus on many different issues.

Digging deeper into the OECD’s AI principles

Adam explains that the OECD recommendations are divided into two sections. The first deals with principles for the responsible stewardship of trustworthy AI. This section is designed to support an optimistic outlook for how AI can contribute to broader well-being, economic growth, and innovation in society, and to promote safety, security, and transparency when AI is used in the everyday world.

The second part looks at national policies that governments can implement, such as investing in AI research and development and preparing the workforce for a future where AI is in even broader use. It also addresses how policy can be made and shaped as AI continues to develop and change.

As countries move forward with the adoption of AI for various processes, decisions, and tasks, concerns and questions necessarily arise around the transparency and explainability of AI systems. Part of the recurring discussion is understanding what governments can and should do to push certain policies and regulations around AI forward. Adam explains that educating legislative and judicial bodies and fostering a general understanding of how AI can be used is key to building trust in AI systems. Research is under way to make AI more explainable, and work from organizations such as ATARC is advancing transparency assessments for AI models.

Another big effort is finding the balance between creating an environment for innovation and creativity while still protecting individual civil liberties. Because data is at the heart of AI, it is critical to understand the data governments use to train AI models. Adam emphasized that many different disciplines and perspectives on AI must be brought in to understand the scope of the required balance. Organizations and governments need to look at the rapid pace of AI advancement and put the use of data supporting those models into perspective, so that the models are built with privacy and civil liberties in mind.

Next steps with OECD

Countries around the world are increasingly focusing their strategic resources on the advancement of AI. Governments see how the adoption of AI can provide many strategic advantages to their industrial ecosystems, military and defense, academic institutions, and other critical areas. AI is rapidly becoming a priority, and efforts from the United States are helping to advance AI in a way that provides economic benefit without sacrificing individual privacy and liberties. As part of its global engagement on AI, the United States understands that it must engage internationally to promote a global environment that supports AI research and innovation.

The OECD has also put together the OECD AI Policy Observatory so that different parties can engage with the principles as they are adopted and implemented. As AI becomes a bigger part of our daily lives and governments find strategic advantage in the technology, these discussions become even more important.
