INTERVIEW: Deloitte Says Companies Need Ethical Framework Around AI

Deloitte’s Irfan Saif and Maureen Mohlenkamp

By Oliver Estreich

From CRM tools offered by Salesforce.com Inc. to highly complex products from IBM, artificial intelligence (AI) has become central to corporate strategy. While AI adoption varies across organizations and industries, early adopters are quickly realizing that building trustworthy AI programs – using related data and technologies ethically – can deliver both short- and long-term benefits when properly supported by leadership. Deloitte Risk & Financial Advisory’s Maureen Mohlenkamp, who specializes in ethics and compliance, and Deloitte AI leader Irfan Saif discussed the topic with CorpGov. The full interview is below:

CorpGov: How do leaders at AI-using organizations manage ethics?

Mr. Saif: We polled over 550 C-suite and other executives working at organizations using AI, and found that nearly half expect to increase AI use for risk management and compliance efforts in the coming 12 months. Yet, just one in five said their organizations have an ethical framework in place for such use of AI.

CorpGov: What is trustworthy AI?

Mr. Saif: To me, trustworthy AI is deeply tied to an organization’s ethical framework. Trustworthy AI includes a number of components, not least: standards around the ethical use of AI; training for talent on the ethical use of AI; specialized guidance for product teams on how to monitor AI solutions for ethical compliance; and a strong tone at the top on AI ethics, where board involvement can be invaluable.

CorpGov: What types of questions should boards be asking about AI ethics?

Ms. Mohlenkamp: AI ethics is as much about understanding the risks as it is about establishing a process for avoiding them. Boards need to ask questions early and often about ethical use of technology and data to mitigate unintended and unethical consequences, whether AI is involved or not. As data and technology uses evolve, tying efforts directly to organizational mission statements and corporate conduct policies can help organizations ensure that future advancements start with a strong ethical foundation.

CorpGov: Where should boards and other leaders start, when assessing the current state of AI ethics within their organizations?

Ms. Mohlenkamp: While the board might focus on defining a governance framework that can be used to think about AI and ethics at the highest level, it will be valuable for management to conduct a gap analysis. This can be a great way to assess an organization’s current practices and understand how to adapt or enhance existing organizational policies, procedures and standards to fit the age of AI. Other enterprise stakeholders, such as chief data officers, CIOs and CISOs, along with legal, ethics and compliance leaders, can support both sets of efforts as subject matter specialists. They are also likely to be evangelists and enablers in helping their respective organizations manage and govern AI ethics on an ongoing basis.

CorpGov: How can leaders help prepare their organizations for ethical AI use?

Mr. Saif: For many organizations, AI presents a steep learning curve. Traditional business professionals will need to learn how to team with data scientists and technologists to achieve strategic goals and to explain the changes in the environment. Meanwhile, those developing algorithms and managing data will need to be specially trained to identify and mitigate bias within AI applications.

An educated, tech-savvy workforce is better positioned to ethically embrace the opportunities that AI use creates.

 

Contact:

editor@corpgov.com

www.CorpGov.com

Twitter: @CorpGovernor
