WASHINGTON – Today, global tech trade association ITI offered recommendations to the Brazilian Congress as it seeks to regulate the development and application of artificial intelligence (AI). In comments to the Brazilian Senate’s Committee of Legal Scholars, ITI underscores its support for Brazil’s overall goal of building an ecosystem of trust in AI through forward-thinking approaches to governance, recognizing the need to ensure that AI systems are fair, transparent, accountable, and privacy-respecting. To meet this goal, ITI offers suggestions on the definition of AI; a risk-based, context-specific approach to regulation; transparency; liability; bias; and human oversight.

“There is a fast-growing global dialogue around how best to turn broad responsible AI principles into practical steps that both companies and policymakers can implement,” ITI wrote in its submission. “That is why we broadly recommend in our comments that any new AI regulation should support and build on these ongoing efforts to establish best practices, rather than risk cutting them short with inflexible rules that may not be able to adapt to a rapidly changing field of technology.”

In its submission, ITI offers several recommendations for the Committee of Legal Scholars to consider in AI regulation and policy:

  • Defining AI. Brazil should avoid adopting an overly broad definition of AI. ITI urges the Committee to ensure that any definition focuses on software that learns and accounts for the context in which AI is used, and suggests adopting the OECD definition of AI.

  • Principles-based, risk-based regulatory approach. ITI supports the overall goal of building a thoughtful, proportionate, and risk-based approach to AI governance. The regulatory framework should be flexible enough to permit this agility, encouraging organizations to identify risks, address them, and adapt their mitigation measures iteratively throughout the life cycle of an AI application.

  • Transparency of AI Systems. ITI offers a series of considerations around transparency, including suggesting that Brazil be very clear as to what is meant by “transparency.” It also encourages Brazil to take a risk-based approach to any transparency requirements.

  • Addressing AI Bias. ITI offers a series of considerations around managing AI bias, especially in the context of legislation. It urges Brazil to consider that not all bias is harmful, that there are many tools beyond ensuring quality input data that can be used to address bias, that legislation should not be prescriptive in how organizations address bias, and that it should reflect the potential need to process sensitive data in order to mitigate bias.

  • Human Oversight. ITI recommends that the level of human oversight of an AI system be determined based on the individual use case, once again taking a risk-based approach.

  • Liability. ITI urges Brazil to examine existing liability law to see if there are any specific and tangible gaps that need to be addressed. Liability should be assigned based on the role of the party in the AI lifecycle and the party’s ability to control the use of the AI system and mitigate risk.

On June 10, ITI’s Courtney Lang presented at the Brazilian Senate’s International Seminar on “Challenges in Artificial Intelligence Regulation: International Contribution to the Brazilian Lawmaking Process,” where she discussed different approaches to AI regulation and the role of transparency. Read her full presentation here.

Read the full comments here.