Artificial Intelligence: How The EU AI Act Can Enable Responsible Innovation In AI And Machine Learning

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of Euractiv Media network.

Jens-Henrik Jeppesen is Director of Public Policy, EMEA at Workday.

While Member States have been reviewing the European Commission’s proposal for a Regulation on Artificial Intelligence – the AI Act – for some time, the European Parliament is now set to begin its deliberations. It is an enormously important task. The AI Act proposal is the first of its kind anywhere in the world, and it is likely to set norms and standards globally for the development and deployment of AI technology. It will regulate an incredibly broad set of technologies and tools used by all manner of companies and sectors, so it is important to get it right. Already today, AI is improving healthcare, optimising commerce, strengthening energy resilience, enhancing employees’ careers, and driving human progress in countless other ways. Businesses use applications incorporating AI technologies across their operations to support better business decisions, accelerate operations, and deliver data-driven predictions that inform human judgment.

Workday provides financial, human capital management, planning, and analytics applications to large organisations globally. Our applications are delivered through the cloud and are trusted by thousands of customers and tens of millions of their employees. At Workday, we are fully engaged in ML-driven innovation, harnessing the power of ML to help our customers make more informed decisions, accelerate operations, and assist workers with data-driven predictions that lead to better outcomes. We believe the most transformative uses of AI are those that leverage its insights and predictive power to enhance human judgment and decision-making, rather than seeking to replace it.

Workday has been a vocal proponent of proportionate, risk-based regulation of AI, both in Europe and the US, and we contributed to the Commission’s consultations preceding the AI Act. Overall, we commend the Commission for the proposal. It is well-crafted, meticulously thought through, and comprehensive, and it provides a good basis for the legislative process. By setting robust fundamental rights safeguards for AI systems that pose risks to health, safety, or fundamental rights, the legislation can create a vibrant market for trustworthy and ethical AI systems.

As Member States and the European Parliament review the proposal, we believe they should maintain a number of its core elements. First, they should support the risk-based approach. It makes conceptual sense to categorise use scenarios along a risk scale such as the one the European Commission proposes, and to impose regulatory requirements accordingly. Some AI systems pose unacceptable risks, while others pose little or none. Some are standalone software systems with fundamental rights implications, and some are embedded in physical products as health and safety components. While maintaining the risk-based approach, we think the regulation can be improved by tightening the overall definition of AI, which currently appears to capture software and tools not normally associated with AI. It is also important to ensure that the high-risk definition does not inadvertently encompass use scenarios that do not actually produce material risks.

Another essential element of the proposal is the principle of self-assessment, which enables AI providers to comply with the relevant obligations throughout the design and development process. Self-assessment is especially important for software delivered as a service, where improvements and upgrades are released frequently. A third-party assessment approach would lengthen time to market, as companies would depend on outside assessment bodies to approve each such update. By restricting third-party assessment to very few types of high-risk applications, the AI Act also avoids overburdening assessment bodies with cases.

The main challenges we see in the draft AI Act concern the product safety regulatory model chosen by the European Commission: the New Legislative Framework. This existing product safety framework is well-suited to AI embedded in products such as Internet of Things devices, autonomous cars, robotics, and other machinery. For many standalone software applications, however, the main concern is the protection of fundamental rights, not health and safety. For software systems that are constantly updated and improved, a set of process-based rules and requirements would be a more relevant guide for providers of AI systems and for the organisations that use these technologies, and a better way to ensure that fundamental rights concerns are properly addressed when AI systems are developed and used in high-risk scenarios.

Another problem with the product safety model is the AI Act’s assumption that an AI system is handed over to the customer, much like a physical product, with instructions for use. In this model the provider is held responsible for the safety of the product, and in the AI Act obligations and requirements fall overwhelmingly on the provider rather than the user. In many enterprise and business-to-business use cases, however, AI systems are deployed and customised under the control of the customer organisation (the user), and the user determines how the AI system interacts with data under its control. The allocation of responsibilities in the AI Act should therefore be amended to be flexible enough to accommodate this type of deployment scenario.

Workday will continue to contribute to the legislative process, based on our analysis of the AI Act and on our White Paper on building trust in AI/ML. We support international coordination through the OECD, the Global Partnership on AI, and, most recently, the EU-US Trade and Technology Council, where AI will be an important area for cooperation. The dual objectives behind the EU’s AI strategy – benefiting from the potential of AI while addressing the associated risks – are shared by policymakers in the US and many other countries. An innovation-friendly, proportionate, and risk-based EU AI Act is essential to achieving these goals.
