The European Commission has proposed a new set of regulations intended both to make AI trustworthy and to spur its development. While the rules still need to be ratified, they have the potential to affect any business developing or leveraging AI in the EU.
The European Commission's proposal, a sweeping regulatory framework governing artificial intelligence, is now under consideration in the European Union. It seeks to set new global norms for AI by balancing the safety and fundamental rights of people and businesses with stronger AI uptake, investment and innovation.
As the growth of artificial intelligence (AI) accelerates, the issue of AI ethics and compliance is knocking ever louder on our doors. Trustworthy AI is becoming a business imperative: it fosters consumer confidence and builds brand equity. In fact, Gartner projects that by 2025, 30 percent of contracts for the use of AI technology and services will incorporate specific transparency and traceability provisions designed to mitigate compliance risks and promote ethical use of AI.
Let’s take a look at what the proposed regulation will mean for enterprises.
The Effort to Make AI Trustworthy – Who Does It Impact?
The proposed regulation applies to all developers that market or provide AI systems in the EU (regardless of whether those providers are established in the union) and to those whose users are based in member states. Providers are defined broadly to include natural or legal persons, public authorities, agencies or other bodies that develop AI systems. The scope of the regulation also extends to distributors, importers and any third parties who make substantial modifications to the functionality and performance of AI systems.
Which AI Systems Does the Proposed Regulation Cover?
AI systems under the regulation encompass a wide range of methods and algorithms, including supervised, unsupervised and reinforcement machine learning, that, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.
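To make that scope concrete, here is a minimal, hypothetical sketch of a system that would plausibly fall under this definition: a supervised model pursuing a human-defined objective (flagging loan applications for review) whose output is a recommendation. The feature names, weights and threshold are illustrative assumptions, not drawn from the regulation.

```python
# Hypothetical example: a supervised model whose output is a
# recommendation, i.e., the kind of system the proposed regulation covers.
# Feature names, weights and threshold are illustrative assumptions.

WEIGHTS = {"income": -0.4, "debt_ratio": 0.9, "missed_payments": 0.7}
BIAS = -0.1
THRESHOLD = 0.5

def risk_score(applicant: dict) -> float:
    """Linear score that, in a real system, would be learned from labeled data."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def recommend(applicant: dict) -> str:
    """Human-defined objective: flag risky loan applications for review."""
    return "flag_for_review" if risk_score(applicant) > THRESHOLD else "approve"

print(recommend({"income": 1.2, "debt_ratio": 0.8, "missed_payments": 1}))
```

Because this hypothetical system influences access to credit, it would likely land in the high-risk category described below.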
The proposed regulation also extends to machinery that incorporates AI, such as industrial and consumer robots, imposing stringent conformance obligations to ensure compliance with product safety rules.
The ambition of these rules is to balance the socioeconomic utility of AI systems against fundamental rights, giving people the confidence to embrace AI-based solutions while encouraging businesses to develop them.
Understanding Different Levels of AI Risk
To achieve this objective, the cornerstone of the regulation is a risk-based approach that classifies AI systems into four categories based on a combination of factors, including the intended purpose, the number of persons impacted and the potential for harm. The categories, illustrated in the sketch after this list, are:
- Prohibited AI: This includes AI systems that use subliminal techniques to cause physical or psychological harm; exploit vulnerable groups; provide social scoring by public authorities that may result in discrimination or unfavorable treatment; or collect biometric data in public spaces for use by law enforcement (subject to well-defined exceptions), which may lead to profiling or biased facial recognition.
- High Risk: This includes AI systems used in critical infrastructure (e.g., transport); educational or vocational training (e.g., scoring exams); human resources (e.g., CV-sorting software for recruitment); essential private and public services (e.g., creditworthiness assessments for loans or public assistance); law enforcement (e.g., evaluating the reliability of evidence); migration, asylum and border control management (e.g., verifying travel documents); and the administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).
- Limited Risk AI: Providers of AI systems such as chatbots are required to incorporate transparency provisions to inform users that they are interacting with an automated customer response management system.
- Minimal Risk AI: Providers of all other AI systems, while not specifically covered by the proposed regulation, are encouraged to adopt responsible-AI best practices on a voluntary basis.
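Here is a minimal sketch of that four-tier triage, assuming hypothetical purpose labels; the actual legal classification turns on the proposal's annexes and legal analysis, not on a lookup table.

```python
# A minimal sketch of the regulation's four-tier, risk-based triage.
# The purpose labels and their tier assignments are illustrative
# assumptions; real classification requires legal analysis.

PROHIBITED = {"social_scoring_by_public_authority", "subliminal_manipulation"}
HIGH_RISK = {"exam_scoring", "cv_screening", "credit_scoring", "border_control"}
LIMITED_RISK = {"chatbot"}

def risk_tier(intended_purpose: str) -> str:
    if intended_purpose in PROHIBITED:
        return "prohibited"
    if intended_purpose in HIGH_RISK:
        return "high_risk: conformity assessment required"
    if intended_purpose in LIMITED_RISK:
        return "limited_risk: transparency obligations apply"
    return "minimal_risk: voluntary codes of conduct encouraged"

print(risk_tier("cv_screening"))  # high_risk: conformity assessment required
print(risk_tier("chatbot"))       # limited_risk: transparency obligations apply
```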
Obligations for Providers of AI Systems
The proposed regulation imposes stringent requirements on providers to ensure delivery of trustworthy AI. These provisions include:
- Implementation of continuous and iterative AI risk management and governance processes that identify, monitor and remediate foreseeable risks in the application of AI systems;
- Rigorous data governance best practices that ensure training data is accurate, that it is used for its intended purposes and that possible biases in the training data are identified, to minimize harm to the fundamental rights of natural persons;
- Maintenance of the auditability of AI systems through technological measures, including access to “event logs” that ensure the traceability of AI system performance (a minimal logging sketch follows this list);
- Transparency that empowers users to gain insight into how AI systems operate, how training data is used, their intended purposes and their impact on persons or groups of persons; and
- Human oversight to monitor and remediate potentially harmful risks associated with the application of AI systems, monitor their performance and correctly interpret the outputs they generate.
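As a concrete illustration of the event-log obligation, here is a minimal sketch assuming a JSON-lines audit trail. The field names and format are assumptions: the proposal mandates automatic logging for high-risk systems but does not prescribe a schema.

```python
# A minimal sketch of "event log" traceability, assuming a simple
# JSON-lines audit trail. Field names are illustrative assumptions.
import json
import time
import uuid

def log_event(path: str, model_version: str, inputs: dict, output) -> None:
    """Append one traceable record per model decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "credit-model-1.3.0",
          {"income": 1.2, "debt_ratio": 0.8}, "flag_for_review")
```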
Conformity Assessments, Certification and Enforcement
AI systems covered by the regulation will be subject to conformity assessments, with certificates valid for up to five years.
The regulation also incorporates an onerous enforcement mechanism, including administrative fines of up to €30 million (about $36 million) or 6 percent of total worldwide annual turnover, whichever is higher, for prohibited and noncompliant practices (e.g., poor training data practices that result in material harm to users).
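The mechanics of that ceiling, the higher of the flat amount or the turnover percentage, can be shown with a short worked example (the turnover figures are hypothetical):

```python
# Worked example of the proposed penalty ceiling: the higher of
# EUR 30 million or 6% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

print(max_fine_eur(100_000_000))    # 30,000,000: the flat floor applies
print(max_fine_eur(2_000_000_000))  # 120,000,000: 6% of turnover applies
```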
Explaining Explainable AI
While the EU’s announcement of the proposed AI regulation does not mention the term “explainable AI,” many of its measures work to make algorithmic decision-making easier to understand. The way many AI systems currently operate makes it impossible for their engineers, let alone their users, to say why a given series of inputs led to the output produced. The proposed regulation addresses this opacity as part of its focus on making AI trustworthy.
The EU-commissioned expert panel, in its Ethics Guidelines for Trustworthy AI, states:
Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Without AI systems – and the humans behind them – being demonstrably worthy of trust, unwanted consequences may ensue, and their uptake might be hindered. This could prevent the realization of potentially vast social and economic benefits that they can bring.
To engender trust in AI systems, then, they must be explainable. But what exactly does explainable AI involve?
At the most basic level, explainable AI empowers humans to understand how AI algorithms work, the methods they employ to make decisions, the reliability of the data used to train them, and the degree of confidence warranted in the accuracy and fairness of their results. This means explainable AI ought to have a “human in the loop” to adjudicate and remediate potentially harmful impacts of AI.
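One simple, concrete form of explainability is per-feature attribution for a linear model, sketched below. The weights and features reuse the hypothetical loan example above and remain illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of per-feature attribution for a linear model,
# one simple form of explainable AI. Weights and features reuse the
# hypothetical loan example above and are illustrative assumptions.

WEIGHTS = {"income": -0.4, "debt_ratio": 0.9, "missed_payments": 0.7}
BIAS = -0.1

def explain(applicant: dict) -> dict:
    """Decompose the score into one additive contribution per feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    contributions["(bias)"] = BIAS
    return contributions

for feature, contrib in explain(
        {"income": 1.2, "debt_ratio": 0.8, "missed_payments": 1}).items():
    print(f"{feature:>18}: {contrib:+.2f}")
```

A human reviewer can see at a glance which feature drove the decision and intervene if the attribution looks unfair.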
In August 2020, the National Institute of Standards and Technology (NIST) released a draft report, Four Principles of Explainable Artificial Intelligence, which proposes four principles for determining the “explainability” of decisions made by AI systems. Explainability, NIST said, refers to the idea that the reasons behind the output of any AI system should be understandable.
The proposed principles are:
- Explanation: AI systems should deliver accompanying evidence or reasons for all outputs.
- Meaningful: Systems should provide explanations that are understandable to individual users.
- Explanation Accuracy: The explanation should correctly reflect the system’s process for generating the output.
- Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient level of confidence in its output (see the abstention sketch below).
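The Knowledge Limits principle in particular maps naturally onto an abstention mechanism: the system defers to a human when its confidence is too low. A minimal sketch, with an assumed confidence score and threshold:

```python
# A minimal sketch of the Knowledge Limits principle: the system
# abstains and defers to a human when its confidence is too low.
# The confidence values and threshold are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.75

def decide(prediction: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return "abstain: outside knowledge limits, defer to human review"
    return prediction

print(decide("approve", confidence=0.92))  # approve
print(decide("approve", confidence=0.51))  # abstain and defer
```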
Furthermore, businesses should undertake impact assessments of AI bias before the commercial release of AI-based applications, particularly when their applications may adversely impact social, economic and privacy rights.
What’s Next?
The proposed EU AI regulation is far from ratification and will be subject to vigorous debate within the EU Parliament and EU Council. It will likely serve as a model for other jurisdictions, as the EU plans to promote global AI standards in close collaboration with international partners, in line with the rules-based multilateral system and the values it upholds.
Stateside, the FTC has already provided guidance on the trustworthy use of AI that, in large part, echoes the ambitions of the proposed EU regulation.
Global harmonization of AI regulation is ultimately a desirable objective, as AI adoption is accelerating with proven social utility and economic benefits. Those benefits, however, must be balanced with common-sense regulation that promotes innovation while safeguarding the fundamental human rights of consumers.