Ambitious Europe: the European Commission unveils its policy strategy on Artificial Intelligence
On 19 February 2020, the European Commission released its White Paper on Artificial Intelligence (“AI”), which sets out policy options on how to achieve the twin goals of (1) promoting the uptake of AI and (2) addressing the risks associated with certain uses of this technology. This new framework on AI builds on the Commission’s 2018 AI strategy, the Coordinated Plan agreed with the Member States, and the Ethics Guidelines on trustworthy AI presented by a High-Level Expert Group in April 2019. As the EU seeks to become a key global digital player, it has taken various initiatives since 2014[1] regarding data strategy, big tech and AI. The article below focuses on the two main building blocks of the European approach to AI presented in the Commission White Paper on AI: excellence and trust.
Introduction
The Commission White Paper notes that AI is developing fast and will improve our lives, but that certain uses of AI entail potential risks linked to its complexity, opacity, unpredictability and partially autonomous behaviour. A common European approach to AI is indispensable to ensure a trustworthy and excellent framework for AI applications based on and compliant with European values such as fundamental rights, consumer rights and data protection. To outline such a framework and strengthen a human-centric approach to AI that will enable citizens, businesses (including small and medium-sized enterprises (“SMEs”)) and services of public interest to adopt AI-based solutions, the public and private sectors need to mobilise resources along the entire value chain and create the right incentives for all players.
Ecosystem of excellence
To ensure excellence in the development and uptake of AI, action is needed at multiple levels.
First, cooperation between the Member States is essential to maximise the impact of investment in research, innovation and deployment, by pooling investment in areas where the action required goes beyond what a single Member State can achieve. The same reasoning calls for strong public-private partnerships. These common efforts should focus on the research and innovation community: creating more synergies between the multiple European research centres on AI and establishing a ‘lighthouse’ centre of research and expertise that can attract investment and talent from all over the world. In this context, developing the skills necessary to work in AI, as well as increasing awareness of AI at all levels of education, will be a priority of the revised Coordinated Plan.
Second, to allow SMEs to access and use AI, which in turn will create jobs and kindle growth across the EU, at least one innovation hub per Member State should have a high degree of specialisation in AI to provide support and help SMEs understand and adopt AI. Moreover, SMEs and start-ups will need financing to lead them on the path of innovation using AI: a € 100 million pilot investment fund in AI and blockchain[2] will be available to these enterprises.
Ecosystem of trust: regulatory framework
AI brings opportunities but also risks, which are both material and immaterial.[3] Indeed, flaws in the overall design of AI systems, or the use of data without correcting certain biases, might entail breaches of fundamental rights such as freedom of expression, non-discrimination, protection of personal data and private life, and consumer protection. The White Paper notes the practical implications of such risks, for example state authorities using AI for mass surveillance or employers monitoring their employees’ behaviour. Next to the risks related to fundamental rights, the flaws embedded in AI technology noted above might also bring new safety risks for users when the technology is featured in (new) products and services.[4] Should these safety risks materialise, the characteristics of AI technologies might make it difficult to trace back potentially problematic decisions made by AI systems; this might in turn prevent people who suffer harm from obtaining proper compensation under liability legislation.
These risks reinforce the lack of trust in AI, which is probably one of the main factors holding back its development. While current EU legislation[5] in principle applies to emerging AI applications, the Commission has found that the particularities of such applications might require: adjustments to specific legal instruments; the adoption of a clear new European framework able to protect fundamental rights (including personal data and privacy protection and non-discrimination); and the tackling of safety and liability-related issues, so as to provide legal certainty for businesses when placing AI-involving products on the market.
A risk-based approach
Regulatory intervention needs to be proportionate, so as to avoid creating excessive burdens, especially for SMEs. This is why the Commission proposes following a risk-based approach and differentiating between AI applications. Only high-risk AI applications (e.g. in health, policing or transport) would have to comply with specific legal requirements such as transparency, traceability and the guarantee of human oversight, while other applications that do not qualify as “high-risk” would not be subject to such requirements.
The Commission sets forth a number of key features that might outline the legal requirements for high-risk AI applications: training data; data and record keeping; information to be provided; robustness and accuracy; and human oversight.
Training data and record keeping. AI systems are trained on specific data sets. To ensure that these systems respect the EU’s values and rules, they could be required to be trained on data sets that are sufficiently broad (to guarantee safety) and representative (to avoid prohibited discrimination). Moreover, it is important to keep records related to the programming of the algorithms and the data used to train high-risk AI systems to allow potential problems to be traced back and verified.
Transparency. Clear information should be provided proactively as to the AI system’s capabilities, limitations and intended purpose. Citizens should also be clearly informed when they are interacting with an AI system rather than with a human being.
Robustness and accuracy. To be trustworthy, AI systems need to behave reliably throughout their development and functioning. Requirements ensuring that outcomes are reproducible, that the systems can adequately deal with errors or inconsistencies, and that they are resilient against attacks and manipulation are therefore paramount.
Human oversight. An appropriate involvement of human beings needs to be ensured to build the necessary trust in AI systems. Depending on the particular system and its use, human oversight could consist in ex ante validation, ex post review and/or constant monitoring and real-time intervention.
For so-called lower-risk AI applications (which would not be subject to the requirements set out above, but which would remain entirely subject to the existing EU rules), the Commission envisages a voluntary labelling scheme. Operators could choose to comply with the legal requirements set out for high-risk AI applications and thereby obtain a quality label for their AI applications that would enhance users’ trust.
Conclusion
The Commission has presented a strategy for developing an AI ecosystem capable of fostering innovation and growth, building on traditionally strong EU sectors, while preserving consumers’ and businesses’ fundamental rights, including privacy and personal data protection. Europe’s innovation capacity should be supported through investments in research and innovation, the development of skills and the uptake of AI by SMEs. Alongside (adjustments to) existing EU legislation, the Commission calls for a new specific regulatory framework capable of tackling the particular difficulties and risks that AI applications bring. The common, European dimension of the policy options discussed is paramount for removing barriers within Europe and ensuring Europe’s competitive position in the global market.
The Commission White Paper is open for public consultation until 19 May 2020. Based on the input received, the Commission will decide on what further action to take.
[1] See for example the Regulation on the free flow of non-personal data, the Cybersecurity Act, the Open Data Directive and the General Data Protection Regulation.
[2] This pilot investment fund is made available under the InvestEu guarantee (http://www.Europe.eu/investeu).
[3] Material risks can relate to the safety and health of individuals, including loss of life and damage to property, while immaterial risks cover loss of privacy, limitations on the right to freedom of expression, violations of human dignity, or discrimination.
[4] For example: an autonomous car that wrongly identifies an object on the road due to a flaw in the object recognition technology.
[5] The Race Equality Directive, the Directive on equal treatment in employment and occupation, the Directives on equal treatment between men and women in relation to employment and access to goods and services, consumer protection rules, rules on personal data protection and privacy (such as the General Data Protection Regulation) and other sectorial legislation.