The Commission White Paper observes that AI is developing fast and will improve our lives, but that its use carries potential risks linked to its complexity, opacity, unpredictability and partially autonomous behaviour. A common European approach to AI is indispensable to ensure a trustworthy framework of excellence for AI applications, based on and compliant with European values such as fundamental rights, consumer rights and data protection. To outline such a framework and strengthen a human-centric approach to AI that will encourage citizens, businesses (including small and medium-sized enterprises (“SMEs”)) and services of public interest to adopt AI-based solutions, the public and private sectors need to mobilise resources along the entire value chain and create the right incentives for all players.
Ecosystem of excellence
To ensure excellence in the development and uptake of AI, action is needed at multiple levels.
First, cooperation between the Member States is essential to maximise the impact of investment in research, innovation and deployment, pooling investment in areas where the action required goes beyond what any single Member State can achieve. For the same reason, strong public-private partnerships will be required. These common efforts should focus on the research and innovation community: creating more synergies between Europe’s many AI research centres and establishing a ‘lighthouse’ centre of research and expertise that can attract investment and talent from all over the world. In this context, developing the skills needed to work in AI, as well as increasing awareness of AI at all levels of education, will be a priority of the revised Coordinated Plan.
Second, to allow SMEs to access and use AI, which in turn will create jobs and spur growth across the EU, at least one innovation hub per Member State should be highly specialised in AI so that it can support SMEs in understanding and adopting AI. Moreover, SMEs and start-ups will need financing to set them on the path of AI-driven innovation: a €100 million pilot investment fund in AI and blockchain will be available to these enterprises.
Ecosystem of trust: regulatory framework
AI brings opportunities but also risks, both material and immaterial. Flaws in the overall design of AI systems, or the use of data without correcting for certain biases, may entail breaches of fundamental rights such as freedom of expression, non-discrimination, the protection of personal data and private life, and consumer protection. The White Paper notes the practical implications of such risks, for example state authorities using AI for mass surveillance or employers monitoring their employees’ behaviour. Beyond these risks to fundamental rights, the flaws embedded in AI technology noted above may also create new safety risks for users as AI is built into (new) products and services. Should these safety risks materialise, the characteristics of AI technologies can make it difficult to trace back potentially problematic decisions made by AI systems; this might in turn prevent people who suffer harm from obtaining proper compensation under liability legislation.
These risks reinforce the lack of trust in AI, which is probably one of the main factors holding back its development. While current EU legislation in principle applies to emerging AI applications, the Commission has found that their particularities might require: adjustments to specific legal instruments; the adoption of a new, clear European framework able to protect fundamental rights (including personal data and privacy protection and non-discrimination); and the tackling of safety- and liability-related issues, so as to provide legal certainty for businesses placing AI-based products on the market.
A risk-based approach
Regulatory intervention needs to be proportionate so as not to create excessive burdens, especially for SMEs. This is why the Commission proposes a risk-based approach that differentiates between AI applications. Only high-risk AI applications (e.g. in sectors such as healthcare, policing or transport) would have to comply with specific legal requirements such as transparency, traceability and the guarantee of human oversight; other applications that do not qualify as “high-risk” would not be subject to such requirements.
The Commission sets out a number of key features that could shape the legal requirements for high-risk AI applications: training data; data and record keeping; information to be provided; robustness and accuracy; and human oversight.
Training data and record keeping. AI systems are trained on specific data sets. To ensure that these systems respect the EU’s values and rules, they could be required to be trained on data sets that are sufficiently broad (to guarantee safety) and representative (to avoid prohibited discrimination). Moreover, it is important to keep records of the programming of the algorithms and of the data used to train high-risk AI systems, so that potential problems can be traced back and verified.
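To make the record-keeping idea concrete, the sketch below shows one hypothetical way an operator might log which data and which algorithm configuration produced a given model. This is purely illustrative and not drawn from the White Paper: the function name, fields and hashing choice are assumptions, and a real compliance record would be defined by the eventual legal requirements.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_training_record(dataset_rows, algorithm_name, parameters):
    """Build an audit record for a training run (illustrative only).

    The record captures a content hash of the training data plus the
    algorithm and parameters used, so that a problematic decision could
    later be traced back to the exact inputs of the training run.
    """
    # Hash the serialised dataset: this proves which data was used
    # without storing the (possibly personal) data in the record itself.
    data_digest = hashlib.sha256(
        json.dumps(dataset_rows, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "algorithm": algorithm_name,
        "parameters": parameters,
        "dataset_sha256": data_digest,
        "dataset_size": len(dataset_rows),
    }

# Hypothetical training run for a simple classifier.
record = make_training_record(
    dataset_rows=[{"age": 34, "outcome": 1}, {"age": 51, "outcome": 0}],
    algorithm_name="logistic_regression",
    parameters={"learning_rate": 0.01, "epochs": 20},
)
print(record["dataset_sha256"][:8], record["dataset_size"])
```

Because the hash is computed over a canonical serialisation of the data, re-running the function on the same data set yields the same digest, which is what allows an auditor to verify after the fact that the recorded data matches what was actually used.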
Transparency. Clear information should be provided proactively about the AI system’s capabilities, limitations and intended purpose. Citizens should also be clearly informed when they are interacting with an AI system rather than a human being.
Robustness and accuracy. To be trustworthy, AI systems need to be developed and to function reliably. Requirements ensuring that outcomes are reproducible, that systems can adequately deal with errors or inconsistencies, and that they are resilient against attacks and manipulation are therefore paramount.
Human oversight. An appropriate involvement of human beings needs to be ensured to build the necessary trust in AI systems. Depending on the particular system and its use, human oversight could consist of ex ante validation, ex post review and/or constant monitoring with real-time intervention.
For so-called lower-risk AI applications (which would not be subject to the requirements set out above but would remain fully subject to existing EU rules), the Commission envisages a voluntary labelling scheme. Operators could choose, on a voluntary basis, to comply with the legal requirements set out for high-risk AI applications and thereby obtain a quality label for their AI applications that would enhance users’ trust.
The Commission has presented a strategy for developing an AI ecosystem capable of fostering innovation and growth, building on traditionally strong EU sectors while preserving consumers’ and businesses’ fundamental rights, including privacy and the protection of personal data. Europe’s innovation capacity should be supported through investment in research and innovation, the development of skills and SMEs’ uptake of AI. Alongside (adjustments to) existing EU legislation, the Commission calls for a new, specific regulatory framework capable of tackling the particular difficulties and risks that AI applications bring. The common, European dimension of the policy options discussed is paramount for removing barriers within Europe and securing Europe’s competitive position in the global market.
The Commission White Paper is open for public consultation until 19 May 2020. Based on the input received, the Commission will decide on what further action to take.
 Material risks can relate to the safety and health of individuals, including loss of life and damage to property, while immaterial risks cover loss of privacy, limitations on the right to freedom of expression, harm to human dignity, and discrimination.
 For example: an autonomous car that wrongly identifies an object on the road due to a flaw in the object recognition technology.
 The Race Equality Directive, the Directive on equal treatment in employment and occupation, the Directives on equal treatment between men and women in relation to employment and access to goods and services, consumer protection rules, rules on personal data protection and privacy (such as the General Data Protection Regulation) and other sector-specific legislation.