By Paal-André Storesund and Øyvind Eidissen
The European Commission's proposal for the Artificial Intelligence Act was published on 21 April 2021 and is now being debated and refined by the European Parliament and the Council. On 5 September 2022, the European Parliament's Committee on Legal Affairs (JURI) adopted its opinion on the Artificial Intelligence Act, as the last committee in the Parliament to do so.
The proposed Act takes a risk-based approach to the use of AI, applying different levels of requirements based on the risk associated with the specific use of the AI system in question.
When assessing the risk of an AI system, the main criterion is whether the system poses a risk to the health, safety or fundamental rights of individuals. Respect for private life and protection of personal data, non-discrimination, and equality between women and men are just some examples of the fundamental rights which may be affected in the absence of thorough regulation. The proposal therefore distinguishes three categories of AI systems:
- Prohibited AI practices
- AI systems which involve a high risk
- AI systems which involve a limited or minimal risk
Title II (Article 5) of the proposed Act lists certain uses of AI systems which are not allowed at all because they pose a substantial threat to the EU's values and fundamental rights. This includes practices with a significant potential to manipulate a person through subliminal techniques beyond their consciousness, or to exploit specific vulnerable groups such as children or persons with disabilities, in order to materially distort their behaviour in a manner that is likely to cause them, or another person, psychological or physical harm.
The proposed Act also forbids public authorities from using AI systems for general-purpose social scoring. The rationale for this prohibition is that social scoring may lead to unjustified or disproportionate treatment of individuals based on their social behaviour or its gravity, or based on data gathered in a context unrelated to the context in which it is assessed.
Lastly, as a main rule, the proposed Act forbids the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. Such use of AI systems is considered proportionate only in a few specific situations, such as certain serious crime investigations.
AI systems which constitute a high risk are allowed under the proposed Act but are strictly regulated. The classification as a high-risk AI system is based on the AI system's intended purpose, in line with existing product safety legislation. As a result, it is not only the function performed by the AI system that classifies the system as high-risk; the specific purpose and modalities for which that system is used will also be relevant.
There are certain legal requirements which apply to high-risk AI systems which are to be put on the market or put into service. There must be:
- a risk management system
- a data governance system ensuring the quality of the data used to train the system's models
- technical documentation affirming that the AI system complies with the proposed Act's requirements
- logging capabilities and record-keeping
- transparency and provision of information to users
- a design including appropriate human-machine interface tools, ensuring that the system can effectively be overseen by a person during the period in which the AI system is in use
- an appropriate level of accuracy, robustness and cybersecurity in the light of the system's intended purpose
Further, the proposed Act does not only set requirements for the AI system itself. It also imposes several obligations on, inter alia, importers, distributors and authorised representatives of high-risk AI systems.
AI systems which involve a limited or minimal risk are those intended to interact with humans, emotion recognition or biometric categorisation systems, and AI systems that generate deepfakes. These will also be subject to certain legal requirements. As a main rule, they must be transparent to the human interacting with, or being analysed by, the AI system.
This means that unless it is obvious based on the context, the proposed Act requires that the AI system intended to interact with humans is designed and developed in such a way that humans interacting with the system are informed that the system is based on AI.
The same requirement applies to emotion recognition systems and biometric categorisation systems. However, there is an exception for AI systems used for biometric categorisation which are permitted by law to detect, prevent and investigate criminal offences.
As to the last category, deepfakes, users of AI systems that generate deepfakes must inform the recipient that the content has been artificially generated or manipulated. But as with biometric categorisation systems, there is also an exception for this type of technology: deepfakes can be used without informing the recipient if the use is authorised by law to detect, prevent, investigate and prosecute criminal offences, or is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences. A condition for applying the exception is that there are appropriate safeguards for the rights and freedoms of third parties.
Even though the proposed Artificial Intelligence Act is still being developed, there are good reasons for companies that use, or intend to use, this technology to prepare for the changes under the proposed Act now. In its initial position paper, the Norwegian government reacted positively to the proposed Act and deemed the regulation relevant for the EEA, even though it is at an early stage.
We are already assisting many companies in the preparations for the proposed Act. If you would like a more thorough introduction to the proposed Act and how it may affect your business, please don't hesitate to contact us.