William Eitrem
Associate
Oslo
In the rapidly evolving landscape of Artificial Intelligence ("AI"), deep fakes have emerged as a particularly contentious and challenging issue. Deep fakes, which involve the use of AI to create hyper-realistic but entirely fabricated images, videos, or audio recordings, have the potential to revolutionise various sectors, from entertainment to education. However, their capacity for misuse, ranging from misinformation and fraud to privacy violations and defamation, has prompted urgent calls for regulatory frameworks to address these risks. The threat is especially apparent in the political sphere, where attempts have already been made to portray political opponents saying things they never said: if one can realistically portray a state leader declaring war on another state, the consequences may be dire.
The European Union's AI regulation (the "AI Act") represents one of the most comprehensive attempts to regulate AI technologies, including deep fakes. The legislation aims to balance the promotion of innovation with the need to protect fundamental rights and societal values. This introductory Article focuses on the specific provisions of the AI Act that pertain to deep fakes, examining how the legislation defines, categorises, and seeks to regulate this technology. We will provide a brief analysis of how the AI Act defines and uses the term "deep fake", and of the specific rights and obligations that attach to this concept.
The AI Act defines a "deep fake" as an "AI‑generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful" (emphasis added, Article 3 (60)).
Further, Recital 134, the only Recital in the Act which explicitly mentions deep fakes, begins as follows (emphasis added):
"Further to the technical solutions employed by the providers of the AI system, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (deep fakes), should also clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the AI output accordingly and disclosing its artificial origin."
Both the Article and the Recital use the phrasing "resembles existing" before persons, objects, etc., which indicates that content must resemble something that exists in order to constitute a deep fake. On one reading, the definition requires that content resemble a specific existing person, object, etc. to qualify as a deep fake. This interpretation is in line with many people's perception of what a deep fake is: the manipulation of real-world people, locations, and other objects in order to create the impression that something false is true.
Under this interpretation, however, the lines may be hard to draw: given the essentially unlimited number of humans, locations, objects, etc. that generated content may resemble, any similarity between an artificially generated person and a real individual may be largely coincidental.
A clearer approach, if the intention is indeed to encompass systems which create realistic content that is hard to distinguish from real content more generally, would have been to phrase the qualifying norm so that a deep fake is anything resembling "natural" persons, objects, etc. which would falsely appear to a person to be authentic. The fact that this obvious drafting approach was not taken further supports the interpretation above: to qualify as a deep fake, content must resemble specific, existing persons or objects. The definition nevertheless remains vague and will hopefully be clarified in the future.
The only substantive rule in the Act which includes the term "deep fake" is Article 50, titled "Transparency obligations for providers and deployers of certain AI systems". This Article addresses the transparency obligations for certain AI systems, ensuring that users are adequately informed about the nature and functioning of these technologies. In the risk hierarchy of the AI Act, Article 50 applies across the categories of systems that interact directly with natural persons, while other Articles impose stricter rules and requirements on high-risk systems. Accordingly, Article 50 may chiefly target so-called "limited risk" AI systems which interact with people.
It is Article 50 (4) that references deep fakes. The first sentence stipulates that "Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated." In other words, deployers of such systems must disclose the artificial origin of the content. This disclosure requirement appears relatively lenient, given the damaging potential of deep fakes.
However, as briefly touched upon above, other parts of the AI Act may regulate a deep fake generating AI system with damaging potential more harshly, as its production of damaging deep fakes may shift the system's risk profile from limited to the high-risk or even prohibited categories (Article 5 (1)(b) and Article 6 (1)).
In summary, considering the AI Act's definition of deep fakes and the material restrictions and disclosure requirements which apply to content within that definition, the most important aspect appears to be the risk profile of the AI system, rather than whether a given output constitutes a deep fake or not.