Generative AI - Some Legal Considerations

The introduction of OpenAI’s disruptive large language model ChatGPT brought generative AI services into the spotlight, making generative AI one of the top buzzwords of 2023. Over the past year, various content-creating AI-driven tools have emerged on the market, and an increasing number of companies are evaluating the possibilities of implementing generative AI in their business. As is often the case with exciting new technologies, they tend to raise questions about how current legal frameworks apply, and generative AI is no exception. In this article, we aim to provide a brief overview of some key legal considerations relating to generative AI.

Copyright – main concerns

Generative AI relies on access to vast amounts of data throughout its lifecycle (training, testing, and ongoing improvement) in order to be useful and produce high-quality content. One way to give AI access to large data sets is to employ information-gathering techniques such as web scraping and data mining, where data is extracted from online sources. At this point, it has probably occurred to you that some of that data may contain copyrighted material, e.g., images, texts, or code. The use of copyrighted material without a license from the rightsholder may constitute an unlawful reproduction. Moreover, there is a risk that AI-generated output closely resembles existing copyrighted works, potentially leading to claims of copyright infringement. Nevertheless, these issues remain controversial and have not yet been finally settled by a court in Sweden.

There are several ongoing American lawsuits highlighting the debate over AI's use of copyrighted materials, where the plaintiffs allege that the AI “scrapes” and is trained on copyrighted works, which they argue constitutes copyright infringement (e.g., Andersen et al. v. Stability AI). Additionally, discussions have arisen as to whether the use of content to train generative AI could rely on the so-called text and data mining exceptions in the DSM Directive. However, these exceptions are subject to specific conditions (lawful access and the possibility for rightsholders to “opt out”), which makes it difficult for AI developers to fully rely on them. The UK is currently discussing the adoption of an explicit exception for AI training data, and Japan has already implemented broader exceptions covering AI training data. Now, let us see how the EU will deal with this going forward.

When generative AI creates something - the output - one might ask whether this creation could be subject to copyright protection. While the criteria for copyright protection vary by jurisdiction, several countries, like Sweden, require the work to be created by an individual. However, when the AI is merely a tool used by an individual in the creative process, the work may still be protected. From an American perspective, by contrast, a recent court decision concluded that works generated by AI are not eligible for copyright protection. It has been discussed whether a new category of copyright exclusively for AI-generated works should be introduced; however, this raises questions about the general purpose of copyright law and what is actually worth protecting.

As for the upcoming EU AI Act in relation to copyright, the current draft proposal imposes transparency obligations on providers of generative AI, including a requirement to disclose a “sufficiently detailed summary” of the copyrighted data used for training. What this means in practice, only time will tell.

A data protection perspective

As previously mentioned, generative AI systems handle vast amounts of data, which may include personal data. Where personal data is processed as part of the input, the consequence from a data protection perspective is clear: the GDPR will apply. Examples of personal data-related issues in the context of generative AI include justifying the processing of a potentially huge amount of personal data (finding a lawful ground for training, testing, and development), the lack of transparency and information about the personal data used, facilitating access to, deletion of, or correction of personal data already used, and addressing the risk that the AI reproduces personal data in its output. All in all, there are several tricky legal challenges to deal with to ensure that generative AI complies with data protection legislation. This is illustrated by the increasing number of actions taken by data protection authorities, for example the Italian DPA’s temporary suspension of OpenAI’s processing of personal data relating to Italian users. Other examples from the past year include the Brazilian DPA’s announcement that it has opened an investigation into whether emerging generative AI complies with the Lei Geral de Proteção de Dados, and the South Korean Personal Information Protection Commission’s fine against OpenAI for non-compliance with its legislation. Here in the EU, the EDPB has launched a task force specifically dedicated to OpenAI’s ChatGPT service.

Who is liable?

Another significant legal consideration is the question of liability. Imagine that a generative AI provides information that is inaccurate or unreliable, non-compliant, or possibly even offensive or harmful. Who will be liable for any damage or loss suffered as a consequence of the AI’s output? Could it be the AI model developer, the entity that operates and provides the AI, the user who relied on the information, or maybe even the AI itself? At present, significant legal uncertainty surrounds this issue. As a general practice, third-party providers often disclaim any liability resulting from the use of the AI and its output. The issue of liability is illustrated by an ongoing court case in Georgia in the U.S., Walters v. OpenAI, in which Walters alleges that the AI provided false information about him, implying that he was involved in embezzling funds from a foundation he was falsely connected to, and that he is therefore entitled to damages from OpenAI.

Apart from the above, the EU AI Act is on the horizon. The AI Act is currently a draft proposal, but if adopted, it will comprehensively regulate AI technology in general and will give rise to several further considerations in relation to generative AI.

If you need any guidance or strategic advice on generative AI, the upcoming AI Act, or any other emerging technologies from a legal perspective, you are always welcome to reach out to us at Synch, experts in the field. We are happy to assist!

Written by Hugo Snöbohm Hartzell
