
New Working Paper ‘Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?’

Posted on 29 May 2024 by Gabriele Cifrodelli (updated 21 June 2024)

CREATe is happy to present the fifth entry in our series of working papers released in 2024: ‘Private Ordering and Generative AI: What Can we Learn From Model Terms and Conditions?’, by Lilian Edwards, Igor Szpotakowski, Gabriele Cifrodelli, Joséphine Sangaré and James Stewart. This paper is forthcoming in the Cambridge Research Handbook on Generative AI and the Law (CUP 2024).


Generative AI or “foundation models” have moved with remarkable speed from proof of concept to massive consumer and industry adoption. Large models, generating not just text and image but also video, games, music and code, aspire to revolutionise innovation and ‘democratise’ creativity. However, all is not rosy: the literature already emphasises that foundation models may create serious societal risks, including embedding and outputting bias; generating fake news, illegal or harmful content and inadvertent “hallucinations”; infringing existing laws relating, e.g., to copyright and privacy; as well as raising environmental and workplace concerns. Most developed nations are now considering regulation to address these worries, whether via mandatory comprehensive legislation (e.g. the EU AI Act); adapting existing law (see the many copyright lawsuits underway); or via “soft law” such as codes of ethics, “blueprints”, or industry guidelines.

What has received less attention is self-regulation by model providers via their own choice of terms and conditions of use, user contracts or licenses. These are interesting not just for transparency in proprietary models, but for what they reveal about compliance with top-down law, enforcement and business models. As legislative processes move slowly, they may be all we have for some time. Social media platform terms have been extensively studied, but almost no work has yet been done on this for foundation models. Our UKRI-funded team ran pilot empirical work in January-March 2023 mapping terms of service across a representative sample of generative AI providers and downstream deployers, split across modes (text, image, etc.), size, and countries of origin, and focusing on copyright, privacy and consumer protection. We are now presenting our preliminary findings, notably the emergence of a “platformisation paradigm”, in which providers attempt to position themselves as neutral intermediaries, similar to search and social media platforms, but without the governance increasingly imposed on those actors.

Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?

Lilian Edwards, Igor Szpotakowski, Gabriele Cifrodelli, Joséphine Sangaré, James Stewart

CREATe Working Paper 2024/05

Abstract

Large or “foundation” models, sometimes also described as General Purpose Artificial Intelligence (GPAI), are now being widely used to generate not just text and images but also video, games, music and code from prompts or other inputs. Although this “generative AI” revolution is clearly driving new opportunities for innovation and creativity, it is also enabling easy and rapid dissemination of harmful speech such as deepfakes, hate speech and disinformation, as well as potentially infringing existing laws such as copyright and privacy. Much attention has been paid recently to how we can draft bespoke legislation to control these risks and harms, notably in the EU, US and China, as well as to how existing laws can be tweaked or supplemented. However, private ordering by generative AI providers, via user contracts, licenses, privacy policies and fuzzier materials such as acceptable use guidelines or “principles”, has so far attracted less attention. Yet across the globe, and pending the coming into force of new rules in a number of countries, T&C may be the most pertinent form of governance out there.

Drawing on the extensive history of study of the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&C were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc.), of small and large sizes, and of varying countries of origin. Although the study looked at terms relating to a wide range of issues including content restrictions and moderation, dispute resolution and consumer liability, the focus here is on copyright and data protection. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, similar to search and social media platforms, but without the governance increasingly imposed on those actors, and in contradistinction to their function as content generators rather than mere hosts for third-party content. This study concludes that, in light of these findings, new laws being drafted to rein in the power of “big tech” must be reconsidered carefully, if the imbalance of power between users and platforms in the social media era, only now being combatted, is not to be repeated via the private ordering of the providers of generative AI.

The full paper can be downloaded here.