This blog post is the first in a series of posts from researchers working at the ARC as part of the CREATe academic visitors initiative, which runs concurrently with the ‘Policy Futures for the Digital Creative Economy’ series in Spring 2023, a key element of the CREATe@10 programme.
First up is a contribution from Anthony Rosborough (EUI). Anthony’s work examines relationships between IP laws, embedded computer systems, Human-Computer Interaction, and personal property rights. His doctoral research investigates the design and social impacts of software technological protection measures as a form of private regulation. This post focuses on the notion of ‘technological neutrality’ in respect of copyright law.

Photo by Jeremy Bishop on Unsplash
Introduction
On October 19, 1985, American historian and author Melvin Kranzberg delivered his presidential address to the Society for the History of Technology, in which he articulated his “six laws of technology”. These are not laws in the classical or jurisprudential sense, but rather a list of observations framed by analogy to the laws of physics. Of the six, Kranzberg’s first law has become emblematic of his thinking over the past several decades:
“Technology is neither good nor bad; nor is it neutral”.
The statement recognises the complex nature of technology and its impact on society. It addresses the fact that, while technology is not inherently infused with moral value, its effects and consequences can result in normative outcomes. This idea has encouraged much scholarly and public commentary in the years since, and much can be unpacked from it. Does he mean to imply that technology is deterministic in its function, or is his view that it merely plays an instrumental role toward the achievement of otherwise value-laden social and political aims? Are we only ever able to measure the morality of technologies in terms of the externalities they produce, or are there circumstances where they also embody certain values?
At a basic level, we can discern from Kranzberg’s first law that technology cannot be divorced from its social, political, and economic contexts. It is instrumental in enabling moral choices while also being capable of creating, shaping, and influencing society in new ways. How might we build upon Kranzberg’s idea – particularly in relation to today’s persuasive technologies, algorithmic systems, and systems of exclusivity – in the context of copyright law? The answer requires that we first unpack the longstanding principle of technological neutrality.
The Principle of Technological Neutrality
Though some scholars have contended that the principle of technological neutrality has no unifying or authoritative articulation, the classic view is that copyright must remain agnostic to all technologies and devices, regardless of their features or capabilities. In 1999, the European Commission defined its technologically neutral stance toward copyright policy as “legislation [that] should define the objectives to be achieved, and should neither impose, nor discriminate, in favor of, the use of a particular type of technology to achieve those objectives.”
The principle has since been interpreted and adopted around the world. In 1997, it was recognised in the United States as an orientation for intellectual property policy that should “neither require nor assume [the use of] a particular technology.” It has also become recognised in decisions of the European Court of Justice, with adoption in UsedSoft v Oracle (2012) in relation to software distribution (finding that the right of distribution applies equally whether copies of the protected work are tangible or intangible). The Court has reached similar conclusions in relation to other economic rights under copyright, including communication to the public in Filmnet v Telenor (1999), and the reproduction right in Infopaq International A/S v Danske Dagblades Forening (2009). And though it is possible to point to many counterexamples where copyright law has taken a backseat to emerging technologies, lip service to the principle of technological neutrality persists.
A charitable understanding of the classical approach to technological neutrality is that it promises consistency in policymaking and ensures that the law remains conceptually moored and aligned with larger doctrinal principles. But neutrality and consistency in policymaking can result in biased outcomes. Indeed, the creation of “neutral” public policy has long been criticised by scholars in human rights fields for its tendency to disproportionately advantage or disadvantage certain groups. When we treat two subjects with distinct social and political considerations identically, it is usually not long before we discover that some injustice follows.
But a less charitable view of the classical approach to technological neutrality is that it provides policymakers with a justification for failing to fully appreciate and understand technological progress and its implications. Under this view, it serves as an overly malleable justification for otherwise arbitrary legal and policy decisions. The result is an excuse to ignore the numerous social, political, and economic impacts of emerging technologies. In other words, the principle can easily become an overly simplistic test for measuring the suitability of copyright law in new technological contexts by assuming that the preservation of theory will always result in the preservation of practical outcomes.
For example, copyright law’s equal treatment of temporarily cached files on a computer (so-called “temporary acts of reproduction”) and of photocopying pages of a tangible book means that quite different technological activities receive the same legal analysis and treatment. The social, political, and economic impacts of restricting text and data mining are quite different from those which (for example) limit the number of pages one may photocopy from a book under fair dealing rules. Reading a book is not at all like mining data or web scraping, and the social and political impacts of placing limitations on these activities are hardly the same.
Persuasive Technologies
This gap left by the classical view of the technological neutrality doctrine has been exacerbated in recent years. The lack of neutrality identified by Kranzberg has become increasingly prominent. Today’s technologies more overtly embody inherent or embedded values, independent of their effects and consequences. Indeed, over the past decade, we have witnessed the rise of persuasive technologies, with research and design being devoted to the creation of technologies which intentionally influence the behaviour and decisions of users. Algorithmic systems, large language models, and the systems of exclusivity and rights protections that privatise them, are the most obvious examples. The burgeoning field of Human-Computer Interaction within Computer Science, and Stanford University’s Behavior Design Lab, also complement these innovations. And unsurprisingly, we have seen a meteoric rise in academic and public policy momentum toward better understanding the ethics of information technologies, and how these systems can be used to improve society and remedy inequality.
If there were a time when copyright law could apply equally to distinct technologies, irrespective of the methods by which works are created, accessed, or disseminated, those days are almost certainly behind us. With the advent of “digital copyright”, 21st century copyright law has become predominantly a regime for regulating the use and capabilities of technologies rather than works. The question then remains: what to do with copyright’s principle of technological neutrality?
Substantive Technological Neutrality
As put forward by Professor Carys J Craig in a recent article, the future of technological neutrality in copyright law may require looking beyond the effects of technologies on the balance of interests and rights, and reaching further into the broader social, political, and economic effects of technologies themselves. This substantive principle of technological neutrality focuses on what Craig refers to as “normative equilibrium in the face of technological change”. In other words, the contemporary challenge for copyright law is to respond to – and attempt to regulate – the impacts of new technologies on society in order to preserve the public interest.
On a more concrete level, adopting a substantive principle of technological neutrality may provide more guidance in cases involving the appropriation of artistic and literary works for use as training data. This is precisely the matter at issue in a recent class-action complaint against Microsoft, GitHub, and OpenAI. The three companies jointly built the AI-powered Copilot tool, which assists developers by suggesting code. Late last year, programmer and lawyer Matthew Butterick joined forces with a law firm to commence a class action lawsuit against the three companies, claiming that the Copilot tool violated the open-source licences under which the code used to train Copilot was shared. Among other requirements, the MIT, GPL, and Apache licences at issue require attribution. In response, Microsoft and GitHub have contended that no violation of open-source licences took place, and that the training data used for Copilot was based exclusively on publicly available and licence-free code. The three defendants have since filed a motion to dismiss the lawsuit.
A substantive view of technological neutrality would assist in cases like these by providing the doctrinal leeway to rethink the traditional concepts of reproduction or attribution. In the long run, this could help ensure that emerging technologies like Copilot are regulated in a way that preserves the core normative aims of the copyright system, even if the rules that apply to these technologies are not the same as those which apply to other technologies.
Substantive technological neutrality may also assist in addressing technological protection measure (TPM) overreach in embedded computer systems, which is creating impediments for a whole host of socially beneficial activities. Widespread software integration in virtually every device and object around us now infuses our built environments with copyright governance enabled through software protections. Under the auspices of preventing infringement, device manufacturers can receive legal protection for using software to lock these products and devices down – beyond the scope of public understanding or scrutiny. Much like AI systems, these technologies can shape behaviour over time, pre-determine the legality and morality of conduct, and influence political expression and democratic deliberation.
Finding a resolution to this tension between intellectual property rights and personal property use may also require a more responsive and normative view of technological neutrality. Indeed, there is a normative distinction between TPMs which are used to safeguard digital content and those which serve to prevent tangible interaction with, or use of, physical devices and products. Though the classical approach to technological neutrality may not permit such a distinction, it is possible that a normative or substantive view of the principle could.
To sum up, the technologies regulated by copyright are no longer merely instruments or devices that may facilitate copying or infringement – they increasingly shape the parameters of political, social, and economic consensus. A computer is not only a machine that can reproduce or communicate copyright works; its design and function also shape democratic deliberation. Like legendary folk musician Woody Guthrie’s “fascist killing” acoustic guitar, the social and political discourse of today is transmitted through code. Therefore, the choices behind the design, implementation, and uses of these technologies require public involvement and regulation. To the extent that copyright law safeguards these technological choices through privatisation and exclusivity, we need to reimagine what “technological neutrality” means and how it can or should guide copyright law and policy.

Al Aumuller/New York World-Telegram and the Sun – This image is available from the United States Library of Congress’s Prints and Photographs division under the digital ID cph.3c30859.