
A Statement by The Readers Project


…concerning contemporary literary practice, digital mediation, intellectual property, and associated moral rights

John Cayley & Daniel C. Howe


The Readers Project is an aesthetically oriented system of software entities designed to explore the culture of human reading. The project’s research agenda sheds light on a range of institutional practices and provides critical perspectives on what it means to engage with the literary in digital media, and—more pertinent to this context—with the linguistic commons as it is aggregated and enclosed by proprietary network services and corporate big software.

The Readers Project contends that the existing custom and law of intellectual property is unable to comprehend or regulate a significant proportion, if not the majority, of contemporary literary aesthetic practices. As such, it is irremediably flawed. Specifically, we claim that processes implemented in The Project, with the intention of generating both aesthetic and critical outcomes, demonstrate how literary practices have been so altered by digital affordances and mediation that the fundamental expectations of human writers and readers are changed beyond easy recognition, and beyond the scope of existing legal frameworks. Moreover, practices of reading and writing are now inextricably intertwined with their network mediation—the Internet and its services—and questions surrounding copyright and intellectual property have shifted from who creates and owns what, to who controls the most privileged and profitable tools for creation and dissemination.

Network services have arisen that allow practices of reading and writing to be automatically and algorithmically captured, processed, indexed, and otherwise co-opted for the commercially motivated creation and maintenance of vectors of transaction (commerce) and attention (advertising). These vectors are themselves, as the results of indexing, processing, and analysis, fed back to human readers and writers, profoundly affecting, in turn, their subsequent practices of reading and writing. In cyclical fashion, reading and writing are fed back into the continually refined black boxes of proprietary, corporate-controlled, algorithmic process: the big software of capture, analysis, indexing, and so on. This is the grand feedback loop of ‘big data,’ encompassing and enclosing the commonwealth of linguistic practice. As a function of proprietary control and neoliberal predominance, this system is regulated by little more than considerations of the marginal profit that service providers can derive.

Awestruck by the novel power of internet services and in the belief that we might all benefit, human readers and writers have willingly thrown themselves into this artifactual cultural vortex. Is it too late now to reconsider, to endeavor to radically change both the new and traditional institutions that allowed us entry? Why might it be important that we do so? Because inequalities in the distribution of power over the vectors of transaction and attention—commercial, but especially cultural—are simply too great. This power was acquired far too quickly by naive and untried corporate entities that remain largely unregulated, though now far less naive. This power is consolidated through agreements—literal, habitual, and all-but-unconsidered—with ‘users’ who enter into them as ‘terms of service’ that are not mutual, but instead serve only to reinforce and increase the disparities between ‘server’ and ‘client.’[1] Huge marginal profits allow the new corporations to acquire, on a grand scale, the estates of conventionally licensed intellectual property along with the interest and means to conserve them, via both legal and technical mechanisms. In a particularly vicious aspect of this cycle of power aggregation, these same mechanisms remain wholly inadequate to the task of regulating the culture and commerce of networks, clouds, and big data: the very culture and commerce which grants big software its profits.

We conclude with the following description of recent outcomes from The Readers Project that exemplify its engagement with these issues. The installation Common Tongues and the artist book How It Is in Common Tongues both make transgressive use of networked search services in order to produce literary aesthetic works in the domains of conceptual and computational literature.[2] These works read and reframe Samuel Beckett’s late novella, How It Is. The project’s software entities read through the work, seeking out and resolving its text into sequences that we call Longest Common Phrases (or LCPs). Each LCP is the longest sequence of words, beginning from a specific point in the text, that can be found on the Internet and that was not written by its author. We use automated Internet search to locate these phrases in other contexts, proving their continued circulation in the commons of language, unfettered by any liens of association or integrity. We then cite the web occurrences of these LCPs in How It Is in Common Tongues, a book released by the project’s artists. In fact, we resolve the entire text of How It Is into common phrases as inscribed by thousands of other English-language users. By doing so, we produce both an elegant aesthetic object and a text that reads quite differently from the original. It is constantly interrupted by reference: distracting invitations to turn to other networked writings. As such, its punctuation is entirely novel and calls attention to alternate phrasings that generate strange, new, and differently engaging prose rhythms. It is a conceptual work: a new, distinct instance of digital language art. Nonetheless, it is also exhaustively associated with the Beckett text. Moreover, its punctuation and annotation have been produced by algorithmic processes that transact with Internet search services in deliberate contravention of their terms of use (terms that, amongst other things, unfairly seek to deny any client use of bespoke software agencies).
We argue that the transgressive—and controversial—practices by which we have created this work are now vital and necessary additions to the existing repertoire of literary aesthetic practices. That they should be denied or contradicted by inadequate custom and law is an unfortunate circumstance that should not be maintained or supported.
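The greedy resolution of a text into Longest Common Phrases, as described above, can be sketched in a few lines of Python. The project itself transacts with live Internet search services; in this minimal sketch, a simple substring test over a mock ‘web’ text stands in for such a service, and all function names (`longest_common_phrase`, `resolve_into_lcps`) are our illustrative inventions, not the project’s actual code.

```python
def longest_common_phrase(tokens, start, is_found, max_len=20):
    """Return the longest phrase beginning at `start` that the search
    oracle `is_found` reports as occurring elsewhere, or None."""
    best = None
    for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
        phrase = " ".join(tokens[start:end])
        if is_found(phrase):
            best = phrase
        else:
            # For substring-style search, if this phrase is absent,
            # no longer phrase extending it can be present: stop here.
            break
    return best

def resolve_into_lcps(text, is_found):
    """Resolve an entire text into consecutive LCPs, advancing past
    each phrase as it is found (one simple segmentation policy)."""
    tokens = text.split()
    lcps, i = [], 0
    while i < len(tokens):
        lcp = longest_common_phrase(tokens, i, is_found)
        if lcp is None:  # no match at all: emit the bare token
            lcp = tokens[i]
        lcps.append(lcp)
        i += len(lcp.split())
    return lcps
```

With a mock oracle such as `lambda p: p in "it was the best of times"`, the text `"it was the very worst"` resolves into the phrases `["it was the", "very", "worst"]`: the first three words circulate in the mock commons, while the remainder does not.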

In installations of Common Tongues, LCPs from a section of How It Is are used to discover particular human-selected contexts for these phrases, written neither by Beckett nor by the project’s artists. These selections are hand-stitched together, maintaining syntactic regularity, and a new text is formed, one for which algorithmic processes guarantee that none of the constituent language is authored by the text’s makers. This text has its own significance and affect, though generated in regular relation with Beckett and with hundreds of other writers. It could not have been made without digital affordances, that is, without algorithms. It could not have been made without the writing of many others. As a creative work, it runs counter to the customs and laws of intellectual property and defies conventions of authorship, but it is clearly a critical and aesthetic response to the evolving circumstances of linguistic and literary practice. This is, we believe, another way of saying that it has value. But how can we recognize and preserve this value? How will we protect it from the aggressive cultural vectors that threaten it?


[1] See Cayley, John. ‘Terms of Reference & Vectoralist Transgressions: Situating Certain Literary Transactions over Networked Services.’ Amodern 2 (2013):

[2] Please refer to the Project’s website, and, for further linked documentation of these works, to the ELMCIP Knowledge Base at (Common Tongues) and (How It Is in Common Tongues). For conceptual literature see, inter alia: Goldsmith, Kenneth. Uncreative Writing: Managing Language in the Digital Age. New York: Columbia University Press, 2011; Dworkin, Craig Douglas, and Kenneth Goldsmith, eds. Against Expression: An Anthology of Conceptual Writing. Evanston, Illinois: Northwestern University Press, 2011; and Place, Vanessa, and Robert Fitterman. Notes on Conceptualisms. Brooklyn: Ugly Duckling Presse, 2009. Computational literature does not yet have such readily identifiable apologia. However, Nick Montfort is a major exponent, and the work of Noah Wardrip-Fruin, both aesthetic and theoretical, is highly relevant and significant.







John Cayley makes language art using programmable media. Recent work has explored aestheticized vectors of reading (with Daniel C. Howe) and ‘writing to be found’ within or against the so-called services of Big Software. In future work he aims to write for a readership that is as much aural as visual. Cayley is Professor of Literary Arts at Brown University.

Daniel C. Howe writes natural and unnatural language as a means for exploring the social and political implications of networked technology, specifically concerning privacy, surveillance, and human rights. He divides his time between New York and Hong Kong, where he teaches at the School of Creative Media.