…concerning contemporary literary practice, digital mediation, intellectual property, and associated moral rights
John Cayley & Daniel C. Howe
The Readers Project is an aesthetically oriented system of software entities designed to explore the culture of human reading. The project’s research agenda sheds light on a range of institutional practices and provides critical perspectives on what it means to engage with the literary in digital media, and—more pertinent to this context—with the linguistic commons as it is aggregated and enclosed by proprietary network services and corporate big software.
The Readers Project contends that the existing custom and law of intellectual property is unable to comprehend or regulate a significant proportion, if not the majority, of contemporary literary aesthetic practices. As such, it is irremediably flawed. Specifically, we claim that processes implemented in The Project, with the intention of generating both aesthetic and critical outcomes, demonstrate how literary practices have been so altered by digital affordances and mediation that the fundamental expectations of human writers and readers are changed beyond easy recognition, and beyond the scope of existing legal frameworks. Moreover, practices of reading and writing are now inextricably intertwined with their network mediation—the Internet and its services—and questions surrounding copyright and intellectual property have shifted from who creates and owns what, to who controls the most privileged and profitable tools for creation and dissemination.
Network services have arisen that allow practices of reading and writing to be automatically and algorithmically captured, processed, indexed, and otherwise co-opted for the commercially motivated creation and maintenance of vectors of transaction (commerce) and attention (advertising). These vectors are themselves, as the results of indexing, processing, and analysis, fed back to human readers and writers, profoundly affecting, in turn, their subsequent practices of reading and writing. In cyclical fashion, reading and writing are fed back into the continually refined black boxes of proprietary, corporate-controlled, algorithmic process: the big software of capture, analysis, indexing, and so on. This is the grand feedback loop of ‘big data,’ encompassing and enclosing the commonwealth of linguistic practice. As a function of proprietary control and neoliberal predominance, this system is regulated by little more than considerations of the marginal profit that service providers can derive.
Awestruck by the novel power of internet services and in the belief that we might all benefit, human readers and writers have willingly thrown themselves into this artifactual cultural vortex. Is it too late now to reconsider, to endeavor to radically change both the new and traditional institutions that allowed us entry? Why might it be important that we do so? Because inequalities in the distribution of power over the vectors of transaction and attention—commercial, but especially cultural—are simply too great. This power was acquired far too quickly by naive and untried corporate entities that remain largely unregulated, though now far less naive. This power is consolidated through agreements—literal, habitual, and all-but-unconsidered—with ‘users’ who enter into them as ‘terms of service’ that are not mutual, but instead serve only to reinforce and increase the disparities between ‘server’ and ‘client.’ Huge marginal profits allow the new corporations to acquire, on a grand scale, the estates of conventionally licensed intellectual property along with the interest and means to conserve them, via both legal and technical mechanisms. In a particularly vicious aspect of this cycle of power aggregation, these same mechanisms remain wholly inadequate to the task of regulating the culture and commerce of networks, clouds, and big data: the very culture and commerce which grants big software its profits.
In installations of Common Tongues, LCPs from a section of How It Is are used to discover particular human-selected contexts for these phrases, written neither by Beckett nor by the project’s artists. These selections are hand-stitched together, maintaining syntactic regularity, and a new text is formed, one for which algorithmic processes guarantee that none of the constituent language is authored by the text’s makers. This text has its own significance and affect, though generated in regular relation with Beckett, and with hundreds of other writers. It could not have been made without digital affordances, that is, without algorithms. It could not have been made without the writing of many others. As a creative work, it runs counter to the customs and laws of intellectual property and defies conventions of authorship, but it is clearly a critical and aesthetic response to the evolving circumstances of linguistic and literary practice. This is, we believe, another way of saying that it has value. But how can we recognize and preserve this value? How will we protect it from the aggressive cultural vectors that threaten it?
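The phrase-discovery underlying this process can be sketched in simplified form. The sketch below is hypothetical, not the project’s implementation: it assumes that an LCP is a longest sequence of words from the source that also occurs verbatim in others’ writing, it substitutes a small in-memory corpus for the live, networked search the project relies on, and all function names are invented for illustration.

```python
# Hypothetical sketch: segment a source passage into its longest
# word sequences that also occur verbatim in other writers' texts.
# The actual project searched the open web; a tiny in-memory
# "corpus" stands in for that here.

def tokenize(text):
    return text.lower().split()

def longest_phrase_at(source_words, i, corpus_text):
    """Length of the longest run of source words starting at i that
    occurs (word-boundary aligned) in the corpus text."""
    best = 1  # fall back to a single word even if it is not found
    padded = f" {corpus_text} "
    for j in range(i + 1, len(source_words) + 1):
        if f" {' '.join(source_words[i:j])} " in padded:
            best = j - i
    return best

def segment_by_common_phrases(source, corpus_docs):
    """Greedily cover the source with its longest phrases found elsewhere."""
    words = tokenize(source)
    corpus_text = " ".join(" ".join(tokenize(d)) for d in corpus_docs)
    segments, i = [], 0
    while i < len(words):
        n = longest_phrase_at(words, i, corpus_text)
        segments.append(" ".join(words[i:i + n]))
        i += n
    return segments

corpus = [
    "I hear it murmur in the mud and in the dark",
    "a voice comes to one out of the silence",
]
print(segment_by_common_phrases("the voice comes to me in the mud", corpus))
# → ['the', 'voice comes to', 'me', 'in the mud']
```

Even this toy version exhibits the property the essay turns on: every multi-word segment of the resulting cover is guaranteed, by construction, to be language already written by someone else.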
 See Cayley, John. ‘Terms of Reference & Vectoralist Transgressions: Situating Certain Literary Transactions over Networked Services.’ Amodern 2 (2013): http://amodern.net/article/terms-of-reference-vectoralist-transgressions/.
 Please refer to the Project’s website http://thereadersproject.org, and, for further linked documentation of these works, to the ELMCIP Knowledge Base at http://elmcip.net/node/4677 (Common Tongues) and http://elmcip.net/node/5194 (How It Is in Common Tongues). For conceptual literature see, inter alia: Goldsmith, Kenneth. Uncreative Writing: Managing Language in the Digital Age. New York: Columbia University Press, 2011; Dworkin, Craig Douglas, and Kenneth Goldsmith, eds. Against Expression: An Anthology of Conceptual Writing. Evanston, Illinois: Northwestern University Press, 2011; and Place, Vanessa, and Robert Fitterman. Notes on Conceptualisms. Brooklyn: Ugly Duckling Presse, 2009. Computational literature does not yet have such readily identifiable apologia. However, Nick Montfort, http://nickm.com, is a major exponent, and the work of Noah Wardrip-Fruin, both aesthetic and theoretical, is highly relevant and significant.
John Cayley makes language art using programmable media. Recent work has explored aestheticized vectors of reading (thereadersproject.org, with Daniel C. Howe) and ‘writing to be found’ within or against the so-called services of Big Software. In future work he aims to write for a readership that is as much aural as visual. Cayley is Professor of Literary Arts at Brown University.
Daniel C. Howe writes natural and unnatural language as a means for exploring the social and political implications of networked technology, specifically concerning privacy, surveillance, and human rights. He divides his time between New York and Hong Kong, where he teaches at the School of Creative Media.