Synthesizing Naturalized Language Patterns to Simulate Meaningful Thresholds of Purpose for Interchangeable Labour in Specialized Knowledge Economies

Tonight in the Department of English and Film Studies at the University of Alberta, Professor Harvey Quamen delivered this year’s F.M. Salter Lecture on Language: “Written in Code: Computers as Language Machines.” During the question period, humanists and computer scientists alike raised concerns about the implications of Quamen’s titular thesis regarding the distinction between natural and programming languages (e.g., “Can a computer use metaphors?”) and between human and computer agency. I read these comments as defensive, even anxious—humanists not wanting to concede agency, or the traditional territory of human(istic) inquiry, creativity, and thought (language), to a non-human “tool” (e.g., in response to one commenter’s use of the word “make” with regard to computer language, another audience member insisted that humans “create”), while computer scientists likewise exhibited disciplinary anxiety about Quamen’s proposal that we change our governing metaphor for the computer from number-crunching to linguistic creativity. I’m surprised (yet, having witnessed a lot of academia, not surprised) that some humanists, even in view of the emergence in recent decades of such human-decentering perspectives as actor-network theory, object-oriented ontology, ecocriticism, and animal studies, have implicitly or explicitly still not accepted that the author and the human are historically unstable, politically constructed concepts, and that there is little “natural” about the metaphorically invested manipulations of sonic material by one particular primate species that deceives itself with that very material by naturalizing its particular anthropocentric use of it as “natural.” The reality of distributed agency, of humans’ non-centredness, and hence of humans’ non-proprietary hold over “natural” language should not even be in question anymore. The same people who raised these objections found their way home using cell phones and GPS while reading articles that may have been written by robots, on computers whose programs may have been written by other programs.

It is to our own detriment to deny the reality that language is no longer ours alone to command (as if it ever was!—this is the faulty ideological assumption that interpellates us as subjects: the assumption that, in Lacanian terms, when the Big Other of the symbolic order speaks through us, “I” speaks). As Federica Frabetti and Tara McPherson both argue in separate articles, we as tool-modular organisms cannot treat our technologies as neutral; they affect the way we think, the actions we take, the organization of our culture. McPherson discusses modularization as a paradigm of programming and software design complicit with racism and the capitalistic logic of the “factory” (150) and, especially nowadays, of the university:

“The intense narrowing of our academic specialties over the past fifty years can actually be seen as an effect of or as complicit with the logics of modularity and the relational database. Just as the relational database works by normalizing data—that is, by stripping it of meaningful, idiosyncratic context, creating a system of interchangeable equivalencies—our own scholarly practices tend to exist in relatively hermetically sealed boxes or nodes.” (154)

The defensive disciplinary displays at tonight’s lecture illustrated precisely this weakness of specialization. I’m not about to stage another disciplinary display by saying interdisciplinarity gets us out of the box (to the relational database, it too can be just another category); rather, I only want to discuss further the implications of being boxed in.
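For readers who haven’t worked with relational databases, McPherson’s analogy may be abstract, so here is a toy sketch of my own (not hers) of what “normalizing” does: records are split into lookup tables of shared keys, and whatever idiosyncratic context doesn’t fit a column simply disappears.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalized: each row keeps its idiosyncratic, human-readable context.
cur.execute("CREATE TABLE notes_raw (scholar TEXT, dept TEXT, marginalia TEXT)")
cur.executemany("INSERT INTO notes_raw VALUES (?, ?, ?)", [
    ("A", "English", "scrawled beside a coffee ring during November rain"),
    ("B", "English", "typed hastily between seminars"),
])

# Normalized: context is stripped into lookup tables of interchangeable keys.
cur.execute("CREATE TABLE depts (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE notes (scholar TEXT, dept_id INTEGER)")
cur.execute("INSERT INTO depts (name) VALUES ('English')")
cur.execute("INSERT INTO notes VALUES ('A', 1), ('B', 1)")

# Scholars A and B are now equivalent rows pointing at the same key;
# the marginalia -- the idiosyncratic context -- has no column to live in.
print(cur.execute("SELECT * FROM notes").fetchall())  # [('A', 1), ('B', 1)]
```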

Several of my blog posts thus far in response to the digital reading paradigm shift in the humanities have dealt with the question of randomness and probability, and I am about to embark on trying to get a computer to read one hundred trillion poems in order to further probe these issues, which radically challenge humanistic epistemological and methodological assumptions—what interpretive weight does one reading of one text bear, if it’s just one of one hundred trillion possibilities? Why read that way instead of n other ways? Why not even, using code, build conditionals and adjustable parameters into one’s reading?
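For scale, and as a sketch of what such a computational reading might look like: the ten-by-fourteen structure below presumably matches Raymond Queneau’s Cent mille milliards de poèmes, in which ten interchangeable variants of each of a sonnet’s fourteen lines yield 10^14—one hundred trillion—poems (placeholder strings stand in for actual verses; the structure, not the poetry, is the point).

```python
import random

# Ten interchangeable variants for each of a sonnet's fourteen lines:
# 10**14 possible poems -- one hundred trillion.
LINE_VARIANTS = [[f"line {i + 1}, variant {j + 1}" for j in range(10)]
                 for i in range(14)]
print(f"{10 ** 14:,} possible readings")  # 100,000,000,000,000

def read_one(seed: int) -> list[str]:
    """One 'reading' is one path through the matrix: a single sample out of
    10**14, chosen by an adjustable parameter (the seed)."""
    rng = random.Random(seed)
    return [rng.choice(variants) for variants in LINE_VARIANTS]

# Why read this way instead of n other ways? Change the parameter.
for line in read_one(seed=42)[:3]:
    print(line)
```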

The challenge raised by such questions stems from something similar to what McPherson calls (with regard to academic specialization) a “system of interchangeable equivalencies” (154): the risk that the knowledge we produce is arbitrary and interchangeable. While it’s true that most individual contributions, scholarly or otherwise, are going to be relatively small, and that it’s difficult to predict how such individual contributions can collectively add up over time to change a culture, it’s also true that extreme specialization is the logic of the assembly line and of corporate programmer collectives (you design one molecule of one cog of one wheel and will never gain security access to learn what purpose it serves in the giant mysterious machine being built), and that interchangeability is the logic of monetary capital (the universal equivalent). From the perspective of capital, our contributions are arbitrary and interchangeable; the danger now is that, because of the epistemological shift toward the macroscalar that we are facing, the interests of capital align with that shift.

A few years ago, Matt Might’s “illustrated guide to a Ph.D.” started circulating as a meme, telling a pretty story about what a Ph.D.’s cog-molecule will do: if a circle is the sum of all human knowledge, then when you become an expert in one area, you will push that circle’s radius ever so tinily outward—but push it you will! Progress will be made!

Scott Weingart deconstructs the picture this way, though: as that circle grows larger with time, then “every new dissertation expanding our radius…increas[es] the distance to our neighbors” such that “the inevitable growth of knowledge results in an equally inevitable isolation. / This is the culmination of super-specialization: a world where the gulf between disciplines is impossible to traverse, filled with language barriers, value differences, and intellectual incommensurabilities” (n.p.), or in McPherson’s words, “the divide-and-conquer mentality that the most dangerous aspects of modularity underwrite” (153).

[Image: Matt Might’s illustrated guide to a Ph.D.]

It’s not just a question of (inter)disciplinary community, though, but of epistemology and politics. If what we take for “knowledge” can just as easily be duplicated by a machine (whether or not this machine involves human components such as “programmers” and “CEOs”) capable of analyzing billions of patterns of human speech and discourse in order to calculate what degree of patternicity must be simulated to dupe such-and-such a pattern-seeking human into accepting that discourse as “meaningful” or “truthful,” then what knowledge producers like PhDs may be “producing” is less importantly “knowledge” than a form of labour that, if it has any knowledge-value at all, will more likely than not further serve the interests of the Big Data machines that crowd-source such individual, sequestered research in order to make Bigger claims with it. If the Big Data epistemological paradigm expands that “circle of knowledge” so as to render isolated individual efforts and claims increasingly obsolete, then it will succeed in subordinating those individuals themselves to a pattern-seeking behaviour that is (relatively, according to the Big Data paradigm) meaningless, in fact serving merely as a sample-size-increasing node within a larger machine.
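How cheaply can “patternicity” be simulated? Here is a deliberately crude sketch of my own (a toy Markov chain standing in for the vastly larger machinery I’m describing, not a description of any actual system): it “learns” which word follows which in a corpus and then replays the pattern.

```python
import random
from collections import defaultdict

# Toy pattern-synthesis: record which word follows which, then replay
# the observed pattern until the output looks "meaningful" enough.
corpus = ("the author is dead and the author is a function of discourse "
          "and discourse is a pattern and the pattern is the author").split()

chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

rng = random.Random(0)
word, output = "the", ["the"]
for _ in range(15):
    successors = chain.get(word)
    if not successors:   # dead end: the observed pattern ran out
        break
    word = rng.choice(successors)
    output.append(word)
print(" ".join(output))
```

A larger corpus and a longer memory raise the threshold of patternicity; the principle doesn’t change.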

I recognize the cynicism of such a perspective; still, admitting our lack of control over, and exclusive right to, language, in combination with the current shift in epistemology, should make us pause and think (or maybe, think less).

As McPherson writes, “Our screens are cover stories, disguising deeply divided forms of both machine and human labor. We focus exclusively on them increasingly to our peril” (152).

When there are so many Netflix or other movies you can watch (“research,” if you prefer) – even filtering for good ones within a genre you like – then does the particular flashing on the screen (and all its “generic” variations, its postmodernly indifferent interchanging of tropes) matter so much anymore as merely the fact that the content causes the screen to flash with enough patternicity to keep your pattern-seeking mind’s butt at the edge of its seat (but still, crucially, in its seat)?

Non-Digital Works Cited

Frabetti, Federica. “Have the Humanities Always Been Digital? For an Understanding of the ‘Digital Humanities’ in the Context of Originary Technicity.” Understanding Digital Humanities. Ed. David M. Berry. New York: Palgrave Macmillan, 2012. 161-171. Print.

McPherson, Tara. “Why Are the Digital Humanities so White? or Thinking the Histories of Race and Computation.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. 139-160. Print.


What is Reading, if not Proximity?: From Hermeneutic Codes to Sensory Hallucinations

In Graphs, Maps, Trees, Franco Moretti’s definition of distant reading in opposition to close reading is more provocative or playful than actual or, even, polemical; contrary to the misconceptions of many defensive close readers, as Ted Underwood puts it, the binary is not a real choice or debate; Andrew Piper’s medial term “scalar reading” is useful and could be cited more (Piper 382). However, since we’re dealing with a binary opposition, I think that in clarifying either term we need to question an assumption common to both: What is reading? This axiomatic concept is so close to literary scholars, and to subjects of a graphic culture in general, that its assumptions are hard to see; below, I compress a history of relatively gigantic leaps (bearing out any rabbit holes, Alice, with a shrinking potion à la Moretti: “distance is…not an obstacle, but a specific form of knowledge: fewer elements, hence a sharper sense of their overall interconnection” [95, italics his]) in order to question Western visual/sensory assumptions of reading in light of distant reading’s turn towards information visualization.

The hermeneutic practice of close reading, or interpreting a text for self-enclosed meaning (“the capacity of a critical language to substitute itself for another language, to say x actually means y” [Piper 380]), has long been the dominant mode of literary criticism. Far beyond the New Critics of one hundred years ago, the scholarly practice of close reading an exclusively small canon of literary texts can be traced back at least some 2000 years to the practice of transcribing and interpreting what was then the only literary text considered to be important: the Bible. More recently, close reading, especially in an English context, emerges in the sixteenth century from the Protestant Reformation, which, in conjunction with the still-new technology of printing, made it possible to distribute God’s Word in the common tongue for interpretation beyond the priestly. Nonetheless, the criticism of literary saints has, until (but still) very recently, been the exclusive right of academic priests gatekeeping the path to conversion from signifier x to transcendent signified y.

Lost in this history, however, is an alternative philosophy of reading emerging from the Jesuit Counter-Reformation. “[I]n reply to the new Protestant medium of the letterpress,” writes Friedrich Kittler, the Jesuits employed sensually descriptive poetry adapted from the scriptures, “a theater of illusions for all five senses (although the sense of vision took absolute priority in all of the spiritual exercises),” in order to engage a “reading practice for readers who did not stick to the letter but rather experienced its meaning immediately as a sensual hallucination” (78). Nor, likewise, did the Jesuits use “icons or panels on a church wall” or “miniatures” representing a Biblical story; rather, they sought to create “psychedelic visions” through which to experience the (often painful) story themselves: the Stations of the Cross, for example, or the flames of Hell (78). Thus “[i]t was a new kind of image worship, which, like the hallucinatory readings, was not directed at the image, but rather at its meaning” (79). This is not reading x for y, transposing one language into another, but experiencing a lived reality of x through/as x itself (the signifier as meaning). By the same stroke, the Jesuit “elite” engaged in a lived writing: they “worked over weeks and months with all possible mortifications of the flesh to actually achieve hallucinations” (79).

This emphasis on experience is different from the one lamented by those who fear distant reading as an encroachment on close reading. Stephen Marche, whose misuse of the words “data” and “information” I responded to in my previous post, argues that “literature is not data. Literature is the opposite of data,” using such phrases as “The experience of the mystery of language is the original literary sensation,” “Meaning is mushy,” and “The very first work of surviving literature [The Epic of Gilgamesh] is on the subject of what can’t be processed as information, what transcends data [i.e. ‘the ineffable’].” These ideas of literary experience are more Protestant (or pre-Reformation Catholic) than Jesuit: while Marche advocates reading for subjective experience and subjective meaning, that experience still involves reading for meaning – the sacred “mystery” and “original” of a “transcend[ent]” signified. Likewise, the ineffable – what exceeds (linguistic) data’s ability to represent (which Marche wrongly calls “information”) – is different from something (i.e. some data) that can’t be processed as (sensory) information, i.e. at all. Since language always fails as perfect representation (because data never translates into 100% information), the experience of language as “ineffable” doesn’t need to be elevated to something “mysterious” if “mysterious” can be reduced to “uncertain,” i.e. a bit (see previous post). If we don’t read with the expectation of being brought from x to y in the first place, then the ineffable is nothing more than the banal experience of language’s everyday inadequacy. Hence the difference with Jesuit reading: the Jesuits processed language data not as sensory information coding/representing another set of linguistic data, but simply as sensory information. There was no “ineffable” insofar as, to their minds, graphs (whether graphemes or graphics) could conjure the flames of Hell, 1:1.
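To make the “bit” concrete, here is a minimal sketch of my own, using Shannon’s standard definition rather than anything specific to Marche: uncertainty is measurable, and a perfectly patterned signal carries none.

```python
import math
from collections import Counter

def entropy_bits(text: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return sum((n / total) * math.log2(total / n) for n in counts.values())

print(entropy_bits("aaaaaaaa"))  # 0.0 -- one certain symbol, zero uncertainty
print(entropy_bits("abababab"))  # 1.0 -- two equiprobable characters: one bit
                                 # per character (this model ignores order)
```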

Now that I’ve mentioned graphs, maybe you sense where I’m going with this. In today’s culture of information visualization, is writing likewise undergoing a (counter-)reformation? Perhaps the important shift marked by “distant reading” is not so much in the “distant” part as in the reading. In Jean Baudrillard’s words, “ ‘virtual’ text (the Internet, word-processing)” is “work[ed] on…like a computer-generated image, which no longer bears any relation to the transcendence of the gaze or of writing….[A]s soon as you are in front of the screen, you no longer see the text as text, but as an image” (76). This very “text” you are reading is only the output of another underlying code designed to draw that text. Languages like HTML and CSS are instructions that tell the graphical user interfaces of Internet browsers how to draw text, images, layouts, etc. So, while the purpose of these languages is, to the computer, hermeneutic (translating from x to y), it is so limited a hermeneutic relation as to be a misnomer, since the computer doesn’t engage in the ambiguity of multiple critical readings; moreover, the purpose of these languages for the human user is not hermeneutic: they are languages designed to draw images, not to be read (except by a different kind of reader: a programmer coding or decoding them to figure out how the computer is unambiguously interpreting them). In this sense, today’s texts, and any hermeneutic engagement with them, already occur at some level of what Kittler calls Jesuit “hallucination.” Moretti’s graphs, likewise, are graphics calculated and drawn by computer-code graphemes not themselves present in the book. Perhaps these graphics, then, reflect not only a shift in the scale of hermeneutics (to the macroscopic), but a shift in writing/reading practice: from a rhetoric of representation to a rhetoric made more convincing through increasingly direct sensory engagement. (Cf. my discussion of an augmented-reality library database in Localities.) Piper, although he contextualizes his own reading practice in differentiation from the fifth-century Augustinian religious conversion model of reading (382, 384), describes his topological visualizations of Goethe’s corpus in a way reminiscent of Jesuit reading practice: while “reading is always simultaneously a practice of visual interpretation” as well as “decoding”, “topology undoes the binary distinction between text and illustration and rethinks text as illustrative” (388).
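As a toy demonstration of this text-as-image point (a sketch of my own, assuming the Pillow imaging library is installed; nothing here comes from Baudrillard or Kittler): to the machine, a displayed “text” is only instructions for colouring pixels.

```python
from PIL import Image, ImageDraw  # the Pillow imaging library

# Draw a "text" the way a screen ultimately presents it: as coloured pixels.
img = Image.new("RGB", (320, 60), color="white")          # a blank "page"
draw = ImageDraw.Draw(img)
draw.text((10, 20), "This is not a text.", fill="black")  # graphemes become pixels

# What the human "reads," the machine stores as an array of colour values.
pixels = list(img.getdata())
print(f"{len(pixels)} pixels; the first three: {pixels[:3]}")
img.save("not_a_text.png")
```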

In a broader sense, this “shift,” however, may only reflect the broader biases of a “writing” culture. Jonathan Sterne deconstructs the written-culture/orality-culture binary by showing how “orality is not a very good description of non-Western, non-industrial cultures”: “There were technologies prior to writing that served some of its functions. Painting and sculpture externalized memory and solidified institutional forms over time. Musical instruments and musical technique were disciplines of the body that subordinated collective communication to abstract codes, even if they were not semantico-referential codes like those of writing” (220, 221). It was only colonial rhetoric that promoted “writing” as a superior cultural marker, and largely because of Biblical logocentrism that we came to view writing as a self-enclosed media form.

Writing has always been but one medium interacting with any number of other forms of collective cultural hallucination. Close reading’s faulty assumption is not only its hyper-closeness to particular literary texts, but its hyper-closeness to writing as an exclusive and exclusively representational medium. Distant readings are no less prey to Western assumptions about reading, but their use of other graphics beyond the grapheme gestures towards writing not just as a representational rhetoric but as a directly sensory one, with precedents not only in sixteenth-century Jesuit Counter-Reformational practice, but in myriad multimedia cultural forms both pre- and post-“literate,” Western and non-Western.

Works Cited

Baudrillard, Jean. The Intelligence of Evil or the Lucidity Pact. 2004. Trans. Chris Turner. New York: Berg, 2005. Print.

Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London and New York: Verso, 2005. Print.

Piper, Andrew. “Reading’s Refrain: From Bibliography to Topology.” English Literary History 80 (2013): 373-399. Web.

Sterne, Jonathan. “The Theology of Sound: A Critique of Orality.” Canadian Journal of Communication 36 (2011): 207-225. Web.
