Synthesizing Naturalized Language Patterns to Simulate Meaningful Thresholds of Purpose for Interchangeable Labour in Specialized Knowledge Economies

Tonight in the Department of English and Film Studies at the University of Alberta, Professor Harvey Quamen delivered this year’s F.M. Salter Lecture on Language: “Written in Code: Computers as Language Machines.” During the question period, humanists and computer scientists alike raised concerns about the implications of Quamen’s titular thesis for the distinction between natural and programming languages (e.g., “Can a computer use metaphors?”) and between human and computer agency. I read these comments as defensive, even anxious: the humanists did not want to concede agency, or the traditional territory of human(istic) inquiry, creativity, and thought (language), to a non-human “tool” (in response to one commenter’s use of the word “make” with regard to computer language, for example, another audience member insisted that humans “create”), and the computer scientists likewise exhibited disciplinary anxiety about Quamen’s proposal that we change our metaphors for the computer from number-crunching to linguistic creativity.

I’m surprised (yet, because I’ve witnessed a lot of academia, not surprised) that some humanists, even in view of the emergence in recent decades of such human-decentering perspectives as actor-network theory, object-oriented ontology, ecocriticism, and animal studies, have still not accepted, implicitly or explicitly, that the author and the human are historically unstable, politically constructed concepts, and that there is little “natural” about the metaphorically invested manipulations of sonic material by one particular primate species that deceives itself with that very material by naturalizing its particular anthropocentric use of it as “natural.” Distributed agency, humans’ non-centredness, and hence humans’ non-proprietary hold over “natural” language should not even be questions anymore. The same people who raised these objections found their ways home using cell phones and GPS while reading articles that may have been written by robots, on computers whose programs may have been written by other programs.

It is to our own detriment to deny the reality that language is no longer ours alone to command (as if it ever was!—this is the faulty ideological assumption that interpellates us as subjects: the assumption that, in Lacanian terms, when the Big Other of the symbolic order speaks through us, “I” speaks). As Federica Frabetti and Tara McPherson both argue in separate articles, we as tool-modular organisms cannot treat our technologies as neutral; they affect the way we think, the actions we take, the organization of our culture. McPherson discusses modularization as a paradigm of programming and software design complicit with racism and the capitalistic logic of the “factory” (150) and, especially nowadays, of the university:

“The intense narrowing of our academic specialties over the past fifty years can actually be seen as an effect of or as complicit with the logics of modularity and the relational database. Just as the relational database works by normalizing data—that is, by stripping it of meaningful, idiosyncratic context, creating a system of interchangeable equivalencies—our own scholarly practices tend to exist in relatively hermetically sealed boxes or nodes.” (154)
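To make the database half of McPherson’s analogy concrete, here is a minimal sketch (a toy example of my own, not McPherson’s) of what normalization looks like in practice: the “idiosyncratic” record is decomposed into tables of interchangeable rows that only a join reassembles into context:

```python
import sqlite3

# Toy schema: one contextual fact about a scholar is split across
# normalized tables; each row becomes an interchangeable tuple
# addressed only by its keys.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE scholars (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fields   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE outputs  (id INTEGER PRIMARY KEY,
                       scholar_id INTEGER REFERENCES scholars(id),
                       field_id   INTEGER REFERENCES fields(id),
                       title TEXT);
""")

cur.execute("INSERT INTO scholars (name) VALUES (?)", ("A. Scholar",))
cur.execute("INSERT INTO fields (name) VALUES (?)", ("Victorian poetry",))
cur.execute("INSERT INTO outputs (scholar_id, field_id, title) VALUES (1, 1, ?)",
            ("An article",))

# The join puts the context back together, but to the database every row
# is just an equivalent, swappable tuple.
for row in cur.execute("""
    SELECT scholars.name, fields.name, outputs.title
    FROM outputs
    JOIN scholars ON outputs.scholar_id = scholars.id
    JOIN fields   ON outputs.field_id   = fields.id
"""):
    print(row)
```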

The defensive disciplinary displays at tonight’s lecture illustrated precisely this weakness of specialization. I’m not about to stage another disciplinary display by saying interdisciplinarity gets us out of the box (to the relational database, it too can be just another category); rather, I only want to discuss further the implications of being boxed in.

Several of my blog posts thus far in response to the digital reading paradigm shift in the humanities have dealt with the question of randomness and probability, and I am about to embark on trying to get a computer to read one hundred trillion poems in order to further probe these issues, which radically challenge humanistic epistemological and methodological assumptions—what interpretive weight does one reading of one text bear, if it’s just one of one hundred trillion possibilities? Why read that way instead of n other ways? Why not, even, using code, build conditionals and adjustable parameters into one’s reading?
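For scale: one hundred trillion is exactly what a Queneau-style combinatorial structure yields (ten interchangeable lines for each of a sonnet’s fourteen positions gives 10^14 poems). A minimal sketch of the kind of parameterized, conditional reading I have in mind might look like this (the line sets and the filter below are placeholders of my own, not the actual corpus):

```python
import random

# Placeholder corpus: 14 line positions, 10 interchangeable variants each.
# A Queneau-style structure like this yields 10**14 possible poems.
LINE_CHOICES = [
    [f"line {i + 1}, variant {j + 1}" for j in range(10)]
    for i in range(14)
]

def read_one_poem(seed=None, require_word=None):
    """Sample a single poem; the parameters make the 'reading' adjustable."""
    rng = random.Random(seed)
    poem = [rng.choice(options) for options in LINE_CHOICES]
    # A conditional built into the reading: discard poems that fail a test.
    if require_word and not any(require_word in line for line in poem):
        return None
    return "\n".join(poem)

total = 1
for options in LINE_CHOICES:
    total *= len(options)
print(f"{total:,} possible poems")   # 100,000,000,000,000
print(read_one_poem(seed=42))        # one reading among 10**14
```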

The challenge raised by such questions is similar to what McPherson calls (with regard to academic specialization) a “system of interchangeable equivalencies” (154): the risk that the knowledge we produce is arbitrary and interchangeable. While it’s true that most individual contributions to scholarship or otherwise are going to be relatively small, and it’s difficult to predict how such individual contributions can collectively add up over time to change a culture, it’s also true that extreme specialization is the logic of the assembly line and of corporate programmer collectives (you design one molecule of one cog of one wheel and will never gain the security access to learn what purpose it serves in the giant mysterious machine being built) and that interchangeability is the logic of monetary capital (the universal equivalent). From the perspective of capital, our contributions are arbitrary and interchangeable; the danger now is that the interests of capital align with the epistemological shift toward the macroscalar that we are facing.

A few years ago, Matt Might’s “illustrated guide to a Ph.D.” started circulating as a meme, telling a pretty story about what a Ph.D.’s cog-molecule will do: if a circle is the sum of all human knowledge, then when you become an expert in one area, you will push that circle’s radius ever so tinily outward—but push it you will! Progress will be made!

Scott Weingart deconstructs the picture this way, though: as that circle grows larger with time, “every new dissertation expanding our radius…increas[es] the distance to our neighbors” such that “the inevitable growth of knowledge results in an equally inevitable isolation. / This is the culmination of super-specialization: a world where the gulf between disciplines is impossible to traverse, filled with language barriers, value differences, and intellectual incommensurabilities” (n.p.), or in McPherson’s words, “the divide-and-conquer mentality that the most dangerous aspects of modularity underwrite” (153).
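Weingart’s geometry can be made painfully literal with a back-of-the-envelope model (my own toy calculation, not his): if a fixed number of specialists sit evenly spaced on the boundary of the circle of knowledge, the gap between neighbours is the circumference divided by their number, and it grows in lockstep with the radius:

```python
import math

def neighbour_gap(radius, specialists):
    """Arc length between adjacent specialists evenly spaced on the boundary."""
    return 2 * math.pi * radius / specialists

# Hold the number of specialists fixed and let the circle of knowledge grow:
for r in (1, 10, 100, 1000):
    print(r, round(neighbour_gap(r, specialists=50), 2))

# The gap grows linearly with the radius: expansion is isolation,
# unless the number of neighbours grows at least as fast.
```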


It’s not just a question of (inter)disciplinary community, though, but of epistemology and politics. Suppose what we take for “knowledge” can just as easily be duplicated by a machine (whether or not this machine involves human components such as “programmers” and “CEOs”) capable of analyzing billions of patterns of human speech and discourse in order to calculate what degree of patternicity must be simulated to dupe a given pattern-seeking human into thinking of such discourse as “meaningful” or “truthful.” Then what knowledge producers like PhDs may be “producing” is less importantly “knowledge” than a form of labour that, if it has any knowledge-value at all, will more likely than not further serve the interests of the Big Data machines that crowd-source such individual, sequestered research in order to make Bigger claims with it. And if the Big Data epistemological paradigm expands that “circle of knowledge” so as to render isolated individual efforts and claims increasingly obsolete, then it will succeed in subordinating those individuals to a (relatively, by the Big Data paradigm’s own measures) meaningless pattern-seeking behaviour, each individual in fact serving merely as a sample-size-increasing node within a larger machine.
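As a crude illustration of what “simulating patternicity” could mean at the smallest possible scale (a toy bigram model of my own, nothing like the Big Data machinery I’m gesturing at), consider how little it takes to emit discourse-shaped strings that nobody ever meant:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Learn which word tends to follow which: the crudest model of patternicity."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=12, seed=0):
    """Emit text that mimics the learned patterns without 'meaning' anything."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("knowledge is produced by patterns and patterns are produced by "
          "knowledge machines that produce patterns of knowledge")
model = train_bigrams(corpus)
print(generate(model, start="knowledge"))
```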

I recognize the cynicism of such a perspective; even so, admitting our lack of control over, and exclusive right to, language, in combination with the current shift in epistemology, should make us pause and think (or maybe think less).

As McPherson writes, “Our screens are cover stories, disguising deeply divided forms of both machine and human labor. We focus exclusively on them increasingly to our peril” (152).

When there are so many movies you can watch on Netflix or elsewhere (“research”, if you prefer) – even after filtering for good ones within a genre you like – does the particular thing flashing on the screen (and all its “generic” variations, its postmodernly indifferent interchanging of tropes) matter so much anymore, or only the fact that the content causes the screen to flash with enough patternicity to keep your pattern-seeking mind’s butt at the edge of its seat (but still, crucially, in its seat)?

Non-Digital Works Cited

Frabetti, Federica. “Have the Humanities Always Been Digital? For an Understanding of the ‘Digital Humanities’ in the Context of Originary Technicity.” Understanding Digital Humanities. Ed. David M. Berry. New York: Palgrave Macmillan, 2012. 161-171. Print.

McPherson, Tara. “Why Are the Digital Humanities so White? or Thinking the Histories of Race and Computation.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. 139-160. Print.
