“Literature at the Macroscale”

(A response to papers by Andrew Piper, on “Wertherism,” and Hoyt Long and Richard So, on literary networks and the English-language haiku, MLA annual meeting, Chicago, Jan. 10, 2014.)

For lack of time, I will jump into the normative right away. Computers should augment human inquiry, not replace it: “augment,” that is, in a specific sense that I will try to elaborate. The point of using computers should not be to do the same sort of thing that scholars have been doing for a long time with file cards and dictionaries, only faster and larger. Let’s not frame the work of the Humanities in such a way that it throws the humans out on the street. My motive is more than protectionism. I hate wasted effort, even computer effort. Let us honor the tools by giving them work that only they—in combination with us—can do.

A particularly obvious example of the less interesting style of Digital Humanizing—and I hope I’m not offending someone presently in the room by citing it—is the study, reported a few months ago, of the language of American fiction since 1945. It showed, by tallying lexical frequencies, that over the period American fiction used more and more words connoting anxiety; therefore, the conclusion ran, the American consciousness can be shown to have grown more and more anxious over that period. Now if this is a sample conclusion from the great world of computer humanities, I don’t think it adds greatly to our knowledge. The reason I grouse about this sort of computer application is that it works entirely within the assumptions of traditional lexicography. A word, for this way of thinking, is a unit of meaning, and the more often a word or its synonyms appear, the more importance we are to ascribe to the concept referred to by that unit of meaning. Electronic databases allow for quick searches and effortless totaling, but there doesn’t seem to be a lot of new thinking here, at least as the research is reported. It is possible that the detailed results behind that conclusion will show us something we didn’t already know, but for that, someone will have to take a second, more imaginative look at the data.
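
For concreteness, here is roughly what such a tally amounts to in code: a minimal sketch, assuming a hypothetical folder of plain-text novels whose filenames begin with a year, and an invented list of anxiety words (the actual study’s corpus and lexicon are not reproduced here).

```python
# A minimal sketch of the frequency-tallying approach described above.
# The corpus layout ("1946_title.txt", etc.) and the word list are
# hypothetical stand-ins, not the study's actual data or lexicon.
from collections import Counter
from pathlib import Path
import re

ANXIETY_WORDS = {"anxious", "anxiety", "worry", "worried", "dread", "fear", "unease"}

def yearly_rates(corpus_dir):
    """Return {year: anxiety words per 1,000 tokens} for files named 'YEAR_title.txt'."""
    totals, hits = Counter(), Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        year = int(path.name.split("_")[0])
        tokens = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        totals[year] += len(tokens)
        hits[year] += sum(1 for t in tokens if t in ANXIETY_WORDS)
    return {y: 1000 * hits[y] / totals[y] for y in sorted(totals) if totals[y]}

for year, rate in yearly_rates("american_fiction_1945_on").items():
    print(year, round(rate, 2))
```

The sketch makes the complaint visible: the word remains the unit of meaning, and the machine does nothing a file card could not, only faster.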

The papers we’ve just heard don’t put computers to doing, more speedily or on a bigger scale, the same things that readers have been doing all along. They use the machines to discover new questions, questions we couldn’t or wouldn’t have anticipated. They thereby cause literary texts to show us a new face. Scale is incidental to the change in perspective. When I try to describe that change, I find myself asking questions such as: How does the object of our attention differ by being seen in quantity? How does the singular example’s status as example change by enlarging the set of comparables? What new comparative contexts does an open search for lexical parallels reveal? What new cause-and-effect relations are suggested when we jigger together lexical parallels, authors, publication venues, and chronology? What happens when we telescope in and out of different analytic scales (the phrase, the genre, the work, the career, the trend, the canon, or any of these seen in tandem with its relatives in other languages)? The reason for hybridizing our research with the computational demon is, after all, to enlarge our imaginations.

Imagination has been around for a while. In my role as decadent humanist I will try to draw parallels and draw distinctions (both are important) between the discoveries made in these papers and a few inventions of the unaided human fancy.

I think it is important to distinguish the sort of work being done here from the “distant reading” advocated a few years ago by Franco Moretti and other participants in the fields known as “world literature” or “study of the novel.” As you’ll remember, Moretti proposes to construct a worldwide account of literature “without a single textual reading,” by having local informants conversant with the relevant languages and cultures do the dirty work of reading novels from Indonesia, Somalia, Alabama, etc., and writing reports on them which are forwarded up the chain to what we might call the Theoretical Command Center. The overseers in the Command Center would then proceed to draw from these reports synthetic accounts of the growth, development, mutation and spread of different features of world fiction, and that would be world literary history. This outline calls for collective work of a kind not usually seen in the humanities, indeed in conscious parallel with the Big Science projects that have kept our universities humming since World War II; and in that regard it appeared innovative when first proposed about fourteen years ago. But it should be pointed out that nothing in this plan particularly requires instruments that were not available to Diderot and d’Alembert, or indeed to the Chinese court bibliographer Liu Xiang in the first century BC. The writing of reports that summarize a previous, more detailed report is one of the most ancient bureaucratic techniques. Machines are not particularly good at it. Ernest Renan, writing in 1849, predicted that the expansion of print culture would impose exactly this summary form on absolutely every kind of cultural record, disregarding nuances of style and individuality for the sake of comprehensiveness in the ever more crowded attentional space of modern humanity:

We often have a thoroughly wrong idea about our survival in the world to come; we imagine that literary immortality consists in being read by future generations. This is an illusion that we must abandon…. After a hundred years have passed, a genius of the first rank is reduced to two or three pages…. Some obscure, dust-covered book contains results that are still valid today and will be valid for all eternity. Literary history is destined to substitute, by and large, for the direct reading of works of the human mind…. The revolution which has transformed literature into newspapers or periodicals, and turned every work of the mind into a temporary production that will be forgotten a few days later, quite naturally puts us in this frame of mind. (Ernest Renan, L’Avenir de la science: pensées de 1848. Paris: Calmann-Lévy, 1890, pp. 226, 224, 225-227)

Renan was, whether he knew it or not, a media theorist; and here he is theorizing the effect of plentiful print publication on its audience’s state of mind. But there is nothing here that requires a changed receptive apparatus; somebody still has to read and digest the works in order to write the literary histories that replace them. Distant reading per se has nothing to offer macrodata, except perhaps a foot in the door. (A foot-in-the-door of which Matthew Jockers, author of Macroanalysis, has taken honorable advantage.)

Macro-reading by humans can counteract the influence of anthologies, canons and literary histories that choose a few examples and doom the rest to oblivion. Digital searches, here, can help us do this better, by more comprehensively accounting for the forerunners of the manners and devices that bore canonical fruit in the hands of people later recognized as major poets. The bump in translated or pseudo-translated haiku, twenty years before Pound’s “In a Station of the Metro,” is invisible to most histories of American poetry that proceed too quickly to sort the wheat from the chaff, or assume that sorting to have already been done. The macro-reader can program the Macrovision Apparatus to home in on this or that trick of phrase, metrical shape, use of the rhetorical question, etc., that we discover combining with others in the flood of literary discourse to produce a masterpiece we recognize. This alters our sense of what influence, imitation, intertextuality, originality and unoriginality mean.
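
To make the fancy concrete: one could imagine programming such an apparatus along the following lines. This is a toy sketch, not the method of the papers under discussion; the corpus format is invented, and the device detector shown (a crude surface test for the rhetorical question) would need to be far subtler in real use.

```python
# Toy sketch: surface forerunners of a device in a dated corpus.
# Corpus format ("YEAR_author_title.txt", one poem per file) is invented.
import re
from pathlib import Path

def has_rhetorical_question(text):
    # Crude stand-in: any line ending in a question mark.
    return bool(re.search(r"\?\s*$", text, re.MULTILINE))

def forerunners(corpus_dir, device_test, canonical_year):
    """List (year, filename) for texts showing the device before canonical_year."""
    found = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        year = int(path.name.split("_")[0])
        if year < canonical_year and device_test(path.read_text(encoding="utf-8")):
            found.append((year, path.name))
    return found

# Devices circulating before Pound's "In a Station of the Metro" (1913):
for year, name in forerunners("poems", has_rhetorical_question, 1913):
    print(year, name)
```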

The macro perspective also echoes the imaginative rethinking of the work of translation performed some forty years ago already by Itamar Even-Zohar, Gideon Toury, and other “polysystems theorists,” who observed that when a work of literature is translated, it isn’t just carried over word for word into a new idiom; it anchors itself in the existing repertoires of meanings, attitudes, styles, genres, gestures and precedent-setting literary works in the language of arrival. Translation is to some extent poetic assimilation. This observation—essential for understanding world literary history—can now be refined in countless ways by searching for textual correlations, perhaps invisible to the naked eye, but apparent to the assisted eye of the analyst of Wertherism, who must coin new terms for the edges and contours of the new landscape, as a mathematician might.
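
The simplest version of such a “textual correlation” is vector-space similarity between two texts. The sketch below, assuming nothing more than two plain-text files (the names are placeholders) and an ordinary bag-of-words model, is one crude instrument the assisted eye might start from, not the measure any of these papers actually uses.

```python
# Sketch: bag-of-words cosine similarity between a translation and a
# candidate precedent in the language of arrival. File names are placeholders.
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

translation = open("werther_in_english.txt", encoding="utf-8").read()
precedent = open("local_precedent.txt", encoding="utf-8").read()
print(round(cosine(bow(translation), bow(precedent)), 3))
```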

And finally, macroanalysis leads us to reconceive one of our founding distinctions, that between the individual work and the generality to which it belongs, the nation, context, period or movement. We differentiate ourselves from our social-science colleagues in that we are primarily interested in individual cases, not general trends. But given enough data, the individual appears as a correlation among multiple generalities. One sees this for example in medical research, where it was once thought sufficient to anonymize data before publishing them. But if the data are comprehensive enough, an external observer can go from the aggregate to the individual case. Let me cite a medical researcher’s remarks, taken from a conversation I was reading last month on Metafilter, a links and discussion page you’re doubtless familiar with as well:

It is amazing what one can glean from bulk data if there’s enough of it. (For instance: we and others have shown that it is possible to compromise patient/study participant privacy if the average allele frequencies — not individual genomes, but averages aggregated across thousands of people — of a medical study’s case and control groups are published for a sufficiently large number of loci.) I have close colleagues who work on similar problems in social networks, transportation networks (in some cases using cell phone tracking data from volunteers), text analysis, &c. None of us need to “read” anything; what we do need are data sets that are large enough to train a machine to pick out patterns that have low probabilities of occurring by chance. (Comment by Westringia F., posted at http://www.metafilter.com/134823/The-NSA-An-Inside-View-blog-post, December 16, 2013.)
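
The claim about aggregates is easy to verify in miniature. The following toy simulation, in the spirit of the published attack on pooled allele frequencies (Homer et al., PLoS Genetics, 2008), uses entirely invented “genomes” and a simplified statistic; it shows how, over enough loci, membership in a study group leaks out of nothing but averages.

```python
# Toy simulation of re-identification from aggregate frequencies.
# All "genomes" are random inventions; the statistic is a simplified
# stand-in for the one used in the genomics literature.
import random

random.seed(0)
LOCI, N = 5000, 200
pop_freq = [random.uniform(0.1, 0.9) for _ in range(LOCI)]

def genome():
    # One biallelic genotype per locus, coded 0, 0.5 or 1.
    return [sum(random.random() < p for _ in range(2)) / 2 for p in pop_freq]

study = [genome() for _ in range(N)]
study_freq = [sum(g[i] for g in study) / N for i in range(LOCI)]

def membership_score(g):
    # Positive when g sits measurably closer to the study averages than
    # to the population averages: evidence that g was in the study group.
    return sum(abs(g[i] - pop_freq[i]) - abs(g[i] - study_freq[i])
               for i in range(LOCI))

print("study member:   ", round(membership_score(study[0]), 2))  # clearly > 0
print("outside control:", round(membership_score(genome()), 2))  # near 0
```

No individual record is ever published here; the averages alone, multiplied across thousands of loci, suffice to tell insider from outsider.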

Since the bulk collection of metadata has become one of the growth industries in a country sadly lacking for growth industries, perhaps we should all be paying attention to the difference between the scale on which we’re used to living our lives and the macroscale. Literary analysis at both scales is one vividly intuitive way to think about the relations of reading, computing and knowing. It might therefore be good even for something other than literary understanding.

 
