11/19/13

Toward a plural theory of Anthropocenes

For all I know this has been said before, but: the Anthropocene is a world-concept.

The normal way to understand the Anthropocene is as a historical period, defined more or less as the era when human beings acquire the capacity to affect the ecology of the entire planet, thereby opening the door to mass extinction, disastrous climate change, and, at the limit, the disappearance of the species. Generally people want to date it to the beginning of the Industrial Revolution, though you see arguments for dating it to the beginning of agriculture. Since the challenge we are facing collectively at the moment (and for the next centuries) is the immediate result of the dramatic expansion in carbon-based energy (oil, gas) use that comes from the Industrial Revolution, my impression is that most people are inclined toward that date.

But that’s just because the scope of this environmental event is in fact the entire planet Earth. I want to suggest that we become aware of/wish to designate the Anthropocene at this crucial moment precisely because of that scope: because, like the various other -cenes (the Pleistocene, the Holocene), this era involves ecological/geological/meteorological activity that is planet-wide, we feel comfortable declaring it to be “epoch”-worthy. That is, the epoch (that which can be designated by the -cene, that which is a scene for the -cene) is partially a temporal metaphor for spatial scale.

This is true of all world-concepts, and trivial. But now what we can do is to scale down the Anthropocene from the world to a world, and recognize that, unlike the Pleistocene or the Holocene, we can use the concept to refer to any “world” (that is, any relatively closed totality, relatively closed because, like our totality, it can potentially be escaped from, in our case via rocket ships/space colonization) that is capable of producing self-extinction through the manipulation of its environment.

In that case there have been other Anthropocenes, some of them, perhaps, not even human. Any virus that kills its host too rapidly–before the host has a chance to infect others–is Anthropocenic in this sense. We might also think of the series of extinctions on Easter Island as one example of a quasi-Anthropocene (resolved by the arrival of European explorers). Or, an extreme and fanciful case, of a literary character like Raskolnikov.

I am not sure that it is politically useful to think of the Anthropocene this way — it may be that there’s more traction in terms of getting people to think about how to live, or die, in it if they can have the narcissistic pleasure of imagining themselves to be historically unique. But it may also be that philosophers and other humanists could benefit from a plural theory, a theory of Anthropocenes, both as a structure for comparative analysis and as a humbling reminder that self-destruction, when it happens, is usually a matter of degrees of difference, not kinds, from ordinary life.

11/11/13

Critical Distance and the Crisis in Criticism (2007)

One of the things I want to do sometimes is to repost stuff from Printculture’s archives, because it tends to be hard to find. Here is a series of discussions on the topic of something I called “leverage,” by which I meant, as Mark McGurl pointed out in the comments, “critical distance.” The conversation that ensues sees the two of us thinking through and explaining some of the things that motivated The Program Era and The Hypothetical Mandarin. The entire series of posts (combined below) dates from October 2007. I will also say that one of the weird things about rereading this stuff is realizing how old some of my ideas are; I swear I’ve repeated some of the things I say below in the last couple of years as though they’d just occurred to me.

Leverage as a function of critical capability and interest

It occurred to me the other day — and in fact I may have already bored one or two Printculture readers with this — that it would be useful to think about why so much academic work on contemporary material isn’t very good. But perhaps the premises bear repeating: (1) a higher percentage of literary critical or cultural analysis of contemporary material — fiction, poetry, film, the culture in general — says, by my standards, completely predictable things than does work on material removed from us in time, and (2) such work is therefore no good. I have no data to back up the first part; it’s merely an impression. For the movement from the first premise to the second, I rely on my belief that literary critical analysis should, in general, aim to teach us things we don’t already know about the world.

The question I’m setting out to answer here is why this is true. Why, that is, does work on contemporary material so often simply tell me what I (feel like I) already know?

The answer has to do, I think, with leverage. By leverage I mean the difference I am able to generate between myself (what I know or see) and what X knows or sees on its own; my ability to tell you something about X that X doesn’t already know about itself, and isn’t obviously saying to anyone who’s paying attention, depends to a very large extent on that difference.

Continue reading

11/11/13

Recidivism in weight loss

Nice article from NY Mag on the psychological and physiological adjustments that come with having lost large amounts of weight.

Cultural fantasies of weight loss present a tidy, attractive proposition – lose weight, gain self-acceptance – without addressing the whole truth: that body image post-weight loss is often quite complicated. Perhaps that helps explain why the rate of recidivism among people who have lost significant amounts of weight is shockingly high – by some estimates, more than 90 percent of people who lose a lot of weight will gain it back. Of course, there are lots of other reasons: genetic predisposition towards obesity, for one. For another, someone who’s lost 100 pounds to get to 140 pounds will need to work harder – including eating much less each day – to maintain that weight than someone who’s been at it her entire life. (Tara Parker-Pope’s excellent piece “The Fat Trap” explains these physiological factors in much greater detail.) But what about the psychological? Who would be surprised if a person – contending with both a new body that looks different from the one she feels she was promised, and the loneliness of feeling there’s no way to express that disappointment – returned to the familiar comfort of overeating? At least its effects are predictable.

Two thoughts: first that the last bit is of a piece toward a more general understanding of how psychologically difficult deprivation is, and how things like being fat or being poor change the wiring of our bodies and our brains. Beginning from that understanding makes compassion for the choices others make far easier (and moralizing judgment oriented around disgust more difficult).

Second is that I wonder if anyone’s ever done a comparative analysis of the disappointment one feels after losing a great deal of weight and the disappointment of the post-pregnancy/childbirth body. Both are situations in which one does not return (unless one is a certain sort of celebrity, I suppose) to the status quo ante; in the case of weight loss this is exacerbated or made more weird, of course, by the fact that the new status quo may never have been ante. I was 6’1″, 215 pounds at age 16; 6’3″, 240 at 18; and 6’3″, 278 in summer 2002. Since 2007 I’ve bounced between 190 and 200 (I was at 184 at one point, but never again) and I’m still not used to it.
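The physiological claim in the quoted passage — that someone who has lost 100 pounds must eat less to hold a given weight than someone who has always been that weight — can be roughed out numerically. The sketch below uses the Mifflin-St Jeor equation for resting energy needs; the 15% metabolic suppression applied to the post-loss case is an illustrative assumption (the research behind “The Fat Trap” reports suppression of roughly this order, but the exact figure varies by study and person).

```python
# Mifflin-St Jeor estimate of resting energy expenditure (kcal/day).
# The 15% suppression for the formerly-obese case is an illustrative
# assumption, not a precise clinical number.
def mifflin_st_jeor(weight_kg, height_cm, age, female=True):
    s = -161 if female else 5
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + s

# Two people, both 140 lb (63.5 kg), 5'5" (165 cm), age 35:
always_140 = mifflin_st_jeor(63.5, 165, 35)           # lifelong 140
formerly_240 = mifflin_st_jeor(63.5, 165, 35) * 0.85  # post-weight-loss 140

print(round(always_140), round(formerly_240))
```

On these assumptions the gap is a couple hundred calories a day — every day, indefinitely — which is one concrete way of stating why maintenance is so hard.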

11/8/13

Bonus points to the cynical guy

So a few months ago I predicted that one day actors would be hired by firms like Coursera to teach MOOCs (because once you don’t have to respond to student questions live, who cares who reads from the script? Might as well be a hottie…).

And now one of the leading MOOC firms, edX, is considering hiring Matt Damon to teach a course.

Casting Damon in a MOOC is just an idea, for now: In meetings, officials have proposed trying one run of a course with someone like Damon, to see how it goes. But even to consider swapping in a star actor for a professor reveals how much these free online courses are becoming major media productions—ones that may radically change the traditional role of professors.

One for-profit MOOC producer, Udacity, already brings in camera-friendly staff members to appear with professors in lecture videos. One example is an introduction to psychology course developed earlier this year in partnership with San Jose State University. It had three instructors: Gregory J. Feist, an associate professor of psychology at San Jose State University, who has been teaching for more than 25 years and who wrote a popular textbook on the subject; Susan Snycerski, a lecturer at the university who has taught for 15 years; and Lauren Castellano, a Udacity employee who recently finished a master’s in psychology from the university, advised by Feist.

10/22/13

Scott Adams is a strange man

…with lots of ideas about the future of online education.

I suppose by “strange” I mean that his politics (if you look at his blog) operate from a position that imagines itself as entirely apolitical but is nonetheless quite interested in politics. So it produces frequent pox-on-both-houses language, but also pragmatic suggestions for various kinds of things (including online ed, in the link above) with no real concern for what I think of as the “normal” language of American politics (involving concepts like the moral, the just, and so on).

And then you ask yourself — well, who would Dilbert vote for? — and you realize that Adams’s politics are perfectly in tune with the strip, because the answer is totally unknowable. Even the grounds on which Dilbert might vote for someone are unknowable.

10/20/13

And the MOOC revolution seems to be over

At least according to this reading of a Chronicle story by Chris Newfield. Short version: both faculty and university presidents agree that MOOCs will have a negative impact on higher ed, and this opinion is held by people who nonetheless seem open to technological and other kinds of innovation in teaching (so it’s not just a thoughtless resistance to change).

And yet, the problem is that for about 18 months state legislatures were allowed to pretend (or pretended to pretend) that the MOOC would allow for further cutting of state support for higher education…

In other words, when universities lose MOOCs as a budget solution, they lose the main source of hope that state politicians had for a free fix of the college cost problem for a less affluent, not wonderfully educated younger generation.  MOOCs were the austerity solution to the mass quality problem.  Without them, tempers will flare, fingers will point, and funding will not be restored. In the meantime, faculty are going to have to lead higher ed innovation anyway, and the good news is that post-MOOC-as-cure-all faculty don’t need to focus on the technology to the exclusion of the “human side” of teaching and learning.

Now that the MOOC seems to be a non-viable solution, we can look forward to the rapid restoration of that missing funding.

10/7/13

How Someone Ends Up Working in Disability Studies…

… or at least thinking about it.

Those of you who know me and my family know that our son, Jules, was born with a very rare genetic disability (known as 9p deletion syndrome). He’s fine, at least medically, though it was no fun for the first three weeks of his life and has on various occasions been a little less fun than it otherwise might have been (cleft palate surgery, some ongoing concerns, now faded, about his heart). Cognitively, we know less about the future than we might, partly because the syndrome is so rare (maybe 150 cases in the United States), partly because it produces such a wide range of outcomes, and partly because the treatment of the disabled has changed so radically in the United States in the last 60 years that evidence gathered on the basis of a 30-, 40-, or 50-year-old 9p deletion person does you little to no good, since that person lived through a radically different set of approaches to disability than will any child born ten or twenty or thirty years later.

I know less than I should about how disabled people are treated in the United States. More than I used to know, of course, before Jules was born, before he spent 2.5 of his first 3 years in an amazing day care facility, in which he was fully integrated with the other kids (a process known as “mainstreaming,” now the normal thing to do in the United States), and to which state-provided therapists (occupational, physical, speech, developmental) showed up for 7 hours a week to help Jules catch up with his peers.

The idea behind mainstreaming and the therapy (which is known generally as “early intervention”) is simple and twofold: first, that the earlier you can work with disabled (or even potentially disabled) children, the better you can help them reach their maximum genetic potential (I know that’s a fuzzy concept, but let’s use it loosely here to express something like the maximal cognitive capacity someone can reach, all other things being equal); and, second, that surrounding (potentially) disabled children with other children who are developmentally “ahead” of them actually encourages the (potentially) disabled children to rise to the level of their peers. In this mainstreaming takes advantage of two well-established developmental facts: that early and frequent intervention produces better developmental outcomes, and that peer effects are powerful social, physical, and cognitive motivators (for good and ill–just ask someone who chooses to live in a frat house).

So by the summer of 2013 Jules barely qualified to continue in the state-provided program that provided the 7 hours of extra attention per week that he had been getting since he was four months old. He had made amazing progress, and was catching up to his peers on a number of levels that the state measures to determine eligibility for its programs (gross motor, fine motor, speech, social/psychological maturity, etc.). But we were thrilled that he still qualified, because we knew that the more help he got, the better off he’d be in the long run. (None of this stuff means he’ll stay caught up with his peers, which is why this early intervention is so important.)

And then we decided to move to Germany for the academic year.

Continue reading

05/29/13

The Future of the University: A Vision

Some people think MOOCs are bad, some people think they’re good (though I know almost none of the latter). But what you really need to know is: what’s going to happen to the university in the next twenty years as a result of innovations in content delivery?

Luckily for you I have had a vision of the future. I don’t like some of it, but I think it’s accurate. If I were a dean or a university or college president I would be thinking about what I could do right now to respond to the changes that are coming. And if you teach in a university, or attend one, or plan on having friends or children who do, then you need to know what’s coming, because it will affect (and indeed transform) the entire institutional structure of higher education in the United States (and probably worldwide). I’ve put it all in an easy-to-read Q&A format, so no excuses for not following along.

As a bonus at the end I’ll tell you what’s happening to public education at the K-12 level, and offer some suggestions on how to keep the most disastrous vision of the future from coming true. Continue reading

05/28/13

Something you probably didn’t know about satellite radio

So let’s be clear: satellite radio is MUCH MUCH better than regular radio. If you drive as part of your job you should get satellite radio immediately.

That said, something you probably didn’t know is that satellite radio has a pornography channel. I listened to it for about 15 minutes (at least that’s what I’m admitting to) during an 11-hour car drive from Springfield, Illinois to State College and heard two kinds of shows:

1. A show with three female hosts in which one of them discussed in detail a three-way she had with her two roommates. The description was surprisingly graphic, and then the other hosts were asking things like, “tell me exactly how you were positioned–were his balls in your mouth or just banging on your chin?” and so on… and then exclaiming things like “oh that’s so hot” and so on. Kind of amazing.

2. Another show in which people call in and tell the hostess what they’d like to do to her. “If I were there I’d be doing bla bla bla,” followed by “Oh, that sounds amazing–I wish you were here right now, I’d totally suck up all your cum,” etc. I honestly could only listen to this for about 20 seconds before becoming too embarrassed so I have no idea how the show goes beyond that.

Still: amazing!

02/28/13

Who’s Afraid of China?

I’ll be giving a keynote at Indiana University of Pennsylvania’s first annual Asian Studies Undergraduate Research Conference, titled “Who’s Afraid of China?” One of the pleasures of writing the talk was the opportunity to go back to these sentences, which I wrote in 2002, whose context was the shift caused by 9/11, in which we went from potentially being enemies of China (you’ll remember the Belgrade embassy bombing of 1999 and the spy plane controversy of 2001) to being allies in the war on Muslim terror.

The insistence on Chineseness as a particularly odd combination of ancient past and scientific future has clearly demonstrated its ability to resurface when needed. Should the geopolitics change again, we will find ourselves right back in the middle of more “coming conflict” literature, perhaps this time forced to work against it in the face of events that will make its predictions seem all the more prescient.

I don’t make predictions much, but this one has come delightfully and perfectly true, so I feel obliged to brag about it. Of course, no one since 1600 would have ever lost money betting on the eventual appearance of anti-Chinese Yellow Perilist sentiment, which will make my back-patting fairly mild.

02/12/13

When Beautiful Dreams are Bad Dreams

Working my way through Conor Friedersdorf’s collection of 2012’s best nonfiction, I have come across a piece by Joshua Foer on a man named John Quijada, who has invented a language, Ithkuil, that attempts to fulfill the age-old dream of a perfect language.

At one point Foer describes what happened after Quijada read Lakoff and Johnson’s Metaphors We Live By:

For Quijada, this was a revelation. He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.

“As time went on, my goal began changing,” he told me. “It was no longer about creating a mishmash of cool linguistic features. I started getting all these ideas to make language work more efficiently. I thought, Why don’t I just create a means of finishing what all natural languages were unable to finish?

The piece is fascinating (though Foer’s prose is only really average, if by “average” you’ll allow me to refer to the general high quality of New Yorker prose). But it does go to show that dreaming big almost always means dreaming crazy. Quijada’s story is wonderful, and Foer includes just enough of the history of invented languages (you can get more, and have more fun, reading Arika Okrent’s book) to give the whole thing context.

Some flavor of both the lovely, bold, joyful craziness of it all and the desperate grasping for control that accompanies it can be gathered from these two paragraphs, which succeed one another immediately and appear three-quarters of the way through the piece:

He opened a closet and pulled out a plastic tub filled with reams of graph paper documenting early versions of the Ithkuil script and twenty-year-old sentence conjugations handwritten in marker on a mishmash of folded notepads. “I worked on this in fits and starts,” he said, looking at the mass of documents. “It was very much dependent on whether I was dating anyone at the time. This isn’t exactly something you discuss on a first or second date.”

Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.

02/11/13

Air pollution in China: alpha/omega?

Useful and interesting discussion at China File on “airpocalypse now.”

Quote from Alex Wang to set up the discussion:

My own view is that China’s tipping point, in a sense, already arrived a few years ago. But the official response has been wholly inadequate to the task. Fundamental weaknesses in the way that China has approached its environmental protection efforts mean that the environmental crisis has continued to run amok.

Put all this in the “why I’m down on China” file, whose contents explain why my family will not be spending my 2013-14 sabbatical there.

02/11/13

Multigenerational social mobility

…is apparently less fluid than we tend to think. A really useful piece from the Economist updates us with the latest research from a variety of social scientists, and also–incredibly usefully–includes links to all the research it cites.

Money quote:

A second method relies on the chance overrepresentation of rare surnames in high- or low-status groups at some point in the past. If very few Britons are called Micklethwait, for example, and people with that name were disproportionately wealthy in 1800, then you can gauge long-run mobility by studying how long it takes the Micklethwait name to lose its wealth-predicting power. In a paper written by Mr Clark and Neil Cummins of Queens College, City University of New York, the authors use data from probate records of 19th-century estates to classify rare surnames into different wealth categories. They then use similar data to see how common each surname is in these categories in subsequent years. Again, some 70-80% of economic advantage seems to be transmitted from generation to generation.
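The arithmetic behind the quoted method is simple geometric decay: if a fraction b of economic advantage is transmitted each generation, a surname group’s expected advantage after n generations is b to the nth power. A minimal sketch, where the 0.75 persistence rate is my own illustrative midpoint of the quoted 70–80% range, not a figure from the paper:

```python
# Geometric decay of a surname group's wealth advantage.
# b = 0.75 is an illustrative midpoint of the quoted 70-80% range.
import math

b = 0.75

def advantage_after(n, persistence=b):
    """Expected share of an initial wealth advantage remaining after n generations."""
    return persistence ** n

# Generations needed for the advantage to halve: ln(1/2) / ln(b)
half_life = math.log(0.5) / math.log(b)

print(f"after 5 generations: {advantage_after(5):.2f} of the original advantage")
print(f"half-life: {half_life:.1f} generations")
```

At that persistence rate the advantage halves only every ~2.4 generations, which is why a name that was disproportionately wealthy in 1800 can still predict wealth many decades later.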

It should by the way be mandatory for articles in newspapers and magazines published online to include links to the scientific papers to which they refer.

02/4/13

Best nonfiction of 2012

Per Conor Friedersdorf, who is not my favorite political writer, but still: a list of 102 very good to excellent nonfiction pieces for the year.

I’ll be reading through them when I can (though not this week!) but for now here’s a link to Cory Doctorow’s excellent piece on the future of computing. Opening paragraphs:

General-purpose computers are astounding. They’re so astounding that our society still struggles to come to grips with them, what they’re for, how to accommodate them, and how to cope with them. This brings us back to something you might be sick of reading about: copyright.

But bear with me, because this is about something more important. The shape of the copyright wars clues us into an upcoming fight over the destiny of the general-purpose computer itself.

01/29/13

Imagining a New University

When I was younger I used to pass long car rides from home to college (7 hours, much of it on the PA turnpike) by doing two things (well, three if you count the constant masturbation, but who does?): narrating imaginary golf tournaments to myself (why? I have no idea… I’ve never actually played golf) and imagining the structure of a new university, to be funded by me after I won some enormous lottery jackpot.

(Reader, you are forgiven if, after reading this list, you said to yourself, “so, I guess really just one thing after all.”)

That is why I was delighted to read Lawrence Weschler’s piece imagining a new university in Public Books, which you should also go read. Here’s his vision for the core curriculum:

Hence the core, to be titled Play/Ground—a yearlong course that would take up at least half of the students’ (and the participating faculty’s) workload that first year. Every year, twelve members of the faculty would be peeled off to run the core (a different twelve each year, in a general four-year rotation), chosen to reflect the widest possible range of disciplines: a musicologist, say, and a physicist, a political theorist, a climatologist, a classicist, a microbiologist, a historian of Islam, a sculptor, an information scientist, an economist, and so forth. All the students and faculty in the core would gather together in a large lecture hall every Monday morning for a sequence of three-week minicourses offered, one after the next in turn, by each of the participating faculty, in which said teacher (the musicologist for three weeks, and then the physicist, the political theorist, and so forth) would be expected to take the class on a concentrated tour of one aspect or issue or controversy in their discipline. For the rest of the week, to further explore themes raised by that three-week series of lectures (and then the next and then the next), the class would be broken up into twelve seminars of ten to twelve students, each led by one of the participating faculty (groupings that would meet two or three times a week and stay together through the entire year). Key here would be the fact that in most cases, the faculty leader wouldn’t necessarily be any more conversant with the topic in question than his or her charges: he or she would just have a better sense of how to use the library, how to read, how to hone questions, et cetera. (Though one might imagine a parallel seminar in which the participating faculty themselves would meet on a weekly basis to receive added instruction and compare notes on how the course was proceeding.)

01/23/13

Trading Babies Are Not Enough…

…to bring us to Milton Friedman’s promised land.

(Before I get started: I find the baby ads (from E-Trade) obnoxious, partly because they suggest (not despite but because of the humor) a kind of distant limit for the absolute financialization of everyday life, from birth to death, the final dream of which is the end of the welfare state and the incorporation of human beings (thereby neatly reversing Mitt Romney’s canard).)

The title of this post derives from new research by Roger Farmer, who shows (or purports to–I’m not qualified to judge) that efficient market hypotheses fail because no market system can include investment choices made by the as-yet-unborn:

Steve Davis and Till von Wachter (2011) have shown that the present value of lifetime income of new entrants to the labour market can differ substantially depending on whether their first job occurs in a boom or a recession. In our model, the lifetime income of the young can differ by as much as 20% across booms and slumps.

Given the choice, the young agents in our model would prefer to avoid the risk of a 20% variation in lifetime wealth. There is a feasible way of allocating resources that would insure them against this risk, but financial markets cannot achieve this allocation, except by chance. The inability of our children to trade in prenatal financial markets is sufficient to invalidate the first welfare theorem of economics.

As Farmer goes on to say, the research has “Keynesian policy implications” (I had figured it might).
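The 20% figure in the quoted passage is a claim about the present value of lifetime income, which is easy to rough out. In the sketch below the wage levels, growth rate, discount rate, and career length are all invented numbers for illustration (the gap comes entirely from the lower entry wage, which is the Davis–von Wachter point that a recession-era first job depresses the whole path); none of them are parameters from Farmer’s model.

```python
# Present value of a lifetime wage path: the entry wage grows at a constant
# rate and each year's income is discounted back to the entry date.
# All numbers are illustrative, not from Farmer's paper.
def lifetime_pv(entry_wage, growth=0.02, discount=0.04, years=40):
    return sum(entry_wage * (1 + growth) ** t / (1 + discount) ** t
               for t in range(years))

boom = lifetime_pv(50_000)   # hired in a boom
slump = lifetime_pv(40_000)  # hired in a recession: 20% lower entry wage
gap = 1 - slump / boom

print(f"lifetime PV gap: {gap:.0%}")  # → 20%
```

Because the present value is linear in the entry wage, a 20% lower starting salary that never recovers translates one-for-one into a 20% lower lifetime present value — and, as Farmer notes, the person who would want to insure against that risk cannot trade before being born.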