As you know, the two main divisions of scholarly labor are nailing jelly to a wall and herding cats. I will be doing both today. The jelly I am taking in hand is the concept of “oral literature,” and the pack of mutually antagonistic cats includes Julius Caesar, Jean-Jacques Rousseau, Johann Gottfried Herder, Michael Silverstein’s teacher Roman Jakobson, and many less famous thinkers, mainly from the last two centuries.
Let’s talk about something that makes us uneasy. Retirement.
To raise the question is almost automatically to send each hearer into a private zone of calculation. Prisoner’s dilemma, self-constructed. For if I signal that I am ready to think about retirement, I am virtually abdicating my role in an institution, and I’m embarking on some risky financial and other calculations for which I’m perhaps not entirely prepared– so of course I don’t feel like making public my thoughts on the matter. (And I’m not, here: for the aficionados of the use/mention distinction, I am talking about what it would mean to talk about retiring, not talking about it.)
Another thing that makes it hard to talk about retirement is the awareness that when we go, the place we occupied is likely to go too. All right, if the Shakespearean on your campus departs, there will have to be another Shakespearean to step up into the role. But the less popular your field, or the more individual and experimental your way of doing scholarship, the less likely it is that your career will be prologue to another person’s comparable career. For people who spend a lot of time in the future (planning classes, writing books that somebody someday is supposed to read, wondering where the discipline is going), this is painful to contemplate, and I suspect that some of us who are old enough and wealthy enough to retire without disadvantage stay on because that’s the only guarantee that Etruscan philology or whatever will go on being taught.
Conversely, one of the powerful encouragements to pass the baton is the idea that somebody will be there to pick it up. And that idea is poorly supported just now. (I know why; you don’t have to tell me.)
Watching my own students throw themselves against the implacably locked door of the job market year after year, I wonder whether a collective agreement among senior faculty to move on, conditioned on an understanding that something tantamount to “replacement” will occur thereafter, wouldn’t moderate some of the pain and frustration. But that’s asking for the economically impossible: a future engagement on the part of a disaggregated (and internally competitive) group of employers to do something on behalf of people who, by the act of asking for this concession, are giving up whatever leverage they had. So we’re left with short-term calculations and actuarial endpoints.
D’Iorio, Mauss, Jousse, Leroi-Gourhan, and all the semioticians out there, stand back! Here’s the short list of topical turn signals to practice assiduously:
Ernest Renan (whose enormous body of work stands heaped in front of your correspondent) was “one of the few nineteenth-century writers to think on a planetary scale,” observes Simone Fraisse (“Péguy et Renan,” Revue d’Histoire littéraire de la France 73, 264–280, at 269). But planetary scale and prophetic futurity do not equal being a nice guy. Some excerpts from Renan’s Dialogues et fragments philosophiques (1876), with translation below:
There was an old man of L. A.
Whose motto was A = ~A.
When they called him contrary,
He cried: “Corollary!”
That intrinsic old man of L. A.
Jean Métellus has died. Better to put it that way than “is dead,” because he was the sort of person who did things in the active voice. Though I am sure he didn’t want to, he has gone into the great night after a sojourn among us that began in 1937. He was a year younger than my father and three years older than my mother. I felt for him the filial respect of a reader and the conditional identification of a translator (for I did translate him, but one of my reasons for doing so is that I felt a strong difference between his literary personality and my own). I am reminded of how he signed his letters:
Indéfectiblement: what’s that in English? An indéfectible is someone who never gives up, never deserts. And that he was. Convinced that the wretched of the earth were never going to get a fair deal unless they stood up and risked death to demand it, writing his poetry every morning under a big bright picture of Toussaint Louverture, he was indeed indéfectible. A real lefty. Not a deserter. Unshakeable.
Le Monde has a short bio here. “Jean Métellus, 1937-2014, a figure of the Haitian intellectual scene.” I’ll have more to say about it later, but what does it mean to call a man “a figure of the Haitian intellectual scene” when he’s been living in France for more than fifty years? Headlines are rarely written by the journalist who wrote the corresponding article, but I must say that this way of putting things rather confirms than denies the accusation made by the great poet that “French racism is if anything more severe in 2014 than in 1959” (see the article). Métellus? Black man, therefore figure of Haitian intellectual scene, therefore not our problem. Meanwhile, much more importantly, Philippe Sollers has changed his mind about something…
And further Le Monde, God love them, resorts to an esoteric vocabulary when describing Métellus’s book of odes to the heroes of black resistance to racism (e.g.: Rosa Parks, M. L. King Jr., Steve Biko, Nelson Mandela) as being touched occasionally “by angelism or Manicheanism.” That is? Decoded: Métellus occasionally simplified the record, making these resisters unambiguously good and the people against whom they struggled unambiguously bad. A grievous sin against subtlety… But in order to keep it all in the register of polite disagreement, Le Monde uses terms that will be understood by only a few. Is the French paper of record afraid of the Front National, so that they can’t call racism by its name any more? Looking back over Métellus’s broadsides against white privilege, I see the simplification as understandable and not the last word in a competition that has generations yet to run. I had no problem sympathizing with his sympathy. He hated to see good people condemned to live rotten lives because it was more convenient to someone else that it should be so.
Unshakeably. That should sum it up, Monsieur le Docteur Métellus, poète et citoyen éternel de Jacmel. Cher ami, porte-parole extraordinaire des Muses haïtiennes, général de l’armée des mots, may you have readers as long as words are read, and may they recognize your zeal for justice, which spoke through your personas of neurologist and poet. Ainsi soit-il.
(Round table talk at the ACLA/MLA panel of the same name, Chicago, Jan. 11, 2014.)
I don’t know about you, but I’ve had a busy ten years. I approach today’s assignment in the spirit of the chastened soothsayer. Since I’m on record as having said a few things, in my section of the ACLA’s 2004 State of the Discipline report, about what Comparative Literature is, has been, and should be, I might now look back at which of those statements held up and which ones were dead in the water.
(A response to papers by Andrew Piper, on “Wertherity,” and Hoyt Long and Richard So, on literary networks and the English-language haiku, MLA annual meeting, Chicago, Jan. 10, 2014.)
For lack of time, I will jump into the normative right away. Computers should augment human inquiry, not replace it: “augment,” that is, in a specific sense that I will try to elaborate. The point of using computers should not be to do the same sort of thing that scholars have been doing for a long time with file cards and dictionaries, only faster and larger. Let’s not frame the work of the Humanities in such a way that it throws the humans out on the street. My motive is more than protectionism. I hate wasted effort, even computer effort. Let us honor the tools by giving them work that only they—in combination with us—can do.
Cooper Union – as a unique institution of higher education; as a legacy of visionary founder Peter Cooper; as a dream – lives or dies today. Just so you know.
Free is Not for Nothing – The Vote to Save Cooper Union by alumni trustee Kevin Slavin:
If the vote goes one way, a new, lean, careful Cooper Union will tiptoe forward, tuition-free. It will require equal parts deep sacrifice, wild ambition, and straightforward pragmatism. And it will uphold a 150+ year tradition of free undergraduate education.
If it goes the other way, all of that will disappear. Not just the free tuition, but everything that was built on it. In its place we’ll find a tragic fraud. A joke. A zombie.
Here’s some background from Felix Salmon, who has been drawing attention to the foresight of Cooper’s vision and the perfidy of recent Presidents and Boards.
The Cooper Union story recapitulates, in miniature, a shockingly large proportion of the various aspects of the global war on public-serving higher education. Here’s to hoping the tide is turning, today.
The normal way to understand the Anthropocene is as a historical period, defined more or less as the era when human beings acquire the capacity to affect the ecology of the entire planet, thereby opening the door to mass extinction, disastrous climate change, and, at the limit, the disappearance of the species. Generally people want to date it to the beginning of the Industrial Revolution, though you see arguments for dating it to the beginning of agriculture. Since the challenge we are facing collectively at the moment (and for the next centuries) is the immediate result of the dramatic expansion in carbon-based energy (oil, gas) use that comes from the Industrial Revolution, my impression is that most people are inclined towards that date.
But that’s just because the scope of this environmental event is in fact the entire planet Earth. I want to suggest that we become aware of/wish to designate the Anthropocene at this crucial moment precisely because of that scope, that because like the various other -cenes (the Pleistocene, the Holocene) this era involves ecological/geological/meteorological activity that is planet-wide, we feel comfortable declaring it to be “epoch”-worthy. That is, the epoch (that which can be designated by the -cene, that which is a scene for the -cene) is partially a temporal metaphor for spatial scale.
This is true of all world-concepts, and trivial. But now what we can do is to scale down the Anthropocene from the world to a world, and recognize that, unlike the Pleistocene or Holocene, we can use the concept to refer to any “world” (that is, any relatively closed totality, relatively closed because like our totality it can be potentially escaped from, in our case via rocket ships/space colonization) that is capable of producing self-extinction through the manipulation of its environment.
In that case there have been other Anthropocenes, some of them, perhaps, not even human. Any virus that kills its host too rapidly–before the host has a chance to infect others–is Anthropocenic in this sense. We might also think of the series of extinctions on Easter Island as one example of a quasi-Anthropocene (resolved by the arrival of European explorers). Or, an extreme and fanciful case, of a literary character like Raskolnikov.
I am not sure that it is politically useful to think of the Anthropocene this way — it may be that there’s more traction in getting people to think about how to live, or die, in it if they can have the narcissistic pleasure of imagining themselves to be historically unique. But it may also be that philosophers and other humanists could benefit from a plural theory, a theory of Anthropocenes, both as a structure for comparative analysis and as a humbling reminder that self-destruction, when it happens, is usually a matter of degrees of difference, not kinds, from ordinary life.
One of the things I want to do sometimes is to repost stuff from Printculture’s archives, because it tends to be hard to find. Here is a series of discussions on the topic of something I called “leverage,” by which I meant, as Mark McGurl pointed out in the comments, “critical distance.” The conversation that ensues sees the two of us thinking through and explaining some of the things that motivated The Program Era and The Hypothetical Mandarin. The entire series of posts (combined below) dates from October 2007. I will also say that one of the weird things about rereading this stuff is realizing how old some of my ideas are; I swear I’ve repeated some of the things I say below in the last couple of years as though they’d just occurred to me.
Leverage as a function of critical capability and interest
It occurred to me the other day — and in fact I may have already bored one or two Printculture readers with this — that it would be useful to think about why so much academic work on contemporary material isn’t very good. But perhaps the premises bear repeating: (1) a higher percentage of literary critical or cultural analysis of contemporary material — fiction, poetry, film, the culture in general — says, by my standards, completely predictable things than does work on material removed from us in time, and (2) such work is therefore no good. I have no data to back up the first premise; it’s merely an impression. For the movement from the first premise to the second, I rely on my belief that literary critical analysis should, in general, aim to teach us things we don’t already know about the world.
The question I’m setting out to answer here is why this is true. Why, that is, does work on contemporary material so often simply tell me what I (feel like I) already know?
The answer has to do, I think, with leverage. By leverage I mean the difference I am able to generate between myself (what I know or see) and what X knows or sees on its own. My ability to tell you something about X that X doesn’t already know about itself, and isn’t obviously saying to anyone who’s paying attention, depends to a very large extent on that difference.
Nice article from NY Mag on the psychological and physiological adjustments that come with having lost large amounts of weight.
Cultural fantasies of weight loss present a tidy, attractive proposition – lose weight, gain self-acceptance – without addressing the whole truth: that body image post-weight loss is often quite complicated. Perhaps that helps explain why the rate of recidivism among people who have lost significant amounts of weight is shockingly high – by some estimates, more than 90 percent of people who lose a lot of weight will gain it back. Of course, there are lots of other reasons: genetic predisposition towards obesity, for one. For another, someone who’s lost 100 pounds to get to 140 pounds will need to work harder – including eating much less each day – to maintain that weight than someone who’s been at it her entire life. (Tara Parker-Pope’s excellent piece “The Fat Trap” explains these physiological factors in much greater detail.) But what about the psychological? Who would be surprised if a person – contending with both a new body that looks different from the one she feels she was promised, and the loneliness of feeling there’s no way to express that disappointment – returned to the familiar comfort of overeating? At least its effects are predictable.
Two thoughts: first, that the last bit points toward a more general understanding of how psychologically difficult deprivation is, and of how things like being fat or being poor change the wiring of our bodies and our brains. Beginning from that understanding makes compassion for the choices others make far easier (and moralizing judgment oriented around disgust more difficult).
Second, I wonder if anyone’s ever done a comparative analysis of the disappointment one feels after losing a great deal of weight and the disappointment of the post-pregnancy/childbirth body. Both are situations in which one does not return (unless one is a certain sort of celebrity, I suppose) to the status quo ante; in the case of weight loss this is exacerbated, or made more weird, by the fact that the new status quo may never have been ante. I was 6’1″, 215 pounds at age 16; 6’3″, 240 at 18; and 6’3″, 278 in summer 2002. Since 2007 I’ve bounced between 190 and 200 (I was at 184 at one point, but never again), and I’m still not used to it.
So a few months ago I predicted that one day actors would be hired by firms like Coursera to teach MOOCs (because once you don’t have to respond to student questions live, who cares who reads from the script? Might as well be a hottie…).
And now one of the leading MOOC firms, EdX, is considering hiring Matt Damon to teach a course.
Casting Damon in a MOOC is just an idea, for now: In meetings, officials have proposed trying one run of a course with someone like Damon, to see how it goes. But even to consider swapping in a star actor for a professor reveals how much these free online courses are becoming major media productions—ones that may radically change the traditional role of professors.
One for-profit MOOC producer, Udacity, already brings in camera-friendly staff members to appear with professors in lecture videos. One example is an introduction to psychology course developed earlier this year in partnership with San Jose State University. It had three instructors: Gregory J. Feist, an associate professor of psychology at San Jose State University, who has been teaching for more than 25 years and who wrote a popular textbook on the subject; Susan Snycerski, a lecturer at the university who has taught for 15 years; and Lauren Castellano, a Udacity employee who recently finished a master’s in psychology from the university, advised by Feist.
One sometimes hears objections to Halloween from people who are afraid it’s an anti-Christian holiday, demonic, Satanic, hedonistic, or pre-Christian. It might be any of the above (it’s certainly pre-Christian, in its guise as Samhain). But I wonder about the implied notion that a non-Christian holiday might be somehow morally deficient, that it would be virtuous to abstain from costumes, trick-or-treating, or jack-o’-lanterns.
After all, what passes for a Christian holiday? The Día de los Muertos is an ingenious cultural hybrid (All Saints’ Day plus the lingering memory of the Aztec gods). So is Christmas/Yule. Mardi Gras falls somewhere between Lupercalia and the Greater Dionysia. Isn’t Easter named after Ishtar? If we go on at this rate discovering predecessors, we won’t have any Christian festivals left.
And let’s look at the content of the holiday. On Christmas, there is feasting, singing, and giving of gifts. Some of us make charitable donations (especially Americans, when the first of January looms and the IRS comes to mind). But the greater part of the gifts, and certainly the massive advertising, decorating, carol-singing, red-nosed public side of the thing, has to do with circulating good stuff among friends and family– which is not far from benefiting oneself. If “it is more blessed to give than to receive,” most Christmas gifts qualify only under a technicality.
On Halloween, we throw our doors open to all strangers, the stranger the strangers the better: space aliens, pirates, warrior princesses, dinosaurs, zombies, Elvises, Time Lords, Scotsmen, Yellow Submarines. And into their outstretched baskets we pour candy, not caring when they eat it or whether they got more than they deserved. Sure, there’s the threat of a trick in the absence of a treat, and it may be that atavistically we are bribing the ghosts and spirits to stay away from our homesteads for another year. But it’s still an act of radical hospitality that sweeps the candy away from, not around in, the domestic circle. So I say: Cast thy Twix upon the waters! Go thou and do likewise!
I’ll be using this title to compile some reflections on the NSA spying scandals. Much of what I say will be obvious. If you want to have friends, don’t treat them like enemies. Don’t spy on them and don’t lie to them about what you did if there is the remotest chance of being found out.
I can imagine that there would be a justification for listening in on suspects already identified as “of interest” in a developing case– this is just acting on what is known as “probable cause,” and although not entirely charming, passes the prudence test (does the behavior result in tangibly reduced risk for the people you are responsible for and care about? If yes, check the box and go ahead, though with misgivings). But it’s in the nature of an uncontrolled, secret program to expand as far as the money will take it (and there’s always more money), until the spying apparatus is pretty soon listening in on everybody who is connected to everybody who might be of interest in a potential case that might eventually surface– which means everybody. It’s quite a thing, mathematically speaking, to stay on top of the trillions of relationships that obtain among a billion or so people. But I’m not proud that we’re accomplishing this triumph; it would have been a better cause for pride if we left more people alone.
Expanding the definition of “national security” to the point that it includes eavesdropping on all parties to any negotiation in which we and our allies are involved is simply a sign of pathology. People who are deeply mentally sick alienate their friends pretty quickly. Put otherwise, it is hard to be a friend of a person who is paranoid and prone to violent outbursts and claims to be the world savior– and who taps your phone and email. Such a person, if you have the poor luck to be stuck on an island with him or her, I would style the Lord of the Files.
And that is the person that the US has become. Blame is raining down on Obama, and the question (for those who care) is, as it was for Nixon and Reagan, “what did he know and when did he know it?” According to the White House press secretary, he didn’t know anything. But we knew that was going to be the answer. The fact that repugnant opposition politicians, who would do ten times worse if they had the chance, are jumping in to score points shouldn’t dissuade us from asking the question. But I don’t think it matters so much what Barack H. Obama, Esq., knew. The presidency is a legal person, like a king under the old “two-bodies” theory, and in the transcendent sense of that personality the presidency knows, authorizes, and is responsible for a hell of a lot, and it has been so since the XYZ affair was rattling the ruffled cuffs of the young Republic.
What was Obama’s reaction when he learned about these nifty intercepts that gave him a preview of what our friends were thinking (but not, apparently, what the Russians or the Chinese were thinking)? What kind of courage would it have taken for him to push it all aside and say to the Director of National Intelligence, “Cut those wires and fire those spooks. I don’t need to tap the phones of our most trusted allies. We can compete in the big old world on a fair-and-square basis”? Did he think, “Well, this is not totally kosher, but I didn’t actually order the surveillance, and it might come in handy some day after all.” Did he have an impulse to reject the Faustian package, only to receive this remonstrance from the spook in charge: “Sir, this intelligence could be of national importance. It could save lives. It could make the difference between Boeing winning a contract and Airbus winning it. Your personal moral quibbles must cede to the national interest”? I don’t know. I suspect presidents don’t like to be called choirboys.
But a salient piece of Washington folklore was uttered by a member of the policy circle most likely to have the task of making up justifications for the US President being the Lord of the Files. Mike Rogers, a Republican member of Congress and head of the House Intelligence committee, offered, according to the Guardian, the following interpretation of twentieth-century history:
Going further, Rogers claimed that the emergence of fascism in Europe in the early 20th century could be partly explained by a conscious decision by the US not to monitor its allies.
“We said: ‘We’re not going to do any kinds of those things, that would not be appropriate,’” he said. “Look what happened in the 30s: the rise of fascism, the rise of communism, the rise of imperialism. We didn’t see any of it. And it resulted in the deaths of tens of millions of people.”
I don’t think Congressman Rogers would pass a high school history test. What dates are associated with (a) the rise of fascism? (b) the rise of communism? (c) the rise of imperialism? Half-credit for (a), Mr. Rogers (the conspicuous events of the 1930s do include Hitler’s election in 1933, but fascism got its start in Italy in 1922); zero credit for your other two answers.
However, what he is parroting is a familiar line based on at least a fragment of fact. In the first decades of the twentieth century the US had a Black Chamber, or cryptanalysis bureau, based, as much of our present spying activity is, in a commercial telecommunications node (in that instance, the Chamber was hidden in a firm that compiled telegraphic codebooks). In 1929, Henry Stimson, then Secretary of State, is said to have declared, “Gentlemen do not read each other’s mail,” and disbanded the Black Chamber. It was pulled back together in a hurry in response to the next war (I am getting this from David Kahn’s 1967 book The Codebreakers, a favorite of my father’s).
Now perhaps Stimson’s code of ethics did not prepare the US for conditions as ungentlemanly as those endured by the combatants of WWII. But Mike Rogers, twisting the anecdote, is demonstrating a degree of paranoia and self-centeredness that is quite magnificent, even for the spoiled children of our elected assemblies. He is saying that the fact that we were (allegedly) not spying on the Germans, the Russians, the British, and the rest accounts for “the rise of fascism, the rise of communism, the rise of imperialism… the deaths of tens of millions of people.” Astonishing! Where is a global policeman when you need one? Why, indeed, did the US not steam forth in 1931 and whack the Japanese who were overrunning China, invade Germany in 1933 and slap them around for their poor judgment in electing Hitler, parachute into Windsor Castle in 1757 and make old George say uncle and give Bengal back to its Nawab? Why can’t we poke our NSA earbuds into every wire and satellite and issue executive orders about every damn thing we please, lest somebody, somewhere, get up to some evil? Foreigners are all right, I guess, if carefully observed and called to order at the slightest sign of going wrong.
…with lots of ideas about the future of online education.
I suppose by “strange” I mean that his politics (if you look at his blog) operate from a position that imagines itself as entirely apolitical but is nonetheless quite interested in politics. So it produces frequent pox-on-both-houses language, but also pragmatic suggestions for various kinds of things (including online ed, in the link above) with no real concern for what I think of as the “normal” language of American politics (involving concepts like the moral, the just, and so on).
And then you ask yourself — well, who would Dilbert vote for? — and you realize that Adams’s politics are perfectly in tune with the strip, because the answer is totally unknowable. Even the grounds on which Dilbert might vote for someone are unknowable.
At least according to this reading of a Chronicle story by Chris Newfield. Short version: both faculty and university presidents agree that MOOCs will have a negative impact on higher ed, and this opinion is held by people who nonetheless seem open to technological and other kinds of innovation in teaching (so it’s not just a thoughtless resistance to change).
And yet, the problem is that for about 18 months state legislatures were allowed to pretend (or pretended to pretend) that the MOOC would allow for further cutting of state support for higher education…
In other words, when universities lose MOOCs as a budget solution, they lose the main source of hope that state politicians had for a free fix of the college cost problem for a less affluent, not wonderfully educated younger generation. MOOCs were the austerity solution to the mass quality problem. Without them, tempers will flare, fingers will point, and funding will not be restored. In the meantime, faculty are going to have to lead higher ed innovation anyway, and the good news is that post-MOOC-as-cure-all faculty don’t need to focus on the technology to the exclusion of the “human side” of teaching and learning.
Now that the MOOC seems to be a non-viable solution, we can look forward to the rapid restoration of that missing funding.
… or at least thinking about it.
Those of you who know me and my family know that our son, Jules, was born with a very rare genetic disability (known as 9p deletion syndrome). He’s fine, at least medically, though it was no fun for the first three weeks of his life and has on various occasions been a little less fun than it otherwise might have been (cleft palate surgery, some ongoing concerns, now faded, about his heart). Cognitively, we know less about the future than we might, partly because the syndrome is so rare (maybe 150 cases in the United States), partly because it produces such a wide range of outcomes, and partly because the treatment of the disabled has changed so radically in the United States in the last 60 years that evidence gathered on the basis of a 30-, 40-, or 50-year-old 9p deletion person does you little to no good, since that person lived through a radically different set of approaches to disability than will any child born ten or twenty or thirty years later.
I know less than I should about how disabled people are treated in the United States. More than I used to know, of course, before Jules was born, before he spent 2.5 of his first 3 years in an amazing day care facility, in which he was fully integrated with the other kids (a process known as “mainstreaming,” now the normal thing to do in the United States), and to which state-provided therapists (occupational, physical, speech, developmental) showed up for 7 hours a week to help Jules catch up with his peers.
The idea behind mainstreaming and the therapy (which is known generally as “early intervention”) is simple and twofold: first, that the earlier you can work with disabled (or even potentially disabled) children, the better you can help them reach their maximum genetic potential (I know that’s a fuzzy concept, but let’s use it loosely here to express something like the maximal cognitive capacity someone can reach, all other things being equal); and, second, that surrounding (potentially) disabled children with other children who are developmentally “ahead” of them actually encourages the (potentially) disabled children to rise to the level of their peers. In this mainstreaming takes advantage of two well-established developmental facts: that early and frequent intervention produces better developmental outcomes, and that peer effects are powerful social, physical, and cognitive motivators (for good and ill–just ask someone who chooses to live in a frat house).
So by the summer of 2013 Jules barely qualified to continue in the state-provided program that had given him the 7 hours of extra attention per week he had been getting since he was four months old. He had made amazing progress, and was catching up to his peers on a number of levels that the state measures to determine eligibility for its programs (gross motor, fine motor, speech, social/psychological maturity, etc.). But we were thrilled that he qualified, because we knew that the more help he got, the better off he’d be in the long run. (None of this stuff means he’ll stay caught up with his peers, which is why this early intervention is so important.)
And then we decided to move to Germany for the academic year.
“All 25,000 candidates fail Liberia university entrance test” (Vanguard, Lagos)
That ought to boost their ranking.
The art of showing you pictures of babies killed in bombardments, so that the public will support another bombardment that will kill more babies whose pictures you won’t be shown.
I apologize for the cynicism, but I can’t think of an intervention, by the US or anyone else, since 1945 that did what it was supposedly going to do. Nor am I a fan of sitting by and watching when horrors are going on. There isn’t a good way to take the weapons away from the bullies without (a) triggering the deaths of thousands more people on both sides, and (b) rolling out a carpet for the very things the US professes to wish did not exist (civil war, jihadist governments, regional power-projection of Iran, China, you name it).
Want to support something? Support medical assistance to the population (Médecins du Monde is deeply engaged).
Representative discussion among people most of whom I would not dismiss as crazy or ignorant, here. I’d like to know how this is going down across the breakfast tables of America now.