“My prof is so stupid”

Leslie Ward, via Wikimedia Commons

I’ve heard this said on my campus. Often by a student who is also making fundamental factual and grammatical errors in the process of an extended whine that, I can only assume, was prompted by a lower-than-expected grade.

I’ve also gathered, from students who ask me about grad school, that many assume becoming a professor is mostly a matter of going to school a lot and then answering job ads like any other job, and that more or less anyone who can stand going to school that long would have a decent chance.

That’s kind of true, but mostly really, really not true.

As I’ve written before, the Ph.D. degree — which is pretty much always required for a professor’s job — is not some kind of ultimate IQ test. It really requires more drive and motivation than anything else. But, at the same time, there’s intense competition at many stages to go from an undergraduate degree to a tenure-track job as a professor, so that while it’s certainly possible that your professor might be “stupid” (whatever that means exactly), it’s really unlikely your prof is just some random person pulled off the street who doesn’t know more than you do about his or her subject.

On the contrary, for admission to an MA/PhD program, there are hundreds of applicants and only a tiny handful of openings, so for starters the vast majority of people admitted into these programs went to the most competitive colleges in the world, earned top grades and test scores, and are recommended enthusiastically by Big Name professors. Then, in five to ten years of graduate study, these select few are put through an incredibly rigorous regimen, and close to half drop out before finishing. Those who do finish (after having taken extensive exams in all the fields they might teach, judged by the best people working in those fields, and having written a book-length research project which is approved by a committee of top people in their fields) then face an incredibly tough job market (right now it’s the toughest it’s been since the 1970s). You don’t find an academic job by looking through the ads in the newspaper of the city where you want to live. In my subfield this year, for example, there were five jobs in the WORLD. Five. And probably about a hundred people applying for them. All of whom have PhDs from top-ranked schools (Ivies, Chicago, Stanford, Berkeley, with few exceptions). Then, once in a position, those lucky few face rigorous reviews every few years to keep their position.

So that person at the front of the classroom who seems like an idiot to you — he or she had to go through some incredibly intense and competitive hoops just to get there, all after excelling to an extraordinary degree at the level of education you’re currently immersed in. That doesn’t mean that prof is perfect, and he or she may be so overwhelmed by the intense pressure of essentially holding two or three full-time jobs at the salary of half of one that you may not be seeing his or her best work. If your instructor is an adjunct, he or she may be commuting between several schools, cobbling together 5 or 6 courses at the same time to earn less than enough to pay the rent, with no benefits. Research-intensive universities tend to value a professor’s research agenda much more than teaching, so in those cases you might see someone who has never actually had any training or interest in teaching (but who is a top expert in their field). But that’s becoming rare even at the big research institutions.

On the whole, chances are you should open your mind a little bit to the possibility that this person might have something of value to teach you after all.


“Grades are so subjective”

Actually, they’re probably less subjective than you think. And to the degree that there is still some subjectivity, it probably works in your favor, not against you.

First, in many classes these days grades may be almost completely objective, as multiple-choice tests are sadly common in overcrowded, underfunded classrooms. History is one of the subjects least likely to rely on them at all, or at least not exclusively. Most of our assignments are written essays, or some other form of project that you may think is graded subjectively, because the professor reads your work and then slaps a grade on it, and you may have so little idea what happens between those two steps that it might as well be random.

It’s not random. (Well, everyone has heard an anecdote to the contrary, but those are mostly jokes, made in the throes of the abject torture profs actually go through when they grade.)

It’s increasingly common these days for essays to be graded according to a rubric. Rubrics break down an assignment into component parts, often attaching some point value to each part. These are mainly intended to communicate more clearly to the student what the instructor is looking for, and to show relative strengths and weaknesses in different areas of an assignment. But there’s still a judgment being made on how to assign points — a number — to something like your writing style, argument, factual accuracy, creativity, etc. None of these things can really be reduced entirely to a number, so there is a certain amount of arbitrariness involved. But not very much, because the instructor is grading your essay compared to others by students in the same class. The quality of your argument may not be fully represented by, say, the number 18 out of 20. But if the quality of argument across a class of 30 students ranges from 20 to 5, and you’re an 18, you know that you’re doing very well, significantly above the median, but not quite at the top of the class. That’s real, though limited, information.

How does an instructor judge your work? How can she be sure yours ranks at 18 — that those handful of students who got 19 or 20 definitely did turn in work measurably more successful than yours? For one thing, things that may still feel really amorphous to you, like what an “arguable thesis topic” looks like or the level of specificity in your language choices, are not at all amorphous to a professor who has been writing and reading these kinds of statements literally many thousands of times, day in and day out, for years on end. Examples:

“The Bolsheviks won the Civil War because of their geo-strategic advantages” is an arguable topic, and therefore acceptable.

(That thesis statement should be followed by a detailed explanation of the specific geo-strategic advantages that the Bolsheviks did have, and that the Whites did not have — see? Arguable.)

“Stalin’s purges were caused by his lust for power” is not arguable, and therefore not an acceptable thesis statement.

(What would you follow this statement with? A series of repetitive statements that all essentially say, “Stalin was a bad man. Real bad.” Believe me, I’ve seen it. But that’s not evidence supporting the thesis — it’s a circular restatement of the thesis over and over. Because that’s all you can do with something as amorphous as “Stalin was bad because Stalin was bad” — it’s not arguable.)

[Note that whether or not your prof agrees with your thesis is totally irrelevant here — your prof is reading THOUSANDS of pages all saying basically the same things. She really just doesn’t care either way what your thesis is, only that it is actually a workable thesis, demonstrating that you understand fundamental concepts. She wants to be done grading already.]

How do we distinguish between “specific” language choices and vague ones?

It’s easy to see that the sentence “Lenin was a ruthless leader” is vague when another essay states, “Lenin’s NEP was an ideological compromise that divided the Party and made Stalin’s manipulation of factions possible.” The second sentence is not only better writing, and more convincing as part of an argument, but also tells your prof that you actually know what happened and how and why it mattered.

Do you see how the difference between those two statements is both obvious, and objective? Multiply that by a million little judgments of exactly the same kind, and that’s how we can grade fairly.

Also, remember that your grade is not an absolute value that sums up everything about your work (let alone about you — this is not you as a person under judgment, but the words on a page that you turned in). It simply ranks the relative quality of your work compared to that of the other students on a few basic criteria that the instructor deems most significant (hopefully, your instructor told you what these criteria are — if not, ask).

If you read the same set of papers from your class that your professor got, you too would be able to roughly rank them in terms of clarity, accuracy, and how convincing they were as arguments. Most likely, your ranking would actually come pretty close to that of your professor (I say this because I often have students grade themselves and their peers in exercises, and they’re always right on in their assessments). The professor’s experience allows her to do this much more quickly than you probably would, and her expertise allows her to catch the errors. But otherwise, grading is not all that mysterious, and most people would do it in a very similar way in most cases.

Each professor reading each essay does ultimately make some degree of holistic assessment (“this essay is cogent and careful, but doesn’t go out of the box; that one is creative but doesn’t fully support its claims; this other one blows my mind; and this one here makes me wonder if the student even knows what course they’re in”). But when multiple instructors read the same essays, they nearly always end up with very similar assessments (I’ve seen this from experience as a TA in large courses where multiple people do read the same essays, and I’ve seen studies concluding the same thing).

This general agreement on relative success comes from three things: (1) the more specifically one defines what one is looking for in an essay, the easier it is to see where those goals are reached and where they aren’t; (2) experience reading lots and lots of essays makes these things much simpler to spot than seems possible to the novice who is writing this kind of essay for the first time; and (3) the differences usually are pretty stark — in an average class of 30 with a grade spread from A to F, the difference between A work, C work, and F work is blindingly obvious. The tricky part is distinguishing between, say, a B and a B+. Those judgments are very fine, and it is true that two experienced readers may disagree at that level. Luckily, those kinds of fine distinctions aren’t really significant in the long run.

(In my own case, I tend to use pluses and minuses as signals — a B+ tells that student that the essay is not A work, but it’s coming close, and would need only a small amount of revision to get there. On the other hand, a B- tells the student that while their essay was essentially accurate and complete and therefore belongs in the B category, it just barely reached that level in some respects, so that the student knows s/he would need to revise quite thoroughly to reach A-level work.)

Finally, there is the issue of bias. Students talk a lot (or so I overhear on campus) about this or that prof having favorites, or “not liking” them. The first point to make here is that professors are insanely busy people who usually see hundreds of students every semester. Honestly, most of us don’t have time to form actual opinions about individual students. But of course it’s true that a student who comes frequently to office hours and turns in excellent work is going to build a good reputation with faculty, and students who don’t show up to class, turn in late and/or shoddy work or don’t turn in work at all, and then beg for a higher grade because they “need” it are going to lose the respect of faculty. But either way, that reputation is far less likely to be reflected in grades than students think (it does enormously affect things like recommendation letters and how willing a professor is to spend time chatting and giving advice — which ultimately may matter more). Simply because grades are much less subjective than students realize, there’s really no need and little opportunity to manipulate grades in this way. Even if we assume a truly ill-willed instructor who has the time to bother artificially inflating some grades and deflating others, the chances are that sooner or later complaints about this practice will accrue with the department chair and deans, and eventually there will be consequences for the faculty member, which would discourage those few who would ever bother with such asinine and pointless manipulation anyway.

But there is one way in which the relatively more subjective process of grading an essay is different from the wholly objective process of grading a multiple-choice exam, and that works entirely in favor of the student, in my experience.

I experimented briefly with multiple-choice exams once, in a class in which students also did a lot of writing. My notion was that since students had mostly been assessed by multiple-choice in the past (I did a survey to confirm this), I could eliminate the anxiety involved in learning a new format of demonstrating their knowledge, and just find out what they actually knew. Then in separate written essays I could focus more on teaching them how to write well. As it turned out, the entirely objective grades from the exams were abysmal, far lower than I usually see on essay exams or written short-answer exam questions also aimed at testing content knowledge. I did some surveying to find out why, and while I can’t be sure, the problem seems to have been a combination of two things. First, because there was less anxiety about a multiple-choice exam, students studied less. Second, and most relevant here, when I grade an essay, I am more flexible in how I award credit to the student. For example, if the student answers a multiple-choice question and gets it wrong, it’s wrong, period. But in an essay on the same subject, it may be clear that the student is confused about one factual detail, but does fundamentally understand the concepts under discussion, and has analyzed the material well. In that case I’ll dock a small point value for the one bit of confusion, but give credit for the general understanding. “Objective” assessment does not give me the leeway to do that.


ASEEES and AHA

In the last few months I’ve enjoyed the rich alphabet soup of attending ASEEES and AHA in NOLA. Say what? I mean I attended the annual conference of Slavicists and Eastern Europeanists and that of the American Historical Association, which both happened to be held this year in New Orleans, LA. If you’re a new graduate student or enthusiastic undergrad considering a Ph.D. in history and wondering whether you should try to make it to a conference, GO. Do it. But you might want to read this first to know what to expect.

What’s a conference like? You may picture the scene from The Fugitive when the US Marshals track down Harrison Ford’s friend at a conference, where a bunch of really boring-looking people in tweedy jackets sit around and talk about papers with incomprehensible titles in a fancy hotel. And for once, Hollywood pretty much got it right. Conferences are the ultimate insider’s gathering: no one from the outside of these little worlds would ever want to go to one of these things, I imagine. But they can actually be rather a lot of fun, from the right point of view. If you’re working on becoming an insider, conferences are a great introduction — they are nothing more or less than the physical manifestation of “the field” or “the discipline.”

The annual conference for Slavists (which used to be more entertainingly called AAASS, but was recently changed to ASEEES, the pronunciation of which no one can agree on) brings together people from nearly every phase of my life, so it’s a strange and interesting social occasion. There are people I took Russian language classes with as an undergraduate, people I know because we once rented the same room in St. Petersburg, people I went to grad school with, people I taught, people who taught me, and people I don’t know but whose work I’ve long admired from afar. So a big part of that conference is reconnecting with people from all these spheres — most of us never see each other anywhere else.

But the main purpose of the conference, of any academic conference, is to share new research. Conferences, perhaps more than anything else we do, are really at the core of our jobs as researchers, which is funny since most people if asked will readily whine about the poor quality or general boringness of conference panels (self definitely included).

In theory, all papers presented at conferences are works-in-progress: new research that is presented among colleagues (hence the conference’s definition as an insider’s affair) for comment and criticism. The reality is that one has to propose papers and panels almost a year in advance, so that one often has to basically guess what one’s work will look like a year in the future. Then, as that year passes all too quickly with a million other deadlines and the overwhelming time commitments of teaching, all of a sudden the deadline for the paper comes up and one often slaps together something either too rough or too familiar — something not as new as it ought to be, because the newer work isn’t ready yet.

Also, in theory, panels bring together papers on related themes, which creates juxtapositions and comparisons that breed more ideas. In reality, panels too are put together in a rather hodge-podge way, and since panelists don’t often communicate much before showing up at the conference, many panels feel random, and you don’t get that synergy of ideas at all. Often panels are idealistically planned to be interdisciplinary — at the Slavic conference, people often try to bring together historians and literature specialists, political scientists and art historians. The idea is that by talking to each other, we’ll each broaden and enrich our approaches. Sometimes this happens, but sadly I find it’s more common that these kinds of panels just bring into stark relief the fact that our disciplines’ differences in rules of evidence, jargon, and ways of framing questions are almost impenetrable.

The theory also goes that the audience is as important as the presenters, and that the conversation should really include everyone — when we write our papers and present them, we all hope for constructive, thoughtful comments that will help us improve. As audience members, we all hope to hear rich, engaging, well-presented papers that will provoke excited responses. But then there’s the counter-stereotype, of presenters droning through turgid papers while audience members ramble “questions” that just happen to really be all about their own work, not what was presented. The reality generally runs the gamut from one stereotype to the other, covering everything in between.

My experience of conferences, as a junior scholar, has been that they are a series of slightly disillusioning or uninspiring talks broken up by moments of incredible, sometimes life-changing excitement and inspiration that make it all worthwhile. For example, I found out about the documents that would become my first book — and met someone who became a good friend — by attending a panel on regional history at a AAASS conference as a mid-stage graduate student. The panel was great, but what really mattered was talking privately to each panelist afterward, to ask about their archival research and whether they had come across anything that might help me in my project. One of the panelists knew of amazing materials that were perfect for me, and — poof — my life changed. The topic of my new book also came out of a similar chat with another colleague, at that same conference. Many important insights about my work were born in a conversation here or there — often not in direct comments on a paper I presented, but through indirect conversations at other panels, often totally unrelated ones, or chats over lunch. I think the real work and value of these conferences is that they bring all these people to one physical space and throw them together, which creates the circumstances in which these kinds of unpredictable synergies can happen.

Conferences can also be great as a kind of giant snapshot of the state of the field. Usually they have some kind of theme — the theme for last year’s ASEEES was “borders and peripheries” and next year it’s revolution — but I find the official theme is often kind of like a parlor game, as everyone tries to shoehorn it into the topic they want to present on, no matter how awkward the marriage. This can lead to amusing paper titles (though I won’t call anyone out publicly, it’s worth browsing the program for a giggle). What’s more interesting are the patterns that turn up by accident — it seems like the last couple of years have thrown up a lot of papers on childhood and education, and religion seems to be popping up more than it used to. A few years ago, it was all empire, all the time in the Slavic and EE world. That’s still there, but less heavily than before. These kinds of patterns do give you a sense of where “the field” is heading in a way nothing else can.

The other factor worth mentioning about conferences is location. Each big annual conference is held in a different city every year, though my conferences tend to be held most often in the northeastern cities: Boston, NYC, DC, Philly, Pittsburgh. For those of us who live out here, this is convenient. The conferences are cheaper (faculty with full-time positions are usually at least partially reimbursed for these quite expensive events, but note that many, perhaps most, attendees are paying at least partially out of their own pockets), and in those cities in the winter months, there’s often little reason to leave the conference hotel, which keeps the panels well-attended.

This year was very different, though, and the effect was noticeable on attendance at panels at both ASEEES and AHA: in New Orleans, everybody was playing hooky at least some of the time to go out and explore the French Quarter. It may have been a little depressing to see the mostly-empty rooms, but speaking for my own panel, which was barely outnumbered by its audience, we may have inadvertently benefited from low attendance. At any rate, it was the most interesting and fruitful question and comment session I’ve had at a panel where I presented. All the attendees were there listening instead of out eating beignets because they had an intense interest in our topic. And because there were so few of us, the barrier between panel and audience really broke down, and we actually had a real conversation.

When I played hooky myself, I not only enjoyed some fabulous food, but I made some of those great professional contacts that conferences are for, which may not have happened in the hallways between panels. What started as a cup of coffee with an old friend grew to a three-hour, multi-course lunch where I met several new people whose work interests me in completely unexpected ways. And after my own panel, several of us moved on to a lunch where our conversation continued less formally, but just as productively.

It was a side-bonus that I was also able to get acquainted with one of the most extraordinary cities I’ve ever been to. New Orleans struck me as a rather odd mashup of Vegas, the deep South, and (inexplicably) a little bit of Budapest.

Please excuse the crappy cell-phone photos.

I could — and did — spend hours just walking around the French Quarter, soaking up the unusual colors, shapes, sounds and smells.

I think it was the courtyards that most reminded me of Budapest, along with the cafe culture — especially the Palace Cafe, with its central curved staircase, railed interior balconies, and mirrored walls.

One of the notable things to come out of both ASEEES and AHA this year was how very little Slavists and historians tweeted or blogged the conference, a practice which is increasingly common in other disciplinary conferences, notably the MLA. Of course, historians are historians because we like old things, and we have always been famously technologically backward.

The tiny little pharmacy museum on Chartres St. was apparently a hot spot for conference visitors, and certainly one of my favorite finds of the trip.

What can I say, we like old stuff.

I was ridiculously excited about the blue glasses — it was a cliche that the Russian women “nihilists” (radicals) of the 1870s wore blue glasses, but I could never really picture what that meant. They are really blue!

For many years we have been somewhat snickered at because we still mostly read papers at our panels, instead of using PowerPoint or (newsflash! This is The Thing now) Prezi, or even poster sessions. Personally, I wanted to tweet both conferences but was inhibited by not owning a mobile device — which is due to a combination of being a late adopter of technology in general and just plain not having any money. But since poverty is endemic throughout academia, that can’t be the reason historians and Slavists are so behind. My other problem would be that I’m too wordy for Twitter. *cough* This might also be common to historians generally. *cough* Excuse me, I seem to have something stuck in my throat.

Another big feature of the AHA conference is the job fair — the AHA is the primary venue for first-round interviews for academic jobs in history. This is why this conference always feels considerably less warm and friendly to me than ASEEES. It’s always filled with so many nervous people wearing nearly identical dark suits. If we all had less / more kempt hair, it would look like a secret service convention. Interviews are usually held either in suites (so that you often run into nervous people pacing the hallways upstairs) or all together in a ballroom, where a thousand tiny cubicles are formed for interviews to be held in, with an outside waiting room known as the holding pen. This year the holding pen was freezing cold, and the interview pens were made from floor-to-ceiling black curtains, with bright overhead lights, making that room unusually warm. Job interviews, or KGB interrogation? Sense of humor definitely required for survival.

Finally, the conference feature that is seemingly tangential yet a favorite for nearly everyone: the book exhibit. Scholarly publishers put together booths all in one big ballroom with books from their lists relevant to the conference discipline. For laypeople to understand why this can be exciting, you have to know that the kind of books most academics write and like to read are almost never stocked in stores, so conferences are a rare opportunity to browse books in person. Plus, university-press books are incredibly expensive, and at conferences they’re usually discounted about 20%, often 50% on the last day. The book fair is a geek’s wonderland.

I got to see my book on display for the first time at ASEEES, then again at AHA. Definitely a personal highlight of the year!

The New York Times even took note of the AHA, with a nice little piece highlighting some of the trends of the conference. But at the same time, for me this brief summary for outsiders highlighted, between the lines, the enormous difference between the conference and my conference. The piece did capture the “news” from this year’s AHA in that it records some of the points made by big names at the high-profile events (which I mostly didn’t attend), as well as capturing a little something of the atmosphere (I guess; I eschewed “historian-themed cocktails” for what felt like more historical cocktails — famous local concoctions which date back to the Prohibition era when cocktails got interesting largely to cover up the horrible taste of badly renatured industrial alcohol). But at the same time, the major points quoted in this piece are in a different way not at all representative of the real nature of the profession, at least to me.

Michael Pollan asked why, when he uses our books as sources, his version sells so much more than ours. Did no one point out that the failure of our books to sell might not actually be a problem? Is best-seller status the only marker of success, or usefulness? Scholarly books are meant to be read by scholars, because some problems are so complex that only people with a lot of training are going to be able to take the time to go over all the evidence in detail. But someone — and it should be a lot of someones — needs to comb through that evidence, so that when someone like Pollan (whose role is also very necessary) takes away the general conclusions and frames them in a way that’s useful to the general public, he can be pretty sure that the conclusions are truly evidence-based and meaningful. He can’t write his book for the masses unless we first write books for each other. If we all tried to write for the masses, we wouldn’t be doing the evidence-sifting that we’re trained for, and on which the general conclusions depend. (From the summary in the article, it does seem that Pollan was more or less making this point, but it’s not clear to me to what degree either the conference or the NYT author understood this as a good thing for academia, rather than a “problem.” But I wasn’t there, so if anyone would like to tell me whether / how this point was raised I’d love to hear about it in the comments.)

Similarly, outgoing AHA president William Cronon and president of Oxford University Press Niko Pfund are both quoted as worrying about the state of the academic monograph. According to the NYT article, Cronon said that historians “tend to default to a dry omniscient voice that hasn’t changed since the 19th-century, despite the fact that historians no longer believe in that kind of omniscience.” And Pfund, noting that the pressures of tenure decisions are a key reason why historians are still married to the traditional monograph, added that historians remain “absolutely imprisoned in the format of the printed book,” a situation he called “borderline catastrophic.” As a junior scholar, conference attendee (who admittedly skipped out on the event) and as an author of a recent monograph published by Oxford University Press, I’m confused by these remarks.

First, as explained above, I’m not sure that scholars writing for other scholars to solve problems that can’t be solved better in other ways is a problem. Second, while I am absolutely a very strong advocate for good, readable academic prose (there’s no reason that an original argument written for other trained scholars has to be written badly after all), it is precisely the senior scholars like Cronon and the editors of prestigious presses like OUP that keep standards for monographs so rigid, and maintain monographs as the key format for historical research. Perhaps Cronon and Pfund are trying to convince their peers to change, for which I applaud them, but the most recent AHA newsletter showed graphs demonstrating how newer digital formats for scholarly research are less respected than any other aspect of a scholar’s portfolio in tenure decisions, and Oxford, with most other university presses, actually fights rather hard against digital incursions into the traditional monograph market. Finally, my editors at Oxford actually made me revise my book manuscript to more closely follow that “dry omniscient voice that hasn’t changed since the 19th-century” than the original manuscript did, in contrast to the general trend among academic writers to be more forthright with voice and authorship (a simple example is the old schoolmarm rule about using “I” in formal prose — my editors still frown on it, while a Google Scholar search will show that it has already become the standard).

I adore Oxford University Press for some of the quirks that may increasingly seem old-fashioned but have real value, like the Oxford comma, or quality craftsmanship in a physical book, or simply the high caliber of editorial staff they maintain in an age when authors slapping a manuscript straight into an Amazon ebook is becoming dangerously tempting. And of course I adore OUP simply because they wanted my book. How could I not? I also love them for publishing many of my favorite academic books (including, perhaps, some of those very “dry” monographs “imprisoned” in beautiful covers on my shelf where I can pick them up on a whim, flag useful passages and discover unexpected connections when browsing the shelf — monographs that are not best-sellers, but are purchased by a small number of people exactly like me…).

But at the same time, as a young junior scholar who is coming up for tenure soon, hearing senior, powerful people in my field tell us we need to go in direction X when they are among the primary gatekeepers blocking the doors to direction X, I’m deeply confused and troubled. And I do think my confusion highlights one of the downsides of conferences — I’m not sure there’s very much meaningful exchange between the most senior scholars and the rest of us. I know that from my first conference as a starting grad student to this year I have interacted at panels, in hallways, and socially with everyone from undergrads to mid-career scholars as a matter of course. But many senior scholars forgo conferences — after all, they’ve been staying in crappy hotels and listening to boring panels in cold rooms year in and year out for decades — or if they do attend, with the exception of my own advisors and mentors I see them only from afar, from the back of an audience for a keynote address attended by hundreds and therefore decidedly not a venue for those kinds of serendipitous exchanges of ideas that ideally a conference is for.

Fellow conference attendees: what do you think?

Potential conference attendees: I’m sorry, were you looking for practical advice on presenting papers? Look here as a start.

 

UPDATE: Nice AHA wrap-up from the Tenured Radical and an entertaining account from a Job Seeker


I’m on the OUP blog!

Check out my guest post today on the Oxford University Press blog, about a mid-nineteenth-century Russian stay-at-home-dad.


Book!

My first book is now available as an ebook, and will ship soon in hardcover from Amazon! It has already made its appearance at the annual conference of the Association for Slavic, East European, and Eurasian Studies in November. And it will also be available at the American Historical Association conference in January (and probably be 50% off on the last day).

To find out more about it, click on “Research,” then “Book,” in the menu bar above.

This book represents about a decade of work, as well as being the very first time my name appears in print on something I authored. This book was much harder to produce than my daughter. I know it more intimately than I know anything or anyone. It was the most difficult thing I’ve ever done. It very nearly never happened, because I frequently wanted to give up on it completely. I’m so sick to death of this project at this point that I’d almost rather talk about anything else on the planet. Yet I’m so proud of this book that I can’t wait to tell everyone about it. Writing a book is a strange journey indeed.

I hope you’ll be interested in reading it. I like to think it’s rather a fun read for a scholarly monograph (mainly due to the quirkiness of one of my main subjects, Andrei Ivanovich Chikhachev). But it is a scholarly monograph, so I don’t expect that very many people actually will read it (if you skim some parts, I swear I won’t be offended!). If you’d like to buy it, I’ll be thrilled (and my publisher even more so). But like most academic monographs, it’s pricey—believe it or not, these kinds of books are always published at a loss, despite the high price, because they almost always get purchased only by libraries and a few handfuls of individuals.

If you’d like to read the book but can’t afford it, there are two options:

First, a paperback edition will come out at some point, in a year or two, which should be considerably cheaper. Hopefully the ebook price will also go down at that point.

Second, you can always request your local library to buy it. That’s a wonderful way to support the book (and me) and to enable not just yourself, but others, to read it!

I appreciate your interest more than I can say!


Adventures in Russian archives

The Ivanovo train station.
(Photo from Russian Wikipedia, used under a Creative Commons license.)

I first arrived in Ivanovo, Russia, in the fall of 2004 by overnight train from Moscow. We pulled into Ivanovo at seven in the morning, and I peeked out, still sleepy and disoriented. I asked the elderly gentleman getting off beside me if this was, indeed, Ivanovo. He looked out at the bleak landscape, still dark, of a handful of crumbling concrete buildings with a gigantic Soviet-era wall mosaic of a worker, and replied with an ironic grin, “sure looks like it.”

Hand-felted mittens, adapted for archive use.

I made these incredibly ugly mittens to wear in the Ivanovo archive where I did the bulk of the research for my book. They were knitted in Russian wool, then fulled with hot water and soap to make them denser and therefore warmer. Only the forefinger on the right mitten was made separately from the rest of the hand, so that once the fulling process ensured the wool wouldn’t unravel, I could cut tiny holes in the pads of the forefinger and thumb and (just barely) grip a pen with the mitten still on. I went through all this in the long fall evening hours after the archive was closed, then wore them all through the winter. Those mittens tell you a lot about doing archival research in Russia.

I did my research mostly in just one archive, and one that few westerners ever visit: the State Archive of Ivanovo Region, or GAIO for short.

Like a beacon in the distance, the State Archive of Ivanovo Region calls to me…

GAIO is a provincial archive, and the city of Ivanovo is the capital of its region, also called Ivanovo. Like most enterprises in Ivanovo, the archive is run pretty much entirely by women. Ivanovo’s nickname is “City of Brides” because it has been a disproportionately female city for more than two hundred years. This phenomenon began because the city of Ivanovo grew out of a region that dominated Russia’s new textile industry in the late eighteenth century. Textile workshops mostly employed women in those days, so there were disproportionate numbers of women workers. Today, Ivanovo’s textile industry is dead, but the predominance of women continues.

I assumed the “city of brides” thing was little more than a nickname, but Ivanovo actually has a hair salon specializing entirely in brides….

I lived in Ivanovo for almost ten months, all of them winter. Today, with its industry closed, Ivanovo is mainly known for its malls, a couple of which were built in abandoned factory spaces. Most young people try to leave Ivanovo as soon as they can, as there aren’t many jobs. Too many of the relatively small number of adult men can be seen wandering the streets, drunk at midday—there’s not much else for them to do, if they’re not both well-educated and lucky. When I was there, from 2004-05, there was some new construction, but mostly the town looked like a graveyard for the various historical epochs it has survived. There are old merchant homes from the late nineteenth century all over town, made of wood with decorations around the windows and doors. They are quaint, but decaying fast. In between them, there are the hastily erected apartment buildings and institutional constructions of the 1960s, ugly and decaying even faster than the nineteenth-century buildings. Along the river banks are the shells of what once was an enormous factory complex, and here and there are sparkling new apartment buildings offering “luxury” units to the entrepreneurs of the new shopping malls.

A nineteenth- or early twentieth-century house with a 1960s apartment building in the background, in Ivanovo.

The Ivanovo archive, like most archives, opens its reading room for pretty limited hours, about 4-5 hours each day, four days a week. As a researcher, you can only request a limited number of documents each day, so you try to plan ahead to make sure you’ll have enough to fill your time until you can request more, since you can’t afford to waste an hour. When you first arrive, they tend not to give you most of what you ask for. Instead they’ll give you one or two documents to start with, and watch how you handle them, to make sure you’re a serious researcher and are handling the documents carefully. And, at least when I was there, it was very difficult to get a xerox or digital photo of anything. It was very expensive, and you had to ask permission separately for every page. They approved only a few pages once in a while, and usually only something that obviously couldn’t be easily transcribed, like a drawing. This means you have to sit there and copy out the documents you’re interested in by hand. Eventually I was given permission to use a laptop, but I found that copying by hand was actually more efficient for my research: the handwriting of private, nineteenth-century Russian documents was hard to decipher, so it was often easier and faster to “draw” the illegible bits in my notebook than to try to indicate what I thought I saw in the middle of typing. That’s why it took almost ten months to get the information I needed, and I barely got it all before I had to leave.

I didn’t bother to get a photograph of the EKG-type handwriting, as it wouldn’t have helped. This is an example of difficult, but decipherable handwriting. It’s an excerpt from an account book.

The handwriting isn’t really difficult just because it’s old and Russian. For one thing, I’d been reading Russian for more than ten years by the time I started this project, and it’s also not that difficult to adjust to the idiosyncrasies of the mid-nineteenth century. There are reference books that provide some of the standards of the time, though the real trick is getting to know the personal quirks of a given writer. I was lucky in that the vast majority of documents I needed were written by just a handful of people, so I could get to know each one within a week or two, and have little trouble with them thereafter. Deciphering the handwriting is a bit like the last stages of figuring out a code: you can see most of it, so you isolate the strange parts and try to identify patterns about when they appear. Once the context tells you what a figure must indicate in one instance, you can apply that to the other instances, and hopefully everything suddenly becomes clear. This is all rather fun. Though sometimes you come across the handwriting of someone who just completely defeats you. I had one such case in Vasily Rogozin, the husband of Aleksandra Chikhacheva, the daughter in the family I was studying. His handwriting looked like an EKG readout, and I had to give up on it, with great regret, since the content, if only it were legible, probably would have solved a few mysteries, because Aleksandra is one of the most enigmatic figures in this collection of documents. But I felt better when I read a letter by her father to Rogozin, complaining about his impossible handwriting!

These strange (to me) symbols popped up in all the family diaries and at first eluded me. Over time it became clear they represented days of the week. Then, I found this key, listing each symbol with its meaning and related day of the week, in the naval diary of Natalia Chikhacheva’s father, Ivan Yakovlevich Chernavin. I don’t know whether he invented it or it was a common naval code (perhaps a reader of this blog can tell me?)

This code was never mysterious, but is definitely a lot of fun. Andrei Chikhachev and his best friend and brother-in-law Yakov Chernavin invented a system of signaling to each other across their opposing balconies that they referred to as their “home telegraph.” The system involved navy-style flags (later they invented a nighttime version with lights). This is a page from their telegraph signal book.

Some mysteries remain: this seemingly coded text was inscribed by Andrei Chikhachev into his parallel diary. I have no idea what it means. Maybe someday someone will recognize it, if it wasn’t a completely idiosyncratic code unique to Andrei and his brother-in-law Yakov Chernavin. Other mysteries in the documents include odd lists and charts that I believe may have been related to various games the family played.

The hardest part about the archive work, for me, was the cold. The archive is deliberately kept cold because the low temperatures are better for the documents. But when you’re sitting still for 5 hours at a time in a cold room, you soon begin to feel like your limbs will fall off if you attempt to get up again. I coped with the help of those archive mittens, and an ankle-length down coat worn at all times (with hat and scarf and fur-lined boots). I went out into the hallway for a break with hot tea and crackers three times a day, and did quick stretches every time, to get the blood moving again.

The other great challenge was confronting the very different attitudes toward research and access held by the authorities of this archive (or any other state archive in Russia, though they vary in the details). Mind you, I had it incredibly easy compared to most foreign researchers in Russia. There’s even a whole book written about adventures in Russian archives. In the old days, your biggest problems included being followed by the KGB and getting permanently banned from ever traveling to Russia again. These days, keeping warm is really the biggest issue for most of us. Although it can still be very difficult to study certain subjects from the 20th century (some archives have still not been opened to researchers at all), for someone like me, studying gentry women in the early and mid-nineteenth century, there’s generally no question of whether I can get access. I’ve been denied some documents, and always told this was because they were “in restoration.” Sometimes I suspect this really means that they can’t be found, or that an archivist is in a bad mood, or that I’ve been asking for too much lately, but it’s never been anything very important.

What was much more challenging for me is that in Ivanovo in 2004-05, archivists were still very wary of digital photography, though they did eventually allow me to photograph a few documents, under strict supervision. Even now Russian archives are slow to permit digital imaging, although it has become pretty standard in most of the world and it’s potentially a marvelous way for archives to get paid to digitally preserve their own collections. For many decades, Russian archives were focused on keeping information from getting out, and this is how most working archivists were trained, so it has been a very slow—some might say glacial—process to shift policies toward the priority shared by most western archives: that archives exist in order to provide access to the documents, so that researchers can do something productive with them, instead of letting them literally disintegrate unseen.

So, I labored away, copying by hand under the somewhat suspicious eyes of the authorities. But this is not really an accurate depiction. There are very few people who work in the reading room of the Ivanovo archive for more than a few days, and I was there every single moment of every day for so long that I became quite close to the main reading room archivist, and the archive as a whole was incredibly generous in helping me to pursue my research (they have little control over central policies, and in any case there’s a long history of archivists losing their jobs for being too kind to foreign researchers; their task is not an easy one). Working in the Russian provinces was very different from the kind of experience you’d have working at, for example, the Bakhmeteff Archive in New York, but not necessarily worse.

Though it was harder to live in a rented room in a foreign town while I did my work, this aspect of my research was also incredibly fun. Ivanovo is a strange and interesting town in many ways. For whatever reason, many of the streets and squares have not been renamed since the collapse of the Soviet Union (as they mostly have been in Moscow and especially St. Petersburg), so there’s a Revolution Square and Red Army Street and Marx Street and so on. There’s also a rock in the center of town to commemorate the fact that Pushkin once traveled somewhere near Ivanovo, but not actually to Ivanovo. This rock is maybe my favorite part of Ivanovo. The contrast of the Pushkin rock and Revolution Square is just the beginning—beside the crumbling buildings there are fancy new western-style supermarkets and a McDonald’s knock-off. Above the post office that still smells of old Soviet paper there is an internet cafe full of foreign students sending emails to far-flung parts of the world. Ivanovo is home to a town-within-a-town full of universities, so there are a lot of students. There’s also a formerly-secret military base not far from town, so plenty of soldiers, too. And dotted here and there are a handful of pre-revolutionary churches, with shiny gold paint newly re-applied to their onion domes.

The back streets of Ivanovo: path to the archive.

To get to the archive every day I took a short-cut through the back alleys of one of the older neighborhoods, where I saw spectacular new dachas being built alongside 150-year-old peasant huts. There were still hand-pumps for water by the side of the roads, and every morning a lady walked her goats across the path I was taking. As I exited this neighborhood and neared the main road where the archive was located, I passed a 1960s-vintage apartment building with a pack of wild dogs encamped in the courtyard. You read that right. Dogs in Russia are not routinely spayed or neutered, and there isn’t much in the way of systematic dog-catching, so there are a lot of strays wandering everywhere. Calling them “wild” is probably a stretch, but they are dangerous, to each other and to passersby. I got used to them after a while, which I cannot say for the -30 degree windchill (Celsius) in February.

An area of Ivanovo I like to call “wild dog alley.”

By far the most exciting part of that research year, however, was traveling beyond Ivanovo, into the countryside. I went there to find the villages once owned by the gentry family I was researching. Their main residential village still exists, complete with manor house, then being used as the village school. I was able to meet several of the teachers, who gave me a tour of the house and village. We went back again in spring, and the teachers treated us to a memorable feast in an upstairs bedroom that once belonged to the woman at the center of my study.

The road sign to Dorozhaevo. We went once in the bitter cold of mid-winter, and again in a muggy and buggy June.

The village of Dorozhaevo

Enjoying the quality of freshly pumped well water in remote Dorozhaevo.

An upstairs bedroom of the Chikhachev house in Dorozhaevo, which the locals told me belonged to the lady of the house (and nothing I read in the documents contradicted this).

Traveling on back roads from the village of Berezovik (once owned by the Chernavin and Chikhachev families) to the nearest town, Teikovo.

A wooden church from the outdoor museum at Suzdal

A rich peasant’s house at the outdoor museum at Suzdal.

Interior of a rich peasant’s house, from the outdoor museum at Suzdal.

We also traveled to another village, where the church still stood, and to nearby towns that had been significant in the mid-nineteenth century. Of these, Suzdal is now a major stop on the tourist circuit known as the Golden Ring. It features two medieval monasteries and an outdoor museum with reconstructed village houses from the nineteenth century. We also visited Rostov-the-Great, home of a magnificent medieval fortress containing several cathedrals, which should also be a tourist site, but is somewhat off the beaten path and so not as prosperous as Suzdal.

A bell tower from a monastery in Yaroslavl, a beautiful and mostly thriving city on the Volga river.

Sadly, Yaroslavl is also the home of what I believe may be the world’s ugliest building.

Skyline of Vladimir.

Finally, we visited neighboring Yaroslavl, and the former provincial capital, Vladimir, both cities that are adjusting rather better to post-Soviet times than Ivanovo, thanks in part to their more diverse economies and several significant historical sites, which bring in tourist money.

None of these visits were really essential to my research, but they helped me to assimilate the setting in which the events of my study took place. Perhaps most exciting of all my side-trips, though, was a last-minute excursion to tiny Shuia. I went because I’d been told at the Ivanovo archive that the little town museum in Shuia had a few books that had belonged to the father of the family I studied. It turns out they had a shelf full of Andrei Chikhachev’s bound volumes of the newspaper Agricultural Gazette, full of articles he had written, and with his own marginalia! Not a bad surprise for my last day of research in Russia for that project.

On an article titled “The Influence of the Moon on Trees” Andrei wrote, “Rather useful article” (perhaps not the most revelatory annotation, but characteristic of Andrei!)

These are some of the aspects of historical research that don’t really get talked about in books or classrooms, though they should. For my current research I have been working so far in the central State Archive of the Russian Federation in Moscow, and will be doing more in St. Petersburg and possibly in archives in France and Germany, so my experience has been rather different. I can order xeroxes easily in Moscow, so I can gather my materials much more quickly, and I am less immersed in the process, as I work for short periods on summer “breaks.” This is probably more typical of most historians’ archival research, and I must admit there have been far fewer moments, lately, when I wished to myself that I had chosen to study Italian history instead.

 

For more images related to the people and places in my book, look here.

 

NOTE ABOUT IMAGES: All photographs are my own (© Katherine Pickering Antonova 2012), unless otherwise noted. Please don’t use or distribute without my permission. Photographs of archival documents were taken with permission from the State Archive of Ivanovo Region.


Dickens and Dostoevsky Just Got Real

Check out this nicely written and detailed summary of a recent dissertation that should be getting a lot of attention, in my totally-not-humble opinion (the author may just happen to also be my spouse).

Which reminds me to mention that the site that produced the review is a really interesting one: it provides reviews of recent dissertations from all fields, hopefully helping to extend their reach into non-academic circles, or at least across disciplinary boundaries.

Posted in History, Profession, Research, Russia

What Is Socialism?

Judging by the way the media and the GOP talk about it, you might conclude that socialism is anything the GOP disagrees with.

Teaching what socialism actually is is part of my job, so I get asked this quite a bit.

First, socialism isn’t one thing. There is socialism the idea—and the idea has been expressed in different ways by different people—and then there are a vast variety of ways that the idea of socialism has been implemented in various times and places.

When I talk about socialism in my classes, I usually start by drawing an umbrella on the board. Because socialism is an umbrella term for all these different manifestations. Only one of the many manifestations of the socialist idea is “Communism.” And then there’s Soviet Communism as opposed to, say, Maoist or several other kinds, and Soviet Communism also changed dramatically over time, so there’s really no such thing as one Soviet communism. More on that below.

At the most basic level, the core of socialism that all these variable manifestations share is the notion that it would be a good thing if economic resources were distributed equally in a society.

Here’s just the start of a list of questions about how that equal distribution would happen, questions on which not all socialists agree:

1. By “equal” distribution of resources, do we mean absolute equality (everyone has the same) or do we mean relative equality (some degree of correction of the enormous gaps between rich and poor that characterize capitalist systems)?
Various mid-nineteenth century experiments in communal living aimed for absolute economic equality. Today’s European social democracies aim only for a modest degree of relative economic equality.

2. How would this distribution of resources be imposed, regulated, or maintained?
Since the assumption is usually that a society with non-socialist economic principles would be shifted to socialist economic principles, some mechanism would be required to effect the shift of economic resources from just one part of the population to a more even distribution across the whole population, and then to maintain that relative balance as time passes. There are many, many possible ways for this to happen. Just a very few of the possibilities are:

    A. Voluntary sharing of wealth (as in a commune or co-op)

    B. Government regulation and taxation provide incentives and other “invisible” methods of shifting some limited economic resources to the poor within an essentially capitalist economy.

This could in theory be done in a very minor way (as it is in all industrialized countries right now), in a moderately progressive way (as it is in some social democracies in Europe), or aggressively (which has arguably never yet been tried).

    C. Government legislates salary caps and high minimum wages to deliberately even out wealth

I don’t know of a case where this has been tried to any significant degree.

    D. Government nationalizes property (wholly or partially), sets prices, and otherwise directly controls the economy, seizing and redistributing assets as necessary

The Soviet Union did this in the early years following the October Revolution, in a policy referred to as War Communism, since it took place during a civil war and was justified as necessary to save the revolution in its infancy. Lenin changed this policy—reintroducing a limited market and limited private property—as soon as the Civil War ended, though doing so was very controversial in the Party. We don’t really know what Lenin intended in the longer term, since he died in 1924.

    E. Government plans economic production ahead of time (wholly or partially), determining what is made or exchanged by whom on what terms

The Soviet Union began doing this with the first Five Year Plan in 1928 (under Stalin), and it characterized most of the Soviet economy in subsequent decades.

    F. War/revolution are employed to redistribute wealth by force

Arguably, this is another way of describing the Soviet policy of War Communism; forced requisition in wartime has been used to redistribute wealth in many other parts of the world as well.

3. What resources are we talking about? Just cash? Money and property? How about commercial services? Does socialism address political equality directly?

Traditionally, the discussion of what to equalize is about tangible economic resources, not health, education, or political rights. Although there are clearly connections between economic resources and how easily you can access medical care, education, or civil rights, socialism is at its core a theory about economic resources. The idea is that once those are equalized, the rest follows. Access to intangibles such as political rights, health, safety, and knowledge are really about the distribution of power, and are therefore fundamentally political, not economic, in nature.

IMPORTANT: Socialism, as theory, is an economic idea, not a political idea. So there is no inherent connection between socialism and any particular form of government.

Sing it with me: Economic ideas are about how money and other tangible resources are distributed. Political ideas are about how power is distributed.

Many Americans assume that there is some inherent connection between capitalism and democracy, and between socialism and authoritarianism. There is no such inherent connection, either in theory or in practice. There have been democracies with socialist economies (much of Scandinavia in recent decades, for example), and democracies with capitalist economies (such as the US). There have been authoritarian governments with capitalist economies (most absolute monarchies in the nineteenth century), and authoritarian governments with socialist economies (such as the USSR).

While all socialists like the idea of some degree of equality of wealth, socialists have not historically agreed on their preferred form of government. Since the collapse of the Soviet Union, however, most (though not all) people arguing for socialism in the industrialized world prefer democratic governing and non-violent methods of wealth redistribution.

It should go without saying—though sadly it does not!—that by “people arguing for socialism” I do NOT refer to the U.S. Democratic Party. Economically speaking, the American Democratic platform is on the conservative end of the spectrum and, from a European point of view, virtually indistinguishable from the U.S. Republican Party on economics. By “people arguing for socialism” I refer to people actually arguing for socialism. Such as the Socialist Party USA or the American Social Democrats. Ask them what they think of Obama, I dare you. (LOL)

4. Is socialism something that can be achieved, or does it happen “spontaneously”?

This has historically been an incredibly contentious question. Many proponents of socialism consider economic equality a goal that can be worked for, and perhaps fought for. Others acknowledge that economic equality would be an improvement for human societies over capitalist or other economic systems, but do not believe that socialism can be created “from above,” that is, imposed by professional revolutionaries or government fiat.

Karl Marx inspired many professional revolutionaries, including the Bolshevik Party that took power in Russia in October 1917 and set about imposing socialism from above. But Marx himself believed socialism would happen “spontaneously,” from below: economically exploited classes would recognize how they were being exploited and work together to take control of their economic power as producers, eventually producing a system characterized by greater economic equality, which Marx identified as “socialism.”

He wrote about all that in the second half of the nineteenth century, as labor in Europe was indeed being grotesquely exploited. After Marx’s death, labor in Europe and the U.S. began to organize and to strike for better conditions. As it happened, the general revolution Marx predicted did not occur (at that time!) — instead, the owners and managers compromised enough on working conditions and wages that workers began to enjoy (just) sufficient health, safety, and access to material goods and education to not be motivated enough for a revolution along the lines Marx expected. The democratic socialism and welfare systems of liberal democracy that dominated Europe after the second world war have essentially held that compromise in place. Until recently, that is: deregulation, anti-union legislation, and the defunding of welfare and other public programs in the US and (to a less extreme degree) in Europe are beginning to shift the labor-management relationship backward again. It remains to be seen where this relationship will go, but I find the Occupy movement a fascinating early sign of resistance to these anti-labor policies. I say this only to point out that Marxism is not necessarily a relic of history, but still a framework that can be applied to working conditions and economic systems today.

Okay, so that’s socialism. What about Communism?

Communism is even more confusing!

Communism has a lot of meanings, too, depending on the context in which it’s being used.

Marx and Marxists have been known to use “socialism” and “communism” interchangeably, but when they’re being picky, socialism is often referred to as a transition stage on the way to communism. In this sense, socialism marks a stage after a revolution has abolished private property, but before government has “withered away.” Communism then describes a utopian stage where government is unnecessary—society is classless, all labor is equal, and the system can maintain itself.

What gets really confusing is when a country like the USSR undertakes a revolution and declares itself a Marxist state — what they said they had achieved was not socialism or communism, but a revolution that was directed toward that end. So, when the Bolshevik Party that seized power in Russia in 1917 changed its name to the Communist Party and the country’s name to the Union of Soviet Socialist Republics, it was using those terms aspirationally—it was aiming for socialism and communism. In the years that followed, the Party dithered about just how much socialism had actually been achieved at any given point, but technically communism, if you read your Marx and Lenin, as every Soviet citizen did, remained on the horizon.

That would be confusing enough, except that these aspirational names have by now become descriptive of the countries engaged in this experiment. So, while the Soviet Union was attempting to achieve Communism, it became known as “a communist country,” and thus we began to speak of “Communism” not as the utopian final phase of Marxist development, never (yet) achieved on earth, but as “what they’re doing over there in the Soviet Union.” This is an extremely problematic usage when even in the USSR the Communist Party admitted that what they were doing was not actually Communism!

Since the end of the Cold War (at least), most scholars don’t like to refer to anything the Soviet Union was actually doing as “socialism” or “communism” because the terms are so imprecise. We tend to use those words mainly to describe the theories. The reality in the Soviet Union is known by the specific policy names used by the Party at the time — such as War Communism or the New Economic Policy or Perestroika — or in more general contexts by the leader who is associated with a certain cluster of policies, hence “Leninism,” “Stalinism,” or, for the Brezhnev period, “stagnation,” a term coined by Gorbachev that is irresistibly evocative, if not literally accurate. One can also speak accurately of the type of socialism actually practiced in the Soviet Union as “planned socialism” or simply a planned economy.

Anarchism

A final note on anarchism, another frequently misunderstood term. Anarchists do not advocate chaos. Anarchism is also something of an umbrella term, encompassing both individualists and collectivists, but the collectivist branch can be seen as a variant of socialism. What distinguishes collectivist anarchists is that they are particularly concerned with the role of government in establishing or maintaining economic equality—namely, they want government to stay the heck out. A case can be made that if there were ever hope for the Bolshevik Revolution to live up to any of the theoretical principles on which it was based, this hope was derailed by the domination of government and Party at the expense of workers. Other arguments can be made to explain the many hypocrisies of the Soviet state, but there’s no question that Lenin’s notion of the Party as “vanguard” leading the revolution on behalf of workers resulted in a much more powerful role for the state than many socialists condoned at the time or since.

Posted in History, Russia, Teaching

Russians Love Their Children Too

By Rita Molnár, via Wikimedia Commons

I’m quoting Sting, of course, in his famous — and at the time daring — song, released in 1985, during the Cold War. He was hoping that Russians, though our enemies, were human too, and loved their children enough not to push the button and start a nuclear war. Fortunately, it turned out that indeed, Russians love their children, too.

Imagine a bunch of Russians on an internet forum debating the merits of capitalism. Imagine that they’re talking about the United States in the 20th century as if it were all one, unchanging thing. As if the Civil Rights movement, the Great Depression, and post-Reagan neoconservatism were all happening simultaneously, and all characterize who we all are as a people. Imagine that people are saying all Americans have been merely reactive to our regime, that we are materialistic products of the free market, which drives our every action. Imagine that these writers on an internet forum acknowledge no social or cultural changes of any kind, and seem to believe that all our political leaders (FDR and Hoover, Coolidge and Clinton, Bush — either Bush, what’s the difference — and Obama) had essentially the same outlook (because after all we’ve been a capitalist democracy the whole time, haven’t we?). Now imagine that these Russians are arguing that these “facts” about the U.S. prove that capitalism must necessarily lead to chauvinistic imperialism and enormous gaps between rich and poor to the degree that thousands of people are homeless in the richest country in the world (Russians didn’t know homelessness until they “democratized,” a correlation that could easily be misunderstood as causation).

It’s all patently ridiculous, of course. It’s hard to even know where to begin to correct all the false assumptions embedded in that argument.

Yet, I’ve heard it — often. Pretty much every time either “capitalism” or “democracy” is mentioned in my presence when I’m in Russia, actually, most of the points I’ve outlined here are made to me as if this should suddenly make me understand everything about my homeland that I’ve been blind to all these years.

The thing is, Americans just as frequently make the same mistake about the Russians. Every time you see a bunch of Americans (often on an internet forum) talking about how Russia proves that socialism isn’t possible, you’re seeing that same mistake being made.

I wrote that imaginary scenario by reading an actual internet argument by Americans about the Soviet Union and socialism, and just replacing the USSR with the US and socialism with capitalist democracy, to show how silly it is.

You can’t look at one moment in time and use it to characterize a whole century.

It is a mistake to confuse rhetoric and reality.

It is also a mistake to assume that socialism, an economic idea, has an inherent connection to authoritarianism, a political system. Socialist democracies exist, and so do authoritarian societies with capitalist economies.

It’s a mistake to confuse a people with their government.

It’s a mistake to lump hundreds of millions of people together and imagine they all think and behave the same way.

Yet everybody does make these mistakes, all the time. People are ignorant everywhere, too — which is only natural. You can’t know about everything, and it’s easy to be unconsciously influenced by media. Does anybody think middle-class New Yorkers really get to live in apartments like the ones you see on Friends? If you do, I have a bridge to sell you. For the same reason, you shouldn’t imagine that the movie From Russia with Love tells you anything about Russia — it tells you only what those western filmmakers imagined about Russia for their own artistic and economic purposes. See my previous post on Rocky IV.

Interestingly, I’ve noticed that there are a lot more realistic Russian films set in normal-looking apartments than there are American films featuring people living in anything like any dwelling I have ever known in real life (though Russian TV is getting weirder and weirder and there are fewer realistic films and more ludicrous shocksploitation ones being made, so this is changing; I refer mostly to the 1970s-1990s).

I don’t think most Americans walk around deliberately spreading unfounded assumptions about other countries. We have a reputation abroad for doing it more than anyone else, though, deliberately or not, and that’s embarrassing. I find the most effective way to remember not to make these kinds of mistakes oneself is to see how it feels when someone else does it to you. I’ve lived in Norway and in Russia for fair amounts of time and traveled briefly around Europe, so I’ve collected my share of anecdotes of this nature. A woman in Prague in 1992, who checked my passport at a currency exchange point, saw that it was issued in Chicago and asked me if I was afraid to live there. I thought it was the usual “don’t you get shot by gangs whenever you set foot outside” thing, but it turned out it was Al Capone — she thought he was still alive and busy! That was not the last time I came across someone who thought Al Capone was our contemporary.

The first time I lived abroad, in 1991-92, I was continually asked if I lived in New York. No. Miami? No. L.A.? No. Well, but you can tell me what they’re like, right? No, actually I’d never been to any of those places. WHAT?!! But you said you were American?! Even those Europeans who have traveled to the US often visit only a major city or two, so many have little idea what’s “in” the rest of the US. Outsiders’ perceptions of our economic status are also often taken from Hollywood, or otherwise filtered through distorting lenses. For example, when I taught English in St. Petersburg in 1998-99, a student of mine once confessed to me that he had seen a documentary about the homeless in America back in the ‘80s, and because he saw the homeless people on TV wearing blue jeans — which at the time cost a month’s salary in Russia — he concluded that even the homeless in America were rich!

Before you laugh too hard, remember that the assumptions Americans make about other countries are often distorted in exactly this way.

Posted in Random, Russia

Unlearning High School in Five Painful Steps

By Maho mumbles, via Wikimedia Commons

This is addressed to all the college freshmen out there.

There are a few habits you may have learned in high school that will have to be adjusted in college. Remember that the chief difference between high school and college is that high school aims to fill your brain with some basic knowledge of the world and introduce you to the main fields of inquiry (mathematics, science, social science, humanities, the arts), while the main goal in college is to train you to think critically about the world: to analyze, to find and sort through new information effectively, and to apply lessons from one sphere to another. Each discipline uses different techniques, which you are meant to familiarize yourself with as you take courses in different departments, but the overall goal of all disciplines is to train you in advanced critical thinking. Later, as you choose a major, you will also be expected to master many of the subtleties of a specific discipline, more narrowly defined than they were in high school.

In the case of history, in high school you are taught the basic facts of history and you are perhaps exposed to some questions any citizen might ask about our past. In college, you are expected to act as an apprentice historian, to try out the more complex methods of professional historians in order to understand them fully, and to ask deeper questions about the nature and uses of history, and how history influences our society.

In other words, in high school you are told a story; in college you are invited to discover how stories are written and what they may mean from different points of view.

1. The 5-Paragraph Essay

Frequently taught in high schools, the 5-paragraph essay model is a solid way of teaching students the basic outline of most scholarly writing: an introduction that sets up a problem and a resolution to it, a series of points of evidence supporting the resolution, and then a conclusion that summarizes the case made and connects it to broader implications. This is a good basic model. Naturally, however, not every argument relies on precisely three points of evidence, and not every introduction or conclusion can best be articulated in precisely one paragraph each.

The rigidity of the five paragraphs can safely be left behind in college, though you should retain the overall structure of introduction-problem-resolution-evidence-conclusion.

In college we expect you to be familiar enough with this model to reproduce it reliably, and we now want you to focus on content: think through real problems and evidence and come to your own reasoned, supported conclusions.

This difference implies something very important about how your writing process in college should be different than it was in high school. When your goal was just to practice the 5-paragraph model over and over, it made sense to start with an outline, fill it in, and be done. That is not sufficient in college, because it allows you only to record whatever you already know, not to discover new knowledge.

In college, writing should be a process of sorting through complex information, understanding it better, and then figuring out what you think about it. To do this properly, you must write many drafts. Start by explaining the evidence and arguments from your source texts in detail in your own words — that’s the best way to figure out what the evidence really is. Then start to ask questions about what the evidence means, what it adds up to. As you clarify the questions the evidence can help you answer, you will gradually come to some conclusions about how to answer your questions. Only at this point can you put all this into an outline and revise according to the introduction-problem-resolution-evidence-conclusion model!

2. You must do the reading at home

The number of hours spent in the college classroom is obviously far fewer than in high school. This is not because college is easier, or because it’s meant to be done on the side while you work (or play).

The way college courses are structured, the expectation is that a full load should be at least 40 hours a week, or the equivalent of a full-time job by itself. You should expect to work an average of 2-4 hours at home for each hour you spend in class (though with practice you will find that you’ll spend less time than this some weeks, and much more other weeks).

Because class time is so limited, we cannot waste it sitting and reading in a room together. Class time is for synthesizing the material, asking questions about it, and learning how to identify patterns in it. For that time to be worthwhile, you must come to class fully prepared.

At home you should be mastering the basic facts covered in the course (usually provided in the textbook) and absorbing the content of the other readings, so that in class you can think about the questions, problems, and arguments they raise.

In class, you should be taking notes, but don’t try to write down every word said. If you are sufficiently prepared you should not need to write down every factoid, but should be able to focus on questions, problems, and patterns.

3. You will not be rescued from disaster at the last minute

We can fail you, and we will. I understand that it has become common in American high schools never to fail a student, no matter how poor their performance (which, you may have noticed, only serves to bring you to college grossly unprepared, doing you a real disservice in the long run), and that it is common to allow make-ups, revisions, extra credit, etc., to improve grades. Do not expect this to happen in college. You are personally responsible for your performance, and your own learning.

If we could put the knowledge and skills you need on a flash drive and stick it in your ear, we would, but it doesn’t work that way.

Think of college as being like a gym membership: you pay to have access to the facilities, and to trainers who can help push you along, show you the most efficient way, and keep you from hurting yourself. But you still have to do the work, or you’ll never get in shape.

4. Assessments are far less frequent, so they count more

In college it is typical to have only one or two exams per semester, and perhaps one or two additional papers (this can vary widely–when I was an undergrad, most of my classes had just one paper, or one exam!). This means you must master a greater amount of material for each assignment than you may be accustomed to, and the grade of each assignment will count more in your final course grade. Final exams frequently ask you to synthesize material from the entire semester, to enable you to tie together everything covered and to make connections among different places and periods (for a history class).

So studying is not about memorizing details just long enough to pass a test, then forgetting it all. Generally, there is less memorization needed at the college level, but it is vital that you fully understand concepts and that you think through the material being covered. Always ask how each piece of material connects to others, and why it matters — these are the most significant “facts” you need to learn.

And, of course, remember that it’s not okay to “bomb” one exam or paper — because of the smaller number of assignments, this will make a big impact on your final grade, and it won’t be possible to make up a bombed assignment later.

5. Feedback matters

In high school you may have found that you got very small amounts of feedback very regularly, and that it was generally positive. (The theory that constantly bolstering students’ self-esteem will help them succeed — though now convincingly debunked in my opinion — has been dominant in the schools since I was in kindergarten.)

In college it is more likely that you will get feedback relatively rarely, but it will be detailed and focused on what you need to do differently next time. The idea of this kind of feedback is not to be mean. Feedback is never about you as a person, but about the written work you turned in on a given occasion.

The instructor’s goal is to help you, by showing you where you need to improve most, so that you can do better next time. Always pay very close attention to feedback; don’t take it personally, but do consider it a guide to how to approach your next assignment (even if that next assignment is in another course!). If you don’t understand the feedback you’re getting or it isn’t enough, talk to your professor!

You’re an adult now. If they don’t hear from you, they assume you know what you’re doing.

 

 

Note: much of my information about what the high schools are up to these days comes from colleagues, as does the gym metaphor, for which I will be forever grateful.

Posted in Teaching

Revision

By Hownote, via Wikimedia Commons

There are two kinds of people in the world: those who revise, and those who don’t. The former are writers, the latter are not.

This implies that the way to become a writer is to revise. A lot. And that’s absolutely true.

Yet, many novice writers, especially college students who are writing a lot of papers under tight deadlines, persistently believe the myth that by “writing process” one means: start typing, continue until you hit the word limit, proof-read or spell-check, and hit “print.”

This is a recipe for papers that—even if full of brilliant ideas—probably can never make it out of the B-range, and very often are much worse.

Almost any experienced scholarly writer can tell you that revision IS the writing process. How you get a first draft on paper matters very little, and every writer will have her own habits (and superstitions) about how to do it. But taking the usually mushy, half-formed, inarticulate ideas from your own head, where they are warm and happy and seem clear, and translating them into a form that an unknown reader can quickly and easily understand is a complicated craft that involves many steps.

Moreover, almost anyone who’s ever written something truly original or exciting will tell you that most if not all of these ideas come out only in the process of writing (that is, revising). What seemed brilliant when you sat down at the computer becomes “belaboring the obvious” after a few hours of working the sources and your own thoughts into organized structures. It is this process that usually reveals the connections and inconsistencies that lead to brilliant new ideas.

Most students turn in papers with a thesis at the end of the essay (regardless of whatever it was they wrote at the end of the introduction, way back at a different stage in their thinking and now forgotten). Often, this thesis-at-the-bottom is very interesting, because it was developed out of a detailed discussion of the evidence. But, unfortunately, most students stop and print at this point because they run out of time. These essays are never more than half-baked, and serve only as a record of the student’s thought process.

To make it a solid essay, the student must recognize that when that thesis finally “articulates itself” at the end (that’s often what it feels like when it happens), they have merely reached the half-way point in the writing process. Now, it is time to translate the “writer’s draft” into a “reader’s draft.” The new, richer thesis must be put at the end of a new introduction that tells the reader what the paper is, now, really going to be about. The discussion of the evidence must be re-worked for the convenience of the reader, not the writer. And finally, the student must reflect a bit on what has been accomplished, and put this new perspective into a new, real conclusion. Only then have you reached the point of polishing the prose and proof-reading for errors. But having got here, you will have the satisfaction of knowing that your essay is finely crafted and original, and that you have expressed yourself effectively.

Even when students do recognize what the revision process is really about, they often claim they still can’t do it, because they believe that revising takes more time than they have, or is not worth the time put into it, because after all the great ideas are on paper somewhere and that’s all that matters.

Think about it: do you want to bank your grade on the idea that your TA or professor will do all that work I’ve just described to untangle your paper for you, so they can have the privilege of receiving your great ideas?

They read many, many papers and some of them will be just as interesting as yours, but better organized and clearer. They can only put the same amount of time into each. They have seen (and probably tried themselves, at some point) every trick there is involving fancy fonts and margins, high-flown language, and “filler,” and recognize all such silliness for exactly what it is (which doesn’t stop them from being annoyed by it).

More importantly, though, in the long term learning to write a solid paper is easier than trying to get by with unrevised schlock. In fact, in purely practical terms, the single easiest thing you can do to improve your grades on essays is to spend more time revising (as long as you do it mindfully). Putting your exciting thesis exactly where the prof expects to find it, and following it with a series of points of support, each accompanied by at least a couple of paragraphs of thorough discussion complete with specific examples, caveats, counter-arguments, and elaboration and interpretation of all quotes, can hardly help but result in a good grade with any professor or TA (assuming of course that you’ve correctly understood and followed the assignment, and read and understood the sources).

You don’t usually have to guess what the professor wants—the standards are usually quite predictable for a short college-level essay. And if you’re reading the sources and understanding the material, there’s really nothing stopping you from doing well but time. Start your next paper with twice as much time to work as you usually give yourself. The beauty of getting really good at revising is that it gets faster and faster with practice, so that eventually you can expect to need little more time than you probably take now, but will produce much higher quality work.

Posted in Teaching, Writing

Obama the Professor

“How is it that not one of you has actually read the syllabus?!” Heh. Via Wikimedia Commons.

There have been a lot of profiles written about Barack Obama, and I have read many of them with interest. As usual, I tend to read them with half my mind thinking about the difference between these kinds of profiles written in the moment, and the versions of a life written by biographers and historians long after the fact. It’s the sort of exercise that entertains me.

I don’t claim to have any profound predictions about Obama’s legacy, or even unprofound ones. I’m merely interested to watch it unfold. Right now, what interests me is the huge variety of interpretations about a man who is alive and working and accessible (more or less) to the journalists doing the writing. Historians are used to trying to re-construct the life of a person who is long dead, whose friends and coworkers and family are all long dead, and who may, in many cases, have left precious few written traces of his or her actions, let alone thoughts (chances are, in the case of a “she” there’s even less than in the case of a “he”). To me it seems like an embarrassment of riches to write a life of someone still living, with the benefit of interviews where you can ask whatever you want, with extraordinary documentation, and access, potentially, to thousands of people who know and work with him.

With this touch of envy in mind, I always feel a bit dissatisfied by contemporary profiles of important people. Especially when there are a lot of them, as there are with Obama, it seems like the more you read, the more it becomes noise, and the less you can pin down who this person is.

I have particular difficulty with the classic lengthy profile that often appears in periodicals like Vanity Fair or The New Yorker. You know the kind, where the author plucks from obscurity a handful of random but colorful anecdotes, asks some random but colorful questions, and mashes the whole thing together into a rambling “think piece” that feels profound, but…isn’t. It leaves you knowing less than you did before you read it, and somehow all the anecdotes taken from interviews and in-person observations feel inauthentic. One has a sense that the writer was gathering them like a preschooler collects bits of paper for a collage — “ooh! A red one! Score!”

I don’t mean to sound snarky. I really enjoyed the recent piece in Vanity Fair by Michael Lewis. It struck me as unusually insightful about what it’s actually like to be president. And I think he may have asked the most brilliant question I’ve ever heard asked of a president for the purposes of finding out his character:

“Assume that in 30 minutes you will stop being president. I will take your place. Prepare me. Teach me how to be president.”

But I came away from the article having little if any insight into Obama.

One of the most insightful people writing about Obama, I think, is Andrew Sullivan. Sullivan tends to characterize Obama as a conservative, even a paragon of a conservative. I’m of the school that thinks that’s incredibly accurate on a number of levels (whether that’s a good thing or a bad thing and on which levels is another question, of course).

Much more often, Obama is accused of being a kind of Bambi — too soft on this or that, unwilling to take a stand when stands need to be taken, unwilling to push hard, unwilling to ram his will through no matter what. (Of course, he’s also accused of the opposite, but I’m trying to pull some of the more prominent threads out of the infinite cacophony here).

But the thing about Obama that has always struck me as most obvious, even blinding, is something I don’t really see get mentioned in these profiles. I’m talking about the fact that Obama is a professor. He was literally a professor when he taught law at the University of Chicago law school (disclaimer: at the time he was doing that, I was living in an undergrad dorm next door, and some friends and I may have gone wading in the law school fountain once and been yelled at by some law school prof who almost certainly was not Obama, though I like to tell myself that it could have been). Less literally, he’s always struck me as being a professor type, and I say this as a professor type with a lot of professor-type acquaintances, in addition to having done my time (and then some) staring at a podium from the other side of the room.

Of course the media has not missed the fact that Obama was a professor. This piece was particularly interesting. And he’s fairly often criticized as “professorial” when he’s being stiff and wonkish (but even more often, in 2008 especially, he was criticized as speaking in a “lofty” way devoid of detail or substance — another example of the media not being able to make up its mind about him).

I think he’s professorial in much deeper ways than speaking style, and I think it explains the sense people get of his conservatism (which often outrages his base) as well as the “Bambi” meme.

Run with me for a minute here. Imagine a college classroom, a small seminar class. The subject doesn’t matter. You’re the professor, and it’s your job to (a) get the students engaged and talking, (b) get them to understand the material being covered, and, most importantly, (c) get them to think critically, for themselves, about that material.

In that situation, you don’t go in guns blazing and force people to obey your will. Why would you? That’s just a completely irrelevant, as well as unethical and pointless, approach.

You also (if you know what you’re doing at all) don’t go in there and tell the students what’s what. Even when you’re really, really sure you know what’s what. Even when you’re feeling frustrated with the impossibility of the task in front of you and you are incredibly tempted to just skip to the end and tell them the answers already. Tempting as that can sometimes be, you do know it would be a hollow and temporary victory, because they wouldn’t really take anything in, and telling people what to think is not your job.

You also don’t go into that classroom with a goal of changing the world. You don’t even aim to turn those students in that room into scholars. Most of them probably couldn’t get there, and more importantly, there’s no reason for them to get there. They have other things they need to do, and it’s your job to help them do that. You’re not making clones of yourself. You’re giving people the knowledge and skills they need to define and pursue their own goals.

You aim when you go into that room to move the students forward from where they were when you got them.

You leave your own ideologies and convictions behind when you walk into the classroom, because you know they’ll just get in the way of the process at best, and completely undermine your ability to do your job at worst.

You don’t preach to the choir. You work with ALL the students. Even the ones who seem hopelessly behind.

With experience, you learn that students can always surprise you. All of them. Some of them that seem really with the program can turn out to be putting on a show for a grade, and not really understand or care about the material or learning in general. Some that seem like they don’t even belong in that room will work their butts off and ultimately make you feel stupid and lazy with their hard work and original insights. You never know. And it’s not your job to guess, or care, what each student is ultimately capable of. You take them as you get them, and you work to move them forward from wherever they are.

Sometimes, as part of that work, you play devil’s advocate. You find yourself saying things you don’t remotely believe, and you actually try to put conviction into your face and voice because you’re so focused on seeing the lightbulb go off in the students’ eyes, the expression on their faces that means they’re thinking, really thinking.

You willingly give up a lot of control of the classroom — control you know how to use, and would on some level love to use — because you know from experience that you can’t do the thinking and acting and learning for them. You can only push, facilitate, re-direct. They’ve got to do the thing for themselves, ultimately, or it won’t stick.

And then, after a semester of all this hard work, which you do pretty darn selflessly because you really — REALLY! — believe in the inherent value of the process…at the end of the semester, after you’ve turned in your grades, you get your evaluations. And you find out just how many students blame you for their own unwillingness to invest themselves in learning. In other words, you find out that their failures will be billed as your failures, while their successes are their own.

What does all this have to do with Obama? I think his personal convictions are so hard to read because as a representative of the people, whose job is to govern, he actually tries to represent the people, and part of doing that well is putting your more idiosyncratic attitudes out of even your own mind.

I think he listens to all sides — even the sides that hate him irrationally and eternally — because that’s his job. Like it or not.

I think he’s not saving the world because, well, first, he can’t, and second, because he realizes that. I really doubt he sets his sights that high. And I would be astounded if he looks on politics as the epic battle between Democrats and Republicans that it is often portrayed to be by the media. He’s a problem-solving type of thinker rather than an ideological type — that’s been widely observed and is after all pretty characteristic of many post-Boomer Americans — but more than that he’s a professor type. That means focusing on taking what you’re given and moving it forward, doggedly, semester after semester. That’s very different from viewing your job as a matter of wins and losses.

A professor is rarely confrontational toward students, except perhaps temporarily to make a point. Most professors genuinely don’t even feel confrontational about their students’ ideas — if you get into this gig at all, you care pretty strongly about the integrity of the process. Truth, to an academic, should be not this answer or that answer to a problem (there are rarely neat and final answers to the questions asked at college level and beyond), but the rigorously honest pursuit of a solution, using all available tools. To do that, you have to listen to everyone, even the ones who seem nuts. They are the most likely, in fact, in my classroom experience, to insert something really innovative into the conversation (though often unintentionally), and they are often the ones to name the elephant in the room. (Naming the elephant in the room is something most academics welcome; most politicians are the ones putting curtains up around the elephant.) Even the students who don’t actually contribute have to be included in the process, because otherwise the process loses all meaning and integrity.

In the Michael Lewis profile, Obama is quoted saying some remarkably professorial things. In a passage about the writing of Obama’s Nobel speech, for example, he is depicted as instructing his speechwriters to put together his favorite authors’ ideas on war — he gathers his sources first, in other words, like an academic would — and he apparently explained to his interviewer that, “[h]ere it wasn’t just that I needed to make a new argument. It was that I wanted to make an argument that didn’t allow either side to feel too comfortable.”

That’s how you lead a classroom discussion. That’s how you compose an argument that gets people to think, instead of telling them what to think.

Then Obama explained his goals for the speech: “What I had to do is describe a notion of a just war. But also acknowledge that the very notion of a just war can lead you into some dark places. And so you can’t be complacent in labeling something just. You need to constantly ask yourself questions.”

This is professorialism at its best. Nothing is black and white. The devil is in the details. Caution. Never get ahead of your evidence. Always. Ask. Questions.

Narrating Obama’s decision not to approve a no-fly zone over Libya that was intended to give an appearance of protecting innocent civilians but could not possibly have helped, Lewis quotes Obama as saying, “I know that I’m definitely not doing a no-fly zone. Because I think it’s just a show to protect backsides, politically.” This stance could read as noble. A president who puts morality (and practicality) above politics. It could be that. It could also be the overwhelming impatience of the true scholar with anything that confuses the fundamentals: the questions, evidence, and reasoning that can solve problems. Arguing about how this or that method of problem-solving looks — or finding ways to avoid the problem altogether — is a waste of time when one could actually be coming up with an answer. Even if it’s not ultimately a satisfying answer, at least you tried, and learned something from the effort that may help future efforts. That’s the pursuit of knowledge.

This professorial quality implies a few things. Most importantly, it implies that Obama believes in and is animated more by the process of governing democratically than perhaps any general policy principle. Compare this to his record, and I think you find a lot of consistency, especially in places where allegiance to party platform or political expediency is sometimes absent. I don’t want to imply that Obama’s professorial tendencies define him completely. None of us are defined by anything so simple. There are no doubt many sides to his character and his decision-making, as there are for all of us. But I think this one part is often unrecognized. I also don’t make any claims about whether these tendencies are good, great, suspect, or terrible in a President of the United States. Like any good prof, I’m just throwing it out there, to see if it makes people think.

Posted in Profession, Random

Rules

via Wikimedia Commons

Sometimes my students get a little too hung up on rules when it comes to writing essays. Mind you, some rules are vital—if your writing is ungrammatical, readers will have trouble following what you are saying. Other rules (which are really more like guidelines) relate to structure and flow, and they also help readers to understand you. Then there are still other rules, which don’t actually contribute much to the reader’s ability to understand and remember your text. These rules aren’t so important. The trick is knowing the difference.

Of course, there are individual readers and—cough—the occasional rogue professor who care very deeply about this third category of rules, and if you’re writing for one of those people you might as well suck it up and follow those rules, too. But you should still know the reasoning behind them, and why in other contexts it might be okay to ignore them.

You should never use “I” in an academic essay.

Often, when a teacher tells you to “not use ‘I’” or to not use it so much, you can safely interpret this as “I need to give more substance to my opinions by inserting more reasoning and evidence, and possibly more sources, into my essay.” In other words, what this teacher often really means is that you’re asking the reader to believe something just because you said it was so – your essay is full of phrases like “I think…” and “I believe…”.

In other instances, students themselves or their teachers may fear that using “I” makes an essay ‘sound too subjective’ no matter how it is used. The truth is, if you are a human being authoring anything, the thing you author cannot be truly objective. There is a difference between saying, “John’s a fraud,” and “I think John’s a fraud,” and it is intellectually honest to differentiate for your reader what is your opinion or reasoned conclusion, and what is taken from the sources you’re citing. In these cases, using “I” is advisable.

However, it is true that some writers use phrases like “I think” more often than is required by the content – it becomes a kind of nervous tic. In this case, many of the ‘I’s can be safely eliminated or changed.

And remember that you can always find another way to convey that an idea is yours, to keep the ‘I’s from getting excessive or to please a professor who, for whatever reasons, particularly despises the presence of the word ‘I’ (though if you dare you might suggest they try searching it on Google Scholar, to see just how prevalent it is in scholarly journals from every field, including the hard sciences).

Note: Years ago, when scholars were perhaps not quite so resigned to their subjectivity, it was common to assume a sort of royal ‘we’ even when a paper had only one author. This is now frowned upon as misleading. The age of intellectual property has trumped the age of positivism! Nowadays, when an author uses “we” it generally refers to the writer and readers together, as in, “now we turn to a new subject.” Some people like this construction (it makes it easier to avoid the passive voice and nominalizations), and others dislike it (they find the intrusion of writer and reader into the text a distraction from the subject at hand). It’s largely a matter of taste and context.

You should never use the passive voice in an academic essay.

You should avoid split infinitives.

You should always have exactly three main points of support.

Always put your thesis at the beginning.

The answer to all these imperatives is, “Actually, it depends.” If there is any general rule that always applies, it is that a writer should be aware of her purposes and her audience, and suit her structure, style, and language to the particular purposes and audience of a given piece of writing.

The passive voice exists in English because it can be useful – not just to hide the subject of a verb (as in, “mistakes were made”), but also to shift the subject to the end of a sentence, where it may be more convenient for reasons of emphasis or transition (such as “mistakes were made by the President, who is now facing impeachment”).

The prohibition on splitting infinitives is borrowed from Latin, where the infinitive is a single word and cannot be split in the first place. But English works quite differently, and sometimes, in English, not splitting the infinitive can cause confusion. So whether you should do it or not depends on the context.

Grammar Girl has a great guide to splitting infinitives and avoiding them.

The five-paragraph essay model works very well when you’re writing an essay that logically only has three major points of support and only needs to be five paragraphs long. However, for the vast majority of essays that don’t fall into that category, you will have to explore more complicated models.

Putting the thesis at the beginning of an essay has many strong advantages, and seems to work best in any case where the reader is approaching your essay for enlightenment rather than for entertainment or pleasure (you don’t, after all, want to keep your grader in suspense about whether you have something worthwhile to say!). But of course, there are exceptions, and you should always consider the demands of a particular instance when you make such choices.

Often, academic writers put a sort of provisional thesis at the beginning, which tells the reader what to expect without going into detail. This is sufficient to contextualize the information to follow, and fulfills the purpose of assuring the reader that you do, indeed, have a resolution to the problem you’ve set up (that is, that you’re a competent and responsible writer). Then, a more elaborate and specific thesis is stated at the end, incorporating terms and claims that have been made clear in the body of the essay but which were, perhaps, too new to the reader to use effectively in the first paragraph.

 

Update: See this nice piece from the Smithsonian on rules that aren’t really rules.

Posted in Teaching, Writing

Bias

View from Victoria Point, from the Robert N. Dennis collection of stereoscopic views, via Wikimedia Commons.

When historians read a text, we are trained to filter what it tells us through an understanding of who wrote it, with what purposes, and with what intended audience. Author, audience, and purpose are all important factors in shaping the meaning of a text, so identifying these factors can help us reconstruct what a text meant to its author, and to the people who read it when it was written. Identifying these factors can also help us to figure out what might be relevant but missing from a text (something the author may not have been aware of, may not have thought was important, or even something the author may have wanted to deliberately suppress).

In college history classrooms, professors ask students to practice this skill, most commonly by assigning “primary source interpretation” essays, where the student takes a historical document (or two) and tries to analyze it (or them) in the way I just described.

Where many students go wrong in this process is in confusing bias with point of view or reasoned opinion.

I’m probably particularly attuned to this mistake because I spend so much time grading primary source essays, but I also see it constantly among talking heads on TV, in written media, and on internet forums. It’s a really insidious problem in our current political climate, in my view, so I offer this version of a handout I use in classes (originally relating only to writing primary source essays).

Bias is a form of prejudice. It refers to opinions or assumptions that one holds despite or willfully in the absence of evidence.

Point of view refers to the fact that no one person can be aware of everything all at once. We all see the world from our own particular perspective.

It is possible (though difficult) to examine an issue without bias, but everyone always has a point of view. Your point of view is the way your previous experience, skills, inclinations, attention and interest limit your experience of the world.

Reasoned opinion is a conclusion, or claim, that a person comes to after examining and reasoning through relevant evidence. This is very different from bias (because it is based on objective reality — evidence and reasoning) and from point of view (because the exercise of reasoning through evidence is the practice of deliberately expanding your personal point of view to include evidence from others’ points of view, or evidence gathered through experimental observation).

When reading a historical text — or when you want to better understand any other text — you should look for bias, point of view, and reasoned opinion. But it is crucial to distinguish between these, because we can draw different interpretive conclusions about an author’s claims based on whether the author stated a given claim in willful contradiction of relevant evidence, merely out of an inability to see or comprehend new information, or lack of access to other evidence, or as a reasoned conclusion drawn directly from all available evidence.

Common mistakes students (and others!) make:

1. Looking for obvious biases (prejudices), but failing to look for “honest” limits to an author’s point of view.

2. Noting limits or absences and attributing these to point of view, without first asking whether the author’s point of view is limited in that way because it rests on biased assumptions.

The way to avoid this mistake is, after identifying limits or absences in a given text, to identify what underlying assumptions about the world led the author to “miss” these key points. How do those assumptions relate to the evidence available to the author?

3. Mistaking reasoned opinion based on evidence for mere bias. If an author seems to “like” a position or be “passionate” about it, they could be biased, or they may be enthusiastic about a conclusion simply because it is an excellent explanation of all known facts. Find out which it is by examining the evidence on which the author bases their conclusion.

Relative enthusiasm, or lack of enthusiasm, tells you nothing by itself.

Message to take home: Always look to the evidence. When someone makes a claim, do they follow it with evidence? Is it good evidence? Is it enough evidence? What part of the claim is an assumption (i.e., not based on evidence)? Some assumptions are reasonable (one has to start somewhere), some seem arbitrary (a bad sign!).

 

Update: Related reading

Posted in History, Random, Teaching

Objectivity

Via Wikimedia Commons

Many students come to college believing that academic writing is objective writing, or is supposed to be, and if it’s not, it’s “biased,” which is another way of saying “bad” or “useless.”

There is no such thing as objective writing.

If something is authored, then that human author’s stamp is somehow on the material, if only in the selection and organization of it (even texts authored by computer are ultimately products of the software, which was engineered by a human being, who made choices and set priorities!).

The best we can do, as writers, is to indicate to the reader explicitly what it is in our texts that comes out of our own heads, what is the opinion of other authors cited in our own work, and what is a reasoned conclusion or a direct report of data (and with the latter, you explain how you derived your data and chose what to share).

Best of all, we can identify and examine our own assumptions about our material, and when appropriate tell our readers what these assumptions are. We can mention that there are other factors or opinions which we have chosen not to go into, and we can say why. (Often, such things are legitimately beyond the scope of your essay, but by telling your reader that you are aware these other factors exist and have made a conscious decision to exclude them, for reasons you briefly explain, you allow them to trust that you are, in fact, in control of your essay and have done your research. Going through these steps makes your reader more likely to trust you with the main points of your argument, as well.)

In other words, the best we can do as subjective, human authors is to acknowledge our subjectivity, to note our biases and assumptions and to factor them explicitly into our writing. Attempting the impossible task of writing objectively can be more misleading than accepting our bias and moving on.

Yet I often see student papers watered down to the point where no analysis is left at all — in some cases, I know the student had interesting and relevant ideas about the material, and I have asked why they weren’t on the page. This is when I hear, “I thought that’s just my opinion, so it doesn’t belong in the paper.”

Analysis is a form of opinion — a very specific form that is based on evidence, in which you explain exactly how you reasoned from your evidence to form your opinion. Analysis is what we want.

Posted in Teaching, Writing

Why you shouldn’t feel bad you didn’t go for (or finish) the Ph.D.

By WMAQ-TV, Chicago, via Wikimedia Commons

Sometimes when I tell people what I do for a living, they tell me they almost got a Ph.D. Sometimes, they say this unapologetically, just as a factoid of interest, but unfortunately sometimes it’s said with a direct or implied apology, and some sort of excuse. As if an explanation were required.

A Ph.D. degree is not the ultimate IQ test.

A Ph.D. is nothing more nor less than a degree required for a particular range of professions (mainly, teaching at the university level). It’s a very narrow degree, and one that is very rarely required. So why on earth would so many people feel bad for not getting one? If you don’t need or want a Ph.D., then you shouldn’t waste your time and money getting one!

Contrary to, apparently, popular belief, a Ph.D. doesn’t test intelligence. True, you probably need to have at least average intelligence to get admitted to any respectable Ph.D. program. But succeeding in a Ph.D. program really depends more on having the drive to complete that particular degree in that particular field than on anything else.

It’s not like intelligence and specialized knowledge are remotely exclusive to people with Ph.D.s. We all experience that in people we meet every day. Yet some people–especially those who are used to doing very well in school–internalize the idea that because they are smart, their success should be defined by achieving the highest possible degree. Well, no, not if that degree is only suitable for one narrow profession, which you might not want.

The people I know who got Ph.D.s (self included, of course) finished the degree mainly because of three factors.

The first and most important factor is that they were obsessed with their field. Some people do finish the degree and decide not to actually practice in the field, but pretty much always, if they finished, they at least had some kind of obsessive devotion to the subject. Sometimes it’s a healthy devotion, occasionally it borders on the pathological, but in any case it’s pretty extreme. Most people just aren’t that into—say—early nineteenth-century Russian women’s mysticism. And that’s okay. We need people with these kinds of interests, but we don’t need LOTS of people with these kinds of interests!

The second factor is that most people I know who finished Ph.D.s aren’t really good at much of anything else. I know that’s true for me. There are other things I can do if I must, but I’m not really very good at them. I’m quite good at researching and teaching the history of Russia, and to a lesser degree, Europe and the western world. Other stuff? I’m average at best, and with most things I’m completely incompetent. I didn’t just end up in a Ph.D. program because I’m pretty smart. Being pretty smart can land you in a lot of different places. I ended up in a Ph.D. program mainly because I wrote a quite decent essay about the upbringing of early nineteenth-century Russian heirs to the throne that had a fairly original argument in it when I was only 22. Not that many people can do that, or more accurately, very few people would want to bother to do that. But, the vast majority of the population can calculate interest rates, change a tire, manage a multi-line phone, and do a lot of other things I’ve singularly failed at (despite numerous sincere and concerted attempts!). We’ve all got our niches.

The third factor I’ve seen that separates those who finish Ph.D. programs from those who leave them or don’t attempt them is that those who finish tend to have some kind of stubborn, perhaps even stupid, determination to finish no matter what, just because. People who finish psychologically have to finish. Those who do not finish often do not need to finish. And may very well be much healthier and better off for it. Have you read my posts about what academia is really like and what it pays, even when you’re lucky enough to get a tenure-track job?

While I’m talking about those who have the stubborn drive to finish, I would like to mention another phenomenon I’ve seen many times.

In the home stretch of finishing the Ph.D. dissertation, when it’s not quite almost-done but too much done to quit, everyone I know has had a moment of crisis when they decide that they absolutely must quit. It’s too much, it can’t be done, the person in question feels like an impostor, the person in question never really wanted it anyway, etc.

It’s important to distinguish this very typical last-minute crisis of the almost-finished Ph.D. from the more serious existential crisis of an earlier-stage graduate student who truly is uncertain about whether the degree is worth pursuing. When you’ve got multiple chapters of the dissertation written (even in draft form), you’re probably one of the hopeless ones who can’t really do anything else, and you may as well finish, since you’re so close. Just know that this crisis is completely typical. But if you’re not there yet and you really don’t feel motivated to get there, ask yourself why you think you should pursue a Ph.D.

If the only honest answer you can give yourself is that you can, because you’re smart enough, then maybe you shouldn’t bother. Plenty of people are smart enough to complete a Ph.D. Only a select few of us are stupid enough to actually follow through, and only because it’s the only thing we can and want to do. If that’s not you, then unburden yourself of the guilt and expectations that a Ph.D. equals, “what smart people do.”  A Ph.D. is usually a ticket to low pay and constant work. If you can think of an alternative you like better, by all means, get out.

(If you can’t think of an alternative and love what you do so much you’re willing to live on mac-n-cheese so you can spend all your time reading obscure monographs on the subject that makes your heart go pitter-patter, well, hello, kindred spirit.)

 

Further Reading: On Being Miserable in Grad School

Posted in GradSchool

What is a Ph.D., Really? And What Is It Good For?

I’ve gotten the impression that many people think a Ph.D. program is like a master’s program, but longer. That you just keep taking courses—like a million of them—and then eventually you write another really big paper, and you’re done. This is kind of accurate, but also wrong in all the most important ways. I’m sure these misconceptions are partly due to the fact that there aren’t really very many movies about people in Ph.D. programs, unlike, say, law school or med school. Unless you count the show Alias, in which Jennifer Garner pretended to be a Ph.D. student by walking around saying ridiculously unlikely things and never doing any work at all. But you can’t really blame Hollywood—people in Ph.D. programs aren’t really very exciting to watch, since they mostly hunch in front of computers for days and weeks on end.

Studies of Academics, by John Hamilton Mortimer (1740–1779), via Wikimedia Commons.

NOTE: Everything that follows is really about programs in the humanities and social sciences, because that’s what I know. I don’t know what programs in the STEM (science, technology, engineering and mathematics) fields are like, but I picture a lot of labs. I’m probably mostly wrong about that. The only thing I’m sure of is that nothing about STEM Ph.D. education resembles anything seen on Numb3rs or Bones.

So, in the U.S., most Ph.D. programs are actually combined with MA programs (not so in Europe and Canada), though if you already have an MA when you enter the Ph.D. program they’ll usually grant you advanced standing, which typically allows you to skip a year of coursework.

But a standard U.S. MA/Ph.D. program in the humanities and social sciences generally begins with the MA portion. For the MA degree, you usually take 1 to 2 years of graduate courses (these are usually the only courses you will ever take in the whole program), and then write a thesis. In history, the MA thesis is usually envisioned as about the size, type, and quality of a publishable article. Ideally. But publishable articles usually max out at 30 pages, and most real MA theses are actually about 50 to 150 pages. So the whole article model thing is a bit misleading. But the MA thesis should, like an article, incorporate original primary source research and original analysis (and, unlike undergraduate essays, it needs to be original not just to the writer but original in the sense that no one has published that argument before).

I should mention here that MA courses are not like undergraduate courses, and MA-level courses in a Ph.D.-granting institution usually vary quite a bit, too, from MA-level courses at an MA-only institution. MA courses involve more reading and writing than at the undergraduate level, and in history it’s often true that you’ll read mostly secondary sources in a grad class, whereas you would read mostly primary and tertiary sources in undergrad. But the main difference is in the kind of work you’re expected to produce. Graduate work assumes you have basic skills and knowledge in the field, and asks you to think critically about how knowledge is produced and to practice more advanced skills, like synthesizing larger amounts of material, and dealing with more difficult primary sources, often in foreign languages.

After the MA thesis, some people decide they don’t want to go farther, and they can leave the program with a “terminal MA.” At least they got something for their time, is the expectation. But most students continue on, sometimes after a review of their progress by their advisor or something like that.

The next stage is often, though not always, marked by the M.Phil. degree. I’ll confess right here that I didn’t know what the heck an M.Phil. degree was even after I got one, so it’s not at all surprising that most people who aren’t in Ph.D. programs have no idea. It’s sometimes referred to as a “research masters,” and I’ve been told that it derives from the British model, where you can (I believe—someone correct me if I’m wrong) get an MA through graduate coursework or an M.Phil. through independent research. Except this makes absolutely no sense in the U.S. context, where the M.A. signifies that you completed coursework and wrote an independent thesis, and the M.Phil. is, in the programs I’m familiar with, a prize you get for passing oral exams.

Oral exams, or comprehensive exams as they are often known (since they aren’t always oral) mark the transition between coursework and going out on your own as a sort of apprentice scholar. Comprehensive exams require the graduate student to demonstrate their comprehensive knowledge of their chosen field, and it’s usually described as preparation and qualification for teaching (as opposed to research, though having this broad background is essential to doing research, too). The format and style of these exams varies a lot, but usually you have from six months to a year to study, and then you are examined in written or oral form or some combination thereof.

As an example, as a specialist in Russian history, my oral exams had to cover four fields, three “major” and one “minor,” and at least one had to be “outside” (of Russia). For a major field you try to cover pretty much everything, and for a minor field you designate some set of themes you’ll cover, that are hopefully complementary to your major fields. My three major fields were Russian history to 1917, Russian history 1917 to the present, and East Central European history from 1750 to the present. My minor field covered a few themes in French and British history from 1750 to 1850, which I chose because it was helpful comparative background for the kind of research I planned to do on Russia in that period. The major fields were chosen to cover all the material I hoped to be expected to teach.

I had an advisor in each field who was a specialist, and those people helped me to create a list of about 100 books for each major field and 50 books for the minor field that represented a comprehensive survey of the scholarship to date (you examine a far greater number of books to start with, and then narrow it down to the final list that you study closely). Then I spent a year reading them all, and taking detailed notes about the major analytical questions, themes, and problems that I saw in each field. This process was a way of synthesizing how each field as a whole has developed.

The exam itself was oral in my case, meaning I met with my four advisors for 2 hours while they quizzed me. These kinds of exams generally aren’t so much about the specific material covered in each book, but about the student’s ability to synthesize these major arguments and see how the individual pieces fit into the whole.

Once you pass your comprehensive exams, you get the M.Phil. degree.

At some point before this time, you probably also have to pass some language exams. Historians tend to need to pass several, though those studying American history may need only one language. For a Europeanist historian, you usually need to pass at least three language exams, and in some fields you may need as many as five. These exams are usually written translation only, with a dictionary, because those are the skills you will need to handle foreign sources in your research. In my case I needed to pass exams in Russian, German and French. At the exam we were given passages in the language at hand that represented the kind of source a historian would read—often an analytical piece written in, say, the early nineteenth century. We had to translate them into English in a way that was both scrupulously accurate and readable.

After you’ve passed all your exams, the next step is the dissertation prospectus. This is a proposal outlining what your final, independent research project will be. The dissertation is meant to be comparable to a publishable book, and in this case it usually really is that, because in order to get a teaching and research job, in many fields you’ll have to publish a book within the first few years, and the dissertation is often the first draft, in a way, of this book. It must be based on original research and make an original argument, and it must be a significant contribution to your field of study (more so than an MA thesis).

So, for the proposal, you of course need to have some idea of what you want to research, and then you spend some time doing the necessary background reading and finding out, in very practical terms, what you will need to do to complete the dissertation.

For a Europeanist historian like me, this mainly means finding out what kind of archival sources exist, where they are, roughly what they might be able to tell you, etc. When your archives are located outside the U.S., you need to start applying for funding that will pay for your travel overseas, as well. Other social scientists need to plan and organize different kinds of research models, exploring possible methodologies, preparing interview questions and so on. Some other social scientists also travel, for “field work,” where they observe or interview subjects in a given location, but others work with computer modeling or published sources, etc.

In any event, all this planning and then writing up a detailed proposal about what your research and the dissertation will look like often takes about a year. Then you defend your proposal before a faculty committee of specialists in the appropriate fields, both from within your own university and from outside it. They ask you lots of pointed questions to try to make sure your plans are realistic and your thinking is coherent and reasonable.

Once you pass your proposal defense, you are “ABD.” ABD is not an official designation, but it is very commonly used—it stands for “all but dissertation.” It means you’ve completed all the requirements of the program except for writing and defending the dissertation. ABD is a somewhat ironic designation, because it sounds like you’re practically done, except that the dissertation is really the heart and soul of any Ph.D. program, and all the rest is, in a way, just a lead-up to the Real Show.

This is also the stage where the time to completion varies incredibly widely, which is why, when you ask “how long does your program take?” or “when will you finish?”, most Ph.D. students can’t answer, and many will squirm miserably at the very question.

The dissertation stage takes as long as it takes.

In some fields, if you don’t have to travel and all your sources are readily available, you can go straight from the prospectus defense to “writing up” and be done in about 2 years, usually. Since coursework is often 2 years, plus 6 months to 1 year for the exams and another 6 months to 1 year for the prospectus, the shortest Ph.D. program is generally about 5 to 6 years of post-graduate work (again, this can vary significantly in the STEM fields).

But, if your research requires significant travel, that part alone can take at least one full year before you can even begin to “write up.” That typically makes 6 to 7 years a bare minimum for anyone studying the history of a place that is not local to their university, for example. Those of us who travel abroad for extended periods, often to multiple countries and/or dealing with sources in multiple languages, often also need extra time for all the translation, and sometimes for language study, when the sources are in a less commonly taught language, like, say, Turkish or Georgian, which you often have to go abroad to study at all. And once you’ve got all your sources (and, if necessary, translated them and/or used computer modeling or database software to manipulate or analyze your data), then you can finally begin to write all this information into something coherent. This last phase can take any amount of time, depending on how you write.

By this stage, any graduate student will have written many scholarly papers, but the dissertation is really fundamentally different because of its scale. A book-length academic project requires extraordinary information management just to keep all the data straight and accurate, and then the bigger scope of the arguments also requires a more complex engagement with larger numbers of secondary works, and more complex thinking, to communicate clearly about something so comprehensive, without skimping on any of the nuances. It’s bloody hard work. I’ve never seen anyone do it in less than a year, and I’m very impressed by 2 years. Many people take more like 3 or 4, especially if they’re teaching at the same time. Add in the fact that most graduate students at this stage are in their late 20s or early 30s, so that many are getting married and starting families (if they can manage it financially on a scant grad student stipend) and all that can add further delay.

I should also mention that your guide through this final stage of dissertation researching and writing is your advisor, someone who has probably guided your progress from the beginning of the program, but who now takes on primary responsibility for keeping you on track and, hopefully, catching you before you make any really awful mistakes. Over the course of the whole Ph.D. program you are moving farther and farther away from the student-teacher model of education. At first you take courses, but then with the MA thesis, the exams, the proposal, and finally the dissertation you work more and more on your own at each stage, until by the time you finish your dissertation you are most likely the world’s foremost expert on your topic (since it was chosen to be an original contribution to the field), and you have gradually—sometimes somewhat uncomfortably—transitioned from being a student to being an independent scholar and a colleague to the other scholars in your discipline.

So far I’ve only briefly mentioned teaching, but that’s the one other common part of a Ph.D. program. Some programs require no teaching at all, but that is becoming downright rare these days. My program required, as part of its funding package, three years of being a teaching assistant. TAs in history led discussion sections, gave guest lectures occasionally, and did most of the grading. This is a fairly common scenario. Often, after the TA requirement is fulfilled (usually in the second, third, and fourth years of the program), advanced-stage graduate students will apply to teach as instructors, where they lead their own courses. Sometimes a lucky grad student can create the course of their choice, but more often they teach the freshman survey courses, or required writing courses, and that sort of thing.

When I started my program, there was no formal guidance whatsoever given to grad students on how to teach. We were just thrown into classrooms to figure it out. From the university’s point of view, we were just cheap instructors, and it was up to the individual faculty members we worked with as TAs to give us guidance, advice, or instruction—or not—entirely at their discretion. In my experience some faculty members took this responsibility very seriously, others less so. While I was in my program, however, I was part of a collective effort on the part of grad students to create our own teaching training program, and our program was eventually adopted by the whole graduate school. Right around that time, in the early 2000s, there was a general consensus that teacher training needed to be integrated into graduate programs, and that is increasingly becoming the norm today, thankfully.

Right now, because of the miserable state of the academic job market (with the exception of a very few fields, there are many times more qualified candidates than there are jobs available), it’s more difficult than ever to get any kind of academic employment with a Ph.D. from anything but a top-tier school (which schools are top-tier varies by field). In the last decade, the American Historical Association has criticized programs that offer too many doctoral degrees, as well as third- or fourth-tier programs that still offer doctoral degrees to paying students, knowing that those students will very likely never be employed in their fields. Basically, if you have to pay to go to a Ph.D. program, you probably shouldn’t go, because the reputable ones are now under considerable pressure not to admit students without funding (there are occasional exceptions—sometimes you are expected to pay tuition the first year with the expectation that if you perform satisfactorily funding will be granted for subsequent years, but this can sometimes be fishy, too—do your research).

Most recently, the AHA has been recommending that programs incorporate training in so-called public history, and in other alternative career paths for Ph.D.s. Public history includes museum work, community outreach, documentary filmmaking, etc. Other alternative career paths mainly include government and corporate research or think tanks. There is some resistance to this pressure—many programs argue that they are not equipped to train students in these directions, and others point out that the job market is little better in any of these alternative fields. But the overall trend is for fewer, more elite programs to offer degrees to fewer people (with better funding), and to diversify the training as much as possible.

On the whole, I think you can see that a Ph.D. is a unique education, encompassing tremendous breadth and depth, and is more like a professional apprenticeship than the model of being a student forever that many people imagine. It probably requires more drive and stubbornness and dogged work than it does pure brain power, and anyone who completes the process very likely has an extraordinary ability to process information (because at bottom that’s what it’s all about). There are plenty of things a Ph.D. is not remotely useful for, but what it does, it does well.

 

Further Reading: On Being Miserable in Grad School

Posted in GradSchool, Profession

Should you go to the best school you can get into?


Harvard Gate. Not the only way in to the educated life. (Image via Wikimedia Commons)

Students ask me this question a lot, usually about graduate programs, and sometimes I get asked about it with regard to choosing an undergraduate program as well. Especially in these days of astronomical tuition costs and uncertain job market potential, it’s important for students to really think through the cost/benefit ratio of a program before committing (with the caveat, of course, that education is much more than a ticket to a job!).

My answer to this question is the same answer I (like most academics) always give to almost every question:

It depends.

This is why academics annoy people, I know. But really, the answer is complicated, and entirely depends on factors specific to each applicant.

Advice for everyone:

In terms of pure quality of expertise, the faculty are broadly comparable at any institution of higher education in the U.S., since for the last several decades institutions have all hired from the same overpopulated pool of people with Ph.D.s from a small circle of prestigious graduate schools.

But there can be very big differences in, first, how much one-on-one interaction you get with faculty, and, second, the culture of the student body—how focused students are, how motivated, and how stimulating they would be for you. These differences don’t correlate with the superficial prestige of a given institution—schools at all levels vary widely in these terms.

In many cases, you can get an outstanding education at relatively low cost at a public institution, and you will have missed nothing by bypassing Harvard.

However, in some cases the cost-benefit ratio is different: what you personally can achieve with a more prestigious degree may justify a higher investment in obtaining the degree.

And sometimes a very expensive private institution may actually be cheaper than a public one if they want you badly enough to pay you to come!

In short, making the best choice for you depends on doing a lot of very specific research. And you can improve your range of choices vastly by preparing well: do your best work at every level of education, engage thoroughly in your courses, and talk with faculty and professionals in the fields that interest you. Get as much information as you can before making your decision.

Advice specific to aspiring undergraduates:

The answer to the question of which school you should go to depends on what you want to get out of your degree, on your personality, and on the field you will study (which of course you may not know yet!). But the short answer is that making the right choice for you needs to be a much, much more complicated reckoning than just U.S. News and World Report school rankings (which actually tell you nothing at all of use).

At what kind of school are you most likely to do the best work you’re capable of?

A small, residential college that feels like a family?
A bustling, huge research school that gets your juices flowing?
A place where you’re around students that are a lot like you?
A really diverse group?
People who will constantly challenge you?
A place where you’re the “big fish” and can feel confident?

How important is the name on the diploma for the specific kinds of jobs you want (and how likely are you to stick with that goal)?

This consideration necessarily involves taking a big risk, because you may very well change your mind about a career goal. But in any case, it’s worthwhile to do careful research about several prospective careers that interest you. If you can, interview people who have the kinds of jobs you want, and ask what level of education is required, what kind of GPA is expected, how much employers care about what kind of school you went to, and many other questions too, about salary, job satisfaction, rate of advancement, benefits, etc.

How important will it be to your career goals to have one-on-one faculty mentoring?

Will your future employability rest on recommendation letters and/or connections, or on your test scores and degree from a post-graduate professional school?

What do you want from your education besides employability?

College should also enrich your life and your mind in ways that cannot be measured in dollar signs. What kind of enrichment do you most want and need?

Do your horizons need to be broadened by a place different from what you’re used to?

Do you need a really rigorous environment where the “life of the mind” is the primary focus?

Do you need access to lots of activities to siphon off all your excess energy, so you can focus?

Do you need a comprehensive general education program that forces you to explore fields of study you tend to avoid when left to your own devices?

Or do you need/want to specialize very intensely (think really carefully about that one — what if you change your mind? — would you still have options?)

Find out exactly what the financial picture would be for you if you went to each of the prospective institutions you’re thinking about.

Don’t just look at the ticket price listed on web sites! The most expensive private schools also tend to offer the most aid, and more often in grants than loans, as compared to other schools with smaller endowments. Do all the calculations (including room and board and living expenses, taking into account cost of living in different areas) for each school. If you’d need loans, find out how much your payments would be after graduation, the interest rate, and how long it would take you to pay it off assuming an average starting salary for the very specifically defined types of jobs you hope to get. You may have to go through the whole process of applying and filling out the FAFSA before you’ll know the real numbers for each school, and it may be worth applying to one or two schools you think you can’t afford, to see what they can offer you.
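If you want a rough sense of these numbers before the official aid letters arrive, the standard loan amortization formula is enough for a back-of-the-envelope estimate. Below is a minimal sketch in Python; the loan amount, interest rate, repayment term, and salary are made-up placeholders, not figures for any real school or career, so substitute your own.

    # Back-of-the-envelope student loan math (all figures below are hypothetical)
    def monthly_payment(principal, annual_rate, years):
        """Standard amortized loan payment: P * r / (1 - (1 + r)**-n)."""
        r = annual_rate / 12      # monthly interest rate
        n = years * 12            # total number of monthly payments
        if r == 0:
            return principal / n
        return principal * r / (1 - (1 + r) ** -n)

    payment = monthly_payment(40_000, 0.06, 10)   # e.g., $40,000 at 6% over 10 years
    annual_loan_cost = payment * 12
    starting_salary = 45_000                      # assumed gross starting salary

    print(f"Monthly payment: ${payment:,.2f}")
    print(f"Share of gross salary going to loans: {annual_loan_cost / starting_salary:.1%}")

Even a rough calculation like this makes clear how sensitive the picture is to the interest rate and the repayment term, which is exactly why it’s worth getting the real numbers from each school’s aid office before you decide.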

Advice for aspiring graduate students:

Again, the answer here depends on your field and prospective employment after graduation. But at this level in certain cases it probably matters more that you go to a highly ranked school for your subject than it does in undergrad. In other cases, it matters even less! Read on.

First, a given institution can be top-tier for one degree program, second-tier for another, and third-tier for still another program. And Ivy League schools, or other top schools everyone has heard of like Stanford, Berkeley, and Chicago, are not automatically the “best” schools for a given field of study. You need more specific information. The best people to ask are probably recent graduates from programs you’re interested in, who are now employed in the kinds of work you want.

For master’s-level work, the prestige of the degree-granting institution is less likely to matter than for other graduate degrees. Sometimes, if you’re already working in a given field, you can get tuition assistance from your employer for a local graduate degree. Look into this before starting a program. And, if you wish to work in a given location, local programs may make you more employable than distant programs that technically rank higher.

In master’s and doctoral programs in the liberal arts, you’re more likely to work with a specific advisor, and having a great advisor who actively supports your work and is widely respected in the field may be more important than the prestige of the institution you attend. This is something you should talk over in very specific terms with undergraduate advisors or other academic mentors.

BUT—be very wary of a general liberal arts master’s degree. These can make you “overqualified” for many jobs, and not qualified enough for others, leaving you in an academic no-man’s-land. Only go for a liberal arts master’s if you know exactly how you will use it, and that it is certainly required (or, if you can afford it, if you simply want to enjoy the education!).

An MA program can be a way of strengthening your application to a Ph.D. program (but an incredibly expensive way; you may be better off excelling in your BA and writing an impressive thesis). This is different outside the U.S., so again, consult advisors about your specific situation.

An MA can also be a way of achieving a higher income for teachers, librarians, and other professionals, but you should find out exactly what programs are preferred, when you need to complete one, and whether your employer can help you pay for it.

For law school, things are quite different in several ways. First, many law firms seem to be especially concerned with the prestige of the school you graduated from. There are many, many law schools out there that are happy to take your tuition money even though they may not make you employable at all. Get information from knowledgeable people in the kind of law and location you hope to work in, about where most of their lawyers got their degrees.

Medical and business school are similar to law school. Law, business, and med students tend to borrow enormous sums on the assumption that their high salaries after graduation will make repayment possible. This may be the case, but know that:

(a) for your first several years in your profession, assuming you’re hired, your income will mainly go to paying off your loans

(b) you may graduate into a glut in the market, and be saddled with an impossible debt burden

(c) not all medical, business, or legal jobs pay equally well. Many lawyers, especially, do not earn the kinds of incomes required to pay off law school debt.

Then there’s the Ph.D. (or the MFA and similar terminal degrees for the arts). Here’s another field with a glut of qualified graduates: academic research and teaching. College-level teaching almost always requires a Ph.D. In almost all academic fields, the number of Ph.D.s from top schools is vastly higher than the number of positions, so that graduates from even second-tier schools are limited to adjuncting (this is slave labor with extremely low wages and no benefits, and very little hope of moving to a permanent position), or community college positions (which tend to be all or mostly teaching positions at lower pay than 4-year institutions).

The advantage to teaching at a CC is that there are many of them, usually in every community in the country, so you may be less geographically circumscribed than if you search for a tenure-track position at a 4-year. But, increasingly community colleges are able to hire people from top-tier institutions, so even this is not a given. You should research your field very specifically.

There are a few fields in which academic jobs are actually growing (being both interdisciplinary and very applied in your research seems to be the key here), and a few where salaries are higher than average (accounting, law, etc), but still less than in non-teaching positions in the same field.

Whatever level of prestige the school you choose enjoys, it is NEVER a good idea to enter a Ph.D. program without full funding (tuition, fees, plus a stipend). It is extremely unlikely that a Ph.D. will earn you enough to pay back years of loans. Don’t ever plan on it.

Important final caveat for prospective students at all levels:

You have to ask yourself all these questions. If you allow other people (say, your parents or friends or academic advisors) to tell you who you are and what you want, you may find, after much time and money have passed you by, that their image of you was filtered through their own limited perceptions and their own wishes for you (it always is), and therefore not entirely accurate.

Exploring what you really want and need is difficult, especially when your experience of the options is still limited. Consulting with others is a good idea, but test everything you hear by the yardstick of your own gut instinct about your skills, goals, and potential. The best you can do is to continually re-assess as you gain more experience. No decision is 100% irrevocable, and often the twisty path takes you exactly where you need to go, when a shorter, straighter path may have rushed you to the wrong destination.

And, of course, you should never just take my word on any of the issues raised here. I wanted to raise questions worth asking. Other academics will give you different advice based on their experiences. Perhaps some will do so in the comments on this post!

 

Update: some links.

Posted in Teaching

What is academic history?


Thomas Henry Huxley by Theodore Blake Wirgman. Via Wikimedia Commons.

History is unique in being counted (or confused) as falling under both the social sciences and the humanities.

From its beginnings in oral storytelling, history was a partly literary exercise (and thus a part of the humanistic tradition) until it became professionalized in the nineteenth century.

From at least that time, history has also been counted as a social science because modern historians use objective data as evidence to support larger claims, and employ methods that are loosely based on the logic behind the scientific method. Some of our evidence is empirical (gathered through experiment or observation, as in the natural and social sciences), and some is interpreted through the “close reading” of texts (as is the evidence in other humanities fields, like literature and philosophy). In fact, as the study of everything that has happened in the past, in a way history can be said to encompass all other disciplines, with all their diverse methodologies.

Historians also rely on an exceptionally broad range of types of evidence: we use documents of every kind (public and private, statistical, official, informal, etc) as well as literature, but also fine arts, everyday objects, architecture, landscape, data on demographics, climate, health, etc, and just about anything else.

What holds together this very broad field is simply that we all study the past. That is, a historian of science may need to master many principles and methods of scientific inquiry, but her goal is to understand the development of science over time; contrast this to the scientist who may share some principles and methods with the historian of science, but whose goal is to further new scientific knowledge, rather than to understand how it developed up to the present.

More specifically, historians can be distinguished from scholars in other fields by the kinds of questions we ask. The questions historians ask can usually be reduced to some combination of the following:

(a) change and continuity over time
(what changes & when, what stays the same while other things are changing)

(b) cause and effect
(which factors affect which outcomes, how and why)

Dates, events, and famous names are elements we seek to master only so that we can more accurately explain the bigger questions of continuity, change, cause and effect.

Understanding the past helps us to know ourselves better (since we are in many ways the products of our pasts), and also to understand in a broad sense how societies behave, and how the constraints of particular societies affect their behavior.

This understanding – though always and inevitably imperfect – is worthwhile in its own right and can also help us to better understand our choices in the present.

Although historical methods are often grounded in theoretical models and strategies (as in all academic disciplines), historians place unusual emphasis on distinguishing between specific contexts (time, place, social/intellectual/political/cultural climate, etc), as opposed to other disciplines which often aim to formulate models that apply accurately to many contexts.

In other words, we’re not lumpers, we’re splitters.

For example, when we as a society wonder about the causes of war, a political scientist may seek to distill the common factors causing many past wars so as to ultimately formulate a working general theory that will (one hopes) accurately predict the causes of future wars.

The historian, on the other hand, is more likely to delve into the unique factors of each particular context in order to understand what caused that war (but not others).

The historian’s ultimate goal, in this example, is to discern how particular contexts affect particular causes (i.e., identifying unique factors and tracking how they affect other factors), rather than directly predicting future events or reducing particular phenomena to general principles.

Note that both approaches are valuable and informative, and – interestingly – they each can serve as a check on the excesses of the other.

Posted in History, Profession, Teaching

“Summarize”

Ball point pen writing, via Wikimedia Commons.

If you’re a college student you may often be asked to “summarize” a text or film. The tricky thing about this is that people use the word “summarize” pretty loosely, and what is being asked of you might not be what you’re actually doing. To clarify the difference, it can help to be more picky about what we mean by “to summarize.”

If we’re being picky, then, “to summarize” in a general, non-academic context usually means to simplify.

To summarize in this sense is to touch on all the most important and interesting pieces, to highlight them or to communicate them to someone who is unable to read the original text. In this kind of summary, you’re usually looking for coverage – you want to hit all the main points, and usually in the order you found them in the original. You sacrifice depth for breadth, and that often means leaving out the complicated parts.

Students tend to have come to college with more or less this notion of what a “summary” should look like, probably because they’re used to textbook writing. In textbooks, by definition, very complex ideas are simplified, because the purpose of a textbook is to convey large amounts of general knowledge, rather than to further our knowledge in specific, new directions. So a textbook summary tends to focus on coverage of all relevant main ideas and may leave out many complexities or nuances, so that you get a complete overview, rather than depth on any particular point. Students may sometimes be asked to do this kind of summary for a very simple assignment, when the goal is only to show that you read the text, for example.

But it’s usually not what the professor is really looking for.

The reason summarizing gets tricky at the college level is that in the academic context, where our main goal is to think critically about what we know and don’t know and why (not just to memorize facts), the most important and interesting bits of a text are not simple, and shouldn’t be simplified, as that would deprive them of their interest and importance. Usually, in academic writing, we summarize another work in order to question or elaborate on its conclusions in a new context. If we start with a simplified version of our sources, our own analysis can only be superficial, and very likely inaccurate!

So, when you’re attempting to “summarize” a text that you will use as a source in your own paper, you need to do something much more complicated than just hitting all the main points in their original order. You want to engage with the text in depth, not just skim its surface. This is why in my own classes I use the more precise term “to distill,” which is a metaphor for exactly the action we want in an essay – a taking out of selected bits, without changing their nature.*

When you distill a source that you want to use in your own essay, you usually do not need to cover every key point of the text. Since the source text probably wasn’t written on purpose to be used as a source in your essay, and in fact had different goals of its own, parts of the source text may not be relevant to your essay. Those don’t need to be covered, then. Instead, you want to hone in on the parts of the source text that directly relate to your goals for your essay. And when you explain these relevant ideas, you want to very deliberately avoid simplifying them. Focus your energy on explaining what is complex, interesting, controversial, incomplete, or questionable about the source text, because it is these nuances that you will want to develop in your essay. This is what we mean by “analysis,” another potentially confusing word you see a lot in assignments—when you analyze a text, you apply your own thinking about the source texts, evaluating their assumptions and sources and goals and logic. You can’t do that if you’ve ignored all the details from the source text.

This confusion about what we mean by “summarizing a source” in an academic essay is actually not a minor matter of semantics at all. When a student summarizes source texts in the sense of simplifying them, the student leaves him- or herself with ideas that are too small and too simple to work with. So the student has nothing to add, and therefore no argument. And next thing I know, I have a stack of essays to grade that were supposed to be analytical, but a huge percentage of them have no argument at all. That is a sad state of affairs for us all!

* I got the term “distill” and countless other useful ways to talk about writing from the University Writing Program at Columbia University, directed by Joseph Bizup, who trained teaching fellows like me. It’s a great term that has served me well in the years since.

Posted in Teaching, Writing

Scrivener: A Love Story


If this were how I had to write, I don’t think I’d write. Image via Wikimedia Commons.

When I was in the early to middle stages of revising my dissertation into a book, I discovered Scrivener. At the time, the Windows version had just been released in Beta. I tried it, and it was still too buggy to use on a huge project that was fast approaching its deadline, but oh, oh did it have incredible potential! My mind was blown. So much so that, I’ll admit, Scrivener was a fairly major factor in my decision to switch to Mac (it was time to get a new laptop anyway, I’ll say that much for my sanity).

Importing a 300-page manuscript full of footnotes was a bit of a pain. Scrivener isn’t really intended for large-scale importing of a whole project at once like that. But it worked. And then my life was changed.

No, really, this software changed my life.

My dissertation project had begun many years before, and I had gone through several major moves, including international ones, with all my research notes and drafts, and I had switched software for various aspects of the data management several times. In short, all my materials were a bloody mess. And here I needed to quickly revise this enormous beast in significant ways — I added four new half-chapters, new framing on every chapter, new introduction, and bits and pieces of new research throughout. It was a monster to keep track of it all.

And I am not someone who deals well with that kind of situation even on a small scale. I think in circles and spirals, not straight lines. I can’t keep anything in my head that isn’t right in front of me. This whole project had the potential for disaster.

But Scrivener was, seemingly, devised for people exactly like me. Scrivener is not word processing software (although it can do all the basics of word processing). It’s part database, part outliner, and mostly it’s something else entirely — a virtual version of a stack of legal pads, index cards, paperclips and a bulletin board. But you don’t have to carry all that paper around with you, and you can’t lose any of it, since it’s got a really smooth automatic backup system. In addition to all that — and many more features aimed at fiction writers that I haven’t explored at all — there are some really nice whistles and bells that just make it very pleasant to use.

Here’s how I use it. At first it was just for the dissertation, so I’ll start with that. Once I’d imported my huge text file and figured out how to get all the footnotes looking right (actually looking better – in a panel beside the main text, much easier to see while editing), I started splitting my text up. One of the core features of Scrivener is that you can break your text up into chunks of any size, and the smaller your chunks, the more you’ll get out of Scrivener. So I didn’t just break it up into chapters, or subsections, but into paragraphs. Each chunk gets a title, and these titles are displayed in a panel next to the text as you’re reading it, so in effect the outline of your whole text is right there in nested folders, which you can quickly and easily rearrange. (Scrivener will also output all your data and metadata into a proper outline where you can change things in groups, etc.) Just the process of splitting up and labeling my chunks of text revealed many places where the organization was a lot less logical than I’d thought, so I did quite a bit of rearranging just in the process of importing.

Each chunk of text has a virtual index card attached to it (I love that it looks like an actual index card), which you can either auto-fill with the beginning of whatever’s written in that chunk, or you can fill with your own summary. There’s a corkboard view where you can see just the index card versions of your chunks, and rearrange them at will. This is incredible.

Years earlier when I was finishing the dissertation, I had actually printed out several chapters, cut them up into paragraph-size pieces with scissors, and spread them all out on my living room floor. That exercise was incredibly helpful, but it was such a big project that I only did it once. With Scrivener I can do it easily and often, with no mess, and no trees killed for my efforts.

Each chunk of text can also be labeled for easy sorting (like “Chapter,” “Front Matter,” “End Matter,” etc.), and can be marked with a status (like “To-do,” “First Draft,” “Final Draft,” “Done”). You can set the options for label and status however you want. In addition, you can add as many keywords as you choose (like tagging — I can add “gender,” “upbringing,” “childhood” to one paragraph, and “gender,” “estate management,” “needlework” to another, and later sort all my chunks to see those that all have “gender” in common, or just the ones on “childhood,” etc.).
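For readers who think in code rather than corkboards, the underlying idea is simple: each chunk is a piece of text with a set of keywords attached, and “sorting by keyword” is just a filter over that collection. Here is a tiny, hypothetical Python sketch of the concept; it is not Scrivener’s actual file format or API, only an illustration of what keyword filtering buys you. The chunk titles are invented for the example.

    # Hypothetical sketch of chunks with keywords (not Scrivener's real data model)
    chunks = [
        {"title": "Childhood at court",  "keywords": {"gender", "upbringing", "childhood"}},
        {"title": "Running the estate",  "keywords": {"gender", "estate management", "needlework"}},
        {"title": "Coronation ritual",   "keywords": {"ritual", "court ceremony"}},
    ]

    def with_keyword(all_chunks, keyword):
        """Return every chunk tagged with the given keyword."""
        return [chunk for chunk in all_chunks if keyword in chunk["keywords"]]

    # "Sort" the project by a tag: both gender-tagged chunks, or just the childhood one
    print([c["title"] for c in with_keyword(chunks, "gender")])
    print([c["title"] for c in with_keyword(chunks, "childhood")])

The point is just that once every chunk carries its own metadata, any cross-cutting view of the project (by theme, by status, by chapter) is a cheap query rather than a reorganization of the manuscript.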

Each chunk of text also has a free field where you can add notes, like “did I double-check this in Blum?” And you can also insert comments into the text as you do in the revision mode in MS Word. So, you can have comments pointing to one spot in your text, or comments referring to a whole chunk at once. There are, in addition, a bunch of options for custom meta-data and internal references that I haven’t even begun to explore. All this metadata displays in another frame on the other side of the text you’re reading. You can hide this frame, or the one showing your folders, at any time.

One of my favorite features (though it’s so hard to decide) is that you can also split the main text frame, vertically or horizontally, to compare two chunks of text. This feature alone would have been life-changing to me, even without all the rest. I compare documents and cut and paste between chapters or separate files constantly, and even with all the screen real estate in the world, there’s no way to do this in Word without aggravation (and endless confusion about what was changed where and when — in Scrivener everything is in the same file, with created and modified dates on every chunk of text, not just the whole file, always visible, without clogging up space). On my 13” MacBook Air, I can split the text screen horizontally and still see the folders on the left and the metadata on the right. Or, I can hide those two side screens and compare documents vertically, for more intense editing back and forth. All of this can be done with quick, one-step, intuitive clicks.

While I’m writing, the word and character counts show on the bottom of the screen. I can set daily targets for myself (or in my case limits!).

I can also view my text in any old font or size, secure in knowing that when I’m ready to compile into a Word, RTF, or PDF file, I have saved settings that convert everything automatically to the output style I want. All that is easy to do in your custom way, though there are also settings available for the basic options (for people who write things like screenplays, there’s much more to all this). I like that I can read on-screen in 18-pt Helvetica, or some random combination of sizes and fonts that result from pasting in text from a variety of notes files, for example, without it affecting the finished product, and without having to fuss about cleaning up a bunch of little inconsistencies.

I also imported Word and PDF files that I needed to refer to but that weren’t part of my text. These go into a separate folder, where they can’t be edited but can be viewed alongside your text in the split screen, for reference. Awesome.

Right now I’m really enjoying the first stages of starting my new project on Scrivener, building up the organization and metadata from the start, but there were some particular advantages, too, to finishing up my first book project in Scrivener. As I went through my research materials collecting bits and pieces that needed to be added, I imported them into Scrivener as separate chunks of text. I labeled them as “Added Bits,” which gave them a different color in the folder hierarchy and outline, so they could be spotted easily as I integrated them into the main body of the text in the places I thought they should eventually go. As I worked my way through them, I could either change the label or merge the added bit to a chunk of the original text, as it got integrated, or I could shift it off again to another folder labeled “rejects” or “spin-off article.” When you compile your text into a word processing file, it’s easy to un-select any folders like this that aren’t intended to be part of the whole.

Once I got going with all this, I found that I could use Scrivener for practically everything I do. Most significantly, for all the writing I do for teaching. I have one Scrivener project for all teaching-related materials: syllabi, assignment sheets, handouts, etc. I keep a template that contains most of the boilerplate text for my syllabi, for example, and can very easily slip in the updated text for a particular iteration of the course, then, with a few clicks, compile it straight to PDF in my established format for syllabi. I can easily separate out a chunk of text in a handout that changes when I use it in different courses, for example, with all the alternate versions I need for just that chunk, while the rest of the handout is common to all versions. That way, I can update part of the common sections of the handout, and when I compile one or another version, that update will automatically be there. I can collapse the subfolders for courses I’m not currently teaching, yet still have them handy when I want to go back to an old handout for a new purpose. I have files with reference material like the official college grading scale, official verbiage about department goals and requirements, etc, so that I can grab it when I need it without opening new files, without constantly updating an external folder system full of duplicates, etc.

And now I even use Scrivener for writing blog posts. When I have a random bit of an idea for a post, I create a little “chunk” of text for it in Scrivener, so that I have a running list of many potential posts in various degrees of completeness from raw idea to ready-to-publish (each one labeled with a click and automatically color-coded). This way I can add a bit here or there whenever a moment presents itself, without losing anything or getting buried in duplicates. Or accidentally publishing a half-baked post!

It’s also easy, once you have a system down, to create a template in Scrivener that you can use for future projects, and then these templates can be easily shared. I made very basic templates for my own purposes (and to share with my husband), for a book-length historical research project, an article-length project, and teaching materials. These templates don’t use the vast majority of Scrivener features — they’re really just a system of basic organization that I don’t want to have to recreate again and again. I’ve shared them on my academia.edu profile if you’re interested.

To conclude this story of a love affair, I’ll admit that I’ve had one problem with Scrivener so far, and I don’t know if it was my fault. The word count of my manuscript in Scrivener was drastically different from the word count I got when I compiled it to Word. By 30,000 words! This is of course a very serious problem. I assume that Scrivener was not counting the notes or some part of the front- or end-matter, but I did very carefully (many times!) check all the settings and make sure the right boxes were checked to include all those. I tried comparing a short, plain-text document, and the word counts were comparable. It may be that the many abbreviations in my footnotes were handled differently by Scrivener’s word counter than by Word’s (though I don’t think that could add up to such a huge discrepancy). Right now, I don’t think Scrivener is really designed for massive scholarly research projects with more than a thousand footnotes. It can handle them, but it wasn’t really designed for them, and that may be part of how the word count ended up so far off. I haven’t gotten to the bottom of this issue, and I welcome thoughts others might have about it. In any case, now that I’m aware of the issue, it’s simple enough to compile the text after any major changes to keep a rough gauge of the difference between a Scrivener word count and Word’s.
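(For what it’s worth, one cheap way to get a third opinion on the count is to compile to plain text and count the words yourself. Here is a minimal sketch, assuming a plain-text compile saved as manuscript.txt, which is a made-up filename. It won’t match either program’s counting rules exactly, since each has its own treatment of hyphens, footnote markers, and the like, but it gives a stable independent number to compare both against after major changes.)

```python
import re

# Count whitespace-separated word tokens in a plain-text compile of the manuscript.
# "manuscript.txt" is a hypothetical filename for illustration.
with open("manuscript.txt", encoding="utf-8") as f:
    text = f.read()

words = re.findall(r"\S+", text)
print(f"{len(words):,} words")
```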

Posted in Random, Research, Writing | Tagged | 3 Comments

Money

Bundesarchiv, Bild 183-19204-013 (currency reform: woman with banknotes), via Wikimedia Commons

I learned not long ago that as a tenure-track assistant professor* of history I was making the same salary as a deckhand on the Staten Island Ferry.

I don’t begrudge the deckhand his salary one bit, because I know as well as anyone that you can barely support a couple of people on that money in New York City.

Also, while I believe my work is very valuable to society, I think everyone’s work is valuable. We need deckhands.

The thing is, the deckhand on the Staten Island Ferry probably doesn’t pay $1,150 a month in student loans (it was $950 until Sallie Mae hiked the rate on us again). And he probably started work before the age of 30, because he didn’t need 8 years of post-graduate education to get that job, so he got that salary or something like it for all those student years that I was living on beans and rice and couldn’t afford coin-operated laundry machines. And hopefully (though these days you sure can’t count on it) he’s been paying into social security and a pension plan all those extra years that I wasn’t, while I was still being trained for my job. So that means the deckhand on the Staten Island Ferry is very significantly better off than I am financially. And let’s just remember that the deckhand–though his work is as valuable as anyone’s–is not the driver of the ferry, who has the safety of thousands of people in his hands.

Now let’s compare my salary to that of a first-year law firm associate in New York City. The law firm associate is likely to make at least twice as much money per year, not including the annual bonus. That person may indeed have the same loan burden that my family has, though (law school is how we got most of ours). But the law associate can handle it, and will probably pay it off in about five years (we’re only paying off interest, so we’ll be doing that every day of our lives until we die). That cushion can also easily cover the much nicer professional wardrobe that a law firm associate needs to work long hours in an office, as opposed to long hours at home that I work (though, arguably, I actually work more hours total, which is saying something). And the law associate only needed 3 years of post-graduate education to get that job, so potentially he has about five extra years of earning and paying into a pension, too. At twice the salary. Not to mention that Manhattan corporate lawyers get access to private banking accounts with fabulous terms (no fees for anything! Extra special interest! All the perks rich people get to make them more rich, also including tax loopholes). What does the NYC law firm associate contribute to society? Well, judging by what someone I know very well once did when he had this job, he helps corporations make sure they don’t pay their employees the money they are entitled to. Or something similar. While I teach the citizens of this democracy to think critically. Okay, so neither of us is curing cancer, or saving your life when you get into a car accident. There’s a reason I’m not including medical doctors in this comparison.

Of course, what I’m describing here is not at all what all, or even most, lawyers do. I really wouldn’t whine about the vast majority of lawyers, many of whom make as little or less than I do anyway, and many of whom do incredibly important things for our society. I’m talking about corporate lawyers in Manhattan, and even then, there are exceptions. There are firms that exist more or less to go after the money-grubbing firms. But—as a rule of thumb I’ve noticed that the more useful your work is to society, the less money you make (with the glaring and rightful exception of the medical profession). Still doubting? Two words: social workers.

There’s a funny thing, too, that should be mentioned about New York City. The last numbers I saw estimated that NYC salaries are about 20% higher than the national average, while the cost of living in NYC is about 200% higher. Unfortunately that was a print source and I lost my original, but here are some links that can give you a pretty specific idea of what it’s like: costs broken down by type of expense, Daily News articles on the general awfulness, CNN cost of living calculator so you can compare how far your salary would go here.
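To make the arithmetic behind those rough figures concrete, here is a back-of-the-envelope sketch. The salary is a made-up round number, and the 20% and 200% are just the rough estimates quoted above, not real data:

```python
# Back-of-the-envelope only: if a job pays about 20% more in NYC, but the cost
# of living is about 200% higher (roughly three times the national level),
# how far does each salary actually stretch?
national_salary = 60_000          # made-up round number for illustration
nyc_salary = national_salary * 1.20

national_cost_index = 1.0
nyc_cost_index = 3.0              # "about 200% higher"

print(national_salary / national_cost_index)  # 60000.0 units of purchasing power
print(nyc_salary / nyc_cost_index)            # 24000.0 -- less than half as far
```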

Yeah. Bridging that tremendous gap would be easier if I were making twice the salary I make now, for twice as many years. The rent my family pays on a small, cockroach-infested 2BR apartment in a questionable neighborhood in Queens could get us a beautiful 4BR house in most parts of this country. The costs of groceries and transportation in NYC still make me boggle, after over a dozen years here. Childcare was an unaffordable dream on my salary for the first three years of my daughter’s life, yet without childcare there’s little hope of doing the work that could help us earn more. But my point is merely that there’s tremendous regional variation in income and in how far that income will stretch, which is helpful to keep in mind when one is comparing salaries.

There’s been quite a bit of news around the country lately about the supposedly astronomically high salaries of faculty driving up costs for college. I won’t link to it because I don’t want to be a part of driving traffic to those sites, but it’s easy enough to find.

This is another of those pernicious lies. It’s scapegoating. Faculty are being pinched just like students and parents are.

Who is telling these lies about faculty salaries? Mostly politicians and university administrators. Who’s not feeling the pinch of the higher costs of university education? Mostly politicians and university administrators. Yep. Always ask yourself what the person telling you “facts” has at stake, and examine where they got their information.

That goes for me, too, of course. So let me tell you where I got my information.

First, it’s not hard to find out what faculty really make. Most of the sensationalist news articles have been giving a single “average” salary figure for all faculty at an institution, or even all faculty in general, and in every case that number has made my eyes bug out of my head in fascinated disbelief. I have no idea where they’re getting those numbers from, but I can tell you that you can search here, at the Faculty Salary Survey from the Chronicle of Higher Education and get actual average salaries of actual faculty at nearly every university in the country, broken down by rank and gender.

It’s important to break down the figures, and ideally you’d break them down even more than that site does, because salaries vary widely across the academy. From well below the poverty line to astronomical sums. This is really wide variation, which is not accounted for by variations in cost of living.

So let’s talk first a bit about how and why faculty salaries vary so much.

First, are we talking about full-time or contingent faculty? The majority of people teaching college-level courses in this country are contingent faculty. Contingent faculty are usually paid on a course-by-course basis, with zero job security and zero benefits. It is not physically possible to live on this money unless you teach an insane load, like 8 courses at a time, and even then, you’ll barely scrape by. NO ONE does this unless they are (a) absolutely desperate to get a full-time job and hoping this will help them achieve that dream, and/or (b) so in love with what they do that they are willing to work mostly for free, usually at great personal sacrifice. In most cases, working as contingent faculty is only possible for families with another “real” income from somewhere else.

So when we’re talking about faculty salaries right off the bat we have to exclude all the contingent faculty who don’t really get salaries at all. And let me repeat: these mostly selfless and often desperate people are the majority of faculty in this country. Look here at the Coalition on the Academic Workforce Survey to see what kinds of pay (if you can even call it that) are typical for contingent faculty across the country.

How about full-time faculty?

Well, from the first link I gave you above, the first thing I hope you’ll notice is that across the board, at comparable ranks and institutions, women make less than men. Sometimes a LOT less. A small part of this disparity may be explained (though not excused) by the fact that there are more women in the fields that pay less (basically, the humanities), but that doesn’t explain away the whole gap. It’s also less common for women to negotiate for higher salaries than it is for men, but that too doesn’t explain away the whole disparity. Some schools are—for reasons I can’t fathom—much worse than others. But the pay gap between men and women exists, unfortunately, in nearly every field of employment in this country.

Let’s go back to the fact that the humanities are paid less than other fields. If you have a friend or relative who is an accounting professor, you may be under the impression that faculty are pretty well paid. In the case of accounting professors, you may be largely right (allowing for differences between institutions, the gender gap, and assuming we’re talking about full-time faculty). But a professor in the humanities at the same rank, with the same education and the same duties, will certainly make much less, and at worst may very well make half of what the accounting professor makes. Fields like accounting, business, law, engineering, and applied chemistry all compete for faculty with employers outside academia. Someone with a Ph.D. in accounting may get a job as an accounting professor, or work as a CPA. So faculty salaries for accounting professors have to compete with CPA salaries in order to attract good faculty.

If you have a Ph.D. in history, though, the employment options that will most directly make use of your degree are: academia, museum work, k-12 education, and government research. All historically under-paid fields (it is not a coincidence that all but the last are also historically dominated by women—historically female professions are universally less well-paid than historically male professions). So, universities don’t need to offer as much to attract the top candidates, and subsequently history professors make much, much less money than professors in accounting, business, law, and a few other fields. The same is true for other humanities fields like philosophy, literature, languages, and fine arts. Social sciences and some of the hard sciences, and mathematics, are an in-between category, where Ph.D.s are generally more employable outside academia, but not as obviously so as in more applied fields, and so the salaries in these fields may sometimes be slightly higher than in the humanities, sometimes not.

Then there are some fields in the hard sciences where faculty salaries are largely paid by outside grants. These can sometimes (but far from always) be higher than average. And other fields, mainly athletics, are paid far more at certain institutions because they help to create a money-making industry (like a successful football or basketball team).

There are also vast differences based on the employing institution. You can poke around on the web site I linked to, and what you’ll find is that the hierarchy of salaries basically follows this pattern (note that I’m talking about the US throughout this post — faculty salaries around the world follow other patterns):

Research-1 universities (huge private and public university systems with enormous research budgets, like the Ivy League, Chicago, Stanford, Berkeley, U of Michigan, Wisconsin, NC-Chapel Hill, etc) – pay the highest salaries across the board, in order to attract the top researchers in most major fields, in order to maintain their status as the world’s best universities.

Small liberal arts colleges with huge endowments (like Middlebury, Pomona, Williams, Sarah Lawrence, etc) pay the second highest salaries. They have enormous amounts of money from internal sources, and can afford to attract top faculty in order to attract a very selective student body.

Major state universities and smaller private liberal arts colleges (like Michigan State or Kalamazoo College or my own employer, Queens College, CUNY) are third in line — they are mostly hiring top candidates from top Ph.D. programs, so they have to offer respectable salaries compared to their peer institutions, but they’re usually not involved in bidding wars over big names or establishing top programs in any field.

Regional schools and community colleges (like Grand Valley State University or LaGuardia Community College) — smaller schools like these rely principally on contingent faculty, but the full-time faculty they do have are often there because they need a job in that region or are committed to the mission of schools like this that serve populations who would not otherwise be able to afford college, so they are often forced to compromise on things like salary.

The final major variation in salary range in the academy is based on experience. Like anywhere else, faculty salaries start fairly low and climb higher over the years for any given faculty member. Salaries for senior scholars grow incrementally from the base salary, so women faculty tend to be earning even less compared to their male peers when they get to senior positions, because their raises are a percentage of their lower starting salary. The same is true for faculty in the less well-paid disciplines—they start out earning less and so their percentage raises are also less.
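A toy example makes the compounding effect easy to see. The numbers below are invented, and both people get the identical percentage raise every year; the dollar gap still widens steadily:

```python
# Toy example: two made-up starting salaries and the same 3% annual raise.
# The percentage is identical every year, but the dollar gap keeps growing.
salary_a = 60_000   # higher starting salary (better-paid field, or a man)
salary_b = 54_000   # 10% lower starting salary (lower-paid field, or a woman)
raise_rate = 0.03

for year in range(1, 21):
    salary_a *= 1 + raise_rate
    salary_b *= 1 + raise_rate
    if year in (1, 10, 20):
        print(f"Year {year:2d}: gap = ${salary_a - salary_b:,.0f}")
# Roughly: Year 1 gap ~ $6,180; Year 10 gap ~ $8,063; Year 20 gap ~ $10,837
```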

Often, when the media reports a “typical” faculty salary, the salary they quote can only be that earned by a big-name star senior faculty member at a research-1 institution in a field with a very competitive non-academic job market. So, yes, it is entirely possible for a faculty member to be earning half a million dollars a year. But there are only handfuls of such faculty in this country. The vast, vast majority are lucky if they make the equivalent of a deck hand on the Staten Island Ferry, while they also struggle to pay off gigantic student loans and, much like the rest of the middle class in America these days, do not look forward to ever enjoying a pension adequate to support life.

There is also an enormous generational difference. The generation of faculty who entered their first jobs in the two decades following World War II (and the GI Bill that brought record numbers of college students into the system) are largely male, largely had wives who could afford not to work and thus took care of childcare and the home, largely were offered jobs instead of competing for them, and largely had respectably upper-middle class salaries and zero student loans (the major federal student loan programs started in the 1970s). It was much harder to reach the point of a Ph.D. for that generation—with few exceptions you had to have the right sort of background in addition to being very bright and working very hard—but once you got a Ph.D., you got a job and a salary on which you could support a family. Much has changed for subsequent generations, and not just in academia.

While the percentage of Americans going to college continues to grow, the pace of growth has slowed, and universities are choosing to expand the administration rather than the faculty (leading to larger student-faculty ratios and more courses taught by TAs). Meanwhile, Ph.D. programs have churned out more and more graduates, creating an increasingly huge surplus of qualified candidates (I get my information on this phenomenon in my discipline mainly from the American Historical Association — I’m not sure how much of their publications are available to non-members). This means these graduates, more and more desperate for more and more competitive jobs, can be offered relatively lower and lower salaries. Meanwhile, access to Ph.D. programs has broadened—all you need to get in now is brains and drive, no matter what your background, with a few significant exceptions—but for this very reason more and more Ph.D. students have to take out loans to complete their education.

(A personal example: I am the second person in my very huge extended family to get a Ph.D. My maternal grandfather was the son of truck farmers, my paternal grandfather grew up malnourished in Kansas during the Depression, where he got his first job to help support his large family when he was nine years old. With excellent grades and test scores, I was able to attend the second most expensive private college in the country—and in my not-humble opinion the most rigorous college in the world—with half the tuition paid by grants and half by loans. I competed to get into a top grad program with paid tuition plus a small stipend. All these opportunities were unprecedented for someone of my background just one or two generations earlier. But, I also have a big debt burden and can’t look to my family for financial support, unlike some of my cohort in graduate school, and radically unlike the previous generation of Ph.D.s).

So where on earth are the astronomically rising costs of college coming from?

There are a few explanations that I’ve read about and seen with my own eyes.

First, for the more competitive schools, there has been a rising expectation that to attract the best students colleges need state-of-the-art technology, gyms and other recreational facilities, and living spaces. All this is very expensive, and has contributed to the rising cost of tuition in private colleges, especially.

Second, there’s one part of university payrolls that has sky-rocketed since the 1980s. It ain’t faculty salaries. It’s administrative salaries. A part of this is justifiable—new federal regulations require new personnel. And, justifiable or not, the new facilities that parents and students increasingly expect—like gyms and dorms but also disability services, writing tutors, etc—require administrators. As a rule, although they do not directly educate students, administrators make more money than faculty (the disparity is far greater for top-level administrators, though the sheer numbers of arguably suitably paid mid- and lower-level administrators are, collectively, part of the rising costs). The reason for this is presumably that administrators, like accountants, can choose to work in the private sector, so their salaries need to compete.

(Are you noticing a trend here? That universities take advantage of the Ph.D. degree that uniquely qualifies graduates for the professorate by paying them much less than anyone else with less difficult-to-obtain degrees? Obviously no one would put up with this…unless they were so devoted to their subject of inquiry and their teaching that they put up with being treated unfairly in return for making a difference in the world….cf. social work, teaching, nursing, mothering, and most other female-dominated professions….)

But there’s still another phenomenon at play here, and it’s true at every institution of higher education in the country. Since the 1980s, there’s been a push to apply “corporate” or “private” financial principles to the administration of institutions of higher education. On this principle, top administrators have been hired at astronomical salaries (at least six figures, sometimes high six figures or more) to “fix” university budgets by applying these magical principles of capitalism that make money fall out of the sky.

The thing is, decades have passed, and university budgets have shown no improvement. The biggest difference between the university of today and the university of 1980 is not a streamlined budget and efficient administration. The very idea is laughable. The biggest difference I see is the proportion of the university budget that pays enormous salaries to administrators with no background in education who flit from one institution to the next “fixing” budgets but leaving them, mysteriously, in no better shape than they were before.

The problem is based in part on a fundamental error in how the public, the government, and university administrators understand capitalism. Taking a course in the history of modern Europe or in basic economics could resolve this fundamental error, but apparently a large sector of our public failed to take such a course, or just plain failed it.

If you’re feeling skeptical about what I’m about to say, please, I beg you, read Adam Smith’s Wealth of Nations, the acknowledged bible of capitalism. You don’t have to believe me. Go back to the primary source (as any good professor will tell you to do) and judge for yourself. Just don’t blindly believe the talking heads on your TV, I BEG YOU.

Capitalism is about the exchange of commodities. Education is not a commodity.

Confusing this issue is a fundamental error that is bringing down the (to-date) world’s best system of higher education. We are fast losing our edge to new universities in India, China, and the Middle East, because we are mis-applying financial principles to a non-financial sector.

I could go on—and on and on and on and ON—but to spare you I will stop and let you process what I’ve already said.

Just one more thing, while you’re processing. Adam Smith understood what has been lost in the American mainstream discussion of capitalism today: healthy capitalism requires regulation. Without regulation, capitalism is destroyed by monopolies and corruption. The people who are monopolizing and corrupting our capitalist economy are, today, in the mainstream media, accusing true capitalists of being evil socialists (which is also a complete misunderstanding of socialism, but that’s a subject for another post). The irony would be delicious if the consequences weren’t so incredibly dire for nearly every American.

I may seem to have strayed waaaay off topic, here, but it’s all much more deeply connected than you may think. Every citizen in the United States should have a basic understanding of how capitalism works, including the facts that capitalism works with commodities and commercial services (not life-or-death necessities that you can’t effectively comparison shop for or decide not to “buy,” i.e., health, safety, or in our modern world, education), and that it requires basic regulation to avoid falling into corruption (which is by definition not capitalism, but a failure of capitalism). But most citizens don’t have this basic understanding, in large part because civics education has been eliminated from most public school curricula in the past few decades. And because, while more Americans than ever are going to college, they increasingly aren’t taking “frivolous” classes like the history of modern western societies (in which capitalism is always a major theme), or aren’t understanding them, because they come into college ill-equipped to succeed thanks to our decimated system of primary and secondary education. It’s all connected.

Take-home message? As always, question your sources of information.

Why not start with these? Some links to respectable articles about faculty pay and tuition:

CHE: College Costs Too Much Because Faculty Lack Power

NYT: How Much is a Professor Worth?

NPR: The Price of College Tuition

New Yorker: Debt by Degrees

Business Insider: America’s REAL Most Expensive Colleges

Philip Greenspun: Tuition-Free MIT

 How the American University was Killed in 5 Easy Steps

New: The Adjunct

* Note that “assistant professor” is not an assistant to a professor. In the US those are known as teaching assistants or research assistants. The three ranks of full-time, tenure-track professors in the US are, in order from junior to senior: assistant professor, associate professor (often achieved along with tenure), professor (known as “full”). A retired professor is “professor emeritus.” All teaching faculty are referred to in a general way as “professors,” usually (but not quite always) including adjunct or other short-term-contract faculty. Almost all faculty these days have Ph.D.s (so that “Dr. So-and-So” usually also applies), but in some fields the terminal degree is at the master’s level, such as a Master of Fine Arts. Generally, the only people independently teaching college-level classes who aren’t loosely referred to as “professor” are graduate students, who are officially “graduate instructors” or “teaching fellows.” In my day, we usually asked students to call us by our first names at that rank. Mind you, in Europe these ranks and titles are all completely different, which is very confusing.

Posted in Profession | Tagged | 1 Comment

Rocky IV

Rocky statue at the Philadelphia Museum of Art. By Sdwelch1031, via Wikimedia Commons

All of the Rocky movies appeared on Netflix recently, and I was inspired to put them on in the background while I was doing some mindless busy work. Ah! How they bring back my childhood. Anyway, I was particularly excited to watch Rocky IV, “the one with the Russian,” for the first time since it came out in 1985. At that time, I was ten and didn’t particularly know anything about Russia. The Cold War had been reignited and I was scared of nuclear war. I took the movie at face value, enjoyed it, was mildly interested in the scenes where Rocky is (supposedly) in the Soviet Union. The message I got about the USSR at that time was mainly that it’s cold there, the people are apparently big and scary, and they have a lot of technology.

Watching it twenty-seven years later, as a professor of Russian history whose students don’t even remember a world in which a Cold War existed, was of course very different. I expected to giggle at the cliches about Russians (yeah, they’re not any bigger on average than we are, duh), but my dim memories of the movie had not prepared me for how hilariously, amazingly backward the whole portrayal is of Soviet athletes versus Americans.

The Russian boxer is characterized in the movie as almost super-human not just because of his size, but mainly because of the super-cutting-edge technology his team of top trainers use (lots of rooms full of flashing lights!), while Rocky is of course all natural, just one man against the world, able to beat stronger opponents through sheer will power and his ability to endlessly take a beating. When he travels to the “Soviet Union” to train for his big face-off, Rocky demands not a high-tech training center equivalent to what his opponent is using, but a humble cabin off in the snow somewhere (Russia is cold, yo). He runs in thigh-high snow, he climbs mountains (is he supposed to be in the Urals? there aren’t a lot of mountains in Russia, actually, and none of them are very impressive looking), he throws logs around. That’s our authentic Philly boy, there.

There are so many things wrong about this that it’s hard to know where to start.

First, in the 1980s, to our detriment, we vastly over-estimated both the economic and technological power of the Soviet Union. Mind you, we had to guess because the USSR worked very hard to prevent an accurate picture of their real abilities from reaching the West, but our guesses were very, very wrong. There are multiple reasons for that, but one of them must be that we let our fear become reality. We were afraid the USSR was ahead of us in technology, so we assumed they were ahead of us in technology. Sometimes that kind of thinking can be useful — be prepared for the worst case scenario, right? But we acted on this fear, even though there was no evidence that it was substantiated, in all kinds of ways that are still hurting us today (mainly by running up our national debt to astronomical levels in a race to “match” an opponent that actually was way behind us from the beginning).

What we now know about Soviet technology in the 1980s is that it was woefully behind western standards and falling farther behind every day. The first scene of supposedly Soviet high tech that I saw when re-watching Rocky IV made me laugh out loud.

Yeah, right — they wished.

The Soviets obtained the technology for microcomputers, for example, quite early, but had endless delays in their attempts to reverse-engineer it, and by the 80s, from what I’ve read, they had started buying many of the parts, and some whole computers, from abroad. In 1985 when I first watched Rocky IV I had had a TRS-80 personal computer in my family home for 5 years already, but the USSR as a nation was still struggling to develop and distribute comparable technology.

I’m generalizing, of course — there were a few areas of technology where the Soviets put all the investment they were capable of (as far as I know, the first personal computers that got into classrooms in the latter part of the 80s seemed to have been developed for military purposes), but athletics wasn’t one of those areas. And in any case the level of investment they were capable of in the 80s was basically in the realm of imaginary numbers — in a nutshell, the empire collapsed a few short years later because they’d been running on imaginary numbers for decades. By 1980 at least, the game was already up for the Soviet Union — this just wasn’t admitted to until 1989 and beyond.*

If you remember watching the Olympics in those years you know the Soviets did put on a show of strength — that was part of the game of the Cold War. There’s some truth to the cliche of those cold-eyed coaches who pushed their athletes to the limits and achieved huge successes. But don’t confuse a hard-as-nails coach with technological or economic superiority.

Why does the movie, Rocky IV, portray Soviet strength in technological terms? Well, as I’ve said it wasn’t implausible to an American audience at the time because we didn’t know the truth, but it’s also no doubt because the robot-like Soviet villain makes such a nice contrast to our humble homeboy Rocky Balboa. Throughout all the Rocky movies there’s a running theme that Rocky wins not so much because he’s stronger or a better boxer, but because he can take punishment and stay standing. He is a hero for his ability to withstand suffering.

This is astounding, that such an iconic American hero-figure is portrayed this way. The standard narratives of what Americans are all about have never had anything to do with suffering. In most American myths, we are pioneering, we are adventurous, we are brave, chivalrous, we are often the plucky underdog. But looking on ourselves as the underdog had to be getting pretty difficult after World War II, when we were essentially the only Western power left standing, and certainly by the late Cold War, when we more or less ruled the globe. Clearly Rocky is a plucky underdog type, calling on an origins myth (we became Americans, gaining our independence from Britain, against the odds, so somehow we’re still underdogs at heart), but adding this layer of suffering is curious, to say the least, especially at a time when Americans seemed unbeatable.

To a Russianist, the phrase “hero for his ability to withstand suffering” is overwhelmingly Russian. The word “suffering” in Russian — stradanie — is full of deep cultural associations. A big part of the reason is that Russian Orthodox Christianity values suffering (and humility) much more overtly than most other Christian churches. As I understand it (not being an expert by any means in the theology), suffering is a path to God. Those who endure great suffering and remain devoted to God are often recognized as saints — it’s one of the highest virtues.

In a general cultural way, for Russians suffering is understood as an inevitable fact of life (in stark contrast to the American view, where the pursuit of happiness is actually a right of all people in our Declaration of Independence). What matters when suffering is inevitable is that you keep standing. Like…Rocky Balboa.

And then, in historical terms, no one can deny that Russians have endured a hell of a lot of suffering over the centuries. Suffering isn’t something you can quantify, but most people familiar with Russian history and the histories of western Europe are struck by the sheer ubiquity of suffering in Russia. Mind you, western Europe (and since it was founded the U.S. too) are the real exceptions here, if you’re looking at the whole globe. It’s a false comparison. Nevertheless, Russians have compared themselves to the West since at least the beginning of the 18th century, and so it is that comparison that contributes to Russians’ sense of themselves as a historical people. And that sense of themselves is colored by endless stories of suffering. Where Americans have won nearly all of our wars and only experienced major bloodshed on our soil once (when we fought ourselves), have never experienced foreign invasion, and have never been targeted for takeover and elimination by a foreign power, Russians have experienced all these national tragedies over and over.

A really abbreviated list of just the worst national tragedies and humiliations would include:

  • Devastation and then foreign rule at the hands of Mongols (1237-1380)
  • Terrible defeat at the hands of Crimean Tatars 1571
  • Vicious internal warfare, most notoriously at the hands of Ivan the Terrible (1560s and 70s mostly), plus a big loss in the Livonian War
  • Near takeover by Poles, resolved in the nick of time in 1613 by the election of a new monarch
  • Terrible loss to the Turks in 1711
  • Crushing defeat at Napoleon’s hands (1807)
  • Crimean War (1853-56) — first major, lasting military loss since Russia became a Great Power
  • Russo-Japanese War (1905) — humiliating loss to a tiny peripheral nation that contributed to bringing down the monarchy
  • World War I (1917) — in the middle of revolution, the Russians made a separate peace with Germany on punishing terms
  • Relatively bloodless revolution devolves into destructive Civil War (1918-23)
  • Stalin effectively declares war on his own people (1929-1953) — collectivization, purges
  • The Cold War (1949-1991) — Gorbachev essentially threw in the towel, arguably bringing to an end (for now) Russia’s place as a Great Power in Europe

Okay, those are just the Big Events (and note how many times Russians suffered at the hands of their own government, in addition to their vulnerability to foreign invaders, due largely to the absence of natural defenses).

Here’s another list of just some really BIG ways Russians have suffered as a people:

  • Enserfment of the vast majority of the population (arguably beginning in 1649, arguably ending in 1861, but arguably not really gone until it went out with a bang with Stalin’s collectivization and industrialization, which was a tragedy in itself…but it’s a really long story)
  • A rigid system of hereditary social estates, police surveillance, and passport restrictions that severely limited the life choices of every Russian (developing in bits and pieces over time, but arguably oppressive at least from the 18th century to the present)
  • Economic backwardness — due to a variety of geographical factors as well as the mistakes of a long series of regimes, and “backward” only relative to western Europe, but the fact is that famines were common throughout Russian history, industrial development was very slow, and access to wealth was/is restricted to a minuscule portion of the population…more or less from the 13th-century Mongol invasion to the present.

And this is just the r e a l l y big stuff. So let’s go ahead and conclude that when it comes to suffering, Russians know what they’re talking about.

Back to Rocky IV. Now that you know how important the concept of enduring suffering and staying on your feet is to Russians, and their long legacy of economic and technological backwardness, which was certainly still relevant in 1985, look again at Rocky Balboa, running through snow, pushing through the pain, taking cruel punishment, but still standing in the end. Note that Rocky is also decidedly working class — the Soviet Union was founded as a working-class state, and while the falseness of that claim is legendary, the claim was still an important part of the Soviet national myth. And look at Ivan Drago, surrounded by coaches and computers and drugs, using fancy machines to push himself to unprecedented capabilities (isn’t striving and achieving without regard to any old-world notions like social class part of the American myth? Isn’t innovation — especially in technology — also a big part of how we see ourselves?). This is the crazy, astounding thing about Rocky IV:

Rocky is the Russian, and “the Russian” is really the American.

Mind — blown.

 

For further reading: If you’re interested in late Soviet realities, I recommend Stephen Kotkin’s Armageddon Averted: The Soviet Collapse, 1970-2000.

 

*While the USSR was definitely behind on technology, I want to point out that they may well have been ahead on the brain power that is needed to make technology work — Soviet programmers were relatively well-supported and very well-educated, and I’ve read of underground experiments on the early internet in the ’80s, among countless examples of extraordinary intellectual achievements in early Soviet computer science. To this day Russian programmers tend to lead the world. What they lacked in the ’80s was money, mainly, though there were also bureaucratic, ideological, and infrastructure-related obstacles. A final unrelated note, because I can’t not mention it — did you know the Russians invented Tetris? Remind me to tell you my Tetris joke sometime.

Posted in History, Random | Tagged | Leave a comment

Students: What to Do When You’re Drowning

William Blake, via Wikimedia Commons

1. Get help

If you’re drowning in your schoolwork, the last thing you should do is pretend it isn’t happening or hide. Talk to your professors. Go to the student counseling center. Talk to the dean of students. Make sure someone knows what is going on. This means you can get help if you need it, and your problem will be documented, so that professors might be able to accommodate you.

2. Don’t make the dumb mistakes

A. Something is better than nothing.

If you just never turn in a graded assignment, you get a zero. One zero may mean failing the course, or very close to it. Even if you turn in incomplete gibberish, it may get some points, which is better than zero!

B. Show up to class.

Showing up is by far the easiest thing you can do with the biggest payoff. (This is true throughout life, by the way.) Sitting in class every day means you’ll hear announcements and reminders, you’ll get hints about assignments, and you’ll get at least a passive exposure to the material. If you can’t handle anything else, you can handle this, and once you’ve done it, you may find that the assignments aren’t as hard to handle as you expected. It should go without saying that while in class you should stay awake and keep your mind on the class, not the laptop or smartphone.

C. Don’t be a jerk.

Don’t lie to your professors, don’t brown-nose, don’t whine, and don’t try to manipulate them. They have seen all these tactics before, and whether they call you on it or not, you will have alienated them. Be nice, be respectful, take responsibility for your own behavior. Those are the ways to win real goodwill.

D. Keep in touch.

Don’t just disappear. If you’re unable to come to class or turn in an assignment, tell your prof about it as soon as possible (before the date in question is infinitely better than after!!). Be honest, and take responsibility for your own inability to follow through on the class. It may be that there’s nothing your prof can do (without being unfair to other students). It may be that your prof can find a way to work around your issue, if you’re willing to do your part (such as an alternate assignment, etc). You won’t know which is true until you ask.

3. Survival Tactics

A. Read the syllabus! Frequently!

This is where all the course policies and schedule are spelled out. At the beginning of a course, make sure you have all the required readings and you know where and how to turn in assignments, and what the due dates are.

B. Skim intelligently.

If you’re overwhelmed by the readings, make an effort to figure out how to skim effectively. This is a skill. Just letting your eyes pass over the pages without taking anything in is not what I’m talking about here. Read this guide [link goes to PDF] to reading a book, and apply it to any reading assignment. Look first for clues about the main ideas (title, abstract, introduction, section headings, conclusion). Think about how the subject of the reading connects to the subject of the course, and the topic for the particular day or week for which this reading was assigned. This will tell you what aspects of the reading you should pay most attention to. Make a list of questions—What is the author trying to say? How does this add to what we’re covering in class? What is most interesting, surprising, or confusing about this reading? Then look through the reading for the answers to these questions. If this is all you manage to do, you’re probably still well ahead of the game.

C. Use a calendar.

Set up an early warning system. Google Calendar or any other calendar software will allow you to set up reminder emails or alarms. Go through the syllabus at the beginning of the course and put all the due dates in the calendar. Set alarms for the day you should start working on an assignment (1-2 weeks before the due date, usually), the day when you should have a draft (a few days before the due date), and the last few hours, when you need to proofread, and print or download. You might also look into an online to-do list, like the one built into Google Calendar, or the more complicated one at vitalist.com
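If you’d rather generate those reminder dates all at once instead of clicking them in one by one, here is a small sketch of the same idea. The assignments and due dates are made up; most calendar apps, including Google Calendar, can import dates, but check your app’s expected format before relying on this:

```python
from datetime import date, timedelta

# Hypothetical due dates pulled from a syllabus -- substitute your own.
due_dates = {
    "Paper 1": date(2025, 10, 6),
    "Midterm": date(2025, 10, 20),
    "Final paper": date(2025, 12, 8),
}

# One alarm to start working, one for a draft, one to proofread and submit.
offsets = {"start working": 14, "have a draft ready": 3, "proofread and submit": 1}

for assignment, due in due_dates.items():
    for label, days_before in offsets.items():
        remind_on = due - timedelta(days=days_before)
        print(f"{remind_on}: {assignment} -- {label} (due {due})")
```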

D. Take good notes, be organized.

You need some kind of system to make sure you keep all papers related to a given course in one place, where you won’t lose them. Create a system for your notes, too. Take them in a notebook so you can’t lose pages. Use margins to insert subject headings or comments about the relative importance of a given passage of notes (for example, write in the margin, “for exam!”)

E. Take care of yourself.

Shower. Eat. Sleep. Exercise. Block out a reasonable period of each day to relax (preferably after working), and stick to it.

F. Avoid Wikipedia.

If you don’t know the answer to a question, the last thing you should do is google it or look to Wikipedia. Even assuming these sources will give you accurate information (and they don’t always), the information will be organized for different purposes, with different emphases. Always start with the materials that are required for the course. If course books have an index, start there. Look through headings and sub-headings in the required readings. Look at the topics on the schedule in the syllabus to see where each reading falls, to tell you what it relates to. Look through your notes from class.

G. Plagiarism is never the answer.

Plagiarized papers are never good papers, even if the plagiarism isn’t caught. Students never believe me about this, but it’s true. A good paper reflects (thoughtfully!) the questions and problems that the class covered. A plagiarized paper is almost never a direct answer to the assignment posed (since it came from some other context). Even purchased, custom papers are written by people who were not in the class. Even if they are experts in the field in question (they almost never are), they don’t know what the professor is really looking for, because to find that out you need to be in class. And if you plagiarize from another student in the class, the prof will see both your papers, which makes things rather obvious. To plagiarize well is possible but actually harder than simply doing the assignment in the first place.

Also, the penalty for plagiarism ranges from a zero on the assignment to an F for the course to expulsion from the school. Even with an inflated estimate of how likely you are to get away with it, the potential payoff is not worth that risk. Turning in a crappy paper may get you, say, 30-40 points out of 100 if you truly don’t know what you’re doing but put in a minimal effort (say, no more effort than it takes to paste random lines out of Wikipedia). That’s better than zero.

4. Failure can be an opportunity.

Failing at something gives you lots of information, which you can use to improve your situation. But you need to examine what happened carefully and honestly in order to take something out of it and turn yourself in a better direction. Failure may tell you that a certain subject is not for you. Nobody is good at everything; this is okay. Failure may tell you that your priorities are not lined up well with what you’re actually doing. Re-evaluate those priorities, and try to act according to them. Failure may also mean your goal is fine, but your methods are flawed. Try new methods.

5. Take a break?

This is often heresy in American educational circles, but if you’re not in a place in your life where you can put real effort into your studies, or if you do not see the value of the classes you’re taking (despite actually trying!), it may be time to take a break. Do some honest self-assessment, and come up with a realistic plan for how to come back, in case you need it. Remember that if you have loans, you’ll have to start paying them back (usually 6 months after leaving school). But if you’re not getting anything out of your classes, then you are wasting your time and money. The world will not stop turning if you don’t finish college four years after graduating from high school. It is possible (though harder!) to come back later. You don’t have to leave forever—try starting with a semester. Talk to an advisor at your college about your options.

At my college we often see students failing out on their first time around, and then coming back a few years later, after work or other outside experience. The difference is miraculous – the older students usually have perspective, motivation, maturity, and focus.

Posted in Teaching | Tagged | Leave a comment

Syllabus: History 102, Fall 2112

As a historian, when I’m following current events I almost always think about them as I imagine a historian will do a hundred or two hundred years from now. I can’t help myself, because this is just how I think, but the process also puts an interesting twist on my reading of current events. My affiliation with the study of history is far stronger than my affiliation with any political party, position, or policy. In fact, my view of the world through a historical lens probably determines a lot of my political views. In trying to understand events, I look for patterns, like anyone else, but the kinds of patterns I look for play out over decades and centuries.

1914 postcard depicting the Kremlin in 23rd-century Moscow. Via Wikimedia Commons

Thinking along these lines, I began to imagine what the syllabus might look like for a course on the modern western world (similar to a course I currently teach), when it’s taught a hundred years from now. It was an interesting exercise, not only to try to predict the future, but to think about how future historians might look back on our past and present. It would necessarily be drastically compressed in a survey course like this, so I thought about what aspects of our lifetimes would stand out.

It should go without saying (but perhaps does not) that what follows is not what I want to happen, but what seems possible or even likely given our current trajectories and what I know of how political systems, economics, and societies evolve—that is, that the only thing you can count on is constant change. I very much hope our future is actually much brighter than this. But for that to happen, we’d have to start making much better choices as a society than we’re making right now.

Here’s what I came up with, as a thinking exercise, not a recommendation!

History 102: The Western World in the 19th to 21st centuries
Fall 2112

Week 1: The Invention of Citizenship (1750-1860)
The American and French Revolutions, and the modern British constitutional monarchy. What are the origins of democracy? How was citizenship defined? Who was included in the new democracies, and who was left out? Reactions to the new ideas: reactionaries, Romantics, and revolutionaries.

Week 2: Industrialization and Cultural Revolution (1780-1900)
The origins of modernity, introduction of class warfare, the origins of environmental devastation. The rise of the middle class, decline of aristocracy and the exploitation of workers.

Week 3: Racism and Imperialism (1860-1914)
Public misapprehensions of science, racist ideologies, and the scramble to colonize the globe.

Week 4: The Wars of Ideas: Capitalism, Socialism, and Fascism in the 20th century (1860-1991)
Mass politics, ideological warfare, and state terrorism. A civilization destroys itself. The United States as the only major power left whole.

Week 5: American Dominance (1945-2001)
The expansion of the American Empire around the world. The American nuclear umbrella and the Cold War. Oil and gas at the center of global politics and security.

Week 6: Decline and Fall Part I: European Empires (1945-2008)
Decolonization, and political and economic obsolescence: Europe retreats.

Week 7: The Information Revolution Part I (1950-2050)
Microcomputing to internet to unlimited global connectivity: access to information as a global resource, and the Neoconservative backlash (ignorance as political platform).

Week 8: Decline and Fall Part II: The American Empire (2001-2090)
Deregulation and the destruction of capitalism. Cycles of global economic crashes and the contraction of the American Empire. Great War with Iran triggers American decline relative to the other Great Powers. India emerges as military superpower through technological and organizational innovation.

Week 9: Federalism and Localism (2001-2090)
European micro-economies and micro-democracies combined with the revival of the EU to regulate trade and security bring Europe back to political prominence. Late in the period, the same model is adopted in parts of the U.S., initiating a partial recovery of prosperity.

Week 10: The Rise of the Third World (2030-2090)
Africa, East Asia, Latin America and the Middle East adopt the European model of combined federalism and localism and rise to compete with India and Europe as global super-powers. Return of the multi-polar world. War for Arctic Resources and global climate change make authoritarian Russian Empire the richest country in the world and arbiter of global energy supplies, causing political tensions with the democratic regional federations.

Week 11: The Resource Race (2050-2090)
Water shortages, famine, and climate chaos lead to civilizational wars. The collapse of the United States into social-democratic Northern States and neo-fascist Southern States. Collapse of the Russian Empire into a very rich social-democratic North and an authoritarian South.

Week 12: The Information Revolution Part II (2050-2090)
Rising wealth and access differences between educated and uneducated (mirrors late Industrial Revolution, except access to information rather than economic class origins is determining factor in wealth and social status). Micro-governments increasingly divided into informed and rich versus uninformed and poor, leading to violence and the break up federalist institutions around the world.

Week 12: Cataclysm (2090-2100)
The Great Demographic Catastrophe, renewed “dark age.” Mass famines, warfare, and destruction of world knowledge archives causes sharp decline in technological development.

Week 14: Renaissance Part II (2100-present)
Reduced global population resolves environmental and resource problems. Now-smaller communities re-organize into renewed micro-economies with balanced resource distribution and equitable access to information.

————————–

Like all histories, this one leads up to the “present” as if everything that ever happened before was headed toward a happy ending on purpose. It’s very common to think not only that all of history is an upward trajectory leading to a superior present, but also that history comes to an “end” with us, and that no further catastrophes will occur on the scale they once did.

One of the greatest challenges of teaching 20th-century European history today is finding ways to make college students understand how people in 1914 could have so stupidly allowed World War I to happen, why everybody in Germany in 1933 didn’t just emigrate, why seemingly “normal” people in every country in the industrialized world in the 1930s thought fascism was a good idea, or why millions of people in Russia between 1917 and 1991 continued to believe in the dream of socialism even while the Soviet government did all the things it claimed to be against.

An important lesson I think you can learn from studying history, actually, is that human beings have an infinite capacity to bury their heads in the sand and do stupid, self-destructive things rather than rationally face the reality in front of them. All of us are doing this all the time, but it’s difficult by definition to catch ourselves doing it. Analogies to the past—where people like us were making the same mistakes but we now see clearly how wrong they were—can help wake us up. There are many good reasons to study history, but I think this is one of the most important ones.

What do you think history will say about us 100 years from now? What lies ahead? Please share in the comments!


Why Is Academic Writing So Unpleasant to Read?

Most of us are trying, really we are! Image via Wikimedia Commons.

I’ll be the first to admit that many academic books and articles just aren’t a good read. Sometimes they could be much better written. Sometimes they’re as well-written as they can be, but the subject matter and purposes of the work don’t lend themselves to easy reading. Not everything can — or should — be easy. Either way, knowing some of the reasons why an academic text you may have been assigned to read is so turgid and unpleasant may ease the pain just a bit.

What follows is my short list of common assumptions about academic writing, and my own explanations of why people get these impressions. Important background for this discussion is in my earlier post, What Is Academic Writing?

Academic writing is always boring, dry, formulaic, and unnecessarily complex.

It doesn’t have to be, and academics increasingly agree that it shouldn’t be. But just because something is published doesn’t mean you can rely on its being well written. In the academic world, having something truly new to say – or maybe even just something that more or less fills a gap (or even just having a famous name) – can be enough to get published, despite bad writing.

But original ideas communicated well through effective writing are still the goal.

In many cases the writing (the form) must be simple or plain, because the ideas (the content) are by definition new and complex. The ideas themselves are meant to be the source of excitement. The writing is meant to stay out of the way, not to make these ideas less clear or harder to assimilate. Some readers don’t like this, as a matter of taste (it seems dry or formulaic), but in the academy it is inescapable.

If you’re not excited by the ideas in an academic piece, it may be that that subject is not for you, but it may also be that you don’t yet know enough about it to see why it’s so fascinating, or it may be that the author simply didn’t write clearly or directly enough to ‘let you in’ to ideas that do have inherent interest.

Academics perversely make the simple and obvious seem more complicated than it is, and refuse to recognize what everyone else knows (i.e., common sense).

The whole purpose of academia is for some people to spend time working out the really difficult questions, facing the complexity, and bringing to public attention the hardest and most hidden truths. It’s a dirty job, but someone’s got to do it.

Sometimes, it’s true, the inertia of the academic machine (not to mention the cruel tenure review process) causes common sense to get momentarily lost. But the nature of the endeavor – in which every claim is constantly questioned and judged by one’s peers – is meant to ensure that nonsense doesn’t hold up forever.

If there were no scholars (from undergraduates to the big-name professors) to ask questions and vet the information we use to build bridges, cure diseases, form public policy and define ourselves as a people, where would we be as a society?

Academic writing is a static, unchanging entity, and separate from every other kind of writing.

On the contrary, academic writing often has much in common with many kinds of journalism and other “public” writing, and the lines distinguishing one from another often blur. Moreover, standards of what academic writing ought to look like have changed over time and continue to evolve, constantly taking on influences from trends inside and outside the academy. If you start noticing the publication date of what you read, you’ll start noticing patterns — academic work written in the 1960s is different in style and form from that written in the 1980s, or the 2000s.

We might just note here that the teaching of “academic writing” is itself a relatively new phenomenon. In the not-so-distant past, becoming an insider in the academy was an option for only a few, and the fact that one had to learn the rules of how to look like an insider more or less by osmosis ensured that the ranks remained thin. Clear, effective writing was – and in some circles still is! – considered a little risky, for if just anyone could understand what academics were talking about, what would happen to their prestige?! Fortunately, this is one bit of nonsense that is on its way out.

The aim of an academic paper is to quell controversy, to prove that a certain answer is the best answer so effectively that no one will ever disagree about this issue again (and if a paper doesn’t do this, it has failed).

Though many students are taught in high school to treat argument in writing as a kind of battle-to-the-death, this is more a reflection of teachers’ need to force novice writers to find their independent opinions — so they may effectively assert and defend them in writing — than a reflection of how the academy really works or what’s actually expected of your written arguments in college and beyond.

In reality, academics are usually collegial people who respect each other’s research and conclusions, and whose main aims are to refine and expand our collective knowledge. To that end, we value controversy very highly, as a means to open up new questions and identify the gaps in current knowledge. An argument that sets out to definitively prove some absolute solution will – in most cases! – be seen for what it is, the mistake of a novice who has (presumptuously) overstepped the bounds of what can be proven. Most arguments suggest tentative conclusions, expand on conclusions made by others or quibble with aspects of others’ evidence or reasoning, or – in many cases – simply lay out some new, surprising thought or theory so as to deliberately provoke controversy, rather than resolve it.

As an undergraduate, you should (like any other scholar) aim to develop arguments that honestly reflect your reasoned judgment of the evidence. If the evidence leads you to conclude only that more evidence needs to be gathered (which cannot be gathered now, in the scope of the current project), then you may need to either redirect the focus of your project to address a problem where you can conclude something more substantive, or – if the reasons for being unable to make a conclusion are sufficiently surprising or interesting in themselves – you may simply present those reasons as the “evidence” for an open-ended thesis statement.

Academic writing is full of a bunch of meaningless jargon.

Sometimes, yes, it is. But most of the time the jargon is far from meaningless, though it may not contribute much to the clarity of the writing.

Ideally, jargon is used only when necessary, but there are times when it really is necessary. Jargon should be understood not as made-up words people use to sound smarter than they are (though occasionally it is that). Proper jargon is a form of shorthand. A term of jargon always has a very specialized definition, often for a word that is also used in different ways in other contexts, which is part of what makes it so confusing to outsiders.

Jargon by definition is understood largely by insiders, which is probably why it so often seems downright offensive. But, in highly complex conversations taking place amongst a small group of researchers on a given topic, jargon serves to sum up whole complicated parts of the conversation in one word or phrase. It’s a means of efficiently referencing long, drawn-out thought processes that the whole insider group has already been through.

For example, there’s a concept well-known in many social science and humanities circles under the term “orientalism.” Edward Said wrote an entire book to define what he meant by that term, and since then people who want to apply some part of his ideas in other contexts refer to all those interrelated ideas as “orientalism.” If you’ve never read Edward Said’s work or had the term explained to you, you couldn’t possibly know what it’s about. You can’t guess from looking at the word, and a standard dictionary won’t help you. However, this term, like some others, is so well established by now that a good specialized encyclopedia will have it listed. Even a comprehensive general encyclopedia like Wikipedia will give you an explanation, though you should remember that Wikipedia can only ever be a starting point, to orient you. It can’t give you the nuanced and specific background that you really need to understand how a term like orientalism is being used in a given scholarly work—it can only tell you where to begin to look to understand it.

Hopefully, in a reasonably well-written piece of scholarship, jargon terms will be defined somewhere in the text. But this is not true of some terms that are so widely used in so many fields of scholarship that most scholars consider them obvious, like “discourse” or “civil society,” or, increasingly, “orientalism.” If you come across undefined specialized terms like this, the first thing you need to know is not to try to find them in a dictionary. Start with encyclopedias instead, the more specialized the better. Again, Wikipedia might be a good starting point if you have no idea where else even to look. But then go back to how the term is used in the text you’re working on, and think about its specific application in this context. Find an encyclopedia specializing in the field or discipline you’re reading about. You can also look to other related readings and your professor if a given term is obviously important and you can’t figure it out. For better or worse, jargon goes with the territory of academic writing, and you can’t completely avoid it.

Nominalizations

Okay, this isn’t a common accusation leveled at academic writers, but it should be. I learned about this endemic problem as an undergraduate in the Little Red Schoolhouse at the University of Chicago. Once you’re aware of it, you see it everywhere. Unfortunately, I can attest that as an academic writer, being aware of the problem makes it only a little bit easier to address.

Okay, I know you’re asking: what is a nominalization? It’s when a verb is made into a noun. As in, the sentence that should state “the committee members revised the bylaws” is more often written, “the revision of the bylaws was enacted by the committee members.” If you present the latter version of that sentence to an English teacher, that teacher is likely to point to the passive and “empty” verb “was enacted” as a problem. But a more direct way of assessing the problem is to note the nominalization — “revision,” a noun made out of the verb “to revise.” When you turn a verb into a noun, you are often forced to supply some sort of empty verb, often a passive one, to fill the verb-void. Nominalizing a verb also often results in strings of ugly prepositional phrases, like “the revision OF the bylaws BY the committee members.”

So why on earth would anyone change their nice, fat action verbs into awkward nominalizations that force the whole rest of the sentence into unpleasant contortions of logic? There’s a surprisingly, depressingly obvious explanation. When a writer knows her subject really, really well, she tends to think in terms of lists of concepts. But a reader who is NOT familiar with the subject will find it much easier to digest in a totally different form: as stories about who did what to whom and why (that is, grammatically, via substantive nouns with action verbs). The writer deeply embedded in her subject is likely to write in strings of concepts (often in the grammatical form of nominalizations) linked by empty verbs like “to be,” “to have,” “to enact,” etc., and prepositional phrases like “the yadda-yadda of the humdinger of the balderdash of the chupa-chups.”

In the ideal case, the writer revises those strings of nominalized concepts into “stories” (even if abstract ones) structured around substantive nouns and action verbs. But, speaking as someone who has finished revising her first book under ridiculous time constraints and sleep deprivation (“constraint” and “deprivation” are both nominalizations), sometimes there just isn’t enough bloody time to revise as much as we would like.

(For those academic writers of any level who could use some help with the nominalization problem and more, I can’t recommend highly enough Joseph Williams’ Style: Toward Clarity and Grace.)

Academics have no sense of humor

Well, okay, I do see where this criticism is coming from. Without debating whether academics themselves have more or less humor than the general population, I will admit that academic writing generally contains little in the way of jokes or whimsy, let alone hilarity. The main reason is probably that we all want to be taken seriously by our colleagues, and many of us live in fear of not getting tenure or promotion (which rests in part on our publications). A second reason is that our subject matter often doesn’t particularly lend itself to humor (you try to make Stalinism or nuclear physics funny, why don’t you, and don’t forget to make an original contribution to the field while you do it!). And still another reason is that, again, our main focus is always clarity, since by definition our subject matter is complex and new.

That said, academic whimsy does exist and you occasionally find it in the wild. In Norman Davies’s God’s Playground: A History of Poland, Vol. II, on page 75 (1982 paperback edition) there’s a whole sentence where nearly every word begins with the letter P:

The proliferating profusion of possible political permutations among the pullulating peoples and parties of the Polish provinces in this period palpably prevented the propagation of permanent pacts between potential partners.

LOL. Okay, let me catch my breath. No, really, that was hilarious, was it not? Admit it, you laughed.

In sum:

There are a lot of reasons why academic prose may not be exactly scintillating. It may actually just be badly written, whether because the writer didn’t consider style important, or because the writer never had training in good writing, which most scholars didn’t systematically get until very recently. Or it may just be about a subject you can’t stand, and this aversion makes it harder for you to follow complex prose. The text may depend on a lot of jargon (necessarily or not). It may have been written with a very tiny audience in mind, of which you are not (yet) a member, so there may be assumptions to which you are not (yet) privy (though you can ask your instructor for help). It may, in rare cases, even be badly written on purpose, to “sound smart.” Figuring out, if possible, which of these is the case in a given instance may help you to wade your way through. Regularly consulting dictionaries and encyclopedias to expand your vocabulary is not only necessary, but part of the point — if you understood everything you read in college, you wouldn’t be challenging yourself, and you wouldn’t be learning, now would you? In any case, none of these reasons can serve as a good excuse for you to write badly, insofar as you can avoid it. Aim higher!


What is Academic Writing?

This is not what we mean by academic writing. Bundesarchiv B 145 Bild-F001323-0008, via Wikimedia Commons.

An academic essay is best defined by the PURPOSE that distinguishes it from other kinds of non-fiction writing:

It aims to identify and resolve complex problems in relation to ongoing discussions among fellow thinkers about the most difficult or abstract human issues.

In every field there are scholars working to resolve debates and questions of general interest (a “field” of inquiry can be anything from “history” to “the early nineteenth-century cultural history of the Russian gentry”).

As students or scholars, our written work is intended to be a part of such ongoing debates, and our aim is not only to illuminate a very particular problem through analysis of sources and original reasoning, but also to relate that problem to similar ones other scholars are working on, so that we – as a group – may better understand our whole field of inquiry.

The complexity of our subjects requires that our writing be as simple and clear as possible, and the goal of situating our ideas in relation to a wider public discussion requires that we refer to and analyze outside sources (i.e., other writers) as an integral part of our own work.

As such, scholarly essays generally have the following FEATURES in common:

-one main problem or a cluster of related problems is identified and its significance to the field is explained

-original claims and interpretations intended to resolve the main problem are made by the author, and supported by reasoning and evidence

-secondary sources: situate the author’s problem and main claim within a public discussion, and may also serve as support for some claims

-primary sources: support the author’s claims (Note that some kinds of scholarly writing – like book reviews and many undergraduate research papers – refer only to secondary sources)

-analysis of sources, both primary and secondary, to explain, question, and explore how they can support the author’s claims

-definitions of all specialized terms so their nuances can be analyzed in detail, and so terms may be reliably used in the same way by other researchers, or applied or adapted as necessary in new contexts

-style and structure appropriate to the intended audience

-rules of logic, evidence, citation and intellectual property are adhered to according to convention

READERS of an academic essay are assumed to be fellow toilers in the academic endeavor to “let our knowledge grow from more to more and so be human life enriched” (Crescat Scientia, Vita Excolatur, the motto of my alma mater).

In other words, we expect our readers to be looking to our writing for:

(a) information that will enrich or enlighten their own studies and

(b) our original ideas, conclusions or interpretations that will also help to further other studies and general enlightenment.

Readers of academic essays are generally not looking for:

(a) entertainment or aesthetic gratification,

(b) simplified or summarized versions of things they already know,

(c) conclusions or plans of action without the reasoning or evidence that led to them, or

(d) suspense or delay in finding out what the point is (though these are all valid elements in other kinds of essays, to suit other purposes).

Therefore, the virtues of STYLE AND STRUCTURE most often looked for (though not always achieved!) in academic essays are: clarity, cohesion, and brevity.

We want to find what we’re looking for, understand it, remember it, and apply it in new contexts, as quickly and easily as possible, without losing the inherent complexity of the ideas.

In order to best fulfill these goals, the classic short academic essay has a skeleton that looks something like this:

-Introduction: context, problem, proposed resolution (=thesis, which at this point may be only generally implied or stated in broad terms that will be elaborated later)

-Body: Argument (consisting of claims, evidence, reasoning), also including definitions of terms, background information, and counter-arguments as needed to make the argument clear and accurate

-Conclusion: restatement of problem’s resolution (thesis), and re-contextualization (how does this resolution serve the greater discussion, and where do we go next?)

(The citation and analysis of sources often plays an integral role in all three major parts of an academic essay: sources can be used to contextualize as well as to support the author’s claims. Every reference to a source, whether it is directly quoted, paraphrased, or merely mentioned, must be accompanied by a citation.)

Within this formula, there is enormous room for creativity, experimentation, and even subversion of the formula.

It is important to remember, however, that the formula is what academic readers expect to see. When you give them something different for no good reason (whimsy and rebellion are not good reasons), they will be confused, and your essay will have failed to achieve its goals.

To subvert the formula you must know the formula – that is, the reader’s expectations – so well that you can predict and guide reader responses in your own directions.

Every field or sub-field of academic inquiry has its own conventions, jargon, habits and expectations. Undergraduates encounter a greater variety of conventions than most other scholars ever have to deal with on a daily basis, and almost all of it will be new to them. This is very difficult, but it helps to concentrate on the basic principles and methods common to all academic writing (as defined by the common purpose described above), with occasional side-tracks into issues of particular interest to historians. When you work in other fields, you need to look for and assimilate the conventions or assumptions peculiar to those fields, and integrate them into the general principles and methods of effective analytical writing you have already mastered.

Finally, it may also be helpful to define an academic essay by WHAT IT IS NOT:

-Writing which aims to entertain or give aesthetic gratification (fiction, poetry, memoirs or “New Yorker”-style essays) may use entirely different devices to convey meaning (such as imagery, formal complexity, foreshadowing, juxtaposition, etc.), and it may emphasize expressionistic or impressionistic understanding over analytical understanding. Structures and formal elements can vary infinitely. (Academic writing relies exclusively on reasoning, logic, and rules of evidence because it must be reliably understood in the same way by every reader.)

-Writing which aims only to convey information (news journalism, some professional reports, textbooks or technical writing) naturally does not usually include an argument or thesis and has no need to refer to other arguments or theses. Often the most important information is placed right at the start, with other information following in decreasing order of importance.

-Writing which aims to direct future action or justify an action (exhortatory or opinion-based journalism, grant proposals, legal briefs, certain kinds of professional research reports). In these cases, an argument is an integral part of the structure, but the goal is to convince or inspire the reader toward a specific action, rather than to contribute new information or enlightenment for its own sake. Such pieces generally begin and end with a statement of the action desired, and the body consists of evidence or reasoning. They may or may not emphasize a critique of alternative arguments. Depending on the intended reader, they may simplify reasoning or evidence. Such works also differ from academic writing in that they are not necessarily situated as part of any larger discussion (therefore making much less use of outside sources or analysis of sources), and may require different rules of evidence or citation, or no such rules, depending on the intended audience.

-Writing which aims to tell a story based in fact ((auto)biography, memoir, narrative history, summaries of various kinds) generally eschews argument and analysis of sources, and may employ certain literary devices. Organization is usually chronological.

 

Coming soon: Why is academic writing so unpleasant to read?


Rogue Professors

Okay, so you’ve read my posts about managing your expectations in college, taking responsibility for your own behavior, and understanding what grades do and do not mean. And you still think your professor is being unfair.

Ion Theodorescu-Sion, via Wikimedia Commons

Okay, it’s possible your professor is being unfair. It happens. It happens partly because failure happens in every field, everywhere. In academia, a professor’s failure may happen because of the insane constraints imposed on contingent faculty, or the insane workload of full-time faculty, or the incredible pressure of trying to make ends meet on a faculty workload and a low faculty salary (more on that soon). Whatever the cause of an individual faculty member’s failure, let’s remember that it isn’t the tenure system.

Okay, whatever, what do you do when your prof is being unfair?

First, double-check yet again that he or she really is unfair. Re-read the syllabus, and the assignments, and all other course materials, and be honest with yourself about your work.

Okay, still unfair?

Talk to your professor. Most likely, there’s a miscommunication issue, or a simple mistake, at bottom. Typos happen, on assignment sheets and on grades. It’s not totally uncommon, and it can usually be easily remedied.

Eternally Good Advice: Always submit your work electronically as well as in hard copy, if you can. Whether by email or through course software, if you submit your work electronically it is time-stamped, proving that you did it on time. This is a good way of covering your butt in any case of confusion.

Talk to your professor respectfully, honestly and with an open mind. Be fair to yourself and to your professor.

If your professor does not respond to email, give it a week or two and then send a gentle reminder (knowing that faculty inboxes are inundated constantly with demands, most of which have more immediate deadlines than yours).

If, after directly trying to resolve any situation with your professor, you still feel that you are being treated unfairly in a way that will have serious consequences on your final grade, you can refer your complaint to the chair of the professor’s department. Again, be respectful, honest, open-minded, and fair (and if communicating via email, allow 2 weeks for response).

In extreme cases (and this is very rare), if you have a real case and you are stonewalled even by the chair of the relevant department, you can try explaining the case to the dean of students.

There are cases of real unfairness, and in those cases you absolutely should bring it to higher authorities. They really need to know if something seriously wrong is going on. Faculty can and should be held accountable for real incompetence.

But it’s also true that you are a student, and the vast majority of faculty members would not have gotten anywhere near the positions they’re in without many years of incredibly rigorous evaluation and training, so don’t take what they tell you lightly.

And in still other cases, there may be real unfairness going on, and whether or not you can get the department chair or dean to listen to you (and I certainly hope you do), it may not be worth killing yourself over. Ultimately, one grade in one class is not a matter of life and death. Do an honest evaluation of the costs and benefits to yourself of pursuing a case where you believe you have been treated unfairly. In any such case, you should always make sure someone knows what happened (with as much documentation as possible), in case there is a larger pattern at work, but once you’ve done that, it may not be worth what it costs you to pursue the matter further. The best course of action will depend very much on your individual circumstances.

I say this both as a professor who has seen many students upset and indignant over their own complete misunderstanding of basic policies that apply to everyone, and as a former student who was once or twice indignant myself over faculty behavior that felt—and may have been—very unfair. The best course of action really does depend on many factors.

Rogue professors do exist—they do—but they are not as common as your friends will tell you.


Being Original

Many students have the mistaken assumption that having an argument or thesis means they have to prove that some professional academic who wrote a book is wrong about his or her own specialty (an obviously impossible task for an undergraduate writing a short paper under strict time constraints). Such students often conclude that the expectation of having an argument in every paper is ridiculous, and they give up before they’ve even started writing the paper.

By Baroness Hyde de Neuville, via Wikimedia Commons

No professor (unless they really are crazy, of course) expects you to become an expert in a subject overnight, or to refute in a short essay ideas that were developed over years by an expert with access to all the original sources.

What they do expect is that you direct your very able and unique mind to the text and ask important, worthwhile questions. You should then explore those questions, and posit some possible answers, based on nothing more than your careful reading of the text and your reasoning.

Every book, no matter how carefully researched or how famous its author, rests on certain assumptions, is limited in scope, and is derived from some finite set of sources. Your job when asked to review or critique a work of scholarship is to examine its assumptions, limits, and use of sources, and from these to understand the goals of the work, and to assess how effectively it met its goals. Then, ask yourself what else could have been done, or should be done next, to further our collective understanding of this subject.

Once you have explored all these ideas, you ought to have come to some sort of conclusions of your own about the value of the work for various purposes, and what remains to be explored. These conclusions should be articulated as your thesis, and you will support this thesis with arguments grounded in the text to illustrate why your reading of it is fair and accurate. A critical review is not the same as a bad review.

A closely related problem that many students have is the idea that, as an author of a paper, a student has to at least pretend to know everything about the subject.

Actually, you really ought not to pretend anything, as an author (unless of course you’re writing fiction). What you should do is research and think about your topic as thoroughly as you can within the scope of a given project, and reflect that reading and thinking accurately on the page. Nothing more, and nothing less.

If comments you receive on your writing suggest to you that you are supposed to “know everything about the subject,” what it probably really means is that you did not do as much reading or as much thinking as the assignment required, or that the reading and thinking you did do somehow did not make its way onto the page. Look at your syllabus again, and/or the assignment sheet. Did you carefully read everything that was required for the assignment? Did you do everything the assignment asked of you?

In almost every case, when a student throws up her hands and says the professor expects too much, the student has not failed to write a truly original, publishable paper. Such a paper was never expected. What is most likely is that the student simply failed to read the course materials and requirements carefully. The latter is a perfectly reasonable expectation.


What is Tenure?

If you don’t like tenure, you might be a fan of this guy. Klemens von Metternich. Portrait by anonymous, via Wikimedia Commons.

Many people think tenure means job security. That it means that educators, unlike everyone else, can’t be fired.

This is nonsense.

Tenure does not equal job security. It does not exist in order to protect the jobs of teachers.

I could say this a thousand times, and still many people in this country would refuse to believe me, even though what I say is undeniably true here on planet reality.

That is because many people are listening to politicians who lie.

The same people tend to be cynical about politicians, but nevertheless, they choose to believe this particular lie.

It’s sometimes comforting, when times are hard, to identify someone who seems to have it better, and to hate that person.

The thing is, the people identified as scapegoats in these situations (historically speaking) tend to be people who do not, in fact, “have it better.”

So it is with teachers.

This is why tenure exists.

That link goes to a historical document, known as the Carlsbad Decrees, dating to 1819. It represents the true reason that tenure exists, and it also explains the purpose of tenure, but you may need some context to understand why.

In 1819 Europe had recently experienced some revolutionary movements. European revolutionaries at this time wanted the same basic rights of citizenship that Americans now take for granted as defining who we are as a people: freedom of speech, freedom of the press, and the right to vote for a government that is made up of representatives of the people, not of kings. In the first half of the nineteenth century, these ideas were still scary and radical in Europe. The monarchs who sat on thrones across most of Europe at that time did not want to acknowledge such rights. And many rich, powerful, landed aristocracies sure as heck didn’t want to extend voting rights to a bunch of uneducated, not-terribly-hygienic “masses.”

By “masses,” they meant my ancestors, and most likely yours.

In this climate, in the various German-speaking provinces of Europe (some of which were independent tiny principalities at this time, some of which were part of an enormous Empire ruled by Austria but made up of many peoples, from German speakers to Poles to Hungarians to Muslim Serbs), some people liked the ideas that the French and Americans were so excited about, that people have “natural” rights. But these people were ruled either directly or indirectly by an Emperor, and their Emperor was in his turn ruled by a powerful minister, Klemens von Metternich.

Metternich thought social classes (that is, ranks in society that were determined by birth: aristocracy, middle classes, working classes, peasants) were ordained by God and should not be meddled with. People who were not born to wealth and social rank should not vote because, Metternich thought, God said so. It was the natural order of things, and upsetting that order would lead to chaos. Also, Metternich himself was born to wealth and social rank (pure coincidence, I’m sure), and he liked that, and didn’t want anyone else horning in on his privileges.

Metternich, in other words, was the embodiment of everything the American Revolution fought against.

Metternich was the man behind the Carlsbad Decrees. He forced the German Confederation (a loose group of German-speaking states that Metternich dominated) to all agree to sign this document.

What does the document say?

You can, and should, read it yourself. Here’s the quick version: it is based on an assumption that universities are a hotbed of revolution (of ideas like those that founded the American Republic, in other words). Students are young and silly and get persuaded by their over-educated professors to think wild ideas. Sound familiar? It’s something we’re hearing in the news in the USA right now, in 2012. But the “wild ideas” that Metternich was so terrified of were the same ideas that ALL Americans, liberal or conservative, now hold dear: that freedom of the press, freedom of speech, and the right to vote for a representative government are the best way to go. Metternich was terrified of students learning these ideas from their professors at university. So, in the Carlsbad Decrees, he made it the law in all the signing German-speaking states that universities be watched over by a government appointee whom Metternich selected. Students would not be allowed to meet in groups. Any professor caught saying things in class that Metternich didn’t agree with would be fired.

Sound familiar? Yeah, it’s totally the plot of Harry Potter and the Order of the Phoenix. And that’s probably not a coincidence. J.K. Rowling is an educated lady.

Tenure was created because of the Carlsbad Decrees and other laws like them, passed across Europe in the decades following the French Revolution. The main idea of tenure is that professors should not be fired for disagreeing with a prevailing political view.

Professors can be fired for other things, like not doing their jobs. They can be, and are, fired for not showing up to teach, or for not being qualified to teach. Probably not as often as they should be, but can you honestly say that everyone in your field of work is fired as soon as anyone realizes they’re not terrific at their job? Of course you can’t. Incompetent people exist in every profession.

Tenure does not technically prevent anyone from being fired for incompetence, and protecting such people certainly isn’t its purpose. It does prevent people from getting fired for saying something that others disagree with. The tricky bit is that the line between these two things can often be grey and is almost always contentious, but it’s a VERY important line.

Why? Because the nature of education (when correctly understood as a process of exploring and learning about the world, not, as Metternich understood it, a process of making everyone think just like he did) is that professors MUST discuss ideas that not everyone will agree with. Students are not forced to agree. But they are forced to be exposed to ideas they may not agree with. This is the very definition of education.

And if a student is secure in his or her beliefs, there is nothing dangerous about this process, and much that is beneficial.

Also, not all teachers have tenure. In order to get tenure, you have to go through a process. This process varies from place to place and from level to level of teaching, but no matter where you look, that process is difficult, and more intense, I argue, than any review anyone undergoes in any other profession as a contractual part of employment.

Whoa. Think about that for a moment. No one else, in any other profession, has as part of their employment contract the requirement to go through a process of scrutiny this intense. It comes after all the scrutiny required to get the degrees you need even to apply for the job (for university professors, the highest and most difficult degree you can get), and after the job application process itself. This is in addition to all that.

Usually, at the university level, the tenure process involves at least the following:

  • Recommendations from one’s peers
  • Recommendations from one’s students
  • Recommendations from one’s colleagues outside one’s own institution
  • Examples of one’s original research from prestigious, peer-reviewed presses (in my field, usually a book and at least a couple of articles)
  • Examples of one’s teaching pedagogy, through syllabi, assignments, examples of written feedback, written explanations of one’s “teaching philosophy,” etc.
  • Evidence of one’s ability to compete successfully for outside funding
  • Evidence of a substantial research plan for the future
  • Observations of one’s teaching provided by peers in the profession
  • Evidence of one’s service to the institution where one works
  • Evidence of one’s service to one’s discipline, or the profession as a whole

Do you have to do all this to keep your job after 5-7 years? Does anyone have to do this outside of education? It is unique. I’d argue that educators are more closely vetted than members of any other profession on earth. (Okay, except maybe spies.) We’re also uniquely underpaid among professions that require comparable levels of education, especially those that require fairly extensive ongoing training and adherence to ethical standards, like law and medicine—an interesting fact in itself, but one for another post.

But we can still be fired, after all this, in cases of demonstrable professional misconduct where academic freedom is not a complicating issue. So tenure is not job security.

What we can’t be fired for is saying something that our bosses disagree with.

Now, that is also different from most professions.

In many corporations, or hospitals, or law firms, you can be fired if you step up and say to clients, or to patients, that they are being cheated by the institution to which they are paying money for services, for example. (This works to the vast disadvantage of clients and patients who really are being cheated, by the way.)

But universities are different. Because our job is to teach young people, we have to be able to be completely honest with them.

The students, on their part, have the right (and for heaven’s sake the DUTY!!!!) to think for themselves about what they hear from their professors. Any prof worth their salt actively encourages this. Some of us jump up and down and wave our arms, literally begging students to question what we say. Teaching students to question what we say is our whole reason for existing in this profession, and most of us feel pretty strongly about it, or we wouldn’t sacrifice so much to go into such a benighted and underpaid profession in the first place.

Tenure protects our right to say what we see and understand to be necessary (and remember, “we” are selected according to a uniquely rigorous process that takes five to seven years, after five to ten years of post-graduate training), in order to expose students to all possible points of view, so that students can choose for themselves what to think.

Tenure does not protect our jobs. It protects students’ right to think for themselves.

Tenure was created to protect the right to think such “seditious” ideas as the United States was founded on.

There is nothing in the world more patriotic than the institution of tenure. Every day, tenure protects our republic from people who want to bring back Metternich.

Anyone who tells you different is either lying to you, or too ignorant to be worth listening to on this matter.

If the person saying these things is lying, it is a good idea to imitate the best kind of college student and ask why.

Food for thought on this topic from the Chronicle of Higher Education.

Food for thought from the always great podcast of the Colonial Williamsburg museum: Thomas Jefferson’s ideas about education

Update: more food for thought: How the American University was Killed in 5 Easy Steps

A final word: I know you can name someone who has tenure and should be fired, but isn’t being fired. In those cases, the solution is to look into two things: (a) the tenure review process, because if people are getting through who shouldn’t, then the process at a given place may need to be revised; and (b) the real reason the person in question isn’t being fired. What may look to you like a clear case of incompetence may actually be a greyer area of differing views on effective teaching. If it IS a clear case of incompetence, there are other factors that come into play besides tenure: those whose responsibility it is to fire someone in this situation may not see it as worth their while. Just one of many reasons they may not fire someone is fear of a discrimination lawsuit, or union blowback. However you may feel about the validity of discrimination lawsuits or unions, you should separate those issues from tenure. Not. The. Same.
