What Is Socialism?

Judging by the way the media and the GOP talk about it, you might conclude that socialism is anything the GOP disagrees with.

Part of my job is teaching what socialism actually is, so I get asked this quite a bit.

First, socialism isn’t one thing. There is socialism the idea—and the idea has been expressed in different ways by different people—and then there are a vast variety of ways that the idea of socialism has been implemented in various times and places.

When I talk about socialism in my classes, I usually start by drawing an umbrella on the board. Because socialism is an umbrella term for all these different manifestations. Only one of the many manifestations of the socialist idea is “Communism.” And then there’s Soviet Communism as opposed to, say, Maoist or several other kinds, and Soviet Communism also changed dramatically over time, so there’s really no such thing as one Soviet communism. More on that below.

At the most basic level, the core of socialism that all these varied manifestations share is the notion that it would be a good thing if economic resources were distributed equally in a society.

Here’s just the start of a list of things not all socialists agree on when it comes to how that equal distribution would happen:

1. By “equal” distribution of resources, do we mean absolute equality (everyone has the same) or do we mean relative equality (some degree of correction of the enormous gaps between rich and poor that characterize capitalist systems)?
Various mid-nineteenth century experiments in communal living aimed for absolute economic equality. Today’s European social democracies aim only for a modest degree of relative economic equality.

2. How would this distribution of resources be imposed, regulated, or maintained?
Since the assumption is usually that a society with non-socialist economic principles would be shifted to socialist economic principles, some mechanism would be required to effect the shift of economic resources from just one part of the population to a more even distribution across the whole population, and then to maintain that relative balance as time passes. There are many, many possible ways for this to happen. Just a very few of the possibilities are:

    A. Voluntary sharing of wealth (as in a commune or co-op)

    B. Government regulation and taxation provide incentives and other “invisible” methods of shifting some limited economic resources to the poor within an essentially capitalist economy.

This could in theory be done in a very minor way (as it is in all industrialized countries right now), in a moderately progressive way (as it is in some social democracies in Europe), or aggressively (which has arguably never yet been tried).

    C. Government legislates salary caps and high minimum wages to deliberately even out wealth
I don’t know of a case where this has been tried to any significant degree.

    D. Government nationalizes property (wholly or partially), sets prices, and otherwise directly controls the economy, seizing and redistributing assets as necessary

The Soviet Union did this in the early years following the October Revolution, in a policy referred to as War Communism, since it took place during a civil war and was justified as necessary to save the revolution in its infancy. Lenin changed this policy—reintroducing a limited market and limited private property—as soon as the Civil War ended, though doing so was very controversial in the Party. We don’t really know what Lenin intended in the longer term, since he died in 1924.

    E. Government plans economic production ahead of time (wholly or partially), determining what is made or exchanged by whom on what terms

The Soviet Union began doing this with the first Five Year Plan in 1928 (under Stalin), and it characterized most of the Soviet economy in subsequent decades.

    F. War/revolution are employed to redistribute wealth by force

Arguably, this is another way of describing the Soviet policy of War Communism, as well as forced requisitioning during wartime in many other parts of the world.

3. What resources are we talking about? Just cash? Money and property? How about commercial services? Does socialism address political equality directly?

Traditionally, the discussion of what to equalize is about tangible economic resources, not health, education, or political rights. Although there are clearly connections between economic resources and how easily you can access medical care, education, or civil rights, socialism is at its core a theory about economic resources. The idea is that once those are equalized, the rest follows. Access to intangibles such as political rights, health, safety, and knowledge is really about the distribution of power, and is therefore fundamentally political, not economic, in nature.

IMPORTANT: Socialism, as theory, is an economic idea, not a political idea. So there is no inherent connection between socialism and any particular form of government.

Sing it with me: Economic ideas are about how money and other tangible resources are distributed. Political ideas are about how power is distributed.

Many Americans assume that there is some inherent connection between capitalism and democracy, and between socialism and authoritarianism. There is no such inherent connection, in theory or in practice. There have been democracies with socialist economies (much of Scandinavia in recent decades, for example), and democracies with capitalist economies (such as the US). There have been authoritarian governments with capitalist economies (most absolute monarchies in the nineteenth century), and authoritarian governments with socialist economies (such as the USSR).

While all socialists like the idea of some degree of equality of wealth, socialists have not historically agreed on their preferred form of government. Since the collapse of the Soviet Union, however, most (though not all) people arguing for socialism in the industrialized world prefer democratic government and non-violent methods of wealth redistribution.

It should go without saying—though sadly it does not!—that by “people arguing for socialism” I do NOT refer to the U.S. Democratic Party. Economically speaking, the American Democratic platform is on the conservative end of the spectrum and, from a European point of view, virtually indistinguishable from the U.S. Republican Party on economics. By “people arguing for socialism” I refer to people actually arguing for socialism, such as the Socialist Party USA or the American Social Democrats. Ask them what they think of Obama, I dare you. (LOL)

4. Is socialism something that can be achieved, or does it happen “spontaneously”?

This has historically been an incredibly contentious question. Many proponents of socialism consider economic equality a goal that can be worked for, and perhaps fought for. Others acknowledge that economic equality would be an improvement for human societies over capitalist or other economic systems, but do not believe that socialism can be created “from above,” that is, imposed by professional revolutionaries or government fiat.

Karl Marx inspired many professional revolutionaries, including the Bolshevik Party that took power in Russia in October 1917 and set about imposing socialism from above. But Marx himself believed socialism would happen “spontaneously,” from below: economically exploited classes would recognize how they were exploited and work together to take control of their economic power as producers, eventually producing a system characterized by greater economic equality, which Marx identified as “socialism.”

He wrote about all that in the second half of the nineteenth century, as labor in Europe was indeed being grotesquely exploited. After Marx’s death, labor in Europe and the U.S. began to organize and to strike for better conditions. As it happened, the general revolution Marx predicted did not occur (at that time!) — instead, the owners and managers compromised enough on working conditions and wages that workers began to enjoy (just) sufficient health, safety, and access to material goods and education not to be motivated enough for a revolution along the lines Marx expected. The democratic socialism and welfare systems of liberal democracy that dominated Europe after the Second World War have essentially held that compromise in place. Until recently, that is, when deregulation, anti-union legislation, and the defunding of welfare and other public programs in the US and (to a less extreme degree) in Europe began to shift the labor-management relationship backward again. It remains to be seen where this relationship will go, but I find the Occupy movement a fascinating early sign of resistance to these anti-labor policies.

I say this only to point out that Marxism is not necessarily a relic of history, but still a framework that can be applied to working conditions and economic systems today.

Okay, so that’s socialism. What about Communism?

Communism is even more confusing!

Communism has a lot of meanings, too, depending on the context in which it’s being used.

Marx and Marxists have been known to use “socialism” and “communism” interchangeably, but when they’re being picky, socialism is often referred to as a transition stage on the way to communism. In this sense, socialism marks a stage after a revolution has overthrown private property, but before government has “withered away.” Communism then describes a utopian stage where government is unnecessary—society is classless, all labor is equal, and the system can maintain itself.

What gets really confusing is when a country like the USSR undertakes a revolution and declares itself a Marxist state — what they said they had achieved was not socialism or communism, but a revolution that was directed toward that end. So, when the Bolshevik Party that seized power in Russia in 1917 changed their party’s name to the Communist Party and their country’s name to the Union of Soviet Socialist Republics, they were using those terms aspirationally—they were aiming for socialism and communism. In the years that followed, the Party dithered about just how much socialism had actually been achieved at any given point, but technically communism, if you read your Marx and Lenin, as every Soviet citizen did, remained on the horizon.

That would be confusing enough, except that these aspirational names have by now become descriptive of the countries engaged in this experiment. So, while the Soviet Union was attempting to achieve Communism, it became known as “a communist country,” and thus we began to speak of “Communism” not as the utopian final phase of Marxist development, never (yet) achieved on earth, but as “what they’re doing over there in the Soviet Union.” This is an extremely problematic usage when even in the USSR the Communist Party admitted that what they were doing was not actually Communism!

Since the end of the Cold War (at least) most scholars don’t like to refer to anything the Soviet Union was actually doing as “socialism” or “communism” because the terms are so imprecise. We tend to use those words mainly to describe the theories. The reality in the Soviet Union is known by the specific policy names used by the Party at the time — such as War Communism or the New Economic Policy or Perestroika — or in more general contexts by the leader who is associated with a certain cluster of policies, hence, “Leninism,” “Stalinism,” or for the Brezhnev period, “stagnation,” a term coined by Gorbachev that is irresistibly evocative, if not precisely accurate. One can also speak accurately of the type of socialism actually practiced in the Soviet Union as “planned socialism” or simply a planned economy.

Anarchism

A final note on anarchism, another frequently misunderstood term. Anarchists do not advocate chaos. Anarchism is also something of an umbrella term, encompassing both individualists and collectivists, but the collectivist branch can be seen as a variant of socialism. What distinguishes collectivist anarchists is that they are particularly concerned with the role of government in establishing or maintaining economic equality—namely, they want government to stay the heck out. A case can be made that if there were ever hope for the Bolshevik Revolution to live up to any of the theoretical principles on which it was based, this hope was derailed by the domination of government and Party at the expense of workers. Other arguments can be made to explain the many hypocrisies of the Soviet state, but there’s no question that Lenin’s notion of the Party as “vanguard” leading the revolution on behalf of workers resulted in a much more powerful role for the state than many socialists condoned at the time or since.

Posted in History, Russia, Teaching

Russians Love Their Children Too

By Rita Molnár, via Wikimedia Commons

I’m quoting Sting, of course, in his famous — and at the time daring — song, released in 1985, during the Cold War. He was expressing the hope that the Russians, though our enemies, were human too, and loved their children enough not to push the button and start a nuclear war. Fortunately, it turned out that indeed, Russians love their children, too.

Imagine a bunch of Russians on an internet forum debating the merits of capitalism. Imagine that they’re talking about the United States in the 20th century as if it were all one, unchanging thing. As if the Civil Rights movement, the Great Depression, and post-Reagan neoconservatism were all happening simultaneously, and all of it characterized who we are as a people. Imagine that people are saying all Americans have been merely reactive to our regime, that we are materialistic products of the free market, which drives our every action. Imagine that these writers on an internet forum acknowledge no social or cultural changes of any kind, and seem to believe that all our political leaders (FDR and Hoover, Coolidge and Clinton, Bush—either Bush, what’s the difference—and Obama) had essentially the same outlook (because after all we’ve been a capitalist democracy the whole time, haven’t we?). Now imagine that these Russians are arguing that these “facts” about the U.S. prove that capitalism must necessarily lead to chauvinistic imperialism and enormous gaps between rich and poor, to the degree that thousands of people are homeless in the richest country in the world (Russians didn’t know homelessness until they “democratized,” a correlation that could easily be misunderstood as causation).

It’s all patently ridiculous, of course. It’s hard to even know where to begin to correct all the false assumptions embedded in that argument.

Yet, I’ve heard it — often. Pretty much every time either “capitalism” or “democracy” is mentioned in my presence when I’m in Russia, actually, most of the points I’ve outlined here are made to me as if this should suddenly make me understand everything about my homeland that I’ve been blind to all these years.

The thing is, Americans just as frequently make the same mistake about the Russians. Every time you see a bunch of Americans (often on an internet forum) talking about how Russia proves that socialism isn’t possible, you’re seeing that same mistake being made.

I wrote that imaginary scenario by reading an actual internet argument by Americans about the Soviet Union and socialism and simply swapping the USSR for the US and socialism for capitalist democracy, to show how silly it is.

You can’t look at one moment in time and use it to characterize a whole century.

It is a mistake to confuse rhetoric and reality.

It is also a mistake to assume that socialism, an economic idea, has an inherent connection to authoritarianism, a political system. Socialist democracies exist, and so do authoritarian societies with capitalist economies.

It’s a mistake to confuse a people with their government.

It’s a mistake to lump hundreds of millions of people together and imagine they all think and behave the same way.

Yet everybody makes these mistakes, all the time. People are ignorant everywhere, too — which is only natural. You can’t know about everything, and it’s easy to be unconsciously influenced by media. Does anybody think middle-class New Yorkers really get to live in apartments like the ones you see on Friends? If you do, I have a bridge to sell you. For the same reason, you shouldn’t imagine that the movie From Russia with Love tells you anything about Russia — it tells you only what those western filmmakers imagined about Russia for their own artistic and economic purposes. See my previous post on Rocky IV.

Interestingly, I’ve noticed that there are a lot more realistic Russian films set in normal-looking apartments than there are American films featuring people living in anything like any dwelling I have ever known in real life (though Russian TV is getting weirder and weirder and there are fewer realistic films and more ludicrous shocksploitation ones being made, so this is changing; I refer mostly to the 1970s-1990s).

I don’t think most Americans walk around deliberately spreading unfounded assumptions about other countries. We have a reputation abroad for doing it more than anyone else, though, deliberately or not, and that’s embarrassing. I find the most effective way to remember not to make these kinds of mistakes yourself is to see how it feels when someone else does it to you. I’ve lived in Norway and in Russia for fair amounts of time and traveled briefly around Europe, so I’ve collected my share of anecdotes of this nature. A woman in Prague in 1992, who checked my passport at a currency exchange point, saw that it was issued in Chicago and asked me if I was afraid to live there. I thought it was the usual “don’t you get shot by gangs whenever you set foot outside” thing, but it turned out it was Al Capone — she thought he was still alive and busy! That was not the last time I came across someone who thought Al Capone was our contemporary.

The first time I lived abroad in 1991-92, I was continually asked if I lived in New York. No. Miami? No. L.A.? No. Well, but you can tell me what they’re like, right? No, actually I’d never been to any of those places. WHAT?!! But you said you were American?! Even those Europeans who have traveled to the US often visit only a major city or two, so many have little idea what’s “in” the rest of the US. Outsiders’ perceptions of our economic status are also often taken from Hollywood, or otherwise filtered through distorting lenses. For example, when I taught English in St. Petersburg in 1998-99, a student of mine once confessed to me that he had seen a documentary about the homeless in America back in the ‘80s, and because he saw the homeless people on TV wearing blue jeans — which at the time cost a month’s salary in Russia — he concluded that even the homeless in America were rich!

Before you laugh too hard, remember that the assumptions Americans make about other countries are often distorted in exactly this way.

Posted in Random, Russia

Unlearning High School in Five Painful Steps

By Maho mumbles, via Wikimedia Commons

This is addressed to all the college freshmen out there.

There are a few habits you may have learned in high school that will have to be adjusted in college. Remember that the chief difference between high school and college is that high school aims to fill your brain with some basic knowledge of the world and introduce you to the main fields of inquiry (mathematics, science, social science, humanities, the arts), while the main goal in college is to train you to think critically about the world: to analyze, to find and sort through new information effectively, and to apply lessons from one sphere to another. Each discipline uses different techniques, which you are meant to familiarize yourself with as you take courses in different departments, but the overall goal of all disciplines is to train you in advanced critical thinking. Later, as you choose a major, you will also be expected to master many of the subtleties of a specific discipline, defined more narrowly than the subjects you studied in high school.

In the case of history, in high school you are taught the basic facts of history and you are perhaps exposed to some questions any citizen might ask about our past. In college, you are expected to act as an apprentice historian, to try out the more complex methods of professional historians in order to understand them fully, and to ask deeper questions about the nature and uses of history, and how history influences our society.

In other words, in high school you are told a story; in college you are invited to discover how stories are written and what they may mean from different points of view.

1. The 5-Paragraph Essay

Frequently taught in high schools, the 5-paragraph essay model is a solid way of teaching students the basic outline of most scholarly writing: an introduction that sets up a problem and a resolution to it, a series of points of evidence supporting the resolution, and then a conclusion that summarizes the case made and connects it to broader implications. This is a good basic model. Naturally, however, not every argument relies on precisely three points of evidence, and not every introduction or conclusion can best be articulated in precisely one paragraph.

The rigidity of the five paragraphs can safely be left behind in college, though you should retain the overall structure of introduction-problem-resolution-evidence-conclusion.

In college we expect you to be familiar enough with this model to reproduce it reliably, and we now want you to focus on content: think through real problems and evidence and come to your own reasoned, supported conclusions.

This difference implies something very important about how your writing process in college should differ from what it was in high school. When your goal was just to practice the 5-paragraph model over and over, it made sense to start with an outline, fill it in, and call it done. That is not sufficient in college, because it allows you only to record whatever you already know, not to discover new knowledge.

In college, writing should be a process of sorting through complex information, understanding it better, and then figuring out what you think about it. To do this properly, you must write many drafts. Start by explaining the evidence and arguments from your source texts in detail in your own words — that’s the best way to figure out what the evidence really is. Then start to ask questions about what the evidence means, what it adds up to. As you clarify the questions the evidence can help you answer, you will gradually come to some conclusions about how to answer your questions. Only at this point can you put all this into an outline and revise according to the introduction-problem-resolution-evidence-conclusion model!

2. You must do the reading at home

The number of hours spent in the college classroom is obviously far smaller than in high school. This is not because college is easier, or because it’s meant to be done on the side while you work (or play).

The way college courses are structured, the expectation is that a full load should add up to at least 40 hours a week, or the equivalent of a full-time job by itself. You should expect to work an average of 2-4 hours at home for each hour you spend in class (with, say, 15 hours in class, that means roughly another 30-60 hours at home), though in practice you will find that you’ll spend less time than this some weeks, and much more in others.

Because class time is so limited, we cannot waste it sitting and reading in a room together. Class time is for synthesizing the material, asking questions about it, and learning how to identify patterns in it. For that time to be worthwhile, you must come to class fully prepared.

At home you should be mastering the basic facts covered in the course (usually provided in the textbook) and absorbing the content of the other readings, so that in class you can think about the questions, problems, and arguments they raise.

In class, you should be taking notes, but don’t try to write down every word said. If you are sufficiently prepared you should not need to write down every factoid, but should be able to focus on questions, problems, and patterns.

3. You will not be rescued from disaster at the last minute

We can fail you, and we will. I understand that it has become common in American high schools never to fail a student no matter how poor their performance (a practice that, you may have noticed, only brings you to college grossly unprepared, which does you a real disservice in the long run), and to allow make-ups, revisions, extra credit, and the like to improve grades. Do not expect this to happen in college. You are personally responsible for your performance, and your own learning.

If we could put the knowledge and skills you need on a flash drive and stick it in your ear, we would, but it doesn’t work that way.

Think of college as being like a gym membership: you pay to have access to the facilities, and to trainers who can help push you along, show you the most efficient way, and keep you from hurting yourself. But you still have to do the work, or you’ll never get in shape.

4. Assessments are far less frequent, so they count more

In college it is typical to have only one or two exams per semester, and perhaps one or two additional papers (this can vary widely; when I was an undergrad, most of my classes had just one paper or one exam!). This means you must master a greater amount of material for each assignment than you may be accustomed to, and each assignment will count for more of your final course grade. Final exams frequently ask you to synthesize material from the entire semester, to enable you to tie together everything covered and to make connections among different places and periods (for a history class).

So studying is not about memorizing details just long enough to pass a test, then forgetting it all. Generally, there is less memorization needed at the college level, but it is vital that you fully understand concepts and that you think through the material being covered. Always ask how each piece of material connects to others, and why it matters — these are the most significant “facts” you need to learn.

And, of course, remember that it’s not okay to “bomb” one exam or paper — because of the smaller number of assignments, this will make a big impact on your final grade, and it won’t be possible to make up a bombed assignment later.

5. Feedback matters

In high school you may have found that you got very small amounts of feedback very regularly, and that it was generally positive. (The theory that constantly bolstering students’ self-esteem will help them succeed — though now convincingly debunked in my opinion — has been dominant in the schools since I was in kindergarten.)

In college it is more likely that you will get feedback relatively rarely, but it will be detailed and focused on what you need to do differently next time. The idea of this kind of feedback is not to be mean. Feedback is never about you as a person, but about the written work you turned in on a given occasion.

The instructor’s goal is to help you, by showing you where you need to improve most, so that you can do better next time. Always pay very close attention to feedback; don’t take it personally, but do consider it a guide to how to approach your next assignment (even if that next assignment is in another course!). If you don’t understand the feedback you’re getting or it isn’t enough, talk to your professor!

You’re an adult now. If they don’t hear from you, they assume you know what you’re doing.

 

 

Note: much of my information about what the high schools are up to these days comes from colleagues, as does the gym metaphor, for which I will be forever grateful.

Posted in Teaching

Revision

By Hownote, via Wikimedia Commons

There are two kinds of people in the world: those who revise, and those who don’t. The former are writers, the latter are not.

This implies that the way to become a writer is to revise. A lot. And that’s absolutely true.

Yet, many novice writers, especially college students who are writing a lot of papers under tight deadlines, persistently believe the myth that by “writing process” one means: start typing, continue until you hit the word limit, proof-read or spell-check, and hit “print.”

This is a recipe for papers that—even if full of brilliant ideas—probably can never make it out of the B-range, and very often are much worse.

Almost any experienced scholarly writer can tell you that revision IS the writing process. How you get a first draft on paper matters very little, and every writer will have her own habits (and superstitions) about how to do it. But taking the usually mushy, half-formed, inarticulate ideas from your own head, where they are warm and happy and seem clear, and translating them into a form that an unknown reader can quickly and easily understand is a complicated craft that involves many steps.

Moreover, almost anyone who’s ever written something truly original or exciting will tell you that most if not all of these ideas come out only in the process of writing (that is, revising). What seemed brilliant when you sat down at the computer becomes “belaboring the obvious” after a few hours of working the sources and your own thoughts into organized structures. It is this process that usually reveals the connections and inconsistencies that lead to brilliant new ideas.

Most students turn in papers with a thesis at the end of the essay (regardless of whatever it was they wrote at the end of the introduction, way back at a different stage in their thinking and now forgotten). Often, this thesis-at-the-bottom is very interesting, because it was developed out of a detailed discussion of the evidence. But, unfortunately, most students stop and print at this point because they run out of time. These essays are never more than half-baked, and serve only as a record of the student’s thought process.

To make it a solid essay, the student must recognize that when that thesis finally “articulates itself” at the end (that’s often what it feels like when it happens), they have merely reached the half-way point in the writing process. Now, it is time to translate the “writer’s draft” into a “reader’s draft.” The new, richer thesis must be put at the end of a new introduction that tells the reader what the paper is, now, really going to be about. The discussion of the evidence must be re-worked for the convenience of the reader, not the writer. And finally, the student must reflect a bit on what has been accomplished, and put this new perspective into a new, real conclusion. Only then have you reached the point of polishing the prose and proof-reading for errors. But having come this far, you will have the satisfaction of knowing that your essay is finely crafted and original, and that you have expressed yourself effectively.

Even when students do recognize what the revision process is really about, they often claim they still can’t do it, because they believe that revising takes more time than they have, or is not worth the time put into it, because after all the great ideas are on paper somewhere and that’s all that matters.

Think about it: do you want to bank your grade on the idea that your TA or professor will do all that work I’ve just described to untangle your paper for you, so they can have the privilege of receiving your great ideas?

They read many, many papers and some of them will be just as interesting as yours, but better organized and clearer. They can only put the same amount of time into each. They have seen (and probably tried themselves, at some point) every trick there is involving fancy fonts and margins, high-flown language, and “filler,” and recognize all such silliness for exactly what it is (which doesn’t stop them from being annoyed by it).

More importantly, though, in the long term learning to write a solid paper is easier than trying to get by with unrevised schlock. In fact, in purely practical terms, the single easiest thing you can do to improve your grades on essays is to spend more time revising (as long as you do it mindfully). Putting your exciting thesis exactly where the prof expects to find it, and following it with a series of supporting points, each accompanied by at least a couple of paragraphs of thorough discussion complete with specific examples, caveats, counter-arguments, and elaboration and interpretation of all quotes, can hardly fail to result in a good grade with any professor or TA (assuming of course that you’ve correctly understood and followed the assignment, and read and understood the sources).

You don’t usually have to guess what the professor wants—the standards are usually quite predictable for a short college-level essay. And if you’re reading the sources and understanding the material, there’s really nothing stopping you from doing well but time. Start your next paper with twice as much time to work as you usually give yourself. The beauty of getting really good at revising is that it gets faster and faster with practice, so that eventually you can expect to need little more time than you probably take now, but will produce much higher quality work.

Posted in Teaching, Writing

Obama the Professor


“How is it that not one of you has actually read the syllabus?!” Heh. Via Wikimedia Commons.

There have been a lot of profiles written about Barack Obama, and I have read many of them with interest. As usual, I tend to read them with half my mind thinking about the difference between these kinds of profiles written in the moment, and the versions of a life written by biographers and historians long after the fact. It’s the sort of exercise that entertains me.

I don’t claim to have any profound predictions about Obama’s legacy, or even unprofound ones. I’m merely interested to watch it unfold. Right now, what interests me is the huge variety of interpretations about a man who is alive and working and accessible (more or less) to the journalists doing the writing. Historians are used to trying to re-construct the life of a person who is long dead, whose friends and coworkers and family are all long dead, and who may, in many cases, have left precious few written traces of his or her actions, let alone thoughts (chances are, in the case of a “she” there’s even less than in the case of a “he”). To me it seems like an embarrassment of riches to write a life of someone still living, with the benefit of interviews where you can ask whatever you want, with extraordinary documentation, and access, potentially, to thousands of people who know and work with him.

With this touch of envy in mind, I always feel a bit dissatisfied by contemporary profiles of important people. Especially when there are a lot of them, as there are with Obama, it seems like the more you read, the more it becomes noise, and the less you can pin down who this person is.

I have particular difficulty with the classic lengthy profile that often appears in periodicals like Vanity Fair or The New Yorker. You know the kind, where the author plucks from obscurity a handful of random but colorful anecdotes, asks some random but colorful questions, and mashes the whole thing together into a rambling “think piece” that feels profound, but…isn’t. It leaves you knowing less than you did before you read it, and somehow all the anecdotes taken from interviews and in-person observations feel inauthentic. One has a sense that the writer was gathering them like a preschooler collects bits of paper for a collage — “ooh! A red one! Score!”

I don’t mean to sound snarky. I really enjoyed the recent piece in Vanity Fair by Michael Lewis. It struck me as unusually insightful about what it’s actually like to be president. And I think he may have asked the most brilliant question I’ve ever heard asked of a president for the purposes of finding out his character:

“Assume that in 30 minutes you will stop being president. I will take your place. Prepare me. Teach me how to be president.”

But I came away from the article having little if any insight into Obama.

One of the most insightful people writing about Obama, I think, is Andrew Sullivan. Sullivan tends to characterize Obama as a conservative, even a paragon of a conservative. I’m of the school that thinks that’s incredibly accurate on a number of levels (whether that’s a good thing or a bad thing and on which levels is another question, of course).

Much more often, Obama is accused of being a kind of Bambi — too soft on this or that, unwilling to take a stand when stands need to be taken, unwilling to push hard, unwilling to ram his will through no matter what. (Of course, he’s also accused of the opposite, but I’m trying to pull some of the more prominent threads out of the infinite cacophony here).

But the thing about Obama that has always struck me as most obvious, even blinding, is something I don’t really see get mentioned in these profiles. I’m talking about the fact that Obama is a professor. He was literally a professor when he taught law at the University of Chicago law school (disclaimer: at the time he was doing that, I was living in an undergrad dorm next door, and some friends and I may have gone wading in the law school fountain once and been yelled at by some law school prof who almost certainly was not Obama, though I like to tell myself that it could have been). Less literally, he’s always struck me as being a professor type, and I say this as a professor type with a lot of professor-type acquaintances, in addition to having done my time (and then some) staring at a podium from the other side of the room.

Of course the media has not missed the fact that Obama was a professor. This piece was particularly interesting. And he’s fairly often criticized as “professorial” when he’s being stiff and wonkish (but even more often, in 2008 especially, he was criticized as speaking in a “lofty” way devoid of detail or substance — another example of the media not being able to make up its mind about him).

I think he’s professorial in much deeper ways than speaking style, and I think it explains the sense people get of his conservatism (which often outrages his base) as well as the “Bambi” meme.

Run with me for a minute here. Imagine a college classroom, a small seminar class. The subject doesn’t matter. You’re the professor, and it’s your job to (a) get the students engaged and talking, (b) get them to understand the material being covered, and, most importantly, (c) get them to think critically, for themselves, about that material.

In that situation, you don’t go in guns blazing and force people to obey your will. Why would you? That approach would be completely irrelevant, not to mention unethical and pointless.

You also (if you know what you’re doing at all) don’t go in there and tell the students what’s what. Even when you’re really, really sure you know what’s what. Even when you’re feeling frustrated with the impossibility of the task in front of you and you are incredibly tempted to just skip to the end and tell them the answers already. Tempting as that can sometimes be, you do know it would be a hollow and temporary victory, because they wouldn’t really take anything in, and telling people what to think is not your job.

You also don’t go into that classroom with a goal of changing the world. You don’t even aim to turn those students in that room into scholars. Most of them probably couldn’t get there, and more importantly, there’s no reason for them to get there. They have other things they need to do, and it’s your job to help them do that. You’re not making clones of yourself. You’re giving people the knowledge and skills they need to define and pursue their own goals.

You aim when you go into that room to move the students forward from where they were when you got them.

You leave your own ideologies and convictions behind when you walk into the classroom, because you know they’ll just get in the way of the process at best, and completely undermine your ability to do your job at worst.

You don’t preach to the choir. You work with ALL the students. Even the ones who seem hopelessly behind.

With experience, you learn that students can always surprise you. All of them. Some who seem really with the program can turn out to be putting on a show for a grade, without really understanding or caring about the material or learning in general. Some who seem like they don’t even belong in that room will work their butts off and ultimately make you feel stupid and lazy with their hard work and original insights. You never know. And it’s not your job to guess, or care, what each student is ultimately capable of. You take them as you get them, and you work to move them forward from wherever they are.

Sometimes, as part of that work, you play devil’s advocate. You find yourself saying things you don’t remotely believe, and you actually try to put conviction into your face and voice because you’re so focused on seeing the lightbulb go on in the students’ eyes, the expression on their faces that means they’re thinking, really thinking.

You willingly give up a lot of control of the classroom — control you know how to use, and would on some level love to use — because you know from experience that you can’t do the thinking and acting and learning for them. You can only push, facilitate, re-direct. They’ve got to do the thing for themselves, ultimately, or it won’t stick.

And then, after a semester of all this hard work, which you do pretty darn selflessly because you really — REALLY! — believe in the inherent value of the process…at the end of the semester, after you’ve turned in your grades, you get your evaluations. And you find out just how many students blame you for their own unwillingness to invest themselves in learning. In other words, you find out that their failures will be billed as your failures, while their successes are their own.

What does all this have to do with Obama? I think his personal convictions are so hard to read because as a representative of the people, whose job is to govern, he actually tries to represent the people, and part of doing that well is putting your more idiosyncratic attitudes out of even your own mind.

I think he listens to all sides — even the sides that hate him irrationally and eternally — because that’s his job. Like it or not.

I think he’s not saving the world because, well, first, he can’t, and second, because he realizes that. I really doubt he sets his sights that high. And I would be astounded if he looks on politics as the epic battle between Democrats and Republicans that it is often portrayed to be by the media. He’s a problem-solving type of thinker rather than an ideological type — that’s been widely observed and is after all pretty characteristic of many post-Boomer Americans — but more than that he’s a professor type. That means focusing on taking what you’re given and moving it forward, doggedly, semester after semester. That’s very different from viewing your job as a matter of wins and losses.

A professor is rarely confrontational toward students, except perhaps temporarily to make a point. Most professors genuinely don’t even feel confrontational about their students’ ideas — if you get into this gig at all, you care pretty strongly about the integrity of the process. Truth, to an academic, should be not this answer or that answer to a problem (there are rarely neat and final answers to the questions asked at college level and beyond), but the rigorously honest pursuit of a solution, using all available tools. To do that, you have to listen to everyone, even the ones who seem nuts. They are the most likely, in fact, in my classroom experience, to insert something really innovative into the conversation (though often unintentionally), and they are often the ones to name the elephant in the room. (Naming the elephant in the room is something most academics welcome; most politicians are the ones putting curtains up around the elephant.) Even the students who don’t actually contribute have to be included in the process, because otherwise the process loses all meaning and integrity.

In the Michael Lewis profile, Obama is quoted saying some remarkably professorial things. In a passage about the writing of Obama’s Nobel speech, for example, he is depicted as instructing his speechwriters to put together his favorite authors’ ideas on war — he gathers his sources first, in other words, like an academic would — and he apparently explained to his interviewer that, “[h]ere it wasn’t just that I needed to make a new argument. It was that I wanted to make an argument that didn’t allow either side to feel too comfortable.”

That’s how you lead a classroom discussion. That’s how you compose an argument that gets people to think, instead of telling them what to think.

Then Obama explained his goals for the speech: “What I had to do is describe a notion of a just war. But also acknowledge that the very notion of a just war can lead you into some dark places. And so you can’t be complacent in labeling something just. You need to constantly ask yourself questions.”

This is professorialism at its best. Nothing is black and white. The devil is in the details. Caution. Never get ahead of your evidence. Always. Ask. Questions.

Narrating Obama’s decision not to approve a no-fly zone over Libya that was intended to give an appearance of protecting innocent civilians but could not possibly have helped, Lewis quotes Obama as saying, “I know that I’m definitely not doing a no-fly zone. Because I think it’s just a show to protect backsides, politically.” This stance could read as noble. A president who puts morality (and practicality) above politics. It could be that. It could also be the overwhelming impatience of the true scholar with anything that confuses the fundamentals: the questions, evidence, and reasoning that can solve problems. Arguing about how this or that method of problem-solving looks — or finding ways to avoid the problem altogether — is a waste of time when one could actually be coming up with an answer. Even if it’s not ultimately a satisfying answer, at least you tried, and learned something from the effort that may help future efforts. That’s the pursuit of knowledge.

This professorial quality implies a few things. Most importantly, it implies that Obama believes in and is animated more by the process of governing democratically than perhaps any general policy principle. Compare this to his record, and I think you find a lot of consistency, especially in places where allegiance to party platform or political expediency is sometimes absent. I don’t want to imply that Obama’s professorial tendencies define him completely. None of us are defined by anything so simple. There are no doubt many sides to his character and his decision-making, as there are for all of us. But I think this one part is often unrecognized. I also don’t make any claims about whether these tendencies are good, great, suspect, or terrible in a President of the United States. Like any good prof, I’m just throwing it out there, to see if it makes people think.

Posted in Profession, Random

Rules


Sometimes my students get a little too hung up on rules when it comes to writing essays. Mind you, some rules are vital—if your writing is ungrammatical, readers will have trouble following what you are saying. Other rules (which are really more like guidelines) relate to structure and flow, and they also help readers to understand you. Then there are still other rules, which don’t actually contribute much to the reader’s ability to understand and remember your text. These rules aren’t so important. The trick is knowing the difference.

Mind you, there are individual readers and—cough—the occasional rogue professor who care very deeply about this third category of rules, and if you’re writing for one of those people you might as well suck it up and follow those rules, too. But you should still know the reasoning behind them, and why in other contexts it might be okay to ignore them.

You should never use “I” in an academic essay.

Often, when a teacher tells you not to use “I,” or not to use it so much, you can safely interpret this as “I need to give more substance to my opinions by inserting more reasoning and evidence, and possibly more sources, into my essay.” In other words, what this teacher often really means is that you’re asking the reader to believe something just because you said it was so – your essay is full of phrases like “I think…” and “I believe…”.

In other instances, students themselves or their teachers may fear that using “I” makes an essay ‘sound too subjective’ no matter how it is used. The truth is, if you are a human being, authoring anything, that thing you author cannot be truly objective. There is a difference between saying, “John’s a fraud,” and “I think John’s a fraud,” and it is intellectually honest to differentiate for your reader what is your opinion or reasoned conclusion, and what is taken from the sources you’re citing. In these cases, using “I” is advisable.

However, it is true that some writers use phrases like “I think” more often than is required by the content – it becomes a kind of nervous tic. In this case, many of the ‘I’s can be safely eliminated or changed.

And remember that you can always find another way to convey that an idea is yours, to keep the ‘I’s from getting excessive or to please a professor who, for whatever reasons, particularly despises the presence of the word ‘I’ (though if you dare you might suggest they try searching it on Google Scholar, to see just how prevalent it is in scholarly journals from every field, including the hard sciences).

Note: Years ago, when scholars were perhaps not quite so resigned to their subjectivity, it was common to assume a sort of royal ‘we’ even when a paper had only one author. This is now frowned upon as misleading. The age of intellectual property has trumped the age of positivism! Nowadays, when an author uses “we” it generally refers to the writer and readers together, as in, “now we turn to a new subject.” Some people like this construction (it makes it easier to avoid the passive voice and nominalizations), and others dislike it (they find the intrusion of writer and reader into the text a distraction from the subject at hand). It’s largely a matter of taste and context.

You should never use the passive voice in an academic essay.

You should avoid split infinitives.

You should always have exactly three main points of support.

Always put your thesis at the beginning.

The answer to all these imperatives is, “Actually, it depends.” If there is any general rule that always applies, it is that a writer should be aware of her purposes and her audience, and suit her structure, style, and language to the particular purposes and audience of a given piece of writing.

The passive voice exists in English because it can be useful – not just to hide the agent of an action (as in, “mistakes were made”), but also to shift the agent to the end of a sentence, where it may be more convenient for reasons of emphasis or transition (such as “mistakes were made by the President, who is now facing impeachment”).

The rule against splitting infinitives is borrowed from Latin, where the infinitive is a single word and cannot be split at all. But English works quite differently, and sometimes, in English, not splitting the infinitive can cause confusion. So whether you should do it or not depends on the context.

Grammar Girl has a great guide to splitting infinitives and avoiding them.

The five-paragraph essay model works very well when you’re writing an essay that logically has only three major points of support and only needs to be five paragraphs long. However, for the vast majority of essays that don’t fall into that category, you will have to explore more complicated models.

Putting the thesis at the beginning of an essay has many strong advantages, and seems to work best in any case where the reader is approaching your essay for enlightenment rather than for entertainment or pleasure (you don’t, after all, want to keep your grader in suspense about whether you have something worthwhile to say!). But of course, there are exceptions, and you should always consider the demands of a particular instance when you make such choices.

Often, academic writers put a sort of provisional thesis at the beginning, which tells the reader what to expect without going into detail. This is sufficient to contextualize the information to follow, and fulfills the purpose of assuring the reader that you do, indeed, have a resolution to the problem you’ve set up (that is, that you’re a competent and responsible writer). Then, a more elaborate and specific thesis is stated at the end, incorporating terms and claims that have been made clear in the body of the essay but which were, perhaps, too new to the reader to use effectively in the first paragraph.

 

Update: See this nice piece from the Smithsonian on rules that aren’t really rules.

Posted in Teaching, Writing

Bias

Stereoscopic Views, from the Robert N. Dennis collection, via Wikimedia Commons.

When historians read a text, we are trained to filter what it tells us through an understanding of who wrote it, with what purposes and with what intended audience. Author, audience, and purpose are all important factors in shaping the meaning of a text, so identifying these factors can help us reconstruct what a text meant to its author, and to the people who read it when it was written. Identifying these factors can also help us to figure out what might be relevant but missing from a text (something the author may not have been aware of, may not have thought was important, or even something the author may have wanted to deliberately suppress).

In college history classrooms, professors ask students to practice this skill, most commonly by assigning “primary source interpretation” essays, where the student takes a historical document (or two) and tries to analyze it (or them) in the way I just described.

Where many students go wrong in this process is confusing bias with point of view or reasoned opinion.

I’m probably particularly attuned to this mistake because I spend so much time grading primary source essays, but I also see it constantly from talking heads on TV, in written media, and on internet forums. It’s a really insidious problem in our current political climate, in my view, so I offer this version of a handout I use in classes (originally relating only to writing primary source essays).

Bias is a form of prejudice. It refers to opinions or assumptions that one holds despite the evidence, or willfully in the absence of evidence.

Point of view refers to the fact that no one person can be aware of everything all at once. We all see the world from our own particular perspective.

It is possible (though difficult) to examine an issue without bias, but everyone always has a point of view. Your point of view is the way your previous experience, skills, inclinations, attention and interest limit your experience of the world.

Reasoned opinion is a conclusion, or claim, that a person comes to after examining and reasoning through relevant evidence. This is very different from bias (because it is based on objective reality — evidence and reasoning) and from point of view (because the exercise of reasoning through evidence is the practice of deliberately expanding your personal point of view to include evidence from others’ points of view, or evidence gathered through experimental observation).

When reading a historical text — or when you want to better understand any other text — you should look for bias, point of view, and reasoned opinion. But it is crucial to distinguish between these, because we can draw different interpretive conclusions about an author’s claims based on whether the author stated a given claim in willful contradiction of relevant evidence, merely out of an inability to see or comprehend new information, or lack of access to other evidence, or as a reasoned conclusion drawn directly from all available evidence.

Common mistakes students (and others!) make:

1. Looking for obvious biases (prejudices), but failing to look for “honest” limits to an author’s point of view.

2. Noting limits or absences and attributing these to point of view without first asking whether the author’s point of view is so limited because it rests on assumptions born of bias.

The way to avoid this mistake is, after identifying limits or absences in a given text, to ask what underlying assumptions about the world led the author to “miss” these key points. How do those assumptions relate to the evidence available to the author?

3. Mistaking reasoned opinion based on evidence for mere bias. If an author seems to “like” a position or be “passionate” about it, they could be biased, or they may be enthusiastic about a conclusion simply because it is an excellent explanation of all known facts. Find out which it is by examining the evidence on which the author bases their conclusion.

Relative enthusiasm, or lack of enthusiasm, tells you nothing by itself.

Message to take home: Always look to the evidence. When someone makes a claim, do they follow it with evidence? Is it good evidence? Is it enough evidence? What part of the claim is an assumption (i.e., not based on evidence)? Some assumptions are reasonable (one has to start somewhere), some seem arbitrary (a bad sign!).

 

Update: Related reading

Posted in History, Random, Teaching | Tagged , , | Leave a comment

Objectivity

Via Wikimedia Commons

Many students come to college believing that academic writing is objective writing, or is supposed to be, and if it’s not, it’s “biased,” which is another way of saying “bad” or “useless.”

There is no such thing as objective writing.

If something is authored, then that human author’s stamp is somehow on the material, if only in the selection and organization of it (even texts authored by computer are ultimately products of the software, which was engineered by a human being, who made choices and set priorities!).

The best we can do, as writers, is to indicate to the reader explicitly what in our texts comes out of our own heads, what is the opinion of other authors cited in our own work, and what is reasoned conclusion or a direct report of data (and, with the latter, to explain how we derived our data and chose what to share).

Best of all, we can identify and examine our own assumptions about our material, and when appropriate tell our readers what these assumptions are. We can mention that there are other factors or opinions which we have chosen not to go into, and we can say why. (Often, such things are legitimately beyond the scope of your essay, but by telling your reader you are aware that these other factors exist and have made a conscious decision to exclude them — for reasons you briefly explain — then you allow them to trust that you are, in fact, in control of your essay and have done your research. Going through these steps makes your reader more likely to trust you with the main points of your argument, as well.)

In other words, the best we can do as subjective, human authors is to acknowledge our subjectivity, to note our biases and assumptions and to factor them explicitly into our writing. Attempting the impossible task of writing objectively can be more misleading than accepting our bias and moving on.

Yet I often see student papers watered down to the point where no analysis is left at all — in some cases, I know the student had interesting and relevant ideas about the material, and I have asked why they weren’t on the page. This is when I hear, “I thought that’s just my opinion, so it doesn’t belong in the paper.”

Analysis is a form of opinion — a very specific form that is based on evidence, in which you explain exactly how you reasoned from your evidence to form your opinion. Analysis is what we want.

Posted in Teaching, Writing | Tagged | Leave a comment

Why you shouldn’t feel bad you didn’t go for (or finish) the Ph.D.

By WMAQ-TV, Chicago, via Wikimedia Commons

Sometimes when I tell people what I do for a living, they tell me they almost got a Ph.D. Sometimes, they say this unapologetically, just as a factoid of interest, but unfortunately sometimes it’s said with a direct or implied apology, and some sort of excuse. As if an explanation is required.

A Ph.D. degree is not the ultimate IQ test.

A Ph.D. is nothing more nor less than a degree required for a particular range of professions (mainly, teaching at the university level). It’s a very narrow degree, and one that is very rarely required. So why on earth would so many people feel bad for not getting one? If you don’t need or want a Ph.D., then you shouldn’t waste your time and money getting one!

Contrary to what is apparently popular belief, a Ph.D. doesn’t test intelligence. True, you probably need to have at least average intelligence to get admitted to any respectable Ph.D. program. But succeeding in a Ph.D. program really depends more on having the drive to complete that particular degree in that particular field than on anything else.

It’s not like intelligence and specialized knowledge are remotely exclusive to people with Ph.D.s. We all experience that in people we meet every day. Yet some people–especially those who are used to doing very well in school–internalize the idea that because they are smart, their success should be defined by achieving the highest possible degree. Well, no, not if that degree is only suitable for one narrow profession, which you might not want.

The people I know who got Ph.D.s (self included, of course) finished the degree mainly because of three factors.

The first and most important factor is that they were obsessed with their field. Some people do finish the degree and decide not to actually practice in the field, but pretty much always, if they finished, they at least had some kind of obsessive devotion to the subject. Sometimes it’s a healthy devotion, occasionally it borders on the pathological, but in any case it’s pretty extreme. Most people just aren’t that into—say—early nineteenth-century Russian women’s mysticism. And that’s okay. We need people with these kinds of interests, but we don’t need LOTS of people with these kinds of interests!

The second factor is that most people I know who finished Ph.D.s aren’t really good at much of anything else. I know that’s true for me. There are other things I can do if I must, but I’m not really very good at them. I’m quite good at researching and teaching the history of Russia, and to a lesser degree, Europe and the western world. Other stuff? I’m average at best, and with most things I’m completely incompetent. I didn’t just end up in a Ph.D. program because I’m pretty smart. Being pretty smart can land you in a lot of different places. I ended up in a Ph.D. program mainly because I wrote a quite decent essay about the upbringing of early nineteenth-century Russian heirs to the throne that had a fairly original argument in it when I was only 22. Not that many people can do that, or more accurately, very few people would want to bother to do that. But, the vast majority of the population can calculate interest rates, change a tire, manage a multi-line phone, and do a lot of other things I’ve singularly failed at (despite numerous sincere and concerted attempts!). We’ve all got our niches.

The third factor I’ve seen that separates those who finish Ph.D. programs from those who leave them or don’t attempt them, is that those who finish tend to have some kind of stubborn, perhaps even stupid, determination to finish no matter what, just because. People who finish psychologically have to finish. Those who do not finish often do not need to finish. And may very well be much healthier and better off for it. Have you read my posts about what academia is really like and what it pays, even when you’re lucky enough to get a tenure-track job?

While I’m talking about those who have the stubborn drive to finish, I would like to mention another phenomenon I’ve seen many times.

In the home stretch of finishing the Ph.D. dissertation, when it’s not quite almost-done but too much done to quit, everyone I know has had a moment of crisis when they decide that they absolutely must quit. It’s too much, it can’t be done, the person in question feels like an impostor, the person in question never really wanted it anyway, etc.

It’s important to distinguish this very typical last-minute crisis of the almost-finished Ph.D. from the more serious existential crises of an earlier-stage graduate student who truly is uncertain about whether the degree is worth pursuing. When you’ve got multiple chapters of the dissertation written (even in draft form), you’re probably one of the hopeless ones who can’t really do anything else, and you may as well finish, since you’re so close. Just know that this crisis is completely typical. But if you’re not there yet and you really don’t feel motivated to get there, ask yourself why you think you should pursue a Ph.D.

If the only honest answer you can give yourself is that you can, because you’re smart enough, then maybe you shouldn’t bother. Plenty of people are smart enough to complete a Ph.D. Only a select few of us are stupid enough to actually follow through, and only because it’s the only thing we can and want to do. If that’s not you, then unburden yourself of the guilt and the expectation that a Ph.D. equals “what smart people do.” A Ph.D. is usually a ticket to low pay and constant work. If you can think of an alternative you like better, by all means, get out.

(If you can’t think of an alternative and love what you do so much you’re willing to live on mac-n-cheese so you can spend all your time reading obscure monographs on the subject that makes your heart go pitter-patter, well, hello, kindred spirit.)

 

Further Reading: On Being Miserable in Grad School

Posted in GradSchool | Tagged , , | Leave a comment

What is a Ph.D., Really? And What Is It Good For?

I’ve gotten the impression that many people think a Ph.D. program is like a master’s program, but longer. That you just keep taking courses—like a million of them—and then eventually you write another really big paper, and you’re done. This is kind of accurate, but also wrong in all the most important ways. I’m sure these misconceptions are partly due to the fact that there aren’t really very many movies about people in Ph.D. programs, unlike, say, law school or med school. Unless you count the show Alias, in which Jennifer Garner pretended to be a Ph.D. student by walking around saying ridiculously unlikely things and never doing any work at all. But you can’t really blame Hollywood—people in Ph.D. programs aren’t really very exciting to watch, since they mostly hunch in front of computers for days and weeks on end.

Studies of Academics, by John Hamilton Mortimer (1740–1779). Via Wikimedia Commons.

NOTE: Everything that follows is really about programs in the humanities and social sciences, because that’s what I know. I don’t know what programs in the STEM (science, technology, engineering and mathematics) fields are like, but I picture a lot of labs. I’m probably mostly wrong about that. The only thing I’m sure of is that nothing about STEM Ph.D. education resembles anything seen on Numb3rs or Bones.

So, in the U.S., most Ph.D. programs are actually combined with MA programs (not so in Europe and Canada), though if you already have an MA when you enter the Ph.D. program they’ll usually grant you advanced standing, which typically allows you to skip a year of coursework.

But a standard U.S. MA/Ph.D. program in the humanities and social sciences generally begins with the MA portion. For the MA degree, you usually take 1 to 2 years of graduate courses (these are usually the only courses you will ever take in the whole program), and then write a thesis. In history, the MA thesis is usually envisioned as about the size, type, and quality of a publishable article. Ideally. But publishable articles usually max out at 30 pages, and most real MA theses are actually about 50 to 150 pages. So the whole article-model thing is a bit misleading. But the MA thesis should, like an article, incorporate original primary source research and original analysis (and, unlike undergraduate essays, it needs to be original not just to the writer but original in the sense that no one has published that argument before).

I should mention here that MA courses are not like undergraduate courses, and MA-level courses in a Ph.D.-granting institution usually vary quite a bit, too, from MA-level courses at an MA-only institution. MA courses involve more reading and writing than at the undergraduate level, and in history it’s often true that you’ll read mostly secondary sources in a grad class, where you would read mostly primary and tertiary sources in undergrad. But the main difference is in the kind of work you’re expected to produce. Graduate work assumes you have basic skills and knowledge in the field, and asks you to think critically about how knowledge is produced and to practice more advanced skills, like synthesizing larger amounts of material, and dealing with more difficult primary sources, often in foreign languages.

After the MA thesis, some people decide they don’t want to go farther, and they can leave the program with a “terminal MA.” At least they got something for their time, is the expectation. But most students continue on, sometimes after a review of their progress by their advisor or something like that.

The next stage is often, though not always, marked by the M.Phil. degree. I’ll confess right here that I didn’t know what the heck an M.Phil. degree was even after I got one, so it’s not at all surprising that most people who aren’t in Ph.D. programs have no idea. It’s sometimes referred to as a “research masters,” and I’ve been told that it derives from the British model, where you can (I believe—someone correct me if I’m wrong) get an MA through graduate coursework or an M.Phil. through independent research. Except this makes absolutely no sense in the U.S. context, where the M.A. signifies that you completed coursework and wrote an independent thesis, and the M.Phil. is, in the programs I’m familiar with, a prize you get for passing oral exams.

Oral exams, or comprehensive exams as they are often known (since they aren’t always oral), mark the transition between coursework and going out on your own as a sort of apprentice scholar. Comprehensive exams require the graduate student to demonstrate comprehensive knowledge of their chosen field, and they’re usually described as preparation and qualification for teaching (as opposed to research, though having this broad background is essential to doing research, too). The format and style of these exams vary a lot, but usually you have from six months to a year to study, and then you are examined in written or oral form, or some combination thereof.

As an example, as a specialist in Russian history, my oral exams had to cover four fields, three “major” and one “minor,” and at least one had to be “outside” (of Russia). For a major field you try to cover pretty much everything, and for a minor field you designate some set of themes you’ll cover, that are hopefully complementary to your major fields. My three major fields were Russian history to 1917, Russian history 1917 to the present, and East Central European history from 1750 to the present. My minor field covered a few themes in French and British history from 1750 to 1850, which I chose because it was helpful comparative background for the kind of research I planned to do on Russia in that period. The major fields were chosen to cover all the material I hoped to be expected to teach.

I had an advisor in each field who was a specialist, and those people helped me to create a list of about 100 books for each major field and 50 books for the minor field that represented a comprehensive survey of the scholarship to date (you examine a far greater number of books to start with, and then narrow it down to the final list that you study closely). Then I spent a year reading them all, and taking detailed notes about the major analytical questions, themes, and problems that I saw in each field. This process was a way of synthesizing how each field as a whole has developed.

The exam itself was oral in my case, meaning I met with my four advisors for 2 hours while they quizzed me. These kinds of exams generally aren’t so much about the specific material covered in each book, but about the student’s ability to synthesize these major arguments and see how the individual pieces fit into the whole.

Once you pass your comprehensive exams, you get the M.Phil. degree.

At some point before this time, you probably also have to pass some language exams. Historians tend to need to pass several, though those studying American history may need only one language. For a Europeanist historian, you usually need to pass at least three language exams, and in some fields you may need as many as five. These exams are usually written translation only, with a dictionary, because those are the skills you will need to handle foreign sources in your research. In my case I needed to pass exams in Russian, German and French. At the exam we were given passages in the language at hand that represented the kind of source a historian would read—often an analytical piece written in, say, the early nineteenth century. We had to translate them into English in a way that was both scrupulously accurate and readable.

After you’ve passed all your exams, the next step is the dissertation prospectus. This is a proposal outlining what your final, independent research project will be. The dissertation is meant to be comparable to a publishable book, and in this case it usually really is that, because in order to get a teaching and research job, in many fields you’ll have to publish a book within the first few years, and the dissertation is often the first draft, in a way, of this book. It must be based on original research and make an original argument, and it must be a significant contribution to your field of study (more so than an MA thesis).

So, for the proposal, you of course need to have some idea of what you want to research, and then you spend some time doing the necessary background reading and finding out, in very practical terms, what you will need to do to complete the dissertation.

For a Europeanist historian like me, this mainly means finding out what kind of archival sources exist, where they are, roughly what they might be able to tell you, etc. When your archives are located outside the U.S., you need to start applying for funding that will pay for your travel overseas, as well. Other social scientists need to plan and organize different kinds of research models, exploring possible methodologies, preparing interview questions and so on. Some other social scientists also travel, for “field work,” where they observe or interview subjects in a given location, but others work with computer modeling or published sources, etc.

In any event, all this planning and then writing up a detailed proposal about what your research and the dissertation will look like often takes about a year. Then you defend your proposal before a faculty committee of specialists in the appropriate fields, both from within your own university and from outside it. They ask you lots of pointed questions to try to make sure your plans are realistic and your thinking is coherent and reasonable.

Once you pass your proposal defense, you are “ABD.” ABD is not an official designation, but it is very commonly used—it stands for “all but dissertation.” It means you’ve completed all the requirements of the program except for writing and defending the dissertation. ABD is a somewhat ironic designation, because it sounds like you’re practically done, except that the dissertation is really the heart and soul of any Ph.D. program, and all the rest is, in a way, just a lead-up to the Real Show.

This is also the stage at which the time to completion varies most widely, which is why, when you ask “how long does your program take?” or “when will you finish?”, most Ph.D. students can’t answer, and many will squirm miserably at the very question.

The dissertation stage takes as long as it takes.

In some fields, if you don’t have to travel and all your sources are readily available, you can go straight from the prospectus defense to “writing up” and be done in about 2 years, usually. Since coursework is often 2 years, plus 6 months to 1 year for the exams and another 6 months to 1 year for the prospectus, the shortest Ph.D. program is generally about 5 to 6 years of post-graduate work (again, this can vary significantly in the STEM fields).

But if your research requires significant travel, that part alone can take at least one full year before you can even begin to “write up.” That typically makes 6 to 7 years a bare minimum for anyone studying the history of a place that is not local to their university, for example. Those of us who travel abroad for extensive periods, often to multiple countries and often dealing with sources in multiple languages, frequently need extra time for translation, and sometimes for language study when taking on sources in a less commonly taught language, like, say, Turkish or Georgian, which you often have to go abroad to study at all. And once you’ve got all your sources (and, if necessary, translated them and/or used computer modeling or database software to manipulate or analyze your data), then you can finally begin to write all this information into something coherent. This last phase can take any amount of time, depending on how you write.

By this stage, any graduate student will have written many scholarly papers, but the dissertation is really fundamentally different because of its scale. A book-length academic project requires extraordinary information management just to keep all the data straight and accurate, and then the bigger scope of the arguments also requires a more complex engagement with larger numbers of secondary works, and more complex thinking, to communicate clearly about something so comprehensive, without skimping on any of the nuances. It’s bloody hard work. I’ve never seen anyone do it in less than a year, and I’m very impressed by 2 years. Many people take more like 3 or 4, especially if they’re teaching at the same time. Add in the fact that most graduate students at this stage are in their late 20s or early 30s, so that many are getting married and starting families (if they can manage it financially on a scant grad student stipend) and all that can add further delay.

I should also mention that your guide through this final stage of dissertation researching and writing is your advisor, someone who has probably guided your progress from the beginning of the program, but who now takes on primary responsibility for keeping you on track and, hopefully, catching you before you make any really awful mistakes. Over the course of the whole Ph.D. program you are moving farther and farther away from the student-teacher model of education. At first you take courses, but then with the MA thesis, the exams, the proposal, and finally the dissertation you work more and more on your own at each stage, until by the time you finish your dissertation you are most likely the world’s foremost expert on your topic (since it was chosen to be an original contribution to the field), and you have gradually—sometimes somewhat uncomfortably—transitioned from being a student to being an independent scholar and a colleague to the other scholars in your discipline.

So far I’ve only briefly mentioned teaching, but that’s the one other common part of a Ph.D. program. Some programs require no teaching at all, but that is becoming downright rare these days. My program required, as part of its funding package, three years of being a teaching assistant. TAs in history led discussion sections, gave guest lectures occasionally, and did most of the grading. This is a fairly common scenario. Often, after the TA requirement is fulfilled (usually in the second, third, and fourth years of the program), advanced-stage graduate students will apply to teach as instructors, where they lead their own courses. Sometimes a lucky grad student can create the course of their choice, but more often they teach the freshman survey courses, or required writing courses, and that sort of thing.

When I started my program, there was no formal guidance whatsoever given to grad students on how to teach. We were just thrown into classrooms to figure it out. From the university’s point of view, we were just cheap instructors, and it was up to the individual faculty members we worked with as TAs to give us guidance, advice, or instruction—or not—entirely at their discretion. In my experience some faculty members took this responsibility very seriously, others less so. While I was in my program, however, I was part of a collective effort on the part of grad students to create our own teaching training program, and our program was eventually adopted by the whole graduate school. Right around that time, in the early 2000s, there was a general consensus that teacher training needed to be integrated into graduate programs, and that is increasingly becoming the norm today, thankfully.

Right now, because of the miserable state of the academic job market (with the exception of a very few fields, there are many times more qualified candidates than there are jobs available), it’s more difficult than ever to get any kind of academic employment with a Ph.D. from anything but a top-tier school (which schools are top-tier varies by field). There has been criticism from the American Historical Association in the last decade of programs that either offer too many doctoral degrees, or programs that are third or fourth-tier yet still offer doctoral degrees to paying students, knowing that they will very likely never be employed in their fields. Basically, if you have to pay to go to a Ph.D. program, you probably shouldn’t go, because the reputable ones are now under considerable pressure not to admit students without funding (there are occasional exceptions—sometimes you are expected to pay tuition the first year with the expectation that if you perform satisfactorily funding will be granted for subsequent years, but this can sometimes be fishy, too—do your research).

Most recently, the AHA is recommending that programs incorporate training in so-called public history, and other alternative career paths for Ph.D.s, into their programs. Public history includes museum work, community outreach, documentary filmmaking, etc. Other alternative career paths mainly include government and corporate research or think tanks. There is some resistance to this pressure—many programs argue that they are not equipped to train students in these directions, and others point out that the job market is little better in any of these alternative fields. But the overall trend is for fewer, more elite programs to offer degrees to fewer people (with better funding), and to diversify the training as much as possible.

On the whole, I think you can see that a Ph.D. is a unique education, encompassing tremendous breadth and depth, and is more like a professional apprenticeship than the model of being a student forever that many people imagine. It probably requires more drive and stubbornness and dogged work than it does pure brain power, and anyone who completes the process very likely has an extraordinary ability to process information (because at bottom that’s what it’s all about). There are plenty of things a Ph.D. is not remotely useful for, but what it does, it does well.

 

Further Reading: On Being Miserable in Grad School

Posted in GradSchool, Profession | Tagged | Leave a comment

Should you go to the best school you can get into?

Harvard Gate. Not the only way in to the educated life. (Image via Wikimedia Commons)

Students ask me this question a lot, usually about graduate programs, and sometimes I get asked about it with regard to choosing an undergraduate program as well. Especially in these days of astronomical tuition costs and uncertain job market potential, it’s important for students to really think through the cost/benefit ratio of a program before committing (with the caveat, of course, that education is much more than a ticket to a job!)

My answer to this question is the same answer I (like most academics) always give to almost every question:

It depends.

This is why academics annoy people, I know. But really, the answer is complicated, and entirely depends on factors specific to each applicant.

Advice for everyone:

In terms of pure quality of expertise, the faculty are broadly comparable at any institution of higher education in the U.S., since for the last several decades institutions have all hired from the same overpopulated pool of people with Ph.D.s from a small circle of prestigious graduate schools.

But there can be very big differences in, first, how much one-on-one interaction you get with faculty, and, second, the culture of the student body—how focused students are, how motivated, and how stimulating they would be for you. These differences don’t correlate with the superficial prestige of a given institution—schools at all levels vary widely in these terms.

In many cases, you can get an outstanding education at relatively low cost at a public institution, and you will have missed nothing by bypassing Harvard.

However, in some cases the cost-benefit ratio is different: what you personally can achieve with a more prestigious degree may justify a higher investment in obtaining the degree.

And sometimes a very expensive private institution may actually be cheaper than a public one if they want you badly enough to pay you to come!

In short, making the best choice for you depends on doing a lot of very specific research. And you can improve your range of choices vastly by preparing well: do your best work at every level of education, engage thoroughly in your courses, and talk with faculty and professionals in the fields that interest you. Get as much information as you can before making your decision.

Advice specific to aspiring undergraduates:

The answer to the question of which school you should go to depends on what you want to get out of your degree, on your personality, and on the field you will study (which of course you may not know yet!). But the short answer is that making the right choice for you needs to be a much, much more complicated reckoning than just U.S. News and World Report school rankings (which actually tell you nothing at all of use).

At what kind of school are you most likely to do the best work you’re capable of?

A small, residential college that feels like a family?
A bustling, huge research school that gets your juices flowing?
A place where you’re around students that are a lot like you?
A really diverse group?
People who will constantly challenge you?
A place where you’re the “big fish” and can feel confident?

How important is the name on the diploma for the specific kinds of jobs you want (and how likely are you to stick with that goal)?

This consideration necessarily involves taking a big risk, because you may very well change your mind about a career goal. But in any case, it’s worthwhile to do careful research about several prospective careers that interest you. If you can, interview people who have the kinds of jobs you want, and ask what level of education is required, what kind of GPA is expected, how much employers care about what kind of school you went to, and many other questions too, about salary, job satisfaction, rate of advancement, benefits, etc.

How important will it be to your career goals to have one-on-one faculty mentoring?

Will your future employability rest on recommendation letters and/or connections, or on your test scores and degree from a post-graduate professional school?

What do you want from your education besides employability?

College should also enrich your life and your mind in ways that cannot be measured in dollar signs. What kind of enrichment do you most want and need?

Do your horizons need to be broadened by a place different from what you’re used to?

Do you need a really rigorous environment where the “life of the mind” is the primary focus?

Do you need access to lots of activities to siphon off all your excess energy, so you can focus?

Do you need a comprehensive general education program that forces you to explore fields of study you tend to avoid when left to your own devices?

Or do you need/want to specialize very intensely (think really carefully about that one — what if you change your mind? — would you still have options?)

Find out exactly what the financial picture would be for you if you went to each of the prospective institutions you’re thinking about.

Don’t just look at the sticker price listed on websites! The most expensive private schools also tend to offer the most aid, and more often in grants than loans, compared to other schools with smaller endowments. Do all the calculations (including room and board and living expenses, taking into account the cost of living in different areas) for each school. If you’d need loans, find out how much your payments would be after graduation, the interest rate, and how long it would take you to pay the loans off assuming an average starting salary for the very specifically defined types of jobs you hope to get. You may have to go through the whole process of applying and filling out the FAFSA before you’ll know the real numbers for each school, and it may be worth applying to one or two schools you think you can’t afford, to see what they can offer you.
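
If it helps to make this concrete, here is a minimal back-of-the-envelope sketch (in Python) of the standard fixed-rate loan payment formula. The loan amount, interest rate, and repayment term below are invented placeholders, not figures from any real program or lender, so substitute the numbers from each school’s actual offer.

    # A rough sketch of the standard amortization formula for a fixed-rate loan.
    # All figures below are hypothetical placeholders, not real aid numbers.

    def monthly_payment(principal, annual_rate, years):
        """Fixed monthly payment for a fully amortizing loan."""
        r = annual_rate / 12              # monthly interest rate
        n = years * 12                    # total number of payments
        if r == 0:
            return principal / n
        return principal * r / (1 - (1 + r) ** -n)

    loan = 40_000    # hypothetical amount borrowed
    rate = 0.06      # hypothetical 6% annual interest
    term = 10        # a common 10-year repayment period

    payment = monthly_payment(loan, rate, term)
    print(f"Monthly payment: ${payment:,.2f}")                             # about $444
    print(f"Total repaid over {term} years: ${payment * 12 * term:,.2f}")  # roughly $53,300

Comparing that monthly figure against a realistic starting salary for the jobs you actually want tells you far more than the sticker price alone.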

Advice for aspiring graduate students:

Again, the answer here depends on your field and prospective employment after graduation. But at this level in certain cases it probably matters more that you go to a highly ranked school for your subject than it does in undergrad. In other cases, it matters even less! Read on.

First, a given institution can be top-tier for one degree program, second-tier for another, and third-tier for still another program. And Ivy League schools, or other top schools everyone has heard of like Stanford, Berkeley, and Chicago, are not automatically the “best” schools for a given field of study. You need more specific information. The best people to ask are probably recent graduates from programs you’re interested in, who are now employed in the kinds of work you want.

For master’s-level work, the prestige of the degree-granting institution is less likely to matter than for other graduate degrees. Sometimes, if you’re already working in a given field, you can get tuition assistance from your employer for a local graduate degree. Look into this before starting a program. And, if you wish to work in a given location, local programs may make you more employable than distant programs that technically rank higher.

In master’s and doctoral programs in the liberal arts, you’re more likely to work with a specific advisor, and having a great advisor who actively supports your work and is widely respected in the field may be more important than the prestige of the institution you attend. This is something you should talk over in very specific terms with undergraduate advisors or other academic mentors.

BUT—be very wary of a general liberal arts master’s degree. These can make you “overqualified” for many jobs, and not qualified enough for others, leaving you in an academic no-man’s-land. Only go for a liberal arts master’s if you know exactly how you will use it, and that it is certainly required (or, if you can afford it, if you simply want to enjoy the education!).

An MA program can be a way of strengthening your application to a Ph.D. program (but an incredibly expensive way; you may be better off excelling in your BA and writing an impressive thesis). This is different outside the U.S., so again, consult advisors about your specific situation.

An MA can also be a way of achieving a higher income for teachers, librarians, and other professionals, but you should find out exactly what programs are preferred, when you need to complete one, and whether your employer can help you pay for it.

For law school, things are quite different in several ways. First, many law firms seem to be especially concerned with the prestige of the school you graduated from. There are many, many law schools out there that are happy to take your tuition money even though they may not make you employable at all. Get information from knowledgeable people in the kind of law and location you hope to work in, about where most of their lawyers got their degrees.

Medical and business school are similar to law school. Law, business, and med students tend to borrow enormous sums on the assumption that their high salaries after graduation will make repayment possible. This may be the case, but know that:

(a) for your first several years in your profession, assuming you’re hired, your income will mainly go to paying off your loans

(b) you may graduate into a glut in the market, and be saddled with an impossible debt burden

(c) not all medical, business, or legal jobs pay equally highly. Many lawyers, especially, do not earn the kinds of incomes required to pay off law school debt.

Then there’s the Ph.D. (or the MFA and similar terminal degrees for the arts). Here’s another field with a glut of qualified graduates: academic research and teaching. College-level teaching almost always requires a Ph.D. In almost all academic fields, the number of Ph.D.s from top schools is vastly higher than the number of positions, so that graduates from even second-tier schools are limited to adjuncting (this is slave labor with extremely low wages and no benefits, and very little hope of moving to a permanent position), or community college positions (which tend to be all or mostly teaching positions at lower pay than 4-year institutions).

The advantage to teaching at a CC is that there are many of them, usually in every community in the country, so you may be less geographically circumscribed than if you search for a tenure-track position at a 4-year. But, increasingly community colleges are able to hire people from top-tier institutions, so even this is not a given. You should research your field very specifically.

There are a few fields in which academic jobs are actually growing (being both interdisciplinary and very applied in your research seems to be the key here), and a few where salaries are higher than average (accounting, law, etc), but still less than in non-teaching positions in the same field.

Whatever the level of prestige enjoyed by the school you choose, it is NEVER a good idea to enter a Ph.D. program without full funding (tuition, fees, plus a stipend). It is extremely unlikely that a Ph.D. will earn you enough to pay back years of loans. Don’t ever plan on it.

Important final caveat for prospective students at all levels:

You have to ask yourself all these questions. If you allow other people (say, your parents or friends or academic advisors) to tell you who you are and what you want, you may find, after much time and money have passed you by, that their image of you was filtered by their own limited perception and their own wishes for you (it always is), and was therefore not entirely accurate.

Exploring what you really want and need is difficult, especially when your experience of the options is still limited. Consulting with others is a good idea, but test everything you hear by the yardstick of your own gut instinct about your skills, goals, and potential. The best you can do is to continually re-assess as you gain more experience. No decision is 100% irrevocable, and often the twisty path takes you exactly where you need to go, when a shorter, straighter path may have rushed you to the wrong destination.

And, of course, you should never just take my word on any of the issues raised here. I wanted to raise questions worth asking. Other academics will give you different advice based on their experiences. Perhaps some will do so in the comments on this post!

 

Update: some links.

Posted in Teaching | Tagged , , , , | Leave a comment

What is academic history?

Thomas Henry Huxley by Theodore Blake Wirgman. Via Wikimedia Commons.

History is unique in being counted (or confused) as falling under both the social sciences and the humanities.

From its beginnings in oral storytelling, history was a partly literary exercise (and thus a part of the humanistic tradition) until it became professionalized in the nineteenth century.

From at least that time, history has also been counted as a social science because modern historians use objective data as evidence to support larger claims, and employ methods that are loosely based on the logic behind the scientific method. Some of our evidence is empirical (gathered through experiment or observation, as in the natural and social sciences), and some is interpreted through the “close reading” of texts (as is the evidence in other humanities fields, like literature and philosophy). In fact, as the study of everything that has happened in the past, in a way history can be said to encompass all other disciplines, with all their diverse methodologies.

Historians also rely on an exceptionally broad range of types of evidence: we use documents of every kind (public and private, statistical, official, informal, etc) as well as literature, but also fine arts, everyday objects, architecture, landscape, data on demographics, climate, health, etc, and just about anything else.

What holds together this very broad field is simply that we all study the past. That is, a historian of science may need to master many principles and methods of scientific inquiry, but her goal is to understand the development of science over time; contrast this to the scientist who may share some principles and methods with the historian of science, but whose goal is to further new scientific knowledge, rather than to understand how it developed up to the present.

More specifically, historians can be distinguished from scholars in other fields by the kinds of questions we ask. The questions historians ask can usually be reduced to some combination of the following:

(a) change and continuity over time
(what changes & when, what stays the same while other things are changing)

(b) cause and effect
(which factors affect which outcomes, how and why)

Dates, events, and famous names are elements we seek to master only so that we can more accurately explain the bigger questions of continuity, change, cause and effect.

Understanding the past helps us to know ourselves better (since we are in many ways the products of our pasts), and also to understand in a broad sense how societies behave, and how the constraints of particular societies affect their behavior.

This understanding – though always and inevitably imperfect – is both worthwhile in its own right and can also help us to better understand our choices in the present.

Although historical methods are often grounded in theoretical models and strategies (as in all academic disciplines), historians place unusual emphasis on distinguishing between specific contexts (time, place, social/intellectual/political/cultural climate, etc), as opposed to other disciplines which often aim to formulate models that apply accurately to many contexts.

In other words, we’re not lumpers, we’re splitters.

For example, when we as a society wonder about the causes of war, a political scientist may seek to distill the common factors causing many past wars so as to ultimately formulate a working general theory that will (one hopes) accurately predict the causes of future wars.

The historian, on the other hand, is more likely to delve into the unique factors of each particular context in order to understand what caused that war (but not others).

The historian’s ultimate goal, in this example, is to discern how particular contexts affect particular causes (i.e., identifying unique factors and tracking how they affect other factors), rather than directly predicting future events or reducing particular phenomena to general principles.

Note that both approaches are valuable and informative, and – interestingly – they each can serve as a check on the excesses of the other.

Posted in History, Profession, Teaching | Tagged , , , | Leave a comment

“Summarize”

Ballpoint pen writing. Via Wikimedia Commons.

If you’re a college student you may often be asked to “summarize” a text or film. The tricky thing about this is that people use the word “summarize” pretty loosely, and what is being asked of you might not be what you’re actually doing. To clarify the difference, it can help to be more picky about what we mean by “to summarize.”

If we’re being picky, then, “to summarize” in a general, non-academic context usually means to simplify.

To summarize in this sense is to touch on all the most important and interesting pieces, to highlight them or to communicate them to someone who is unable to read the original text. In this kind of summary, you’re usually looking for coverage – you want to hit all the main points, and usually in the order you found them in the original. You sacrifice depth for breadth, and that often means leaving out the complicated parts.

Students tend to have come to college with more or less this notion of what a “summary” should look like, probably because they’re used to textbook writing. In textbooks, by definition, very complex ideas are simplified, because the purpose of a textbook is to convey large amounts of general knowledge, rather than to further our knowledge in specific, new directions. So a textbook summary tends to focus on coverage of all relevant main ideas and may leave out many complexities or nuances, so that you get a complete overview, rather than depth on any particular point. Students may sometimes be asked to do this kind of summary for a very simple assignment, when the goal is only to show that you read the text, for example.

But it’s usually not what the professor is really looking for.

The reason summarizing gets tricky at the college level is that in the academic context, where our main goal is to think critically about what we know and don’t know and why (not just memorize facts), the most important and interesting bits of a text are not simple, and shouldn’t be simplified, as that would deprive them of their interest and importance. Usually, in academic writing, we summarize another work in order to question or elaborate on its conclusions in a new context. If we start with a simplified version of our sources, our own analysis can only be superficial, and very likely inaccurate!

So, when you’re attempting to “summarize” a text that you will use as a source in your own paper, you need to do something much more complicated than just hitting all the main points in their original order. You want to engage with the text in depth, not just skim its surface. This is why in my own classes I use the more precise term “to distill,” which is a metaphor for exactly the action we want in an essay – a taking out of selected bits, without changing their nature.*

When you distill a source that you want to use in your own essay, you usually do not need to cover every key point of the text. Since the source text probably wasn’t written on purpose to be used as a source in your essay, and in fact had different goals of its own, parts of the source text may not be relevant to your essay. Those don’t need to be covered, then. Instead, you want to home in on the parts of the source text that directly relate to your goals for your essay. And when you explain these relevant ideas, you want to very deliberately avoid simplifying them. Focus your energy on explaining what is complex, interesting, controversial, incomplete, or questionable about the source text, because it is these nuances that you will want to develop in your essay. This is what we mean by “analysis,” another potentially confusing word you see a lot in assignments—when you analyze a text, you apply your own thinking to the source texts, evaluating their assumptions and sources and goals and logic. You can’t do that if you’ve ignored all the details from the source text.

This confusion about what we mean by “summarizing a source” in an academic essay is actually not a minor matter of semantics at all. When a student summarizes source texts in the sense of simplifying them, the student leaves him- or herself with ideas that are too small and too simple to work with. So the student has nothing to add, and therefore no argument. And next thing I know, I have a stack of essays to grade that were supposed to be analytical, but a huge percentage of them have no argument at all. That is a sad state of affairs for us all!

* I got the term “distill” and countless other useful ways to talk about writing from the University Writing Program at Columbia University, directed by Joseph Bizup, who trained teaching fellows like me. It’s a great term that has served me well in the years since.

Posted in Teaching, Writing | Tagged | Leave a comment

Scrivener: A Love Story

If this were how I had to write, I don’t think I’d write. Image via Wikimedia Commons.

When I was in the early to middle stages of revising my dissertation into a book, I discovered Scrivener. At the time, the Windows version had just been released in Beta. I tried it, and it was still too buggy to use on a huge project that was fast approaching its deadline, but oh, oh did it have incredible potential! My mind was blown. So much so that, I’ll admit, Scrivener was a fairly major factor in my decision to switch to Mac (it was time to get a new laptop anyway, I’ll say that much for my sanity).

Importing a 300-page manuscript full of footnotes was a bit of a pain. Scrivener isn’t really intended for large-scale importing of a whole project at once like that. But it worked. And then my life was changed.

No, really, this software changed my life.

My dissertation project had begun many years before, and I had gone through several major moves, including international ones, with all my research notes and drafts, and I had switched software for various aspects of the data management several times. In short, all my materials were a bloody mess. And here I needed to quickly revise this enormous beast in significant ways — I added four new half-chapters, new framing on every chapter, new introduction, and bits and pieces of new research throughout. It was a monster to keep track of it all.

And I am not someone who deals well with that kind of situation even on a small scale. I think in circles and spirals, not straight lines. I can’t keep anything in my head that isn’t right in front of me. This whole project had the potential for disaster.

But Scrivener was, seemingly, devised for people exactly like me. Scrivener is not word processing software (although it can do all the basics of word processing). It’s part database, part outliner, and mostly it’s something else entirely — a virtual version of a stack of legal pads, index cards, paperclips and a bulletin board. But you don’t have to carry all that paper around with you, and you can’t lose any of it, since it’s got a really smooth automatic backup system. In addition to all that — and many more features aimed at fiction writers that I haven’t explored at all — there are some really nice bells and whistles that just make it very pleasant to use.

Here’s how I use it. At first it was just for the dissertation, so I’ll start with that. Once I’d imported my huge text file and figured out how to get all the footnotes looking right (actually looking better – in a panel beside the main text, much easier to see while editing), I started splitting my text up. One of the core features of Scrivener is that you can break your text up into chunks of any size, and the smaller your chunks, the more you’ll get out of Scrivener. So I didn’t just break it up into chapters, or subsections, but into paragraphs. Each chunk gets a title, and these titles are displayed in a panel next to the text as you’re reading it, so in effect the outline of your whole text is right there in nested folders, which you can quickly and easily rearrange. (Scrivener will also output all your data and metadata into a proper outline where you can change things in groups, etc.) Just the process of splitting up and labeling my chunks of text revealed many places where the organization was a lot less logical than I’d thought, so I did quite a bit of rearranging just in the process of importing.

Each chunk of text has a virtual index card attached to it (I love that it looks like an actual index card), which you can either auto-fill with the beginning of whatever’s written in that chunk, or you can fill with your own summary. There’s a corkboard view where you can see just the index card versions of your chunks, and rearrange them at will. This is incredible.

Years earlier when I was finishing the dissertation, I had actually printed out several chapters, cut them up into paragraph-size pieces with scissors, and spread them all out on my living room floor. That exercise was incredibly helpful, but it was such a big project that I only did it once. With Scrivener I can do it easily and often, with no mess, and no trees killed for my efforts.

Each chunk of text can also be labeled for easy sorting (like “Chapter,” “Front Matter,” “End Matter,” etc.), and can be marked with a status (like “To-do,” “First Draft,” “Final Draft,” “Done”). You can set the options for label and status however you want. In addition, you can add as many keywords as you choose (like tagging: I can add “gender,” “upbringing,” and “childhood” to one paragraph, and “gender,” “estate management,” and “needlework” to another, and later sort all my chunks to see those that have “gender” in common, or just the ones on “childhood,” etc.).
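
For readers who think in code, here is a rough analogy (in Python, purely illustrative, and nothing like Scrivener’s actual internals) for what keyword tagging buys you: every chunk carries a set of tags, and you can regroup chunks by any tag on demand. The chunk titles and tags below are invented for the example.

    # Purely an analogy: chunks of text carry keyword sets, and you can
    # regroup them by any keyword. Titles and tags are invented examples.

    chunks = [
        {"title": "Nursery routines",   "keywords": {"gender", "upbringing", "childhood"}},
        {"title": "Running the estate", "keywords": {"gender", "estate management", "needlework"}},
        {"title": "Court ceremony",     "keywords": {"ritual", "court"}},
    ]

    def with_keyword(all_chunks, keyword):
        """Return the titles of every chunk tagged with the given keyword."""
        return [c["title"] for c in all_chunks if keyword in c["keywords"]]

    print(with_keyword(chunks, "gender"))     # ['Nursery routines', 'Running the estate']
    print(with_keyword(chunks, "childhood"))  # ['Nursery routines']

The point is simply that the same chunk can live in as many of these virtual groupings as you give it tags, without ever being duplicated or moved.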

Each chunk of text also has a free field where you can add notes, like “did I double-check this in Blum?” And you can also insert comments into the text as you do in the revision mode in MS Word. So, you can have comments pointing to one spot in your text, or comments referring to a whole chunk at once. There are, in addition, a bunch of options for custom meta-data and internal references that I haven’t even begun to explore. All this metadata displays in another frame on the other side of the text you’re reading. You can hide this frame, or the one showing your folders, at any time.

One of my favorite features (though it’s so hard to decide) is that you can also split the main text frame, vertically or horizontally, to compare two chunks of text. This feature alone would have been life-changing to me, even without all the rest. I compare documents and cut and paste between chapters or separate files constantly, and even with all the screen real estate in the world, there’s no way to do this in Word without aggravation (and endless confusion about what was changed where and when — in Scrivener everything is in the same file, with created and modified dates on every chunk of text, not just the whole file, always visible, without clogging up space). On my 13” MacBook Air, I can split the text screen horizontally and still see the folders on the left and the metadata on the right. Or, I can hide those two side screens and compare documents vertically, for more intense editing back and forth. All of this can be done with quick, one-step, intuitive clicks.

While I’m writing, the word and character counts show on the bottom of the screen. I can set daily targets for myself (or in my case limits!).

I can also view my text in any old font or size, secure in knowing that when I’m ready to compile into a Word, RTF, or PDF file, I have saved settings that convert everything automatically to the output style I want. All that is easy to do in your custom way, though there are also settings available for the basic options (for people who write things like screenplays, there’s much more to all this). I like that I can read on-screen in 18-pt Helvetica, or some random combination of sizes and fonts that result from pasting in text from a variety of notes files, for example, without it affecting the finished product, and without having to fuss about cleaning up a bunch of little inconsistencies.

I also imported Word and PDF files that I needed to refer to but that weren’t part of my text. These go into a separate folder, where they can’t be edited but can be viewed alongside your text in the split screen, for reference. Awesome.

Right now I’m really enjoying the first stages of starting my new project on Scrivener, building up the organization and metadata from the start, but there were some particular advantages, too, to finishing up my first book project in Scrivener. As I went through my research materials collecting bits and pieces that needed to be added, I imported them into Scrivener as separate chunks of text. I labeled them as “Added Bits,” which gave them a different color in the folder hierarchy and outline, so they could be spotted easily as I integrated them into the main body of the text in the places I thought they should eventually go. As I worked my way through them, I could either change the label or merge the added bit to a chunk of the original text, as it got integrated, or I could shift it off again to another folder labeled “rejects” or “spin-off article.” When you compile your text into a word processing file, it’s easy to un-select any folders like this that aren’t intended to be part of the whole.

Once I got going with all this, I found that I could use Scrivener for practically everything I do. Most significantly, for all the writing I do for teaching. I have one Scrivener project for all teaching-related materials: syllabi, assignment sheets, handouts, etc. I keep a template that contains most of the boilerplate text for my syllabi, for example, and can very easily slip in the updated text for a particular iteration of the course, then, with a few clicks, compile it straight to PDF in my established format for syllabi. I can easily separate out a chunk of text in a handout that changes when I use it in different courses, for example, with all the alternate versions I need for just that chunk, while the rest of the handout is common to all versions. That way, I can update part of the common sections of the handout, and when I compile one or another version, that update will automatically be there. I can collapse the subfolders for courses I’m not currently teaching, yet still have them handy when I want to go back to an old handout for a new purpose. I have files with reference material like the official college grading scale, official verbiage about department goals and requirements, etc, so that I can grab it when I need it without opening new files, without constantly updating an external folder system full of duplicates, etc.
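
To visualize what happens when I compile one course’s version of a handout out of shared and course-specific chunks, here is a rough sketch in Python. It is purely illustrative (Scrivener does all of this through its interface), and the chunk names and text are invented.

```python
# Toy illustration of compiling a handout from shared chunks plus
# course-specific alternates. Names and text are invented examples.
common = {
    "header": "History Department Handout",
    "citation_rules": "Cite primary sources in Chicago style.",
}

course_specific = {
    "HIST101": {"assignment": "Analyze one 18th-century document."},
    "HIST305": {"assignment": "Analyze one Soviet-era document."},
}

def compile_handout(course):
    """Assemble one course's version from the shared and specific chunks."""
    parts = [
        common["header"],
        course_specific[course]["assignment"],
        common["citation_rules"],
    ]
    return "\n\n".join(parts)

# Updating common["citation_rules"] once updates every compiled version.
print(compile_handout("HIST101"))
```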

And now I even use Scrivener for writing blog posts. When I have a random bit of an idea for a post, I create a little “chunk” of text for it in Scrivener, so that I have a running list of many potential posts in various degrees of completeness from raw idea to ready-to-publish (each one labeled with a click and automatically color-coded). This way I can add a bit here or there whenever a moment presents itself, without losing anything or getting buried in duplicates. Or accidentally publishing a half-baked post!

It’s also easy, once you have a system down, to create a template in Scrivener that you can use for future projects, and then these templates can be easily shared. I made very basic templates for my own purposes (and to share with my husband), for a book-length historical research project, an article-length project, and teaching materials. These templates don’t use the vast majority of Scrivener features — they’re really just a system of basic organization that I don’t want to have to recreate again and again. I’ve shared them on my academia.edu profile if you’re interested.

To conclude this story of a love affair, I’ll admit that I’ve had one problem with Scrivener so far, and I don’t know if it was my fault. The word count of my manuscript in Scrivener was drastically different from the word count I got when I compiled it to Word. By 30,000 words! This is of course a very serious problem. I assume that Scrivener was not counting the notes or some part of the front- or end-matter, but I did very carefully (many times!) check all the settings and make sure the right boxes were checked to include all those. I tried comparing a short, plain text document, and the word counts were comparable. It may be that the many abbreviations in my footnotes were handled differently by Scrivener’s word counter than by Word’s (though I don’t think that could add up to such a huge discrepancy). Right now, I don’t think Scrivener is really designed for massive scholarly research projects with more than a thousand footnotes. It can handle that, but it wasn’t really designed for it, and that may be part of how it was possible for the word count to be so far off. I haven’t gotten to the bottom of this issue, and I welcome thoughts others might have about it. In any case, now that I’m aware of the issue, it’s simple enough to compile the text after any major changes to keep a rough gauge of the difference between a Scrivener word count and Word’s.
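
For anyone curious how two word counts of the same text can disagree at all, here is a tiny, purely illustrative example of two counting rules applied to the same footnote-style string. Neither rule is Scrivener’s or Word’s actual algorithm, and the citation text is invented; the point is only that the counting rules matter.

```python
import re

# Two different counting rules applied to the same (invented) footnote-style text.
text = "RGADA f. 1239, op. 3, d. 51, ll. 2-4; cf. Blum, Lord and Peasant, 55-57."

whitespace_count = len(text.split())                    # whitespace-separated tokens
alpha_word_count = len(re.findall(r"[A-Za-z]+", text))  # runs of letters only

print(whitespace_count, alpha_word_count)  # 15 vs. 10 -- same text, different totals
```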


Money

Bundesarchiv, Bild 183-19204-013 (Währungsreform, Frau mit Geldscheinen), via Wikimedia Commons

I learned not long ago that as a tenure-track assistant professor* of history I was making the same salary as a deckhand on the Staten Island Ferry.

I don’t begrudge the deckhand his salary one bit, because I know as well as anyone that you can barely support a couple of people on that money in New York City.

Also, while I believe my work is very valuable to society, I think everyone’s work is valuable. We need deckhands.

The thing is, the deckhand on the Staten Island Ferry probably doesn’t pay $1,150 a month in student loans (it was $950 until Sallie Mae hiked the rate on us again). And he probably started work before the age of 30, because he didn’t need 8 years of post-graduate education to get that job, so he got that salary or something like it for all those student years that I was living on beans and rice and couldn’t afford coin-operated laundry machines. And hopefully (though these days you sure can’t count on it) he’s been paying into Social Security and a pension plan all those extra years that I wasn’t, while I was still being trained for my job. So that means the deckhand on the Staten Island Ferry is very significantly better off than I am financially. And let’s just remember that the deckhand–though his work is as valuable as anyone’s–is not the driver of the ferry, who has the safety of thousands of people in his hands.

Now let’s compare my salary to that of a first-year law firm associate in New York City. The law firm associate is likely to make at least twice as much money per year, not including the annual bonus. That person may indeed have the same loan burden that my family has (law school is how we got most of ours). But the law associate can handle it, and will probably pay it off in about five years (we’re only paying off interest, so we’ll be doing that every day of our lives until we die). That cushion can also easily cover the much nicer professional wardrobe that a law firm associate needs to work long hours in an office, as opposed to the long hours I work at home (though, arguably, I actually work more hours total, which is saying something). And the law associate only needed 3 years of post-graduate education to get that job, so potentially he has about five extra years of earning and paying into a pension, too. At twice the salary. Not to mention that Manhattan corporate lawyers get access to private banking accounts with fabulous terms (no fees for anything! Extra special interest! All the perks rich people get to make them richer, including tax loopholes). What does the NYC law firm associate contribute to society? Well, judging by what someone I know very well once did when he had this job, he helps corporations make sure they don’t pay their employees the money they are entitled to. Or something similar. While I teach the citizens of this democracy to think critically. Okay, so neither of us is curing cancer, or saving your life when you get into a car accident. There’s a reason I’m not including medical doctors in this comparison.

Of course, what I’m describing here is not at all what all, or even most, lawyers do. I really wouldn’t whine about the vast majority of lawyers, many of whom make as little as or less than I do anyway, and many of whom do incredibly important things for our society. I’m talking about corporate lawyers in Manhattan, and even then, there are exceptions. There are firms that exist more or less to go after the money-grubbing firms. But as a rule of thumb, I’ve noticed that the more useful your work is to society, the less money you make (with the glaring and rightful exception of the medical profession). Still doubting? Two words: social workers.

There’s a funny thing, too, that should be mentioned about New York City. The last numbers I saw estimated that NYC salaries are about 20% higher than the national average, while the cost of living in NYC is about 200% higher. Unfortunately that was a print source and I lost my original, but here are some links that can give you a pretty specific idea of what it’s like: costs broken down by type of expense, Daily News articles on the general awfulness, CNN cost of living calculator so you can compare how far your salary would go here.
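
To see what those rough figures imply, here is the back-of-the-envelope arithmetic, using the 20% and 200% figures cited above and an invented round number for the national-average salary:

```python
# Back-of-the-envelope purchasing-power comparison. The 20% / 200% figures
# come from the rough estimates above; the $50,000 baseline is invented.
national_salary = 50_000
nyc_salary = national_salary * 1.20   # salaries "about 20% higher"
cost_multiplier = 3.0                 # costs "about 200% higher," i.e., three times the base

equivalent_salary = nyc_salary / cost_multiplier
print(equivalent_salary)  # 20000.0 -- on these rough numbers, the NYC paycheck
                          # stretches about as far as $20,000 would elsewhere
```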

Yeah. Bridging that tremendous gap would be easier if I were making twice the salary I make now, for twice as many years. The rent my family pays on a small, cockroach-infested 2BR apartment in a questionable neighborhood in Queens could get us a beautiful 4BR house in most parts of this country. The costs of groceries and transportation in NYC still boggle my mind, after over a dozen years here. Childcare was an unaffordable dream on my salary for the first three years of my daughter’s life, yet without childcare there’s little hope of doing the work that could help us earn more. But my point is merely that there’s tremendous regional variation in income and how far that income will stretch, which is helpful to keep in mind when one is comparing salaries.

There’s been quite a bit of news around the country lately about the supposedly astronomically high salaries of faculty driving up costs for college. I won’t link to it because I don’t want to be a part of driving traffic to those sites, but it’s easy enough to find.

This is another of those pernicious lies. It’s scapegoating. Faculty are being pinched just like students and parents are.

Who is telling these lies about faculty salaries? Mostly politicians and university administrators. Who’s not feeling the pinch of the higher costs of university education? Mostly politicians and university administrators. Yep. Always ask yourself what the person telling you “facts” has at stake, and examine where they got their information.

That goes for me, too, of course. So let me tell you where I got my information.

First, it’s not hard to find out what faculty really make. Most of the sensationalist news articles have been giving a single “average” salary figure for all faculty at an institution, or even all faculty in general, and in every case that number has made my eyes bug out of my head in fascinated disbelief. I have no idea where they’re getting those numbers from, but I can tell you that you can search here, at the Faculty Salary Survey from the Chronicle of Higher Education and get actual average salaries of actual faculty at nearly every university in the country, broken down by rank and gender.

It’s important to break down the figures, and ideally you’d break them down even more than that site does, because salaries vary widely across the academy, from well below the poverty line to astronomical sums. That variation is far too wide to be explained by differences in cost of living.

So let’s talk first a bit about how and why faculty salaries vary so much.

First, are we talking about full-time or contingent faculty? The majority of people teaching college-level courses in this country are contingent faculty. Contingent faculty are usually paid on a course-by-course basis, with zero job security and zero benefits. It is not physically possible to live on this money unless you teach an insane load, like 8 courses at a time, and even then, you’ll barely scrape by. NO ONE does this unless they are either (a) absolutely desperate to get a full-time job and hoping this will help them achieve that dream, and/or (b) so in love with what they do that they are willing to work mostly for free, usually at great personal sacrifice. In most cases, working as contingent faculty is only possible for families with another “real” income from somewhere else.

So when we’re talking about faculty salaries right off the bat we have to exclude all the contingent faculty who don’t really get salaries at all. And let me repeat: these mostly selfless and often desperate people are the majority of faculty in this country. Look here at the Coalition on the Academic Workforce Survey to see what kinds of pay (if you can even call it that) are typical for contingent faculty across the country.

How about full-time faculty?

Well, from the first link I gave you above, the first thing I hope you’ll notice is that across the board, at comparable ranks and institutions, women make less than men. Sometimes a LOT less. A small part of this disparity may be explained (though not excused) by the fact that there are more women in the fields that pay less (basically, the humanities), but that doesn’t explain away the whole gap. It’s also less common for women to negotiate for higher salaries than it is for men, but that too doesn’t explain away the whole disparity. Some schools are—for reasons I can’t fathom—much worse than others. But the pay gap between men and women exists, unfortunately, in nearly every field of employment in this country.

Let’s go back to the fact that the humanities are paid less than other fields. If you have a friend or relative who is an accounting professor, you may be under the impression that faculty are pretty well paid. In the case of accounting professors, you may be largely right (allowing for differences between institutions, the gender gap, and assuming we’re talking about full-time faculty). But a professor in the humanities at the same rank, with the same education and the same duties, may make as little as half of what the accounting professor makes, and will certainly make much less. Fields like accounting, business, law, engineering, and applied chemistry all compete for faculty with employers outside academia. Someone with a Ph.D. in accounting may get a job as an accounting professor, or work as a CPA. So, faculty salaries for accounting professors have to compete with CPA salaries in order to attract good faculty.

If you have a Ph.D. in history, though, the employment options that will most directly make use of your degree are: academia, museum work, K-12 education, and government research. All historically under-paid fields (it is not a coincidence that all but the last are also historically dominated by women—historically female professions are universally less well-paid than historically male professions). So, universities don’t need to offer as much to attract the top candidates, and consequently history professors make much, much less money than professors in accounting, business, law, and a few other fields. The same is true for other humanities fields like philosophy, literature, languages, and fine arts. The social sciences, mathematics, and some of the hard sciences are an in-between category: Ph.D.s in these fields are generally more employable outside academia, but not as obviously so as in the more applied fields, so salaries may sometimes be slightly higher than in the humanities, sometimes not.

Then there are some fields in the hard sciences where faculty salaries are largely paid by outside grants. These can sometimes (but far from always) be higher than average. And other fields, mainly athletics, are hugely higher paid at certain institutions because they help to create a money-making industry (like a successful football or basketball team).

There are also vast differences based on the employing institution. You can poke around on the web site I linked to, and what you’ll find is that the hierarchy of salaries basically follows this pattern (note that I’m talking about the US throughout this post — faculty salaries around the world follow other patterns):

Research-1 universities (huge private and public university systems with enormous research budgets, like the Ivy League, Chicago, Stanford, Berkeley, U of Michigan, Wisconsin, NC-Chapel Hill, etc.) pay the highest salaries across the board, in order to attract the top researchers in most major fields and so maintain their status as the world’s best universities.

Small liberal arts colleges with huge endowments (like Middlebury, Pomona, Williams, Sarah Lawrence, etc) pay the second highest salaries. They have enormous amounts of money from internal sources, and can afford to attract top faculty in order to attract a very selective student body.

Major state universities and smaller private liberal arts colleges (like Michigan State or Kalamazoo College or my own employer, Queens College, CUNY) are third in line — they are mostly hiring top candidates from top Ph.D. programs, so they have to offer respectable salaries compared to their peer institutions, but they’re usually not involved in bidding wars over big names or establishing top programs in any field.

Regional schools and community colleges (like Grand Valley State University or LaGuardia Community College) — smaller schools like these rely principally on contingent faculty, but the full-time faculty they do have are often there because they need a job in that region or are committed to the mission of schools like this that serve populations who would not otherwise be able to afford college, so they are often forced to compromise on things like salary.

The final major variation in salary range in the academy is based on experience. Like anywhere else, faculty salaries start fairly low and climb higher over the years for any given faculty member. Salaries for senior scholars grow incrementally from the base salary, so women faculty tend to be earning even less compared to their male peers when they get to senior positions, because their raises are a percentage of their lower starting salary. The same is true for faculty in the less well-paid disciplines—they start out earning less and so their percentage raises are also less.
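
A quick worked example makes the compounding effect concrete. The starting salaries and the 3% raise rate below are invented round numbers, purely for illustration:

```python
# How identical percentage raises preserve and widen a starting-salary gap.
# Starting salaries and the 3% annual raise are invented illustrative numbers.
salary_a = 65_000   # higher starting salary
salary_b = 60_000   # lower starting salary

for year in range(20):
    salary_a *= 1.03
    salary_b *= 1.03

print(round(salary_a - salary_b))  # ~9031: the original $5,000 gap has nearly doubled
```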

Often, when the media reports a “typical” faculty salary, the salary they quote can only be that earned by a big-name star senior faculty member at a research-1 institution in a field with a very competitive non-academic job market. So, yes, it is entirely possible for a faculty member to be earning half a million dollars a year. But there are only a handful of such faculty in this country. The vast, vast majority are lucky if they make the equivalent of a deckhand on the Staten Island Ferry, while they also struggle to pay off gigantic student loans and, much like the rest of the middle class in America these days, do not look forward to ever enjoying a pension adequate to support life.

There is also an enormous generational difference. The generation of faculty who entered their first jobs in the two decades following World War II (and the GI Bill that brought record numbers of college students into the system) are largely male, largely had wives who could afford not to work and thus took care of childcare and the home, largely were offered jobs instead of competing for them, and largely had respectably upper-middle class salaries and zero student loans (the major federal student loan programs started in the 1970s). It was much harder to reach the point of a Ph.D. for that generation—with few exceptions you had to have the right sort of background in addition to being very bright and working very hard—but once you got a Ph.D., you got a job and a salary on which you could support a family. Much has changed for subsequent generations, and not just in academia.

While the percentage of Americans going to college continues to grow, the pace of growth has slowed, and universities are choosing to expand the administration rather than the faculty (leading to larger student-faculty ratios and more courses taught by TAs). Meanwhile, Ph.D. programs have churned out more and more graduates, creating an increasingly huge surplus of qualified candidates (I get my information on this phenomenon in my discipline mainly from the American Historical Association — I’m not sure how much of their publications are available to non-members). This means these graduates, more and more desperate for more and more competitive jobs, can be offered relatively lower and lower salaries. Meanwhile, access to Ph.D. programs has broadened—all you need to get in now is brains and drive, no matter what your background, with a few significant exceptions—but for this very reason more and more Ph.D. students have to take out loans to complete their education.

(A personal example: I am the second person in my very large extended family to get a Ph.D. My maternal grandfather was the son of truck farmers; my paternal grandfather grew up malnourished in Kansas during the Depression, where he got his first job, at nine years old, to help support his large family. With excellent grades and test scores, I was able to attend the second most expensive private college in the country—and in my not-humble opinion the most rigorous college in the world—with half the tuition paid by grants and half by loans. I competed to get into a top grad program with paid tuition plus a small stipend. All these opportunities were unprecedented for someone of my background just one or two generations earlier. But I also have a big debt burden and can’t look to my family for financial support, unlike some of my cohort in graduate school, and radically unlike the previous generation of Ph.D.s.)

So where on earth are the astronomically rising costs of college coming from?

There are a few explanations that I’ve read about and seen with my own eyes.

First, for the more competitive schools, there has been a rising expectation that to attract the best students colleges need state-of-the-art technology, gyms and other recreational facilities, and living spaces. All this is very expensive, and has contributed to the rising cost of tuition in private colleges, especially.

Second, there’s one part of university payrolls that has sky-rocketed since the 1980s. It ain’t faculty salaries. It’s administrative salaries. A part of this is justifiable—new federal regulations require new personnel. And, justifiable or not, the new facilities that parents and students increasingly expect—like gyms and dorms but also disability services, writing tutors, etc.—require administrators. As a rule, although they do not directly educate students, administrators make more money than faculty (the disparity is far greater for top-level administrators, though the sheer number of arguably suitably paid mid- and lower-level administrators is, collectively, part of the rising costs). The reason for this is presumably that administrators, like accountants, can choose to work in the private sector, so their salaries need to compete.

(Are you noticing a trend here? That universities take advantage of the Ph.D. degree that uniquely qualifies graduates for the professorate by paying them much less than anyone else with less difficult-to-obtain degrees? Obviously no one would put up with this…unless they were so devoted to their subject of inquiry and their teaching that they put up with being treated unfairly in return for making a difference in the world….cf. social work, teaching, nursing, mothering, and most other female-dominated professions….)

But there’s still another phenomenon at play here, and it’s true at every institution of higher education in the country. Since the 1980s, there’s been a push to apply “corporate” or “private” financial principles to the administration of institutions of higher education. On this principle, top administrators have been hired at astronomical salaries (at least six figures, sometimes high six figures or more) to “fix” university budgets by applying these magical principles of capitalism that make money fall out of the sky.

The thing is, decades have passed, and university budgets have shown no improvement. The biggest difference between the university of today and the university of 1980 is not a streamlined budget and efficient administration. The very idea is laughable. The biggest difference I see is the proportion of the university budget that pays enormous salaries to administrators with no background in education who flit from one institution to the next “fixing” budgets but leaving them, mysteriously, in no better shape than they were before.

The problem is based in part on a fundamental error in how the public, the government, and university administrators understand capitalism. Taking a course in the history of modern Europe or in basic economics could resolve this fundamental error, but apparently a large sector of our public failed to take such a course, or just plain failed it.

If you’re feeling skeptical about what I’m about to say, please, I beg you, read Adam Smith’s Wealth of Nations, the acknowledged bible of capitalism. You don’t have to believe me. Go back to the primary source (as any good professor will tell you to do) and judge for yourself. Just don’t blindly believe the talking heads on your TV, I BEG YOU.

Capitalism is about the exchange of commodities. Education is not a commodity.

Confusing this issue is a fundamental error that is bringing down what has been, to date, the world’s best system of higher education. We are fast losing our edge to new universities in India, China, and the Middle East, because we are misapplying financial principles to a non-financial sector.

I could go on—and on and on and on and ON—but to spare you I will stop and let you process what I’ve already said.

Just one more thing, while you’re processing. Adam Smith understood what has been lost in the American mainstream discussion of capitalism today: healthy capitalism requires regulation. Without regulation, capitalism is destroyed by monopolies and corruption. The people who are monopolizing and corrupting our capitalist economy are, today, in the mainstream media, accusing true capitalists of being evil socialists (which is also a complete misunderstanding of socialism, but that’s a subject for another post). The irony would be delicious if the consequences weren’t so incredibly dire for nearly every American.

I may seem to have strayed waaaay off topic, here, but it’s all much more deeply connected than you may think. Every citizen in the United States should have a basic understanding of how capitalism works, including the facts that capitalism works with commodities and commercial services (not life-or-death necessities that you can’t effectively comparison shop for or decide not to “buy,” i.e., health, safety, or in our modern world, education), and that it requires basic regulation to avoid falling into corruption (which is by definition not capitalism, but a failure of capitalism). But most citizens don’t have this basic understanding, in large part because civics education has been eliminated from most public school curricula in the past few decades. And because, while more Americans than ever are going to college, they increasingly aren’t taking “frivolous” classes like the history of modern western societies (in which capitalism is always a major theme), or aren’t understanding them, because they come into college ill-equipped to succeed thanks to our decimated system of primary and secondary education. It’s all connected.

Take-home message? As always, question your sources of information.

Why not start with these? Some links to respectable articles about faculty pay and tuition:

CHE: College Costs Too Much Because Faculty Lack Power

NYT: How Much is a Professor Worth?

NPR: The Price of College Tuition

New Yorker: Debt by Degrees

Business Insider: America’s REAL Most Expensive Colleges

Philip Greenspun: Tuition-Free MIT

 How the American University was Killed in 5 Easy Steps

New: The Adjunct

* Note that “assistant professor” is not an assistant to a professor. In the US those are known as teaching assistants or research assistants. The three ranks of full-time, tenure-track professors in the US are, in order from junior to senior: assistant professor, associate professor (often achieved along with tenure), professor (known as “full”). A retired professor is “professor emeritus.” All teaching faculty are referred to in a general way as “professors,” usually (but not quite always) including adjunct or other short-term-contract faculty. Almost all faculty these days have Ph.D.s (so that “Dr. So-and-So” usually also applies), but in some fields the terminal degree is at the master’s level, such as a Master of Fine Arts. Generally, the only people independently teaching college-level classes who aren’t loosely referred to as “professor” are graduate students, who are officially “graduate instructors” or “teaching fellows.” In my day, we usually asked students to call us by our first names at that rank. Mind you, in Europe these ranks and titles are all completely different, which is very confusing.


Rocky IV

Rocky statue at the Philadelphia Museum of Art. By Sdwelch1031, via Wikimedia Commons

All of the Rocky movies appeared on Netflix recently, and I was inspired to put them on in the background while I was doing some mindless busy work. Ah! How they bring back my childhood. Anyway, I was particularly excited to watch Rocky IV, “the one with the Russian,” for the first time since it came out in 1985. At that time, I was ten and didn’t particularly know anything about Russia. The Cold War had been reignited and I was scared of nuclear war. I took the movie at face value, enjoyed it, was mildly interested in the scenes where Rocky is (supposedly) in the Soviet Union. The message I got about the USSR at that time was mainly that it’s cold there, the people are apparently big and scary, and they have a lot of technology.

Watching it twenty-seven years later, as a professor of Russian history whose students don’t even remember a world in which a Cold War existed, was of course very different. I expected to giggle at the cliches about Russians (yeah, they’re not any bigger on average than we are, duh), but my dim memories of the movie had not prepared me for how hilariously, amazingly backward the whole portrayal of Soviet athletes versus Americans is.

The Russian boxer is characterized in the movie as almost super-human not just because of his size, but mainly because of the super-cutting-edge technology his team of top trainers use (lots of rooms full of flashing lights!), while Rocky is of course all natural, just one man against the world, able to beat stronger opponents through sheer will power and his ability to endlessly take a beating. When he travels to the “Soviet Union” to train for his big face-off, Rocky demands not a high-tech training center equivalent to what his opponent is using, but a humble cabin off in the snow somewhere (Russia is cold, yo). He runs in thigh-high snow, he climbs mountains (is he supposed to be in the Urals? there aren’t a lot of mountains in Russia, actually, and none of them are very impressive looking), he throws logs around. That’s our authentic Philly boy, there.

There are so many things wrong about this that it’s hard to know where to start.

First, in the 1980s, to our detriment, we vastly over-estimated both the economic and technological power of the Soviet Union. Mind you, we had to guess, because the USSR worked very hard to prevent an accurate picture of its real abilities from reaching the West, but our guesses were very, very wrong. There are multiple reasons for that, but one of them must be that we let our fear become reality. We were afraid the USSR was ahead of us in technology, so we assumed they were ahead of us in technology. Sometimes that kind of thinking can be useful — be prepared for the worst case scenario, right? But we acted on this fear, even though there was no evidence to substantiate it, in all kinds of ways that are still hurting us today (mainly by running up our national debt to astronomical levels in a race to “match” an opponent that actually was way behind us from the beginning).

What we now know about Soviet technology in the 1980s is that it was woefully behind western standards and falling farther behind every day. The first scene of supposedly Soviet high tech that I saw when re-watching Rocky IV made me laugh out loud.

Yeah, right — they wished.

The Soviets obtained the technology for microcomputers, for example, quite early, but had endless delays in their attempts to reverse-engineer it, and by the 80s, from what I’ve read, they had started buying many of the parts, and some whole computers, from abroad. In 1985 when I first watched Rocky IV I had had a TRS-80 personal computer in my family home for 5 years already, but the USSR as a nation was still struggling to develop and distribute comparable technology.

I’m generalizing, of course — there were a few areas of technology where the Soviets put all the investment they were capable of (as far as I know, the first personal computers that got into classrooms in the latter part of the 80s seemed to have been developed for military purposes), but athletics wasn’t one of those areas. And in any case the level of investment they were capable of in the 80s was basically in the realm of imaginary numbers — in a nutshell, the empire collapsed a few short years later because they’d been running on imaginary numbers for decades. By 1980 at least, the game was already up for the Soviet Union — this just wasn’t admitted to until 1989 and beyond.*

If you remember watching the Olympics in those years you know the Soviets did put on a show of strength — that was part of the game of the Cold War. There’s some truth to the cliche of those cold-eyed coaches who pushed their athletes to the limits and achieved huge successes. But don’t confuse a hard-as-nails coach with technological or economic superiority.

Why does the movie, Rocky IV, portray Soviet strength in technological terms? Well, as I’ve said it wasn’t implausible to an American audience at the time because we didn’t know the truth, but it’s also no doubt because the robot-like Soviet villain makes such a nice contrast to our humble homeboy Rocky Balboa. Throughout all the Rocky movies there’s a running theme that Rocky wins not so much because he’s stronger or a better boxer, but because he can take punishment and stay standing. He is a hero for his ability to withstand suffering.

This is astounding, that such an iconic American hero-figure is portrayed this way. The standard narratives of what Americans are all about have never had anything to do with suffering. In most American myths, we are pioneering, we are adventurous, we are brave, chivalrous, we are often the plucky underdog. But looking on ourselves as the underdog had to be getting pretty difficult after World War II, when we were essentially the only Western power left standing, and certainly by the late Cold War, when we more or less ruled the globe. Clearly Rocky is a plucky underdog type, calling on an origins myth (we became Americans, gaining our independence from Britain, against the odds, so somehow we’re still underdogs at heart), but adding this layer of suffering is curious, to say the least, especially at a time when Americans seemed unbeatable.

To a Russianist, the phrase “hero for his ability to withstand suffering” sounds overwhelmingly Russian. The word “suffering” in Russian — stradanie — is full of deep cultural associations. A big part of the reason is that Russian Orthodox Christianity values suffering (and humility) much more overtly than most other Christian churches. As I understand it (not being an expert by any means in the theology), suffering is a path to God. Those who endure great suffering and remain devoted to God are often recognized as saints — it’s one of the highest virtues.

In a general cultural way, for Russians suffering is understood as an inevitable fact of life (in stark contrast to the American view, where the pursuit of happiness is actually a right of all people in our Declaration of Independence). What matters when suffering is inevitable is that you keep standing. Like…Rocky Balboa.

And then, in historical terms, no one can deny that Russians have endured a hell of a lot of suffering over the centuries. Suffering isn’t something you can quantify, but most people familiar with Russian history and the histories of western Europe are struck by the sheer ubiquity of suffering in Russia. Mind you, western Europe (and since it was founded the U.S. too) are the real exceptions here, if you’re looking at the whole globe. It’s a false comparison. Nevertheless, Russians have compared themselves to the West since at least the beginning of the 18th century, and so it is that comparison that contributes to Russians’ sense of themselves as a historical people. And that sense of themselves is colored by endless stories of suffering. Where Americans have won nearly all of our wars and only experienced major bloodshed on our soil once (when we fought ourselves), have never experienced foreign invasion, and have never been targeted for takeover and elimination by a foreign power, Russians have experienced all these national tragedies over and over.

A really abbreviated list of just the worst national tragedies and humiliations would include:

  • Devastation and then foreign rule at the hands of the Mongols (1237-1380)
  • Terrible defeat at the hands of the Crimean Tatars (1571)
  • Vicious internal warfare, most notoriously at the hands of Ivan the Terrible (1560s and 70s mostly), plus a big loss in the Livonian War
  • Near takeover by Poles, resolved in the nick of time in 1613 by the election of a new monarch
  • Terrible loss to the Turks (1711)
  • Crushing defeat at Napoleon’s hands (1807)
  • Crimean War (1853-56) — first major, lasting military loss since Russia became a Great Power
  • Russo-Japanese War (1905) — humiliating loss to a tiny peripheral nation that contributed to bringing down the monarchy
  • World War I (1917) — in the middle of revolution, the Russians made a separate peace with Germany on punishing terms
  • Relatively bloodless revolution devolves into destructive Civil War (1918-23)
  • Stalin effectively declares war on his own people (1929-1953) — collectivization, purges
  • The Cold War (1949-1991) — Gorbachev essentially threw in the towel, arguably bringing to an end (for now) Russia’s place as a Great Power in Europe

Okay, those are just the Big Events (and note how many times Russians suffered at the hands of their own government, in addition to their vulnerability to foreign invaders, due largely to the absence of natural defenses).

Here’s another list of just some really BIG ways Russians have suffered as a people:

  • Enserfment of the vast majority of the population (arguably beginning in 1649, arguably ending in 1861 but arguably not really ended until it really went out with a bang with Stalin’s collectivization and industrialization which was a tragedy in itself…but it’s a really long story)
  • A rigid system of hereditary social estates, police surveillance, and passport restrictions that severely limited the life choices of every Russian (developing in bits and pieces over time, but arguably oppressive at least from the 18th century to the present)
  • Economic backwardness — due to a variety of geographical factors as well as the mistakes of a long series of regimes, and “backward” only relative to western Europe: famines recur throughout Russian history, industrial development was very slow, and access to wealth was (and is) restricted to a minuscule portion of the population…more or less from the 13th-century Mongol invasion to the present.

And this is just the r e a l l y big stuff. So let’s go ahead and conclude that when it comes to suffering, Russians know what they’re talking about.

Back to Rocky IV. Now that you know how important the concept of enduring suffering and staying on your feet is to Russians, and their long legacy of economic and technological backwardness, which was certainly still relevant in 1985, look again at Rocky Balboa, running through snow, pushing through the pain, taking cruel punishment, but still standing in the end. Note that Rocky is also decidedly working class — the Soviet Union was founded as a working-class state, and while the falseness of that claim is legendary, the claim was still an important part of the Soviet national myth. And look at Ivan Drago, surrounded by coaches and computers and drugs, using fancy machines to push himself to unprecedented capabilities (isn’t striving and achieving without regard to any old-world notions like social class part of the American myth? Isn’t innovation — especially in technology — also a big part of how we see ourselves?). This is the crazy, astounding thing about Rocky IV:

Rocky is the Russian, and “the Russian” is really the American.

Mind — blown.

 

For further reading: If you’re interested in late Soviet realities, I recommend Stephen Kotkin’s Armageddon Averted: The Soviet Collapse, 1970-2000.

 

*While the USSR was definitely behind on technology, I want to point out that they may well have been ahead on the brain power that is needed to make technology work — Soviet programmers were relatively well-supported and very well-educated, and I’ve read of underground experiments with the early internet in the ’80s, among countless examples of extraordinary intellectual achievements in early Soviet computer science. To this day, Russian programmers tend to lead the world. What they lacked in the ’80s was money, mainly, though there were also bureaucratic, ideological, and infrastructure-related obstacles. A final unrelated note, because I can’t not mention it — did you know the Russians invented Tetris? Remind me to tell you my Tetris joke sometime.


Students: What to Do When You’re Drowning

William Blake, via Wikimedia Commons

1. Get help

If you’re drowning in your schoolwork, the last thing you should do is pretend it isn’t happening or hide. Talk to your professors. Go to the student counseling center. Talk to the dean of students. Make sure someone knows what is going on. This means you can get help if you need it, and your problem will be documented, so that professors might be able to accommodate you.

2. Don’t make the dumb mistakes

A. Something is better than nothing.

If you just never turn in a graded assignment, you get a zero. One zero may mean failing the course, or very close to it. Even if you turn in incomplete gibberish, it may get some points, which is better than zero!

B. Show up to class.

Showing up is by far the easiest thing you can do with the biggest payoff. (This is true throughout life, by the way.) Sitting in class every day means you’ll hear announcements and reminders, you’ll get hints about assignments, and you’ll get at least a passive exposure to the material. If you can’t handle anything else, you can handle this, and once you’ve done it, you may find that the assignments aren’t as hard to handle as you expected. It should go without saying that while in class you should stay awake and keep your mind on the class, not the laptop or smartphone.

C. Don’t be a jerk.

Don’t lie to your professors, don’t brown-nose, don’t whine, and don’t try to manipulate them. They have seen all these tactics before, and whether they call you on it or not, you will have alienated them. Be nice, be respectful, take responsibility for your own behavior. Those are the ways to win real goodwill.

D. Keep in touch.

Don’t just disappear. If you’re unable to come to class or turn in an assignment, tell your prof about it as soon as possible (before the date in question is infinitely better than after!!). Be honest, and take responsibility for your own inability to follow through on the class. It may be that there’s nothing your prof can do (without being unfair to other students). It may be that your prof can find a way to work around your issue, if you’re willing to do your part (such as an alternate assignment, etc). You won’t know which is true until you ask.

3. Survival Tactics

A. Read the syllabus! Frequently!

This is where all the course policies and schedule are spelled out. At the beginning of a course, make sure you have all the required readings and you know where and how to turn in assignments, and what the due dates are.

B. Skim intelligently.

If you’re overwhelmed by the readings, make an effort to figure out how to skim effectively. This is a skill. Just letting your eyes pass over the pages without taking anything in is not what I’m talking about here. Read this guide [link goes to PDF] to reading a book, and apply it to any reading assignment. Look first for clues about the main ideas (title, abstract, introduction, section headings, conclusion). Think about how the subject of the reading connects to the subject of the course, and the topic for the particular day or week for which this reading was assigned. This will tell you what aspects of the reading you should pay most attention to. Make a list of questions—What is the author trying to say? How does this add to what we’re covering in class? What is most interesting, surprising, or confusing about this reading? Then look through the reading for the answers to these questions. If this is all you manage to do, you’re probably still well ahead of the game.

C. Use a calendar.

Set up an early warning system. Google Calendar or any other calendar software will allow you to set up reminder emails or alarms. Go through the syllabus at the beginning of the course and put all the due dates in the calendar. Set alarms for the day you should start working on an assignment (1-2 weeks before the due date, usually), the day when you should have a draft (a few days before the due date), and the last few hours, when you need to proofread, and print or download. You might also look into an online to-do list, like the one built into Google Calendar, or the more complicated one at vitalist.com.

D. Take good notes, be organized.

You need some kind of system to make sure you keep all papers related to a given course in one place, where you won’t lose them. Create a system for your notes, too. Take them in a notebook so you can’t lose pages. Use margins to insert subject headings or comments about the relative importance of a given passage of notes (for example, write in the margin, “for exam!”)

E. Take care of yourself.

Shower. Eat. Sleep. Exercise. Block out a reasonable period of each day to relax (preferably after working), and stick to it.

F. Avoid Wikipedia.

If you don’t know the answer to a question, the last thing you should do is google it or look to Wikipedia. Even assuming these sources will give you accurate information (and they don’t always), the information will be organized for different purposes, with different emphases. Always start with the materials that are required for the course. If course books have an index, start there. Look through headings and sub-headings in the required readings. Look at the topics on the schedule in the syllabus to see where each reading falls, to tell you what it relates to. Look through your notes from class.

G. Plagiarism is never the answer.

Plagiarized papers are never good papers, even if the plagiarism isn’t caught. Students never believe me about this, but it’s true. A good paper reflects (thoughtfully!) the questions and problems that the class covered. A plagiarized paper is almost never a direct answer to the assignment posed (since it came from some other context). Even purchased, custom papers are written by people who were not in the class. Even if they are experts in the field in question (they almost never are), they don’t know what the professor is really looking for, because to find that out you need to be in class. And if you plagiarize from another student in the class, the prof will see both your papers, which makes things rather obvious. To plagiarize well is possible but actually harder than simply doing the assignment in the first place.

Also, the penalty for plagiarism ranges from a zero on the assignment to an F for the course to expulsion from the school. Even with an inflated estimate of how well you’d do if the plagiarism weren’t caught, it is not worth that risk. Turning in a crappy paper may get you, say, 30-40 points out of 100 if you truly don’t know what you’re doing but put in a minimal effort (say, no more effort than it takes to paste random lines out of Wikipedia). That’s better than zero.

4. Failure can be an opportunity.

Failing at something gives you lots of information, which you can use to improve your situation. But you need to examine what happened carefully and honestly in order to take something out of it and turn yourself in a better direction. Failure may tell you that a certain subject is not for you. Nobody is good at everything; this is okay. Failure may tell you that your priorities are not lined up well with what you’re actually doing. Re-evaluate those priorities, and try to act according to them. Failure may also mean your goal is fine, but your methods are flawed. Try new methods.

5. Take a break?

This is often heresy in American educational circles, but if you’re not in a place in your life where you can put real effort into your studies, or if you do not see the value of the classes you’re taking (despite actually trying!), it may be time to take a break. Do some honest self-assessment, and come up with a realistic plan for how to come back, in case you need it. Remember that if you have loans, you’ll have to start paying them back (usually 6 months after leaving school). But if you’re not getting anything out of your classes, then you are wasting your time and money. The world will not stop turning if you don’t finish college four years after graduating from high school. It is possible (though harder!) to come back later. You don’t have to leave forever—try starting with a semester. Talk to an advisor at your college about your options.

At my college we often see students failing out the first time around, and then coming back a few years later, after work or other outside experience. The difference is miraculous – the older students usually have perspective, motivation, maturity, and focus.


Syllabus: History 102, Fall 2112

As a historian, when I’m following current events I almost always think about them as I imagine a historian will do a hundred or two hundred years from now. I can’t help myself, because this is just how I think, but the process also puts an interesting twist on my reading of current events. My affiliation with the study of history is far stronger than my affiliation with any political party, position, or policy. In fact, my view of the world through a historical lens probably determines a lot of my political views. In trying to understand events, I look for patterns, like anyone else, but the kinds of patterns I look for play out over decades and centuries.

1914 postcard depicting 23rd-century Moscow (“Moscow in XXIII Century. Kremlin”), via Wikimedia Commons

Thinking along these lines, I began to imagine what the syllabus might look like for a course on the modern western world (similar to a course I currently teach), when it’s taught a hundred years from now. It was an interesting exercise, not only to try to predict the future, but to think about how future historians might look back on our past and present. It would necessarily be drastically compressed in a survey course like this, so I thought about what aspects of our lifetimes would stand out.

It should go without saying (but perhaps does not) that what follows is not what I want to happen, but what seems possible or even likely given our current trajectories and what I know of how political systems, economics, and societies evolve—that is, that the only thing you can count on is constant change. I very much hope our future is actually much brighter than this. But for that to happen, we’d have to start making much better choices as a society than we’re making right now.

Here’s what I came up with, as a thinking exercise, not a recommendation!

History 102: The Western World in the 19th to 21st centuries
Fall 2112

Week 1: The Invention of Citizenship (1750-1860)
The American and French Revolutions, and the modern British constitutional monarchy. What are the origins of democracy? How was citizenship defined? Who was included in the new democracies, and who was left out? Reactions to the new ideas: reactionaries, Romantics, and revolutionaries.

Week 2: Industrialization and Cultural Revolution (1780-1900)
The origins of modernity, introduction of class warfare, the origins of environmental devastation. The rise of the middle class, decline of aristocracy and the exploitation of workers.

Week 3: Racism and Imperialism (1860-1914)
Public misapprehensions of science, racist ideologies, and the scramble to colonize the globe.

Week 4: The Wars of Ideas: Capitalism, Socialism, and Fascism in the 20th century (1860-1991)
Mass politics, ideological warfare, and state terrorism. A civilization destroys itself. The United States as the only major power left whole.

Week 5: American Dominance (1945-2001)
The expansion of the American Empire around the world. The American nuclear umbrella and the Cold War. Oil and gas at the center of global politics and security.

Week 6: Decline and Fall Part I: European Empires (1945-2008)
Decolonization, and political and economic obsolescence: Europe retreats.

Week 7: The Information Revolution Part I (1950-2050)
Microcomputing to internet to unlimited global connectivity: access to information as a global resource, and the Neoconservative backlash (ignorance as political platform).

Week 8: Decline and Fall Part II: The American Empire (2001-2090)
Deregulation and the destruction of capitalism. Cycles of global economic crashes and the contraction of the American Empire. Great War with Iran triggers American decline relative to the other Great Powers. India emerges as military superpower through technological and organizational innovation.

Week 9: Federalism and Localism (2001-2090)
European micro-economies and micro-democracies, combined with the revival of the EU to regulate trade and security, bring Europe back to political prominence. Late in the period the same model is adopted in parts of the U.S., initiating a partial recovery of prosperity.

Week 10: The Rise of the Third World (2030-2090)
Africa, East Asia, Latin America, and the Middle East adopt the European model of combined federalism and localism and rise to compete with India and Europe as global super-powers. Return of the multi-polar world. The War for Arctic Resources and global climate change make the authoritarian Russian Empire the richest country in the world and the arbiter of global energy supplies, causing political tensions with the democratic regional federations.

Week 11: The Resource Race (2050-2090)
Water shortages, famine, and climate chaos lead to civilizational wars. The collapse of the United States into social-democratic Northern States and neo-fascist Southern States. Collapse of the Russian Empire into a very rich social-democratic North and an authoritarian South.

Week 12: The Information Revolution Part II (2050-2090)
Rising wealth and access gaps between the educated and uneducated (mirroring the late Industrial Revolution, except that access to information rather than economic class origin is the determining factor in wealth and social status). Micro-governments increasingly divided into informed-and-rich versus uninformed-and-poor, leading to violence and the break-up of federalist institutions around the world.

Week 13: Cataclysm (2090-2100)
The Great Demographic Catastrophe and a renewed “dark age.” Mass famines, warfare, and the destruction of world knowledge archives cause a sharp decline in technological development.

Week 14: Renaissance Part II (2100-present)
Reduced global population resolves environmental and resource problems. Now-smaller communities re-organize into renewed micro-economies with balanced resource distribution and equitable access to information.

————————–

Like all histories, this one leads up to the “present” as if everything that ever happened before was headed toward a happy ending on purpose. It’s very common to think not only that all of history is an upward trajectory leading to a superior present, but also that history comes to an “end” with us, and that no further catastrophes will occur on the scale they once did.

One of the greatest challenges of teaching 20th-century European history today is finding ways to make college students understand how people in 1914 could have so stupidly allowed World War I to happen, why everybody in Germany in 1933 didn’t just emigrate, why seemingly “normal” people in every industrialized country in the 1930s thought fascism was a good idea, or why millions of people in Russia between 1917 and 1991 continued to believe in the dream of socialism even while the Soviet government did all the things it claimed to be against.

An important lesson I think you can learn from studying history, actually, is that human beings have an infinite capacity to bury their heads in the sand and do stupid, self-destructive things rather than rationally face the reality in front of them. All of us are doing this all the time, but it’s difficult by definition to catch yourself doing it. Analogies to the past—where people like us were making the same mistakes but we now see clearly how wrong they were—can help wake us up. There are many good reasons to study history, but I think this is one of the most important ones.

What do you think history will say about us 100 years from now? What lies ahead? Please share in the comments!

Posted in History, Random

Why Is Academic Writing So Unpleasant to Read?

Most of us are trying, really we are! Image via Wikimedia Commons.

I’ll be the first to admit that many academic books and articles just aren’t a good read. Sometimes they could be much better written. Sometimes they’re as well-written as they can be, but the subject matter and purposes of the work don’t lend themselves to easy reading. Not everything can — or should — be easy. Either way, knowing some of the reasons why an academic text you may have been assigned to read is so turgid and unpleasant may ease the pain just a bit.

What follows is my short list of common assumptions about academic writing, each followed by my own explanation of why readers get that impression. Important background for this discussion is in my earlier post, What Is Academic Writing?

Academic writing is always boring, dry, formulaic, and unnecessarily complex.

It doesn’t have to be, and academics increasingly agree that it shouldn’t be. But just because something is published doesn’t mean you can rely on its being well written. In the academic world, having something truly new to say – or maybe even just something that more or less fills a gap (or even just having a famous name) – can be enough to get published, despite bad writing.

But original ideas communicated well through effective writing are still the goal.

In many cases the writing (the form) must be simple or plain, because the ideas (the content) are by definition new and complex. The ideas themselves are meant to be the source of excitement. The writing is meant to not get in the way by making these ideas less clear or harder to assimilate. Some readers don’t like this, as a matter of taste (it seems dry or formulaic), but in the academy it is inescapable.

If you’re not excited by the ideas in an academic piece, it may be that that subject is not for you, but it may also be that you don’t yet know enough about it to see why it’s so fascinating, or it may be that the author simply didn’t write clearly or directly enough to ‘let you in’ to ideas that do have inherent interest.

Academics perversely make the simple and obvious seem more complicated than it is, and refuse to recognize what everyone else knows (i.e., common sense).

The whole purpose of academia is for some people to spend time working out the really difficult questions, facing the complexity, and bringing to public attention the hardest and most hidden truths. It’s a dirty job, but someone’s got to do it.

Sometimes, it’s true, the inertia of the academic machine (not to mention the cruel tenure review process) causes common sense to get momentarily lost. But the nature of the endeavor – in which every claim is constantly questioned and judged by one’s peers – is meant to ensure that nonsense doesn’t hold up forever.

If there were no scholars (from undergraduates to the big-name professors) to ask questions and vet the information we use to build bridges, cure diseases, form public policy and define ourselves as a people, where would we be as a society?

Academic writing is a static, unchanging entity, and separate from every other kind of writing.

On the contrary, academic writing often has much in common with many kinds of journalism and other “public” writing, and the lines distinguishing one from another often blur. Moreover, standards of what academic writing ought to look like have changed over time and continue to evolve, constantly taking on influences from trends inside and outside the academy. If you pay attention to the publication dates of what you read, you’ll start noticing patterns — academic work written in the 1960s is different in style and form from that written in the 1980s, or the 2000s.

We might just note here that the teaching of “academic writing” is itself a relatively new phenomenon. In the not-so-distant past, becoming an insider in the academy was an option for only a few, and the fact that one had to learn the rules of how to look like an insider more or less by osmosis ensured that the ranks remained thin. Clear, effective writing was – and in some circles still is! – considered a little risky, for if just anyone could understand what academics were talking about, what would happen to their prestige?! Fortunately, this is one bit of nonsense that is on its way out.

The aim of an academic paper is to quell controversy, to prove that a certain answer is the best answer so effectively that no one will ever disagree about this issue again (and if a paper doesn’t do this, it has failed).

Though many students are taught in high school to treat argument in writing as a kind of battle-to-the-death, this is more a reflection of teachers’ need to force novice writers to find their independent opinions — so they may effectively assert and defend them in writing — than a reflection of how the academy really works or what’s actually expected of your written arguments in college and beyond.

In reality, academics are usually collegial people who respect each other’s research and conclusions, and whose main aims are to refine and expand our collective knowledge. To that end, we value controversy very highly, as a means to open up new questions and identify the gaps in current knowledge. An argument that sets out to definitively prove some absolute solution will – in most cases! – be seen for what it is, the mistake of a novice who has (presumptuously) overstepped the bounds of what can be proven. Most arguments suggest tentative conclusions, expand on conclusions made by others or quibble with aspects of others’ evidence or reasoning, or – in many cases – simply lay out some new, surprising thought or theory so as to deliberately provoke controversy, rather than resolve it.

As an undergraduate, you should (like any other scholar) aim to develop arguments that honestly reflect your reasoned judgment of the evidence. If the evidence leads you to conclude only that more evidence needs to be gathered (which cannot be gathered now, in the scope of the current project), then you may need to either redirect the focus of your project to address a problem where you can conclude something more substantive, or – if the reasons for being unable to make a conclusion are sufficiently surprising or interesting in themselves – you may simply present those reasons as the “evidence” for an open-ended thesis statement.

Academic writing is full of a bunch of meaningless jargon.

Sometimes, yes, it is. But most of the time the jargon is far from meaningless, though it may not contribute much to the clarity of the writing.

Ideally, jargon is used only when necessary, but there are times when it really is necessary. Jargon should be understood not as made-up words people use to sound smarter than they are (though occasionally it is that). Proper jargon is a form of shorthand. A term of jargon always has a very specialized definition, often for a word that is also used in different ways in other contexts, which is part of what makes it so confusing to outsiders.

Jargon by definition is understood largely by insiders, which is probably why it so often seems downright offensive. But, in highly complex conversations taking place amongst a small group of researchers on a given topic, jargon serves to sum up whole complicated parts of the conversation in one word or phrase. It’s a means of efficiently referencing long, drawn-out thought processes that the whole insider group has already been through.

For example, there’s a concept well-known in many social science and humanities circles under the term “orientalism.” Edward Said wrote an entire book to define what he meant by that term, and since then people who want to apply some part of his ideas in other contexts refer to all those interrelated ideas as “orientalism.” If you’ve never read Edward Said’s work or had the term explained to you, you couldn’t possibly know what it’s about. You can’t guess from looking at the word, and a standard dictionary won’t help you. However, this term, like some others, is so well established by now that a good specialized encyclopedia will have it listed. Even a comprehensive general encyclopedia like Wikipedia will give you an explanation, though you should remember that Wikipedia can only ever be a starting point, to orient you. It can’t give you the nuanced and specific background that you really need to understand how a term like orientalism is being used in a given scholarly work—it can only tell you where to begin to look to understand it.

Hopefully, in a reasonably well-written piece of scholarship, jargon terms will be defined somewhere in the text. But this is not true of some terms that are so widely used in so many fields of scholarship that most scholars consider them obvious, like “discourse” or “civil society,” or, increasingly, “orientalism.” If you come across undefined specialized terms like this, the first thing you need to know is not to try to find them in a dictionary. Start with encyclopedias instead, the more specialized the better: find one specializing in the field or discipline you’re reading about. Wikipedia might be a good starting point if you have no idea where else even to look. Then go back to how the term is used in the text you’re working on, and think about its specific application in this context. You can also look to other related readings, or ask your professor, if a given term is obviously important and you can’t figure it out. For better or worse, jargon goes with the territory of academic writing, and you can’t completely avoid it.

Nominalizations

Okay, this isn’t a common accusation leveled at academic writers, but it should be. I learned about this endemic problem as an undergraduate in the Little Red Schoolhouse writing program at the University of Chicago. Once you’re aware of it, you see it everywhere. Unfortunately, I can attest that as an academic writer, being aware of the problem makes it only a little bit easier to address.

Okay, I know you’re asking: what is a nominalization? It’s when a verb is made into a noun. The sentence that should state “the committee members revised the bylaws” is more often written, “the revision of the bylaws was enacted by the committee members.” If you present the latter version of that sentence to an English teacher, that teacher is likely to point to the passive and “empty” verb “was enacted” as a problem. But a more direct way of assessing the problem is to note the nominalization — “revision,” a noun made out of the verb “to revise.” When you turn a verb into a noun, you are often forced to supply some sort of empty verb, often a passive one, to fill the verb-void. Nominalizing a verb also often results in strings of ugly prepositional phrases, like “the revision OF the bylaws BY the committee members.”

So why on earth would anyone change their nice, fat action verbs into awkward nominalizations that force the whole rest of the sentence into unpleasant contortions of logic? There’s a surprisingly, depressingly obvious explanation. When a writer knows her subject really, really well, she tends to think in terms of lists of concepts. But a reader who is NOT familiar with the subject will find it much easier to digest in a totally different form: as stories about who did what to whom and why (that is, grammatically, via substantive nouns with action verbs). The writer deeply embedded in her subject is likely to write in strings of concepts (often in the grammatical form of nominalizations) linked by empty verbs like “to be,” “to have,” “to enact,” etc., and prepositional phrases like “the yadda-yadda of the humdinger of the balderdash of the chupa-chups.”

In the ideal case, the writer revises from strings of nominalized concepts into “stories” (even if abstract ones) structured around substantive nouns and action verbs. But, speaking as someone who has finished revising her first book under ridiculous time constraints and sleep deprivation (“constraint” and “deprivation” are both nominalizations), sometimes there just isn’t enough bloody time to revise as much as we would like.
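If it helps to see the pattern mechanically, here is a minimal sketch in Python (my own illustration, not something from the Little Red Schoolhouse or from Williams): a crude suffix-based heuristic that flags words likely to be nominalizations, so a writer can decide whether to turn them back into verbs. It will miss some nominalizations and flag some false positives; it is only meant to make the pattern visible.

    # Rough heuristic sketch (illustrative only, not a real grammar checker):
    # flag words whose suffixes often signal a verb turned into a noun.
    import re

    NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ance", "ence")

    def flag_nominalizations(sentence):
        """Return the words in a sentence that look like nominalizations."""
        words = re.findall(r"[A-Za-z]+", sentence)
        return [w for w in words if w.lower().endswith(NOMINAL_SUFFIXES)]

    print(flag_nominalizations(
        "The revision of the bylaws was enacted by the committee members."))  # ['revision']
    print(flag_nominalizations("The committee members revised the bylaws."))  # []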

(For those academic writers of any level who could use some help with the nominalization problem and more, I can’t recommend highly enough Joseph Williams’s Style: Toward Clarity and Grace.)

Academics have no sense of humor

Well, okay, I do see where this criticism is coming from. Without debating whether academics themselves have more or less humor than the general population, I will admit that academic writing generally contains little in the way of jokes or whimsy, let alone hilarity. The main reason is probably that we all want to be taken seriously by our colleagues, and many of us live in fear of not getting tenure or promotion (which rests in part on our publications). A second reason is that our subject matter often doesn’t particularly lend itself to humor (you try to make Stalinism or nuclear physics funny, why don’t you, and don’t forget to make an original contribution to the field while you do it!). And still another reason is that, again, our main focus is always clarity, since by definition our subject matter is complex and new.

That said, academic whimsy does exist and you occasionally find it in the wild. In Norman Davies’s God’s Playground: A History of Poland, Vol. II, on page 75 (1982 paperback edition) there’s a whole sentence where nearly every word begins with the letter P:

The proliferating profusion of possible political permutations among the pullulating peoples and parties of the Polish provinces in this period palpably prevented the propagation of permanent pacts between potential partners.

LOL. Okay, let me catch my breath. No, really, that was hilarious, was it not? Admit it, you laughed.

In sum:

There are a lot of reasons why academic prose may not be exactly scintillating. It may actually just be badly written, whether because the writer didn’t consider style important, or because the writer never had training in good writing, which most scholars didn’t systematically get until very recently. Or it may just be about a subject you can’t stand, and this aversion makes it harder for you to follow complex prose. The text may depend on a lot of jargon (necessarily or not). It may have been written with a very tiny audience in mind, of which you are not (yet) a member, so there may be assumptions to which you are not (yet) privy (though you can ask your instructor for help). It may, in rare cases, even be badly written on purpose, to “sound smart.” Figuring out, if possible, which of these is the case in a given instance may help you to wade your way through. Regularly consulting dictionaries and encyclopedias to expand your vocabulary is not only necessary, but part of the point — if you understood everything you read in college, you wouldn’t be challenging yourself, and you wouldn’t be learning, now would you? In any case, none of these reasons is a good excuse for you to write badly yourself, insofar as you can avoid it. Aim higher!

Posted in Profession, Teaching

What is Academic Writing?

This is not what we mean by academic writing. Bundesarchiv B 145 Bild-F001323-0008, Bonn, Münsterschule, via Wikimedia Commons.

An academic essay is best defined by the PURPOSE that distinguishes it from other kinds of non-fiction writing:

It aims to identify and resolve complex problems in relation to ongoing discussions among fellow thinkers about the most difficult or abstract human issues.

In every field there are scholars working to resolve debates and questions of general interest (a “field” of inquiry can be anything from “history” to “the early nineteenth-century cultural history of the Russian gentry”).

As students or scholars, our written work is intended to be a part of such ongoing debates, and our aim is not only to illuminate a very particular problem through analysis of sources and original reasoning, but also to relate that problem to similar ones other scholars are working on, so that we – as a group – may better understand our whole field of inquiry.

The complexity of our subjects requires that our writing be as simple and clear as possible, and the goal of situating our ideas in relation to a wider public discussion requires that we refer to and analyze outside sources (i.e., other writers) as an integral part of our own work.

As such, scholarly essays generally have the following FEATURES in common:

-one main problem or a cluster of related problems is identified and its significance to the field is explained

-original claims and interpretations intended to resolve the main problem are made by the author, and supported by reasoning and evidence

-secondary sources: situate the author’s problem and main claim within a public discussion, and may also serve as support for some claims

-primary sources: support the author’s claims (Note that some kinds of scholarly writing – like book reviews and many undergraduate research papers – refer only to secondary sources)

-analysis of sources, both primary and secondary, to explain, question, and explore how they can support the author’s claims

-definitions of all specialized terms so their nuances can be analyzed in detail, and so terms may be reliably used in the same way by other researchers, or applied or adapted as necessary in new contexts

-style and structure appropriate to the intended audience

-rules of logic, evidence, citation and intellectual property are adhered to according to convention

READERS of an academic essay are assumed to be fellow toilers in the academic endeavor to “let our knowledge grow from more to more and so be human life enriched” (Crescat Scientia, Vita Excolatur, the motto of my alma mater).

In other words, we expect our readers to be looking to our writing for:

(a) information that will enrich or enlighten their own studies and

(b) our original ideas, conclusions or interpretations that will also help to further other studies and general enlightenment.

Readers of academic essays are generally not looking for:

(a) entertainment or aesthetic gratification,

(b) simplified or summarized versions of things they already know,

(c) conclusions or plans of action without the reasoning or evidence that led to them

(d) suspense or delay in finding out what the point is (though these are all valid elements in other kinds of essays, to suit other purposes).

Therefore, the virtues of STYLE AND STRUCTURE most often looked for (though not always achieved!) in academic essays are: clarity, cohesion, and brevity.

We want to find what we’re looking for, understand it, remember it, and apply it in new contexts, as quickly and easily as possible, without losing the inherent complexity of the ideas.

In order to best fulfill these goals, the classic short academic essay has a skeleton that looks something like this:

-Introduction: context, problem, proposed resolution (=thesis, which at this point may be only generally implied or stated in broad terms that will be elaborated later)

-Body: Argument (consisting of claims, evidence, reasoning), also including definitions of terms, background information, and counter-arguments as needed to make the argument clear and accurate

-Conclusion: restatement of problem’s resolution (thesis), and re-contextualization (how does this resolution serve the greater discussion, and where do we go next?)

(The citation and analysis of sources often plays an integral role in all three major parts of an academic essay: sources can be used to contextualize as well as to support the author’s claims. Every reference to a source, whether it is directly quoted, paraphrased, or merely mentioned, must be accompanied by a citation.)

Within this formula, there is enormous room for creativity, experimentation, and even subversion of the formula.

It is important to remember, however, that the formula is what academic readers expect to see. When you give them something different for no good reason (whimsy and rebellion are not good reasons), they will be confused, and your essay will have failed to achieve its goals.

To subvert the formula you must know the formula – that is, the reader’s expectations – so well that you can predict and guide reader responses in your own directions.

Every field or sub-field of academic inquiry has its own conventions, jargon, habits, and expectations. Undergraduates encounter a greater variety of conventions than most other scholars ever have to deal with on a daily basis, and almost all of it will be new to them. This is very difficult, but it helps to concentrate on the basic principles and methods common to all academic writing (as defined by the common purpose described above), with occasional side-tracks into issues of particular interest to historians. When you work in other fields, you need to look for and assimilate the conventions or assumptions peculiar to those fields, and integrate them into the general principles and methods of effective analytical writing you have already mastered.

Finally, it may also be helpful to define an academic essay by WHAT IT IS NOT:

-Writing which aims to entertain or give aesthetic gratification (fiction, poetry, memoirs, or “New Yorker”-style essays) may use entirely different devices to convey meaning (such as imagery, formal complexity, foreshadowing, juxtaposition, etc.), and may emphasize expressionistic or impressionistic understanding over analytical understanding. Structures and formal elements can vary infinitely. (Academic writing relies exclusively on reasoning, logic, and rules of evidence because it must be reliably understood in the same way by every reader.)

-Writing which aims only to convey information (news journalism, some professional reports, textbooks, or technical writing) naturally does not usually include an argument or thesis and has no need to refer to other arguments or theses. Often the most important information is placed right at the start, with other information following in decreasing order of importance.

-Writing which aims to direct future action or justify an action (exhortatory or opinion-based journalism, grant proposals, legal briefs, certain kinds of professional research reports). In these cases, an argument is an integral part of the structure, but the goal is to convince or inspire the reader toward a specific action, rather than to contribute new information or enlightenment for its own sake. Such pieces generally begin and end with a statement of the action desired, and the body consists of evidence or reasoning. They may or may not emphasize a critique of alternative arguments. Depending on the intended reader, they may simplify reasoning or evidence. Such works also differ from academic writing in that they are not necessarily situated as part of any larger discussion (therefore making much less use of outside sources or analysis of sources), and may require different rules of evidence or citation, or no such rules, depending on the intended audience.

-Writing which aims to tell a story based in fact ((auto)biography, memoir, narrative history, summaries of various kinds) generally eschews argument and analysis of sources, and may employ certain literary devices. Organization is usually chronological.

 

Coming soon: Why is academic writing so unpleasant to read?

Posted in Teaching

Rogue Professors

Okay, so you’ve read my posts about managing your expectations in college, taking responsibility for your own behavior, and understanding what grades do and do not mean. And you still think your professor is being unfair.

Ion Theodorescu-Sion, via Wikimedia Commons

Okay, it’s possible your professor is being unfair. It happens. It happens partly because failure happens in every field, everywhere. And in academia a professor’s failure may happen because of the insane constraints imposed on contingent faculty, or the insane workload of full-time faculty, or the incredible pressure of trying to make ends meet on a faculty workload and a low faculty salary (more on that soon). Whatever the cause of an individual faculty member’s failure, let’s remember that it isn’t the tenure system.

Okay, whatever, what do you do when your prof is being unfair?

First, double-check yet again that he or she really is unfair. Re-read the syllabus, and the assignments, and all other course materials, and be honest with yourself about your work.

Okay, still unfair?

Talk to your professor. Most likely, there’s a miscommunication issue, or a simple mistake, at bottom. Typos happen, on assignment sheets and on grades. It’s not totally uncommon, and it can usually be easily remedied.

Eternally Good Advice: Always submit your work electronically as well as in hard copy, if you can. Whether by email or through course software, if you submit your work electronically it is time-stamped, proving that you did it on time. This is a good way of covering your butt in any case of confusion.

Talk to your professor respectfully, honestly and with an open mind. Be fair to yourself and to your professor.

If your professor does not respond to email, give it a week or two and then send a gentle reminder (knowing that faculty inboxes are inundated constantly with demands, most of which have more immediate deadlines than yours).

If, after directly trying to resolve any situation with your professor, you still feel that you are being treated unfairly in a way that will have serious consequences on your final grade, you can refer your complaint to the chair of the professor’s department. Again, be respectful, honest, open-minded, and fair (and if communicating via email, allow 2 weeks for response).

In extreme cases (and this is very rare), if you have a real case and you are stonewalled even by the chair of the relevant department, you can try explaining the case to the dean of students.

There are cases of real unfairness, and in those cases you absolutely should bring it to higher authorities. They really need to know if something seriously wrong is going on. Faculty can and should be held accountable for real incompetence.

But it’s also true that you are a student, and the vast majority of faculty members would not have gotten anywhere near the positions they’re in without many years of incredibly rigorous evaluation and training, so don’t take what they tell you lightly.

And in still other cases, there may be real unfairness going on, and whether or not you can get the department chair or dean to listen to you (and I certainly hope you do), it may not be worth killing yourself over. Ultimately, one grade in one class is not a matter of life and death. Do an honest evaluation of the costs and benefits to yourself of pursuing a case where you believe you have been treated unfairly. In any such case, you should always make sure someone knows what happened (with as much documentation as possible), in case there is a larger pattern at work, but once you’ve done that, it may not be worth what it costs you to pursue the matter further. The best course of action will depend very much on your individual circumstances.

I say this both as a professor who has seen many students upset and indignant over their own complete misunderstanding of basic policies that apply to everyone, and as a former student who was once or twice indignant myself over faculty behavior that felt—and may have been—very unfair. The best course of action really does depend on many factors.

Rogue professors do exist—they do—but they are not as common as your friends will tell you.

Posted in Teaching

Being Original

Many students have the mistaken assumption that having an argument or thesis means they have to prove that some professional academic who wrote a book is wrong about his own specialty (an obviously impossible task for an undergraduate writing a short paper under strict time constraints). Such students often conclude that the expectation of having an argument in every paper is ridiculous, and they give up before they’ve even started writing the paper.

By Baroness Hyde de Neuville, via Wikimedia Commons

No professor (unless they really are crazy, of course) expects you to become an expert in a subject overnight, nor to refute in a short essay ideas that were developed over years by an expert with access to all the original sources.

What they do expect is that you direct your very able and unique mind to the text and ask important, worthwhile questions. You should then explore those questions, and posit some possible answers, based on nothing more than your careful reading of the text and your reasoning.

Every book, no matter how carefully researched or how famous its author, rests on certain assumptions, is limited in scope, and is derived from some finite set of sources. Your job when asked to review or critique a work of scholarship is to examine its assumptions, limits, and use of sources, and from these to understand the goals of the work, and to assess how effectively it met its goals. Then, ask yourself what else could have been done, or should be done next, to further our collective understanding of this subject.

Once you have explored all these ideas, you ought to have come to some sort of conclusions of your own about the value of the work for various purposes, and what remains to be explored. These conclusions should be articulated as your thesis, and you will support this thesis with arguments grounded in the text to illustrate why your reading of it is fair and accurate. A critical review is not the same as a bad review.

A closely related problem that many students have is the idea that, as an author of a paper, a student has to at least pretend to know everything about the subject.

Actually, you really ought not to pretend anything, as an author (unless of course you’re writing fiction). What you should do is research and think about your topic as thoroughly as you can within the scope of a given project, and reflect that reading and thinking accurately on the page. Nothing more, and nothing less.

If comments you receive on your writing suggest to you that you are supposed to “know everything about the subject,” what it probably really means is that you did not do as much reading or as much thinking as the assignment required, or that the reading and thinking you did do somehow did not make its way onto the page. Look at your syllabus again, and/or the assignment sheet. Did you carefully read everything that was required for the assignment? Did you do everything the assignment asked of you?

In almost every case, when a student throws up her hands and says the professor expects too much, the problem is not that the student failed to write a truly original, publishable paper. Such a paper was never expected. What is most likely is that the student simply failed to read the course materials and requirements carefully, and that is a perfectly reasonable expectation.

Posted in Teaching

What is Tenure?

If you don’t like tenure, you might be a fan of this guy. Klemens von Metternich, portrait by anonymous (c. 1835-40), via Wikimedia Commons.

 Many people think tenure means job security. That it means that educators, unlike everyone else, can’t be fired.

This is nonsense.

Tenure does not equal job security. It does not exist in order to protect the jobs of teachers.

I could say this a thousand times, and still many people in this country would refuse to believe me, even though what I say is undeniably true here on planet reality.

That is because many people are listening to politicians who lie.

The same people tend to be cynical about politicians, but nevertheless, they choose to believe this particular lie.

It’s sometimes comforting, when times are hard, to identify someone who seems to have it better, and to hate that person.

The thing is, the people identified as scapegoats in these situations (historically speaking) tend to be people who do not, in fact, “have it better.”

So it is with teachers.

This is why tenure exists.

That link goes to a historical document, known as the Carlsbad Decrees, dating to 1819. It represents the true reason that tenure exists, and it also explains the purpose of tenure, but you may need some context to understand why.

In 1819 Europe had recently experienced some revolutionary movements. European revolutionaries at this time wanted the same basic rights of citizenship that Americans now take for granted as defining who we are as a people: freedom of speech, freedom of the press, and the right to vote for a government that is made up of representatives of the people, not of kings. In the first half of the nineteenth century, these ideas were still scary and radical in Europe. The monarchs who sat on thrones across most of Europe at that time did not want to acknowledge such rights. And many rich, powerful, landed aristocracies sure as heck didn’t want to extend voting rights to a bunch of uneducated, not-terribly-hygienic “masses.”

By “masses,” they meant my ancestors, and most likely yours.

In this climate, in the various German-speaking provinces of Europe (some of which were independent tiny principalities at this time, some of which were part of an enormous Empire ruled by Austria but made up of many peoples, from German speakers to Poles to Hungarians to Muslim Serbs), some people liked the ideas that the French and Americans were so excited about, that people have “natural” rights. But these people were ruled either directly or indirectly by an Emperor, and their Emperor was in his turn ruled by a powerful minister, Klemens von Metternich.

Metternich thought social classes (that is, ranks in society that were determined by birth: aristocracy, middle classes, working classes, peasants) were ordained by God and should not be meddled with. People who were not born to wealth and social rank should not vote because, Metternich thought, God said so. It was the natural order of things, and upsetting that order would lead to chaos. Also, Metternich himself was born to wealth and social rank (pure coincidence, I’m sure), and he liked that, and didn’t want anyone else horning in on his privileges.

Metternich, in other words, was the embodiment of everything the American Revolution fought against.

Metternich was the man behind the Carlsbad Decrees. He forced the German Confederation (a loose group of German-speaking states that Metternich dominated) to all agree to sign this document.

What does the document say?

You can, and should, read it yourself. Here’s the quick version: it is based on an assumption that universities are a hotbed of revolution (of ideas like those that founded the American Republic, in other words). Students are young and silly and get persuaded by their over-educated professors to think wild ideas. Sound familiar? It’s something we’re hearing in the news in the USA right now, in 2012. But the “wild ideas” that Metternich was so terrified of were the same ideas that ALL Americans, liberal or conservative, now hold dear: that freedom of the press, freedom of speech, and the right to vote for a representative government are the best way to go. Metternich was terrified of students learning these ideas from their professors at university. So, in the Carlsbad Decrees, he made it the law in all the signing German-speaking states that universities be watched over by a government appointee whom Metternich selected. Students would not be allowed to meet in groups. Any professor caught saying things in class that Metternich didn’t agree with would be fired.

Sound familiar? Yeah, it’s totally the plot of Harry Potter and the Order of the Phoenix. Yeah, that’s probably not a coincidence. J.K. Rowling is an educated lady.

Tenure was created because of the Carlsbad Decrees, and other laws like them in Europe in the decades following the French Revolution. The main idea of tenure is that professors should not be fired for disagreeing with a prevailing political view.

Professors can be fired for other things, like not doing their jobs. They can be, and are, fired for not showing up to teach, or for not being qualified to teach. Probably not as often as they should be, but can you honestly say that everyone in your field of work is fired as soon as anyone realizes they’re not terrific at their job? Of course you can’t. Incompetent people exist in every profession.

Tenure does not technically prevent anyone from being fired for incompetence, and protecting such people certainly isn’t its purpose. It does prevent people from getting fired for saying something that others disagree with. The tricky bit is that the line between these two things can often be grey and is almost always contentious, but it’s a VERY important line.

Why? Because the nature of education (when correctly understood as a process of exploring and learning about the world, not the way Metternich understood it as a process of making everyone think just like he did), is that professors MUST discuss ideas that not everyone will agree with. Students are not forced to agree. But they are forced to be exposed to ideas they may not agree with. This is the very definition of education.

And if a student is secure in his or her beliefs, there is nothing dangerous about this process, and much that is beneficial.

Also, not all teachers have tenure. In order to get tenure, you have to go through a process. This process varies from place to place and from level to level of teaching, but no matter where you look, that process is difficult, and more intense, I argue, than any review anyone undergoes in any other profession as a contractual part of employment.

Whoa. Think about that for a moment. No one else, in any other profession, has as part of their employment contract the requirement to go through a process of scrutiny this intense. It comes after all the scrutiny required to get the degrees you need even to apply for the job (for university professors, the highest and most difficult degree you can get), and after the job application process itself. Tenure review is in addition to all that.

Usually, at the university level, the tenure process involves at least the following:

  • Recommendations from one’s peers
  • Recommendations from one’s students
  • Recommendations from one’s colleagues outside one’s own institution
  • Examples of one’s original research from prestigious, peer-reviewed presses (in my field, usually a book and at least a couple of articles)
  • Examples of one’s teaching pedagogy, through syllabi, assignments, examples of written feedback, written explanations of one’s “teaching philosophy,” etc.
  • Evidence of one’s ability to compete successfully for outside funding
  • Evidence of a substantial research plan for the future
  • Observations of one’s teaching provided by peers in the profession
  • Evidence of one’s service to the institution where one works
  • Evidence of one’s service to one’s discipline, or the profession as a whole

Do you have to do all this to keep your job after 5-7 years? Does anyone have to do this outside of education? It is unique. I’d argue that educators are more closely vetted than members of any other profession on earth. (Okay, except maybe spies.) We’re also uniquely underpaid among professions that require comparable levels of education, and especially among those that require fairly extensive ongoing training and adherence to ethical standards, like law and medicine—an interesting fact in itself, but one for another post.

But we can still be fired, after all this, in cases of demonstrable professional misconduct where academic freedom is not a complicating issue. So tenure is not job security.

What we can’t be fired for is saying something that our bosses disagree with.

Now, that is also different from most professions.

In many corporations, or hospitals, or law firms, you can be fired if you step up and say to clients or patients that they are being cheated by the institution to which they are paying money for services, for example. (This is to the vast disadvantage of clients and patients, if they really are being cheated, by the way.)

But universities are different. Because our job is to teach young people, we have to be able to be completely honest with them.

The students, for their part, have the right (and for heaven’s sake the DUTY!!!!) to think for themselves about what they hear from their professors. Any prof worth their salt actively encourages this. Some of us jump up and down and wave our arms, literally begging students to question what we say. Teaching students to question what we say is our whole reason for existing in this profession, and most of us feel pretty strongly about it, or we wouldn’t sacrifice so much to go into such a benighted and underpaid profession in the first place.

Tenure protects our right to say whatever we see and understand to be necessary (and remember, “we” are selected according to a uniquely rigorous process that takes five to seven years, after five to ten years of post-graduate training), in order to expose students to all possible points of view, so that students can choose for themselves what to think.

Tenure does not protect our jobs. It protects students’ right to think for themselves.

Tenure was created to protect the right to think such “seditious” ideas as the United States was founded on.

There is nothing in the world more patriotic than the institution of tenure. Every day, tenure protects our republic from people who want to bring back Metternich.

Anyone who tells you different is either lying to you, or too ignorant to be worth listening to on this matter.

If the person saying these things is lying, it is a good idea to imitate the best kind of college student and ask why.

Food for thought on this topic from the Chronicle of Higher Education.

Food for thought from the always great podcast of the Colonial Williamsburg museum: Thomas Jefferson’s ideas about education

Update: more food for thought: How the American University was Killed in 5 Easy Steps

A final word: I know you can name someone who has tenure and should be fired, but isn’t being fired. In those cases, the solution is to look into (a) the tenure review process — if people are getting through who shouldn’t, then the process at a given place may need to be revised; and (b) the real reason the person in question isn’t being fired. What may look to you like a clear case of incompetence may actually be a greyer area of differing views on effective teaching. If it IS a clear case of incompetence, there are other factors that come into play besides tenure: those whose responsibility it is to fire someone in this situation may not see it as worth their while. Just one of many reasons they may not fire someone is fear of a discrimination lawsuit, or union blowback. However you may feel about the validity of discrimination lawsuits or unions, you should separate those issues from tenure. Not. The. Same.

Posted in History, Profession, Teaching

What is a Historian?

Herodotus, one of the first historians. Why is it that so many historians still look like this? Strange. Image via Wikimedia Commons.

When I was in middle school, we had an assignment to research a profession we were interested in pursuing.

In order to find such a profession, we were first asked what we were interested in, and what we were good at.

For me that was easy. Even though I had never had a history class in school, I knew I loved history and was good at it. I loved everything old. I read every book that crossed my path that had to do with the past. There wasn’t a historical museum or monument that didn’t fill me with awe. I understood the past in a way I understood nothing else.

So, when I was directed to a big, multi-volume reference book about all the professions the world had to offer, I looked first for “history.”

I found an entry for “historian.” But the description that followed wasn’t at all what I expected (even though I didn’t know what to expect). There seemed to be two definitions of a historian: one was a person who taught, and the other was a person who investigated family trees (“see: genealogist”).

I knew I didn’t want to teach, because my dad was a teacher (I had to do something different, you see) and because I hated school (I loved learning, mind you, but school in my experience at that time had nothing to do with learning). So that was out.

As for the second part of the entry under “historian”? Blaech. I don’t care about other people’s family trees. That’s not what I liked about history. I was interested in big questions of how people behave and why, not in lists of names and dates.

I looked for further cross-references, and found “archaeologist.”

I fulfilled the assignment using “archaeologist” as my chosen profession, even though I didn’t really understand what this meant, other than that it seemed to involve digging things up from the past. Close enough. As part of the assignment, I had to interview a practitioner in my chosen field, so I found a real, live archaeologist at the local college who was willing to answer a few questions by letter (this was the Dark Ages before email). It turned out, this archaeologist told me, you had to study a lot of science in order to be an archaeologist.

Well, damn. I sucked at science and kind of hated it, too.

I did the assignment, but then I kind of forgot all about being a historian, because it seemed like there wasn’t really a profession that matched anything I actually wanted to do.

As it turns out, there absolutely was (and I can’t account for why it wasn’t stated explicitly in that reference book in my middle school library, except that my experience since then has shown me that remarkably few people have any idea what a historian actually is). I didn’t learn that until halfway through college, though, and I had to first get over the idea that I didn’t want to teach.

What an academic historian really does is half teaching, half research.

The teaching half is pretty obvious, since that’s the part that much of the public actually sees.

The other half was the mystery omitted from that reference book in my middle school library.

So, what does academic historical research actually look like?*

A lot of it looks like hunching over very old books and papers in a series of obscure archives. I told you I love all things that are old!

Archives are places that conserve old things of historical interest—mainly papers. Archives exist all around the world. They usually specialize in conserving unpublished materials (unlike libraries), so they tend to be full of people’s old diaries and letters, but also tax records, legal records, legislative records, and so on. Reading materials in an archive is a lot like snooping, to be honest, but you’re snooping into the lives of people who are long since dead, so you don’t have to feel too guilty.

Yeah, it’s totally awesome.

Most archives are not directly open to the general public (again, unlike libraries), because their materials are unique (since they’re unpublished, they’re usually the only copy in existence) and often delicate (because they’re old). So they can’t let just anyone paw through the collections. Sometimes, if you’re investigating your own personal history, you can get into an archive to search records with a professional archivist, who will help you select the right materials and understand what they have to tell you. (If you’ve seen the BBC TV show, “Who Do You Think You Are?” where celebrities trace their family trees, you’ve seen them helped out by archivists in this way.)

But professional historians get a different kind of access, usually through their affiliation with a university. Historians usually get to order documents more or less at will, and read them on their own (though we are often required, for example, to wear gloves so that the oils in our fingers don’t do damage to old paper, and often we can’t bring in our own pens or sometimes even laptops).

I love the smell of the old paper. I love seeing and touching something very few people have ever seen or touched. I love reading a diary from 200 years ago and seeing the ink blots, the hesitations over a word, the places where the handwriting got hurried. Even the occasional centuries-old squashed bug or water stain.

I once touched the original signature of Catherine the Great when a document was given to me by accident. I admit to being totally thrilled by this experience.

One of my favorite archival experiences so far happened in a tiny local museum in Shuia, Russia. I was studying the Chikhachev family, and I had already read thousands of pages of their letters, diaries, and other documents in the central archive for the region. I went to the tiny town of Shuia because I was told they had some books that were the sole known remains of a library founded by Andrei Ivanovich Chikhachev (he had founded the first public library in the province). The curators at the Shuia museum were incredibly kind and welcoming to me — researchers didn’t come there often, let alone a researcher from half a world away. They showed me the books. I looked at a shelf of bound copies of a periodical in which I knew Andrei often published articles, covering a number of years in which he was most active. I thought, “hm, this could be useful, I’ll be able to make sure I get all his articles from this period.” Then I started paging through, and realized these were the issues of the newspaper that Andrei and his family originally received when they were first published, which they later had bound up and then donated to the library. I saw Andrei’s handwriting in the margins, marking his articles with an excited “mine!” or just “!” and scrawling alongside other people’s articles, “exactly right!” or “I completely agree!”

The excitement of archival work comes in not knowing what you’ll find until you find it, and in reaching across time to share a moment with someone who casually wrote something down one day in, say, 1835, in, say, his study in a tiny village in central Russia, never in a million years imagining that an American historian of the 21st century would later try to mine it for clues to every aspect of the writer’s life.

It’s the closest anybody will ever get to time travel.

So, what historians do, a lot of the time, is sit for many hours, day after day, in cold archive reading rooms (they’re often kept cold on purpose because it’s better for the documents, not for the researchers!). We read other people’s diaries and letters, we read legal cases and transcripts of legislative sessions, and often we spend days or weeks or months reading much less interesting things like inventories and phone books and land registries. From all this material we work to reconstruct how life worked in the past, or how individual people lived.

At its best, this process is absolutely as wonderful and exciting as solving a murder mystery by piecing together a series of strange, quaint clues. At its worst, this process is an exhausting and pointless effort to find a needle in a haystack.

But the archives are just the starting point. Once we’ve done our primary research—reading original documents from the time period we’re studying—we need to start writing, to put together why these old documents matter, and what they have to tell us today.

In order to not repeat work that has been done before us, we read basically everything anyone else has ever said on our subject (this is secondary research), and frame our own new findings in reference to these other works.

In the end, we write up new facts and interpretations about the past, framed in terms of how our new information relates to what was already known.

Like the archival research, secondary research and writing are incredibly exciting and incredibly boring at the same time. Being at the forefront of creating new knowledge is exciting. Being creative, thinking through new problems, is fun for those of us who go in for that sort of thing. On the other hand, the daily slog of trudging through pages and pages of boring stuff, the slog of piecing it all together (it’s much like completing a 2,000,000-piece puzzle), the daily stress of keeping track of everything, and the excruciating slog of checking your work against that of other historians and finishing off every footnote are often boring beyond words.

If you love history, you might be a historian at heart, but that really depends on two things. First, it depends on your doggedness: your willingness to continue even when the clues are incredibly few and far between, your fingers are freezing, and you haven’t eaten in hours because the archive is only open 5 hours a day so you can’t spare the time for a lunch break. Second, it depends on your tolerance for the tedium of taking careful, accurate notes of every finding, of citing every reference, and of putting it all together in a way that answers new questions (but does not necessarily add up to a satisfying narrative, as it does in historical fiction and popular history).

If you love history, it’s more likely that you love it passively—that you love to watch the History Channel (or did, before it inexplicably became the Aliens Channel), that you love to read books written by historians, or maybe just historical fiction and popular history. That’s wonderful! You’re the audience for historians, and we need you. If you’re interested in teaching it at the K-12 level or working in a museum, you might be the perfect kind of person to help kids and the general public see why history is awesome. The world needs more people like that.

But there aren’t very many people in the world who will really want to be academic historians. (According to the American Historical Association, there are about 10,000 academic historians in the world.) Like every other job, it is hard work, and a lot of it can be downright unpleasant.

It also requires skills of reading, writing, and interpretation that can only be acquired through many years of training and experience (have you recently tried reading early nineteenth-century handwriting in Russian? And can you follow the archaic usages that aren’t found in most dictionaries? And once you’ve done that, can you figure out what matters in what you’ve read, and synthesize it with the several thousand books written on related subjects, but not quite on this subject? It really does take a lot of training).

Most of all, an academic historian is driven to not just consume history but to actively add to it by wading into untouched primary sources and building an original argument about how and why they matter. There’s a lot of creativity involved in that part, and that can’t be forced.

I love being a historian, because I love the aesthetics of old stuff — relics of another time. I love nosing into people’s private documents. I love reading history books, I don’t care how many. I love traveling to strange places and finding my way around without guidance. I love the feeling of not knowing what I’m doing, and being forced to figure it out on my own. I can’t help coming up with my own (re)interpretations — I did this while reading history books as a child, I can’t help myself. Those are the things that make an academic historian.

We are, in short, the sort of people you probably never want to see a historical film with.

 

* UPDATE: There are of course other kinds of historical research, but I don’t know much about them, which is why I didn’t talk about them in this post. People do historical research in connection with museums and historical sites, documentary filmmaking, on behalf of various institutions, etc. There are also other ways of working with primary historical documents — archivists and librarians, for example, focus more on how to catalog and maintain access to materials, how to conserve them, and increasingly how to digitize them. For many more stories about how people “do” history in real life, I’m very excited to link to a new web series produced by the American Historical Association: What I Do: Historians Talk about Their Work. You might also be interested in their series of short text interviews with AHA members: Member Spotlight.

 


Why I Hate Grading

By The Tango! Desktop Project, via Wikimedia Commons

When it’s time to grade papers, I suddenly go into housecleaning frenzies. I start preparing next semester’s courses. I finally get around to reading the most obscure and boring articles on my research reading list. I actually clear out my email inbox. I do things like write blog posts.

I would rather lick the bottom of a New York subway car than grade papers.

Why is grading so awful?

It certainly isn’t because my students or their work bore or annoy me. Even my worst student ever on my worst day is not more boring or annoying than housework, let alone as repellent as a subway car.

I think it’s the disappointment. Grading involves layers and layers of disappointed expectations.

When a course begins, I always feel hopeful and excited about my students. I enjoy getting to know them, I enjoy their comments and questions in class. They almost always seem (mostly) engaged in the course material, and they’re by definition a bunch of bright young things—who doesn’t enjoy hanging out with a bunch of bright young things, talking about Important Stuff?

But then I get the first stack of papers, and I have to come to a bunch of disappointing realizations:

1. My course is not their first priority. Students have many competing demands on their time, and even the best and brightest rarely have time to devote their all to any given assignment, so reading papers is an exercise in seeing a bunch of bright young things not quite living up to their potential. Before grading, they are nothing but vessels of potential. But in the process of grading, mundane reality hits me full in the face: nobody is perfect, and extraordinary performances are extraordinary because you don’t see them often.

2. I may have been fooling myself a bit about how engaged they really are in my course. Not everybody loves my subject like I do, and some people positively hate it. While most students are mostly polite about this when we’re face-to-face, indifference to the subject, or outright aversion, always comes through in the writing.

Between the factors described in point 1 and point 2, I often come to the painful realization that many of the papers were probably written in less time than it takes me to evaluate them.

3. Some students work really, really hard for sadly little payoff. Mostly it’s because they’re beavering away in the wrong direction; sometimes it’s because this isn’t their subject and, much as they want to succeed, they can’t see the forest for the trees. Seeing this come through in the writing is even more painful than seeing the work of students who just don’t give a damn.

4. Even though I work very hard designing my course to meet their needs as well as possible, it never really meets their needs to the degree that I want it to. In trying to balance (a) how much can be covered in the time allotted and how much needs to be covered to meet the expectations of the department, (b) meeting the needs of students who are increasingly ill-prepared for college when they get here with the need to maintain high standards of academic rigor, and (c) meeting the needs of students who range incredibly widely in background, skills, and interest levels, there will always be a degree to which the balance cannot be struck. When you’re in the classroom or planning for the course, you’re actively working to fill these gaps and maintain the balance. There’s a certain amount of satisfaction in that effort. But when you’re grading papers, you’re confronting the degree to which you failed in that task. It’s always sobering, and often positively devastating.

In short, when you’re grading, you’re finding out exactly how much of what you said and did as a teacher made it through into the students’ heads and back out again in writing.

You will inevitably find yourself reading misquotations of your own words, put out of context and misapplied.

You may find out that most of the students only did a fraction of the readings. Some of them did none at all.

You learn that even if you state a basic instruction (such as: “turn in the paper BOTH in hardcopy and on the course management software”) several times in several venues (on the syllabus, on the assignment sheet, and out loud in class, with key words in all-caps), a certain astoundingly large percentage of the class will still disregard these instructions (leaving you to manually upload dozens of papers and wait for the plagiarism checker to work, while hunting down all the missing papers, which could be anywhere — mailbox, email inbox, main office, randomly dropped on a table somewhere in the department…the process can take hours).

So, you spend hours and hours of your time in this discouraging endeavor of grading, so that your evaluation and feedback will, hopefully, help the students to do better next time.

Only to hand back the papers and watch the students glance at the letter grade and then stuff the paper away, or even straight into the trash can. You spend the next week or so fielding complaints from students who all too obviously didn’t do the reading or show up to class, but who are still angry at you for failing to give them the terrific grade they feel entitled to get, according to the prevalent misapprehension that one receives good grades in return for paying tuition, rather than earning them by demonstrating specific knowledge and skills.

And when all this is done, another stack of papers arrives and you do it all over again, except that it hurts a bit more the next time because each subsequent stack of papers demonstrates that all the work you put into feedback on the previous stack of papers got mostly ignored. Again.

This process is so miserable that most of us would probably run away screaming from the entire profession because of it…except for one thing.

There is one thing that makes it all worthwhile in the end (though no easier to face when you begin). In each stack of papers there will be a few papers—you never know how many but there’s nearly always one and in some delightful cases there are quite a few—that were written carefully, thoughtfully, and with a passion for learning (not for getting good grades).

Some students take intellectual risks in their papers, and that’s a beautiful thing even when it doesn’t fully pay off. Some students go above and beyond the requirements because they really want to understand. Some students put such creativity and old-fashioned sweat into their work that they achieve fascinating, unexpected insights. Some students simply seem to be really enjoying themselves and the course. Some students don’t do anything extraordinary, except that they actually take what you’re offering to heart and show significant improvement from one assignment to the next. Those are maybe the most gratifying papers of all.

Because of these few students, we all keep at it. But that doesn’t make staring down a fresh stack of papers any easier. On that note, I think I have some dishes that need washing….


Billable Hours

How does an academic spend her time?

This part, that everybody sees, is a tiny, tiny part of what we actually do.
Image by Aldeasycampamentos, via Wikimedia Commons

Mostly out of your sight, which is why so few people actually understand the nature of academic work. What people see is our classroom teaching, and maybe our “office hours,” designated times when we meet with students. These hours don’t seem to add up to very much. If the average college class meets about 3 hours a week, and an average load for a full-time professor is three classes per semester, that’s nine hours a week. The same prof may have 2 hours of office hours each week, so 11 hours.

11 hours a week? Plus semester breaks and the summer off? Hey, profs have the easiest schedule known to man! Call the media! It’s a scandal that these people get paid at all! Actually, don’t bother to call the media, because they’ve already been called.

Also, this is an utterly false assumption.

Wait, okay, you know of course that profs also have to plan those courses and grade the papers. So, for each weekly 3 hours in the classroom, the prof has to prepare by reading what the students are assigned, composing the lecture and handouts or preparing an agenda for discussion or organizing exercises and other in-class activities.

The standard recommendation for students is to spend 2-3 hours studying for every 1 hour in the classroom, and professors are not only doing what the students are doing (the same reading, plus composing the assignments and fielding all the endless questions and problems coming from students), but they are also actually creating the content for what will be said in the classroom. But let’s be conservative and say that for the prof it’s 3 hours of prep for each hour of classroom time (it’s much more than that when you’re teaching a course for the first time, but we’re using averages here). That’s an extra 9 hours per week per class, and still assuming a 3-course load, that’s 27 hours per week there.

Then, students write papers and take tests that need to be graded. Assuming a class size of 55 with no grading assistants (which is about average, though this, like everything else, varies widely from school to school and professor to professor), when a student writes a 3-page essay, a professor needs to not just read but closely analyze and evaluate 165 pages. Then she needs to write 55 sets of comments, and record 55 grades (the recording of grades may have been simple once, but the expectation now is that all grades will be posted online on course management software, which anyone will tell you is ornery and unreliable, so you record grades online, laboriously, hoping they get saved properly, then you record them again on paper or in Excel for a backup, and no, you can’t just upload from Excel to course management software, because it will screw up all your data, of course). So, for each written assignment you have about 12-20 hours of work, depending on how efficient you are. The average course in my department involves, say, two exams and three short (or two longer) papers. Average that out over the course of a semester (15 weeks) and you get a very rough estimate of about 6 hours more per week per course. Multiplied by three courses, that’s 18 more hours per week.

So, all told, as a rough estimate, we’re actually talking about 56 hours per week for teaching.
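
For those who like to see the arithmetic laid out in one place, here is a minimal back-of-the-envelope sketch in Python, purely as an illustration; every number in it is one of the rough averages given above, not a universal figure.

```python
# Rough weekly teaching-hours estimate, using the averages described above.
# None of these numbers are universal; they are this post's own rough figures.

classes = 3                     # typical full-time course load per semester
classroom_hours_per_class = 3   # weekly meeting time per course
office_hours = 2                # weekly office hours

prep_hours_per_classroom_hour = 3       # conservative prep estimate
grading_hours_per_week_per_course = 6   # exams and papers averaged over a 15-week semester

classroom = classes * classroom_hours_per_class            # 9 hours
prep = classroom * prep_hours_per_classroom_hour           # 27 hours
grading = classes * grading_hours_per_week_per_course      # 18 hours

visible = classroom + office_hours                         # the 11 hours people actually see
total = visible + prep + grading                           # about 56 hours

print(f"Visible hours per week: {visible}")
print(f"Rough total teaching hours per week: {total}")
```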

Whoa, you say. Professors only have to write their lectures from scratch the first time, and then they teach the same course every semester for decades, and they’ve probably read the course readings a million times and don’t have to keep doing it, and everybody knows professors assign grades at random, so this is all way off.

There’s some truth in this (except for that last part). With experience you can be much more efficient than what I’ve laid out above, and thank goodness, because otherwise this model wouldn’t be remotely sustainable. Mind you, professors in mid- and late-career still develop new courses, and even with fully developed courses there is intense preparation to adapt the course to current students and other circumstances, to stay fresh, to incorporate new material in order to stay up to date, and to remind yourself what you did last time, since many of us don’t actually teach the same course over and over every semester. But it does get easier. That said, in the early-career stage it’s also much HARDER than I’ve described, when all of these things need to happen at once and all of it is new (and all the other pressures I’m about to talk about are also more intense, and you’re making less money, and if you choose to have a family at all you are probably also right in the prime child-rearing phase!).

Of course some people—in any profession!—don’t take their responsibilities seriously. But that shouldn’t define this profession more than it does any other, plus evaluations of our teaching are taken into account in tenure and promotion decisions, and contrary to popular belief, we CAN be fired for bad teaching (it’s complicated, but we can).

So let’s average it out and say about 45-50 hours a week for teaching. That’s more than full-time hours.

But, you say, you get all those vacations!

And I’ll tell you that teaching is only about 40-45% of my job description.

What’s the rest? It’s 45-50% research, and 5-10% service to the department, the university, and the profession. (These proportions vary according to the institution and the scholar’s career stage, but all three portions are always present, and in those places where teaching is a significantly higher percentage of the job description, course loads are also comparably higher, so it more or less evens out for our purposes here.)

So how much time does research take up?

Well. That’s much harder to estimate. The real answer is: as much as you can give it. Every waking hour, and many when you should be sleeping, are truly meant to be consumed by your research agenda.

Academia is a profession in the traditional nineteenth-century sense of a profession as a vocation, as a part of your identity, and arguably as a form of work that actually consumes your identity. A doctor is always a doctor, whether she is on call or not, and a lawyer is always a lawyer. This is why I get to ask my uncle, an orthopedic surgeon, about my back problems when we run into each other at family gatherings (poor man). It’s also why you can ask me any time you see me about what really happened to Grand Duchess Anastasia, and I’ll tell you (she died with her family, and all the impostors were simply impostors—sorry to disappoint!).

This situation is very different from when I worked 12-hour night shifts at a Jeep Grand Cherokee factory in Holland, Michigan. When I did that, I had no idea how the factory worked, and I did not know how the door panels I worked on were actually made (my job was just pushing a button on a molding machine every time the light went green, and then trimming off excess vinyl—a simple job I was so bad at that I nearly cut my finger off once). I knew nothing about what I was doing, and I cared even less. I showed up when I was told to and I followed orders so that they would pay me, and when I wasn’t on duty, I did my best to pretend the place didn’t exist. That was a job. Academia is a profession.

Being a professional means that if you’re on a deadline and the work is taking too long, you keep working until it gets done. You’re paid a salary instead of an hourly wage because you work until you finish what needs to be done, instead of working a set number of hours and then stopping.

So, what needs to be done, for an academic? Everything. Our job is to understand the world better, so literally the project is infinite. There’s always more research.

Okay, but what’s realistically expected? Again, there is no end point, no “finished,” built into this part of our job descriptions. The more research we do, the better. If we do significantly less than our colleagues, we risk being fired (again, yes, we can be fired). But you never know exactly how much is “enough” to avoid losing your job, because that boundary is constantly shifting. So you do as much as you can. Often there are deadlines: you need to meet a publisher’s deadline and/or you need finished books and articles and conference papers on your cv (an academic resume) for every annual review (yes, we have annual reviews of our work).

But all these constraints aren’t actually what drives most of us to work constantly. The thing is, getting to the point of having a research job in academia involves so much time, effort, and sacrifice (for so little ultimate reward, at least financially) that it’s a rare person who gets this far without actually liking and wanting to do research. Most of us are driven in life by wanting to know. So most of us work constantly because we’re driven to work constantly.

So if it’s fun, is it not work? Well, first, it’s work because it produces something the world needs (original knowledge). Second, an academic’s definition of fun might be a little strange to an outsider. Most of us are driven to know, but few people on earth are driven to spend countless hours peering at faint, tiny text or glowing screens, few are driven to write, under intense pressure, creatively and clearly about abstract and obscure concepts that no one else has written about before, and few of us are driven to do all this knowing that doing it will result in very little remuneration or praise, while not doing it will certainly result in censure and joblessness. So it is work. The motivation to do it may primarily be the joy of learning, but the actual doing of it is a lot of bloody hard work.

Plus, we miss out on a lot when we spend all our time working. Those rare hours spent on hobbies or watching a movie or hanging out with loved ones are almost always suffused with guilt because we’re not working. Many of us recognize that this situation is unhealthy, and may ultimately hinder our ability to be creative and insightful in our research, but finding balance is a struggle in itself: not only hard work, but work that the system (by which I mean the universities that pay us and may fire us) for the most part strongly discourages.

So, research takes up as much time as there is. It eats up every moment of those so-called “vacations” (most academics can’t afford to travel anyway–what travel we may appear to do is almost always actually work, because we sometimes must travel for conferences or field research).

And what about service? Service means “voluntarily” doing much of the administrative work that makes universities run. Committees of faculty members create curricula, decide on admissions, awards, and other opportunities for students, decide on tenure and promotion of colleagues, manage outreach between the institution and the surrounding community, and so on. As individual faculty members we also advise students, direct their independent studies, supervise their internships, and organize departmental events, and so on.

And then there’s a whole other level of service known as service to the profession. This generally means organizing and participating in conferences (which is how academics share their ongoing research, making the connections that help us further and disseminate knowledge), reviewing books and articles in progress (this is peer review, a process through which new knowledge is vetted), sitting on committees to decide fellowships, and very occasionally being interviewed by someone in the media about what we do (we would do this more often but the media rarely calls, and when they do, they often misquote us or put our work wildly out of context, which is why our work may often sound silly to you).

How much time does all this service take up? It varies vastly, but an average academic is probably sitting in a meeting for at least one or two hours per week, and for many hours on specific occasions when a particular event comes up. Let’s say, averaging it out over the academic year, about 3-4 hours a week of committee work for a mid-career scholar, plus 2-5 hours a week for advising students (this is advising done for the department, unrelated to the advising you do for students in your own courses; this number applies to faculty at every career level). Service to the profession also comes in spurts, and is much greater for senior scholars. For early- and mid-career scholars, this kind of service probably averages out to 1-3 hours per week of the school year, and for senior scholars it could be as much as 20 or more hours per week.

So, adding it all up: During term time, faculty spend about 45-50 hrs per week on teaching, and on average between 8 and 18 hours more on service. After the “work day” ends and on weekends, and during breaks including summer, faculty spend every hour they can find on research.

It is true that many of these hours of work are spent not in the classroom (where you see us) but in our offices, at home, in an archive or library or lab, or even in Starbucks. Often we have a lot of flexibility about where we work, and it is a huge advantage to us that if we need to go to the doctor or take care of a sick child, it is often possible to shift around our schedules (except of course when it’s completely not possible to shift anything, as when we have to teach or when an important deadline is looming — I would also add that most other professionals, who have less education than university faculty members, also enjoy this kind of relative flexibility much of the time, so it’s not atypical).

It is also true that in the idiosyncratic academic calendar there are occasional sweet-spot moments when an academic can breathe. For a few days of the semester when students are studying for exams, there might be time to clean out your office and enjoy a lunch with a colleague just for fun. Right after you get tenure, or publish a book, you might grant yourself a week or even two to relax and decompress, so you can be ready for the next hurdle (though you’ll probably feel guilty the whole time anyway — guilt becomes a habit).

But what does it all add up to — how many hours total does the average faculty member really work?

ALL OF THEM.

Wait, you say, this is impossible! Indeed, it is. This is why professors are notoriously harried, frantic, and absent-minded. We do at least three jobs in one, and we’re not well paid (on average) for even one of them. For the vast majority of us, it is a labor of love.

Actually, for most of us it’s more of a love/hate or l’amour-fou kind of relationship. It’s insanely difficult. There are huge highs, like when you write your very own book book bookity book! or when a student tells you you changed her life for the better. But those highs happen pretty rarely, and they are separated by vast swathes of low times when you labor away, killing yourself physically and mentally, clinging to the faint belief that your work means enough to make it worth all these sacrifices, only to look up from your desk every once in a while and hear a Congressman telling the public that academics are lazy, overpaid parasites. And then you hear the public — and not an anonymous public, but people like a facebook “friend,” your neighbor, your cousin, and other people who know you — applaud his statement.

Ouch.

 

Update: important further reading on this topic and more here

And here and this too also this and this and this and you might want to check out the comments for about a thousand other relevant remarks….


Seeking historians of Russian material culture and serf demographics

These strange (to me) symbols popped up in all the family diaries and at first eluded me. Over time it became clear they represented days of the week. Then, I found this key, listing each symbol with its meaning and related day of the week, in the naval diary of Natalia Chikhacheva's father, Ivan Yakovlevich Chernavin. I don't know whether he invented it or it was a common naval code (perhaps a reader of this blog can tell me?)

In researching my book I came across plenty of fodder for at least a couple of other major research projects. I wanted to mention that here, in case someone is looking for these kinds of sources.

In the private family documents of the Chikhachev family of Vladimir province (housed in the State Historical Archive of Ivanovo Region, GAIO), in addition to all the diaries and letters that I used as the primary basis for my study, there are also many records of the names and ages of the serfs the family owned. These are not in themselves unusual—there are thousands of these revizskie skazki in the archives—but in this case they could be compared with references to specific serfs in the diaries and in the donosenie (reports written by serf elders to the landowners). I’m not sure how much could be extrapolated from this process (the mentions in diaries are terse and in passing), because I didn’t pursue it myself, but it strikes me as an unusual cache.

There’s also a rich trove there for historians of mid-nineteenth-century material culture. There are several detailed credit/debit books covering multiple years (mostly 1830s), diaries by Natalia Chikhacheva with rich detail about purchases and agricultural production, plus a few inventory lists (of dishware, books, and two lists of the possessions of Yakov Ivanovich Chernavin, made after his death).

If you would like to know more about what I found, please contact me by email or leave a comment here.


Top Ten Avoidable Mistakes Made by History Students

Bundesarchiv Bild 183-48500-0005 (Leipzig, Turn- und Sporttreffen, Jugendrotkreuz), via Wikimedia Commons

(in no particular order)

1.    Using words vaguely

I frequently get the impression that many students choose words that are merely “close enough” rather than the one word that most precisely captures their meaning. Many students also seem to read course materials (my words and the assigned texts alike) as if their wording were just as arbitrary. Although the occasional mistake or typo occurs, I choose my words carefully, especially in writing! Treat your professor and the authors you read on the assumption that each word they use is specific and thoughtfully chosen.

If you are not absolutely certain you fully understand the meaning of a word, look it up. While a basic dictionary is usually sufficient (though you must remember that some words have multiple meanings, so don’t stop at the first definition you see! Figure out from the context which one of multiple definitions is correct), in some cases words are used in a very specialized way (especially abstract concepts), and you may need to look them up in an encyclopedia or textbook (start with the one you have for this class, if there is one), or perhaps a specialized encyclopedia like the Stanford Encyclopedia of Philosophy.

One reason that so many students read and employ words incorrectly or vaguely is probably that their vocabularies simply aren’t at a college level yet. This is part of what college is for, and if you skate through avoiding learning these things, you’re wasting your time and money. The only way to catch up is to read — read widely and frequently, and think about what you read, looking up all the words you don’t know!

2.    Not seeing the forest for the trees

College coursework is stressful: at no other time in your life are you likely to be confronted by so much new information from so many different subject areas so quickly. This can be very disorienting and can make it difficult to sort out what is most important and what is supporting detail.

The only way to really get better at this is practice. But you can become better at this more efficiently by regularly asking yourself how to find out what matters most in a given instance: pay attention to obvious markers like syllabi and assignment sheets. When listening to lectures, think about how the material is organized, what is repeated, what gets emphasized most, what is said first and last. When reading texts, pay special attention to introductions, highlighted terms, summaries, conclusions, etc.

In other words, don’t just swim through college minute by minute and hour by hour, never looking up until it’s over. Try to maintain a mindful awareness of what you’re doing and why, and think about how you can work more efficiently or effectively. If what you’re doing isn’t working, ask for help.

3.    Bad time management

Many students simply run out of time to do well on assignments. Now is the time to train yourself into more effective habits. It will matter even more in your first post-college job!

Start by strictly limiting the time you spend on web browsing, facebook, twitter, texting, etc. These activities are proven to decrease your attention span! Save them for after you’ve completed your work, and don’t spend more than an hour or so on these activities per day (Really! You’re only in college once!). Use browser plugins to prevent yourself from visiting web sites that distract you, and turn off other devices while you work.

Try the “Pomodoro Technique” to train yourself into expanding your attention span: get a kitchen timer and set it for 25 minutes of work time. When it rings, do something relaxing (preferably something that involves getting up and moving around) for 5 minutes. Then work for another 25 minutes. If this doesn’t work, start with 10- or 15-minute work periods. Try to build up to 40- or 50-minute work periods (keep all the rest periods at 5 minutes, though!). For those who would rather run a script than fiddle with a kitchen timer, a bare-bones sketch of this timer appears at the end of this item.

Plan relaxing activities to reward yourself with after you’re done working for the day. Make sure you get enough sleep, try to eat decent food, and take your vitamins! Limit caffeine and alcohol consumption. Sleep and proper nutrition can drastically improve brain function.
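
As promised above, here is a minimal sketch of that timer in Python, purely as an illustration; the interval lengths are just the ones suggested in this post, and a kitchen timer or phone app works just as well.

```python
import time

def pomodoro(cycles=4, work_minutes=25, rest_minutes=5):
    """Alternate focused work periods with short breaks, as described above."""
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle}: work for {work_minutes} minutes.")
        time.sleep(work_minutes * 60)   # heads-down work time
        print("Ding! Take a break: get up and move around.")
        time.sleep(rest_minutes * 60)   # short rest
    print("Done for now. Reward yourself with something relaxing.")

if __name__ == "__main__":
    # If 25 minutes feels too long at first, try pomodoro(work_minutes=15).
    pomodoro()
```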

4.    Not showing up, not following directions, not turning in assignments

Over the past decade or so there seems to have been an epidemic on college campuses of students simply not bothering.

If the fact of the tremendous waste of your time and money isn’t enough to deter you from doing this, you need to take a very hard look at why you’re in college, and what else in your life is distracting you from coursework. See my previous post, “Reality Check.”

5.    Following instructions too literally

It is not clear whether this is a result of recent changes in secondary education, but professors are increasingly seeing students who put time and effort into coursework but still perform very poorly because they follow instructions mindlessly. If you think that you are being asked to do busy work, or that the goal of your work is to get by with a minimum, or that the goal is to finish as quickly or briefly as possible, or just to please the prof and get a grade, then YOU. ARE. DOING. IT. WRONG.

You are wasting your time and that of your professors and fellow students. You are not learning. You are wasting thousands of YOUR dollars in tuition money.

In college the focus of all our work is fundamentally to train you to think critically. If you’re not doing this, then you’re failing. If you don’t understand how to re-focus your energies in the proper direction, ask for help.

This said, not following instructions is of course also a problem! Sometimes students who are overwhelmed find that course materials — handouts, syllabi, and other instructions provided by the professor — are just more things on the to-do list. Remember that these kinds of resources are intended to clarify what you need to be doing and how, so you shouldn’t ignore them; they should make you work more efficiently. But instructions and guidelines won’t help if you just mindlessly follow one step after another. They are generic resources created for all the students of course X, with all their many problems. You are an individual, with individual problems. You need to thoughtfully adapt these kinds of resources to your own purposes and to your own way of completing the assignment.

6.    Failing to revise

The first thing every student can do to vastly improve any paper is to revise it thoroughly, yet few students do any revision at all of their written work.

Clearly, part of the problem is time management: you need to start working on assignments earlier, and put more focused, thoughtful attention into the process (i.e., don’t rush).

But another part of the problem is that many students are not aware of what is meant by revision — many confuse it with proofreading. Proofreading means scanning for typos and other mistakes. It is a quick process. Revision means really re-visioning your essay: re-evaluating the content, thinking, organization, and style. It is a long and intense process, during which most of the work and learning happens. For a SHORT paper of about 3-4 pages, you should leave at least two days before the due date for revision. This gives you enough time to think, to get a little distance from the paper before re-reading it, to look again at your assignment and source materials, re-evaluate, and then re-write the paper accordingly.

7.    Not aiming high enough

Many students who generally do fairly well in their coursework still waste their time in college: if you can get As and Bs without much effort, that’s nice. It means you came to college better prepared than most of your peers.

But if you come out of college with skills not much more advanced than you came in with (no matter how advanced compared to your peers you were or remain), then you have wasted your time.

If you do not feel challenged in your coursework, ask your professors about how to get more out of your experience at college. Consider signing up for an independent study — ask departmental advisors how they work.

8.    Forgetting about context: treating each course as an island

A critical mistake many students make is treating each course as if it were entirely unconnected to the rest of their courses, their college, and their lives in general. Consider how methods and ideas learned in one course can help you in another. Also, within each course, context is still very important. No idea or skill or task exists in a vacuum. Remember to always ask yourself, what is this (idea/skill/task) a part of? What else can it help me do/understand?

9.    Not taking responsibility for your own learning

A fundamental error that is all too common in college students is the misapprehension that your performance depends on the professor, the course, the subject, the college, the weather, issues in your personal life, or any other of a million possible distractions.

The fact is that no one can insert knowledge and skill into your brain for you. In order to learn, to be able to do things you couldn’t do before, to make yourself valuable in the workplace and to society, you must challenge yourself, and put time, focus, and deep thought into your work.

Learning can be fun and it is always full of rewards, but it is rarely simple or quick. Nothing in the world will help you learn if you do not actively make an effort, and nothing in the world can stop you from learning if you really apply yourself.

That said, everyone has subjects that they find easier or harder than others, and everyone finds some subjects or tasks more interesting than others. College is a rare opportunity to do two things at once: to safely explore subjects you might not otherwise encounter, and to pursue the subjects you love most in great depth. You should try to do both. Be honest with yourself about your personal inclinations, but don’t prevent yourself from discovering new talents or acquiring new skills, either.

The bottom line is that the best way to succeed is through intrinsic motivation (having a personal interest, inherently caring about something) rather than extrinsic motivation (getting rewarded or punished for your performance by the outside world, through grades, future salaries, and other perks).

10.    Not taking full advantage of campus resources

Many students, from those who are making every mistake on this list to those who make none of these other mistakes, still fail to take advantage of the many resources their college has to offer.

It’s always a great idea to visit your professors during their office hours. This is not something reserved for those who are having problems! You should feel free to ask any sort of question or just chat (though the more you can ask specific questions, the more you’ll get out of the interaction).

In addition, nearly every college has academic advisors (general and departmental), peer counselors (who are full of very useful hints and tips), writing tutors, a disability services department, and counseling for personal issues such as academic stress or time management problems, unrelated emotional issues, or family/personal crises.

If you’re not sure where to go, ask one of your professors, an advisor, or any other sympathetic university employee.


A Reality Check for College Students (and their parents)

Test (student assessment), via Wikimedia Commons

According to a survey I took on the first day of class in my modern European history lecture course in the spring of 2011, 90% of students in my two sections were at least considering going to grad school.

The minimum GPA required for admission to the Queens College MA in History program (which, though a wonderful program, is far from the most selective) is 3.4. I can tell you that nowhere near 90% of those students have a 3.4 GPA. That a graduate program is selective at all was news to many of my students.

A majority of students in those same two sections of modern European history correctly identified most forms of plagiarism, but a disturbing 25% incorrectly believed that paraphrasing a text without citation did not “count” as plagiarism, and 13% believed copying from a text found on the internet did not “count.”

On the first round of primary source interpretation papers in that class, 80% of papers had no argument or analysis in them at all. 20% did not meet minimally acceptable standards of coherence. Three papers were plagiarized. None of the plagiarized papers could have received a passing grade even if there had been no plagiarism (due to incoherence, factual errors, lack of analysis, and failure to follow directions).

On a multiple-choice midterm exam,* the average score in one section was 60/100, and the highest score was 84. I then offered a make-up quiz, which could be taken at home with open books. Two attempts were allowed. In many of the questions the answer was given away in the wording of the question, and all the questions were repeats of material from the midterm. Counting both sections, 47% of students never attempted the quiz, and 46% of students who did attempt it were unable to achieve the 100% required for credit.

I discussed these results with colleagues, and found they are typical enough.

At Queens College, students that year paid $305 per credit hour (i.e., $915 for a standard three-credit class) for the opportunity to take one course. Yet, every semester, a large proportion of students miss more than 3 class days, arrive late or leave early on the remaining days, frequently fail to do the reading on time, rarely if ever consult personally with the professor, fail to turn in one or more graded assignments, fail to take substantive notes, and/or fail to catch up on notes and readings from missed class days.

Every student I have ever had who has engaged in all of the aforementioned behaviors has failed the course. Those who avoid these behaviors do better in direct proportion.

College students and their parents are probably familiar with the following average annual earnings by level of education, from CollegeBoard.org:

high school diploma, $33,801
associate degree, $42,046
bachelor’s degree, $55,656
master’s degree, $67,337
professional degrees (law, business, medicine), $100,000+

When you see figures like these, please also note the following qualifying factors:

•    Those who do not finish a degree get no return on their investment

•    These numbers do not factor in the financial burden of paying off student debt

•    In the 1970s financial aid for college shifted from mostly grants to mostly loans. Most figures showing the lifetime financial benefit of higher education reflect generations that did not have loan burdens

•    According to economist Paul Krugman, long-term trends are “hollowing” out the middle-class jobs that most college graduates expect to be able to get. Projections show limited growth for a tiny percentage of the highest educated and highest skilled, and much greater growth for manual laborers, while middle-class jobs are being increasingly automated by innovative new software.

•    The correlation between college degree and higher-paid jobs reflects the fact that employers are often willing to pay more for employees with more advanced cognitive reasoning skills and the ability to work independently and responsibly. Having a college degree but not having the skills that are generally assumed to go with it will not keep you employed. A college degree is not a ticket to the middle class.

•    Having useful skills, and especially having the ability to think critically, manage your time, and learn new tasks independently will serve you well no matter what you do, and no matter what economic situation you find yourself in. You cannot acquire these skills by phoning it in.

•    In the present climate, you would do well to think about your future in terms of how you can become a job creator, rather than a job occupier.

In The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (Tarcher, 2008), Mark Bauerlein observes that students are increasingly rarely making an effort to store information in their own memories, since googling for information is so quick and easy. The unfortunate consequence of this trend, Bauerlein argues, is that students lack the basic knowledge that makes higher order thinking possible.

In Academically Adrift: Limited Learning on College Campuses, Richard Arum and Josipa Roksa found that students majoring in social work, education and business learned the least in college, according to an extensive national study. Those majoring in the humanities, social sciences, hard sciences and math did relatively well on tests measuring critical thinking skills.

A survey by the Association of American Colleges and Universities found that 89% of surveyed employers said they want college students to pursue a liberal arts education. A survey of employers conducted by the National Association of Colleges and Employers indicates that workplaces most value three skills that you are usually more likely to acquire with a liberal arts education (as opposed to a business degree): communication skills, analytic skills, and teamwork skills.

According to Arum and Roksa’s study, overall 45% of college students did not significantly improve their reasoning or writing skills in the first two years of college, and 36% did not significantly improve over the course of four years in college.

Barbara Schneider and David Stevenson, summarizing their study, describe college students as having “limited knowledge about their chosen occupations, about educational requirements, or about future demand for these occupations.” (The Ambitious Generation: America’s Teenagers Motivated but Directionless, Yale UP, 1999, quoted in Richard Arum and Josipa Roksa, Academically Adrift: Limited Learning on College Campuses, U of Chicago Press, 2011, e-book location 137)

Labor economists Philip Babcock and Mindy Marks found the following:

Full-time college students in the 1920s-1960s spent roughly 40 hrs/wk on academic pursuits (combined study & class time). Today, they spend an average of 27 hrs/wk (that is less time than a typical high school student spends at school).

Average time studying:
1961: 25 hrs/wk
1981: 20 hrs/wk
2003: 13 hrs/wk

Percentage of college students reporting more than 20 hrs/wk of studying:
1961: 67%
1981: 44%
2003: 20%

(“The Falling Time Cost of College: Evidence from Half a Century of Time Use Data,” Review of Economics and Statistics (forthcoming), quoted in Richard Arum and Josipa Roksa, Academically Adrift: Limited Learning on College Campuses, U of Chicago Press, 2011, e-book location 141)

There is a growing consensus that American high schools have been lowering their standards for student performance for a long time, and many colleges have slowly been forced to do the same: students in the last decade especially arrive so unprepared that the majority of them could not hope to pass basic courses held to the previous standards.

Students: Don’t be tempted to think that any of this is to your advantage, by making your life easier.

These trends are incredibly costly to students, and you will pay for them for the rest of your life unless you use your personal initiative to buck the trend. When standards required to get a certain degree drop, this simply means that that degree will be less valued. It’s probably easier to get a BA now than it was 30 years ago. But it’s also true that to get the job you could get with a BA 30 years ago, you now need an MA.

The overall trend is for you, as students, to spend far more time and money to get far fewer skills, which leave you less competitive for jobs.

In your own best interest, you should be fighting hard for higher standards, harsher grading, and far more work—especially more reading and writing, tasks which are most vital to all professional careers and which can only be mastered through years and years of active practice.

Luckily, there’s nothing stopping you from holding yourself to the higher standards that our society is not presently asking of you.

 

*Note: that was the one course in which I have ever, or will ever, give a multiple-choice exam. It was an experiment, intended to find out whether students would demonstrate greater knowledge of the course material if the exam format were very familiar. In other words, I wondered whether students were doing poorly on essay exams because of problems with writing skills and test anxiety, or whether they simply didn’t know the material, or both. In an introductory survey course in which content knowledge is one of three main goals, I wanted to find a better way to assess how much content knowledge was really getting through to them. For this experiment, I first surveyed the students and found out that 100% of them were very familiar with multiple-choice exams from high school, confirming that this format was indeed more familiar. They also unanimously preferred the multiple-choice format, suggesting there was less anxiety associated with it. I used this book to help me construct questions and answers that were clear and fair and that tested substantive and conceptual knowledge of the course material. I instructed students to add any further information they wanted in the generous margins of the test, so that I could give credit to students who knew the material but were confused by the wording of the answers, or who “out-thought” the test and considered possibilities not raised in the given answers. In these circumstances, students performed drastically worse than my students generally do on essay exams. It was not a scientific experiment, of course, but the results were so stark that I have concluded, first, never to bother with multiple choice again, and, second, that despite very real problems with writing and test anxiety — and student assumptions to the contrary — essay exams are actually a more effective way for students to demonstrate what they know and what they don’t know. (I would also like to note that in the semester I experimented with the multiple-choice exams, the students still wrote as many pages as they always write in my classes — the writing was just separated from the exams.) A final and grim conclusion I took from the experiment was that students weren’t actually gaining much content knowledge. In subsequent semesters I have used the same multiple-choice questions as weekly study quizzes (for minor participation credit), and this has actually resulted in significant improvement in the content knowledge I see on exams.


Who is “the reader”?

Professors (and editors) tend to talk a lot about revising your writing to suit “your reader.” Who exactly is this person? The following description of the academic reader may be helpful to students, undergrads and grads.

Henri de Toulouse-Lautrec, via Wikimedia Commons

Your reader for any piece of academic writing is probably sleep-deprived. He or she may be hyped up on caffeine. He or she is smart, curious, well-read, but not necessarily familiar with the subject of your research. Even if he or she has read what you have read, it was probably a while ago, so a refresher would be useful. But don’t pander: the academic reader has a solid grounding in the basics of western civilization and intense, specific knowledge in at least one field, so there’s no point in pretending to more than you really know – it’ll show.

The academic reader is also painfully familiar with all the usual evasive tactics. He or she knows all about using different fonts and spacing to make a text look more or less dense, and s/he can see through fancy SAT-words instantly. Say what you mean as efficiently and accurately as possible – when your reader is this sleep-deprived, that’s the only way to win goodwill. Don’t be annoying. Don’t play games, and don’t try to cover up. Your academic reader has seen that many times before, and probably tried it him/herself. It’s really obvious.

There’s one thing that will get your academic reader really excited, and supportive. Say something interesting. This doesn’t mean you have to reinvent the wheel or out-smart the entire canon of published work in the field. It means that you should apply your own thinking to your careful reading of the sources. The combination will be interesting.

The academic reader is a total sucker for ‘interesting.’

Do some real, contemplative, time-consuming thinking. Explore a bit. Try to look at your topic from as many different perspectives as you can. Recall what went on in class, and everything that especially interested you as you listened to class discussions. Then tell your reader about the most interesting ideas that come out of this process.

Your reader is weary, and jaded, yet wants to be interested. The way to elicit interest is both to do your homework and to contribute your own particular perspective to whatever problems are presented by the materials under scrutiny.

(Write down your thoughts, initially, in whatever order they come, then re-arrange them in an order that would make sense to anyone else.)

Remember always that your reader is busy. Your reader has a minimum of one hundred other commitments, just like you do. Your reader has read so many student papers and so many published works of academic writing that his/her eyes positively glaze over at the sight of any title with a colon in it. Your reader wants to be done already. Don’t waste his/her time with anything that’s beside the point or deceptive.

Your reader is also fair. Your reader genuinely likes and is interested in this field. Your reader could be making far more money working in some other sphere. So you can safely assume that your reader – no matter how tired or jaded – is still open to what you have to say, so long as you say it honestly and scrupulously. The trick is only in really having something to say. This is by far the hardest part – the rest is details.

Having something to say is not dependent on skill or experience. Having something to say depends on whether you’re paying attention, and whether you care, and whether you’re actively thinking.

In short, the academic reader loves writing that clearly conveys thoughtful analysis and exploration of interesting questions. Do that, honestly, and you’re golden.

If you’re letting yourself have fun while you do that, you’re doing it exactly right.
