Interview in Fair Observer!

Another interview about my book, An Ordinary Marriage, has been made available, this time in text form from the Fair Observer. Thanks so much to my interviewer, James Wiener, and the FO editors, who were wonderful to work with.

In addition, I was delighted to see today a wonderful review in the journal Slavic Review by one of my heroes in the field of Russian history, John Randolph. This is behind a paywall, however, so it needs to be accessed through a university/library subscription.

Podcast interview about my book!

I recently had the great pleasure of talking about my book with Anna Fishzon, the new host of New Books in Russia and Eurasia (part of the fantastic New Books Network of podcasts).

 

Here’s the interview!

The Absent-Minded Professor


By Per Lindroth (1878-1933), via Wikimedia Commons

You know the stereotype: the professor with crooked glasses, bumping into doors, unable to remember his own name?

I actually taught for weeks with broken glasses one semester, because I didn’t have time to replace them until the break. And while I have no explanation for it, there does seem to be a correlation between professors and klutzes. I haven’t bumped into a door (in front of a class) yet, but I do have a tendency to inadvertently break pens in a way that causes pieces to go flying in every direction.

I suppose the myth goes that we’re so involved in the lofty life of the mind that we just can’t be brought to notice such mundane details as doors or our own names. In my experience, there’s no choice or preference involved, and it doesn’t feel at all lofty. The reason I am genuinely, appallingly absent-minded on a regular basis has to do with the peculiarly scattered schedule of my job. There are lots of jobs that demand a lot of recall and thinking on your feet, but I think few others ask for so much variety of recall and thinking on your feet all at once.

Here’s a sample ordinary day for me to illustrate the point.

On a given day during the semester, I probably get woken up by my small daughter around 7, spend the morning wrestling her into presentable clothes and getting her to eat something other than bread for breakfast, then getting her to school. I get to campus around 9, and need to turn off my Parent Brain that just answered 100 questions about mermaids and dinosaurs and transition into Work Brain to try to remember the 1000 things I need to finish before the day is done. I might spend my first hour-and-fifteen-minute class lecturing to freshmen in a general education survey course about modern Europe. I lecture without notes, so I’ll need to recall the relevant concepts, their definitions, and all the context around, say, the unifications of Italy and Germany, and also remember the content and where to find the key passages in the couple of primary source texts the students read that day, which I’ll ask them about, and try to get them to connect to some of the concepts from lecture. I’ve been reading about these subjects for many years, so it’s no great task to know them well enough for this lecture, but since my main field is Russian history, and I lecture on a lot of different subjects, teaching the modern Europe survey only off-and-on, and since this survey blasts through 200 years of the major events of Western civilization across more than two dozen countries, there is a lot to keep in my head. History is very detail-intensive. But, at the same time, my goal with my lecture is to convey just a few key concepts, so I also need to keep all those details organized in my head in such a way that I can explain these concepts to students cogently—it can’t be a barrage of details. So my brain is very busy sorting information as well as recalling and articulating it.

After class, a scrum of 5 or 6 students will gather with questions. I try to master all my students’ names (which means every day in every class some part of my brain is busily trying to remember that Miguel-in-the-hat sits in the corner while the guy in the hat in the front row is Raveen). But then they come one-by-one with questions. One student has a family crisis and needs a makeup exam. I have different policies in an intro freshman survey than in an upper-level seminar course, so I recall my policy, try to find out the student’s situation, and explain the policy, which the student may not like. Usually, there’s paperwork and follow-up involved in any little situation like that, and these things come up dozens of times for every class. Another student may have a question about the content of the lecture—and on the fly I’ll recall some details about, say, Otto von Bismarck’s memoirs, to answer her question. Then another student asks for a recommendation letter. I quickly think through what I would be able to say about this student, and whether it adds up to a good enough letter, to make a decision about whether to do it—then give the student my spiel about what info they need to give me and when, so the letter will get done on time. And it goes on.

Then, oops, I have to run to my next class. Another hour and 15 minutes, this time in a higher-level survey course for history majors. Maybe we’re on Ivan the Terrible this week. I don’t need as much concentration to recall and organize material in my own field, as I know it in my sleep, but I do concentrate hard on how to present it effectively. Sometimes it’s harder to think through what to present and what to leave out on a subject you know really well. I will have prepared a bunch of materials—readings and visuals—ahead of time, so I have to recall their details and what I need students to take from a given set of materials at this point in the semester. Again, I’m trying to master everyone’s names, respond encouragingly to their input in class, gauge how much they’re understanding, all while being articulate about the civil war Ivan waged on his own people, what evidence our knowledge is based on, and how historians argue about it, plus how to convey all that intelligibly to students who didn’t previously know anything about Ivan, while also walking them through the mechanics of being a historian—what kinds of questions we ask about primary sources, how to find the thesis in a scholarly article, etc. These things are second nature to me after years of training and teaching, but it’s a lot of mental effort to bring the right information to the fore at the right time, in a way that’s clear to students.

After the usual swarm of student questions and all the separate recall issues that brings up, it’s lunch time. Except I have a meeting. Perhaps I spend the next hour and a half discussing the college’s general education requirements, and how we will implement changes in, say, lab requirements. I don’t have any idea how labs even work, so I need to learn new things and square them with the goals of my committee and what we did last time, which was probably at least a month ago. And then think through my chaotic schedule over the next few months to see how I’ll fit in the ongoing committee work.

Then—class time again! Now it’s a master’s-level class on the collapse of the Soviet Union. Students read 3 or so scholarly articles on some topic. I haven’t had a chance to look over them before class, so I’m recalling what I read months or years ago, synthesizing it with my knowledge of related historiography, while devoting most of my concentration to listening to what students are saying—which is often not quite coherent yet, so this requires very close listening—and managing the discussion—keeping it on track, helping students to articulate their points and make connections between each other’s points, and keeping the discussion grounded in the evidence from the readings. This is slightly less exhausting than lecturing, just because I can sit and don’t have to talk nearly non-stop, but it’s very taxing mentally. It’s also been a long day already, and I’ve mentally played hop-scotch through three totally unrelated centuries of history. After the grad class there are usually fewer logistical student problems to wade through, but I’ll often get asked really good, nuanced questions about the material. This is great fun, but again, mentally exhausting.

On my way back to my office, I’ll start mentally going through the rest of my to-do list for the day. I need to prep materials (handouts, going over readings, preparing illustrations or examples) for the next day’s classes, I need to write a recommendation letter, I need to see about rescheduling a midterm and contact the affected students, I need to ask the department secretary about proctoring the rescheduled exam, I need to contact the student whose paper was an ungradeable mess and ask him to come see me about possibly failing the course, which I dread doing. I need to submit a proposal for a conference that’s due that day, putting my head in a whole other mental space that requires the recall of vast detail about still another unrelated subject, which I don’t yet know well. But I get to my office and my email inbox is already full: more student problems, requests, questions. By the time I wade through them all, I have to go pick up my daughter from school. A quick chat with two or three colleagues on my way out—perhaps I update one on what happened in the committee meeting, I hear from another about some other department issue, and have to think through how it applies to me, maybe my chair asks for a form I forgot to fill out. More mental gymnastics, leaping from one unrelated world of knowledge to another.

My brain shifts back to mermaids and dinosaurs while I pick up my daughter and feed her supper. I may get to see my husband in passing, but I doubt the kiddo will let us exchange many words that aren’t mermaid-related. Then the frantic bedtime period of wrestling the kiddo into pajamas, reading stories. She’s finally out, and then I can begin to think about the rest of that to-do list I never got to. The proposal must go out, so I finish it, my mind feeling like it’s been through a tumble dryer at this point. But I still have to throw together an assignment sheet for the primary source essays in one class, not forgetting anything, because if I do forget, I’ll get a pile of awful papers and not know what to do about it. Then I need to look for portraits of Ivan the Terrible for a class session about his legacy. I stumble into bed much too late, then can’t sleep because I can’t turn off my spinning brain, which keeps churning up tasks I forgot to do.

Then, the next day, my “off” non-teaching day, I’ll hope to spend all of it immersed in an entirely different detail-rich subject that I am just beginning to master well enough to write about, but chances are an hour will be swallowed up by student emails, another hour by a last-minute funding application, plus I have to do that form I forgot yesterday and go over the readings for the grad class (burying myself in yet another completely unrelated but detail-rich subject I need to be able to explain effectively at a high level of complexity).

And so it goes on, day after day. Breaks and summers aren’t “off” time—I almost want to cry when people say that—they’re time to catch up on everything that couldn’t get done during the semester, though they are slightly less frantic in that I can group my tasks in ways that require less mental gymnastics.

It’s not that professors work harder than anyone else—though I do think most of us work as hard as anybody!—it’s that there’s an unusually high amount of variety, detail, conceptual complexity, and newness (all at the same time!) to the kind of knowledge we have to have swimming around our heads in a given day, with a lot of switching from one mental world to another moment by moment and hour by hour, because our job is explicitly defined as three separate jobs: teaching, research, and service. So yes, sometimes I’m lucky if I can remember my own name. I memorize my students’ names, but one semester later I may barely recognize their faces, let alone recall their names, because I’ve had to get to know a whole new set of students in the meantime.

There are a lot of things I don’t do well, and things I can’t do at all because I just can’t fit them in. But I can explain the differences between a dozen different kinds of socialism, theoretical and practical, in the same day I tell a cool story about how Rasputin brought down the Russian empire that forces students to ask important questions about the role of personality in history, the same day that I take a roomful of people who can’t make out a given text and send them out an hour later with an accurate understanding and an articulate response of their own to it, while also maintaining my original research project and contributing to the running of my college. That is not harder than a lot of other things people do, and I’m not trying to argue that it is. Faculty have a lot of advantages, too, most notably a partially flexible schedule, usually a fair amount of independence/authority, and very few physical or social demands—let alone a dress code. But the kind of work we do does leave very little active RAM space in any given moment.

Making High School Suck Less


Hedmarktoppen Folk High School, in Hamar, Norway. I went to a year of high school at a public school nearby, in 1991-92. Photo courtesy of Wikimedia Commons.

Today I read this article arguing that while expanding Pre-K and making college more accessible—the reforms getting the most attention lately, most notably from the President in his State of the Union—are laudable, if you look at the statistics, the area in which we’re actually failing most egregiously is high school, and no one’s talking about it. I shared this article on Facebook and some of my friends quite rightly pointed out that the article doesn’t actually offer concrete solutions. It may be that for personal reasons I’ve spent far too much of my life thinking about this subject (approximately since I started high school!), but I can think of 10 obvious practical solutions off the top of my head, none of which should be expensive (they should save money in the long run, at least), and none of which are insurmountably difficult in practical terms. Sadly, the real obstacle in my opinion is a cultural/political unwillingness to consider big structural changes in education, no matter how many decades go by in which we’re patently experiencing a “crisis” of our educational system.

At bottom, the obvious concrete solution to offer here is simply to say that there are already schools in this country and elsewhere that have solved these problems. What we need to do is pay attention to what is working and apply it more broadly. And we have a huge bureaucratic apparatus with a mission to do that, and rather more usefully we have many thousands of experienced and proven teachers who know how to do it. What is required is the decision-making to set it in motion, and—equally important—broad popular support to drive it along.

Which is why my first suggestion is:

1. Fire Arne Duncan.
The Secretary of Education, of all people, should be advocating evidence-based, rational policy changes, and this man is doing the opposite, in my totally-not-humble opinion. His continued support for testing, and his simple-minded “race to get scarce funding based on your success on completely arbitrary criteria” guiding strategy, are as destructive as they are nonsensical. Okay, that’s my most controversial suggestion out of the way. In addition to more articles like the Slate one I linked above—which, for all its lack of practical solutions, does raise public consciousness, and that’s important—here are 9 other concrete suggestions that shouldn’t even be debatable. In no particular order.

2. Expand play-based/experiential learning upward into higher grades.
Our elementary schools are for the most part quite effective by international standards. This should tell us that that’s an area where we’re doing some things right (despite our huge and diverse population, massive income inequality, and messy federated structure, which do set us apart from most of our international rivals). There’s plenty of research on which schools are doing what to produce those quite solid results. One of the major revolutions in early childhood education of the last few decades is usually known as play-based learning. There are many variations of the model, which is perhaps best known under the names Montessori or Waldorf, but even many typical public schools apply the basic principles. In this model, learning occurs in a real-world context, and hands-on manipulation of the real-world environment is integrated with more abstract knowledge acquisition and skill-building. Right now, most schools that take this approach as a cornerstone of the curriculum do so for students no higher than the third grade. But there’s no reason the principles can’t be applied throughout a student’s school career. The kind of work and play being done would vary, of course, as students get older, but it should be based on the same pedagogical principles—you can play pirates and make a “ship” in 1st grade; you can put on a completely student-made play in 4th grade; you can write, edit, and publish a print book in 10th grade; and you can produce a respectable documentary film in 12th grade. In fact, it’s widely accepted in education that “experiential learning” (as it is called for older students) continues to get the best results even at the university level.

3. Copy schools and methods that are working, wherever we find them.
I’ve already said this is the guiding notion behind my whole list, but I want to point out specifically that there are individual charter schools, public schools, and longstanding independent innovators in education like the University of Chicago Lab School that have solved these problems (moment of irony: Arne Duncan is a product of the Lab School–so they’re not always successful ;-) ). It’s absurd to ignore them, especially since the real edge they have is their ideas, not their funding (there are far better funded schools that achieve nothing special). There are also very effective methods, such as the writing program developed by Columbia University’s Teachers College, that are being gradually adopted in some public schools. This can be done more broadly, and more rapidly. I’m particularly fond of this writing program, and not just because the teaching of writing is one of my special interests. When writing is taught in class as a full process from early drafting to polishing, you not only get better writing, but you avoid the whole plagiarism problem.

4. Use the charter system to set up folk high schools.
Another idea worth copying, which I mention here because I happen to know about it from personal experience, comes from Norway. Throughout Norway there are institutions known as folk high schools that exist separately from the public school system (with, I believe, some public funding). Each folk high school focuses on one or more kinds of subjects that aren’t or can’t be done in any depth in the public schools. There are folk high schools for music, theater, technical subjects, photography, etc. Students who have unusual talents, or who simply aren’t succeeding in the standard program, can go to a folk high school for a year. They still have to finish the standard curriculum in a public school, but the folk high schools provide productive, educational, and useful alternative environments that supplement regular schooling, and fill the many gaps that American kids often fall into. (A quick Google search tells me these are common around northern Europe, at least.)

5. Independent community-based entities can offer sports, arts, and social activities outside of school.
Again, I’ve seen this personally in Norway, but it is in fact the standard that is taken for granted in almost all of the rest of the industrialized world. Revolutionary as the concept sounds to Americans, school is actually for learning. I completely agree with most people that a well-rounded childhood and youth should include other kinds of activities besides school, but there’s no reason under the sun that these activities have to happen in school, where they distract from and (in my experience and according to the article linked at the top of this post) even supersede real academic work. Also, that way coaches can be coaches and history teachers can be trained history teachers! We already have YMCAs that do some of this, which could be expanded. In every community: build one crack team in a popular sport that can beat an established public-school team, and it would expand from there.

6. Drop testing, make assessment qualitative, and track individual students over time.
It is not possible to assess student learning (or teacher effectiveness) by tracking the aggregate test scores of a school. Even though we’ve been doing this—and funneling huge amounts of public funds into private testing companies while we do it—what we actually measure with the test scores is the range of population any given school happens to get (which we already knew). To actually find out how much students are learning, you have to assess a given student at one point in time, attempt to teach them something, and then assess that same student again (which, by the way, is what grades and prose reports are always meant to have done, and could be doing now, if done properly). And then when you’ve done that, you have to remember you just assessed student learning, not teacher effectiveness, since the most effective teacher in the world only brings the horse to water; she can’t make it drink. This should all be obvious, but it has nevertheless gone straight over the heads of our government and most of the public for more than a decade. During that time, we’ve filled virtually all our students’ class time with preparation for a literally meaningless test. I have been teaching college students since 2000, and I have watched their vocabularies, basic knowledge, and basic skills steadily drop year by year. What we’ve collectively done to this generation of students is a tragedy. We must stop it now. (And while we’re at it, I’d strongly support an inquiry into how it got started, and who exactly owns the testing companies that all the money is going to.)
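To see the statistical point in miniature, here is a toy sketch (all the numbers are invented for illustration; this is not a real assessment model). Two schools that teach equally well can look wildly different on an aggregate snapshot, because the snapshot mostly measures which students enrolled, while tracking the same students over time isolates what was actually learned:

```python
# Toy illustration with invented numbers: aggregate snapshots vs. per-student gains.
# Both schools produce exactly the same learning (every student gains 10 points),
# but School A happens to enroll higher-scoring students to begin with.

school_a_before = [80, 85, 90]
school_b_before = [50, 55, 60]
gain = 10  # identical teaching effectiveness at both schools, by construction

school_a_after = [score + gain for score in school_a_before]
school_b_after = [score + gain for score in school_b_before]

# The aggregate snapshot ranks School A far above School B,
# but that ranking only reflects which students walked in the door:
print(sum(school_a_after) / len(school_a_after))  # 95.0
print(sum(school_b_after) / len(school_b_after))  # 65.0

# Assessing the same students before and after shows the schools are equally effective:
print([a - b for a, b in zip(school_a_after, school_a_before)])  # [10, 10, 10]
print([a - b for a, b in zip(school_b_after, school_b_before)])  # [10, 10, 10]
```

And even those per-student gains measure learning, not teacher effectiveness by itself, for the horse-to-water reason above.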

7. Raise standards for moving up a grade, and for getting into college.
It may seem counter-intuitive to raise college entrance standards if we accept the goal of getting more students into college–and I do accept that basic goal, with the qualification that 4-year liberal arts programs do not benefit everyone, and wouldn’t even if everyone were getting a proper high school education first. However, we do students no favors if we let them into a program that they are not remotely equipped to handle. The results of this wrong-headed strategy show in the poor retention rates of community colleges and the public 4-year colleges like my own that cater to large numbers of first-generation college students (references for this are in the Slate article I linked at top, so I’m not re-creating them here). There’s no point in getting every American into college if we’ve also set them up to fail there. At every stage of schooling students need to be qualitatively assessed as individuals to ensure that they are ready to move on to the next level. There is no real long-term cost to staying back a year. There is enormous cost, which we are already paying, in sending students through the system even when they haven’t learned anything.

8. Integrate classroom learning with other public institutions more closely.
Following on the idea of experiential learning in general, there’s an enormous missed opportunity in the relatively wide gap between our public schools and other public institutions. Sure, school kids go to museums, but so much more could be done. Rather than follow a guide around a museum in a bored, restless clump, kids should become part of designing, building, and exhibiting museum collections (this is already done in wonderful museums all across the country—but it could be done on a much larger scale). Similarly, while school libraries are great and important, middle- and high school students should be brought into public libraries and archives to learn the incredibly vital 21st-century skills of data collection and data management. As just one example, a class might be given interactive assignments to read and create metadata, and in the process they would be tackling the enormous backlog most major libraries have of uncatalogued collections—not to mention how students could become part of building and indexing the Digital Public Library of America (I’m sure, by the way, that that project is already working with kids, but it should become systematic, universal, and a national priority). There are infinite possibilities—how about a national contest for high school accounting students to solve post office budget problems? I bet they could do it. Wouldn’t it be amazing for 6th-graders to be invited into a state representative’s office to help him read and sort mail from constituents? They could do it, the task has real-world value, and they would learn about the real nuts and bolts of governance. How about math and science students working with their local fire department and learning real-world forensic skills? Using math to solve traffic tie-ups? Many of our public institutions are finding it difficult to reach the public, while our students are sitting at desks all day staring at the wall while their teachers talk at them. There’s a way to make everybody happy. We already pull off very effective science fairs, from which we’ve seen amazing innovations in recent years. That wonderful idea could be expanded into every area of schoolwork, and integrated into existing institutions, where meeting student needs could serve real communities at the same time. And I don’t mean occasional events (which already happen). I mean that instead of defining the curriculum around the next standardized test, the whole curriculum of a given school should be designed with input from local institutions, with integrated activities throughout the year and as part of core graded assignments.

9. Teaching is a great way of learning, so address over-extended classrooms with mentoring, top to bottom.
Teacher, teacher assistant, student teacher…why can’t we add student TAs from a higher grade, who come back to a younger classroom for an hour a week (each) to tutor students in small groups? How about mentors within grades, where older students are paired with younger students for given projects? Assignments in which a student doesn’t just present a book report to the class, but teaches a lesson in a more structured way? This, too, could become systematic rather than an occasional event. Many of the other ideas I’ve mentioned here may seem impractical in classrooms where there’s one teacher and 30 students. But a dense mentorship system could resolve that, and it benefits both the mentor and the mentee.

10. Extend the mentoring idea for teachers, too.
Many years ago I briefly volunteered some time to help staff a program in Chicago called “Principal for a Day.” City leaders, especially business leaders, were asked to visit a Chicago public school for a day, on the assumption that it would be the beginning of an ongoing relationship. It was a good idea that worked well in most cases (though it is sadly now discontinued). There are other programs that in various ways draw public engagement into schools. That’s all great. But I would also like to see the leaders of the schools we know are working, and the faculty of our great teachers’ colleges, partnered up with more troubled schools, as a way of disseminating ideas that work. I know not every relationship would work out (and that a limited version of this exists here and there), but it’s a relatively simple and cheap way of getting good ideas moving around more rapidly through the system. Similarly, college instructors like myself should be invited into high school classrooms to share what the expectations for college really are. I’d love to do this, and it would count toward the service requirement asked of me by my employer—but I’ve never been asked, and I don’t know anyone who has been asked. There are great programs where college faculty teach summer classes for high school students, and/or high school students visit college campuses (we have such a program at my college), but as I understand it, the total number of students involved is still pretty small, and it’s mostly the most successful students–who are already likely to qualify for college–who benefit. What about the smart 3rd grader who could qualify for college if she had any idea what was involved and found out early enough to do something about it? Good teachers do talk about these things in their classrooms, but it could be reinforced through an ongoing relationship with local college faculty.

 

Update: What we are actually doing, of course, is exactly the worst possible thing we could do. Write your representatives, and your school districts!

How to Read for Class


Don’t turn to stone in front of your book. (Photo by PRA, via Wikimedia Commons)

If you’ve tried reading a scholarly book or article the same way you would approach a terrific new novel of your favorite genre, you’ve probably discovered that the scholarly work doesn’t flow in the same way, and you may find yourself having difficulty focusing, retaining what you read, or getting to the end. Scholarly works have, as we know, different purposes from fiction and should therefore be read differently.

1. Know why you’re reading it.

If it was assigned, look at where you are on the syllabus and ask how this text fits into the goals of the course and the topic for this date. If it’s for your own research project, remind yourself why you chose this source: how is it relevant to your project?

2. Look for the main idea.

In a scholarly book, you’ll find the main point in the introduction (sometimes the preface or first chapter). In a scholarly article, it is in the introduction, usually towards the end. However, a more fully articulated version of the main argument is usually in the conclusion of a book or article. Look at both. When you find the main thesis stated, don’t just underline it, think about it. Does it make full sense to you yet? Do you have doubts about it? (Write those down.) In what specific ways does it help to serve the purpose that you identified in step 1, for the course or your own research project?

3. Find out what the argument is based on.

Look to the preface, introduction, conclusion, bibliography and footnotes to find out what kind of sources the author used. Finding out who the author is and the basis for his/her expertise on the subject may also be relevant (but if the author is simply a history professor somewhere, that’s often the end of the story – there isn’t always a lot of information to get from this).

4. Look at scope and organization.

How did the author limit the material being covered? There is usually a limit to the time period and geographical region involved, often stated right in the title. You should also look through the table of contents, and check the introduction to see if the author gave a more detailed outline there of what would be covered where (they often also say why). This tells you, first, how much of the work is directly relevant to your project, second, whether the author’s own goals make sense (did s/he exclude something that seems relevant? Can you find out why?), and, third, how to prioritize what you read next.

5. Take a breath, and then — read.

Think about what you’ve discovered so far, reassess your thinking about the value of this source and how it might help you in your own project. That should tell you what parts of the rest of the text you should read first (there’s no obligation to read in order, though that often will be a sensible route), and most importantly, it will tell you what you’re looking for as you read. It might be helpful to write out some questions/thoughts that have occurred to you so far — such as, “Why didn’t the author cover X?” “Something about Y sounds fishy so far.” “Can I use Z as support for my claim in my paper?” If you are reading for a class discussion, you might just hunt around for answers to these questions for now. If you plan to use this source in your own writing, now is the time to read it thoroughly to make sure you understand it fully and don’t miss important nuances, qualifications, etc.

“Should I Go to Grad School?”


Don’t get a Ph.D. just for the fancy gown. (Photo by Ede and Ravenscroft, via Wikimedia Commons)

Wrong reasons:

-You want to stay in school.

-You’re afraid of the job market.

-You don’t know what else to do.

-You’re really smart and do well in school, so you should prove that by going as far as you can go.

Right reasons:

-A graduate degree is required for the career path you’re aiming for, and you’ve researched it all thoroughly, including talking to people who hold the kind of job you want.

-You have specific research interests: there are significant questions that you want to answer, that with a little training you will be capable of answering, and that you can answer at least as well as, if not better than, anyone else.

(Note: This is a good reason for grad school, all other things being equal. However, given the current state of the academic/research job market and research funding, you must carefully research your specific field to rationally assess the chances that you’ll be able to get into a grad program and find a research position afterward.)

-You have the means to pay for a program without debt, and you’re really interested in a given subject.

See also: my posts on what a Ph.D. program in history is like and how you might think about choosing a program, and why you shouldn’t feel bad if you don’t go.

It’s That Time of the Semester

Nearly 250 candidates for E-5 mark their answer sheets while taking the March 2003 advancement exam at the Club Pearl Complex.

U.S. Navy photo by Photographer’s Mate 1st Class William R. Goodwin. [Public domain], via Wikimedia Commons

It’s mid-term exam season!

These are some checks you should do before turning in any take-home essay exam for a history class. If you have any ambition to do well, you should be at the point where you think you’ve “finished” AT LEAST 24 hours (preferably several days) before the deadline, and then look at this checklist.

1. Did you do the reading for the course, and show up every day or nearly so?

Of course it’s too late now to fix this, but if the answer to that was “no,” re-calibrate your expectations for this exam right now. If you do well, it will be through sheer luck. (You might want to remember this experience next time…)

2. Look at your exam paper. Look at it next to any printed text — like your textbook, for example. Does the font on your exam look huge?

Guess what — it’s going to look huge to the person who grades it, too. And when the person grading a big stack of papers comes across one with a gigantic font, they sigh. They know they’re about to read something thrown together by someone struggling desperately to fill the space. I hope that’s not the case with your exam, but in a huge font, that’s what it’s going to look like. So make it a normal font. Times New Roman always works. An exam is not a time to be creative with fonts.

3. Now that your exam is in a normal font — does it reach the page length requirements you were given?

No? Well, guess what. If you wrote half as much as was asked for, the best you can hope for is half credit (even assuming everything you did write is solid gold!!). And half credit isn’t a passing grade.

Write more. And make it good.

Okay, now you have enough words. Now, worry about what kind of words they are.

4. Did you answer the question directly, and fully?

Read over the question again, and your answer. Did you stick to the point? Did you answer all the parts of the question? If there’s any part of the question that you dismissed as “impossible,” or for any other reason didn’t answer, you must answer it if you hope to get anything like full credit. I can guarantee you that in any class of 30 students, at least 5 are going to answer this same question thoroughly, in all its parts, incorporating course readings and their own original analysis. Those people will get As. You can be one of them, but you have to stop thinking it’s “impossible” and just do it.

5. Did you answer the question accurately?

Did you check all your facts and dates? And did you look them up in your course materials, NOT Google?

If there’s something in an exam that you don’t recognize, and you can’t find it in your course materials (the index in the back of a textbook is the first place to start, if you have one), then you might try Wikipedia just to get yourself enough information to know where to find the term in your lecture notes or other readings. But BEWARE OF WIKIPEDIA. Make sure you’re looking at the right entry, for starters. Wikipedia will often have many different people or events with the same name, so you need to make sure you’re looking at the one that’s relevant for your course. If you’re in “Hist 110, Russia since 1855” and the guy you’re looking up was a monk who lived in the 1700s, you probably have the wrong guy (yes, this is an actual mistake I’ve seen!).

Once you know you looked up the right term in Wikipedia, read the entry just to find out enough about the term to know where it was covered in lecture or the course readings. DO NOT USE WIKIPEDIA IN YOUR EXAM ANSWER! Even assuming Wikipedia gives you accurate information (it doesn’t always), and assuming you cite it (if you don’t, you’re plagiarizing and deserve an F in the course at best), Wikipedia still isn’t likely to help you. It’s a general reference work, and the answer you should be giving in a history essay exam should be a lot more than general reference information. Your answer is supposed to demonstrate your mastery of the readings and your analytical thinking about historical concepts. Wikipedia gives you neither of those things.

Instead of Wikipedia, find the term you don’t know in your course readings and lecture notes. Obviously, if you haven’t done the readings or been to all the lectures, you’re now in trouble. This is why we tell you to do the readings and go to class. You cannot expect a respectable grade if you don’t do those things.

But let’s say you have, and you find the terms that confuse you in your notes, and one or two of the readings. DO NOT REGURGITATE! Do not, in other words, just put what you found straight into your exam essay — for one thing, you should cite anything you quote or paraphrase from another source. For another, your exam is supposed to be almost entirely your own words and ideas (the occasional quote is fine, but a quote salad is NOT!)

Instead of regurgitating course materials onto the page, slow down. Read the materials carefully, and think about them. Re-read the question, and think about how the two are connected. Brainstorm how you want to answer the question — you might make some lists, draw pictures, talk it through with your cat. Whatever. Just sort through the material in your own way, thinking about what it means, what questions you have about it, and how you can answer those questions. Mull it over, and try it a few different ways. Then write it up into a nice essay. That’s what A and B students do.

The difference between B students and A students is that A students don’t just think through the material and arrange logical and accurate answers in their own way — A students also contribute substantive ideas of their own that demonstrate original critical thinking. It doesn’t have to be original in the sense that no one else has ever thought of it before. It’s original in the sense that the student independently brought their own ideas to the course material and synthesized the two in some meaningful way. They didn’t just pose interesting new questions about the material and its significance, but they made a thorough, thoughtful stab at answering them, too.

The following are NOT examples of a student’s original contribution:

Topic X is important because a lot of people talk about it.

Topic X matters because it’s the main subject of our course.

Topic X is really cool.

I’ve always been really into topic X.

I really learned a lot about topic X from this course.

If you’ve got statements like this in your essay, just take them out. Every word in your essays should be adding value — if you’re just re-stating information that’s already in the question, or stating the obvious, then delete it. If you’re just stating your personal preference, delete it. There’s a big difference between personal preference (“I like / don’t like X”) and your independent thinking (“When Trotsky argues that the ends justify the means, it seems to me like he’s assuming the ends are predictable, but they aren’t.”)

Okay. Now you’ve got full, accurate, substantive answers. Are you done? No.

6. Now it’s time to check your spelling and grammar.

This does not mean running the spellcheck and grammar check in MS Word. Neither is reliable.

And yes, this stuff does matter, even if your answers are brilliant. Because you may not get credit for your brilliance if the person grading you can’t tell what the heck you’re talking about because your spelling and grammar are all over the map.

Also, what do you think it looks like to your grader if you complete a whole essay exam on Imperial Russian history and you can’t spell “Tsar” correctly? (“Czar” is less preferred, but acceptable – “Tzar” is just wrong.) It looks like you’re sloppy, semi-literate, and/or just don’t care. That may not be the case, but that’s how you’re presenting yourself. For the same reason that you shouldn’t go to a job interview wearing shorts and flip-flops (no matter how brilliant and qualified you might be), you shouldn’t turn in any assignment for school or work with ANY spelling or grammar mistakes. It makes you look like an idiot. And you’re not an idiot, so make sure people know that.

To spell correctly, you need to regularly use an actual dictionary, not a built-in software spellchecker. While you’re at it, make sure you’re spelling your instructor’s name correctly on your exam. It’s disrespectful not to bother to get someone’s name right. Don’t start your exam by disrespecting the person grading it!

If you have trouble with grammar, you should be working with a writing tutor (most campuses have a writing center for this purpose). There are also a bunch of good books you can get to help you work on grammar and clarity. But the best thing you can do to improve your grammar — and a host of other skills — is to read a lot. Actual books, too, not just magazines and the internet. Read a variety of things, and think about the words.

When checking an essay before turning it in, read it out loud. Don’t read what you think it says, but read it exactly as it is on the page — this will often make you notice typos, fragmented sentences, and other problems that you didn’t notice before.

7. Turn it in in the right place, in the right format.

If your prof asked for a hardcopy, turn in a hardcopy. If your prof asked for an electronic copy, turn in an electronic copy. If your prof asked for both, turn in both. Are you sensing a pattern here? Follow instructions. Always. Honestly, if you show up and follow instructions when told to, you’ve already mastered 80% of success in life. Just do it. If your prof specifies a particular format for an electronic submission, be sure to get it right. If you don’t know how to upload to course software, now is the time to learn — you’ll inevitably have to do it more often later, so practice now. If a hardcopy, make sure you staple it. If you leave a hardcopy somewhere — an office, a mailbox — make sure it has your name, the prof’s name, and the course name all prominently on it, so it doesn’t get lost. If it’s an electronic submission, make sure it has a sensible file name, something like your last name plus the assignment title. Naming your file “Paper.doc” makes you look careless (and is also a good way to accidentally overwrite the paper you haven’t turned in yet for one class with a paper for another class…). If you submit your paper by upload, make sure you view your file after it’s uploaded to make sure it’s really there, and didn’t get turned into gibberish by the system. It is your responsibility to safely get your work to your grader — so double-check that you did it right! There’s no excuse for submitting a file in an unreadable format — in fact, if you do so, it’s likely your grader will think you have no paper at all and are trying to pull the wool over their eyes (since this is a sadly common tactic).

8. Now that you’ve turned it in, DON’T sit and hope for a great grade.

The grade you get is not a matter of hoping. Or praying. Or wishful thinking of any kind. It’s also not a judgment of how much, or how little, your professor likes you. It’s simply an assessment of how the work you turned in compares to what your classmates turned in. If you know the material and reflected that knowledge on paper, with your own original, thorough, specific reasoning, then you should do very well. If you know you didn’t do all the reading, you didn’t put much effort into your writing, and/or you didn’t really think about what you wrote down, then you don’t deserve to do well, and if you do get a decent grade, it will be a matter of sheer undeserved luck that won’t really feel good anyway (and deep inside you know that). Take responsibility. If it didn’t work out this semester, start fresh next time. You can do it.

A review!

My first and likely best book review.

“I need an A in this class”

If you have uttered this phrase to an authority figure at college, you have already hopelessly screwed up your chances. You’re doing everything wrong. I mean it: there’s no way to lighten up this message. You screwed up. I’m sorry.

It’s unfortunately already long past time to back way up and adjust your thinking, and your strategies.

Yes, I know what you’re going to say: you REALLY need an A. Without it, you’ll be put on academic probation, or lose your scholarship, or your internship, or your chances for grad school. I know that the stakes might be very high.

But here’s the thing. Once you’ve reached the point where one grade in one class will entirely make the difference in any of those high-stakes scenarios, you have already had a problem for a very long time — through different courses, with different professors, with different things going on in your life. And that is your responsibility. So right now it may be time to face the worst case scenario. No one owes you an A.

In fact, no one is ethically allowed to give you an A, no matter how much they might want to, unless you earned it. Only you can earn the A. It’s entirely about you, and the choices you make.

An A simply indicates that you have demonstrated certain skills and knowledge to a degree generally recognized as exceptional as compared to your peers. Someone among your peers is getting that A, and therefore demonstrating those skills and knowledge at an exceptional level. It’s unfair to that person to give you an A for doing less than they did. It’s also unethical. When you ask for an A, you are asking someone in authority to do something unethical for you. You are asking that they do something totally unfair to your classmates, both those whose earned As would mean less, and those who don’t get the same unfair advantage you’re asking for. Think about that for a minute.

It also doesn’t ultimately help you. Because the only point of grades is to tell some future employer or admissions committee that you know certain things, and can do certain things. If you fake it with grades you didn’t earn, sooner or later in a job or grad school it will become obvious that you don’t actually have the knowledge or skills to do what is asked of you. You’ll fail then, and it’ll hurt even more then, and you’ll have wasted more of your time.

Okay, okay, I know what you’re going to say — some classes are just hoops you have to get through, and you’ll never need what those classes are teaching. That may not be as true as you think. It’s at least worth asking your instructor or a professional mentor in the field you hope to be in about the specific ways a course might be useful. But I’ll concede that in some cases there might be such a thing as a useless hoop. I would point out that learning for its own sake — because knowledge makes your life richer — is always worthwhile, but I’ll also concede that I can’t make you care about that. I might also point out that perhaps the most important professional skill of all is learning how to show up, follow instructions, and follow through on the tasks given to you, and by definition that’s what you’re not practicing here. But you can say this one instance is exceptional, there were circumstances beyond your control, blah blah, yadda yadda. Fine. I’m perfectly happy to believe you, I really am. So you need a class for some reason, and the content of the class will never be of use to you.

That still doesn’t entitle you to a free grade. Again: other students are doing the work, and demonstrating skills and knowledge you don’t have. They earned the A, you didn’t.

Again: if you needed that A to meet some outside goal, you should have planned ahead, and maintained a high enough average in other classes that an A isn’t necessary now.

But, you say, you paid your money for this class.

That ain’t how it works, friend. A grade is not a prize, or a product. It’s a certification of what you can do. Think of tuition like a gym membership — you can pay up, but if you don’t work out, you’re not going to lose any weight.

But, you say, you only got into this mess because a few classes went badly, and that’s only because you had a terrible prof who hated you. Please see this post about Rogue Professors and this one about Stupid Professors. I will concede that sometimes — far more rarely than you imagine, but sometimes — a prof makes your life harder for no good reason at all. But you still have to demonstrate knowledge and skills to earn a grade. Most of the time one hopes that the professor is helping you get there, directing you to what you need to know, the best ways of achieving it, and pushing you to practice valuable skills. But if the professor is not helping, you still need to do it, you just need to do more of the work yourself. That’s a bummer, but if you do it right, you’ll actually learn a lot more than you bargained for, and in the long term that’s not a bad thing. Life is not always fair.

Wait, what if the prof is just a mean grader, and you did earn an A? Well, the prof is by definition far more qualified than you to make that judgment. But I’ll concede that very rarely that is the case. But if you’re getting a C, D, or F and think you deserve an A, that’s such a huge disconnect that either a mistake was made somewhere along the line (by all means find out! Contact the prof, and if that doesn’t work, the department chair), or you’re fooling yourself (if that’s the case, you also want to find out — talk to the prof, and if you can’t, talk to another prof or the department chair). If you think you deserved an A and you got a B, calm down. It happens. The occasional lower-than-deserved grade should not affect your future as long as you have established a pattern of performing exceptionally.

Okay, so you’re ready to concede that getting an A is about what you do, not what anyone else can (or in this case can’t) do for you. So how do you get an A?

1. Go to class every day, and while you’re there, listen and take good notes.

2. Do all the reading. Don’t just read, but think about what you read.

3. Always. Follow. Instructions. All of them. Pay particular attention to your syllabus. It is your bible in this course. Know it well.

4. Talk to the professor. Speak up in class, ask questions when you don’t understand or things are going too quickly. And go to office hours for help. If you’re not sure you are taking “good” notes, go to office hours with your notes and ask. If you’re not sure what’s expected of you on assignments, go to office hours and ask. If terms or concepts keep coming up and you don’t get them, go to office hours and ASK.

5. Start assignments early. At least a week early for a short paper (3-5pp), and more than that for anything longer. (No, I’m dead serious. Do you want an A or not? This is what A students do.)

6. Finish a draft of any assignment at least 24 hours early, preferably earlier than that, and ask if you can show it to the professor or TA to discuss how you’re doing. If you can’t do that, take it to your campus writing center. Read some of my other posts about writing.

It is possible that you are really, truly, doing all of these things and still not getting As. This is something you should discuss with a professor you like. Go to their office hours, bring samples of the notes you take, and some assignments that you worked really hard on that didn’t get a grade you hoped for. Getting to the bottom of this problem needs to be really specific to the work you’re doing, so I can’t tell you the answer. It may be that you’re beavering away in the wrong direction — putting your energy into fruitless work and not seeing more productive ways of using your time. It may be that you’re simply misunderstanding a few key terms or concepts, without realizing it, and that’s causing you to miss the main points you’re supposed to be learning. It may be that you have persistent problems with reading comprehension, or writing, or both, and need to work with a tutor to get up to speed. It may be that you’re in over your head — that the competition in a class, a major, or even a college is just really intense. Your grades are usually given more or less in relation to how other students are performing in the same class, and it may be that you’re doing well, but a proportion of other students are just always doing better. That’s nothing to be ashamed of, though if you’re ambitious about grad school, especially, it may be an indication that you should think about whether you’re in the right field. Be careful before you change majors or even transfer schools, though — most people learn more when they’re not the smartest person in the room.

Finally, what not to do when you need an A in a class:

Do not, ever, under any circumstances, say to anyone including yourself, “I need an A in this class.” When you say that, it tells the world that you are going about the class in entirely the wrong way.

What you should be telling yourself is, “This class is a priority for me. I need to devote time and attention and effort to it.”

What you should be telling your professor is, “This class is a priority for me, but I’m worried that I don’t know how to make the best use of my effort — can you help me?”

It should go without saying (but sadly often does not) that you should be saying these things AT THE BEGINNING OF THE COURSE. After even one major formal assignment has gone by, it’s probably already too late. To get an A average for a whole course, you have to do A work throughout — that’s just math. If you bomb one early assignment, it MIGHT in some cases be possible to make it up and still reach an A average, but you would need to talk to your instructor immediately about whether that’s possible, and if so what you need to do.
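If you want to see that math spelled out, here is a minimal sketch (assuming a hypothetical course with five equally weighted assignments and a 90-point cutoff for an A; your real syllabus defines the actual weights and scale):

```python
# Hypothetical course: five equally weighted assignments, 90-point average for an A.
# Given the scores so far, what average do you need on everything that remains?

def needed_on_remaining(scores_so_far, total_assignments, target_avg=90.0):
    """Average needed on the remaining assignments to reach the target course average."""
    remaining = total_assignments - len(scores_so_far)
    if remaining <= 0:
        raise ValueError("No assignments left to make up ground on.")
    points_needed = target_avg * total_assignments - sum(scores_so_far)
    return points_needed / remaining

# Bomb the first of five assignments with a 60:
print(needed_on_remaining([60], 5))           # 97.5: near-perfect work on all four left
# Do A work twice, then bomb the third:
print(needed_on_remaining([95, 95, 60], 5))   # 100.0: literally no room for error
```

The numbers are made up, but the shape of the result is the point: one bombed assignment leaves you needing close-to-perfect scores on everything that remains, and any answer above 100 would mean the A is no longer possible at all.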

There is absolutely zero point in telling anyone that you need an A when the course is almost over. Just don’t do it. It won’t get you an A; it’ll only cause whoever hears it to lose respect for you. Just don’t.

“My prof is so stupid”

Leslie Ward, via Wikimedia Commons

I’ve heard this said on my campus. Often by a student who is also making fundamental factual and grammatical errors in the process of an extended whine that, I can only assume, was prompted by a lower-than-expected grade.

I’ve also gathered from students who have asked me about grad school that it is often assumed that becoming a professor is about going to school a lot and then answering job ads, as with any other job, and that more or less anyone who can stand going to school a lot would have a decent chance.

That’s kind of true, but mostly really, really not true.

As I’ve written before, the Ph.D. degree — which is pretty much always required for a professor’s job — is not some kind of ultimate IQ test. It really requires more drive and motivation than anything else. But, at the same time, there’s intense competition at many stages to go from an undergraduate degree to a tenure-track job as a professor, so that while it’s certainly possible that your professor might be “stupid” (whatever that means exactly), it’s really unlikely your prof is just some random person pulled off the street who doesn’t know more than you do about his or her subject.

On the contrary, for admission to an MA/PhD program, there are hundreds of applicants and only a tiny handful of openings, so for starters the vast majority of people admitted into these programs went to the most competitive colleges in the world, earned top grades and test scores, and are recommended enthusiastically by Big Name professors. Then, in five to ten years of graduate study, these select few are put through an incredibly rigorous regimen, and close to half drop out before finishing. Those who do finish (after having taken extensive exams in all the fields they might teach, judged by the best people working in those fields, and having written a book-length research project approved by a committee of top people in their fields) then face an incredibly tough job market (right now it’s the toughest it’s been since the 1970s). For an academic job, you don’t look for ads in the newspaper of the city where you want to live. In my subfield this year, for example, there were five jobs in the WORLD. Five. And probably about a hundred people applying for them. All of whom have PhDs from top-ranked schools (Ivies, Chicago, Stanford, Berkeley, with few exceptions). Then, once in a position, those lucky few face rigorous reviews every few years to keep their jobs.

So that person at the front of the classroom who seems like an idiot to you — he or she had to go through some incredibly intense and competitive hoops just to get there, all after excelling to an extraordinary degree at the level of education you’re currently immersed in. That doesn’t mean that prof is perfect, and he or she may be so overwhelmed by the intense pressure of essentially holding two or three full-time jobs at the salary of half of one that you may not be seeing his or her best work. If your instructor is an adjunct, he or she may be commuting between several schools, cobbling together 5 or 6 courses at the same time to earn less than enough money to pay rent, with no benefits. Research-intensive universities tend to value a professor’s research agenda much more than teaching, so in those cases you might see someone who has never actually had any training or interest in teaching (but who is a top expert in their field). But that’s becoming rare even at the big research institutions.

On the whole, the chances are you should maybe open your mind a little bit to the possibility that this person might have something of value to teach you after all.

“Grades are so subjective”

Actually, they’re probably less subjective than you think. And to the degree that there is still some subjectivity, it probably works in your favor, not against you.

First, in many classes these days grades may be almost completely objective, as multiple-choice tests are sadly common in overcrowded, underfunded classrooms. History is one of the subjects least likely to use them at all, or at least not exclusively. Our assignments are usually written essays, or some other form of project, which you may think are graded subjectively: the professor reads your work and then slaps a grade on it, and you may have so little idea what happens between those two steps that it might as well be random.

It’s not random. (Well, everyone has heard an anecdote to the contrary, but those are mostly jokes, made in the throes of the abject torture profs actually go through when they grade.)

It’s increasingly common these days for essays to be graded according to a rubric. Rubrics break down an assignment into component parts, often attaching some point value to each part. These are mainly intended to communicate more clearly to the student what the instructor is looking for, and to show relative strengths and weaknesses in different areas of an assignment. But there’s still a judgment being made on how to assign points — a number — to something like your writing style, argument, factual accuracy, creativity, etc. None of these things can really be reduced entirely to a number, so there is a certain amount of arbitrariness involved. But not very much, because the instructor is grading your essay compared to others by students in the same class. The quality of your argument may not be fully represented by, say, the number 18 out of 20. But if the quality of argument across a class of 30 students ranges from 20 to 5, and you’re an 18, you know that you’re doing very well, significantly above the median, but not quite at the top of the class. That’s real, though limited, information.
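To see what kind of information a rubric score carries, here’s a toy illustration (invented numbers, not real class data):

```python
from statistics import median

# Invented "quality of argument" scores (out of 20) for a class of 30.
class_scores = [20, 19, 19, 18, 18, 17, 16, 15, 15, 14, 13, 13, 12, 12, 11,
                11, 10, 10, 9, 9, 8, 8, 7, 7, 6, 6, 5, 5, 5, 5]

my_score = 18
print(median(class_scores))                     # 11.0: the class median
print(sum(s > my_score for s in class_scores))  # 3 students scored higher
```

An 18 against that spread means exactly what the paragraph above says: well above the median, just shy of the top of the class.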

How does an instructor judge your work? How can she be sure yours ranks at 18, and that the handful of students who got 19 or 20 really did turn in work measurably more successful than yours? For one thing, things that may still feel really amorphous to you, like what an “arguable thesis topic” looks like or the level of specificity in your language choices, are not at all amorphous to a professor who has been writing and reading these kinds of statements literally many thousands of times, day in and day out, for years on end. Examples:

“The Bolsheviks won the Civil War because of their geo-strategic advantages” is an arguable topic, and therefore acceptable.

(That thesis statement should be followed by a detailed explanation of the specific geo-strategic advantages that the Bolsheviks did have, and that the Whites did not have — see? Arguable.)

“Stalin’s purges were caused by his lust for power” is not arguable, and therefore not an acceptable thesis statement.

(What would you follow this statement with? A series of repetitive statements that all essentially say, “Stalin was a bad man. Real bad.” Believe me, I’ve seen it. But that’s not evidence supporting the thesis — it’s a circular restatement of the thesis over and over. Because that’s all you can do with something as amorphous as “Stalin was bad because Stalin was bad” — it’s not arguable.)

[Note that whether or not your prof agrees with your thesis is totally irrelevant here — your prof is reading THOUSANDS of pages all saying basically the same things. She really just doesn’t care either way what your thesis is, only that it is actually a workable thesis, demonstrating that you understand fundamental concepts. She wants to be done grading already.]

How do we distinguish between “specific” language choices and vague ones?

It’s easy to see that the sentence “Lenin was a ruthless leader” is vague when another essay states, “Lenin’s NEP was an ideological compromise that divided the Party and made Stalin’s manipulation of factions possible.” The second sentence is not only better writing, and more convincing as part of an argument, but also tells your prof that you actually know what happened and how and why it mattered.

Do you see how the difference between those two statements is both obvious, and objective? Multiply that by a million little judgments of exactly the same kind, and that’s how we can grade fairly.

Also, remember that your grade is not an absolute value that sums up everything about your work (let alone about you — this is not you as a person under judgment, but the words on a page that you turned in). It simply ranks the relative quality of your work compared to that of the other students on a few basic criteria that the instructor deems most significant (hopefully, your instructor told you what these criteria are — if not, ask).

If you read the same set of papers from your class that your professor got, you too would be able to roughly rank them in terms of clarity, accuracy, and how convincing they were as arguments. Most likely, your ranking would come pretty close to your professor’s (I say this because I often have students grade themselves and their peers in exercises, and they’re always right on in their assessments). The professor’s experience allows her to do this much more quickly than you probably would, and her expertise allows her to catch the errors. But otherwise, grading is not all that mysterious, and most people would do it in a very similar way in most cases.

Each professor reading each essay does ultimately make some degree of holistic assessment (“this essay is cogent and careful, but doesn’t go out of the box; that one is creative but doesn’t fully support its claims; this other one blows my mind; and this one here makes me wonder if the student even knows what course they’re in”). But when multiple instructors read the same essays, they nearly always end up with very similar assessments (I’ve seen this from experience as a TA in large courses where multiple people do read the same essays, and I’ve seen studies concluding the same thing).

This general agreement on relative success comes from three things: (1) the more specifically one defines what one is looking for in an essay, the easier it is to see where those goals are reached and where they aren’t; (2) experience reading lots and lots of essays makes these things much simpler to spot than would seem possible to the novice writing this kind of essay for the first time; and (3) the differences usually are pretty stark — in an average class of 30 with a grade spread from A to F, the difference between A work, C work, and F work is blindingly obvious. The tricky part is distinguishing between, say, a B and a B+. Those judgments are very fine, and it is true that two experienced readers may disagree at that level. Luckily, those kinds of fine distinctions aren’t really significant in the long run.

(In my own case, I tend to use pluses and minuses as signals — a B+ tells that student that the essay is not A work, but it’s coming close, and would need only a small amount of revision to get there. On the other hand, a B- tells the student that while their essay was essentially accurate and complete and therefore belongs in the B category, it just barely reached that level in some respects, so that the student knows s/he would need to revise quite thoroughly to reach A-level work.)

Finally, there is the issue of bias. Students talk a lot (or so I overhear on campus) about this or that prof having favorites, or “not liking” them. The first point to make here is that professors are insanely busy people who usually see hundreds of students every semester. Honestly, most of us don’t have time to form actual opinions about individual students. But of course it’s true that a student who comes frequently to office hours and turns in excellent work is going to build a good reputation with faculty, and students who don’t show up to class, turn in late and/or shoddy work or don’t turn in work at all, and then beg for a higher grade because they “need” it are going to lose the respect of faculty. Either way, that reputation is far less likely to be reflected in grades than students think (it does enormously affect things like recommendation letters and how willing a professor is to spend time chatting and giving advice — which ultimately may matter more). Simply because grades are much less subjective than students realize, there’s really no need and little opportunity to manipulate grades this way. Even if we assume a truly ill-willed instructor who has the time to bother artificially inflating some grades and deflating others, the chances are that sooner or later complaints about this practice will pile up with the department chair and deans, and eventually there will be consequences for the faculty member, which discourages the few who would ever bother with such asinine and pointless manipulation anyway.

But there is one way in which the relatively more subjective process of grading an essay is different from the wholly objective process of grading a multiple-choice exam, and that works entirely in favor of the student, in my experience.

I experimented briefly with multiple-choice exams once, in a class in which students also did a lot of writing. My notion was that since students had mostly been assessed by multiple-choice in the past (I did a survey to confirm this), I could eliminate the anxiety involved in learning a new format for demonstrating their knowledge, and just find out what they actually knew. Then in separate written essays I could focus more on teaching them how to write well. As it turned out, the entirely objective grades from the exams were abysmal, far lower than I usually see on essay exams or written short-answer exam questions also aimed at testing content knowledge. I did some surveying to find out why, and while I can’t be sure, the problem seems to have been a combination of two things. First, because there was less anxiety about a multiple-choice exam, students studied less. Second, and most relevant here, when I grade an essay, I am more flexible in how I award credit. For example, if the student answers a multiple-choice question and gets it wrong, it’s wrong, period. But in an essay on the same subject, it may be clear that the student is confused about one factual detail, but fundamentally understands the concepts under discussion and has analyzed the material well. In that case I’ll dock a small point value for the one bit of confusion, but give credit for the general understanding. “Objective” assessment does not give me the leeway to do that.

ASEEES and AHA

In the last few months I’ve enjoyed the rich alphabet soup of attending ASEEES and AHA in NOLA. Say what? I mean I attended the annual conference of Slavicists and Eastern Europeanists and that of the American Historical Association, which both happened to be held this year in New Orleans, LA. If you’re a new graduate student or enthusiastic undergrad considering a Ph.D. in history and wondering whether you should try to make it to a conference, GO. Do it. But you might want to read this first to know what to expect.

What’s a conference like? You may picture the scene from The Fugitive when the US Marshals track down Harrison Ford’s friend at a conference, where a bunch of really boring-looking people in tweedy jackets sit around and talk about papers with incomprehensible titles in a fancy hotel. And for once, Hollywood pretty much got it right. Conferences are the ultimate insider’s gathering: no one from the outside of these little worlds would ever want to go to one of these things, I imagine. But they can actually be rather a lot of fun, from the right point of view. If you’re working on becoming an insider, conferences are a great introduction — they are nothing more or less than the physical manifestation of “the field” or “the discipline.”

The annual conference for Slavists (which used to be more entertainingly called AAASS, but was recently changed to ASEEES, the pronunciation of which no one can agree on) brings together people from nearly every phase of my life, so it’s a strange and interesting social occasion. There are people I took Russian language classes with as an undergraduate, people I know because we once rented the same room in St. Petersburg, people I went to grad school with, people I taught, people who taught me, and people I don’t know but whose work I’ve long admired from afar. So a big part of that conference is reconnecting with people from all these spheres — most of us never see each other anywhere else.

But the main purpose of the conference, of any academic conference, is to share new research. Conferences, perhaps more than anything else we do, are really at the core of our jobs as researchers, which is funny, since most people, if asked, will readily whine about the poor quality or general boringness of conference panels (self definitely included).

In theory, all papers presented at conferences are works-in-progress: new research that is presented among colleagues (hence the conference’s definition as an insider’s affair) for comment and criticism. The reality is that one has to propose papers and panels almost a year in advance, so one often has to guess what one’s work will look like a year in the future. Then, as that year passes all too quickly with a million other deadlines and the overwhelming time commitments of teaching, all of a sudden the deadline for the paper comes up, and one often slaps together something either too rough or too familiar — something not as new as it ought to be, because the newer work isn’t ready yet.

Also, in theory, panels bring together papers on related themes, which creates juxtapositions and comparisons that breed more ideas. In reality, panels too are put together in a rather hodge-podge way, and since panelists don’t often communicate much before showing up at the conference, many panels feel random, and you don’t get that synergy of ideas at all. Often panels are idealistically planned to be interdisciplinary — at the Slavic conference, people often try to bring together historians and literature specialists, political scientists and art historians. The idea is that by talking to each other, we’ll each broaden and enrich our approaches. Sometimes this happens, but sadly I find it’s more common that these kinds of panels just bring into stark relief how nearly impenetrable our disciplines’ differences in rules of evidence, jargon, and ways of framing questions really are.

The theory also goes that the audience is as important as the presenters, and that the conversation should really include everyone — when we write our papers and present them, we all hope for constructive, thoughtful comments that will help us improve. As audience members, we all hope to hear rich, engaging, well-presented papers that will provoke excited responses. But then there’s the counter-stereotype, of presenters droning through turgid papers while audience members ramble “questions” that just happen to really be all about their own work, not what was presented. The reality generally runs the gamut from one stereotype to the other, covering everything in between.

My experience of conferences, as a junior scholar, has been that they are a series of slightly disillusioning or uninspiring talks broken up by moments of incredible, sometimes life-changing excitement and inspiration that make it all worthwhile. For example, I found out about the documents that would become my first book — and met someone who became a good friend — by attending a panel on regional history at a AAASS conference as a mid-stage graduate student. The panel was great, but what really mattered was talking privately to each panelist afterward, to ask about their archival research and whether they had come across anything that might help me in my project. One of the panelists knew of amazing materials that were perfect for me, and — poof — my life changed. The topic of my new book also came out of a similar chat with another colleague, at that same conference. Many important insights about my work were born in a conversation here or there — often not in direct comments on a paper I presented, but through indirect conversations at other panels, often totally unrelated ones, or chats over lunch. I think the real work and value of these conferences is that they bring all these people to one physical space and throw them together, which creates the circumstances in which these kinds of unpredictable synergies can happen.

Conferences can also be great as a kind of giant snapshot of the state of the field. Usually they have some kind of theme — the theme for last year’s ASEEES was “borders and peripheries” and next year it’s revolution — but I find the official theme is often kind of like a parlor game, as everyone tries to shoehorn it into the topic they want to present on, no matter how awkward the marriage. This can lead to amusing paper titles (though I won’t call anyone out publicly, it’s worth browsing the program for a giggle). What’s more interesting are the patterns that turn up by accident — it seems like the last couple of years have thrown up a lot of papers on childhood and education, and religion seems to be popping up more than it used to. A few years ago, it was all empire, all the time in the Slavic and EE world. That’s still there, but less heavily than before. These kinds of patterns do give you a sense of where “the field” is heading in a way nothing else can.

The other factor worth mentioning about conferences is location. Each big annual conference is held in a different city every year, though my conferences are most often in northeastern cities: Boston, NYC, DC, Philly, Pittsburgh. For those of us who live out here, this is convenient. The conferences are cheaper (faculty with full-time positions are usually at least partially reimbursed for these quite expensive events, but note that many, perhaps most, attendees are paying at least partially out of their own pockets), and in those cities in the winter months, there’s often little reason to leave the conference hotel, which keeps the panels well-attended.

This year was very different, though, and the effect was noticeable on attendance at panels at both ASEEES and AHA: in New Orleans, everybody was playing hooky at least some of the time to go out and explore the French Quarter. It may have been a little depressing to see the mostly-empty rooms, but speaking for my own panel, which was barely outnumbered by its audience, we may have inadvertently benefited from low attendance. At any rate, it was the most interesting and fruitful question and comment session I’ve had at a panel where I presented. All the attendees were there listening instead of out eating beignets because they had an intense interest in our topic. And because there were so few of us, the barrier between panel and audience really broke down, and we actually had a real conversation.

When I played hooky myself, I not only enjoyed some fabulous food, but I made some of those great professional contacts that conferences are for, which might not have happened in the hallways between panels. What started as a cup of coffee with an old friend grew into a three-hour, multi-course lunch where I met several new people whose work interests me in completely unexpected ways. And after my own panel, several of us moved on to a lunch where our conversation continued less formally, but just as productively.

It was a side-bonus that I was also able to get acquainted with one of the most extraordinary cities I’ve ever been to. New Orleans struck me as a rather odd mashup of Vegas, the deep South, and (inexplicably) a little bit of Budapest.

Please excuse the crappy cell-phone photos.

I could — and did — spend hours just walking around the French Quarter, soaking up the unusual colors, shapes, sounds and smells.

I think it was the courtyards that most reminded me of Budapest, along with the cafe culture — especially the Palace Cafe, with its central curved staircase, railed interior balconies, and mirrored walls.

One of the notable things to come out of both ASEEES and AHA this year was how very little Slavists and historians tweeted or blogged the conferences, a practice that is increasingly common at other disciplines’ conferences, notably the MLA. Of course, historians are historians because we like old things, and we have always been famously technologically backward.

The tiny little pharmacy museum on Chartres St. was apparently a hot spot for conference visitors, and certainly one of my favorite finds of the trip.

What can I say, we like old stuff.

I was ridiculously excited about the blue glasses — it was a cliche that the Russian women “nihilists” (radicals) of the 1870s wore blue glasses, but I could never really picture what that meant. They are really blue!

For many years we have been snickered at because we still mostly read papers at our panels, instead of using PowerPoint or (newsflash! This is The Thing now) Prezi, or even poster sessions. Personally, I wanted to tweet both conferences but was inhibited by not owning a mobile device — due to a combination of being a late adopter of technology in general and just plain not having any money. But since poverty is endemic throughout academia, that can’t be the reason historians and Slavists are so behind. My other problem would be that I’m too wordy for Twitter. *cough* This might also be common to historians generally. *cough* Excuse me, I seem to have something stuck in my throat.

Another big feature of the AHA conference is the job fair — the AHA is the primary venue for first-round interviews for academic jobs in history. This is why this conference always feels considerably less warm and friendly to me than ASEEES. It’s always filled with so many nervous people wearing nearly identical dark suits. If we all had either less or more kempt hair, it would look like a secret service convention. Interviews are usually held either in suites (so that you often run into nervous people pacing the hallways upstairs) or all together in a ballroom, where a thousand tiny cubicles are formed for interviews, with an outside waiting room known as the holding pen. This year the holding pen was freezing cold, while the interview pens were made from floor-to-ceiling black curtains with bright overhead lights, making that room unusually warm. Job interviews, or KGB interrogation? Sense of humor definitely required for survival.

Finally, the conference feature that is seemingly tangential yet a favorite for nearly everyone: the book exhibit. Scholarly publishers put together booths, all in one big ballroom, with books from their lists relevant to the conference discipline. For laypeople to understand why this can be exciting, know that the kind of books most academics write and like to read are almost never stocked in stores, so conferences are a rare opportunity to browse these books in person. Plus, university-press books are incredibly expensive, and at conferences they’re usually discounted about 20%, often 50% on the last day. The book fair is a geek’s wonderland.

I got to see my book on display for the first time at ASEEES, then again at AHA. Definitely a personal highlight of the year!

The New York Times even took note of the AHA, with a nice little piece highlighting some of the trends of the conference. But at the same time, for me this brief summary for outsiders highlighted, between the lines, the enormous difference between the conference and my conference. The piece did capture the “news” from this year’s AHA in that it records some of the points made by big names at the high-profile events (which I mostly didn’t attend), as well as capturing a little something of the atmosphere (I guess; I eschewed “historian-themed cocktails” for what felt like more historical cocktails — famous local concoctions dating back to the Prohibition era, when cocktails got interesting largely to cover up the horrible taste of badly denatured industrial alcohol). Yet the major points quoted in this piece are, in a different way, not at all representative of the real nature of the profession, at least to me.

Michael Pollan asked why, when he uses our books as sources, his versions sell so much more than ours. Did no one point out that the failure of our books to sell might not actually be a problem? Is best-seller status the only marker of success, or usefulness? Scholarly books are meant to be read by scholars, because some problems are so complex that only people with a lot of training are going to be able to take the time to go over all the evidence in detail. But someone — and it should be a lot of someones — needs to comb through that evidence, so that when someone like Pollan (whose role is also very necessary) takes away the general conclusions and frames them in a way that’s useful to the general public, he can be pretty sure that the conclusions are truly evidence-based and meaningful. He can’t write his book for the masses unless we first write books for each other. If we all tried to write for the masses, we wouldn’t be doing the evidence-sifting that we’re trained for, and on which the general conclusions depend. (From the summary in the article, it does seem that Pollan was more or less making this point, but it’s not clear to me to what degree either the conference or the NYT author understood this as a good thing for academia, rather than a “problem.” But I wasn’t there, so if anyone would like to tell me whether / how this point was raised, I’d love to hear about it in the comments.)

Similarly, outgoing AHA president William Cronon and Oxford University Press president Niko Pfund are both quoted as worrying about the state of the academic monograph. According to the NYT article, Cronon said that historians “tend to default to a dry omniscient voice that hasn’t changed since the 19th-century, despite the fact that historians no longer believe in that kind of omniscience.” And Pfund, noting that the pressures of tenure decisions are a key reason why historians are still married to the traditional monograph, added that historians remain “absolutely imprisoned in the format of the printed book,” a situation he called “borderline catastrophic.” As a junior scholar, a conference attendee (who admittedly skipped out on the event), and the author of a recent monograph published by Oxford University Press, I’m confused by these remarks.

First, as explained above, I’m not sure that scholars writing for other scholars to solve problems that can’t be solved better in other ways is a problem. Second, while I am absolutely a very strong advocate for good, readable academic prose (there’s no reason that an original argument written for other trained scholars has to be written badly after all), it is precisely the senior scholars like Cronon and the editors of prestigious presses like OUP that keep standards for monographs so rigid, and maintain monographs as the key format for historical research. Perhaps Cronon and Pfund are trying to convince their peers to change, for which I applaud them, but the most recent AHA newsletter showed graphs demonstrating how newer digital formats for scholarly research are less respected than any other aspect of a scholar’s portfolio in tenure decisions, and Oxford, with most other university presses, actually fights rather hard against digital incursions into the traditional monograph market. Finally, my editors at Oxford actually made me revise my book manuscript to more closely follow that “dry omniscient voice that hasn’t changed since the 19th-century” than the original manuscript did, in contrast to the general trend among academic writers to be more forthright with voice and authorship (a simple example is the old schoolmarm rule about using “I” in formal prose — my editors still frown on it, while a Google Scholar search will show that it has already become the standard).

I adore Oxford University Press for some of the quirks that may increasingly seem old-fashioned but have real value, like the Oxford comma, or quality craftsmanship in a physical book, or simply the high caliber of editorial staff they maintain in an age when authors slapping a manuscript straight into an Amazon ebook is becoming dangerously tempting. And of course I adore OUP simply because they wanted my book. How could I not? I also love them for publishing many of my favorite academic books (including, perhaps, some of those very “dry” monographs “imprisoned” in beautiful covers on my shelf where I can pick them up on a whim, flag useful passages and discover unexpected connections when browsing the shelf — monographs that are not best-sellers, but are purchased by a small number of people exactly like me…).

But at the same time, as a junior scholar who is coming up for tenure soon, when I hear senior, powerful people in my field tell us we need to go in direction X while they are among the primary gatekeepers blocking the doors to direction X, I’m deeply confused and troubled. And I do think my confusion highlights one of the downsides of conferences — I’m not sure there’s very much meaningful exchange between the most senior scholars and the rest of us. I know that from my first conference as a starting grad student to this year, I have interacted at panels, in hallways, and socially with everyone from undergrads to mid-career scholars as a matter of course. But many senior scholars forgo conferences — after all, they’ve been staying in crappy hotels and listening to boring panels in cold rooms year in and year out for decades — or, if they do attend, with the exception of my own advisors and mentors I see them only from afar, from the back of an audience for a keynote address attended by hundreds, which is decidedly not a venue for the kinds of serendipitous exchanges of ideas that a conference is ideally for.

Fellow conference attendees: what do you think?

Potential conference attendees: I’m sorry, were you looking for practical advice on presenting papers? Look here as a start.

 

UPDATE: Nice AHA wrap-up from the Tenured Radical and an entertaining account from a Job Seeker

I’m on the OUP blog!

Check out my guest post today on the Oxford University Press blog, about a mid-nineteenth-century Russian stay-at-home-dad.

Book!

My first book is now available as an ebook, and will ship soon in hardcover from Amazon! It has already made its appearance at the annual conference of the Association for Slavic, East European, and Eurasian Studies in November. And it will also be available at the American Historical Association conference in January (and probably be 50% off on the last day).

To find out more about it, click on “Research,” then “Book,” in the menu bar above.

This book represents about a decade of work, as well as being the very first time my name appears in print on something I authored. This book was much harder to produce than my daughter. I know it more intimately than I know anything or anyone. It was the most difficult thing I’ve ever done. It very nearly never happened, because I frequently wanted to give up on it completely. I’m so sick to death of this project at this point that I’d almost rather talk about anything else on the planet. Yet I’m so proud of this book that I can’t wait to tell everyone about it. Writing a book is a strange journey indeed.

I hope you’ll be interested in reading it. I like to think it’s rather a fun read for a scholarly monograph (mainly due to the quirkiness of one of my main subjects, Andrei Ivanovich Chikhachev). But it is a scholarly monograph, so I don’t expect that very many people actually will read it (if you skim some parts, I swear I won’t be offended!). If you’d like to buy it, I’ll be thrilled (and my publisher even more so). But like most academic monographs, it’s pricey—believe it or not, these kinds of books are always published at a loss, despite the high price, because they almost always get purchased only by libraries and a few handfuls of individuals.

If you’d like to read the book but can’t afford it, there are two options:

First, a paperback edition will come out at some point, in a year or two, which should be considerably cheaper. Hopefully the ebook price will also go down at that point.

Second, you can always request your local library to buy it. That’s a wonderful way to support the book (and me) and to enable not just yourself, but others, to read it!

I appreciate your interest more than I can say!

Adventures in Russian archives

The Ivanovo train station.
(Photo from Russian Wikipedia, used under a Creative Commons license.)

I first arrived in Ivanovo, Russia, in the fall of 2004 by overnight train from Moscow. We pulled into Ivanovo at seven in the morning, and I peeked out, still sleepy and disoriented. I asked the elderly gentleman getting off beside me if this was, indeed, Ivanovo. He looked out at the bleak landscape, still dark, of a handful of crumbling concrete buildings with a gigantic Soviet-era wall mosaic of a worker, and replied with an ironic grin, “Sure looks like it.”

Hand-felted mittens, adapted for archive use.

I made these incredibly ugly mittens to wear in the Ivanovo archive where I did the bulk of the research for my book. They were knitted from Russian wool, then fulled with hot water and soap to make them denser and therefore warmer. On the right mitten only, the forefinger was made separate from the rest of the hand, so that once the fulling process ensured the wool wouldn’t unravel, I could cut tiny holes in the pads of the forefinger and thumb and (just barely) grip a pen with the mitten still on. I went through all this in the long fall evening hours after the archive was closed, then wore them all through the winter. Those mittens tell you a lot about doing archival research in Russia.

I did my research mostly in just one archive, and one that few westerners ever visit: the State Archive of Ivanovo Region, or GAIO for short.

Like a beacon in the distance, the State Archive of Ivanovo Region calls to me…

GAIO is a provincial archive, and the city of Ivanovo is the capital of its region, also called Ivanovo. Like most enterprises in Ivanovo, the archive is run pretty much entirely by women. Ivanovo’s nickname is “City of Brides” because it has been a disproportionately female city for more than two hundred years. The phenomenon began when the city grew out of a region that dominated Russia’s new textile industry in the late eighteenth century; textile workshops mostly employed women in those days, so women workers far outnumbered men. Today, Ivanovo’s textile industry is dead, but the imbalance continues.

I assumed the “city of brides” thing was little more than a nickname, but Ivanovo actually has a hair salon specializing entirely in brides…

I lived in Ivanovo for almost ten months, all of them winter. Today, with its industry closed, Ivanovo is mainly known for its malls, a couple of which were built in abandoned factory spaces. Most young people try to leave Ivanovo as soon as they can, as there aren’t many jobs. Too many of the relatively small number of adult men can be seen wandering the streets, drunk at midday—there’s not much else for them to do, if they’re not both well-educated and lucky. When I was there, in 2004-05, there was some new construction, but mostly the town looked like a graveyard for the various historical epochs it has survived. There are old merchant homes from the late nineteenth century all over town, made of wood with decorations around the windows and doors. They are quaint, but decaying fast. In between them are the hastily erected apartment buildings and institutional constructions of the 1960s, ugly and decaying even faster than the nineteenth-century buildings. Along the river banks are the shells of what was once an enormous factory complex, and here and there are sparkling new apartment buildings offering “luxury” units to the entrepreneurs of the new shopping malls.

A nineteenth- or early twentieth-century house with a 1960s apartment building in the background, in Ivanovo.

The Ivanovo archive, like most archives, opens its reading room for pretty limited hours: about 4-5 hours each day, four days a week. As a researcher, you can only request a limited number of documents each day, so you try to plan ahead to make sure you’ll have enough to fill your time until you can request more, since you can’t afford to waste an hour. When you first arrive, they tend not to give you most of what you ask for. Instead they’ll give you one or two documents to start with, and watch how you handle them, to make sure you’re a serious researcher and are treating the documents carefully. And, at least when I was there, it was very difficult to get a xerox or digital photo of anything. It was very expensive, and you had to ask permission separately for every page. They approved only a few pages once in a while, and usually only something that obviously couldn’t be easily transcribed, like a drawing. This means you have to sit there and copy out the documents you’re interested in by hand. Eventually I was given permission to use a laptop, but I found that copying by hand was actually more efficient for my research: the handwriting of private, nineteenth-century Russian documents was hard to decipher, so it was often easier and faster to “draw” the illegible bits in my notebook than to try to indicate what I thought I saw in the middle of typing. That’s why it took almost ten months to get the information I needed, and I barely got it all before I had to leave.

I didn’t bother to get a photograph of the EKG-type handwriting, as it wouldn’t have helped. This is an example of difficult but decipherable handwriting, an excerpt from an account book.

The handwriting isn’t really difficult because it’s old and Russian: I’d been reading Russian for more than ten years by the time I started this project, and it’s not that hard to adjust to the idiosyncrasies of the mid-nineteenth century. There are reference books that provide some of the standards of the time, though the real trick is getting to know the personal quirks of a given writer. I was lucky in that the vast majority of documents I needed were written by just a handful of people, so I could get to know each one within a week or two and have little trouble with them thereafter. Deciphering the handwriting is a bit like the last stages of figuring out a code: you can see most of it, so you isolate the strange parts and try to identify patterns in when they appear. Once the context tells you what a figure must indicate in one instance, you can apply that to the other instances, and hopefully everything suddenly becomes clear. This is all rather fun. Though sometimes you come across the handwriting of someone who just completely defeats you. I had one such case in Vasily Rogozin, the husband of Aleksandra Chikhacheva, the daughter in the family I was studying. His handwriting looked like an EKG readout, and I had to give up on it, with great regret, since the content, if only it were legible, probably would have solved a few mysteries: Aleksandra is one of the most enigmatic figures in this collection of documents. But I felt better when I read a letter from her father to Rogozin, complaining about his impossible handwriting!
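I did all of this with pencil and patience, of course, but the logic is mechanical enough to sketch in code. Here is a toy version, where the transliterated snippets, the glyph labels, and the unit-of-measure guess are all invented for illustration:

```python
from collections import defaultdict

# Invented, transliterated stand-ins for diary lines; "?1" and "?2" mark
# glyphs that haven't been deciphered yet.
lines = [
    "poseyali 3 ?1 ovsa",     # "sowed 3 ?1 of oats"
    "kupili 2 ?1 rzhi",       # "bought 2 ?1 of rye"
    "?2 poutru potom dozhd",  # "?2 in the morning, then rain"
]

# Isolate the strange parts and record the contexts where each one appears.
contexts = defaultdict(list)
for line in lines:
    words = line.split()
    for i, w in enumerate(words):
        if w.startswith("?"):
            before = words[i - 1] if i > 0 else None
            after = words[i + 1] if i + 1 < len(words) else None
            contexts[w].append((before, after))

for glyph, ctx in sorted(contexts.items()):
    print(glyph, ctx)
# ?1 [('3', 'ovsa'), ('2', 'rzhi')] -> always between a number and a grain,
#    so probably a unit of measure; apply that guess to every other instance.
# ?2 [(None, 'poutru')]             -> opens a weather entry; needs more data.
```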

These strange (to me) symbols popped up in all the family diaries and at first eluded me. Over time it became clear they represented days of the week. Then, I found this key, listing each symbol with its meaning and related day of the week, in the naval diary of Natalia Chikhacheva’s father, Ivan Yakovlevich Chernavin. I don’t know whether he invented it or it was a common naval code (perhaps a reader of this blog can tell me?)

This code was never mysterious, but is definitely a lot of fun. Andrei Chikhachev and his best friend and brother-in-law Yakov Chernavin invented a system of signaling to each other across their opposing balconies that they referred to as their “home telegraph.” The system involved navy-style flags (later they invented a nighttime version with lights). This is a page from their telegraph signal book.

Some mysteries remain: this seemingly coded text was inscribed by Andrei Chikhachev into his parallel diary. I have no idea what it means. Maybe someday someone will recognize it, if it wasn’t a completely idiosyncratic code unique to Andrei and his brother-in-law Yakov Chernavin. Other mysteries in the documents include odd lists and charts that I believe may have been related to various games the family played.

The hardest part about the archive work, for me, was the cold. The archive is deliberately kept cold because the low temperatures are better for the documents. But when you’re sitting still for 5 hours at a time in a cold room, you soon begin to feel like your limbs will fall off if you attempt to get up again. I coped with the help of those archive mittens, and an ankle-length down coat worn at all times (with hat and scarf and fur-lined boots). I went out into the hallway for a break with hot tea and crackers three times a day, and did quick stretches every time, to get the blood moving again.

The other great challenge was confronting the very different attitudes toward research and access held by the authorities of this archive (or any other state archive in Russia, though they vary in the details). Mind you, I had it incredibly easy compared to most foreign researchers in Russia. There’s even a whole book written about adventures in Russian archives. In the old days, your biggest problems included being followed by the KGB and getting permanently banned from ever traveling to Russia again. These days, keeping warm is really the biggest issue for most of us. Although it can still be very difficult to study certain subjects from the 20th century (some archives have still not been opened to researchers at all), for someone like me, studying gentry women in the early and mid-nineteenth century, there’s generally no question of whether I can get access. I’ve been denied some documents, always with the explanation that they were “in restoration.” Sometimes I suspect this really means that they can’t be found, or that an archivist is in a bad mood, or that I’ve been asking for too much lately, but it’s never been anything very important.

What was much more challenging for me is that in Ivanovo in 2004-05, archivists were still very wary of digital photography, though they did eventually allow me to photograph a few documents, under strict supervision. Even now Russian archives are slow to permit digital imaging, although it has become pretty standard in most of the world, and it’s potentially a marvelous way for archives to get paid to digitally preserve their own collections. For many decades, Russian archives were focused on keeping information from getting out, and this is how most working archivists were trained, so it has been a very slow—some might say glacial—process to shift policies toward the priority shared by most western archives: that archives exist in order to provide access to documents, so that researchers can do something productive with them, instead of letting them literally disintegrate unseen.

So, I labored away, copying by hand under the somewhat suspicious eyes of the authorities. But that picture is not really accurate either. Very few people work in the reading room of the Ivanovo archive for more than a few days, and I was there every single moment of every day for so long that I became quite close to the main reading room archivist, and the archive as a whole was incredibly generous in helping me to pursue my research (they have little control over central policies, and in any case there’s a long history of archivists losing their jobs for being too kind to foreign researchers; their task is not an easy one). Working in the Russian provinces was very different from the kind of experience you’d have working at, for example, the Bakhmeteff Archive in New York, but not necessarily worse.

While it was harder to live in a rented room in a foreign town as I did my work, this aspect of my research was also incredibly fun. Ivanovo is a strange and interesting town in many ways. For whatever reason, many of its streets and squares have not been renamed since the collapse of the Soviet Union (as they mostly have been in Moscow and especially St. Petersburg), so there’s a Revolution Square and Red Army Street and Marx Street and so on. There’s also a rock in the center of town to commemorate the fact that Pushkin once traveled somewhere near Ivanovo, but not actually to Ivanovo. This rock is maybe my favorite part of Ivanovo. The contrast of Pushkin rock and Revolution Square is just the beginning—beside the crumbling buildings there are fancy new western-style supermarkets and a McDonald’s knock-off. Above the post office that still smells of old Soviet paper there is an internet cafe full of foreign students sending emails to far-flung parts of the world. Ivanovo is home to a town-within-a-town full of universities, so there are a lot of students. There’s also a formerly-secret military base not far from town, so plenty of soldiers, too. And dotted here and there are a handful of pre-revolutionary churches, with shiny gold paint newly re-applied to their onion domes.

The back streets of Ivanovo: path to the archive.

To get to the archive every day I took a short-cut through the back alleys of one of the older neighborhoods, where I saw spectacular new dachas being built alongside 150-year-old peasant huts. There were still hand-pumps for water by the side of the roads, and every morning a lady walked her goats across the path I was taking. As I exited this neighborhood and neared the main road where the archive was located, I passed a 1960s-vintage apartment building with a pack of wild dogs encamped in the courtyard. You read that right. Dogs in Russia are not routinely spayed or neutered, and there isn’t much in the way of systematic dog-catching, so there are a lot of strays wandering everywhere. Calling them “wild” is probably a stretch, but they are dangerous, to each other and to passersby. I got used to them after a while, which I cannot say for the -30 degree (Celsius) windchill in February.

An area of Ivanovo I like to call “wild dog alley.”

By far the most exciting part of that research year, however, was traveling beyond Ivanovo, into the countryside. I went there to find the villages once owned by the gentry family I was researching. Their main residential village still exists, complete with the manor house, which was then being used as the village school. I was able to meet several of the teachers, who gave me a tour of the house and village. We went back again in spring, and the teachers treated us to a memorable feast in an upstairs bedroom that once belonged to the woman at the center of my study.

The road sign to Dorozhaevo. We went once in the bitter cold of mid-winter, and again in a muggy and buggy June.

The village of Dorozhaevo

Enjoying the quality of freshly pumped well water in remote Dorozhaevo.

An upstairs bedroom of the Chikhachev house in Dorozhaevo, which the locals told me belonged to the lady of the house (and nothing I read in the documents contradicted this).

Traveling on back roads from the village of Berezovik (once owned by the Chernavin and Chikhachev families) to the nearest town, Teikovo.

A wooden church from the outdoor museum at Suzdal

A rich peasant’s house at the outdoor museum at Suzdal.

Interior of a rich peasant’s house, from the outdoor museum at Suzdal.

We also traveled to another village, where the church still stood, and to nearby towns that had been significant in the mid-nineteenth century. Of these, Suzdal is now a major stop on the tourist circuit known as the Golden Ring. It features two medieval monasteries and an outdoor museum with reconstructed village houses from the nineteenth century. We also visited Rostov-the-Great, home of a magnificent medieval fortress containing several cathedrals, which should also be a tourist site, but is somewhat off the beaten path and so not as prosperous as Suzdal.

A bell tower from a monastery in Yaroslavl, a beautiful and mostly thriving city on the Volga river.

Sadly, Yaroslavl is also the home of what I believe may be the world’s ugliest building.

Skyline of Vladimir.

Finally, we visited neighboring Yaroslavl, and the former provincial capital, Vladimir, both cities that are adjusting rather better to post-Soviet times than Ivanovo, thanks in part to their more diverse economies and several significant historical sites, which bring in tourist money.

None of these visits were really essential to my research, but they helped me absorb the setting in which the events of my study took place. Perhaps most exciting of all my side-trips, though, was a last-minute excursion to tiny Shuia. I went because I’d been told at the Ivanovo archive that the little town museum in Shuia had a few books that had belonged to the father of the family I studied. It turned out they had a shelf full of Andrei Chikhachev’s bound volumes of the newspaper Agricultural Gazette, full of articles he had written, and with his own marginalia! Not a bad surprise for my last day of research in Russia for that project.

On an article titled “The Influence of the Moon on Trees” Andrei wrote, “Rather useful article” (perhaps not the most revelatory annotation, but characteristic of Andrei!)

These are some of the aspects of historical research that don’t really get talked about in books or classrooms, though they should be. For my current research I have so far been working in the central State Archive of the Russian Federation in Moscow, and will be doing more in St. Petersburg and possibly in archives in France and Germany, so my experience has been rather different. I can order xeroxes easily in Moscow, so I can gather my materials much more quickly, and I am less immersed in the process, since I work for short periods on summer “breaks.” This is probably more typical of most historians’ archival research, and I must admit there have been far fewer moments, lately, when I wished to myself that I had chosen to study Italian history instead.

 

For more images related to the people and places in my book, look here.

 

NOTE ABOUT IMAGES: All photographs are my own (© Katherine Pickering Antonova 2012), unless otherwise noted. Please don’t use or distribute without my permission. Photographs of archival documents were taken with permission from the State Archive of Ivanovo Region.

Dickens and Dostoevsky Just Got Real

Check out this nicely written and detailed summary of a recent dissertation that should be getting a lot of attention, in my totally-not-humble opinion (the author may just happen to also be my spouse).

Which reminds me to mention that the site that produced the review is a really interesting one: it provides reviews of recent dissertations from all fields, hopefully helping to extend their reach into non-academic circles, or at least across disciplinary boundaries.

What Is Socialism?

Judging by the way the media and the GOP talk about it, you might conclude that socialism is anything the GOP disagrees with.

Part of my job is teaching what socialism actually is, so I get asked this quite a bit.

First, socialism isn’t one thing. There is socialism the idea—and the idea has been expressed in different ways by different people—and then there are a vast variety of ways that the idea of socialism has been implemented in various times and places.

When I talk about socialism in my classes, I usually start by drawing an umbrella on the board. Because socialism is an umbrella term for all these different manifestations. Only one of the many manifestations of the socialist idea is “Communism.” And then there’s Soviet Communism as opposed to, say, Maoist or several other kinds, and Soviet Communism also changed dramatically over time, so there’s really no such thing as one Soviet communism. More on that below.

At the most basic level, the core of socialism that all these variable manifestations share is the notion that it would be a good thing if economic resources were distributed equally in a society.

Here’s just the start of a list of things not all socialists agree on about how that equal distribution would happen:

1. By “equal” distribution of resources, do we mean absolute equality (everyone has the same) or do we mean relative equality (some degree of correction of the enormous gaps between rich and poor that characterize capitalist systems)?
Various mid-nineteenth century experiments in communal living aimed for absolute economic equality. Today’s European social democracies aim only for a modest degree of relative economic equality.

2. How would this distribution of resources be imposed, regulated, or maintained?
Since the assumption is usually that a society with non-socialist economic principles would be shifted to socialist economic principles, some mechanism would be required to effect the shift of economic resources from just one part of the population to a more even distribution across the whole population, and then to maintain that relative balance as time passes. There are many, many possible ways for this to happen. Just a very few of the possibilities are:

    A. Voluntary sharing of wealth (as in a commune or co-op)

    B. Government regulation and taxation provide incentives and other “invisible” methods of shifting some limited economic resources to the poor within an essentially capitalist economy.

This could in theory be done in a very minor way (as it is in all industrialized countries right now), in a moderately progressive way (as it is in some social democracies in Europe), or aggressively, which has arguably never yet been tried.

    C. Government legislates salary caps and high minimum wages to deliberately even out wealth

I don’t know of a case where this has been tried to any significant degree.

    D. Government nationalizes property (wholly or partially), sets prices, and otherwise directly controls the economy, seizing and redistributing assets as necessary

The Soviet Union did this in the early years following the October Revolution, in a policy referred to as War Communism, since it took place during a civil war and was justified as necessary to save the revolution in its infancy. Lenin changed this policy—reintroducing a limited market and limited private property—as soon as the Civil War ended, though doing so was very controversial in the Party. We don’t really know what Lenin intended in the longer term, since he died in 1924.

    E. Government plans economic production ahead of time (wholly or partially), determining what is made or exchanged by whom on what terms

The Soviet Union began doing this with the first Five Year Plan in 1928 (under Stalin), and it characterized most of the Soviet economy in subsequent decades.

    F. War/revolution are employed to redistribute wealth by force

Arguably, this is another way of describing the Soviet policy of War Communism, and it also fits other examples of forced requisition during wartime in many other parts of the world.

3. What resources are we talking about? Just cash? Money and property? How about commercial services? Does socialism address political equality directly?

Traditionally, the discussion of what to equalize is about tangible economic resources, not health, education, or political rights. Although there are clearly connections between economic resources and how easily you can access medical care, education, or civil rights, socialism is at its core a theory about economic resources. The idea is that once those are equalized, the rest follows. Access to intangibles such as political rights, health, safety, and knowledge is really about the distribution of power, and is therefore fundamentally political, not economic, in nature.

IMPORTANT: Socialism, as theory, is an economic idea, not a political idea. So there is no inherent connection between socialism and any particular form of government.

Sing it with me: Economic ideas are about how money and other tangible resources are distributed. Political ideas are about how power is distributed.

Many Americans assume that there is some inherent connection between capitalism and democracy, and between socialism and authoritarianism. There is no such inherent connection, neither in theory nor in practice. There have been democracies with socialist economies (much of Scandinavia in recent decades, for example), and democracies with capitalist economies (such as the US). There have been authoritarian governments with capitalist economies (most absolute monarchies in the nineteenth century), and authoritarian governments with socialist economies (such as the USSR).

While all socialists like the idea of some degree of equality of wealth, socialists have not historically agreed on their preferred form of government. Since the collapse of the Soviet Union, however, most (though not all) people arguing for socialism in the industrialized world prefer democratic government and non-violent methods of wealth redistribution.

It should go without saying—though sadly it does not!—that by “people arguing for socialism” I do NOT refer to the U.S. Democratic Party. Economically speaking, the American Democratic platform is on the conservative end of the spectrum, and from a European point of view it is virtually indistinguishable from the U.S. Republican Party on economics. By “people arguing for socialism” I refer to people actually arguing for socialism, such as the Socialist Party USA or the American Social Democrats. Ask them what they think of Obama, I dare you. (LOL)

4. Is socialism something that can be achieved, or does it happen “spontaneously”?

This has historically been an incredibly contentious question. Many proponents of socialism consider economic equality a goal that can be worked for, and perhaps fought for. Others acknowledge that economic equality would be an improvement for human societies over capitalist or other economic systems, but do not believe that socialism can be created “from above,” that is, imposed by professional revolutionaries or government fiat.

Karl Marx inspired many professional revolutionaries, including the Bolshevik Party that took power in Russia in October 1917 and set about imposing socialism from above. Marx himself, however, believed socialism would happen “spontaneously,” from below: economically exploited classes would recognize how they were exploited and work together to take control of their economic power as producers, eventually producing a system characterized by greater economic equality, which Marx identified as “socialism.”

He wrote about all that in the second half of the nineteenth century, as labor in Europe was indeed being grotesquely exploited. After Marx’s death, labor in Europe and the U.S. began to organize and to strike for better conditions. As it happened, the general revolution Marx predicted did not occur (at that time!) — instead, the owners and managers compromised enough on working conditions and wages that workers began to enjoy (just) sufficient health, safety, and access to material goods and education that they were no longer motivated enough for a revolution along the lines Marx expected. The democratic socialism and welfare systems of liberal democracy that dominated Europe after the Second World War have essentially held that compromise in place. Until recently, that is: deregulation, anti-union legislation, and the defunding of welfare and other public programs in the US and (to a less extreme degree) in Europe have begun to shift the labor-management relationship backward again. It remains to be seen where this relationship will go, but I find the Occupy movement a fascinating early sign of resistance to these anti-labor policies. I say this only to point out that Marxism is not necessarily a relic of history, but is still a framework that can be applied to working conditions and economic systems today.

Okay, so that’s socialism. What about Communism?

Communism is even more confusing!

Communism has a lot of meanings, too, depending on the context in which it’s being used.

Marx and Marxists have been known to use “socialism” and “communism” interchangeably, but when they’re being picky, socialism is often referred to as a transition stage on the way to communism. In this sense, socialism marks a stage after a revolution has abolished private property, but before government has “withered away.” Communism then describes a utopian stage where government is unnecessary—society is classless, all labor is equal, and the system can maintain itself.

What gets really confusing is when a country like the USSR undertakes a revolution and declares itself a Marxist state — what they said they had achieved was not socialism or communism, but a revolution that was directed toward that end. So, when the Bolshevik Party that seized power in Russia in 1917 changed their party’s name to the Communist Party and their country’s name to the Union of Soviet Socialist Republics, they were using those terms aspirationally—they were aiming for socialism and communism. In the years that followed, the Party dithered about just how much socialism had actually been achieved at any given point, but technically communism, if you read your Marx and Lenin, as every Soviet citizen did, remained on the horizon.

That would be confusing enough, except that these aspirational names have by now become descriptive of the countries engaged in this experiment. So, while the Soviet Union was attempting to achieve Communism, it became known as “a communist country,” and thus we began to speak of “Communism” not as the utopian final phase of Marxist development, never (yet) achieved on earth, but as “what they’re doing over there in the Soviet Union.” This is an extremely problematic usage when even in the USSR the Communist Party admitted that what they were doing was not actually Communism!

Since the end of the Cold War (at least), most scholars don’t like to refer to anything the Soviet Union was actually doing as “socialism” or “communism” because the terms are so imprecise. We tend to use those words mainly to describe the theories. The reality in the Soviet Union is known by the specific policy names used by the Party at the time — such as War Communism or the New Economic Policy or Perestroika — or in more general contexts by the leader who is associated with a certain cluster of policies, hence “Leninism,” “Stalinism,” or for the Brezhnev period, “stagnation,” a term coined by Gorbachev that is irresistibly evocative, if not literally accurate. One can also speak accurately of the type of socialism actually practiced in the Soviet Union as “planned socialism” or simply a planned economy.

Anarchism

A final note on anarchism, another frequently misunderstood term. Anarchists do not advocate chaos. Anarchism is also something of an umbrella term, encompassing both individualists and collectivists, but the collectivist branch can be seen as a variant of socialism. What distinguishes collectivist anarchists is that they are particularly concerned with the role of government in establishing or maintaining economic equality—namely, they want government to stay the heck out. A case can be made that if there were ever hope for the Bolshevik Revolution to live up to any of the theoretical principles on which it was based, this hope was derailed by the domination of government and Party at the expense of workers. Other arguments can be made to explain the many hypocrisies of the Soviet state, but there’s no question that Lenin’s notion of the Party as “vanguard” leading the revolution on behalf of workers resulted in a much more powerful role for the state than many socialists condoned at the time or since.

Russians Love Their Children Too

By Rita Molnár, via Wikimedia Commons

I’m quoting Sting, of course, in his famous — and at the time daring — song, released in 1985, during the Cold War. He was expressing the hope that Russians, though our enemies, were human too, loving their children enough not to push the button and start a nuclear war. Fortunately, it turned out that indeed, Russians love their children, too.

Imagine a bunch of Russians on an internet forum debating the merits of capitalism. Imagine that they’re talking about the United States in the 20th century as if it were all one, unchanging thing. As if the Civil Rights movement, the Great Depression, and post-Reagan neoconservatism were all happening simultaneously, and all characterize who we all are as a people. Imagine that people are saying all Americans have been merely reactive to our regime, that we are materialistic products of the free market, which drives our every action. Imagine that these writers on an internet forum acknowledge no social or cultural changes of any kind, and seem to believe that all our political leaders (FDR and Hoover, Coolidge and Clinton, Bush (either Bush, what’s the difference?), and Obama) had essentially the same outlook (because after all we’ve been a capitalist democracy the whole time, haven’t we?). Now imagine that these Russians are arguing that these “facts” about the U.S. prove that capitalism must necessarily lead to chauvinistic imperialism and enormous gaps between rich and poor to the degree that thousands of people are homeless in the richest country in the world (Russians didn’t know homelessness until they “democratized,” a correlation that could easily be misunderstood as causation).

It’s all patently ridiculous, of course. It’s hard to even know where to begin to correct all the false assumptions embedded in that argument.

Yet, I’ve heard it — often. Pretty much every time either “capitalism” or “democracy” is mentioned in my presence when I’m in Russia, actually, most of the points I’ve outlined here are made to me as if this should suddenly make me understand everything about my homeland that I’ve been blind to all these years.

The thing is, Americans just as frequently make the same mistake about the Russians. Every time you see a bunch of Americans (often on an internet forum) talking about how Russia proves that socialism isn’t possible, you’re seeing that same mistake being made.

I wrote that imaginary scenario by reading an actual internet argument by Americans about the Soviet Union and socialism and simply replacing the USSR with the US and socialism with capitalist democracy, to show how silly it is.

You can’t look at one moment in time and use it to characterize a whole century.

It is a mistake to confuse rhetoric and reality.

It is also a mistake to assume that socialism, an economic idea, has an inherent connection to authoritarianism, a political system. Socialist democracies exist, and so do authoritarian societies with capitalist economies.

It’s a mistake to confuse a people with their government.

It’s a mistake to lump hundreds of millions of people together and imagine they all think and behave the same way.

Yet everybody makes these mistakes, all the time. People are ignorant everywhere, too — which is only natural. You can’t know about everything, and it’s easy to be unconsciously influenced by media. Does anybody think middle-class New Yorkers really get to live in apartments like the ones you see on Friends? If you do, I have a bridge to sell you. For the same reason, you shouldn’t imagine that the movie From Russia with Love tells you anything about Russia — it tells you only what those western filmmakers imagined about Russia for their own artistic and economic purposes. See my previous post on Rocky IV.

Interestingly, I’ve noticed that there are a lot more realistic Russian films set in normal-looking apartments than there are American films featuring people living in anything like any dwelling I have ever known in real life (though Russian TV is getting weirder and weirder and there are fewer realistic films and more ludicrous shocksploitation ones being made, so this is changing; I refer mostly to the 1970s-1990s).

I don’t think most Americans walk around deliberately spreading unfounded assumptions about other countries. We have a reputation abroad for doing it more than anyone else, though, deliberately or not, and that’s embarrassing. I find the most effective way to remember not to make these kinds of mistakes yourself is to see how it feels when someone else makes them about you. I’ve lived in Norway and in Russia for fair amounts of time and traveled briefly around Europe, so I’ve collected my share of anecdotes of this nature. A woman in Prague in 1992, who checked my passport at a currency exchange point, saw that it was issued in Chicago and asked me if I was afraid to live there. I thought it was the usual “don’t you get shot by gangs whenever you set foot outside” thing, but it turned out it was Al Capone — she thought he was still alive and busy! That was not the last time I came across someone who thought Al Capone was our contemporary.

The first time I lived abroad in 1991-92, I was continually asked if I lived in New York. No. Miami? No. L.A.? No. Well, but you can tell me what they’re like, right? No, actually I’d never been to any of those places. WHAT?!! But you said you were American?! Even those Europeans who have traveled to the US often visit only a major city or two, so many have little idea what’s “in” the rest of the US. Outsiders’ perceptions of our economic status are also often taken from Hollywood, or otherwise filtered through distorting lenses. For example, when I taught English in St. Petersburg in 1998-99, a student of mine once confessed to me that he had seen a documentary about the homeless in America back in the ‘80s, and because he saw the homeless people on TV wearing blue jeans — which at the time cost a month’s salary in Russia — he concluded that even the homeless in America were rich!

Before you laugh too hard, remember that the assumptions Americans make about other countries are often distorted in exactly this way.

Unlearning High School in Five Painful Steps

By Maho mumbles, via Wikimedia Commons

This is addressed to all the college freshmen out there.

There are a few habits you may have learned in high school that will have to be adjusted in college. Remember that the chief difference between high school and college is that high school aims to fill your brain with some basic knowledge of the world and introduce you to the main fields of inquiry (mathematics, science, social science, humanities, the arts), while the main goal in college is to train you to think critically about the world: to analyze, to find and sort through new information effectively, and to apply lessons from one sphere to another. Each discipline uses different techniques, which you are meant to familiarize yourself with as you take courses in different departments, but the overall goal of all disciplines is to train you in advanced critical thinking. Later, as you choose a major, you will also be expected to master many of the subtleties of a specific discipline, more narrowly defined than they were in high school.

In the case of history, in high school you are taught the basic facts of history and you are perhaps exposed to some questions any citizen might ask about our past. In college, you are expected to act as an apprentice historian, to try out the more complex methods of professional historians in order to understand them fully, and to ask deeper questions about the nature and uses of history, and how history influences our society.

In other words, in high school you are told a story; in college you are invited to discover how stories are written and what they may mean from different points of view.

1. The 5-Paragraph Essay

Frequently taught in high schools, the 5-paragraph essay model is a solid way of teaching students the basic outline of most scholarly writing: an introduction that sets up a problem and a resolution to it, a series of points of evidence supporting the resolution, and then a conclusion that summarizes the case made and connects it to broader implications. This is a good basic model. Naturally, however, not every argument relies on precisely three points of evidence, and not every introduction or conclusion can best be articulated in precisely one paragraph each.

The rigidity of the five paragraphs can safely be left behind in college, though you should retain the overall structure of introduction-problem-resolution-evidence-conclusion.

In college we expect you to be familiar enough with this model to reproduce it reliably, and we now want you to focus on content: think through real problems and evidence and come to your own reasoned, supported conclusions.

This difference implies something very important about how your writing process in college should be different than it was in high school. When your goal was just to practice the 5-paragraph model over and over, it made sense to start with an outline, fill it in, and call it done. That is not sufficient in college, because it allows you only to record whatever you already know, not to discover new knowledge.

In college, writing should be a process of sorting through complex information, understanding it better, and then figuring out what you think about it. To do this properly, you must write many drafts. Start by explaining the evidence and arguments from your source texts in detail in your own words — that’s the best way to figure out what the evidence really is. Then start to ask questions about what the evidence means, what it adds up to. As you clarify the questions the evidence can help you answer, you will gradually come to some conclusions about how to answer your questions. Only at this point can you put all this into an outline and revise according to the introduction-problem-resolution-evidence-conclusion model!

2. You must do the reading at home

The number of hours spent in the college classroom is obviously far fewer than in high school. This is not because college is easier, or because it’s meant to be done on the side while you work (or play).

The way college courses are structured, the expectation is that a full load should be at least 40 hours a week, or the equivalent of a full-time job by itself. You should expect to work an average of 2-4 hours at home for each hour you spend in class (though with practice you will find that you spend less time than this some weeks, and much more in others).

Because class time is so limited, we cannot waste it sitting and reading in a room together. Class time is for synthesizing the material, asking questions about it, and learning how to identify patterns in it. For that time to be worthwhile, you must come to class fully prepared.

At home you should be mastering the basic facts covered in the course (usually provided in the textbook) and absorbing the content of the other readings, so that in class you can think about the questions, problems, and arguments they raise.

In class, you should be taking notes, but don’t try to write down every word said. If you are sufficiently prepared you should not need to write down every factoid, but should be able to focus on questions, problems, and patterns.

3. You will not be rescued from disaster at the last minute

We can fail you, and we will. I understand that it has become common in American high schools to never fail a student no matter how poor their performance (which, you may have noticed, only serves to bring you to college grossly unprepared, a real disservice in the long run), and it is common to allow make-ups, revisions, extra credit, etc., to improve grades. Do not expect this to happen in college. You are personally responsible for your performance, and your own learning.

If we could put the knowledge and skills you need on a flash drive and stick it in your ear, we would, but it doesn’t work that way.

Think of college as being like a gym membership: you pay to have access to the facilities, and to trainers who can help push you along, show you the most efficient way, and keep you from hurting yourself. But you still have to do the work, or you’ll never get in shape.

4. Assessments are far less frequent, so they count more

In college it is typical to have only one or two exams per semester, and perhaps one or two additional papers (this can vary widely; when I was an undergrad, most of my classes had just one paper, or one exam!). This means you must master a greater amount of material for each assignment than you may be accustomed to, and the grade of each assignment will count more in your final course grade. Final exams frequently ask you to synthesize material from the entire semester, to enable you to tie together everything covered and to make connections among different places and periods (for a history class).

So studying is not about memorizing details just long enough to pass a test, then forgetting it all. Generally, there is less memorization needed at the college level, but it is vital that you fully understand concepts and that you think through the material being covered. Always ask how each piece of material connects to others, and why it matters — these are the most significant “facts” you need to learn.

And, of course, remember that it’s not okay to “bomb” one exam or paper — because of the smaller number of assignments, this will make a big impact on your final grade, and it won’t be possible to make up a bombed assignment later.

5. Feedback matters

In high school you may have found that you got very small amounts of feedback very regularly, and that it was generally positive. (The theory that constantly bolstering students’ self-esteem will help them succeed — though now convincingly debunked in my opinion — has been dominant in the schools since I was in kindergarten.)

In college it is more likely that you will get feedback relatively rarely, but it will be detailed and focused on what you need to do differently next time. The idea of this kind of feedback is not to be mean. Feedback is never about you as a person, but about the written work you turned in on a given occasion.

The instructor’s goal is to help you, by showing you where you need to improve most, so that you can do better next time. Always pay very close attention to feedback; don’t take it personally, but do consider it a guide to how to approach your next assignment (even if that next assignment is in another course!). If you don’t understand the feedback you’re getting or it isn’t enough, talk to your professor!

You’re an adult now. If they don’t hear from you, they assume you know what you’re doing.



Note: much of my information about what the high schools are up to these days comes from colleagues, as does the gym metaphor, for which I will be forever grateful.

Revision

By Hownote, via Wikimedia Commons

There are two kinds of people in the world: those who revise, and those who don’t. The former are writers, the latter are not.

This implies that the way to become a writer is to revise. A lot. And that’s absolutely true.

Yet, many novice writers, especially college students who are writing a lot of papers under tight deadlines, persistently believe the myth that by “writing process” one means: start typing, continue until you hit the word limit, proof-read or spell-check, and hit “print.”

This is a recipe for papers that—even if full of brilliant ideas—probably can never make it out of the B-range, and very often are much worse.

Almost any experienced scholarly writer can tell you that revision IS the writing process. How you get a first draft on paper matters very little, and every writer will have her own habits (and superstitions) about how to do it. But taking the usually mushy, half-formed, inarticulate ideas from your own head, where they are warm and happy and seem clear, and translating them into a form that an unknown reader can quickly and easily understand is a complicated craft that involves many steps.

Moreover, almost anyone who’s ever written something truly original or exciting will tell you that most if not all of these ideas come out only in the process of writing (that is, revising). What seemed brilliant when you sat down at the computer becomes “belaboring the obvious” after a few hours of working the sources and your own thoughts into organized structures. It is this process that usually reveals the connections and inconsistencies that lead to brilliant new ideas.

Most students turn in papers with a thesis at the end of the essay (regardless of whatever it was they wrote at the end of the introduction, way back at a different stage in their thinking and now forgotten). Often, this thesis-at-the-bottom is very interesting, because it was developed out of a detailed discussion of the evidence. But, unfortunately, most students stop and print at this point because they run out of time. These essays are never more than half-baked, and serve only as a record of the student’s thought process.

To make it a solid essay, the student must recognize that when that thesis finally “articulates itself” at the end (that’s often what it feels like when it happens), they have merely reached the half-way point in the writing process. Now, it is time to translate the “writer’s draft” into a “reader’s draft.” The new, richer thesis must be put at the end of a new introduction that tells the reader what the paper is, now, really going to be about. The discussion of the evidence must be re-worked for the convenience of the reader, not the writer. And finally, the student must reflect a bit on what has been accomplished, and put this new perspective into a new, real conclusion. Only then have you reached the point of polishing the prose and proof-reading for errors. But having got here, you will have the satisfaction of knowing that your essay is finely crafted and original, and that you have expressed yourself effectively.

Even when students do recognize what the revision process is really about, they often claim they still can’t do it, because they believe that revising takes more time than they have, or is not worth the time put into it, because after all the great ideas are on paper somewhere and that’s all that matters.

Think about it: do you want to bank your grade on the idea that your TA or professor will do all that work I’ve just described to untangle your paper for you, so they can have the privilege of receiving your great ideas?

They read many, many papers and some of them will be just as interesting as yours, but better organized and clearer. They can only put the same amount of time into each. They have seen (and probably tried themselves, at some point) every trick there is involving fancy fonts and margins, high-flown language, and “filler,” and recognize all such silliness for exactly what it is (which doesn’t stop them from being annoyed by it).

More importantly, though, in the long term learning to write a solid paper is easier than trying to get by with unrevised schlock. In fact, in purely practical terms, the single easiest thing you can do to improve your grades on essays is to spend more time revising (as long as you do it mindfully). Putting your exciting thesis exactly where the prof expects to find it, and following it with a series of points of support, each accompanied by at least a couple of paragraphs of thorough discussion complete with specific examples, caveats, counter-arguments, and elaboration and interpretation of all quotes, can hardly help but result in a good grade with any professor or TA (assuming of course that you’ve correctly understood and followed the assignment, and read and understood the sources).

You don’t usually have to guess what the professor wants—the standards are usually quite predictable for a short college-level essay. And if you’re reading the sources and understanding the material, there’s really nothing stopping you from doing well but time. Start your next paper with twice as much time to work as you usually give yourself. The beauty of getting really good at revising is that it gets faster and faster with practice, so that eventually you can expect to need little more time than you probably take now, but will produce much higher quality work.

Obama the Professor


“How is it that not one of you has actually read the syllabus?!” Heh. Via Wikimedia Commons.

There have been a lot of profiles written about Barack Obama, and I have read many of them with interest. As usual, I tend to read them with half my mind thinking about the difference between these kinds of profiles written in the moment, and the versions of a life written by biographers and historians long after the fact. It’s the sort of exercise that entertains me.

I don’t claim to have any profound predictions about Obama’s legacy, or even unprofound ones. I’m merely interested to watch it unfold. Right now, what interests me is the huge variety of interpretations about a man who is alive and working and accessible (more or less) to the journalists doing the writing. Historians are used to trying to re-construct the life of a person who is long dead, whose friends and coworkers and family are all long dead, and who may, in many cases, have left precious few written traces of his or her actions, let alone thoughts (chances are, in the case of a “she” there’s even less than in the case of a “he”). To me it seems like an embarrassment of riches to write a life of someone still living, with the benefit of interviews where you can ask whatever you want, with extraordinary documentation, and access, potentially, to thousands of people who know and work with him.

With this touch of envy in mind, I always feel a bit dissatisfied by contemporary profiles of important people. Especially when there are a lot of them, as there are with Obama, it seems like the more you read, the more it becomes noise, and the less you can pin down who this person is.

I have particular difficulty with the classic lengthy profile that often appears in periodicals like Vanity Fair or The New Yorker. You know the kind, where the author plucks from obscurity a handful of random but colorful anecdotes, asks some random but colorful questions, and mashes the whole thing together into a rambling “think piece” that feels profound, but…isn’t. It leaves you knowing less than you did before you read it, and somehow all the anecdotes taken from interviews and in-person observations feel inauthentic. One has a sense that the writer was gathering them like a preschooler collects bits of paper for a collage — “ooh! A red one! Score!”

I don’t mean to sound snarky. I really enjoyed the recent piece in Vanity Fair by Michael Lewis. It struck me as unusually insightful about what it’s actually like to be president. And I think he may have asked the most brilliant question I’ve ever heard asked of a president for the purposes of finding out his character:

“Assume that in 30 minutes you will stop being president. I will take your place. Prepare me. Teach me how to be president.”

But I came away from the article having little if any insight into Obama.

One of the most insightful people writing about Obama, I think, is Andrew Sullivan. Sullivan tends to characterize Obama as a conservative, even a paragon of a conservative. I’m of the school that thinks that’s incredibly accurate on a number of levels (whether that’s a good thing or a bad thing and on which levels is another question, of course).

Much more often, Obama is accused of being a kind of Bambi — too soft on this or that, unwilling to take a stand when stands need to be taken, unwilling to push hard, unwilling to ram his will through no matter what. (Of course, he’s also accused of the opposite, but I’m trying to pull some of the more prominent threads out of the infinite cacophony here).

But the thing about Obama that has always struck me as most obvious, even blinding, is something I don’t really see get mentioned in these profiles. I’m talking about the fact that Obama is a professor. He was literally a professor when he taught law at the University of Chicago law school (disclaimer: at the time he was doing that, I was living in an undergrad dorm next door, and some friends and I may have gone wading in the law school fountain once and been yelled at by some law school prof who almost certainly was not Obama, though I like to tell myself that it could have been). Less literally, he’s always struck me as being a professor type, and I say this as a professor type with a lot of professor-type acquaintances, in addition to having done my time (and then some) staring at a podium from the other side of the room.

Of course the media has not missed the fact that Obama was a professor. This piece was particularly interesting. And he’s fairly often criticized as “professorial” when he’s being stiff and wonkish (but even more often, in 2008 especially, he was criticized as speaking in a “lofty” way devoid of detail or substance — another example of the media not being able to make up its mind about him).

I think he’s professorial in much deeper ways than speaking style, and I think it explains the sense people get of his conservatism (which often outrages his base) as well as the “Bambi” meme.

Run with me for a minute here. Imagine a college classroom, a small seminar class. The subject doesn’t matter. You’re the professor, and it’s your job (a) to get the students engaged and talking, (b) to get them to understand the material being covered, and most importantly (c) to get them to think critically, for themselves, about that material.

In that situation, you don’t go in guns blazing and force people to obey your will. Why would you? That’s just a completely irrelevant, as well as unethical and pointless, approach.

You also (if you know what you’re doing at all) don’t go in there and tell the students what’s what. Even when you’re really, really sure you know what’s what. Even when you’re feeling frustrated with the impossibility of the task in front of you and you are incredibly tempted to just skip to the end and tell them the answers already. Tempting as that can sometimes be, you do know it would be a hollow and temporary victory, because they wouldn’t really take anything in, and telling people what to think is not your job.

You also don’t go into that classroom with a goal of changing the world. You don’t even aim to turn those students in that room into scholars. Most of them probably couldn’t get there, and more importantly, there’s no reason for them to get there. They have other things they need to do, and it’s your job to help them do that. You’re not making clones of yourself. You’re giving people the knowledge and skills they need to define and pursue their own goals.

You aim when you go into that room to move the students forward from where they were when you got them.

You leave your own ideologies and convictions behind when you walk into the classroom, because you know they’ll just get in the way of the process at best, and completely undermine your ability to do your job at worst.

You don’t preach to the choir. You work with ALL the students. Even the ones who seem hopelessly behind.

With experience, you learn that students can always surprise you. All of them. Some who seem really with the program can turn out to be putting on a show for a grade, and not really understand or care about the material or learning in general. Some who seem like they don’t even belong in that room will work their butts off and ultimately make you feel stupid and lazy with their hard work and original insights. You never know. And it’s not your job to guess, or care, what each student is ultimately capable of. You take them as you get them, and you work to move them forward from wherever they are.

Sometimes, as part of that work, you play devil’s advocate. You find yourself saying things you don’t remotely believe, and you actually try to put conviction into your face and voice because you’re so focused on seeing the lightbulb go off in the students’ eyes, the expression on their faces that means they’re thinking, really thinking.

You willingly give up a lot of control of the classroom — control you know how to use, and would on some level love to use — because you know from experience that you can’t do the thinking and acting and learning for them. You can only push, facilitate, re-direct. They’ve got to do the thing for themselves, ultimately, or it won’t stick.

And then, after a semester of all this hard work, which you do pretty darn selflessly because you really — REALLY! — believe in the inherent value of the process…at the end of the semester, after you’ve turned in your grades, you get your evaluations. And you find out just how many students blame you for their own unwillingness to invest themselves in learning. In other words, you find out that their failures will be billed as your failures, while their successes are their own.

What does all this have to do with Obama? I think his personal convictions are so hard to read because as a representative of the people, whose job is to govern, he actually tries to represent the people, and part of doing that well is putting your more idiosyncratic attitudes out of even your own mind.

I think he listens to all sides — even the sides that hate him irrationally and eternally — because that’s his job. Like it or not.

I think he’s not saving the world because, well, first, he can’t, and second, because he realizes that. I really doubt he sets his sights that high. And I would be astounded if he looks on politics as the epic battle between Democrats and Republicans that it is often portrayed to be by the media. He’s a problem-solving type of thinker rather than an ideological type — that’s been widely observed and is after all pretty characteristic of many post-Boomer Americans — but more than that he’s a professor type. That means focusing on taking what you’re given and moving it forward, doggedly, semester after semester. That’s very different from viewing your job as a matter of wins and losses.

A professor is rarely confrontational toward students, except perhaps temporarily to make a point. Most professors genuinely don’t even feel confrontational about their students’ ideas — if you get into this gig at all, you care pretty strongly about the integrity of the process. Truth, to an academic, should be not this answer or that answer to a problem (there are rarely neat and final answers to the questions asked at college level and beyond), but the rigorously honest pursuit of a solution, using all available tools. To do that, you have to listen to everyone, even the ones who seem nuts. They are the most likely, in fact, in my classroom experience, to insert something really innovative into the conversation (though often unintentionally), and they are often the ones to name the elephant in the room. (Naming the elephant in the room is something most academics welcome; most politicians are the ones putting curtains up around the elephant.) Even the students who don’t actually contribute have to be included in the process, because otherwise the process loses all meaning and integrity.

In the Michael Lewis profile, Obama is quoted saying some remarkably professorial things. In a passage about the writing of Obama’s Nobel speech, for example, he is depicted as instructing his speechwriters to put together his favorite authors’ ideas on war — he gathers his sources first, in other words, like an academic would — and he apparently explained to his interviewer that, “[h]ere it wasn’t just that I needed to make a new argument. It was that I wanted to make an argument that didn’t allow either side to feel too comfortable.”

That’s how you lead a classroom discussion. That’s how you compose an argument that gets people to think, instead of telling them what to think.

Then Obama explained his goals for the speech: “What I had to do is describe a notion of a just war. But also acknowledge that the very notion of a just war can lead you into some dark places. And so you can’t be complacent in labeling something just. You need to constantly ask yourself questions.”

This is professorialism at its best. Nothing is black and white. The devil is in the details. Caution. Never get ahead of your evidence. Always. Ask. Questions.

Narrating Obama’s decision not to approve a no-fly zone over Libya that was intended to give an appearance of protecting innocent civilians but could not possibly have helped, Lewis quotes Obama as saying, “I know that I’m definitely not doing a no-fly zone. Because I think it’s just a show to protect backsides, politically.” This stance could read as noble. A president who puts morality (and practicality) above politics. It could be that. It could also be the overwhelming impatience of the true scholar with anything that confuses the fundamentals: the questions, evidence, and reasoning that can solve problems. Arguing about how this or that method of problem-solving looks — or finding ways to avoid the problem altogether — is a waste of time when one could actually be coming up with an answer. Even if it’s not ultimately a satisfying answer, at least you tried, and learned something from the effort that may help future efforts. That’s the pursuit of knowledge.

This professorial quality implies a few things. Most importantly, it implies that Obama believes in and is animated more by the process of governing democratically than perhaps any general policy principle. Compare this to his record, and I think you find a lot of consistency, especially in places where allegiance to party platform or political expediency is sometimes absent. I don’t want to imply that Obama’s professorial tendencies define him completely. None of us are defined by anything so simple. There are no doubt many sides to his character and his decision-making, as there are for all of us. But I think this one part is often unrecognized. I also don’t make any claims about whether these tendencies are good, great, suspect, or terrible in a President of the United States. Like any good prof, I’m just throwing it out there, to see if it makes people think.

Rules

via Wikimedia Commons

Sometimes my students get a little too hung up on rules when it comes to writing essays. Mind you, some rules are vital—if your writing is ungrammatical, readers will have trouble following what you are saying. Other rules (which are really more like guidelines) relate to structure and flow, and they also help readers to understand you. Then there are still other rules, which don’t actually contribute much to the reader’s ability to understand and remember your text. These rules aren’t so important. The trick is knowing the difference.

Mind you, there are individual readers and—cough—the occasional rogue professor who care very deeply about this third category of rules, and if you’re writing for one of those people you might as well suck it up and follow those rules, too. But you should still know the reasoning behind them, and why in other contexts it might be okay to ignore them.

You should never use “I” in an academic essay.

Often, when a teacher tells you to “not use ‘I’” or to not use it so much, you can safely interpret this as “I need to give more substance to my opinions by inserting more reasoning and evidence, and possibly more sources, into my essay.” In other words, what this teacher often really means is that you’re asking the reader to believe something just because you said it was so – your essay is full of phrases like “I think…” and “I believe…”.

In other instances, students themselves or their teachers may fear that using “I” makes an essay ‘sound too subjective’ no matter how it is used. The truth is, if you are a human being authoring anything, that thing you author cannot be truly objective. There is a difference between saying, “John’s a fraud,” and “I think John’s a fraud,” and it is intellectually honest to differentiate for your reader what is your opinion or reasoned conclusion, and what is taken from the sources you’re citing. In these cases, using “I” is advisable.

However, it is true that some writers use phrases like “I think” more often than is required by the content – it becomes a kind of nervous tic. In this case, many of the ‘I’s can be safely eliminated or changed.

And remember that you can always find another way to convey that an idea is yours, to keep the ‘I’s from getting excessive or to please a professor who, for whatever reasons, particularly despises the presence of the word ‘I’ (though if you dare you might suggest they try searching it on Google Scholar, to see just how prevalent it is in scholarly journals from every field, including the hard sciences).

Note: Years ago, when scholars were perhaps not quite so resigned to their subjectivity, it was common to assume a sort of royal ‘we’ even when a paper had only one author. This is now frowned upon as misleading. The age of intellectual property has trumped the age of positivism! Nowadays, when an author uses “we” it generally refers to the writer and readers together, as in, “now we turn to a new subject.” Some people like this construction (it makes it easier to avoid the passive voice and nominalizations), and others dislike it (they find the intrusion of writer and reader into the text a distraction from the subject at hand). It’s largely a matter of taste and context.

You should never use the passive voice in an academic essay.

You should avoid split infinitives.

You should always have exactly three main points of support.

Always put your thesis at the beginning.

The answer to all these imperatives is, “Actually, it depends.” If there is any general rule that always applies, it is that a writer should be aware of her purposes and her audience, and suit her structure, style, and language to the particular purposes and audience of a given piece of writing.

The passive voice exists in English because it can be useful – not just to hide the subject of a verb (as in, “mistakes were made”), but also to shift the subject to the end of a sentence, where it may be more convenient for reasons of emphasis or transition (such as “mistakes were made by the President, who is now facing impeachment”).

The notion of avoiding split infinitives is borrowed from Latin, where splitting infinitives can cause confusion. But English works quite differently, and sometimes, in English, not splitting the infinitive can cause confusion. So whether you should do it or not depends on the context.

Grammar Girl has a great guide to splitting infinitives and avoiding them.

The five-paragraph essay model works very well when you’re writing an essay that logically only has three major points of support and only needs to be five paragraphs long. However, for the vast majority of essays that don’t fall into that category, you will have to explore more complicated models.

Putting the thesis at the beginning of an essay has many strong advantages, and seems to work best in any case where the reader is approaching your essay for enlightenment rather than for entertainment or pleasure (you don’t, after all, want to keep your grader in suspense about whether you have something worthwhile to say!). But of course, there are exceptions, and you should always consider the demands of a particular instance when you make such choices.

Often, academic writers put a sort of provisional thesis at the beginning, which tells the reader what to expect without going into detail. This is sufficient to contextualize the information to follow, and fulfills the purpose of assuring the reader that you do, indeed, have a resolution to the problem you’ve set up (that is, that you’re a competent and responsible writer). Then, a more elaborate and specific thesis is stated at the end, incorporating terms and claims that have been made clear in the body of the essay but which were, perhaps, too new to the reader to use effectively in the first paragraph.


Update: See this nice piece from the Smithsonian on rules that aren’t really rules.

Bias


Stereoscopic Views, from the Robert N. Dennis collection, via Wikimedia Commons.

When historians read a text, we are trained to filter what it tells us through an understanding of who wrote it, with what purposes and with what intended audience. Author, audience, and purpose are all important factors in shaping the meaning of a text, so identifying these factors can help us reconstruct what a text meant to its author, and to the people who read it when it was written. Identifying these factors can also help us to figure out what might be relevant but missing from a text (something the author may not have been aware of, may not have thought was important, or even something the author may have wanted to deliberately suppress).

In college history classrooms, professors ask students to practice this skill, most commonly in assigning “primary source interpretation” essays, where the student takes a historical document (or 2) and tries to analyze it (them) in the way I just described.

Where many students go wrong in this process is confusing bias with point of view or reasoned opinion.

I’m probably particularly attuned to this mistake because I spend so much time grading primary source essays, but I also see it constantly among talking heads on TV, in written media, and on internet forums. It’s a really insidious problem in our current political climate, in my view, so I offer this version of a handout I use in classes (originally relating only to writing primary source essays).

Bias is a form of prejudice. It refers to opinions or assumptions that one holds despite or willfully in the absence of evidence.

Point of view refers to the fact that no one person can be aware of everything all at once. We all see the world from our own particular perspective.

It is possible (though difficult) to examine an issue without bias, but everyone always has a point of view. Your point of view is the way your previous experience, skills, inclinations, attention and interest limit your experience of the world.

Reasoned opinion is a conclusion, or claim, that a person comes to after examining and reasoning through relevant evidence. This is very different from bias (because it is based on objective reality — evidence and reasoning) and from point of view (because the exercise of reasoning through evidence is the practice of deliberately expanding your personal point of view to include evidence from others’ points of view, or evidence gathered through experimental observation).

When reading a historical text — or when you want to better understand any other text — you should look for bias, point of view, and reasoned opinion. But it is crucial to distinguish between these, because we can draw different interpretive conclusions about an author’s claims based on whether the author stated a given claim in willful contradiction of relevant evidence, merely out of an inability to see or comprehend new information, or lack of access to other evidence, or as a reasoned conclusion drawn directly from all available evidence.

Common mistakes students (and others!) make:

1. Looking for obvious biases (prejudices), but failing to look for “honest” limits to an author’s point of view.

2. Noting limits or absences and attributing these to point of view without first asking whether the author’s point of view is actually so limited because it rests on assumptions rooted in bias.

The way to avoid this mistake is, after identifying limits or absences in a given text, to identify what underlying assumptions about the world led the author to “miss” these key points. How do those assumptions relate to the evidence available to the author?

3. Mistaking reasoned opinion based on evidence for mere bias. If an author seems to “like” a position or be “passionate” about it, they could be biased, or they may be enthusiastic about a conclusion simply because it is an excellent explanation of all known facts. Find out which it is by examining the evidence on which the author bases their conclusion.

Relative enthusiasm, or lack of enthusiasm, tells you nothing by itself.

Message to take home: Always look to the evidence. When someone makes a claim, do they follow it with evidence? Is it good evidence? Is it enough evidence? What part of the claim is an assumption (i.e., not based on evidence)? Some assumptions are reasonable (one has to start somewhere), some seem arbitrary (a bad sign!).


Update: Related reading

Objectivity

Via Wikimedia Commons

Many students come to college believing that academic writing is objective writing, or is supposed to be, and if it’s not, it’s “biased,” which is another way of saying “bad” or “useless.”

There is no such thing as objective writing.

If something is authored, then that human author’s stamp is somehow on the material, if only in the selection and organization of it (even texts authored by computer are ultimately products of the software, which was engineered by a human being, who made choices and set priorities!).

The best we can do, as writers, is to indicate to the reader explicitly what it is in our texts that comes out of our own heads, what is the opinion of other authors cited in our own work, and what is reasoned conclusion or a direct report of data (and with the latter, to explain how we derived our data and chose what to share).

Best of all, we can identify and examine our own assumptions about our material, and when appropriate tell our readers what these assumptions are. We can mention that there are other factors or opinions which we have chosen not to go into, and we can say why. (Often, such things are legitimately beyond the scope of your essay, but by telling your reader you are aware that these other factors exist and have made a conscious decision to exclude them, for reasons you briefly explain, you allow the reader to trust that you are, in fact, in control of your essay and have done your research. Going through these steps makes your reader more likely to trust you with the main points of your argument, as well.)

In other words, the best we can do as subjective, human authors is to acknowledge our subjectivity, to note our biases and assumptions and to factor them explicitly into our writing. Attempting the impossible task of writing objectively can be more misleading than accepting our bias and moving on.

Yet I often see student papers watered down to the point where no analysis is left at all — in some cases, I know the student had interesting and relevant ideas about the material, and I have asked why they weren’t on the page. This is when I hear, “I thought that’s just my opinion, so it doesn’t belong in the paper.”

Analysis is a form of opinion — a very specific form that is based on evidence, in which you explain exactly how you reasoned from your evidence to form your opinion. Analysis is what we want.

Why you shouldn’t feel bad you didn’t go for (or finish) the Ph.D.

By WMAQ-TV, Chicago, via Wikimedia Commons

Sometimes when I tell people what I do for a living, they tell me they almost got a Ph.D. Some say this unapologetically, just as a factoid of interest, but unfortunately it’s often said with a direct or implied apology, and some sort of excuse. As if an explanation were required.

A Ph.D. degree is not the ultimate IQ test.

A Ph.D. is nothing more nor less than a degree required for a particular range of professions (mainly, teaching at the university level). It’s a very narrow degree, and one that is very rarely required. So why on earth would so many people feel bad for not getting one? If you don’t need or want a Ph.D., then you shouldn’t waste your time and money getting one!

Contrary to what is apparently popular belief, a Ph.D. doesn’t test intelligence. True, you probably need to have at least average intelligence to get admitted to any respectable Ph.D. program. But succeeding in a Ph.D. program really depends more on having the drive to complete that particular degree in that particular field than on anything else.

It’s not like intelligence and specialized knowledge are remotely exclusive to people with Ph.D.s. We see that in people we meet every day. Yet some people, especially those who are used to doing very well in school, internalize the idea that because they are smart, their success should be defined by achieving the highest possible degree. Well, no, not if that degree is only suitable for one narrow profession, which you might not want.

The people I know who got Ph.D.s (self included, of course) finished the degree mainly because of three factors.

The first and most important factor is that they were obsessed with their field. Some people do finish the degree and decide not to actually practice in the field, but pretty much always, if they finished, they at least had some kind of obsessive devotion to the subject. Sometimes it’s a healthy devotion, occasionally it borders on the pathological, but in any case it’s pretty extreme. Most people just aren’t that into—say—early nineteenth-century Russian women’s mysticism. And that’s okay. We need people with these kinds of interests, but we don’t need LOTS of people with these kinds of interests!

The second factor is that most people I know who finished Ph.D.s aren’t really good at much of anything else. I know that’s true for me. There are other things I can do if I must, but I’m not really very good at them. I’m quite good at researching and teaching the history of Russia, and to a lesser degree, Europe and the western world. Other stuff? I’m average at best, and with most things I’m completely incompetent. I didn’t just end up in a Ph.D. program because I’m pretty smart. Being pretty smart can land you in a lot of different places. I ended up in a Ph.D. program mainly because, when I was only 22, I wrote a quite decent essay about the upbringing of early nineteenth-century Russian heirs to the throne that had a fairly original argument in it. Not that many people can do that, or more accurately, very few people would want to bother to do that. But the vast majority of the population can calculate interest rates, change a tire, manage a multi-line phone, and do a lot of other things I’ve singularly failed at (despite numerous sincere and concerted attempts!). We’ve all got our niches.

The third factor I’ve seen that separates those who finish Ph.D. programs from those who leave them or don’t attempt them is that those who finish tend to have some kind of stubborn, perhaps even stupid, determination to finish no matter what, just because. People who finish are those who psychologically have to finish. Those who do not finish often do not need to finish, and may very well be much healthier and better off for it. Have you read my posts about what academia is really like and what it pays, even when you’re lucky enough to get a tenure-track job?

While I’m talking about those who have the stubborn drive to finish, I would like to mention another phenomenon I’ve seen many times.

In the home stretch of finishing the Ph.D. dissertation, when it’s not quite almost-done but too much done to quit, everyone I know has had a moment of crisis when they decide that they absolutely must quit. It’s too much, it can’t be done, the person in question feels like an impostor, the person in question never really wanted it anyway, etc.

It’s important to distinguish this very typical last-minute crisis of the almost-finished Ph.D. from the more serious existential crises of an earlier-stage graduate student who truly is uncertain about whether the degree is worth pursuing. When you’ve got multiple chapters of the dissertation written (even in draft form), you’re probably one of the hopeless ones who can’t really do anything else, and you may as well finish, since you’re so close. Just know that this crisis is completely typical. But if you’re not there yet and you really don’t feel motivated to get there, ask yourself why you think you should pursue a Ph.D.

If the only honest answer you can give yourself is that you can, because you’re smart enough, then maybe you shouldn’t bother. Plenty of people are smart enough to complete a Ph.D. Only a select few of us are stupid enough to actually follow through, and only because it’s the only thing we can and want to do. If that’s not you, then unburden yourself of the guilt and the expectation that a Ph.D. equals “what smart people do.” A Ph.D. is usually a ticket to low pay and constant work. If you can think of an alternative you like better, by all means, get out.

(If you can’t think of an alternative and love what you do so much you’re willing to live on mac-n-cheese so you can spend all your time reading obscure monographs on the subject that makes your heart go pitter-patter, well, hello, kindred spirit.)


Further Reading: On Being Miserable in Grad School

What is a Ph.D., Really? And What Is It Good For?

I’ve gotten the impression that many people think a Ph.D. program is like a master’s program, but longer. That you just keep taking courses—like a million of them—and then eventually you write another really big paper, and you’re done. This is kind of accurate, but also wrong in all the most important ways. I’m sure these misconceptions are partly due to the fact that there aren’t really very many movies about people in Ph.D. programs, unlike, say, law school or med school. Unless you count the show Alias, in which Jennifer Garner pretended to be a Ph.D. student by walking around saying ridiculously unlikely things and never doing any work at all. But you can’t really blame Hollywood—people in Ph.D. programs aren’t really very exciting to watch, since they mostly hunch in front of computers for days and weeks on end.


By John Hamilton Mortimer (1740 – 1779), via Wikimedia Commons


NOTE: Everything that follows is really about programs in the humanities and social sciences, because that’s what I know. I don’t know what programs in the STEM (science, technology, engineering and mathematics) fields are like, but I picture a lot of labs. I’m probably mostly wrong about that. The only thing I’m sure of is that nothing about STEM Ph.D. education resembles anything seen on Numb3rs or Bones.

So, in the U.S., most Ph.D. programs are actually combined with MA programs (not so in Europe and Canada), though if you already have an MA when you enter the Ph.D. program they’ll usually grant you advanced standing, which usually allows you to skip a year of coursework.

But a standard U.S. MA/Ph.D. program in the humanities and social sciences generally begins with the MA portion. For the MA degree, you usually take 1 to 2 years of graduate courses (these are usually the only courses you will ever take in the whole program), and then write a thesis. In history, the MA thesis is usually envisioned as about the size, type, and quality of a publishable article. Ideally. But publishable articles usually max out at 30 pages, and most real MA theses are actually about 50 to 150 pages. So the whole article-model thing is a bit misleading. But the MA thesis should, like an article, incorporate original primary source research and original analysis (and, unlike undergraduate essays, it needs to be original not just to the writer but original in the sense that no one has published that argument before).

I should mention here that MA courses are not like undergraduate courses, and MA-level courses in a Ph.D.-granting institution usually vary quite a bit, too, from MA-level courses at an MA-only institution. MA courses involve more reading and writing than at the undergraduate level, and in history it’s often true that you’ll read mostly secondary sources in a grad class, whereas in undergrad you would read mostly primary and tertiary sources. But the main difference is in the kind of work you’re expected to produce. Graduate work assumes you have basic skills and knowledge in the field, and asks you to think critically about how knowledge is produced and to practice more advanced skills, like synthesizing larger amounts of material and dealing with more difficult primary sources, often in foreign languages.

After the MA thesis, some people decide they don’t want to go farther, and they can leave the program with a “terminal MA.” At least they got something for their time, is the expectation. But most students continue on, sometimes after a review of their progress by their advisor or something like that.

The next stage is often, though not always, marked by the M.Phil. degree. I’ll confess right here that I didn’t know what the heck an M.Phil. degree was even after I got one, so it’s not at all surprising that most people who aren’t in Ph.D. programs have no idea. It’s sometimes referred to as a “research masters,” and I’ve been told that it derives from the British model, where you can (I believe—someone correct me if I’m wrong) get an MA through graduate coursework or an M.Phil. through independent research. Except this makes absolutely no sense in the U.S. context, where the M.A. signifies that you completed coursework and wrote an independent thesis, and the M.Phil. is, in the programs I’m familiar with, a prize you get for passing oral exams.

Oral exams, or comprehensive exams as they are often known (since they aren’t always oral), mark the transition between coursework and going out on your own as a sort of apprentice scholar. Comprehensive exams require the graduate student to demonstrate their comprehensive knowledge of their chosen field, and this stage is usually described as preparation and qualification for teaching (as opposed to research, though having this broad background is essential to doing research, too). The format and style of these exams vary a lot, but usually you have from six months to a year to study, and then you are examined in written or oral form or some combination thereof.

As an example, as a specialist in Russian history, my oral exams had to cover four fields, three “major” and one “minor,” and at least one had to be “outside” (of Russia). For a major field you try to cover pretty much everything, and for a minor field you designate some set of themes you’ll cover, that are hopefully complementary to your major fields. My three major fields were Russian history to 1917, Russian history 1917 to the present, and East Central European history from 1750 to the present. My minor field covered a few themes in French and British history from 1750 to 1850, which I chose because it was helpful comparative background for the kind of research I planned to do on Russia in that period. The major fields were chosen to cover all the material I hoped to be expected to teach.

I had an advisor in each field who was a specialist, and those people helped me to create a list of about 100 books for each major field and 50 books for the minor field that represented a comprehensive survey of the scholarship to date (you examine a far greater number of books to start with, and then narrow it down to the final list that you study closely). Then I spent a year reading them all, and taking detailed notes about the major analytical questions, themes, and problems that I saw in each field. This process was a way of synthesizing how each field as a whole has developed.

The exam itself was oral in my case, meaning I met with my four advisors for 2 hours while they quizzed me. These kinds of exams generally aren’t so much about the specific material covered in each book, but about the student’s ability to synthesize these major arguments and see how the individual pieces fit into the whole.

Once you pass your comprehensive exams, you get the M.Phil. degree.

At some point before this time, you probably also have to pass some language exams. Historians tend to need to pass several, though those studying American history may need only one language. For a Europeanist historian, you usually need to pass at least three language exams, and in some fields you may need as many as five. These exams are usually written translation only, with a dictionary, because those are the skills you will need to handle foreign sources in your research. In my case I needed to pass exams in Russian, German and French. At the exam we were given passages in the language at hand that represented the kind of source a historian would read—often an analytical piece written in, say, the early nineteenth century. We had to translate them into English in a way that was both scrupulously accurate and readable.

After you’ve passed all your exams, the next step is the dissertation prospectus. This is a proposal outlining what your final, independent research project will be. The dissertation is meant to be comparable to a publishable book, and in this case it usually really is that, because in order to get a teaching and research job, in many fields you’ll have to publish a book within the first few years, and the dissertation is often the first draft, in a way, of this book. It must be based on original research and make an original argument, and it must be a significant contribution to your field of study (more so than an MA thesis).

So, for the proposal, you of course need to have some idea of what you want to research, and then you spend some time doing the necessary background reading and finding out, in very practical terms, what you will need to do to complete the thesis.

For a Europeanist historian like me, this mainly means finding out what kind of archival sources exist, where they are, roughly what they might be able to tell you, etc. When your archives are located outside the U.S., you need to start applying for funding that will pay for your travel overseas, as well. Other social scientists need to plan and organize different kinds of research models, exploring possible methodologies, preparing interview questions and so on. Some other social scientists also travel, for “field work,” where they observe or interview subjects in a given location, but others work with computer modeling or published sources, etc.

In any event, all this planning and then writing up a detailed proposal about what your research and the dissertation will look like often takes about a year. Then you defend your proposal before a faculty committee of specialists in the appropriate fields, both from within your own university and from outside it. They ask you lots of pointed questions to try to make sure your plans are realistic and your thinking is coherent and reasonable.

Once you pass your proposal defense, you are “ABD.” ABD is not an official designation, but it is very commonly used—it stands for “all but dissertation.” It means you’ve completed all the requirements of the program except for writing and defending the dissertation. ABD is a somewhat ironic designation, because it sounds like you’re practically done, except that the dissertation is really the heart and soul of any Ph.D. program, and all the rest is, in a way, just a lead-up to the Real Show.

This is also the stage where the time to completion varies incredibly widely, which is why, when you ask “how long does your program take?” or “when will you finish?”, most Ph.D. students can’t answer, and many will squirm miserably at the very question.

The dissertation stage takes as long as it takes.

In some fields, if you don’t have to travel and all your sources are readily available, you can go straight from the prospectus defense to “writing up” and be done in about 2 years, usually. Since coursework is often 2 years, plus 6 months to 1 year for the exams and another 6 months to 1 year for the prospectus, the shortest Ph.D. program is generally about 5 to 6 years of post-graduate work (again, this can vary significantly in the STEM fields).

But, if your research requires significant travel, that part alone can take at least one full year before you can even begin to “write up.” That typically makes 6 to 7 years a bare minimum for anyone studying the history of a place that is not local to their university, for example. Those of us who travel abroad for extensive periods, often to multiple countries and/or dealing with sources in multiple languages, often also need extra time for all the translation, and sometimes for language study, particularly for sources in a less commonly taught language, like, say, Turkish or Georgian, which you often have to go abroad to study at all. And once you’ve got all your sources (and, if necessary, translated them and/or used computer modeling or database software to manipulate or analyze your data), then you can finally begin to write all this information into something coherent. This last phase can take any amount of time depending on how you write.

By this stage, any graduate student will have written many scholarly papers, but the dissertation is really fundamentally different because of its scale. A book-length academic project requires extraordinary information management just to keep all the data straight and accurate, and then the bigger scope of the arguments also requires a more complex engagement with larger numbers of secondary works, and more complex thinking, to communicate clearly about something so comprehensive without skimping on any of the nuances. It’s bloody hard work. I’ve never seen anyone do it in less than a year, and I’m very impressed by 2 years. Many people take more like 3 or 4, especially if they’re teaching at the same time. Add in the fact that most graduate students at this stage are in their late 20s or early 30s, so many are getting married and starting families (if they can manage it financially on a scant grad student stipend), and all that can add further delay.

I should also mention that your guide through this final stage of dissertation researching and writing is your advisor, someone who has probably guided your progress from the beginning of the program, but who now takes on primary responsibility for keeping you on track and, hopefully, catching you before you make any really awful mistakes. Over the course of the whole Ph.D. program you are moving farther and farther away from the student-teacher model of education. At first you take courses, but then with the MA thesis, the exams, the proposal, and finally the dissertation you work more and more on your own at each stage, until by the time you finish your dissertation you are most likely the world’s foremost expert on your topic (since it was chosen to be an original contribution to the field), and you have gradually—sometimes somewhat uncomfortably—transitioned from being a student to being an independent scholar and a colleague to the other scholars in your discipline.

So far I’ve only briefly mentioned teaching, but that’s the one other common part of a Ph.D. program. Some programs require no teaching at all, but that is becoming downright rare these days. My program required, as part of its funding package, three years of being a teaching assistant. TAs in history led discussion sections, gave guest lectures occasionally, and did most of the grading. This is a fairly common scenario. Often, after the TA requirement is fulfilled (usually in the second, third, and fourth years of the program), advanced-stage graduate students will apply to teach as instructors, where they lead their own courses. Sometimes a lucky grad student can create the course of their choice, but more often they teach the freshman survey courses, or required writing courses, and that sort of thing.

When I started my program, there was no formal guidance whatsoever given to grad students on how to teach. We were just thrown into classrooms to figure it out. From the university’s point of view, we were just cheap instructors, and it was up to the individual faculty members we worked with as TAs to give us guidance, advice, or instruction—or not—entirely at their discretion. In my experience some faculty members took this responsibility very seriously, others less so. While I was in my program, however, I was part of a collective effort on the part of grad students to create our own teaching training program, and our program was eventually adopted by the whole graduate school. Right around that time, in the early 2000s, there was a general consensus that teacher training needed to be integrated into graduate programs, and that is increasingly becoming the norm today, thankfully.

Right now, because of the miserable state of the academic job market (with the exception of a very few fields, there are many times more qualified candidates than there are jobs available), it’s more difficult than ever to get any kind of academic employment with a Ph.D. from anything but a top-tier school (which schools are top-tier varies by field). In the last decade, the American Historical Association has criticized programs that offer too many doctoral degrees, as well as third- or fourth-tier programs that still offer doctoral degrees to paying students, knowing that those students will very likely never be employed in their fields. Basically, if you have to pay to go to a Ph.D. program, you probably shouldn’t go, because the reputable ones are now under considerable pressure not to admit students without funding. (There are occasional exceptions: sometimes you are expected to pay tuition the first year, with the understanding that funding will be granted for subsequent years if you perform satisfactorily, but this can sometimes be fishy, too, so do your research.)

Most recently, the AHA has been recommending that programs incorporate training in so-called public history, and other alternative career paths for Ph.D.s, into their curricula. Public history includes museum work, community outreach, documentary filmmaking, etc. Other alternative career paths mainly include government and corporate research or think tanks. There is some resistance to this pressure—many programs argue that they are not equipped to train students in these directions, and others point out that the job market is little better in any of these alternative fields. But the overall trend is for fewer, more elite programs to offer degrees to fewer people (with better funding), and to diversify the training as much as possible.

On the whole, I think you can see that a Ph.D. is a unique education, encompassing tremendous breadth and depth, and is more like a professional apprenticeship than the model of being a student forever that many people imagine. It probably requires more drive and stubbornness and dogged work than it does pure brain power, and anyone who completes the process very likely has an extraordinary ability to process information (because at bottom that’s what it’s all about). There are plenty of things a Ph.D. is not remotely useful for, but what it does, it does well.


Further Reading: On Being Miserable in Grad School

Should you go to the best school you can get into?


Harvard Gate. Not the only way in to the educated life. (Image via Wikimedia Commons)

Students ask me this question a lot, usually about graduate programs, and sometimes I get asked about it with regard to choosing an undergraduate program as well. Especially in these days of astronomical tuition costs and uncertain job market potential, it’s important for students to really think through the cost/benefit ratio of a program before committing (with the caveat, of course, that education is much more than a ticket to a job!).

My answer to this question is the same answer I (like most academics) always give to almost every question:

It depends.

This is why academics annoy people, I know. But really, the answer is complicated, and entirely depends on factors specific to each applicant.

Advice for everyone:

In terms of pure quality of expertise, the faculty are broadly comparable at any institution of higher education in the U.S., since for the last several decades institutions have all hired from the same overpopulated pool of people with Ph.D.s from a small circle of prestigious graduate schools.

But there can be very big differences in, first, how much one-on-one interaction you get with faculty, and, second, the culture of the student body—how focused students are, how motivated, and how stimulating they would be for you. These differences don’t correlate with the superficial prestige of a given institution—schools at all levels vary widely in these terms.

In many cases, you can get an outstanding education at relatively low cost at a public institution, and you will have missed nothing for bypassing Harvard.

However, in some cases the cost-benefit ratio is different: what you personally can achieve with a more prestigious degree may justify a higher investment in obtaining the degree.

And sometimes a very expensive private institution may actually be cheaper than a public one if they want you badly enough to pay you to come!

In short, making the best choice for you depends on doing a lot of very specific research. And you can improve your range of choices vastly by preparing well: do your best work at every level of education, engage thoroughly in your courses, and talk with faculty and professionals in the fields that interest you. Get as much information as you can before making your decision.

Advice specific to aspiring undergraduates:

The answer to the question of which school you should go to depends on what you want to get out of your degree, on your personality, and on the field you will study (which of course you may not know yet!). But the short answer is that making the right choice for you needs to be a much, much more complicated reckoning than just U.S. News and World Report school rankings (which actually tell you nothing at all of use).

At what kind of school are you most likely to do the best work you’re capable of?

A small, residential college that feels like a family?
A bustling, huge research school that gets your juices flowing?
A place where you’re around students that are a lot like you?
A really diverse group?
People who will constantly challenge you?
A place where you’re the “big fish” and can feel confident?

How important is the name on the diploma for the specific kinds of jobs you want (and how likely are you to stick with that goal)?

This consideration necessarily involves taking a big risk, because you may very well change your mind about a career goal. But in any case, it’s worthwhile to do careful research about several prospective careers that interest you. If you can, interview people who have the kinds of jobs you want, and ask what level of education is required, what kind of GPA is expected, how much employers care about what kind of school you went to, and many other questions too, about salary, job satisfaction, rate of advancement, benefits, etc.

How important will it be to your career goals to have one-on-one faculty mentoring?

Will your future employability rest on recommendation letters and/or connections, or on your test scores and degree from a post-graduate professional school?

What do you want from your education besides employability?

College should also enrich your life and your mind in ways that cannot be measured in dollar signs. What kind of enrichment do you most want and need?

Do your horizons need to be broadened by a place different from what you’re used to?

Do you need a really rigorous environment where the “life of the mind” is the primary focus?

Do you need access to lots of activities to siphon off all your excess energy, so you can focus?

Do you need a comprehensive general education program that forces you to explore fields of study you tend to avoid when left to your own devices?

Or do you need/want to specialize very intensely (think really carefully about that one — what if you change your mind? — would you still have options?)

Find out exactly what the financial picture would be for you if you went to each of the prospective institutions you’re thinking about.

Don’t just look at the sticker price listed on web sites! The most expensive private schools also tend to offer the most aid, and more often in grants than loans, as compared to other schools with smaller endowments. Do all the calculations (including room and board and living expenses, taking into account cost of living in different areas) for each school. If you’d need loans, find out how much your payments would be after graduation, the interest rate, and how long it would take you to pay it off assuming an average starting salary for the very specifically defined types of jobs you hope to get. You may have to go through the whole process of applying and filling out the FAFSA before you’ll know the real numbers for each school, and it may be worth applying to one or two schools you think you can’t afford, to see what they can offer you.
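To make the loan arithmetic concrete, here is a minimal sketch in Python of the standard fixed-rate amortization formula, with entirely hypothetical numbers (your real principal, rate, and term will differ, and any loan calculator will do the same computation):

    def monthly_payment(principal, annual_rate, years):
        # Standard amortization formula for a fixed-rate loan:
        # payment = P * r / (1 - (1 + r)^-n), with monthly rate r and n payments.
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    # Hypothetical numbers only; substitute your own offer and terms.
    principal = 60_000     # total borrowed
    annual_rate = 0.065    # annual interest rate
    years = 10             # repayment term

    payment = monthly_payment(principal, annual_rate, years)
    total = payment * years * 12
    print(f"Monthly payment: ${payment:,.2f}")        # roughly $681/month here
    print(f"Total interest:  ${total - principal:,.2f}")

Then compare that monthly payment against a realistic starting salary for the specific jobs you are aiming at, not the best-case salary.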

Advice for aspiring graduate students:

Again, the answer here depends on your field and prospective employment after graduation. But at this level in certain cases it probably matters more that you go to a highly ranked school for your subject than it does in undergrad. In other cases, it matters even less! Read on.

First, a given institution can be top-tier for one degree program, second-tier for another, and third-tier for still another program. And Ivy League schools, or other top schools everyone has heard of like Stanford, Berkeley, and Chicago, are not automatically the “best” schools for a given field of study. You need more specific information. The best people to ask are probably recent graduates from programs you’re interested in, who are now employed in the kinds of work you want.

For master’s-level work, the prestige of the degree-granting institution is less likely to matter than for other graduate degrees. Sometimes, if you’re already working in a given field, you can get tuition assistance from your employer for a local graduate degree. Look into this before starting a program. And, if you wish to work in a given location, local programs may make you more employable than distant programs that technically rank higher.

In master’s and doctoral programs in the liberal arts, you’re more likely to work with a specific advisor, and having a great advisor who actively supports your work and is widely respected in the field may be more important than the prestige of the institution you attend. This is something you should talk over in very specific terms with undergraduate advisors or other academic mentors.

BUT—be very wary of a general liberal arts master’s degree. These can make you “overqualified” for many jobs, and not qualified enough for others, leaving you in an academic no-man’s-land. Only go for a liberal arts master’s if you know exactly how you will use it, and that it is certainly required (or, if you can afford it, if you simply want to enjoy the education!).

An MA program can be a way of strengthening your application to a Ph.D. program (but an incredibly expensive way; you may be better off excelling in your BA and writing an impressive thesis). This is different outside the U.S., so again, consult advisors about your specific situation.

An MA can also be a way of achieving a higher income for teachers, librarians, and other professionals, but you should find out exactly what programs are preferred, when you need to complete one, and whether your employer can help you pay for it.

For law school, things are quite different in several ways. First, many law firms seem to be especially concerned with the prestige of the school you graduated from. There are many, many law schools out there that are happy to take your tuition money even though they may not make you employable at all. Get information from knowledgeable people in the kind of law and location you hope to work in, about where most of their lawyers got their degrees.

Medical school and business school are similar to law school in this respect. Law, business, and med students tend to borrow enormous sums on the assumption that their high salaries after graduation will make repayment possible. This may be the case, but know that:

(a) for your first several years in your profession, assuming you’re hired, your income will mainly go to paying off your loans

(b) you may graduate into a glut in the market, and be saddled with an impossible debt burden

(c) not all medical, business, or legal jobs pay equally highly. Many lawyers, especially, do not earn the kinds of incomes required to pay off law school debt.

Then there’s the Ph.D. (or the MFA and similar terminal degrees for the arts). Here’s another field with a glut of qualified graduates: academic research and teaching. College-level teaching almost always requires a Ph.D. In almost all academic fields, the number of Ph.D.s from top schools is vastly higher than the number of positions, so that graduates from even second-tier schools are limited to adjuncting (this is slave labor with extremely low wages and no benefits, and very little hope of moving to a permanent position), or community college positions (which tend to be all or mostly teaching positions at lower pay than 4-year institutions).

The advantage to teaching at a CC is that there are many of them, usually in every community in the country, so you may be less geographically circumscribed than if you search for a tenure-track position at a 4-year. But, increasingly, community colleges are able to hire people from top-tier institutions, so even this is not a given. You should research your field very specifically.

There are a few fields in which academic jobs are actually growing (being both interdisciplinary and very applied in your research seems to be the key here), and a few where salaries are higher than average (accounting, law, etc), but still less than in non-teaching positions in the same field.

Whatever the level of prestige enjoyed by the school you choose, it is NEVER a good idea to enter a Ph.D. program without full funding (tuition, fees, plus a stipend). It is extremely unlikely that a Ph.D. will earn you enough to pay back years of loans. Don’t ever plan on it.

Important final caveat for prospective students at all levels:

You have to ask yourself all these questions. If you allow other people (say, your parents or friends or academic advisors) to tell you who you are and what you want, you may find, after much time and money have passed you by, that their image of you was filtered through their own limited perception and their own wishes for you (it always is), and was therefore not entirely accurate.

Exploring what you really want and need is difficult, especially when your experience of the options is still limited. Consulting with others is a good idea, but test everything you hear by the yardstick of your own gut instinct about your skills, goals, and potential. The best you can do is to continually re-assess as you gain more experience. No decision is 100% irrevocable, and often the twisty path takes you exactly where you need to go, when a shorter, straighter path may have rushed you to the wrong destination.

And, of course, you should never just take my word on any of the issues raised here. I wanted to raise questions worth asking. Other academics will give you different advice based on their experiences. Perhaps some will do so in the comments on this post!


Update: some links.

What is academic history?


Thomas Henry Huxley by Theodore Blake Wirgman. Via Wikimedia Commons.

History is unique in being counted (or confused) as falling under both the social sciences and the humanities.

From its beginnings in oral storytelling, history was a partly literary exercise (and thus a part of the humanistic tradition) until it became professionalized in the nineteenth century.

From at least that time, history has also been counted as a social science because modern historians use objective data as evidence to support larger claims, and employ methods that are loosely based on the logic behind the scientific method. Some of our evidence is empirical (gathered through experiment or observation, as in the natural and social sciences), and some is interpreted through the “close reading” of texts (as is the evidence in other humanities fields, like literature and philosophy). In fact, as the study of everything that has happened in the past, in a way history can be said to encompass all other disciplines, with all their diverse methodologies.

Historians also rely on an exceptionally broad range of types of evidence: we use documents of every kind (public and private, statistical, official, informal, etc) as well as literature, but also fine arts, everyday objects, architecture, landscape, data on demographics, climate, health, etc, and just about anything else.

What holds together this very broad field is simply that we all study the past. That is, a historian of science may need to master many principles and methods of scientific inquiry, but her goal is to understand the development of science over time; contrast this to the scientist who may share some principles and methods with the historian of science, but whose goal is to further new scientific knowledge, rather than to understand how it developed up to the present.

More specifically, historians can be distinguished from scholars in other fields by the kinds of questions we ask. The questions historians ask can usually be reduced to some combination of the following:

(a) change and continuity over time
(what changes & when, what stays the same while other things are changing)

(b) cause and effect
(which factors affect which outcomes, how and why)

Dates, events, and famous names are elements we seek to master only so that we can more accurately explain the bigger questions of continuity, change, cause and effect.

Understanding the past helps us to know ourselves better (since we are in many ways the products of our pasts), and also to understand in a broad sense how societies behave, and how the constraints of particular societies affect their behavior.

This understanding – though always and inevitably imperfect – is worthwhile in its own right, and it can also help us to better understand our choices in the present.

Although historical methods are often grounded in theoretical models and strategies (as in all academic disciplines), historians place unusual emphasis on distinguishing between specific contexts (time, place, social/intellectual/political/cultural climate, etc), as opposed to other disciplines which often aim to formulate models that apply accurately to many contexts.

In other words, we’re not lumpers, we’re splitters.

For example, when we as a society wonder about the causes of war, a political scientist may seek to distill the common factors causing many past wars so as to ultimately formulate a working general theory that will (one hopes) accurately predict the causes of future wars.

The historian, on the other hand, is more likely to delve into the unique factors of each particular context in order to understand what caused that war (but not others).

The historian’s ultimate goal, in this example, is to discern how particular contexts affect particular causes (i.e., identifying unique factors and tracking how they affect other factors), rather than directly predicting future events or reducing particular phenomena to general principle.

Note that both approaches are valuable and informative, and – interestingly – they each can serve as a check on the excesses of the other.

“Summarize”


Via Wikimedia Commons

If you’re a college student you may often be asked to “summarize” a text or film. The tricky thing about this is that people use the word “summarize” pretty loosely, and what is being asked of you might not be what you’re actually doing. To clarify the difference, it can help to be more picky about what we mean by “to summarize.”

If we’re being picky, then, “to summarize” in a general, non-academic context usually means to simplify.

To summarize in this sense is to touch on all the most important and interesting pieces, to highlight them or to communicate them to someone who is unable to read the original text. In this kind of summary, you’re usually looking for coverage – you want to hit all the main points, and usually in the order you found them in the original. You sacrifice depth for breadth, and that often means leaving out the complicated parts.

Students tend to have come to college with more or less this notion of what a “summary” should look like, probably because they’re used to textbook writing. In textbooks, by definition, very complex ideas are simplified, because the purpose of a textbook is to convey large amounts of general knowledge, rather than to further our knowledge in specific, new directions. So a textbook summary tends to focus on coverage of all relevant main ideas and may leave out many complexities or nuances, so that you get a complete overview, rather than depth on any particular point. Students may sometimes be asked to do this kind of summary for a very simple assignment, when the goal is only to show that you read the text, for example.

But it’s usually not what the professor is really looking for.

The reason summarizing gets tricky at the college level is that in the academic context, where our main goal is to think critically about what we know and don’t know and why (not just memorize facts), the most important and interesting bits of a text are not simple, and shouldn’t be simplified, as that would deprive them of their interest and importance. Usually, in academic writing, we summarize another work in order to question or elaborate on its conclusions in a new context. If we start with a simplified version of our sources, our own analysis can only be superficial, and very likely inaccurate!

So, when you’re attempting to “summarize” a text that you will use as a source in your own paper, you need to do something much more complicated than just hitting all the main points in their original order. You want to engage with the text in depth, not just skim its surface. This is why in my own classes I use the more precise term “to distill,” which is a metaphor for exactly the action we want in an essay – a taking out of selected bits, without changing their nature.*

When you distill a source that you want to use in your own essay, you usually do not need to cover every key point of the text. Since the source text probably wasn’t written on purpose to be used as a source in your essay, and in fact had different goals of its own, parts of the source text may not be relevant to your essay. Those don’t need to be covered, then. Instead, you want to home in on the parts of the source text that directly relate to your goals for your essay. And when you explain these relevant ideas, you want to very deliberately avoid simplifying them. Focus your energy on explaining what is complex, interesting, controversial, incomplete, or questionable about the source text, because it is these nuances that you will want to develop in your essay. This is what we mean by “analysis,” another potentially confusing word you see a lot in assignments—when you analyze a text, you apply your own thinking to the source texts, evaluating their assumptions and sources and goals and logic. You can’t do that if you’ve ignored all the details from the source text.

This confusion about what we mean by “summarizing a source” in an academic essay is actually not a minor matter of semantics at all. When a student summarizes source texts in the sense of simplifying them, the student leaves him- or herself with ideas that are too small and too simple to work with. So the student has nothing to add, and therefore no argument. And next thing I know, I have a stack of essays to grade that were supposed to be analytical, but a huge percentage of them have no argument at all. That is a sad state of affairs for us all!

* I got the term “distill” and countless other useful ways to talk about writing from the University Writing Program at Columbia University, directed by Joseph Bizup, who trained teaching fellows like me. It’s a great term that has served me well in the years since.

Scrivener: A Love Story


If this were how I had to write, I don’t think I’d write. Image via Wikimedia Commons.

When I was in the early to middle stages of revising my dissertation into a book, I discovered Scrivener. At the time, the Windows version had just been released in Beta. I tried it, and it was still too buggy to use on a huge project that was fast approaching its deadline, but oh, oh did it have incredible potential! My mind was blown. So much so that, I’ll admit, Scrivener was a fairly major factor in my decision to switch to Mac (it was time to get a new laptop anyway, I’ll say that much for my sanity).

Importing a 300-page manuscript full of footnotes was a bit of a pain. Scrivener isn’t really intended for large-scale importing of a whole project at once like that. But it worked. And then my life was changed.

No, really, this software changed my life.

My dissertation project had begun many years before, and I had gone through several major moves, including international ones, with all my research notes and drafts, and I had switched software for various aspects of the data management several times. In short, all my materials were a bloody mess. And here I needed to quickly revise this enormous beast in significant ways — I added four new half-chapters, new framing on every chapter, new introduction, and bits and pieces of new research throughout. It was a monster to keep track of it all.

And I am not someone who deals well with that kind of situation even on a small scale. I think in circles and spirals, not straight lines. I can’t keep anything in my head that isn’t right in front of me. This whole project had the potential for disaster.

But Scrivener was, seemingly, devised for people exactly like me. Scrivener is not word processing software (although it can do all the basics of word processing). It’s part database, part outliner, and mostly it’s something else entirely — a virtual version of a stack of legal pads, index cards, paperclips and a bulletin board. But you don’t have to carry all that paper around with you, and you can’t lose any of it, since it’s got a really smooth automatic backup system. In addition to all that — and many more features aimed at fiction writers that I haven’t explored at all — there are some really nice whistles and bells that just make it very pleasant to use.

Here’s how I use it. At first it was just for the dissertation, so I’ll start with that. Once I’d imported my huge text file and figured out how to get all the footnotes looking right (actually looking better – in a panel beside the main text, much easier to see while editing), I started splitting my text up. One of the core features of Scrivener is that you can break your text up into chunks of any size, and the smaller your chunks, the more you’ll get out of Scrivener. So I didn’t just break it up into chapters, or subsections, but into paragraphs. Each chunk gets a title, and these titles are displayed in a panel next to the text as you’re reading it, so in effect the outline of your whole text is right there in nested folders, which you can quickly and easily rearrange. (Scrivener will also output all your data and metadata into a proper outline where you can change things in groups, etc.) Just the process of splitting up and labeling my chunks of text revealed many places where the organization was a lot less logical than I’d thought, so I did quite a bit of rearranging just in the process of importing.

Each chunk of text has a virtual index card attached to it (I love that it looks like an actual index card), which you can either auto-fill with the beginning of whatever’s written in that chunk, or you can fill with your own summary. There’s a corkboard view where you can see just the index card versions of your chunks, and rearrange them at will. This is incredible.

Years earlier when I was finishing the dissertation, I had actually printed out several chapters, cut them up into paragraph-size pieces with scissors, and spread them all out on my living room floor. That exercise was incredibly helpful, but it was such a big project that I only did it once. With Scrivener I can do it easily and often, with no mess, and no trees killed for my efforts.

Each chunk of text can also be labeled for easy sorting (like “Chapter,” “Front Matter,” “End Matter,” etc.), and can be marked with a status (like “To-do,” “First Draft,” “Final Draft,” “Done”). You can set the options for label and status however you want. In addition, you can add as many keywords as you choose (like tagging: I can add “gender,” “upbringing,” “childhood” to one paragraph, and “gender,” “estate management,” “needlework” to another, and later sort all my chunks to see those that all have “gender” in common, or just the ones on “childhood,” etc.).
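If it helps to see the idea abstractly: each chunk is essentially a small record carrying its own metadata, and the keyword sort is a query over those records. Here is a toy model in Python (this is not Scrivener’s actual internals, and the chunk titles are invented) of what that filtering buys you:

    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        # A toy stand-in for one Scrivener chunk of text and its metadata.
        title: str
        label: str = "Chapter"        # e.g. "Chapter", "Front Matter"
        status: str = "First Draft"   # e.g. "To-do", "Final Draft", "Done"
        keywords: set = field(default_factory=set)

    chunks = [
        Chunk("Upbringing at home", keywords={"gender", "upbringing", "childhood"}),
        Chunk("Running the estate", keywords={"gender", "estate management", "needlework"}),
        Chunk("Acknowledgments", label="Front Matter"),
    ]

    # Pull together everything tagged "gender", wherever it sits in the draft:
    print([c.title for c in chunks if "gender" in c.keywords])
    # -> ['Upbringing at home', 'Running the estate']

The point of the model is just that the metadata lives on the chunk, not the file, so any slice of the manuscript can be assembled on demand.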

Each chunk of text also has a free field where you can add notes, like “did I double-check this in Blum?” And you can also insert comments into the text as you do in the revision mode in MS Word. So, you can have comments pointing to one spot in your text, or comments referring to a whole chunk at once. There are, in addition, a bunch of options for custom meta-data and internal references that I haven’t even begun to explore. All this metadata displays in another frame on the other side of the text you’re reading. You can hide this frame, or the one showing your folders, at any time.

One of my favorite features (though it’s so hard to decide) is that you can also split the main text frame, vertically or horizontally, to compare two chunks of text. This feature alone would have been life-changing to me, even without all the rest. I compare documents and cut and paste between chapters or separate files constantly, and even with all the screen real estate in the world, there’s no way to do this in Word without aggravation (and endless confusion about what was changed where and when — in Scrivener everything is in the same file, with created and modified dates on every chunk of text, not just the whole file, always visible, without clogging up space). On my 13” MacBook Air, I can split the text screen horizontally and still see the folders on the left and the metadata on the right. Or, I can hide those two side screens and compare documents vertically, for more intense editing back and forth. All of this can be done with quick, one-step, intuitive clicks.

While I’m writing, the word and character counts show on the bottom of the screen. I can set daily targets for myself (or in my case limits!).

I can also view my text in any old font or size, secure in knowing that when I’m ready to compile into a Word, RTF, or PDF file, I have saved settings that convert everything automatically to the output style I want. All that is easy to do in your custom way, though there are also settings available for the basic options (for people who write things like screenplays, there’s much more to all this). I like that I can read on-screen in 18-pt Helvetica, or some random combination of sizes and fonts that result from pasting in text from a variety of notes files, for example, without it affecting the finished product, and without having to fuss about cleaning up a bunch of little inconsistencies.

I also imported Word and PDF files that I needed to refer to, but weren’t part of my text. These go into a separate folder, where they can’t be edited, but can be viewed alongside your text in the split screen, for reference. Awesome.

Right now I’m really enjoying the first stages of starting my new project on Scrivener, building up the organization and metadata from the start, but there were some particular advantages, too, to finishing up my first book project in Scrivener. As I went through my research materials collecting bits and pieces that needed to be added, I imported them into Scrivener as separate chunks of text. I labeled them as “Added Bits,” which gave them a different color in the folder hierarchy and outline, so they could be spotted easily as I integrated them into the main body of the text in the places I thought they should eventually go. As I worked my way through them, I could either change the label or merge the added bit to a chunk of the original text, as it got integrated, or I could shift it off again to another folder labeled “rejects” or “spin-off article.” When you compile your text into a word processing file, it’s easy to un-select any folders like this that aren’t intended to be part of the whole.

Once I got going with all this, I found that I could use Scrivener for practically everything I do. Most significantly, for all the writing I do for teaching. I have one Scrivener project for all teaching-related materials: syllabi, assignment sheets, handouts, etc. I keep a template that contains most of the boilerplate text for my syllabi, for example, and can very easily slip in the updated text for a particular iteration of the course, then, with a few clicks, compile it straight to PDF in my established format for syllabi. I can easily separate out a chunk of text in a handout that changes when I use it in different courses, for example, with all the alternate versions I need for just that chunk, while the rest of the handout is common to all versions. That way, I can update part of the common sections of the handout, and when I compile one or another version, that update will automatically be there. I can collapse the subfolders for courses I’m not currently teaching, yet still have them handy when I want to go back to an old handout for a new purpose. I have files with reference material like the official college grading scale, official verbiage about department goals and requirements, etc, so that I can grab it when I need it without opening new files, without constantly updating an external folder system full of duplicates, etc.

And now I even use Scrivener for writing blog posts. When I have a random bit of an idea for a post, I create a little “chunk” of text for it in Scrivener, so that I have a running list of many potential posts in various degrees of completeness from raw idea to ready-to-publish (each one labeled with a click and automatically color-coded). This way I can add a bit here or there whenever a moment presents itself, without losing anything or getting buried in duplicates. Or accidentally publishing a half-baked post!

It’s also easy, once you have a system down, to create a template in Scrivener that you can use for future projects, and then these templates can be easily shared. I made very basic templates for my own purposes (and to share with my husband), for a book-length historical research project, an article-length project, and teaching materials. These templates don’t use the vast majority of Scrivener features — they’re really just a system of basic organization that I don’t want to have to recreate again and again. I’ve shared them on my academia.edu profile if you’re interested.

To conclude this story of a love affair, I’ll admit that I’ve had one problem with Scrivener so far, and I don’t know if it was my fault. The word count of my manuscript in Scrivener was drastically different from the word count I got when I compiled it to Word. By 30,000 words! This is of course a very serious problem. I assume that Scrivener was not counting the notes or some part of the front- or end-matter, but I did very carefully (many times!) check all the settings and make sure the right boxes were checked to include all those. I tried comparing a short, plain text document, and the word counts were comparable. It may be that the many abbreviations in my footnotes were handled differently by Scrivener’s word counter than by Word’s (though I don’t think that could add up to such a huge discrepancy). Right now, I don’t think Scrivener is really designed for massive scholarly research projects with more than a thousand footnotes. It can handle that, but it wasn’t really designed for it, and that may be part of how it was possible for the word count to be so far off. I haven’t gotten to the bottom of this issue, and I welcome thoughts others might have about it. In any case, now that I’m aware of the issue, it’s simple enough to compile the text after any major changes to keep a rough gauge of the difference between a Scrivener word count and Word’s.
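For anyone who wants a rough independent check after each compile, here is a minimal sketch, assuming the manuscript has been compiled to .docx, using the third-party python-docx package (the file name and the Scrivener count below are hypothetical). Note that python-docx’s Document.paragraphs covers body text only, so footnotes and tables are not counted, which is itself one plausible source of a mismatch between counters:

    # pip install python-docx
    from docx import Document

    def body_word_count(path):
        # Counts whitespace-separated words in body paragraphs only;
        # footnotes and tables are NOT included in Document.paragraphs.
        doc = Document(path)
        return sum(len(p.text.split()) for p in doc.paragraphs)

    scrivener_count = 145_000  # hypothetical: whatever Scrivener reports
    compiled_count = body_word_count("manuscript.docx")  # hypothetical file name
    print(f"Compiled body count: {compiled_count:,}")
    print(f"Difference vs. Scrivener: {scrivener_count - compiled_count:,}")

A third, independent count like this at least tells you which of the two counters is closer to the body text alone.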