Bridging the Computerized Scoring Divide?

Last month, TechCrunch featured a story on a new web-based tutoring service, PrepMe. I was contacted by Calvin Truong, PrepMe’s Operations Manager, about writing a post on the service here in my blog. From the TechCrunch review, it sounds like a different take on the model of submitting a piece of writing only to have it graded by a computer, a model that many, including Nancy Patterson (in this month’s Language Arts Journal of Michigan) and Maja Wilson, have been critical of:

Prepme is one online test prep company coming out of the University of Chicago’s business incubator. Founded in 2001, the company offers test preparation for the SAT, PSAT, and ACT, using an adaptive algorithm to customize the preparation course for each student.

Unlike Kaplan’s online offering, Prepme doesn’t calculate the best lesson plan once, but continuously as you work your way through the material. Their system keeps track of what questions you get right and wrong, working you harder on the types of questions you miss.

Additionally, customers can connect electronically, using real time chat, with high scoring college students who serve as tutors.

Source: TechCrunch, “Start-Ups Change How Students Study for Tests,” 9/1/07
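As a side note for readers curious about what “adaptive” might look like under the hood, here is a minimal sketch, in Python, of one way a practice session could weight upcoming questions toward the types a student misses most. This is purely my own illustration (the class, question types, and weighting scheme are invented for the example), not a description of PrepMe’s actual system:

import random
from collections import defaultdict

class AdaptivePracticeSession:
    """Hypothetical sketch: weight question selection toward weak question types."""

    def __init__(self, question_bank):
        # question_bank maps a question type to a list of questions of that type.
        self.question_bank = question_bank
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record_result(self, question_type, was_correct):
        # Track how the student does on each type of question.
        self.attempts[question_type] += 1
        if was_correct:
            self.correct[question_type] += 1

    def _error_rate(self, question_type):
        attempts = self.attempts[question_type]
        if attempts == 0:
            return 1.0  # treat unseen types as weak spots
        return 1.0 - self.correct[question_type] / attempts

    def next_question(self):
        # Sample the next type with extra weight on higher error rates;
        # the small floor keeps mastered types in occasional rotation.
        types = list(self.question_bank)
        weights = [self._error_rate(t) + 0.1 for t in types]
        chosen = random.choices(types, weights=weights, k=1)[0]
        return chosen, random.choice(self.question_bank[chosen])

# Toy usage: after a missed algebra item, algebra questions become more likely.
bank = {
    "sentence completion": ["SC-1", "SC-2"],
    "reading comprehension": ["RC-1", "RC-2"],
    "algebra": ["ALG-1", "ALG-2"],
}
session = AdaptivePracticeSession(bank)
session.record_result("algebra", was_correct=False)
session.record_result("sentence completion", was_correct=True)
print(session.next_question())

The weighting here is only one plausible approach; the point is simply that the system keeps a running record of right and wrong answers and lets that record steer what the student sees next.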

When Calvin wrote to me, he wondered if I would blog about PrepMe here. I replied with some initial concerns:

Prepme does seem like an innovative service that takes advantage of computerized scoring while still adding the element of human judgment. Most of the outright computerized scoring systems out there (as well as turnitin.com) really worry me as a writing teacher, so this is a clear departure that blends technology and pedagogy…

… although I do think that your service is innovative, I am still concerned about writing items that seem to support computerized scoring, as many of the professional organizations that I belong to have statements that expressly condemn computerized scoring.

NOTE: After some closer reading, I should note that the writing itself appears to be scored by the tutors, while the multiple-choice items, like those encountered on the ACT, SAT, and other tests, are the ones being computer graded.

To continue the conversation, Calvin wrote back immediately, and with his permission, I share parts of that response here:

About your concerns, I completely understand, and I think we’re pretty in sync on both points. We’re working on a few initiatives that would speak more generally to trends in education, and since we’re only grading multiple-choice tests and hand grading essays to enable detailed feedback, it seems we’re on the same page. I would expect there is much less resistance to using technology to automate the grading of multiple-choice exams, since this minimizes human error, but perhaps I’m mistaken on this.

As to the interesting trends that may be worth writing about, there are two that we’re working on that may be of interest. The first is our work with the State Dept of Education in Maine, and the second is more generally about what’s happening in online education.

Maine recently enacted legislation that requires students to take the SAT in order to graduate from high school. We’ve committed to a 3-year program with the Dept of Education where we provide free test prep to every public high school student in the state. Here’s a press release from the Maine DoE website: http://www.maine.gov/education/edletrs/2007/ilet/07ilet072.htm. This initiative is interesting in and of itself, and may provide fodder for an interesting discussion. As far as we can tell, they’re doing it so they don’t have to invest tremendous resources in creating their own state standardized test, and also to drive students to consider applying to college. It’s an interesting social experiment and we’re proud to be a part of it.

Another approach might be to talk about the general trend in online education of trying to find the sweet spot between scalability, quality, and cost. We believe that using technology to give you scale, while offering high-quality services with tutors from top universities at a significant cost advantage, is the way to go. In the pre-college market, having tutors at top universities is a quality win because these are exactly the sorts of students that our users want to be and exactly the sorts of students that our parents want their children to be, and this fosters great relationships online. There is some inherent cost in this approach, but we believe it’s worth it.

Clearly there are others out there that disagree — some go for the no-compromise-on-quality approach, where 1-on-1, in-person tutoring is the only way to go, but that has tremendous cost and little scalability. Other companies are trying the low-cost online model with outsourced tutors, and we believe this sacrifices too much quality in favor of cost.

It may be interesting to consider the implications of this, because what is commercially viable may not actually be what is most pedagogically pure.

All in all, it was an eye-opening discussion, and it gives me hope that hybrid models of online grading, with humans sharing their insights, could be a way to go. In my initial training as an online instructor for our state’s virtual high school, it seemed as though we relied more on multiple-choice items and highly scripted writing that almost did not require a human response (even though I was grading it). Asking students to write and share their writing online in this way is not the best model of composing digital texts per se, but it is a model that we could consider using in our own classrooms to foster peer response to traditional texts in digital environments.

Also, it points to the need in our field to more fully analyze this phenomenon and come up with alternatives that we feel are viable. I am not an expert in the topic of computerized writing assessment, yet I am becoming more familiar with the field. A search in Google Scholar for “computer based writing assessment” didn’t yield anything since 2003 (in the first ten pages of hits). The most recent and comprehensive article that I saw was Goldberg, Russell, and Cook’s “The effect of computers on student writing: A meta-analysis of studies from 1992 to 2002,” originally published in the Journal of Technology, Learning, and Assessment. (An article in the current issue, “Toward More Substantively Meaningful Automated Essay Scoring,” which I found when I visited the JTLA website, looks interesting, too.)

So, I thank Calvin for beginning this conversation and for giving me something more to think about as I evaluate my students’ writing this week (all of it submitted digitally, incidentally) and consider what else might help them become better writers in the future. I also hope that you — as teachers of writing — share your thoughts, both in the comments here and by emailing Calvin.