Notes from “Using Social Media to Define the New Humanities” – Antonio Viva

  • Thinking about new humanities
    • Context, conversation, collaboration
    • How do we educate our students for success in the web 2.0 world?
    • Can we harness the power of social media to provide students with a vehicle for exploring and creating original content?
  • Old School Creative Writing
    • Genre based instruction
    • Anthology as primary class text
    • Student work not published
    • Blogging/journaling
    • Assessments were traditional and rubric based
    • Mostly fiction and poetry
    • Workshop style with peer editing and review
    • In depth study of literary elements and terms as a vehicle for creation
  • What is the basis of the new humanities?
    • Richard Miller’s presentation to MLA, December 2008
    • See Digital Digs for a reflection and embedded video
  • Personal paradigm shift
    • Communicating instantly and globally
    • English is about human expression
    • Humanists should be at the cutting edge of this
    • Multimedia composition
  • Why should we reconsider this whole thing? — connecting to the panel discussion last night
    • Creativity, collaboration, and courage
    • Schools should be a place where students generate ideas
    • Ability to try out new ideas
    • Fostering new humanities rich environments
    • Provide opportunities for students to convey concepts and original ideas through thoughtful technology rich collaboration
    • Schools should be about communication
  • The WA Mash – Worcester Academy Mash Up
    • What do we want to communicate?
    • To whom and how best do we communicate this message?
    • Model after Salon.com and Slate.com as an outlet for creative writing publication
  • Publishing Tools
    • YouTube
    • Flickr
    • Facebook
    • WordPress
    • Twitter
  • Conversation with students about WAMash
    • How do you get students engaged — turn some of the control of creating and sharing content over to the students
    • What have you learned as a part of taking the class?
      • More technology
      • Enjoy writing more
    • What does it mean to be a writer?
      • Before, I considered writing as an essay style, but now it has really expanded my horizons about writing and there are more ways than just essays and school work
      • What has changed for me is that I am a lot more willing to put myself out there for people to examine. I was questioning my own ability, but there are so many ways to express yourself in writing. I am more able to accept criticism now, and I have a good support group of peers and my teacher.
      • For the past few years, just writing essays, now I have learned that I can express myself more; writing from different perspectives
      • Before the class, I thought that it was limited and you had to just write, but now I realize that writing is more about expressing and getting the word out there about something that you care about because people will listen. Writing is important, and I respect it. It is more of an art than I thought it was.
  • Thinking about change
    • Change needs to be organic — comprehensive school change does not work
    • It will cause chaos — people will not be doing substantive and good work with students
    • Establish a culture where creativity, innovation, and the appetite to try new things are the norm
    • Support the inventors, creative thinkers, risk takers, and innovators with resources, PD, and public accolades
    • Don’t follow the trends, create them

Notes from Steve Graham’s “Evidence-Based Practice in Writing”

Another great session this week, this time with one of the co-authors of the Writing Next report: Steve Graham.

Here is an overview from the MSU LARC site:

Steve Graham, Vanderbilt University

Evidence-Based Practice in Writing – Drawing on Experimental, Qualitative, and Single Subject Design Research for Answers

Wednesday, April 16, 2008
11:30am – 1:00pm
Room 133F Erickson Hall, Michigan State University

This presentation will examine what we know about effective writing instruction, drawing on three recent reviews of the literature. One of the reviews (Writing Next) was a meta-analysis of experimental and quasi-experimental writing intervention research. Another review was a meta-analysis of single-subject design writing intervention research. The third review was a meta-synthesis of qualitative research conducted with outstanding literacy teachers, designed to identify common practices across studies. Advantages and disadvantages to the use of evidence-based practices in writing will also be explored.

About the Speaker:

Steve Graham is the Currey Ingram Professor of Special Education and Literacy, a chair he shares with Karen R. Harris. His research interests include learning disabilities, writing instruction and writing development, and the development of self-regulation. Graham is the editor of Exceptional Children and the former editor of Contemporary Educational Psychology. He is the co-author of the Handbook of Writing Research, Handbook of Learning Disabilities, Writing Better, and Making the Writing Process Work. In 2001, Graham was elected a fellow of the International Academy for Research in Learning Disabilities. He is the recipient of career research awards from the Council for Exceptional Children and Special Education Research Interest Group in the American Educational Research Association.

And, here are some notes from the session:

  • Opening quote: “Kids know the most interesting things” – Mark Twain
    • “It hurt, the way your tongue hurts when you accidentally staple it to the wall.”
  • Writing is nowhere in terms of the educational reform movement in this country
    • The things that drive the educational reform movement are reading and math
    • Now, STEM – science, technology, engineering, and math
    • Why is writing out in the cold?
      • This is not always bad, as it sometimes results in school practices that are not good
      • But, we need to make the case that writing is important
        • 1. One of the reasons that people are not paying attention to writing is that there is a general perception that we do not know how to teach writing. Policy makers want evidence, and they want particular kinds of evidence.
          • We do know that there are some things that work for all students 4-12 and younger
          • People don’t think that writing is important. So, we have to look at the effects of writing on content area learning. We make the case that writing can be helpful in terms of the STEM skills
          • Reading gets more play in the literacy discussion. We need to look at the effects of writing on reading. How does writing affect reading?
        • 2. What are the practices going on in elementary and secondary schools?
          • Limitations: survey data that could be rosy, but the data is still not good
          • ELA teachers are doing less than one extended writing assignment a month
          • You don’t want to go into policy making without good research to make recommendations
        • 3. Theoretical framework — from Patricia Alexander, moving from knowledge about discourse to enhancing motivation

  • What are three primary resources we can draw from?
    • Professional writers
      • Unfortunately, the advice can be simplistic and only moves confident writers to expert writers; it doesn’t help other writers
    • Effective practices from experienced teachers
      • Talk to effective teachers or observe good teachers in practice and study them
        • Problem: if I go in looking for one thing, I will likely see it (difficult to separate the wheat from the chaff)
        • Problem: Donald Graves and the example that works. Yet, there are times when this doesn’t work.
        • Problem: generalizability. Evidence is often selective.
        • With scientific studies, we collect evidence, present findings for all participants, ensure replicability, and measure strength of impact — all of this leads to something that should be more trustworthy than insight and experience alone.
  • This presentation, thus, will draw on three sources: experimental, single subject, and teacher practice
    • Other criteria:
      • Four replications
      • Converging evidence (the sun, the moon and the stars align)
      • Recommendations based on higher quality studies are superior
        • Process writing has very poor research, so you need to be cautious about this
        • The more studies, the merrier
    • Effect size:
      • .8 is large
      • .5 is moderate
      • .25 is small, but significant
    • Writing Next looks at overall quality of writing
      • Strategy instruction (planning, revising, editing, and regulating the writing process); 20 studies, .82 effect size (particularly helpful for kids who find writing difficult)
        • Don’t just PEE (post, explain, and expect): students need repeated modeling
        • For instance, the STOP strategy (Suspend judgment, Take a side, Organize ideas, Plan more as you go)
      • Teaching Summarization (systematic and explicit teaching of how to summarize texts); 4 studies, ? (missed it) effect size
        • Teach the six rules of summarization
      • Peer assistance (working together to plan, draft, and revise); 7 studies, .75 effect size
        • Needs to be structured in a positive way — having students add question marks and carets in their peers’ papers
      • Setting product goals (specific goals for the written product to be completed); 5 studies, .70 effect size
        • Need to tell students what you expect without limiting them
        • Product goals and revising
      • Word Processing (using word processing); 18 studies, .55 effect size
        • Some are short studies, but some are up to a year
        • Using this widely available technology is important, but it is used infrequently in schools or, when it is used, only for the final draft/publication
      • Sentence combining (constructing more complex sentences by combining shorter kernel sentences); 5 studies, .5 effect
        • Work on this together with students, then invite them to apply it back in their own writing
      • Process Approach (extended opportunities for writing, student ownership); 21 studies, .32 effect size
        • Inviting students to engage in planning and revising is good
        • Bad news: the effect size is scattered all over the place
        • Receiving training from NWP yields about a .46 effect; the effect is insignificant without that training
        • You can do this in a very poor way, and not get a good effect; this is compatible with a strategy approach that makes the writing more visible
      • Pre-Writing (have students engage in activities such as brainstorming); 5 studies, .32 effect
        • STOP strategy, for instance
      • Inquiry (old research); 5 studies, .32 effect
        • No pre-test done, so these studies may underestimate the effect size
          • Example: set a goal, analyze the data, look at specific strategies, and apply what you learned
            • A student in elementary school looking at conflict on the playground
      • Study of Models
        • Examines examples of specific writers and types of text; 6 studies, .25 effect
          • Model from good readings
      • Writing as a Tool for Learning (writing in the content areas); small but positive effect
        • 26 studies, but I think that it is more effective in science and math than ELA and social studies based on the effect sizes that we see
      • Grammar (explicit teaching of grammar); 11 studies, -.32 effect size
        • Quality of writing is not affected by grammar instruction
        • What this traditionally looks like: give a definition and an example, which is then used in decontextualized work
        • If we expect it but do not help students use grammar, then it will likely not work
          • Take the kernel sentence: Dog bit mailman

      • Recommendations for Struggling Writers (teaching handwriting, spelling, and typing to struggling writers — teaching transcription skills toward automaticity); small positive effect

  • Single Subject Design Recommendations
    • Explicitly teach students strategies to construct paragraphs; strong positive impact
      • Showing parts of a paragraph to the point that students understand the goals of writing a paragraph
    • Explicitly teaching students how to capitalize, punctuate, etc. helped
    • Reinforce positive aspects of students’ writing — social praise, tangible reinforcement, or both, as a means of increasing specific writing behaviors (small positive effect)
      • Traditional means of grading papers doesn’t work — “we get more with honey than we do with vinegar”
      • Couldn’t draw the summary effect from this, however
      • Need to move the feedback beyond the specific paper and help the student move forward in his/her writing
    • Self-monitoring (students asked to count how many errors they made); might be effective for some struggling writers
  • Individual Teachers
    • Study exceptional teachers and schools
      • Practice had to be applied by the majority of schools or teachers
      • 10 Practices that might make a difference (had to occur in four or more studies)
      • Dedicate time to writing and writing instruction, with writing occurring across the curriculum
        • Get kids in the game of writing
        • Increasing writing by itself is not enough, it also needs to be motivating and give kids tools to be effective
      • Involve students in various forms of writing over time
      • Treat writing as a process
      • Keep students engaged by involving them in thoughtful activities such as planning compositions
      • Vary individual, small, and large group instruction
      • Model, explain, and provide guided assistance when teaching
        • Teachers need to relinquish control
      • Provide just enough support so that students can make progress or carry out writing tasks and processes, but encourage students to act in a self-regulated manner as much as possible
      • Be enthusiastic about writing and create a positive environment where students are constantly encouraged to try hard, believe that the skills and strategies that they are learning will help them write well
      • Set high expectations
      • Adapt writing assignments to meet the needs of students
  • Caveats
    • We should not order these practices hierarchically in terms of one being more effective than another
      • Instead we should order them in a way that we see them working well for us
    • The database is thin
    • Just because a practice has been studied, it does not mean that it will be effective for all teachers in all classrooms.
      • Pay attention and see if it works in your classroom, with your students
    • Little data on those students who are most at-risk: ELL, learning disabilities, struggling writers
    • Lack of data on maintenance and generalization
    • Don’t really know how best to put all of these things together
      • Think about trying to integrate some of these ideas as part of an overall approach rather than try to fit it into an existing approach
    • Teachers’ views on acceptability of these practices will clearly influence their use — this will include the issue of domain specificity
      • If you don’t accept it as a reasonable practice for you in your classroom it will not work
    • Just because a practice is effective in a study or was used by an exceptional teacher does not mean that it will always work
  • Questions
    • 6 traits
      • Most studies were pre- and post-tests with no control
      • Look at journal article on Writing Next
      • 6 plus 1 looked pretty good for what was there
    • In-Service
      • When we asked ELA, science, and social studies teachers about how well their program taught them to teach writing, 70% said it was inadequate
      • We also asked about in-service preparation — you personally, school, conferences — ELA said that 70% were adequate, but 30% were inadequate
      • Most science and other content teachers didn’t feel prepared to do so
      • Not doing it at pre-service level because most states do not require a course in teaching writing
    • We have been doing this work for nearly 25 years and we have not delivered our work in terms of learning strategies approach and outreach
      • We have a distribution problem — we are not providing what we know in pre-service and in-service ed
    • A lot of this is very complicated, so we did the best practice book to give something for teachers to look at
      • We need to have support materials showing teachers how to do this — if you can see it, you can do it
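A quick note for readers on the effect sizes Graham cited (.25 small, .5 moderate, .8 large): these are standardized mean differences, commonly reported as Cohen's d, which express the gap between a treatment group and a comparison group in pooled standard deviation units. A minimal sketch of the calculation, using invented holistic writing scores purely for illustration:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = sum(treatment) / n1, sum(control) / n2
    var1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented holistic scores on a 6-point scale, for illustration only
strategy_group = [4.5, 5.0, 4.0, 5.5, 4.5, 5.0]
comparison_group = [3.5, 4.0, 4.5, 3.0, 4.0, 3.5]
print(round(cohens_d(strategy_group, comparison_group), 2))
```

So an effect size of .82 for strategy instruction means the average student in a strategy-instruction classroom scored roughly eight-tenths of a standard deviation above the average comparison student.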

My Reflections

In thinking about Dr. Graham’s talk, there are a number of salient points that I want to consider. First, he went over the 11 strategies from Writing Next and, even though there is evidence to show that all these strategies are effective, it is the individual teacher that makes the difference in writing instruction.

Second, he talked about how students can use word processing to write and revise, and that is very effective for their growth as writers; however, most of the opportunities that students have to write with the computer only involve typing in a “final draft” of something else that has been written out beforehand.

Next, he talked about peer editing and how students must be scaffolded into the process of giving feedback; just having them give comments to one another is not enough as they must use the language of writing in that talk.

Finally, he talked about the writing process approach and how having an authentic purpose and audience for students should happen more often than it does. Typically, the audience is only within the classroom walls, and students don’t share beyond their friends. Yet, he described a project in his children’s school in which students shared their work more widely, and suggested that this could be a goal for many, although not all, of our assignments.



Creative Commons License
This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Brown Bag Presentation: Multiliteracies in Composition

Last Friday, I was invited to lead a “brown bag” session for my English department’s composition program. For the session, titled “Multiliteracies in Composition,” we focused our pre-reading on an article about a second-year college composition course developed at Michigan Tech called “Revisions.” Details can be found in the following article:

Lynch, Dennis A., and Anne Frances Wysocki. “From First-Year Composition to Second-Year Multiliteracies: Integrating Instruction in Oral, Written, and Visual Communication at a Technological University.” WPA: Writing Program Administration 26.3 (2003): 149-171.

We began by watching Richard Miller’s presentation: The Future is Now. This presented us with a variety of challenging questions about how we might pursue such a vision of the “new humanities” at CMU, including discussions about professional development, our beliefs about the changing nature of literacy, and how, if at all, a shift in our curriculum would happen in the time frame that Lynch and Wysocki describe from their context.

We then continued in small groups with a jigsaw reading, where groups posted 2-3 responses or questions on their own page on my wiki. After watching Wikis in Plain English, they understood the basics of posting and were able to see how using a wiki could allow for multiple groups to post their work and then quickly share it with the class. The conversation continued in a large group discussion, including some emerging questions:

  • What do students need in terms of literacy in a changing world?
  • How do multiliteracies relate to technology and communications?
  • What does the multi-disciplinary approach do for departments? What about specialization?
  • If everyone talks the same language, do we have our own specialties?
  • What does this mean for us in terms of the course? Content? Writing?
  • Faculty-only vs. Graduate Assistants–How is this possible or feasible at our University?
  • What does this look like across the curriculum? Is it sustainable?
  • What about assessment? Individual? Groups? Programmatic?
  • Is there still a need for traditional comp courses? Don’t you still need a first year comp?
  • How does the continuing focus in professional organizations on 21st century literacies contribute to this discussion (last week’s NCTE statement on the future of composition), both for college and life?
  • What would the writing center need to/be expected to do?
  • Does this perpetuate a two-tiered society, a Gutenberg in reverse?
  • How do we support faculty in these collaborations?
  • Is the resistance about learning to do old things with new technologies or really coming to understand a new paradigm that the new technologies allow?

We ended with Michael Wesch and his students’ A Vision of Students Today, just in time for a sunny mid-winter drive home. All told, it was a timely and lively discussion for our department, and I appreciated having the opportunity to facilitate the session. Given the release of the 2008 Horizon Report, it seems as though we are constantly reminded that things continue to change. I hope that this session serves as a spark for further conversations about multiliteracies in composition later this semester.


Literacy alive and well in computer age – Perspectives – Opinion – Technology

From the Google Reader….

It makes no sense complaining about the decline of the printed word. As it becomes just another medium, we are moving to a kind of multimedia literacy, where capability with print becomes no more important, or useful, than capability with image.

This is not necessarily a bad thing. There is no rule that says that the written word is superior to other forms of media. While some of us are print-oriented and will always remain so, there are people growing up to whom print is of comparatively minor importance.

The vast majority of these people will enter adult life as well educated as the generations before them. But they will rely less on books and newspapers, and more on television and the internet and multimedia.

We are not witnessing the decline of literacy, simply a new type of literacy. It is pointless to make moral judgements about the superiority of one medium over another.

Literacy alive and well in computer age – Perspectives – Opinion – Technology

Graeme Philipson makes a compelling argument for how our culture’s artists such as Doris Lessing and Elton John — both of whom decry the effects of the internet — need to change their perspectives about literacy in the 21st century. As a topic always on my mind, I found this opinion article a fresh take, especially the connection that Philipson makes to our thousands of years of oral history, which has, only in the past few centuries, been replaced with print. Just because things are changing again doesn’t mean that we are in decline; it simply means that we need to adapt to the change.

This connects with a conversation that I had yesterday with one of our college’s public relations consultants. She and I were talking about my research interests and how to make “literacy and technology” something newsworthy, and we were both struggling to find an angle on it. On the one hand, it seems that discussions of technology and literacy should be self-evident. Yet we continue to see school infrastructures and policies, as well as teacher, administrator, and parent attitudes, that do not reflect a shift in thinking about this, and, as this EdWeek article points out, what doesn’t get measured doesn’t get treasured.

So, my question today is how to make technology and literacy — not just tech literacy, but the changing nature of literacy — a key part of the conversation that the media has about schools. Clearly, when they publish the box scores for the test results, people stand up and pay attention. Without being punitive, are there ways that we, as educators, can engage the media to get the story of technology and literacy in front of the general public in a compelling manner?

To be more concrete, I want the tone of the conversation in the media to change from “Why aren’t students passing the tests” to “Why don’t students have one-to-one access to laptops for use in their daily reading, writing, calculating, observing, predicting, analyzing, etc.?”

Philipson shows us a way to shift the conversation on the opinion page. Can we think about ways to do it on the front page, too?


Social Networks, School Policies, and Surveillance

My colleague Rob Rozema from GVSU has invited my students and me to participate in a new Ning social network, Teach English. I am very excited about the opportunity to be involved in this project, and we will also have students from Allen Webb’s course at WMU join in, too.

As we consider what we will do with this network, I think that we have to ask ourselves a key question about its implementation and potential for use: how do we account for and respond to the contradiction in local, state, and federal policies regarding internet use (for instance, no blogging or social networking) and the call to teach these skills in our schools?

In other words, if we teach students how to use social networks, will they be able to use those skills once they are teaching?

Moreover, this raises another issue that my best friend Steve Tuckey and I were discussing a few weeks back — does taking a technology and reappropriating it for use in schools undermine the excitement and potential uses for that technology?

As an example, we talked about the idea of a “cheese sandwich blog,” one that basically accounts for the mundane happenings of everyday life. (If we build 20 million blogs, will the readers come?) Contrast that with the more substantive kinds of blogging that many edubloggers are calling for and teaching; that is, a more “academic” form of blogging. Steve asks, what’s wrong with the cheese sandwich?

He asks this not to be sarcastic (well, OK, maybe a little bit), but more to take a critical approach to how we use blogging. From an email conversation, he says, in part:

by trying to call for highfalutin standards of rigor in what our students blog about, we are essentially trying to colonize one of the most democratic spaces with the self-important hierarchy of academia. We try to set up the same old benchmarks for “good writing” in a new environment, all the while touting the greatness of its promise as something “new.” Seems schizophrenic to me. And don’t get me started on how real-time authoring serves to feed the dragon of continuous assessment…

In other words, if we reappropriate “blogging” into an academic setting, is it blogging anymore? Or has the definition of “blogging” (or, perhaps, “edublogging”) shifted such that a higher level of discourse is now expected, above and beyond the typical diary/journal/update blogs of the past? And, with microblogs in Facebook and Twitter, are we going to have to think about how to make that academic blogging, too?

Steve was interested in seeing me raise this point with the other edubloggers who are thinking about similar ideas, perhaps in another forum beyond our blogs. Perhaps I will write a letter to EJ or something like that. If others have an idea about where and how we might discuss these issues — the appropriate use and reappropriation of blogging for academic purposes — let me know. It will certainly be on my mind as I prepare for next semester.



Thinking about Multimodal Assessment

Yesterday, our RCWP Project WRITE team had the good fortune of being able to work with NWP’s Director of Research and Evaluation, Paul LeMahieu, on an analytic writing continuum workshop. In his talk, which was similar to the session that I attended last summer, he talked about how the continuum has been developed, the pedagogical uses of it, and how we, as professionals who teach writing, need to not just tell those who value tests to “stop,” but to also offer them something better to use instead (we hope to post some notes on the session soon on the Project WRITE wiki).

Particularly useful for the Project WRITE teachers: as he talked about the different categories for assessment on the continuum (content, structure, stance, diction, sentence fluency, and conventions — modeled, with permission, after six traits), he also talked about how this structure of assessment works for most kinds of writing, but not for all, multimodal writing being a notable exception. He mentioned how there are not really any models that explore how to assess multimodal composition and how, perhaps, we could develop one through this work in Project WRITE. That is a very exciting component of this project that I had not anticipated when we originally started, and I look forward to pursuing it more soon. (NOTE: I do think that Bernajean Porter has our thinking moving in this direction for K-12 students, and she has put up some good criteria and an interactive rubric maker on her Digitales Evaluation site.)

Coincidentally, I have been chewing on this idea for the past few days as I was trying to help my students in ENG 201 come up with criteria for evaluating their final multimodal projects. As I asked them to reflect on what they have been doing and how they have been working over the past few weeks on these projects, we talked on Tuesday about how the categories of the analytic continuum (which we have been using all semester) just didn’t quite line up with what they were thinking they should earn a grade on. Along with some criteria for judging group member performance, they went back to our discussions earlier this semester about rhetoric, and we came up with the following ideas for grading this project:

  • Ethos: the credibility of the author is established through professional language, use of appropriate sources, and evidence of author’s perspective (within or in addition to the main multimodal documents)
  • Pathos: the texts make appropriate emotional appeals that both engage the reader and provide insight into the chosen topic
  • Logos: the texts present a clear and coherent central idea, supported with appropriate evidence and argumentative strategies
  • Content and Structure: the choice of mode and media support the message in the texts and elements of multimedia are thoughtfully integrated into the project rather than as a gratuitous add-on
  • Design: the choice of design principles (contrast, repetition, alignment, proximity) as well as rhetorical decisions (transitions, word choice, stance) combine to make an attractive and effective presentation

So, it will be interesting to see how this turns out. Students, in groups, will be assessing the other groups’ work, and I will be throwing in my grade with the whole bunch to get an average. I haven’t graded anything multimodal yet, let alone graded collaboratively, with students involved in the process. I’ll write more about it once we are done, and I look forward to hearing your ideas about how you are teaching and assessing multimodal writing, as well as any resources that you can point to about this messy, yet engaging, component of the writing process.
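Since the final grade will blend several peer-group scores with my own, here is a small sketch of how that averaging might work across the five criteria above. All of the numbers and the four-point scale are invented for illustration; this is not our actual gradebook.

```python
# Hypothetical criteria and scores; the averaging logic is the point.
CRITERIA = ["ethos", "pathos", "logos", "content_structure", "design"]

def project_grade(peer_scores, instructor_score):
    """Average per-criterion scores from peer groups and the instructor.

    peer_scores: list of dicts, one per assessing group
    instructor_score: a single dict
    Each dict maps a criterion name to a score on a 4-point scale.
    """
    all_scores = peer_scores + [instructor_score]
    per_criterion = {c: sum(s[c] for s in all_scores) / len(all_scores)
                     for c in CRITERIA}
    overall = sum(per_criterion.values()) / len(CRITERIA)
    return per_criterion, overall

peers = [
    {"ethos": 3, "pathos": 4, "logos": 3, "content_structure": 4, "design": 3},
    {"ethos": 4, "pathos": 3, "logos": 4, "content_structure": 3, "design": 4},
]
teacher = {"ethos": 3, "pathos": 3, "logos": 4, "content_structure": 4, "design": 3}
per_criterion, overall = project_grade(peers, teacher)
```

One design question this raises: a flat average weights each peer group the same as the instructor, which may or may not be what we want.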


Bridging the Computerized Scoring Divide?

Last month, TechCrunch featured a story on a new web-based tutoring service, PrepMe. I was contacted by Calvin Truong, PrepMe's Operations Manager, about writing a post on the service here in my blog. From the TechCrunch review, it sounds like a different take on the model of submitting a piece of writing only to have it graded by a computer, a model that many, including Nancy Patterson (in this month's Language Arts Journal of Michigan) and Maja Wilson, have been critical of:

Prepme is one online test prep company coming out of the University of Chicago’s business incubator. Founded in 2001, the company offers test preparation for the SAT, PSAT, and ACT, using an adaptive algorithm to customize the preparation course for each student.

Unlike Kaplan’s online offering, Prepme doesn’t calculate the best lesson plan once, but continuously as you work your way through the material. Their system keeps track of what questions you get right and wrong, working you harder on the types of questions you miss.

Additionally, customers can connect electronically, using real time chat, with high scoring college students who serve as tutors.

Source: TechCrunch, "Start-Ups Change How Students Study for Tests," 9/1/07
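
The kind of adaptive selection TechCrunch describes, drilling hardest on the question types a student misses, can be sketched roughly as follows. This is my own illustration, not PrepMe's actual algorithm; all class and method names are invented.

```python
import random

# Hypothetical sketch of adaptive question selection (not PrepMe's
# actual algorithm): track right/wrong answers per question type and
# sample the next type in proportion to how often it is missed.
class AdaptiveDrill:
    def __init__(self, question_types):
        self.attempts = {t: 0 for t in question_types}
        self.misses = {t: 0 for t in question_types}

    def record(self, qtype, correct):
        self.attempts[qtype] += 1
        if not correct:
            self.misses[qtype] += 1

    def next_type(self):
        # Smoothed miss rate: unseen types start near 50/50, so every
        # type keeps some chance of being selected.
        weights = [(self.misses[t] + 1) / (self.attempts[t] + 2)
                   for t in self.attempts]
        return random.choices(list(self.attempts), weights=weights)[0]

drill = AdaptiveDrill(["sentence_completion", "reading", "algebra"])
drill.record("algebra", correct=False)
drill.record("reading", correct=True)
# "algebra" now carries the highest weight, so it is drilled most often.
```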

When Calvin wrote to me, he wondered if I would blog about PrepMe here. I replied with some initial concerns:

Prepme does seem like an innovative service that takes advantage of computerized scoring while still adding the element of human judgment. Most of the outright computerized scoring systems out there really worry me as a writing teacher (as well as turnitin.com), so this is a clear departure that blends technology and pedagogy…

… although I do think that your service is innovative, I am still concerned about writing items that seem to support computerized scoring, as many of the professional organizations that I belong to have statements that expressly condemn computerized scoring.

NOTE: After some closer reading, I should note that the writing itself appears to be scored by the tutors, while multiple choice items that are like those encountered on the ACT, SAT, and other tests are the ones being computer graded.

To continue the conversation, Calvin wrote back immediately, and with his permission, I share parts of that response here:

About your concerns, I completely understand, and I think we’re pretty in-sync on both points. We’re working on a few initiatives that would speak more generally to trends in education, and since we’re only grading multiple choice tests and hand grading essays to enable detailed feedback it seems we’re on the same page. I would expect there is much less resistance in using technology to automate grading multiple choice exams since this minimizes human error, but perhaps I’m mistaken on this.

As to the interesting trends that may be worth writing about, there are two that we’re working on that may be of interest. The first is our work with the State Dept of Education in Maine, and the second is more generally about what’s happening in online education.

Maine recently enacted legislation that required the SAT to graduate from high school. We’ve committed to a 3 year program with the Dept of Education where we provide free test prep to every public high school student in the state. Here’s a press release from the Maine DoE website: http://www.maine.gov/education/edletrs/2007/ilet/07ilet072.htm. This initiative is interesting in and of itself, and may provide fodder for an interesting discussion. As far as we can tell they’re doing it to not have to invest tremendous resources to create their own state standardized test and to also drive students to consider applying to college. It’s an interesting social experiment and we’re proud to be a part of it.

Another approach might be to talk about the general trend in online education of trying to find the sweet spot between scalability, quality, and cost. We believe that using technology to give you scale while having high quality services with tutors from top universities, at a significant cost advantage is the way to go. In the pre-college market, having tutors at top universities is a quality win because these are exactly the sorts of students that our users want to be and exactly the sorts of students that our parents want their children to be, and this fosters great relationships online. There is some inherent cost in this approach but we believe it’s worth it.

Clearly there are others out there that disagree — some go for the no-compromise in quality, 1-on-1, in person is the only way to go but that has tremendous cost and little scalability. Other companies are trying the low cost online model with outsourced tutors and we believe this sacrifices too much quality in favor of cost.

It may be interesting to consider the implications of this because what is commercially viable may not actually be what is the most pedagogically pure.

All in all, it was an eye-opening discussion and gives me hope that hybrid models of online grading with humans sharing their insights could be a way to go. In my initial training as an online instructor for our state’s virtual high school, it seemed as though we relied more on the multiple choice items and writing that was highly scripted, almost not requiring a human response (even though I was grading it). As we ask students to write and share their writing online, this is not the best model of them composing digital texts per se, but it is a model that we could consider using in our own classrooms to foster peer response on traditional texts in digital environments.

Also, it points to the need in our field to more fully analyze this phenomenon and come up with alternatives that we feel are viable. I am not an expert in the topic of computerized writing assessment, yet I am becoming more familiar with the field. A search in Google Scholar for "computer based writing assessment" didn't yield anything since 2003 (in the first ten pages of hits). The most recent and comprehensive article that I saw was Goldberg, Russell, and Cook's "The effect of computers on student writing: A meta-analysis of studies from 1992 to 2002," originally published in the Journal of Technology, Learning, and Assessment. (An article in the current issue, found after I accessed the JTLA website, "Toward More Substantively Meaningful Automated Essay Scoring," looks interesting, too.)

So, I thank Calvin for beginning this conversation, and giving me something more to think about as I evaluate my students’ writing this week (all of it submitted digitally, incidentally) and consider what else might help them become better writers in the future. I also hope that you — as teachers of writing — share your thoughts both in comments here and by emailing Calvin as well.

Pondering the Curricular Value of Digital Writing

A few weeks ago in Chico, I was fortunate enough to meet John Bishop from the other RCWP, Red Clay Writing Project located near Atlanta, and we had a splashing good time there!

Since then, I have been following his blog and I am particularly interested in the recent post that he created about exploring digital storytelling for youth. He asks some key questions there, one being:

3. How can we help foster skills/practices that are “marketable” for youth? In other words, how can we acknowledge various economic/power structures youth face as they navigate through (and exit) different stages of their educational lives? How does/should our work interact with public school curriculums?

I find this particular question relevant to me on three fronts this week as I spend time in meetings and workshops for our writing project’s work. Some of it is still up in the air, so I won’t go into detail here, but three additional questions emerge for me based on some things that are happening in Michigan.

First, Allen Webb has compiled a website that addresses the implementation of the new Michigan High School Content Standards. There is plenty more info there for you to get the entire story, but basically it boils down to the fact that many English teachers in MI are feeling pressure to develop common curricula and assessments, ones that are not — in John's words — developing "marketable" skills or digital literacies. There is also a petition to sign, and I think that it is worth considering, in the scope of John's questions, the broader curricular pressures that teachers are under. How, then, do we begin to engage in serious curricular conversations about teaching digital writing when more and more prescribed curricula seem to be coming down the pike that fail to address it at all?

Second, I am currently attending a workshop sponsored by the Eastern Michigan Writing Project on NWP's Analytical Scoring Continuum, a scoring rubric redesigned from the six traits model. It has been an interesting workshop so far, and it has given us lots to think about in our site's work and in what I will be doing with my pre-service teachers in the fall. That said, my colleague Marcia and I were talking in the car on the way home about the fact that this rubric — like all state assessment/six traits type rubrics — seems to be focused on print-based modes of composition and almost inherently neglects the demands of digital writing. For instance, the idea that writing is "clear and focused" can certainly apply to a blog post like this (I hope), but does it apply to someone creating hypertext fiction with a wiki? This is not a criticism of the model so much as it is me raising the concern, again, that schools are not even thinking about teaching digital writing, let alone beginning to understand the paradigm shift associated with teaching it. How do we help make that shift?

Third, we are beginning to plan for next year's professional development and — besides needing to figure out exactly what we will offer related to tech-based writing PD — we really need to get some info about research in the field and the effectiveness of web-based writing practices. I am going to do some searching on the Pew Internet and American Life site, the MacArthur Foundation's Digital Learning site, and UConn's New Literacies Research Team site to see what I can come up with. So, my final question for tonight is this — if you have any empirical studies on digital writing in schools that you can point me to before Thursday morning, could you please post them as comments here?

Thanks for hanging in there with me on this post. I appreciate all the comments — both online and F2F — that you, as readers, give me about this blog. It is very encouraging as a teacher and writer.

And, just so you know, I am finally thinking about doing a more formal podcast starting soon as I am currently an intern in the Webcast Academy. Wish me luck!

Response to “Writing Next” Report

Monday, we will be discussing the Writing Next Report, issued by the Alliance for Excellent Education. Here are my thoughts on the prompt, “How has reading the Writing Next Report encouraged you to rethink aspects of your teaching practice?”


The Writing Next Report, written by Steve Graham and Dolores Perin and issued earlier this year by the Alliance for Excellent Education as a report to the Carnegie Corporation of New York, outlines eleven teaching strategies that improve student achievement in writing. The report is a meta-analysis of dozens of quantitative studies that allow for the calculation of an "effect size," or "the average difference between a type of instruction and a comparison condition" (p. 13). More on the measurement process and research method in a moment, but first a look at the results of the study.

The authors of the report suggest eleven writing strategies that "are supported by rigorous research, but that even when used together, they do not constitute a full writing curriculum" (p. 4). This point merits particular attention as one reads the list of strategies and thinks about what good writing teachers do as well as how and why they implement those strategies. That said, the list of strategies reads like a "greatest hits" of instructional techniques that a teacher can implement in his or her classroom (hence the warning not to call this list a curriculum). Here is the list, taken verbatim from the report, pages 4 and 5 (with the effect sizes listed at the end of each item, the larger the better):

  1. Writing Strategies, which involves teaching students strategies for planning, revising, and editing their compositions (.82)
  2. Summarization, which involves explicitly and systematically teaching students how to summarize texts (.82)
  3. Collaborative Writing, which uses instructional arrangements in which adolescents work together to plan, draft, revise, and edit their compositions (.75)
  4. Specific Product Goals, which assigns students specific, reachable goals for the writing they are to complete (.70)
  5. Word Processing, which uses computers and word processors as instructional supports for writing assignments (.55)
  6. Sentence Combining, which involves teaching students to construct more complex, sophisticated sentences (.50)
  7. Prewriting, which engages students in activities designed to help them generate or organize ideas for their composition (.32)
  8. Inquiry Activities, which engages students in analyzing immediate, concrete data to help them develop ideas and content for a particular writing task (.32)
  9. Process Writing Approach, which interweaves a number of writing instructional activities in a workshop environment that stresses extended writing opportunities, writing for authentic audiences, personalized instruction, and cycles of writing (.32)
  10. Study of Models, which provides students with opportunities to read, analyze, and emulate models of good writing (.25)
  11. Writing for Content Learning, which uses writing as a tool for learning content material (.23)

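The effect sizes above are standardized mean differences. As a minimal sketch of what that measure captures (using Cohen's d with a pooled standard deviation; the report's exact computation may differ in detail, and the data here are made up):

```python
import statistics

# Sketch of a standardized mean difference (Cohen's d, pooled SD).
# Writing Next reports effect sizes on a scale like this one;
# the authors' exact formula may differ in detail.
def cohens_d(treatment, comparison):
    n1, n2 = len(treatment), len(comparison)
    m1, m2 = statistics.mean(treatment), statistics.mean(comparison)
    v1, v2 = statistics.variance(treatment), statistics.variance(comparison)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Example: writing-quality scores for a strategy-instruction group
# versus a comparison group (made-up numbers).
d = cohens_d([4.1, 4.5, 3.9, 4.8, 4.3], [3.6, 3.8, 3.5, 4.0, 3.7])
print(round(d, 2))  # 2.13 with these invented scores
```

An effect size of .82 for writing strategies, then, means the average strategy-instruction student outscored the comparison condition by .82 standard deviations.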
These strategies, as a whole, represent most (if not all) of what I have come to understand comprises good writing instruction. To that end, I am pleased to know that my theoretical orientation toward the field aligns with the experimental evidence about "what works" in writing instruction. In particular, I am glad to see that writing strategies and collaborative writing rank so high, although it makes me wonder why the process approach ended up toward the bottom of the list. Perhaps the authors, unlike Katie Wood Ray, are making a distinction between the writing process and the writing workshop, though I am guessing that they are not.

Even though Graham and Perin reiterate that this is not a curriculum, I have to wonder if some teachers, schools, districts, and states could see it as such and "require" teachers to use each of the strategies in a writing program. Like the writing process/workshop distinction above, there are other parts of the report that do not represent the richness of discussions in our field (such as moving beyond word processing into other forms of digital writing, or thinking broadly about writing-to-learn strategies), and I feel that the over-reliance on quantitative data alone may limit some of the implications and, in turn, lead to implementation plans that are incomplete.

All that said, the report is useful to me in my teaching in many ways. As a teacher educator, I think that this report can certainly offer evidence of the many practices that I use that stand up, for better or for worse, in a “scientifically-based” study. Thus, when I use these approaches in my teacher education courses and professional development workshops, I can point to the effect size data and suggest that these strategies have been integrated in a variety of contexts, yielding strong results. In other words, it can bring empirical merit to many of my theoretical practices, and the practices I share with other teachers.

As a writing teacher, this report encourages me to reconsider some ideas that I have neglected for some time. I do appreciate that Graham and Perin discussed the negative influence of explicit grammar instruction (p. 21) as it affirms my beliefs and synthesizes a number of good studies that have happened over the years, thus bringing (what we hope might be) a final curtain on the “should we teach grammar in isolation” argument. Also, the processes of summarization and sentence combining remind me — as someone who will be teaching a college writing class this fall — that not all students know how to do these tasks, or do them well. Modeling summary writing and sentence combining could offer some variety to my lessons as well as teach useful writing skills.

In sum, the Writing Next Report was useful to read as it confirmed many of my beliefs about teaching writing with statistical evidence while reminding me of other aspects that I need to reintroduce into my practice. It is also encouraging to see these practices held up as "good" for writing instruction because, perhaps, those who work with the assessment of writing might be able to think about how to measure these aspects of writing, not just the final product, which is so valued right now.

Notes on Timothy Shanahan’s “The Role of Research in US Reading Policy”

Here are notes from a talk today:

Timothy Shanahan, Current President of the International Reading Association

Tim Shanahan is a professor of urban education at the University of Illinois at Chicago and the director of the UIC Center for Literacy. He has played a leadership role at the federal level in making connections between literacy research and educational policy. Dr. Shanahan served on the National Reading Panel, chaired the National Literacy Panel on Language and Minority Children and Youth, and chairs the National Early Literacy Panel. His research interests include: the relationship between reading and writing, the assessment of reading ability, family literacy, and school improvement. Dr. Shanahan has published numerous research articles and written and/or edited several books including Teachers Thinking, Teachers Knowing (1994) and Multidisciplinary Perspectives on Literacy Research (1992).

Notes from the session, “The Role of Research in US Reading Policy”:

  • Understanding reading in the context of US policy; having become a combatant in the “reading wars”
    • I had been invited to be a part of the National Reading Panel and served on it for two years trying to synthesize research through a meta-study
    • The real upshot of all this is that it led to $5 billion being infused into reading education
  • An Ideological History Lesson
    • Governmental role in education
      • 1600s: MA, CT, and NH establish public schools for religious reasons
      • 1788: US Constitution ratified, no mention of education
      • 1791: Amendment X for states' rights
      • 1791: 7 states make constitutional provision for public education (e.g., establish school boards)
      • 1800s: Freedmen's act for curriculum for freed slaves
        • First time that feds intervened in local schools at such a large and systematic level
      • 1900s: Increased centralization, immigration
      • 1950s and 60s: ESEA and focus on science and technology
      • Current: More centralized curriculum
  • Current forces in education
    • Explosive growth in informational technology
    • Internationalization of economic markets
    • Changes in the relationship of literacy attainment and well being
  • Current changes in the economy
    • Growth of service sector and decline of manufacturing
    • Transformation of low education blue collar work into skilled labor
    • Free trade movement of low-paying jobs and workers
    • Outsourcing of middle-income jobs and immigration of high-income workers
  • Changes have led to:
    • More jobs that require reading
    • Increased correlation of reading achievement and economic success
  • Current status of education
    • Since “A Nation at Risk,” US education is continually in “reform” mode
    • From 1971 to 1994, there has been no improvement in reading for 4th graders
    • Cost of education has risen in real terms
    • Public dissatisfaction is still there because the fundamental problems have not changed
    • Educators have not been sure footed (neither convinced of the need for reform nor clear on how to make things work better)
      • Where are the experts at the table in most of these debates?
  • What’s the Point?
    • The politicians aren’t crazy — reading has to improve
    • Their “solutions” are frequently wrong, but they deserve credit for making serious attempts to solve a real problem
    • They are deeply frustrated by educators who don’t seem to recognize the problem (or who want to respond with the union shop kinds of solutions)
  • Context for NCLB
    • Low NAEP scores and the reading wars in the 1990s
      • As it got bigger and bigger, politicians decided to do something that they hadn’t done in education before: appoint an expert panel
        • I had become a member of the National Reading Panel
          • They didn't want our opinions; they wanted a determination of fact
          • We can’t make recommendations except for recommendations on more research
          • Can’t tell how well we thought things would work, or not
    • Changes during the Clinton administration
      • focusing Title I money on poorest schools
        • This hadn’t happened before, and the dollars were focused a little bit more on poor districts
      • Reading Excellence Act (SBRR)
        • Some direct money is given to states for reading education, given on a grant basis, although this was done before the NRP was finished
        • Every state was able to decide what they wanted to call "research," and there were no standards on it at all
      • Pushing adoption of proven curriculum
      • Move from professional development to volunteers
        • Big fight on money for teachers vs. volunteer tutors
  • National Reading Panel
    • Appointment process began in 1997
      • How do you build authority and trust?
      • Took 300 nominations and the Secretary of Education created the panel
    • Open meetings with transcripts
    • Public hearings around the country
    • Explicit methodology: replicable searches, pre-established inclusion criteria, research had to be consistent with questions, meta-analysis
      • There were some things we were not able to find conclusive evidence about, so we didn't include them
    • Findings on phonemic awareness, phonics, fluency, comprehension, vocabulary, professional development
    • Controversy
      • There was a very real chance that this would have all ended up on a shelf, but we had a new president come in and he made it the cornerstone of federal literacy policy
  • No Child Left Behind
    • 2001 reauthorization of ESEA
    • More Title I funding, but more accountability
    • Reading First ($1 billion a year for K-3 PD, curricula, materials)
    • This allows Congress a way out of the unholy bargain. We can control quality without being a part of local decision making since the NRP did it
      • Congress keeps its hands clean of the controversy
  • Results of all of this…
    • Higher 4th grade achievement on both the NAEP and the NAEP trend items (reduction of achievement gap, sizable gains, highest trend performance ever)
      • What’s indisputable is that 4th graders are reading better now than they were 12 years ago, despite how you spin the politics on how the gains have been made and by whom
      • With all the state and federal focus on K-3, there has been some improvement at 4th grade. But…
    • No improvement for older students
      • 8th graders are not moving up, so we are losing the gains between 4th and 8th grade
      • What you see in the whole body of ed research on Reading Recovery, Head Start, and other programs is that we know how to raise achievement early but that we don't know how to sustain it
        • For instance, the difference between kindergarten full and half days had their gains erased by the end of first grade because all the same students did all the typical first grade curriculum.
      • We need to reform the system at all levels from the ground up. We need to keep all day kindergarten and then do PD for teachers in first grade to work with these higher achieving students.
  • NCLB/RF Problems
    • Accountability of goals of NCLB are unreachable and fail to reward success
    • The costs of testing are burgeoning in terms of lower morale, corruption, mistrust, etc.
    • States are encouraged to reduce standards
    • Peculiar corruption of Reading First
    • Subtle shift of NRP to WWC
    • Problems with the newer panels (NELP, NLP)
  • What is needed to make research-based policy work?
    • Substantial public support for research
    • Open way of determining specific research priorities
    • Benefits for researchers who choose to do this work
    • Is this likely? No:
      • We don't see evidence of public support for research.
      • The feds are maintaining power over priorities.
      • There is no real infrastructure for carrying out recommendations for policy into practice.
      • There is unlikely to be evidence soon of the effectiveness of the Reading First policy.
      • There is no increase in university commitment.
  • Question and answer session
    • Shifts in thinking: Clinton and the Democrats wanted national testing in the 1990s, but the conservatives didn’t want to lose local control; now it is vice versa because all the states have their own standards.
    • Reading First: There is survey data to show that Reading First teachers actually feel better now that they “know how to teach reading” and have books in their classrooms. Part of the reason for this success is the Reading Excellence Act.
    • What is dividing the field is not methods, but thinking about the social and cultural aspects of what counts as evidence.
      • What grad students need to do is set aside the rhetoric of whether things are "good" or "bad" and look at the field as a whole. That doesn't mean there aren't times when different questions demand different kinds of evidence, especially as it relates to policy.
      • There are people in medicine who do anthropology, but they don’t move into the policy debate.

Blogged with Flock