Justifying Digital Reading and Writing

Before the NWP Annual Meeting, I had three separate conversations (one by email, one by phone, and one in person) with colleagues at the local, state, and national levels about why and how to use digital reading and writing in their classrooms and for professional development. I had many more of these conversations at the NWP Annual Meeting and the ACE Workshop. What I will try to capture here is a basic outline of my response to them, and why I feel that these are critical literacy skills.

I hope to return to this post and update it, both because it is very rough right now and because it will always be able to grow. Please feel free to help me out if you have ideas I should add, OK?

Frameworks

First, to conceptually frame digital reading and writing, there are a few places to begin:

Teaching tips and things to do

I know that this is not the most organized or coherent list. I am also thinking of turning it into a page on this site so that it remains static. For now, though, I think it is the beginning of something worth capturing and building into a more comprehensive resource about how and why we want to teach with these technologies.

Typo Generator

Now, here is a great way to kill time and generate cool graphics for your blog:

typoGenerator

Interestingly enough, the warning at the bottom of this image says “the images used for generating may be subject to copyright.”

Also interesting, as soon as I clicked away from the page, the temp image that was stored — and that I tried to blog above — disappeared. Save early, save often, I suppose…

Blogged with Flock

George Hillocks, University of Chicago: Procedural Knowledge and Writing Instruction

Another great talk on campus from a leading scholar in English Education and Composition…

Notes from George Hillocks’ talk, “Procedural Knowledge and Writing Instruction”

  • Statement about the effectiveness of grammar instruction that Mary mentioned: often cited and often ignored
    • More time spent on grammar instruction correlates negatively with writing scores
    • Teachers think it is important to teach grammar, and kids get worse as writers as a result
  • Pedagogical content knowledge for teaching English and critical thinking
    • The last issue of English Education included a report on the Summit, focusing on “The State of English Education and a Vision for its Future: A Call to Arms”
      • Goal 1: critical thought, dialogue, and a circumspect and vigilant American citizenry
      • The English teacher should be second to none in this goal
    • It is hard to argue with these goals, but there is no indication about how the authors would go about meeting these goals
    • Let’s assume that this is, indeed, one of the major goals of English Education — if so, we need to know what counts as critical thought and literacy
      • How do you know if someone is doing this?
      • How do you teach it?
      • How do you know if it has been taught?
    • We are entering into what I would call a task analysis.
      • What kinds of knowledge, declarative and procedural, does it take to write an argument?
        • At the very least, it involves a sense of what words are and how they work. At another level, it involves propositions and how they are supported with warrants. It separates fact from fiction, and this is the beginning of understanding argument.
      • Nussbaum, Cultivating Humanity — looks at how argument plays a role in civic freedom
        • We need to be able to look at all kinds of arguments, not just the antagonistic ones. We need to understand a culture of critique in which argument is a Socratic inquiry, not just shouting the loudest.
      • As we listen to the arguments about the US in Iraq, we need to listen more carefully and understand the art of rhetoric
        • Rhetoric is the argument of probability
        • For Aristotle, it was important to bring many arguments to bear in deliberation so that one can consider whether something is “holy” (just)
        • These are dependent on warrants being tied to the claims
          • We can’t call something a good movie, without defining what a good movie is
        • Forensics — arguments about the facts of a case
          • There were no forensic arguments in the lead up to the war in Iraq
          • But where was the evidence? It turns out that even the administration now admits that the claims were untrue.
    • In Aristotelian terms, argument can deliberate, judge (forensic), or praise and blame (epideictic)
      • Oedipus as an epic hero having the right to brag: this is one of the rights of heroes
    • Summary of argumentative forms
      • Fact
      • Judgment
      • Policy
    • The Uses of Argument – Toulmin
      • One criticism of this text is that you have to keep attacking the warrants, and the arguer needs to keep responding to those attacks
    • Warrants depend on the situation
      • Forensics – based on scientific facts and the situation
      • Epideictic – based on judgment
      • Deliberative – based on ethics
    • Example from a teacher in a Chicago high school, Sara Rose Laveen
      • Students were studying argument over the course of the whole year
        • They had been studying forensic and epideictic argument and were working on deliberative argument
        • They were discussing a gang ordinance in Chicago and took different roles (community members, police officers, gang members, those falsely arrested, etc.)
        • The teacher had students working in groups of two or three, and she provided a number of resources for the students, including articles and information from the ACLU
        • Since many had had encounters with loitering gang members and the police, they wrote about their experiences and shared them in their arguments
        • Once students had prepared and peer reviewed their arguments, they shared them with a panel that included Hillocks, a lawyer, a police officer, and others
        • They held three-hour presentation sessions where they debated and rebutted one another to discuss the policy
        • Then, they wrote extended papers supporting or opposing the policy.
        • Students ran the entire session, and the thinking was at a very high level.
    • His 1986 meta-analysis looked at experimental studies on sentence combining, grammar, and other foci
      • Effect size was computed as the gain in the experimental group divided by the gain in the control group
      • Studies of sentence combining and other tasks of procedural knowledge were the ones that showed the most gains
      • The difference between inquiry and the other effect sizes is significant because inquiry focuses on content.
      • Free writing is in the zone of what students can do without help, while inquiry is in the zone of proximal development and pushes them beyond what they can already do. This is a better model than inserting information into something like the five-paragraph theme.
    • Trying to get beyond the apprenticeship of observation and move into a more robust model
      • First, we have teacher led lessons
      • Then, we have naturalistic inquiry where development precedes learning (student-centered instruction). This is opposed to Vygotsky’s notion that students develop as they learn.
      • Meeting with students had a low effect size
      • The treatment that worked best struck a balance, using student-led small-group work focused on a challenging task where students had to interpret or analyze information to come up with something new.
      • Students in the environmental groups outperformed students in the natural process group.
    • With students in my master’s of teaching degree program, I assumed that they were committed to helping children learn.
      • Certainly, no teacher would deny that they care.
      • But a consistent manifestation of caring can only come about if the teacher understands her students, the content, and the interactions between them.
      • It entails not only the ability to analyze existing teaching materials, but also the ability to create and critique new ideas
      • I wanted my students to develop ideas and lessons for active learning in their classrooms with most students on task most of the time and engaged in inquiry and constructing knowledge for themselves.
  • So, what is pedagogical content knowledge for an English teacher?
    • Example activity to help students pay attention to evidence
      • Queenie mystery
        • One warrant is that people fall forward down stairs, and that can lead to one claim about her guilt.
        • Another warrant concerns the glass being in his left hand, when he should have been grabbing the banister.
          • The warrant ties the evidence to a claim — generally when people fall downstairs, they raise their hands to protect themselves.
        • There is something on the stove cooking — so what?
        • We have at least two or three pieces of evidence that lead us to believe that there are warrants to support the claim
        • His clothes look quite neat, the items on the wall are still straight, his jacket is fastened right over left, and there is something cooking in the kitchen
      • This activity takes two 45-minute class periods; students write on a third day, and then we move on to the next topic
      • Students used more evidence on the post-test than they had on the pre-test
  • Engaging students in classroom discussions
    • Giving them the skills to take up discussions and interact with one another
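The effect-size computation described in the meta-analysis notes above (the gain in the experimental group divided by the gain in the control group) can be illustrated with a short sketch. Note that this is only the simple gain ratio the notes describe, not a standardized mean difference, and all of the numbers below are hypothetical examples, not data from Hillocks (1986):

```python
# Sketch of the effect-size idea from the notes above: the gain of an
# experimental group relative to a control group. All scores here are
# hypothetical, not drawn from Hillocks' 1986 meta-analysis.

def group_gain(pre_scores, post_scores):
    """Mean post-test score minus mean pre-test score."""
    return sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)

def effect_ratio(exp_pre, exp_post, ctrl_pre, ctrl_post):
    """Gain of the experimental group divided by gain of the control group."""
    return group_gain(exp_pre, exp_post) / group_gain(ctrl_pre, ctrl_post)

# Hypothetical writing scores, before and after instruction
inquiry_pre, inquiry_post = [3.0, 3.2, 2.8], [4.2, 4.4, 4.0]  # inquiry-based group
control_pre, control_post = [3.0, 3.1, 2.9], [3.4, 3.5, 3.3]  # control group

# With these made-up numbers, the inquiry group gained 1.2 points on average
# versus 0.4 for the control group, a ratio of roughly 3.
print(effect_ratio(inquiry_pre, inquiry_post, control_pre, control_post))
```

The point the notes make is visible in the ratio: a treatment like inquiry shows a much larger gain relative to the control than low-leverage treatments do.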

Thoughts on MCTE 2006


[Image: MCTE 2006, originally uploaded by hickstro]

Last Friday, Alfie Kohn spoke at the MCTE 2006 Conference. He made many points about standards, assessments, and accountability, most of which I agreed with and some of which I would take issue with. However, he mentioned a few research studies (for which I would like to get full citations, so I might check his website) that had interesting things to say about teaching and learning under pressure from standardized assessment.

In the first example, two groups of teachers were given different instructions. The first group was told that their students would be tested and that the teachers would be held accountable for how well their students did on the test. The second group was told that their students would be tested as well, but to teach in order to maximize learning. Guess which group did better? No surprise: group two.

In the second example, he discussed how shallow-thinking and deeper-thinking students (as measured by a test of cognitive ability) did on standardized tests. Interestingly, the deeper thinkers did worse, often because they could see how several multiple-choice selections could all be viable, depending on interpretation. His point was that if you see MEAP scores going up in a district, you should be worried about the quality of thinking going on in that district.

There were many other examples, but the final one I will mention here is that test designers deliberately write items so that some students, usually the deeper thinkers, will be tricked (no surprise there). What was surprising, though, was that a student’s score on any given test can vary significantly from day to day. Moreover, districts can see a natural drift in scores from year to year (anywhere from 30 to 50%), and statisticians consider this normal. In other words, no one will ever reach 100% proficiency (the goal of NCLB).

One point that he made about standards in general, and Michigan’s standards in particular, was bothersome. He said that the best standards are vague outlines of what teachers can do, yet he then went on to criticize the Michigan Grade Level and High School Content Expectations. Maybe it is because I have worked on MEAP committees and have tried to integrate these standards into the assessment in the best way I know how. Maybe it is because I know colleagues who fought to keep these expectations as vague as possible, resisting the notion of parsing them out by grade level in the high school. Or maybe it is because I think that we do, at some level, have to have some direction about what and how to teach. Whatever the reason, I think he was a bit harsh on the Michigan standards, though most of that criticism seemed aimed at Granholm and her insistence that we get accreditation from Achieve.org. He didn’t have much to say about Dick “DeVoucher” either, so it is tough to say exactly what is going on with all that.

At any rate, it was a provocative talk and I am glad that we have people like Kohn out on the edge pushing us on all these issues. Next year, Kathy Yancey comes to keynote the conference, so I am looking forward to that already.

Next up… NCTE/NWP in Nashville in six short weeks.

Some thoughts on assessment of new media

David makes an interesting point about blogs and assessment. After noting the old aphorism, “Not everything that is measurable is valuable and not everything that is valuable is measurable,” he adds this:

“I think the things that are most educationally valuable about blogs and read/write web tools are the hardest to measure. Certainly, the creativity they encourage, the excitement they generate are almost impossible to reduce to a simple checklist.”

EdCompBlog

Indeed, I think that another little saying that involves assessment might be in order here, too. “What gets measured gets treasured.” So, not only are the intangible aspects of new media composing probably the ones that are most valuable to teachers’ pedagogy and students’ learning, they are also the most difficult to justify in light of standardized tests and other measures of accountability.

Interestingly enough, in Michigan, our new high school content expectations are filled with references to multimedia and other digital projects. In a way, it is good that these digital creations are now “in the standards,” for that makes it easier to justify professional development and the like. Yet, the conceptual jump from teaching the personal narrative to the digital story — and back again — is still a somewhat difficult one to make both in terms of talking about the writing task itself and the teaching of it.

All the same, I agree with David’s main point. Some aspects of teaching writing with technology are the most difficult to explain and to evaluate. Yet we need to begin thinking about ways to do that. One place to begin looking for answers is Bernajean Porter’s “Evaluating Digital Projects” site.
