AI in College Writing

As the new semester begins, many faculty are again engaged in an ethical debate about the ways in which their students might use AI in their writing assignments, whether with explicit guidance and permission, or otherwise.

This past week, I was invited to join educational futurist Bryan Alexander and my colleague and collaborator Daniel Ernst as we discussed a number of ideas related to AI and the teaching of writing at the college level. It was a robust discussion, and I encourage you to view the Future Trends Forum recording here.

Over the past few months — as I have been trying to refine my own thinking on AI and writing through blogging, facilitating workshops and webinars, beginning a new book project with my colleague and co-author Kristen Hawley Turner, and reviewing the transcripts of our focus group interviews from the project Daniel and I have been working on — I have begun to summarize the ways in which my colleagues are describing their use of AI in writing instruction in the following manner.

In short, I am hearing educators talk about and seeing ways that AI can serve 1) as a thinking partner, 2) as a research assistant, and 3) as a co-writer. This is an imperfect list, of course, as the tools continue to change. Yet, as 2024 begins and the range of functions available in generative AI writing tools seems to be settling into a few categories, I share some initial thinking on them here.

AI as Thinking Partner

With the many AI tools that students can use as conversational partners (e.g., ChatGPT, Bing, Bard), I wonder how we can encourage them to engage with the AI as a thinking partner, much the same way we would during a writing conference (or encourage them to interact with peers to share ideas and give feedback). How might we encourage students not to simply ask the chatbot to write an essay or story for them, and instead to prompt it for the kinds of feedback that could further their own writing?

For instance, when prompting ChatGPT in this manner — “Given recent weather patterns, I am getting more worried about changes to our environment, and I am working on an argumentative essay on climate change. What are some questions that could help get me started as I think about specific topics to cover in my essay related to sea level change, heat waves, and forest fires?” — it provided me with a decent list of questions that could lead my writing in additional directions.

Similarly, Microsoft’s Copilot (which I have access to through my institution’s Microsoft license) generated some questions, though perhaps not as nuanced as ChatGPT’s. As just one example, ChatGPT generated “How has the global sea level changed over the past few decades, and what are the primary contributors to this change?” whereas Copilot asked two separate questions: “What are the primary causes of sea level change?” and “What are some of the most significant sea level changes observed in the last decade?”

Even having students compare the outputs of these AI tools could be useful: looking at the depth and nuance evident in the questions, and thinking about which set of questions would lead to more substantive, engaging writing. When used only to prompt thinking, encouraging students to treat AI chat tools as a way to develop new inquiry questions is one way to engage with AI as a thinking partner.
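For readers who also like to tinker, here is a minimal sketch of how this kind of “thinking partner” prompting could be done programmatically rather than in a chat window, simply to gather question sets that students could then compare as described above. It assumes the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is purely illustrative.

```python
# A minimal sketch (not the exact workflow described above): sending the same
# brainstorming prompt to a model via the OpenAI API so that question sets
# can be collected and compared. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set; the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Given recent weather patterns, I am getting more worried about changes "
    "to our environment, and I am working on an argumentative essay on "
    "climate change. What are some questions that could help get me started "
    "as I think about specific topics to cover in my essay related to sea "
    "level change, heat waves, and forest fires?"
)

def brainstorm_questions(model: str = "gpt-4o-mini") -> str:
    """Ask the model to act as a thinking partner, returning only questions."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a thinking partner for a student writer. "
                    "Respond with open-ended questions, not draft text."
                ),
            },
            {"role": "user", "content": PROMPT},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(brainstorm_questions())
```

Running the same prompt with different models or different system instructions would give students parallel question sets to weigh against one another, much as they might compare ChatGPT’s and Copilot’s questions above.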

Of note, both ChatGPT and Bing provided a similar set of caveats at the end of their output, which are somewhat helpful reminders (if followed by additional instruction and coaching). Here is ChatGPT’s:

“Considering these questions can help you delve into specific aspects of each topic and provide a well-rounded perspective in your argumentative essay on climate change. Remember to back your arguments with credible sources and evidence to strengthen your case.”

ChatGPT Output

On a related note, Paul Allison has been doing a good deal of work to integrate specific GPTs for feedback and scaffolding thinking in NowComment. This is certainly a tool that is worth exploring as we help students engage in substantive dialogue around texts, images, and videos, all supported by scaffolded thinking via GPTs that are customized to specific academic tasks.

AI as Research Assistant

As tools like Perplexity, Bing, and Bard continue to integrate sources into their AI output and address many of the fears about hallucinations and misinformation that have been part of the AI conversation since the fall of 2022, I have begun to wonder what this means for students in their efforts to critically evaluate online sources. In this sense, the AI output itself is one source to evaluate, alongside the additional sources that are referenced in these outputs.

For instance, in a search for “What is climate change?” via Perplexity, it yielded links to six additional sources in the first sentence, with a total of eight different sources for the article. It produced a clear, concise summary and prompted additional questions that the user could click on and explore. By comparison, a Google search of the same question (and, yes, I know that we aren’t supposed to ask Google questions, yet it is clear that many people do), provided a list of sources and a summary panel from the United Nations.

Of note, it is interesting to see that Perplexity’s sources (UN, two from NASA, World Bank, NRDC, NatGeo, Wikipedia, and BBC, in that order) are similar to, though not exactly the same as, Google’s top ten hits (for me, at least): UN, NASA, World Bank, NASA Climate Kids, BBC, NatGeo, US EPA, NASA, Wikipedia, and NRDC, in that order. This, of course, could lead to some great conversations about lateral reading, the tracking of user data across the web and privacy, and the ways in which different tools (traditional search as compared to AI-powered search) function.

Moreover, as we begin to see AI embedded directly in word processing tools, this research process will become even more seamless. And, as described in the section below, we will also want to begin thinking about when, why, and how we ask students to engage with AI as a co-writer, relying on the research it has provided to craft our own arguments.

AI as Co-Writer

Finally, the aspect of AI in English language arts instruction that I think is still causing most of us to question both what we do, as teachers, and why we do it, is the idea that AI will take over anything from a small portion to a large share of our students’ writing process. Beyond the initial fear of rampant, outright cheating and the question of how to catch plagiarists, in conversations with my colleague Pearl Ratunil of Harper College, we are trying to understand more about how AI cuts to the core of who we are as teachers of writing. Teaching writing, in this sense, is deeply emotional work, as we invest time and energy into the success of individual writers, providing them with coaching and feedback. To think, feel, or actually know that students have undermined our efforts at relationship building, let alone outsourced to AI the specific skills we have taught them, is, well, deeply saddening.

Yet, back to the main idea here of AI as co-writer. The tools are here, becoming more and more integrated, and our students will continue to have access to them and use them in their day-to-day writing tasks. I learned about another new-to-me tool the other day, Lex, and it is on my agenda to explore in the weeks ahead. Add it to the list of the many tools I keep exploring, like Rytr, Wordtune, Quillbot, and more. Lex claims that “With Lex’s built-in AI, the first draft process becomes a joy. No more switching back and forth between ChatGPT and Google Docs,” so that will be interesting to see.

More than simply an auto-complete, these tools do have the capability to help students explore genre and tone, adjusting messages to different audiences based on needs for style and clarity. Just as we would want students to be capable writers using other tools that they have available to them — both technical tools like spelling and grammar checks, as well as intellectual tools like mentor texts and sentence templates — we need to help them make wise, informed decisions about when, why, and how AI can help them as writers (and when to rely on their own instincts, word choice, and voice).

As Kristen Turner and I work on the book this year, I will be curious to see how some of these tools perform to help support different, specific writing skills (e.g., developing a claim or adding evidence). My sense so far is that AI can still help produce generic words, phrases, sentences, and paragraphs, and that it will take a skilled writer (and teacher) to help students understand what they need to revise and refine in the process of writing.

Closing Notes

In my “welcome back” email to faculty this week, I shared the following as it relates to academic integrity issues.


Having had conversations about this with a few of you last fall – and knowing that a few of you dealt with cases of potential AI dishonesty – as we begin this semester, it is worth revisiting any policies that you have in your syllabus related to academic honesty and AI. It is no surprise that I am still, generally, an advocate for AI (with some guardrails), as our own students will need to know how to use it in their professional communication, lesson planning, and in teaching their own students to use AI tools. 

In addition to the many resources on the CIS AI website, one that they have listed is from Dr. Christopher Heard of Pepperdine/Seaver College, who used Twine to create an interactive tool for drafting syllabus language, which is then free to use and remix because it is in the public domain. This tool could be a useful start. I would also encourage you to read recent research on the ways that AI plagiarism detection tools are, or are not, doing well at the task, and on the ways that many are biased against our multilingual learners; perhaps for these reasons, the use of AI detection is dwindling, as some universities are simply abandoning the tools altogether. If we do plan to use plagiarism detection tools at all in our classes, then we need to follow best practices in scaffolding the use of such tools and making students aware of our intentions.

Finally, consider the perspective of a student who, in an op-ed for CNN, encouraged teachers in this manner:

“We can be taught how to make effective prompts to elicit helpful feedback, ideas and writing. Imagine the educational benefits students can gain by incorporating AI in the classroom, thoughtfully and strategically.”  

Sidhi Dhanda, September 16, 2023

As we focus more intently this semester on core teaching practices, I will be curious to see where the conversations about the use of AI intersect with our goal to prepare the next generation of teachers.  


Throughout it all — as I keep thinking about AI in the role of Thinking Partner, Research Assistant, and Co-Writer — 2024 promises to be another year dominated by the conversations around AI. In the next few weeks, I have at least three professional development/conference sessions on the topic, and I am sure that we will revisit it during our upcoming MediaEd Institute and summer workshops with the Chippewa River Writing Project, as well as the faculty learning community I am participating in at CMU.

In what ways are you rethinking the teaching of writing in 2024 with the use of generative AI writing tools?


Photo by Aman Upadhyay on Unsplash.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Pivoting the Conversation on AI in Writing

As ChatGPT has heralded the “death of the college essay” and “the end of high school English,” we could be well served to lean into the idea that we need both to rethink our writing assignments and to invite our students to “cheat” on them.

So, I am clearly coming to the conversation on AI a bit late.

As ChatGPT has heralded the “death of the college essay” and “the end of high school English” — and as we see both combative and generative approaches to the role of AI in writing instruction — I might be adding this blog post a bit behind the curve (though I was honored to be interviewed for a story about AI in writing this past week, published in Bridge Michigan).

Of course, I think that this is really the beginning of a much longer conversation that we are going to have about the role of technology and the ways in which we might approach it. So, it is not so much that I am late to the conversation as it is that I am hoping we can move it in a different direction.

Others in academia and beyond are, to be clear, already calling for this pivot, so I am not the first on this count either.

Still, I want to echo it here. Paul Fyfe, Director of the Graduate Certificate in Digital Humanities at NCSU, describes a compelling approach in a recent quote from Inside Higher Ed:

For the past few semesters, I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, most students learn—often to their surprise—as much about the limits of these technologies as their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others grow curious about how to adapt these tools for different goals or about professional or educational domains they could impact. Few believe they can or should push a button

Paul Fyfe, associate professor of English and director of the graduate certificate in digital humanities, North Carolina State University (cited from Inside Higher Ed)

Like Fyfe, I too lean into the idea that we need to both rethink our writing assignments and to invite our students to “cheat” on them. AI can be used for idea generation (and refinement), and it can also be used as a way for us to reconsider genre and style. For instance, I continue to be intrigued by the options offered in Rytr, in particular, as it allows us to choose:

  • Tone, including options such as “compassionate,” “thoughtful,” and “worried.”
  • “Use case” or style, including options such as “blog idea and outline,” “email,” and “call to action.”
  • The option to produce up to three variants, with differing levels of “creativity.”

The screenshot below shows the Rytr interface and the ways that these options can be easily chosen from dropdown menus before a writer enters their keywords and has Rytr use its AI abilities to, well, “ryt” for them.

Unlike the input interface of ChatGPT and other AI writing tools (which, to their credit, allow for natural language input such as “write in the style of” pirates or the King James Bible), the interface for Rytr prompts me to consider a variety of contextual factors.

As a writer and teacher of writing, this set of choices available in Rytr fascinates me.

Screenshot from the AI writing tool, Rytr, showing the input interface with options for "tone," "use case," "variants," and "creativity level."
Screenshot from the input interface of Rytr (January 21, 2023).

Just as the “Framework for Success in Postsecondary Writing” invites students to engage in a variety of “habits of mind” such as “curiosity” and “flexibility,” I think that AI writing tools, too, can give us opportunities to engage our students in productive conversations and activities as they create AI output (and re-create that output through collaborative co-authoring with the AI).

Also, I think that we need to ask some serious questions about the design of our writing assignments.

When the vast majority of writing assignments have, well, already been written about and replied to (see: any essay writing mill, ever), we need to consider what it is that really constitutes a strong writing assignment — as well as the various audiences, positions, time frames, research sources, and alternative genres (Gardner, 2011) — in order to design meaningful tasks for our students that tools like ChatGPT will be, if not unable to answer, at least unable to answer as well as our students could through their knowledge of the content, their ability to integrate meaningful citations, and their writerly creativity.

From there, I am also reminded of NWP’s “Writing Assignment Framework and Overview,” which also suggests that we must design our assignments as one component of instruction, with reflective questions that we must ask (p. 4 in PDF):

What do I want my students to learn from this assignment? For whom are they writing and for what purpose? What do I think the final product should look like? What processes will help the students? How do I teach and communicate with the students about these matters?

National Writing Project’s “Writing Assignment Framework and Overview”

As we consider these questions, we might better be able to plan for the kind of instruction and modeling we may offer our students (likely using AI writing tools in the process) as well as thinking about how they might help define their own audiences, purposes, and genres. With that, we might also consider how traditional writing tasks could be coupled with multimodal components, inviting students to compose across text, image, video, and other media in order to demonstrate competency in a variety of ways.

If we continue to explore these options in our assignment design — and welcome students to work with us to choose elements of their writing tasks — it is likely that they will develop an intentional, deliberate stance toward their own work as writers.

They can, as the Framework implies, “approach learning from an active stance” (p. 4) and “be well positioned to meet the writing challenges in the full spectrum of academic courses and later in their careers” (p. 2). As the oft-mentioned idea in education goes, we need to prepare our students for jobs that have not been invented yet, and AI writing tools are likely to play a part in their work.

All that said, I don’t know that I have answers.

Yet, I hope we continue to ask questions, and will do so again soon. To that end, I welcome you to join me and my colleague Dan Lawson for a workshop on this topic, described in the paragraphs below.


Since its launch in late November of 2022, ChatGPT has brought an already simmering debate about the use of AI in writing to the public’s attention. Now, as school districts and higher education institutions decide on next steps, we, as writing teachers, wonder: how can educators, across grade levels and disciplines, explore the use of AI writing in their classrooms as a tool for idea generation, rhetorical analysis, and, perhaps, “co-authoring”? Moreover, how do we adapt our assignments and instruction to help students bring a critical perspective to their use of AI writing tools?

As I try to explore this a bit more, please join Dan Lawson and me on Thursday, February 2nd from 3:30 to 5:00 p.m. for a hyflex workshop (in person at CMU or online via WebEx) on revising writing assignments to better facilitate authentic learning goals. Please bring an assignment sheet for a current writing assignment. We will use AI writing applications to consider how best to revise those assignments and adapt our instruction for this changing context.

Register here

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Appreciating Writing Assistance Technologies… Finally?

This post originally appeared on the National Writing Project’s “Ahead of the Code” blog on Medium on August 22, 2020.



You would think that, as English teachers, we would have been more appreciative.

Even from the founding of our major professional organization, the National Council of Teachers of English, we have been concerned with (or simply complaining about) the overwhelming amount of writing that we need to grade and provide feedback upon.

As Edwin M. Hopkins, an English professor and one of the founding members of NCTE, asked on the first page of the first issue of English Journal way back in 1912, “Can Good Composition Teaching Be Done under Present Conditions?”

His concise answer: “No.”

Screenshot of Edwin M. Hopkins’ article, “Can Good Composition Teaching Be Done Under Present Conditions?” from 1912.
Screenshot of Hopkins’ article, “Can Good Composition Teaching Be Done Under Present Conditions?” with his response highlighted in yellow.

And, this just about sums it up.

Even then, we knew that the work for English teachers was immense. And, 100+ years later, it remains so. Reading and responding to dozens, if not hundreds, of student compositions on any given week remains a consistent challenge for educators at all levels, from kindergarten through college.

Fast forward from Hopkins’ blunt assessment of how well any one English teacher could actually keep up with the volume of writing he or she must manage, and we land in 1966. It was at this moment that Ellis B. Page proposed in the pages of The Phi Delta Kappan that “We will soon be grading essays by computer, and this development will have astonishing impact on the educational world” (emphasis in original).

There is more history to unpack here, which I hope to do in future blog posts. Yet the mid-century pivot is clear: Page, a former English teacher turned educational psychologist, set the stage for a debate that would still be under discussion fifty years later. People in English education started taking sides in the computer scoring game. And, to be fair, it seems as though this was mission-driven work for Page, as he concluded that “[a]s for the classroom teacher, the computer grading of essays might considerably humanize his [sic] job.”

Tracing My Own History with Automated Essay Scoring

Over the decades, as Wikipedia describes it, “automated essay scoring” has moved in many directions, with both proponents and critics. These are a few angles I hope to explore in my posts this year for the “Ahead of the Code” project. As a middle school language arts educator, I never had the opportunity to use systems for automated feedback in the late 1990s and early 2000s. As a college composition teacher in the mid-2000s, I eschewed plagiarism detection services and scoffed at the grammar checkers built into word processing programs. This carries me to my more recent history, and I want to touch on the two ways in which I have, recently, been critiquing and connecting with automated essay scoring, with hopes that this year’s project will continue to move my thinking in new directions.

With that, there are two stories to tell.

Story 1: It was in early 2013 that I was approached to be part of the committee that ultimately produced NCTE’s “Position Statement on Machine Scoring.” Released on April 20, 2013, and followed by a press release from NCTE itself and an article in Inside Higher Ed, the statement was more of an outright critique than a deep analysis of the research literature. Perhaps we could have done better work. And, to be honest, I am not quite clear on what the additional response to this statement was (as its Google Scholar page here in 2020 shows only four citations). Still, it planted NCTE’s flag in the battle on computer scoring (and, in addition to outright scoring, much of this stemmed from an NCTE constituent group’s major concern about plagiarism detection and retention of student writing).

Still, I know that I felt strongly at the time about our conclusion: “[f]or a fraction of the cost in time and money of building a new generation of machine assessments, we can invest in rigorous assessment and teaching processes that enrich, rather than interrupt, high-quality instruction.” And, in many ways, I still do. My experience with NWP’s Analytic Writing Continuum (and the professional learning that surrounds it), as well as the work that I do with dozens of writers each year (from middle schoolers in a virtual summer camp last July to the undergraduate, masters, and doctoral students I am teaching right now), suggests to me that talking with writers and engaging my colleagues in substantive dialogue about student writing still matters. Computers still cannot replace a thoughtful teacher.

Story 2: It was later in 2013, and I had recently met Heidi Perry through her work with Subtext (now part of Renaissance Learning). This was an annotation tool, and I was curious about it in the context of my research related to Connected Reading. She and I talked a bit here and there over the years. The conversation rekindled in 2016, when Heidi and her team had moved on from Subtext and were founding a new company, Writable. Soon after, I became their academic advisor and wrote a white paper about the power of peer feedback. While Heidi, the Writable team, and I have had robust conversations about whether and how automated feedback and other writing assistance technologies should be built into their product, I ultimately do not make the decisions; I only advise. (For full disclosure: I do earn consulting fees from Writable, though I am not directly employed by the company, and Writable has been a sponsor of NWP-related events.)

One of my main contributions to the early development of Writable was the addition of “comment stems” for peer reviewers. While not automated feedback — in fact, somewhat the opposite of it — the goal of asking students to provide peer review responses with the scaffolded support of sentence stems was that they would, indeed, engage more intently with their classmates’ writing… with a little help. In the early stages of Writable, we actually focused quite intently on self-, peer-, and teacher-review.

To do so, I worked with them to build out comment stems, which still play a major role in the product. As shown in the screenshot below, when a student clicks on a “star rating” to offer his or her peer a rubric score, an additional link appears, offering the responder the opportunity to “Add Comment.” Once they are there, as the Writable help desk article notes, “Students should click on a comment stem (or “No thanks, I’ll write my own”) and complete the comment.” This is where the instructional magic happens.

Instead of simply offering the star rating (the online equivalent of a face-to-face “good job” or “I like it”), the responder needs to elaborate on his or her thoughts about the piece of writing. For instance, in the screenshot below, we see stems that prompt the responder to be more specific, with suggestions for adding comments about, in this case, the writer’s conclusion, such as “You could reflect the content even more clearly if you say something about…” as well as “Your conclusion was insightful because you…” These stems prompt the kind of peer feedback as ethical practice that I have described with my colleagues Derek Miller and Susan Golab.

A screenshot of the “comment stem” interface in Writable. (Image from Writable)
Screenshot of the “comment stems” that appear in Writable’s peer response interface (Image courtesy of Writable)

And, though in the past few years the Writable team has (for market-based reasons) moved in the direction of adding Revision Aid (and other writing assistance technologies), I can’t argue with them. It does make good business sense and — as they have convinced me more and more — writing assistance technologies can help teachers and students. My thoughts on all of this continue to evolve, as my recent podcast interview with the founder of Ecree, Jamey Heit, demonstrates. In short, looking at how I have changed since 2013, I am beginning to think that there is room for these technologies in writing instruction.

Back to the Future of Automated Essay Scoring

So, as I try to capture my thoughts related to writing assistance technologies, here at the beginning of the 2020–21 academic year, I use the oft-cited relationship status from our (least?) favorite social media company: “It’s complicated.”

Do I agree with Hopkins, who believed that teaching English and responding to writing is still unsustainable? Yes, and…

Do I agree with Page, who suggests that automated scoring can be humanizing (for the teacher, and perhaps the student)? Yes, and…

Do I still feel that writing assistance technologies can interrupt instruction and cause a rift in the teacher/student relationship? Yes, and…

Do I think that the peer response stems and the automated revision aid in Writable are both valuable? Yes, and…

Do I think that all of this is problematic? Yes, and…

I am still learning. And, yes, you would think that, as English teachers, we would have been more appreciative of having tools that would alleviate the workload. So, why the resistance? I want to understand more about why, both by exploring the history of writing assistance technologies and by examining what their use looks like, and feels like, for teachers and students.

As part of the work this year, I will be using Writable with my Chippewa River Writing Project colleagues and, later this semester, my own students at Central Michigan University. In that process, I hope to have more substantive answers to these questions, and to push myself to better articulate when, why, and how I will employ writing assistance technologies — and when I will not. Like any writer making an authorial decision, I want to make the best choice possible, given my audience, purpose, and context.

And, in the process, perhaps, I will give up on some of the previous concerns about writing assistance technologies. In doing so, I will learn to be just a little bit more appreciative as I keep moving forward, hoping to remain ahead of the code.


Dr. Troy Hicks is a professor of English and education at Central Michigan University. He directs the Chippewa River Writing Project and, previously, directed the Master of Arts in Learning, Design & Technology program. A former middle school teacher, Dr. Hicks has earned CMU’s Excellence in Teaching Award, is an ISTE Certified Educator, and has authored numerous books, articles, chapters, blog posts, and other resources broadly related to the teaching of literacy in our digital age. Follow him on Twitter: @hickstro

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Resources for ETA NSW

This list of curated resources represents work that I have produced from March to May of 2020, all aimed at helping educators as they transitioned to remote learning during the COVID-19 pandemic.

The full article, “Critical, creative, and compassionate: Resources for teaching English in an era of COVID-19,” appears in mETAphor, the journal of Australia’s English Teachers Association NSW (openly available through their website and as a PDF here).

The links here are presented in the order that they appear in the article, which I will provide a link to (once the issue is published online).

March 2020

April 2020

May 2020

Summer 2020

Books

Updated on June 30, 2020, to include the article link.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Resources and Reflections from the “Online Environments and Your Students: Strategies to Inform Writing Instruction” Webinar

Earlier this afternoon, I was pleased to be on a webinar, “Online Environments and Your Students: Strategies to Inform Writing Instruction” (Archived Video), with Jessie Borgman (Arizona State University) and Casey McArdle (Michigan State University). The session was hosted by Brett Griffiths, Director of the Reading and Writing Studios at Macomb Community College, and we covered a good deal of ground.

For my segment, we discussed tools for conferring and responding to student writers. Building from my experience in writing centers, NWP, K-12 teaching, college composition, and mentoring graduate students, I consider conferring to be the single most important activity in writing instruction. In the context of online learning (and our current “remote learning” scenarios), I am referring to “conferring” as scheduled meetings with students, via phone or video conferencing. This involves planning the conference, interacting during the conference, and follow-up after the conference.

Again, building from my experiences, I contend that timely, specific, and goal-oriented response helps writers move forward. When conferring is not an option, responding in an efficient and effective manner is second best. I work from the writing center-influenced ideas of responding first to higher order concerns, yet I am also willing to break protocol and offer directed feedback on lower order concerns. Responding can take the form of text, image, audio, or video and can happen at any stage of the writing process. Here are links to the tools that I shared:

Updated on May 17, 2020, with a link back to the program page on NCTE’s website and a link to the archived video recording.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Archived Webinar: “Multimodal Composition: Beyond Boring Nonfiction”

Earlier this summer, I was invited to collaborate with colleagues from the National Center for Families Learning, the organization that runs Wonderopolis, to co-lead a webinar with their Developmental Editor, Wendee Mullikin.

We discuss ways in which teachers can use Wonderopolis articles as engaging texts for their readers, pivoting into ways that these “wonders” can then become mentor texts for students as digital writers. To consider more of my thinking on this, please review my post from earlier this year for the Educator Collaborative blog, “From Wonder to Writing: Invite Students Into Inquiry Through Online Articles.”

The Vimeo link is now live – enjoy!

[vimeo 349967861 w=640 h=360]

Wonderopolis: Multimodal Composition–Beyond Boring Nonfiction from NCFL on Vimeo.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Digital Diligence (SIDL 2019 Keynote)

For the fourth consecutive summer, I am honored to present the Thursday morning keynote at the Summer Institute in Digital Literacy. Over the past year, I have become increasingly concerned about dire headlines that move beyond the “kids these days” kinds of arguments we have heard in the past to a deeper, more disconcerting tone that suggests our brains, as well as our culture, are disintegrating. Thus, for my next book project, I am working on a new idea, one that I hope will catch hold amongst educators and parents: digital diligence.

From my work over the years on digital writing and connected reading, and from two decades of teaching, I feel that we need to change the tone of the conversation about educational technology. As we look at 1:1 and BYOD programs, as we consider the hundreds of possible tech tools we could use to scaffold learning and support creativity, why is it that we seem to keep moving back to the most reductive, mundane uses of tech? In our conversations about digital access, usage, and, even “addiction,” are we (educators, parents, medical and mental health professionals, and the media) asking the right questions? Moreover, are we modeling and mentoring tech use for our children and students, or simply managing it?

[googleapps domain=”docs” dir=”presentation/d/e/2PACX-1vR8bDNAFks0JVMOcnxo25ySaIOAczL9pXJvXviCTH0Sun5zUFPqfKMj2sYRCS_JEKl0I5K6sojodbel/embed” query=”start=false&loop=false&delayms=3000″ width=”480″ height=”299″ /]

Thus, today, we will engage in two activities that, I hope, move us toward digital diligence. By this, I define digital diligence as an intentional and alert stance that individuals employ when using technology (apps, websites, software, and devices) for connected reading and digital writing, characterized by empathy, purpose, and persistence. In particular, we will take a digitally diligent stance to better understand how knowledge is created within the Wikipedia community and explore opportunities for civil dialogue using social media.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Recent Post on The EdCollab: From Wonder to Writing

Please enjoy my most recent post, “From Wonder to Writing: Invite Students Into Inquiry Through Online Articles,” on the EdCollab blog.

Our best literacy teachers, especially those of you engaging with the TheEdCollab, have known for a long time that we must provide students with mentor texts in order to help them better understand the genres in which they write, the audiences for whom they write, and the purposes that their writing can serve. We have also known—and continue to make clear for our students—the idea that various text types have specific features to help the writer stay organized and to cue the reader in the process of making meaning. As we consider the possibilities for digital reading and writing, we need to make these moves for writers and clues for readers equally as explicit as we do in print.

Read more


Image from the EdCollab blog

Preparing to “Build a Better Book”

As often happens in my professional life, earlier this year, I was invited to lead a session broadly related to teaching writing and digital literacy, specifically for middle school students. Unlike my previous experiences, however, this particular opportunity came from CMU’s Center for Excellence in STEM Education‘s partnership with the Build a Better Book Project. In short:

The Build a Better Book project, based at the University of Colorado Boulder, works with school and library Makerspaces to engage youth in the design and fabrication of accessible picture books and graphics… Through the Build a Better Book initiative, middle and high school youth develop technology skills and learn about STEM careers as they design and create accessible, multi-modal picture books, graphics and games that can be seen, touched and heard!

So, in this case, I was invited to lead a session on a topic that I had quite a bit of experience with (teaching character development in writing), but needed to think critically and creatively about how to present the idea, taking concerns about accessibility into account. And, as often is the case, I turned to my PLN for help.

The project was originally conceived as the “Tactile Picture Books Project” at the University of Colorado Boulder, and I quickly discovered that another digital literacies scholar, Bridget Dalton, was part of the research team. When I reached out to her, she shared her scholarship about the project and the four core experiences for any tactile book workshop:

  1. “Introduction to the design task and audience”
  2. “[t]actile sensory immersion”
  3. “[t]eams’ making of tactile pages to retell a picture book” (and presentation of that book)
  4. “[r]eflection on the experience.”

In the sense that students will already be immersed in the process, I’m fortunate that my lesson will come on the second day of a multi-day experience, focusing mostly on steps 3 and 4. They will have had some experience understanding the design task and the audience of visually impaired readers, as well as some tactile sensory immersion. When I see them on day two, my goal will be to help them think about ways that authors describe and develop characters in picture books. So, I am working on the retelling, but also the annotating. Taking what I learned from Margaret Price at DMAC earlier in the summer about annotations for accessibility, I will ask students both to write descriptions of the character and to use tactile materials for creating far, mid, and close-up representations.

The challenge, of course, is helping them figure out how to create tactile books – as well as annotations – that accurately and creatively represent those characters.

Thus, I wanted to find a children’s picture book that – both literally through images as well as figuratively through language – “zooms in” on a character. I want them to write/create three different perspectives of the character – long shot, medium shot, and close up – both in writing and with crafting materials.

To that end, I again turned to my PLN to find an appropriate picture book, and Colby Sharp suggested Mother Bruce, by Ryan T. Higgins. His suggestion did not disappoint. Mother Bruce is perfect, with images of Bruce the bear from afar, from nearby, and in extreme close-up. Coupled with a flipped lesson from Aron Meyer on “Using the Zoom-In Strategy to Enhance Narrative Writing,” I will use a series of images from Mother Bruce to then have students think about descriptive words for illustrating characters in terms of shape, size, and proximity.

So, these slides represent my general thinking about how I will approach the lesson. We will look at the generic images, do a read-aloud of Mother Bruce, then look again at the images in the book more carefully, with a lens for both annotating and tactilely illustrating them:

Build a Better Book Lesson - Slide 1
Build a Better Book Lesson – Slide 1 (Images from Mollie Bugg)

Build a Better Book Lesson - Slide 2
Build a Better Book Lesson – Slide 2 (Images from Ryan T. Higgins)

Build a Better Book Lesson - Slide 3
Build a Better Book Lesson – Slide 3 (Resources adapted from Sight Word Games and Interesting Things for ESL Students)

So, the lesson focuses on the words…

  • What would a description of Bruce need to include when we “see” him from a distance? At a mid-range? Close up?
  • How can we use different words to describe shape, size, and proximity?

And the tactile elements…

  • What would his fur or nose feel like from far away? Close up?
  • What about the additional features of his body and face? Eyebrows? Snout?
  • How can we change shapes and texture to help the reader know that the image is a far shot, mid shot, or close up?

My goal will be to have them create the three tactile representations, as well as write the annotations for the tactile books as a way to supplement the readers’ experiences. Though we will probably not have time in my workshop to invite the students to audio record these annotations and connect them with Makey Makeys, that would be one extension that could make the text even more accessible, and is in line with the Build a Better Book pedagogy.

In sum, this is an interesting way to cap off a busy summer of professional learning. When the CMU STEM Ed Center invited me to do this work at the beginning of the summer, I had no idea what I would do. Yet, the challenge was given to me, and I kept thinking about the possibilities with each opportunity that I had to learn throughout the summer. I look forward to seeing how students respond to the lesson and, in turn, what they might do to more completely and complexly represent Bruce through both their annotations and tactile pages.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Ramping Up Revision – ISTE 2018

[googleapps domain=”docs” dir=”presentation/d/e/2PACX-1vSa7gSsWnE1ma98UxSqx-sFdXUwd0D2xdgRa22s6i3rk_WdQ4-D3009cFEBlQJ_QdRU7WXJQe5wG9Mw/embed” query=”start=false&loop=false&delayms=3000″ width=”640″ height=”389″ /]

RESOURCES TO TRY


Photo by Štefan Štefančík on Unsplash

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.