Survey Invitation: Perceptions of AI Writing Tools in the College Composition Classroom

We are seeking input from college composition instructors, writing program administrators, writing center consultants, and others who are interested in the teaching of writing in higher education.

As artificial intelligence (AI) programs are becoming more common and stakeholders in the higher education community become more interested in — as well as concerned about — their use by students, we are inviting you to participate in this survey to share your insights and opinions on the role of AI in writing instruction.

If you are interested, we also welcome you to share your contact information for a potential focus group interview in May or June of 2023.

To read the consent form and begin the survey, please click here or copy and paste the URL into a separate window.

https://cmich.co1.qualtrics.com/jfe/form/SV_2tJMDOyzeiFZXZs

The survey will be open until 11:59 PM Eastern time on March 31, 2023.

Please feel free to share this invitation with others in your professional network. 

Questions? Please reach out:

Dr. Daniel Ernst, Texas Woman’s University, dernst@twu.edu

Dr. Troy Hicks, Central Michigan University, hicks1tw@cmich.edu

Pivoting the Conversation on AI in Writing

So, I am clearly coming to the conversation on AI a bit late.

As ChatGPT has heralded the “death of the college essay” and “the end of high school English” — and as we see both combative and generative approaches to the role of AI in writing instruction — I might be adding this blog post a bit behind the curve (though I was honored to be interviewed for a story about AI in writing this past week, published in Bridge Michigan).

Of course, I think that this is really the beginning of a much longer conversation that we are going to have about the role of technology and the ways in which we might approach it. So, it is not so much that I am late to the conversation as it is that I hope we can move it in a different direction.

Others in academia and beyond are, to be clear, already calling for this pivot, so I am not the first on this count either.

Still, I want to echo it here. Paul Fyfe, Director of the Graduate Certificate in Digital Humanities at NCSU, describes a compelling approach in a recent quote from Inside Higher Ed:

For the past few semesters, I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, most students learn—often to their surprise—as much about the limits of these technologies as their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others grow curious about how to adapt these tools for different goals or about professional or educational domains they could impact. Few believe they can or should push a button…

Paul Fyfe, associate professor of English and director of the graduate certificate in digital humanities, North Carolina State University (cited from Inside Higher Ed)

Like Fyfe, I too lean into the idea that we need to both rethink our writing assignments and invite our students to “cheat” on them. AI can be used for idea generation (and refinement), and it can also be used as a way for us to reconsider genre and style. For instance, I continue to be intrigued by the options offered in Rytr, in particular, as it allows us to choose (see the sketch just after this list):

  • Tone, including options such as “compassionate,” “thoughtful,” and “worried.”
  • “Use case” or style, including options such as “blog idea and outline,” “email,” and “call to action.”
  • The option to produce up to three variants, with differing levels of “creativity.”
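
For readers who like to think of an interface in terms of its inputs, here is a minimal sketch, in Python, of what these dropdown choices might look like if expressed as a structured request. To be clear, the field names and values below are my own illustrative assumptions based on the options listed above, not Rytr’s actual API.

```python
import json

# Hypothetical sketch: Rytr-style dropdown options expressed as request
# parameters. Field names and values are illustrative assumptions drawn
# from the interface described above, not Rytr's actual API.
generation_request = {
    "tone": "thoughtful",                 # e.g., "compassionate" or "worried"
    "use_case": "blog idea and outline",  # e.g., "email" or "call to action"
    "variants": 3,                        # up to three alternative drafts
    "creativity": "medium",               # the "creativity" level setting
    "keywords": "AI writing tools in the composition classroom",
}

print(json.dumps(generation_request, indent=2))
```

Seeing the options laid out this way underscores the point: before any text is generated, the tool asks the writer to make a series of rhetorical decisions.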

The screenshot below shows the Rytr interface and the ways that these options can be easily chosen from dropdown menus before a writer enters their keywords and has Rytr use its AI abilities to, well, “ryt” for them.

Unlike the input interfaces of ChatGPT and other AI writing tools (which, to their credit, allow for natural language input such as “write in the style of” pirates or the King James Bible), the Rytr interface prompts me to consider a variety of contextual factors.

As a writer and teacher of writing, I am fascinated by this set of choices available in Rytr.

Screenshot of the Rytr input interface, showing dropdown options for “tone,” “use case,” “variants,” and “creativity level” (January 21, 2023).

Just as the “Framework for Success in Postsecondary Writing” invites students to engage in a variety of “habits of mind” such as “curiosity” and “flexibility,” I think that AI writing tools, too, can give us opportunities to engage our students in productive conversations and activities as they create AI output (and re-create that output through collaborative co-authoring with the AI).

Also, I think that we need to ask some serious questions about the design of our writing assignments.

When the vast majority of writing assignments have, well, already been written about and responded to (see: any essay-writing mill, ever), we need to consider what really constitutes a strong writing assignment — as well as the various audiences, positions, time frames, research sources, and alternative genres (Gardner, 2011) — in order to design meaningful tasks for our students that tools like ChatGPT will be, if not unable to answer, at least unable to answer as well as our students can through their knowledge of the content, their ability to integrate meaningful citations, and their writerly creativity.

From there, I am also reminded of NWP’s “Writing Assignment Framework and Overview,” which suggests that we design our assignments as one component of instruction, guided by reflective questions (p. 4 in the PDF):

What do I want my students to learn from this assignment? For whom are they writing and for what purpose? What do I think the final product should look like? What processes will help the students? How do I teach and communicate with the students about these matters?

National Writing Project’s “Writing Assignment Framework and Overview”

As we consider these questions, we might be better able to plan for the kind of instruction and modeling we offer our students (likely using AI writing tools in the process), as well as think about how they might help define their own audiences, purposes, and genres. With that, we might also consider how traditional writing tasks could be coupled with multimodal components, inviting students to compose across text, image, video, and other media in order to demonstrate competency in a variety of ways.

If we continue to explore these options in our assignment design — and welcome students to work with us to choose elements of their writing tasks — it is likely that they will develop an intentional, deliberate stance toward their own work as writers.

They can, as the Framework implies, “approach learning from an active stance” (p. 4) and “be well positioned to meet the writing challenges in the full spectrum of academic courses and later in their careers” (p. 2). As the oft-mentioned idea in education goes, we need to prepare our students for jobs that have not been invented yet, and AI writing tools are likely to play a part in their work.

All that said, I don’t know that I have answers.

Yet, I hope we continue to ask questions, and will do so again soon. To that end, I welcome you to join me and my colleague Dan Lawson for a workshop on this topic, described in the paragraphs below.


Since its launch in late November of 2022, ChatGPT has brought an already simmering debate about the use of AI in writing to the public’s attention. Now, as school districts and higher education institutions decide on next steps, we, as writing teachers, wonder: how can educators, across grade levels and disciplines, explore the use of AI writing in their classrooms as a tool for idea generation, rhetorical analysis, and, perhaps, “co-authoring”? Moreover, how do we adapt our assignments and instruction to help students bring a critical perspective to their use of AI writing tools?

As I try to explore this a bit more, please join Dan Lawson and me on Thursday, February 2nd from 3:30 to 5:00 p.m. for a hyflex workshop (in person at CMU or online via WebEx) on revising writing assignments to better facilitate authentic learning goals. Please bring an assignment sheet for a current writing assignment. We will use AI writing applications to consider how best to revise those assignments and adapt our instruction for this changing context.

Register here

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Appreciating Writing Assistance Technologies… Finally?

This post originally appeared on the National Writing Project’s “Ahead of the Code” blog on Medium on August 22, 2020.



You would think that, as English teachers, we would have been more appreciative.

Even from the founding of our major professional organization, the National Council of Teachers of English, we have been concerned with (or simply complaining about) the overwhelming amount of writing that we need to grade and provide feedback upon.

As Edwin M. Hopkins, an English professor and one of the founding members of NCTE, asked on the first page of the first issue of English Journal way back in 1912, “Can Good Composition Teaching Be Done under Present Conditions?”

His concise answer: “No.”

Screenshot of Edwin M. Hopkins’ 1912 article, “Can Good Composition Teaching Be Done under Present Conditions?,” with his response highlighted in yellow.

And, this just about sums it up.

Even then, we knew that the work for English teachers was immense. And, 100+ years later, it remains so. Reading and responding to dozens, if not hundreds, of student compositions in any given week remains a consistent challenge for educators at all levels, from kindergarten through college.

Fast forward from Hopkins’ blunt assessment of how well any one English teacher could actually keep up with the volume of writing he or she must manage, and we land in 1966. It was at this moment that Ellis B. Page proposed in the pages of The Phi Delta Kappan that “We will soon be grading essays by computer, and this development will have astonishing impact on the educational world” (emphasis in original).

There is more history to unpack here, which I hope to do in future blog posts. Yet the mid-century pivot is clear: Page, a former English teacher turned educational psychologist, set the stage for a debate that would still be under discussion fifty years later, and those of us in English studies started taking sides in the computer scoring game. And, to be fair, it seems as though this was mission-driven work for Page, as he concluded that “[a]s for the classroom teacher, the computer grading of essays might considerably humanize his [sic] job.”

Tracing My Own History with Automated Essay Scoring

Over the decades, as Wikipedia describes it, “automated essay scoring” has moved in many directions, with both proponents and critics. These are a few angles I hope to explore in my posts this year for the “Ahead of the Code” project. As a middle school language arts educator in the late 1990s and early 2000s, I never had the opportunity to use systems for automated feedback. As a college composition teacher in the mid-2000s, I eschewed plagiarism detection services and scoffed at the grammar checkers built into word processing programs. This carries me to my more recent history, and I want to touch on the two ways in which I have recently been critiquing and connecting with automated essay scoring, with hopes that this year’s project will continue to move my thinking in new directions.

With that, there are two stories to tell.

Story 1: It was in early 2013 that I was approached to be part of the committee that ultimately produced NCTE’s “Position Statement on Machine Scoring.” Released on April 20, 2013, and followed by a press release from NCTE itself and an article in Inside Higher Ed, the statement was more of an outright critique than a deep analysis of the research literature. Perhaps we could have done better work. And, to be honest, I am not quite clear on what further response the statement received (as its Google Scholar page here in 2020 shows only four citations). Still, it planted NCTE’s flag in the battle over computer scoring (and, in addition to outright scoring, much of this stemmed from an NCTE constituent group’s major concern about plagiarism detection and the retention of student writing).

Still, I know that I felt strongly at the time about our conclusion: “[f]or a fraction of the cost in time and money of building a new generation of machine assessments, we can invest in rigorous assessment and teaching processes that enrich, rather than interrupt, high-quality instruction.” And, in many ways, I still do. My experience with NWP’s Analytic Writing Continuum (and the professional learning that surrounds it), as well as the work that I do with dozens of writers each year (from middle schoolers in a virtual summer camp last July to the undergraduate, masters, and doctoral students I am teaching right now), suggests to me that talking with writers and engaging my colleagues in substantive dialogue about student writing still matters. Computers still cannot replace a thoughtful teacher.

Story 2: It was later in 2013, and I had recently met Heidi Perry through her work with Subtext (now part of Renaissance Learning). This was an annotation tool, and I was curious about it in the context of my research related to Connected Reading. She and I talked a bit here and there over the years. The conversation rekindled in 2016, when Heidi and her team had moved on from Subtext and were founding a new company, Writable. Soon after, I became their academic advisor and wrote a white paper about the power of peer feedback. While Heidi, the Writable team, and I have had robust conversations about whether and how automated feedback and other writing assistance technologies should be built into their product, I ultimately do not make the decisions; I only advise. (For full disclosure: I do earn consulting fees from Writable, though I am not directly employed by the company, and Writable has been a sponsor of NWP-related events.)

One of my main contributions to the early development of Writable was the addition of “comment stems” for peer reviewers. While not automated feedback — in fact, somewhat the opposite of it — the goal of asking students to provide peer review responses with the scaffolded support of sentence stems was that they would, indeed, engage more intently with their classmates’ writing… with a little help. In the early stages of Writable, we actually focused quite intently on self-, peer-, and teacher-review.

To do so, I worked with them to build out comment stems, which still play a major role in the product. As shown in the screenshot below, when a student clicks on a “star rating” to offer his or her peer a rubric score, an additional link appears, offering the responder the opportunity to “Add Comment.” Once there, as the Writable help desk article notes, “Students should click on a comment stem (or “No thanks, I’ll write my own”) and complete the comment.” This is where the instructional magic happens.

Instead of simply offering the star rating (the online equivalent of a face-to-face “good job” or “I like it”), the responder needs to elaborate on his or her thoughts about the piece of writing. For instance, in the screenshot below, we see stems that prompt the responder to be more specific, with suggestions for adding comments about, in this case, the writer’s conclusion, such as “You could reflect the content even more clearly if you say something about…” as well as “Your conclusion was insightful because you…” These stems prompt the kind of peer feedback as ethical practice that I have described with my colleagues Derek Miller and Susan Golab.

Screenshot of the “comment stems” that appear in Writable’s peer response interface (image courtesy of Writable).
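
For those curious about how such scaffolds might be organized under the hood, here is a minimal sketch in Python, assuming a simple mapping from rubric trait to sentence starters. The structure and the helper function are my own illustration, not Writable’s actual implementation; the example stems echo those shown in the screenshot above.

```python
# Hypothetical sketch: comment stems modeled as a mapping from a rubric
# trait to the sentence starters a peer reviewer can choose from. The
# example stems echo those in the screenshot above; the structure itself
# is an assumption, not Writable's actual implementation.
comment_stems = {
    "conclusion": [
        "You could reflect the content even more clearly if you say something about…",
        "Your conclusion was insightful because you…",
    ],
}

def stems_for(trait):
    """Return the suggested sentence starters for a given rubric trait."""
    return comment_stems.get(trait, [])

# A reviewer either completes one of these stems or writes a comment of
# their own ("No thanks, I'll write my own").
for stem in stems_for("conclusion"):
    print(stem)
```

The design choice matters more than the data structure: by tying each stem to a rubric trait, the scaffold nudges the responder toward specific, criterion-based feedback rather than a generic compliment.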

And, though in the past few years the Writable team has (for market-based reasons) moved in the direction of adding Revision Aid (and other writing assistance technologies), I can’t argue with them. It does make good business sense and — as they have convinced me more and more — writing assistance technologies can help teachers and students. My thoughts on all of this continue to evolve, as my recent podcast interview with the founder of Ecree, Jamey Heit, demonstrates. In short, looking at how I have changed since 2013, I am beginning to think that there is room for these technologies in writing instruction.

Back to the Future of Automated Essay Scoring

So, as I try to capture my thoughts related to writing assistance technologies, here at the beginning of the 2020–21 academic year, I use the oft-cited relationship status from our (least?) favorite social media company: “It’s complicated.”

Do I agree with Hopkins, who believed that teaching English and responding to writing is still unsustainable? Yes, and…

Do I agree with Page, who suggests that automated scoring can be humanizing (for the teacher, and perhaps the student)? Yes, and…

Do I still feel that writing assistance technologies can interrupt instruction and cause a rift in the teacher/student relationship? Yes, and…

Do I think that the peer response stems and the automated revision aids integrated into Writable are both valuable? Yes, and…

Do I think that all of this is problematic? Yes, and…

I am still learning. And, yes, you would think that, as English teachers, we would have been more appreciative of tools that would alleviate the workload. So, why the resistance? I want to understand more, both by exploring the history of writing assistance technologies and by examining what their use looks like, and feels like, for teachers and students.

As part of the work this year, I will be using Writable with my Chippewa River Writing Project colleagues and, later this semester, my own students at Central Michigan University. In that process, I hope to have more substantive answers to these questions, and to push myself to better articulate when, why, and how I will employ writing assistance technologies — and when I will not. Like any writer making an authorial decision, I want to make the best choice possible, given my audience, purpose, and context.

And, in the process, perhaps, I will give up on some of my previous concerns about writing assistance technologies. In doing so, I will learn to be just a little bit more appreciative as I keep moving forward, hoping to remain ahead of the code.


Dr. Troy Hicks is a professor of English and education at Central Michigan University. He directs the Chippewa River Writing Project and previously directed the Master of Arts in Learning, Design & Technology program. A former middle school teacher, Dr. Hicks has earned CMU’s Excellence in Teaching Award, is an ISTE Certified Educator, and has authored numerous books, articles, chapters, blog posts, and other resources broadly related to the teaching of literacy in our digital age. Follow him on Twitter: @hickstro

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.