Designing Breakout Rooms for Maximum Engagement

Despite (or perhaps because of) the pandemic summer, I was provided with numerous opportunities to facilitate online courses and professional development, as well as one chance to work with a digital writing summer camp for middle school students. There were many models for these sessions, from a one-day event where we hosted about 60 participants to a two-week institute in which we worked with nearly 30 educators for just over three hours each morning to a one-week institute with a facilitation team of over a dozen and nearly 150 participants.

For the most part — both because I am a bit of a control freak when it comes to managing webinars and (at least I would like to think) because my main goal for working with other educators is to smooth out the roadblocks and provide space for others to lead — I was in charge of making many, many breakout rooms via Zoom. And, for many participants, being whisked away to (and back from) a Zoom breakout room still feels a bit like techno-magic (mostly, I would assume, because they are usually in the “attendee” role in a meeting, not the “host” role). Throughout the summer I made rooms of 2 or 3 based on Zoom’s random assignment all the way to rooms of 20+ where participants “renamed” themselves with room numbers and chose the session they wanted.

And, with nearly every session, as I collaborated with numerous colleagues this summer, the question was repeated many times: “Troy, how did you do that?”

Preparing to Move to Breakout Rooms: Technical and Teaching Considerations

There are, as with most inquiries about teaching with technology, two answers to that question. The technical answer, once one has had some experience in the “host” role of a Zoom meeting, is relatively straightforward, and Zoom’s help desk article (with video) is actually quite instructive. There are also many, many videos on YouTube that show the logistics of how to set up and control the rooms from an educator’s perspective, including this one from Simpletivity, which shows some of the additional features for hosts as they set up and move from room to room (from about 6:00 to the end). For the technical answers, I would encourage you to look to these resources from others who have answered the questions much more clearly than I could here.

These technical steps, however, are not what I think most teachers are asking about. Instead, they are likely asking about how we prepare for, move to, facilitate, and return from the rooms, setting up a brief instructional arc that relies on collaborative learning and protocols to guide group activity and discussion. Many other talented educators are puzzling through this same set of questions, including the Stanford Teaching Commons, Elizabeth Stone, Catlin Tucker, and Tricia Ebarvia. Also, early in the pandemic, I was directed to Mural’s “Definitive Guide To Facilitating Remote Workshops,” which has some good tips. From all of these educators, a common theme emerges, and I echo it here: before even considering small group work, especially in virtual settings, we need to have clear structures in place, both for the entire class session and for what happens in breakout rooms.

Sometimes these rooms are assigned randomly, especially for low-stakes tasks where I want participants to talk with someone they likely would not choose to work with otherwise. Sometimes these rooms are assigned, strategically, by me, without much input from them at all (and, perhaps, I might even tell them that the assignments were random!). Finally, there are times where I want participants to make a choice and let me know where they would prefer to go, offering them a voice in their learning.
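For the random option, the underlying logic is essentially shuffle-and-chunk. Here is a minimal Python sketch of that idea (purely illustrative — Zoom handles this internally when you choose automatic assignment; `assign_rooms` is a hypothetical helper, not a Zoom feature):

```python
import random

def assign_rooms(participants, room_size):
    """Shuffle participants, then split them into breakout rooms of up to room_size."""
    shuffled = participants[:]          # copy so the original roster is untouched
    random.shuffle(shuffled)
    return [shuffled[i:i + room_size]   # chunk the shuffled list into rooms
            for i in range(0, len(shuffled), room_size)]

names = ["Ana", "Ben", "Cam", "Dee", "Eli", "Fay", "Gus"]
rooms = assign_rooms(names, 3)  # with 7 names and rooms of 3: two rooms of 3, one of 1
```

Note that the last room can end up smaller than the rest; in practice, a host would usually fold a leftover participant into another room rather than leave someone alone.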

Two quick tips for having participants choose rooms (or, suggest where they want to go). First, you can have a shared Google Doc for notetaking and, when it is time for them to choose rooms, insert a table for the number of rooms you plan to assign, and have participants write their name in the preferred cell of the table. Second, if you have enabled the capability in your host settings, participants can “rename” themselves to put a room number or name in front of their own name. In fact, I would make the case that having participants rename themselves with their preferred number makes the assigning of breakout rooms much easier for the host, and this is quickly becoming my preferred method.
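The rename approach works because the host can sort the participant list and cluster people by their leading number. A rough Python sketch of that grouping logic follows (again, purely illustrative — in Zoom the host does this by eye in the breakout room dialog; `group_by_room_number` is a hypothetical helper):

```python
import re
from collections import defaultdict

def group_by_room_number(display_names):
    """Group participants by the room number they placed in front of their name."""
    rooms = defaultdict(list)
    for name in display_names:
        match = re.match(r"\s*(\d+)", name)   # leading digits, e.g. "2 - Maria"
        key = match.group(1) if match else "unassigned"
        rooms[key].append(name)
    return dict(rooms)

participants = ["1 - Troy", "2 - Maria", "1 - Sam", "Pat"]
grouped = group_by_room_number(participants)
# "Pat" did not rename, so they land in the "unassigned" group for the host to place.
```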

With these logistics for moving participants to rooms in mind, we can now consider the tasks in which we want them to engage.

Tasks for Successful Breakout Room Contributions

All of these options require successful teaching strategies to be in place, and two of my go-to resources for finding protocols for getting students to wrestle with ideas include the National School Reform Faculty’s Protocols and Harvard Project Zero’s Visible Thinking Routines. Being familiar with a number of these strategies — and being able to adapt them quickly in virtual settings — is helpful. They can be adapted in many ways, and groups can work in shared GDocs or GSlides, or Padlet walls, or through other collaborative tools, sometimes with some pre-session setup, yet often on-the-fly, depending on student needs.

There are a few considerations that I keep in mind as I prepare to engage learners in breakout sessions. First, please note that my audience of learners typically includes college undergraduates, graduate students, and educators. So, these strategies would need to be adjusted for younger students, especially elementary and middle school students, who may need fewer and more direct instructions, as well as shorter time frames in the breakout rooms.

With all that in mind, here are a few activities that I use when setting up breakout rooms for different kinds of groups and for different durations. I think that they are flexible, and useful for learners at various age levels with appropriate scaffolding. To keep it simple, I separate them into quadrants, though there certainly can be some flexibility and overlap.

Structuring Real Time Activities in Video Conference Sessions (August 2020)

Activities for Any Group, with a Shorter Duration (5-8 Minutes)

To get conversations started, you might try:

Activities for Established Groups, with a Shorter Duration (5-8 minutes)

For groups that have some rapport and community established, you can jump right in with:

Activities for New Groups, with a Shorter Duration (5-8 minutes)

For groups in which you are still trying to build community, you can have them watch a brief video or read a short text, and then engage in:

Activities for Established Groups, with a Longer Duration (10-15 Minutes)

For groups that have worked together and are moving into deeper conversation or inquiry, they can use protocols like:

Activities for New Groups, with a Longer Duration (10-15 Minutes)

And, to continue building community and to engage participants in activities that will help them move into more substantive conversations:

Of course, protocols by their very nature are all designed to be flexible, and could be used for a variety of purposes with both new and established groups, in durations short and long. Still, with the list above, my hope is that these resources are helpful for many educators, especially those working with high school and college students, in real time video chat sessions.

Closing Thoughts

Given the many reasons why it is challenging to simply get us all in the same virtual space at the same time, we need to make the precious minutes that we spend together in these sessions valuable. As Stone notes in her post for Inside Higher Ed,

[O]ur students have made it clear they want to learn, and they want connections with one another and with us as we continue to live through these uncertain and disruptive times. And I’ve found that in classes like mine, Zoom, far from fatiguing, can be both an energizer and a bridge.

Indeed, if we use our time in Zoom (or WebEx, or Google Meet, or Microsoft Teams, or BlueJeans, or BigBlueButton, or even in a face-to-face classroom) to engage students in meaningful dialogue and collaboration, we are in many ways just following the advice of all those who have been promoting active learning strategies for many years.

More than just providing a standard lecture in a convenient, online format, we have opportunities to be more dialogic, collaborative, and engaging this fall than, perhaps, we have ever had before.


Dr. Troy Hicks is a professor of English and education at Central Michigan University. He directs the Chippewa River Writing Project and, previously, the Master of Arts in Learning, Design & Technology program. A former middle school teacher, Dr. Hicks has earned CMU’s Excellence in Teaching Award, is an ISTE Certified Educator, and has authored numerous books, articles, chapters, blog posts, and other resources broadly related to the teaching of literacy in our digital age. Follow him on Twitter: @hickstro

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Appreciating Writing Assistance Technologies… Finally?

This post originally appeared on the National Writing Project’s “Ahead of the Code” blog on Medium on August 22, 2020.



You would think that, as English teachers, we would have been more appreciative.

Even from the founding of our major professional organization, the National Council of Teachers of English, we have been concerned with (or simply complaining about) the overwhelming amount of writing that we need to grade and provide feedback upon.

As Edwin M. Hopkins, an English professor and one of the founding members of NCTE, asked on the first page of the first issue of English Journal way back in 1912, “Can Good Composition Teaching Be Done under Present Conditions?”

His concise answer: “No.”

Screenshot of Hopkins’ article, “Can Good Composition Teaching Be Done under Present Conditions?” (1912), with his response highlighted in yellow.

And, this just about sums it up.

Even then, we knew that the work for English teachers was immense. And, 100+ years later, it remains so. Reading and responding to dozens, if not hundreds, of student compositions in any given week remains a consistent challenge for educators at all levels, from kindergarten through college.

Fast forward from Hopkins’ blunt assessment of how well any one English teacher could actually keep up with the volume of writing he or she must manage, and we land in 1966. It is at this moment when Ellis B. Page proposed in the pages of The Phi Delta Kappan that “We will soon be grading essays by computer, and this development will have astonishing impact on the educational world” (emphasis in original).

There is more history to unpack here, which I hope to do in future blog posts, yet it is clear that Page, a former English teacher turned educational psychologist, set the stage for a debate that would still be under discussion fifty years later. English educators started taking sides in the computer scoring game. And, to be fair, it seems as though this was mission-driven work for Page, as he concluded that “[a]s for the classroom teacher, the computer grading of essays might considerably humanize his [sic] job.”

Tracing My Own History with Automated Essay Scoring

Over the decades, as Wikipedia describes it, “automated essay scoring” has moved in many directions, with both proponents and critics. These are a few angles I hope to explore in my posts this year for the “Ahead of the Code” project. As a middle school language arts educator in the late 1990s and early 2000s, I never had the opportunity to use systems for automated feedback. As a college composition teacher in the mid-2000s, I eschewed plagiarism detection services and scoffed at the grammar checkers built into word processing programs. This carries me to my more recent history, and I want to touch on the two ways in which I have recently been critiquing and connecting with automated essay scoring, with hopes that this year’s project will continue to move my thinking in new directions.

With that, there are two stories to tell.

Story 1: It was in early 2013 that I was approached to be part of the committee that ultimately produced NCTE’s “Position Statement on Machine Scoring.” Released on April 20, 2013, and followed by a press release from NCTE itself and an article in Inside Higher Ed, the statement was more of an outright critique than a deep analysis of the research literature. Perhaps we could have done better work. And, to be honest, I am not quite clear on what broader response the statement received (as its Google Scholar page here in 2020 shows only four citations). Still, it planted NCTE’s flag in the battle over computer scoring (and, in addition to outright scoring, much of this stemmed from an NCTE constituent group’s major concern about plagiarism detection and retention of student writing).

Still, I know that I felt strongly at the time about our conclusion: “[f]or a fraction of the cost in time and money of building a new generation of machine assessments, we can invest in rigorous assessment and teaching processes that enrich, rather than interrupt, high-quality instruction.” And, in many ways, I still do. My experience with NWP’s Analytic Writing Continuum (and the professional learning that surrounds it), as well as the work that I do with dozens of writers each year (from middle schoolers in a virtual summer camp last July to the undergraduate, masters, and doctoral students I am teaching right now), suggests to me that talking with writers and engaging my colleagues in substantive dialogue about student writing still matters. Computers still cannot replace a thoughtful teacher.

Story 2: It was later in 2013, and I had recently met Heidi Perry through her work with Subtext (now part of Renaissance Learning). This was an annotation tool, and I was curious about it in the context of my research related to Connected Reading. She and I talked a bit here and there over the years. The conversation rekindled in 2016, when Heidi and her team had moved on from Subtext and were founding a new company, Writable. Soon after, I became their academic advisor and wrote a white paper about the power of peer feedback. While Heidi, the Writable team, and I have had robust conversations about whether and how automated feedback and other writing assistance technologies should be built into their product, I ultimately do not make the decisions; I only advise. (For full disclosure: I do earn consulting fees from Writable, though I am not directly employed by the company, and Writable has been a sponsor of NWP-related events.)

One of my main contributions to the early development of Writable was the addition of “comment stems” for peer reviewers. While not automated feedback — in fact, somewhat the opposite of it — the goal of asking students to provide peer review responses with the scaffolded support of sentence stems was to have them engage more intently with their classmates’ writing… with a little help. In the early stages of Writable, we actually focused quite intently on self-, peer-, and teacher-review.

To do so, I worked with them to build out comment stems, which still play a major role in the product. As shown in the screenshot below, when a student clicks on a “star rating” to offer his or her peer a rubric score, an additional link appears, offering the responder the opportunity to “Add Comment.” Once there, as the Writable help desk article notes, “Students should click on a comment stem (or “No thanks, I’ll write my own”) and complete the comment.” This is where the instructional magic happens.

Instead of simply offering the star rating (the online equivalent of a face-to-face “good job” or “I like it”), the responder needs to elaborate on his or her thoughts about the piece of writing. For instance, in the screenshot below, we see stems that prompt the responder to be more specific, with suggestions for adding comments about, in this case, the writer’s conclusion, such as “You could reflect the content even more clearly if you say something about…” as well as “Your conclusion was insightful because you…” These stems prompt the kind of peer feedback as ethical practice that I have described with my colleagues Derek Miller and Susan Golab.

Screenshot of the “comment stems” that appear in Writable’s peer response interface. (Image courtesy of Writable)

And, though in the past few years the Writable team has (for market-based reasons) moved in the direction of adding Revision Aid (and other writing assistance technologies), I can’t argue with them. It does make good business sense and — as they have convinced me more and more — writing assistance technologies can help teachers and students. My thoughts on all of this continue to evolve, as my recent podcast interview with the founder of Ecree, Jamey Heit, demonstrates. In short, looking at how I have changed since 2013, I am beginning to think that there is room for these technologies in writing instruction.

Back to the Future of Automated Essay Scoring

So, as I try to capture my thoughts related to writing assistance technologies, here at the beginning of the 2020–21 academic year, I use the oft-cited relationship status from our (least?) favorite social media company: “It’s complicated.”

Do I agree with Hopkins, who believed that teaching English and responding to writing is still unsustainable? Yes, and…

Do I agree with Page, who suggests that automated scoring can be humanizing (for the teacher, and perhaps the student)? Yes, and…

Do I still feel that writing assistance technologies can interrupt instruction and cause a rift in the teacher/student relationship? Yes, and…

Do I think that integrating peer response stems and automated revision aid into Writable are both valuable? Yes, and…

Do I think that all of this is problematic? Yes, and…

I am still learning. And, yes, you would think that, as English teachers, we would have been more appreciative of having tools that would alleviate the workload. So, why the resistance? I want to understand more, both by exploring the history of writing assistance technologies and by examining what using them looks like, and feels like, for teachers and students.

As part of the work this year, I will be using Writable with my Chippewa River Writing Project colleagues and, later this semester, my own students at Central Michigan University. In that process, I hope to have more substantive answers to these questions, and to push myself to better articulate when, why, and how I will employ writing assistance technologies — and when I will not. Like any writer making an authorial decision, I want to make the best choice possible, given my audience, purpose, and context.

And, in the process, perhaps, I will give up on some of the previous concerns about writing assistance technologies. In doing so, I will learn to be just a little bit more appreciative as I keep moving forward, hoping to remain ahead of the code.

