Peer Review in Public

This afternoon, partially as a way to procrastinate from my own writing and partially because I was genuinely interested in the invitation, I participated in an “open review” of Remi Kalir and Antero Garcia’s forthcoming manuscript, Annotation. Their open review process will continue through August 23, 2019, so jump in! They request that commentary adhere to the following, all good advice for any scholarly dialogue:

Civil. We can disagree. And when we do so, let’s also respect one another.

Constructive. Share what you know. And build upon ideas that are relevant and informative.

Curious. Ask honest questions and listen openly to responses.

Creative. Model generative dialogue. Have fun. Contribute to and learn from the process.

Having read hundreds of academic articles over the past 20 years, offered blind peer review for dozens of them, and provided blind reviews of probably two dozen academic books, I thought that this would be interesting. (And, again, I was procrastinating on my own writing, so an engaging intellectual task that can carry me away and still feel like I am getting work done is always welcome.) Here are a few things I learned while reviewing their book, which, again, you can contribute to through August 23rd.

My Stance as a Reviewer

When I offer peer review of academic articles and books, I typically use the “track changes” and commentary features in Word or, in some instances, offer comments and edits on a PDF (my favorite tool for doing that is the iOS app Good Reader). I frame these comments as direct suggestions to the author(s) of the article/manuscript I am reading, and I adopt a professional yet conversational tone.

With my review of Annotation today, I think that I maintained some of that approach, yet I knew that my comments would be captured, in perpetuity, in Kalir and Garcia’s public version of the document. While I didn’t hold back with questions and concerns, I did realize that I changed my tone. Whereas in blind review I would try to be explicitly clear in comments and questions (perhaps even providing examples of what I was aiming for when the writing was unclear), I didn’t want that to be part of the public record.

For instance, in the example below, I offered a comment that could spark further dialogue amongst others reading the text, pushing toward some broader implications for teaching and learning. At other points, I was replying to the comments already made by others, and I would specifically say something like “I agree” or “Along these lines.” Also, at points, I directly wrote to Kalir and Garcia in ways that I could do so with colleagues I know, and would be comfortable saying in front of a group of others.

Screenshot of a specific comment, illustrating my commenting style in an open setting

Yet, still, it felt strange. In the first few chapters, there were some other annotators, yet they fell away. Even those who remained were offering suggestions for links, not the generative kind of peer review that (I hope) I have always aimed to offer. For instance, I would describe problems and ask questions like:

  • I may simply not be reading this right, but the comparison of submitting an expense report in relation to the openly annotated future just didn’t ring true for me here. Sorry, but perhaps you could find a different example?
  • This is an interesting example, but I don’t know that it fully draws out all the ideas that you mentioned above related to “shifting social norms, changing financial and organizational incentives, and evolving scholarly practices.” Perhaps you could reorganize around — and particularly elaborate upon — these three ideas in relation to SciBot?
  • This is an important, if technical, point, and deserves some elaboration. Why is it important that some are built into the browser, whereas others stand alone? And, for that matter, why have you not mentioned OneNote, Evernote, Google Keep, or SimpleNote anywhere in the text, especially here before you launch into the important questions you pose below?

By the end of the process — which took me just as long as any other book review — I began to wonder/wander, leading me in other directions.

Reflecting While Reviewing

Of course, during a normal review, the kinds of internal dialogue that I have with myself may make it into the first draft of my comments, but I usually do some editing before a final draft heads off to the editor. Here, I figured that Kalir and Garcia’s invitation to be civil, constructive, curious, and creative would welcome some of these thoughts.

As I went through the process and saw fewer and fewer reviewers in subsequent chapters, I got discouraged. While this is no fault of the authors, and I know that they have extensively shared their open manuscript, welcoming reviews, it does make me worry a bit about the hive mind, and whether collaboration and collective intelligence are, perhaps, not as powerful as we might hope. A few of my musings, especially as they relate to why scholars may choose not to participate in an open review:

  • This [vision of social annotation and scholarship] is aspirational, and I appreciate it. Yet, I think that you can elaborate more on what actual changes would need to happen to make it a reality. Be specific, and talk about faculty workloads, department/college T&P requirements, and the ways in which “open” is still perceived as subpar.
  • And, yet, there still seems to be reluctance, or at least lack of widespread acceptance [of open review]. For instance, in your attempts to make this manuscript open and accessible (which I applaud), I am still wondering how many total scholars will participate. Even for those of us who saw the invitation to begin with, a gentle nudge was in order for us to participate. And, in the end, I don’t know that my review of this manuscript will “count” on par with doing a review for an established journal or publisher when (and if) I include it in my promotion materials. Of course, for me at least, this doesn’t matter as much as it would to a junior faculty member who needs to decide whether to spend a few hours trying to write her own work, or to participate in a “normal” editorial review board/process as a blind reviewer for an established press/journal. Both of those actions are rewarded in the academy. As much as I respect Remi and Antero (and that’s why I am doing this annotated review), the simple fact of the matter is that I am doing this because I care, not because it will “count.” These are part of the material reality of academe, and I don’t know how we will change that, even with open annotation and peer review. At the end, there is only so much time in the day…
  • So, I have held off until now, but I have to ask… and only partially in a cynical manner… Like the tree falling in the forest, does an annotation really make a sound (ripple, impact, effect, etc.)? That is, I appreciate your utopian vision, yet I wonder if you might want to rein it in a bit here. Sorry… not trying to pop the bubble, especially after nearly two hours of reviewing and annotating your manuscript, but I am just being realistic. The first few chapters had a few annotators. Now, here at the end, it is just me. And you two, as the authors. Are we really connected to a “robust information infrastructure”? Or, are the three of us walking alone in the woods?

In the end, I appreciate the opportunity to do this review, and to pause here to reflect on the process. I struggle both with how to structure class discussions in digital spaces and with how to be a social scholar, so reading Kalir and Garcia’s manuscript served many more purposes for me than merely procrastinating on my writing. I am hopeful that the ideas I have offered to them (and those who might continue to annotate over the next month) are helpful. And, of course, I will continue to think about practices of annotation in my own scholarship and teaching.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Prepping for KQED Summer Bootcamp on “Developing and Assessing Digital Writers”

Camp KQED Teach 2018 Logo

This week, KQED’s Bootcamp on “Developing & Assessing Digital Writers” kicks off, with the overarching idea that

[b]logging can help develop your students’ digital writing by combining traditional writing (text) with a digital platform (sharing online), along with the opportunity to incorporate other forms of media-making.

My role for the bootcamp will be to provide a brief, asynchronous presentation called “Rethink the Link.” And, in working with KQED’s Jordan Stewart-Rozema to prepare my session, I’ve been (re)thinking (over) a number of ideas.

In short, I want to help teachers consider when, why, and how we invite students to create hyperlinks in their digital writing, in addition to considering the typical questions of where, what, or to whom they will be linking.

To that end, I’ve been gathering up a few resources, beginning with Vannevar Bush’s essay “As We May Think” and his original conception of the memex as

a future device … in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

We will return to Tiffany and Bud Hunt’s essay from 20 years ago in English Journal, “New Voices: Linkin’ (B)Logs: A New Literacy of Hyperlinks,” and explore Merriam-Webster’s definition of “link.”

From the perspective of “link” as a verb, we will think about what a writer does by including a link, considering the kinds of reaction(s) she might want from her readers. As a noun, we will consider how the connection to other ideas serves the writer by invoking the broader academic conversation.

If you are interested in thinking about linking — and blogging more broadly — then there is still time to sign up. See you in the KQED Bootcamp community!



#MichEd Chat – 4-11-18 at 8:00 PM EDT

PROFESSIONAL LEARNING NETWORKS

#MICHED CHAT 4/11/18

Wednesday, April 11th, 8-9pm EDT

The idea of a professional learning network has existed for quite some time, built on some of the foundational work related to “situated learning” and “communities of practice” developed by Jean Lave and Etienne Wenger in the 1990s.

With the emergence of Web 2.0, Stephen Downes described “learning networks in practice” in a 2007 paper, arguing that “The idea behind the personal learning environment is that the management of learning migrates from the institution to the learner.”

Combined with the 2006 emergence of Twitter, a new idea had taken form, and educators began using hashtags to start a variety of ed chats, including our own #MichEd, which was inaugurated on November 7, 2012.

Chat Questions

This week, we reflect on our own experiences being a part of the #MichEd network and, more broadly, what it means for each of us to develop our own PLN. We will be joined by students from CMU’s Doctorate in Educational Technology, and the chat will be hosted by Troy Hicks. During the chat we will consider:

  1. What motivates you, personally, to create and maintain a PLN?
  2. How do PLNs change with time, for you personally and across the network? Think about #michED and who was there at the start, who has joined, who has left (or is less active) and WHY?
  3. How do we keep our networks diverse in thought? We don’t want them to be echo chambers for our ideas, but to be constructive spaces for dialogue. How can we achieve that goal?
  4. Besides sharing great resources, what can a PLN teach us about how to be an educator? How does participating in a PLN become part of your professional persona?
  5. OK, let’s get specific. What, exactly, can we learn from PLNs? Along with soft skills of collaboration and sharing resources, what other digital or pedagogical skills can we learn?
  6. Finally, what’s next for PLNs? How can we nurture and sustain them? How can we invite new voices? What should a group of doctoral students studying educational technology be thinking about?

https://www.smore.com/kngch

Reflections on Participating in KQED’s “Finding and Evaluating Information”

Photo by Markus Spiske on Unsplash

Over the holiday break, I participated in an open course for educators, “Finding & Evaluating Information,” sponsored by KQED. Though the course ended last week, many of the materials are still available online, including this GDoc that contains a list of “greatest hits” (and resources) from all the participants.

Alongside the many lessons posted by other participants, I created my own, Ethical Photo Editing (Personal, Professional, and Journalistic), which is designed to help students understand the decisions they would need to make when representing images through digital media, depending on the context. Also, one of the participants pointed me to an article from Poynter, “Three ways to spot if an image has been manipulated,” which I found quite useful.

Another one of the activities, adapted from the New York Times Learning Network’s “Media Literacy Student Challenge | Explore Your Relationship With News,” asks you to

Do a personal 24- to 48-hour news audit in which you record all the news you get now, where it comes from, and how well it meets your needs and interests.

This short course reminded me of the power of experiential, inquiry-based learning. As I am redesigning a media literacy course for teacher candidates, I am thinking that one of these types of brief activities each week could be incredibly useful, so I will return to them again in the future.



Creating MINDFUL Readers and Writers

MINDFUL Graphic
Image Courtesy of Heinemann

Based on the book that I wrote with Kristen Hawley Turner, Argument in the Real World, one of the tools/strategies that I have been sharing in workshops this past year is the “MINDFUL” heuristic for readers and writers as they engage in academic arguments with, through, and about social media.

When we were wrapping up the book in early 2016, even before “fake news” and “alternative facts” became phenomena, Kristen and I designed this heuristic to fill in the gaps that we felt existing website evaluation checklists were missing.

In short, those checklists and other tools were created in the early days of the web when we – as educators and information consumers – generally placed the onus of responsibility on the creator for being accurate. This, of course, was a holdover from our view of the printed word having gone through extensive review and editing in order to be published. The power of books, periodicals, encyclopedias and similar sources came from the fact that they were curated by experts.

Yet, with the abundance of material emerging on the information superhighway, educators, especially librarians, knew that careful editing and peer review weren’t happening all the time. We needed to create a way for students to understand that some creators were thoughtful and accurate, while others were misleading or creating an outright hoax. So, we took those creators to task by engaging with such checklists as readers, bringing a critical eye to what we were reading/viewing. We also encouraged students never to trust a blog, or Wikipedia, or other sources that were not well-vetted. (Of course, we have since changed our tune. A bit.)

At any rate, website evaluation checklists worked okay, for a while at least.

However, this was before the vast majority of us became content creators in the Web 2.0 era. Blogs, wikis, and other forms of media were being created at a constant pace and, unfortunately, with widely varying audiences, purposes, and degrees of veracity.

More recently, through social media, we have all become creators, curators, and circulators. Our roles as writers have changed. The reader, too – as someone with agency and perspective in the online reading and writing process – needed to take responsibility for the types of arguments being created and perpetuated.

What Kristen and I wanted to do, then, was to rethink this instructional strategy of website evaluation. We came from the stance of helping students – as both readers and writers of social media – to recognize that (borrowing from Lunsford, Ruszkiewicz, and Walters’ book title) everything is, indeed, an argument.

Retweets and likes are, despite the disclaimers, endorsements. And, by extension, arguments. The way that we see evidence presented in social media matters because it will inform our own stance, as well as the perspectives of others with whom we engage. We create arguments through the act of liking, retweeting, reblogging, or otherwise endorsing, let alone when we create our own updates, tweets, or blog posts.

Rethinking the traditional website evaluation tool meant that we needed to consider the challenges that new media, new epistemologies, and new perspectives all bring. In other words, it was no longer enough to simply read the “about” page, do a WHOIS lookup, or even try to understand more about the language/discourse being used on the page/post.

We needed something different. Hence, MINDFUL.

We wanted to help teachers, in turn, help their students slow down just a bit – even a nanosecond before retweeting, or a few moments when crafting an entire post – and to think about how arguments in digital spaces are constructed, circulated, and perpetuated.

I think that MINDFUL is helpful in doing just that. Below, you will find slides that I have been using over the past few months as well as links to additional resources I discuss in the presentation.

Additional Resources

  • Argument in the Real World Wiki
  • Our post on the Heinemann blog:  Seriously? Seriously. The Importance of Teaching Reading and Writing in Social Media
  • For the MINDFUL elements
    • Monitoring our own reading and writing means that we must be aware of and account for Confirmation Bias. Of course, helping students (and ourselves) to do that requires a number of strategies, which are outlined in the rest of the heuristic.
    • Identifying the claim means that we must separate the opinions that someone offers from the facts that may (or may not) support the claim. A refresher on Fact vs Opinion from Cub Reporters is a useful place to begin, even for adults.
    • Noting the type of evidence and how it supports the claim is useful. As a way to think through different types of evidence – and the claims they can support – it is worth taking a look at the Mathematica Policy Research report “Understanding Types of Evidence: A Guide for Educators.”
    • Determining the framework/mindset is perhaps one of the most difficult elements for anyone, especially children and teenagers, to fully understand and accomplish. Without taking a full course of study in critical discourse analysis, a few resources that are helpful include Sam Wineburg’s (of the Stanford History Education Group) idea of “reading laterally,” explained here by Michael Caulfield. Also, using sites like AllSides, Opposing Viewpoints in Context, and Room for Debate can help. Finally, there is the Media Bias Fact Check plugin for Chrome and Firefox (which, of course, has some bias, and questionable authorship). But, it’s a start.
    • Focusing on the facts requires us to check and double check in the ways that researchers and journalists would. Despite claims to the contrary from those on the fringes, sites like Snopes, PolitiFact, and FactCheck are generally considered to be neutral and to present evidence in an objective manner. Also, there are lots of objective datasets and reports from Pew Research.
    • Understanding the counterargument is more than just seeing someone else’s perspective and empathizing/disagreeing. We need to help students understand that arguments may not even be constructed on the same conception of information/evidence; in fact, some of it could be one of the 7 Types of Mis- and Disinformation from First Draft News.
    • Finally, leveraging one’s own response is critical. Understanding the way that fake news and other propaganda is constructed and circulated will help us make sure that we do not fall into the same traps as writers. WNYC’s On the Media provides a Breaking News Consumer’s Handbook for Fake News that is, of course, helpful for us as readers and viewers, but could also be a guide for what not to do as a writer.

My hope is that these websites/resources are helpful for teachers and students as they continue to be mindful readers and writers of social media.



Marginal Syllabus Conversation – February 22, 2017 at 6:00 PM EST

Image by Hans from Pixabay

Tomorrow, Wednesday, February 22, 2017 at 6:00 PM EST, join my colleague and co-author, Dawn Reed, and me as we participate in an “Annotation Flash Mob” on the preface for our book, Research Writing Rewired. We’ve been invited to participate in this opportunity through Dawn’s collaborations with the Marginal Syllabus Project.

The Marginal Syllabus team is part of the larger Hypothes.is Syllabi Project, which “leverages web annotation to collect primary source documents by theme and organize communal conversation of those documents.”

Here is a bit more from the Marginal Syllabus’s “About” page:

The Marginal Syllabus seeks to advance educator professional development about education in/equity through the use of participatory learning technologies. We are a dynamic, multi-stakeholder collaboration among:

Hypothesis, a non-profit organization building an open platform for discussion on the web

Aurora Public Schools in Aurora, CO, and in particular educators and administrators associated with the LEADing Techquity research-practice partnership

Researchers and teacher educators from the University of Colorado Denver School of Education and Human Development in Denver, CO

While this group will work together for one hour tomorrow night, I am looking forward to seeing how the conversations Dawn and I had while writing will come alive with the Hypothes.is annotations of other educators.

All educators are welcome to participate, and we recommend that you sign up for Hypothes.is ahead of time, and install the Google Chrome browser extension.
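For the technically curious: beyond the browser extension, Hypothes.is also makes annotations retrievable programmatically through a public search API. The sketch below is a minimal illustration in Python; the endpoint noted in the comments and the field names (`rows`, `user`, `text`, and so on) reflect my understanding of the public API, and the sample payload is invented for illustration, so verify the shape against the official Hypothes.is API documentation before relying on it.

```python
import json

# Hypothes.is exposes a public, read-only search endpoint (as I
# understand the API): https://api.hypothes.is/api/search?uri=<page-url>
# The payload below is a trimmed, invented example of the JSON shape
# that such a search might return for an annotated page.
sample_response = json.dumps({
    "total": 2,
    "rows": [
        {
            "user": "acct:reviewer1@hypothes.is",
            "uri": "https://example.com/manuscript",
            "text": "Consider a different example here.",
            "tags": ["peer-review"],
        },
        {
            "user": "acct:reviewer2@hypothes.is",
            "uri": "https://example.com/manuscript",
            "text": "I agree -- this section needs elaboration.",
            "tags": [],
        },
    ],
})


def summarize_annotations(payload: str) -> list:
    """Return one 'user: text' line per annotation in a search response."""
    data = json.loads(payload)
    return [f"{row['user']}: {row['text']}" for row in data["rows"]]


for line in summarize_annotations(sample_response):
    print(line)
```

In practice, one would fetch the JSON for a page’s URL over HTTP rather than hard-coding it; the parsing step stays the same, which is what makes conversations like the one above visible outside the margins of a single document.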

From their blog, it also seems that the conversations might keep going, and I am interested in seeing how that unfolds over the days and weeks to come.



Analyzing Our Own Social Scholarship Profiles

During our workgroup meeting this morning, Maria Ranieri asked us to engage in an analysis of our own social profile(s), and to reflect on our decision to engage in social scholarship.

For me, the choice to engage in social media began over a decade ago, while I was still in graduate school at MSU. The first entry on my blog was in 2006, at the NWP-sponsored Tech Matters advanced institute, and my first tweet was in May 2007 (also at an NWP-related event). In a sense, the growth of social scholarship in the past decade has mirrored my own journey. I have always lived in a world that leaned toward open access, collaboration, and public engagement, and I have grown my network exponentially over my past 10 years at CMU.

DuckDuckGo Screenshot of “Troy Hicks” Search

Today, it was interesting for me to “Google” myself. I actually started with DuckDuckGo in order to get a (relatively) objective look at what “Troy Hicks” yields. Here is what I found, with my annotations. Interestingly enough, I am not in the “top 10” of Facebook profiles for “Troy Hicks,” and I actually think that is a good thing. I did click on the LinkedIn search, too, and I showed up second, FWIW.

Then, I did hop over to Google. Here is what the autocomplete function showed with just “troy hicks” and then with “troy hicks d” (because I wanted to see what would happen with digital writing).

"Troy Hicks" on Google Search with Autocomplete
“Troy Hicks” on Google Search with Autocomplete
"Troy Hicks d" on Google Search with Autocomplete
“Troy Hicks d” on Google Search with Autocomplete

Interestingly, the “brookings sd” result is for a man, Troy Doyle Hicks, 52, of Brookings, SD, who died last November. As soon as the “d” was added after my name, however, the connections to “digital writing” as well as my books showed up. Not sure that I need to buy another domain name right now, but that was an option, too.

Maria concluded by having us ask one another about affordances and opportunities as well as constraints and challenges. There were many, many points made, but I will focus on one: my profile on Rate My Professor. I hadn’t been on the site in years (I had only seen the 2008 post) and was interested to read the 2015 post about my ENG 514 class. I can reflect more on my experience of teaching that class, how I established timelines/provided feedback, and what I have changed since, but that is for another post.

The other point I want to make now was captured best by Jillian Belanger in a tweet:

Tweet from Jillian Belanger

Onward! Looking forward to my next steps as a social scholar.

