Like or Dislike? Assessment for this Fall

Two self-inking stamps: one stamps a thumbs up with the word Like and the other stamps a thumbs down with the word dislike.

Would it be wrong to grade pass/fail work with these? by tengrrl on Flickr, used under a CC-BY-SA license

I’m focused now on the last big decision for my fall classes: How I will make the assessment system work with all these new assignments. The many collaborative writing tasks and related homework impact the system dramatically. Specifically, I’m trying to determine how (or if) to change my old system now that three of the five documents I count as major projects require collaborative writing.

The old system was pretty straightforward:

  • A: Earn a Complete on all five Major Project Submissions
  • B: Attempt and submit all five of the Major Project Submissions, and earn a Complete on four Major Project Submissions
  • C: Attempt and submit all five of the Major Project Submissions, and earn a Complete on three Major Project Submissions
  • D: Attempt and submit fewer than five of the Major Project Submissions, and earn a Complete on at least two Major Project Submissions
  • F: Attempt and submit fewer than five of the Major Project Submissions, and earn a Complete on fewer than two Major Project Submissions

I’m listing only the major projects here. There are, of course, other requirements. You can see the full chart from the Spring in the Short Guide to the course.

Looking at that arrangement, I’m wondering if I need to break out the five documents into two individual projects and three collaborative projects. And if I do break them out, how do I indicate their worth?

The new course template does require that the Usability unit and the Project Management unit be worth more than the other portions of the course. To indicate that in the assessment chart, I am thinking that I may need to list them explicitly. For instance, perhaps the chart needs to indicate that a student cannot earn higher than a D in the course if they skip the collaborative work.

That thought leads me to wonder how I am going to assess that collaborative work. Will a system based simply on whether the work is complete allow a student to put in the least possible effort on the collaborative work and still earn an A because other group members pick up the slack? And beyond that, I'm not sure a complete project management document means students actually learned anything about project management.

Reading Failing Sideways Queerly

A slightly off-topic aside

My friend Will Banks pointed me to the discussion of narrative assessment in the recent book Failing Sideways: Queer Possibilities for Writing Assessment (2023), which he co-wrote with Stephanie West-Puckett and Nicole I. Caswell. The book arrived yesterday, and I recognized that I didn’t have time to read it through before classes start on the 21st. I found myself skipping around in the book, following index entries and looking up passages I found with Google Books for specific terms. I flipped back and forth, completely ignoring some sections. I really don’t care about the history of the bell curve, for example. I can read that section later (if at all… I seriously don’t care about bell curves).

I did dip into details on queering the writing process, which has never really been the rigid, step-by-step progression that textbooks would lead readers to believe. It suddenly occurred to me that I was reading queerly. I wasn’t following the expected linear progression through the book. I was frantically flipping about, skipping from one topic to the next.

Assessing the Major Projects Sideways?

From flipping around in Will’s book, I landed on these thoughts and possibilities, in no particular order. Funnily enough, I ended up with ten:

  • Now I’m questioning my whole system of marking work as Complete or Incomplete, having realized that it’s imposing the “success/failure binary onto the writing and learning experience” (p. 162). Ugh.
  • Stephanie’s “low-stakes self-assessment activities meant to provide quick and dirty data that could inform instructors about students’ writing experiences and orientations, as well as about ways to better meet students’ needs” (p. 92) gave me a newfound appreciation for the weekly student check-in surveys that I incorporated into my class a year ago. I developed the practice on my own, but apparently I was right on track with my system.
  • When I work on assessment, I’m always stuck between what works for programmatic assessment and what works for individual students in a writing class. I always feel like I’m trying to smash proverbial square pegs into round holes.
  • I also get lost between the curricular expectations for a technical writing program and the actual writing instruction that students need. With a service class like the technical writing course I teach, there are so many outside stakeholders who want the class to do specific things (e.g., teach the memo, prepare students for the workplace, erase all “errors” in their writing). Generally, none of that is what I think makes someone an effective writer who is ready for the workplace.
  • I could use contract-based writing assessment based solely on labor, but given the expectations for the course among other departments and the program itself, doing a lot of work without producing something similar to the five major projects and other materials in the template wouldn’t fly.
  • I want to like labor-based grading, but it raises questions for me.
    • Failing Sideways points to a University of Akron study that “found their students viewed grading contracts as largely irrelevant because these students had come to expect that the amount of work they invested in their courses would automatically be reflected in their course grade (Spidell and Thelin 2006)” (p. 163).
    • I’m also bothered by what feels like an invalid argument that tells students that if they just work hard enough and put in their very best effort, they will be just fine in the course. That stance outright defies the lived experiences of many BIPOC students who know full well that they can work their asses off and still not get the same rewards as their white classmates.
  • Not relevant to assessment per se, but Nikki’s discussions with writing center consultants about the space, the value that it holds, and how it expresses those values (p. 134) included some questions that I believe I can combine with Cecilia Shelton’s (2019) Linguistic Landscape Analysis approach for the analysis of usability projects in the middle unit of my course.
  • The discussion of narrative assessments through learning stories is intriguing, but I question whether I can ask students to write narrative assessments of their learning voyage on top of all of the other things that they have to write for the course.
  • Would looping students through questions about their understanding of the large goals of the course reveal adequate information for some kind of assessment? For instance, what if I asked students questions like these: “What counts as technical writing? What counts as effective technical writing? What does one need to be a good technical writer?” Would their answers from the beginning, middle, and end of the course tell me anything about their changing ideas of the field of technical writing as it relates to them and their careers? Note that I am saying “changing ideas” and not “increasing understanding” there.
  • Finally, my favorite moment in the book was seeing the photo of Stephanie and Will’s 3D model of their research data on p. 196. I immediately flashed back to my description of a CCCC Forum from (gulp) almost thirty years ago: HyperYarn: Threading space (1994).

And after all of that, I have ideas, but I still haven’t figured out what to do with the contract system for the course. At least I have a whole week before I have to have things ready to go.