Hallway conversations about teaching and learning

Tag: assessment

A short round-up on ungrading

I am having lots of trouble keeping up with various commitments. Being here is one of them! But when I think of this place as a conversation rather than a set of settled statements, it helps. So, this is like the quick text I might send to a friend just to keep things going, even when I wish I had time to do more.

Rafael brought up ungrading.

There’s an entire issue of Zeal devoted to ungrading, with contributions from Jesse Stommel and Robert Talbert, whom I follow, and several voices who are new to me. Talbert also recently published a “stop/start/continue” for the ungrading community.

One of Stommel’s simplest suggestions — and yet one I’m ashamed to have often skipped — is to have explicit conversations with students about grades and grading practices. I have a couple of questions about this on my latest feedback questionnaire with my students. I linked to Stommel’s piece and to Kohn’s “The Case Against Grades” in the context for these questions, so students are invited to read those but not required to.

Here’s a (long) question and some of the responses so far:

Let's look specifically at Lab Practicals and the points/grades that go with them. Here's the 2nd lab practical from the fall semester — please review it briefly to remind yourself how points worked.
My goals were for everyone to accomplish enough on the lab practical to demonstrate they'd learned the relevant ideas. Putting points on different aspects is a way for me to signal to you what is important. It's not that I actually care about the grade. In fact, almost everyone passes the 100% line.
Would it be possible to describe what is important, lay out requirements, or some other idea that does not involve points, and achieve the same outcomes? Or are the points an essential motivation for making this work? 

- I am very motivated by points because I pride myself in getting good grades
- I disagree, With my major I am very busy. If I saw something with no points to it I would add it to the last thing to do in my pile of work.
- honestly not sure
- I find the points to be motivating
- I like the points for motivation, I felt like I would look at the material more and review it more before the lab practical. I felt that this made me feel more comfortable with the material more. I wish we did more of these lab practical's. for example, if we did 4, all 4 would equal the 2 we did in points.
- the points are pretty important

I have lots of thoughts about why these (few) responses do not seem to support an ungrading approach. But what do you all think?

(Side note: If you’d like to click through to the Lab Practical assignment, you’ll see that it is very, well, “alternative graded,” I guess? There are more points available than needed for the assignment. So I’m getting some of these ideas in even if it’s not via true ungrading.)

So, what do you think about these students’ thoughts on points? What would your own students say?

Ungrading on my mind

When I was introduced to the practice of ungrading, I was drawn to a simple description provided by Jesse Stommel. The practice itself isn’t simple at all, but it was described simply. I don’t even remember a super specific definition, just what it made me think about.

I remember that this description prompted me to reflect on how it felt to be graded, which was mostly bad. And when it felt good, the grade itself was still the focus of my pride, not the curiosity or problem-solving I did. I also have great memories where I don’t remember the grade, just the experience of doing good, challenging work, and the relationships with teachers who took away some of the power grades had over me.

I also remember how ungrading challenged ideas about assessment I thought I was comfortable with. I wasn’t ready to think critically about learning objectives or rubrics, as I had “already learned” how to use these tools to improve learning and set clear expectations, not just for summative assessment. But it was so worth it to examine how grades still unconsciously influenced my thinking about assessment.

This was all especially rich to ponder as I was part of a professional development program that gave people a chance to design and teach, in environments where grading wasn’t necessary.

I’m thinking about ungrading again because I recently read an article with a call to the ungrading community to be more specific about the term. It laid out a definition of ungrading that included creating student portfolios and, when required, collaborative assignment of grades.

This definition was certainly more specific as an approach. But it didn’t elicit the desire for reflection and curiosity that I had when I was first introduced to the ideas. I went back to a recent definition from Stommel:

“Ungrading” means raising an eyebrow at grades as a systemic practice, distinct from simply “not grading.”

That felt much better. Beyond a general mindset, I think the ungrading community has done a lot to also drive the discussion around the context, details, and examples that are so important to put these ideas to work. There are articles about specific topics like rubrics, collections of FAQs and bibliographies to elaborate and prompt thinking, trials and lessons learned like Clarissa Sorenson-Unruh’s reflections on applying ungrading to chemistry, and collections of teacher experiences like Ungrading, edited by Susan Blum. Again, being defined simply doesn’t mean it’s easy work. But I can raise my eyebrow at how grading influences me in many contexts. I think that’s more useful than worrying if I’m doing ungrading “right.”

I felt satisfied to be pulled back into this topic after not having thought about it for a while. I also realized that I never paid much attention to the “distinct from simply ‘not grading’” part in the definition above.


I don’t hear the word “grades” in my world anymore. I don’t have to grade in the way it’s done in higher education. And because of that, I think I resonate with ungrading even more, or at least in a different way.

What does grading leave behind when you don’t have to grade? 

I’ve needed to reflect on this for a while, and I’m glad something came up in my feed to prompt me. I want to think through this more.

Even without grades:

  • Am I doing anything that can pit learners and trainers against each other?
  • Do I value “objectivity” in measurement over good feedback and the learning process itself?
  • Am I not questioning definitions of assessment enough? Advocating for formative assessment? Letting assessment be shorthand for grading?
  • Am I looking hard enough for how negative aspects of grading are laundered through requirements like compliance?
  • Who does my approach to assessment work for? Who doesn’t it work for? How does it feel to be assessed and what are the impacts?

I think there’s more, but it’s an OK list coming from a simple call to action. And I don’t think I’d ask these questions if ungrading were a strictly defined best practice.

Let’s not throw the baby out with the bathwater

Whew, a lot of pressure on the first “real” blog post. And there are so many things I could talk about! 

  • For now, John Warner’s take is about where I’m at regarding ChatGPT. I don’t teach a course that’s likely to be very affected by AI until next spring — at which point, no doubt, the technology will be very different from today. Maybe I’ll have to work out my thoughts more carefully before then.
  • I don’t know if this is such big news everywhere, or just here in Minnesota; anyway, no one needs my hot take on what happened at Hamline. I’ll defer to nuanced takes from Muslim organizations and commenters (unpaywalled link).
  • This article in The Verge is a good review of the whole Twitter fiasco of the last few months.

I had a strong reaction as I read “The Terrible Tedium of ‘Learning Outcomes’” (unpaywalled link). All I could muster at the time was a cliché. Maybe here I can develop my reaction more.

This article is the first time I’ve encountered Gayle Greene. She is apparently an accomplished scholar and professor emerita. It’s important to point out that her essay in the Chronicle is adapted from her current book, Immeasurable Outcomes, which I haven’t read. I’m sure the book has room for much more nuance and qualification than the essay. It looks like the book is a strong defense of liberal education ideals — I bet there is a lot in there I would agree with.

I find it striking that there is a positive blurb there from Lynn Pasquerella of the AAC&U, an organization that has articulated the essential learning outcomes of a liberal education and promotes a method of assessing student learning of those outcomes. Yet Greene’s essay is a protest against ideas like those.

Maybe her essay is a deliberate provocation. Consider me provoked (cautiously).

The air is abuzz with words like models and measures, performance metrics, rubrics, assessment standards, accountability, algorithms, benchmarks, and best practices. Hyphenated words have a special pizzazz — value-added, capacity-building, performance-based, high-performance — especially when one of the words is data: data-driven, data-based, benchmarked-data. The air is thick with this polysyllabic pestilence, a high-wire hum like a plague of locusts. Lots of shiny new boilerplate is mandated for syllabi, spelling out the specifics of style and content, and the penalties for infringements, down to the last detail.

Gayle Greene, “The Terrible Tedium of ‘Learning Outcomes’”

I get it. There are some of these corporate-ish words that set my teeth on edge, too. “Scale” is one of my pet peeves. It always feels like a way to dismiss anything that’s good as not good enough; “Yes, that’s great, but how does it scale?”

Greene’s thesis is that the learning that takes place in college is ineffable, unmeasurable, “matters of the spirit, not the spreadsheet.” Her characterization of the current machinery of learning outcomes and their assessment as “pernicious nonsense” captures a feeling that I know many in higher education share. When these processes are approached from a perspective of box-checking, of compliance, then I agree, it is not a good use of anyone’s precious time. But what if the ways that these processes work are the bathwater, and the purpose these processes ought to serve is the baby?

In passing, Greene links to this comment: “… while we are agonizing about whether we need to change how we present the unit on cyclohexane because 45 percent of the students did not meet the learning outcome, budgets are being cut, students are working full-time jobs, and debt loads are growing.” I’d suggest that these are real problems and that learning outcomes assessment has nothing to do with them. In fact, learning outcomes assessment is how you know that 45% of your (I presume organic chemistry) class doesn’t understand cyclohexane — and isn’t that useful information?

A response to Greene’s essay from @MarcSchaefferGD

When she mentions these real problems in passing, I suspect assessment is just the punching bag taking the brunt of the criticism for the fact that higher education today is not like the halcyon days of yore. But let’s disrupt those nostalgic sepia-toned images of the past to also remember that higher education then served a much wealthier and far less diverse student body. Higher education today must learn to serve much greater diversity, families that are not so well-connected, and students who come with a greater variety of goals. Data — yes, some from assessment processes — are tools for helping us do a better job working toward those worthwhile goals.


I’m not being snarky here: I wonder what Greene would do with a student’s essay if they claimed they “understand Shakespeare’s use of light and dark in Macbeth.” Wouldn’t she ask the student to elaborate further, to demonstrate their understanding with examples, with (dare I say it) evidence? Why, then, is it any different when we look at our own claims? If we claim that students are learning things in college, then shouldn’t we be able to elaborate further, to demonstrate how we know they learn those things?

I think maybe a major stumbling block is the issue of objectivity. She writes, “But that is the point, phasing out the erring human being and replacing the professor with a system that’s ‘objective.’ It’s lunacy to think you can do this with teaching, or that anyone would want to.” I teach physics, so my humanities colleagues might expect me to be a major proponent of “objective” and quantifiable measures. But surprise! I think this is a misunderstanding of the assessment process.

Surely mentors read and commented on the chapters of Greene’s dissertation. That feedback was assessment, but no one claimed it had to be objective. In fact, one of the most common complaints of graduate students is that different mentors on their dissertation committees give contradictory feedback. That’s just the way it goes.

I wonder if thinking of the dissertation helps in another way: Some faculty just seem convinced that critical thinking skills are, by their very nature, not assessable. But what were your mentors doing when they commented on your writing? Greene ends by saying, “We in the humanities try to teach students to think, question, analyze, evaluate, weigh alternatives, tolerate ambiguity. Now we are being forced to cram these complex processes into crude, reductive slots, to wedge learning into narrowly prescribed goal outcomes, to say to our students, ‘here is the outcome, here is how you demonstrate you’ve attained it, no thought or imagination allowed.'” Did she feel there was no thought or imagination allowed when her mentors clarified what they wanted to see from her, when she was a student?