Hallway conversations about teaching and learning

Let’s not throw the baby out with the bathwater

Whew, a lot of pressure on the first “real” blog post. And there are so many things I could talk about! 

  • For now, John Warner’s take is about where I’m at regarding ChatGPT. I don’t teach a course that’s likely to be very affected by AI until next spring — at which point, no doubt, the technology will be very different from today. Maybe I’ll have to work out my thoughts more carefully before then.
  • I don’t know if this is such big news everywhere, or just here in Minnesota; anyway, no one needs my hot take on what happened at Hamline. I’ll defer to nuanced takes from Muslim organizations and commenters (unpaywalled link).
  • This article in The Verge is a good review of the whole Twitter fiasco of the last few months.

I had a strong reaction as I read “The Terrible Tedium of ‘Learning Outcomes’” (unpaywalled link). All I could muster at the time was a cliché. Maybe here I can develop my reaction more fully.

This article is my first encounter with Gayle Greene. She is apparently an accomplished scholar and professor emerita. It’s important to point out that her essay in the Chronicle is adapted from her current book, Immeasurable Outcomes, which I haven’t read. I’m sure the book has room for much more nuance and qualification than the essay. It looks like the book is a strong defense of liberal education ideals — I bet there is a lot in there I would agree with.

I find it striking that there is a positive blurb there from Lynn Pasquerella of the AAC&U. The AAC&U has articulated the essential learning outcomes of a liberal education and promotes a method of assessing student learning of those outcomes. Yet Greene’s essay is a protest against ideas like those.

Maybe her essay is a deliberate provocation. Consider me provoked (cautiously).

The air is abuzz with words like models and measures, performance metrics, rubrics, assessment standards, accountability, algorithms, benchmarks, and best practices. Hyphenated words have a special pizzazz — value-added, capacity-building, performance-based, high-performance — especially when one of the words is data: data-driven, data-based, benchmarked-data. The air is thick with this polysyllabic pestilence, a high-wire hum like a plague of locusts. Lots of shiny new boilerplate is mandated for syllabi, spelling out the specifics of style and content, and the penalties for infringements, down to the last detail.

Gayle Greene, “The Terrible Tedium of ‘Learning Outcomes’”

I get it. There are some of these corporate-ish words that set my teeth on edge, too. “Scale” is one of my pet peeves. It always feels like a way to dismiss anything that’s good as not good enough: “Yes, that’s great, but how does it scale?”

Greene’s thesis is that the learning that takes place in college is ineffable, unmeasurable, “matters of the spirit, not the spreadsheet.” Her characterization of the current machinery of learning outcomes and their assessment as “pernicious nonsense” captures a feeling that I know many in higher education share. When these processes are approached from a perspective of box-checking, of compliance, then I agree: it is not a good use of anyone’s precious time. But what if the ways these processes work are the bathwater, and the purpose they ought to serve is the baby?

In passing, Greene links to this comment: “… while we are agonizing about whether we need to change how we present the unit on cyclohexane because 45 percent of the students did not meet the learning outcome, budgets are being cut, students are working full-time jobs, and debt loads are growing.” I’d suggest that these are real problems and that learning outcomes assessment has nothing to do with them. In fact, learning outcomes assessment is how you know that 45% of your (I presume organic chemistry) class doesn’t understand cyclohexane — and isn’t that useful information?

A response to Greene’s essay from @MarcSchaefferGD

When she mentions these real problems in passing, I suspect assessment is just the punching bag taking the brunt of the criticism for the fact that higher education today is not like the halcyon days of yore. But let’s disrupt those nostalgic sepia-toned images of the past to also remember that higher education then served a much wealthier and far less diverse student body. Higher education today must learn to serve much greater diversity, families that are not so well-connected, and students who come with a greater variety of goals. Data — yes, some from assessment processes — are tools for helping us do a better job working toward those worthwhile goals.


I’m not being snarky here: I wonder what Greene would do with a student’s essay if they claimed they “understand Shakespeare’s use of light and dark in Macbeth.” Wouldn’t she ask the student to elaborate further, to demonstrate their understanding with examples, with (dare I say it) evidence? Why, then, is it any different when we look at our own claims? If we claim that students are learning things in college, then shouldn’t we be able to elaborate further, to demonstrate how we know they learn those things?

I think maybe a major stumbling block is the issue of objectivity. She writes, “But that is the point, phasing out the erring human being and replacing the professor with a system that’s ‘objective.’ It’s lunacy to think you can do this with teaching, or that anyone would want to.” I teach physics, so my humanities colleagues might expect me to be a major proponent of “objective” and quantifiable measures. But surprise! I think this is a misunderstanding of the assessment process.

Surely mentors read and commented on the chapters of Greene’s dissertation. That feedback was assessment, but no one claimed it had to be objective. In fact, one of the most common complaints of graduate students is that different mentors on their dissertation committees give contradictory feedback. That’s just the way it goes.

I wonder if thinking of the dissertation helps in another way: Some faculty just seem convinced that critical thinking skills are, by their very nature, not assessable. But what were your mentors doing when they commented on your writing? Greene ends by saying, “We in the humanities try to teach students to think, question, analyze, evaluate, weigh alternatives, tolerate ambiguity. Now we are being forced to cram these complex processes into crude, reductive slots, to wedge learning into narrowly prescribed goal outcomes, to say to our students, ‘here is the outcome, here is how you demonstrate you’ve attained it, no thought or imagination allowed.’” Did she feel there was no thought or imagination allowed when her mentors clarified what they wanted to see from her, when she was a student?

5 Comments

  1. Anne Metevier

    There is so much to respond to here. I’m not fond of the box-checking compliance stuff either, as it can take up a lot of my time, may or may not be in my teaching contract, and feels like it pulls me away from teaching. It can also feel inflexible. What if I want to change a learning outcome or a suite of learning outcomes for a course, but that requires a months-long process of vetting (that takes up time, may or may not be in my teaching contract, and pulls me away from teaching)?

    On the other hand, I’m very much in favor of doing away with “students will know”- and “students will understand”-type learning outcomes, as they invite the question: how do we know they know? The more we explain what it looks like when students are learning content, or improving at a practice or skill, the better. In this vein, I have found rubrics invaluable tools not only for assessing my students’ learning, but also for conveying expectations to them. And I very much expect that rubrics have helped my students learn more. It might not be possible to develop a rubric for “matters of the spirit”, but then again, have we tried? How do we know we’re “kindling … wonder, wisdom”, etc., in our students? If something is indicating to us that this “kindling” is happening, why not write it down, talk about it with colleagues, debate it, and refine our ideas about it? And, most importantly, create the conditions for it to happen!

  2. Linda Strubbe

    I hear you, Scott. The article you’re referencing (which I haven’t read) seems to take an all-or-nothing, black-and-white view of learning outcomes, as if the only possible conclusions are that learning outcomes are completely useful or completely useless. It sounds to me that if the author thinks the particular learning outcomes they have are not useful, then they should consider going deeper and revising the outcomes — come up with something that *is* useful for them. Even if it’s hard to measure, there *is* something you’re going for when you’re teaching, and there are some ways you’re attempting to get feedback (even body language) from a class about how you’re moving towards that goal. I’d say it’s useful for educators trying to accomplish something to think about and refine for ourselves what we are trying to accomplish and how we’ll know we’re making progress. Maybe it isn’t always a number, maybe it’s somewhat hard to quantify, but learning outcomes don’t always have to be quantifiable. They should be useful to the educator and student. So if the set the author currently has is not useful, then I’d encourage the author to think more creatively about what *are* outcomes that are useful to *them*, why they are teaching this at all, and what indicators they and their students can/should use to see how it’s all going.

  3. Nicholas McConnell

    Full disclosure: I hold one of the administrative positions whose existence Greene criticizes, though I recognize she is (mostly) hating on the game rather than individual players. In the assessment domain, I share her skepticism about approaches that obsess over quantifying learning and – by omission in what is sought for evidence or validation – devalue the social and relational ingredients that most learning requires. I agree with Anne and Linda that defining learning outcomes as fixed entities and prescribing assessments without room for adaptation can hack down students’ and instructors’ varied assets and experiences in service to a common denominator. And alienate many along the way.

    In my role I sometimes wonder, what does it mean for an institution to be accountable for its students’ learning? Students enroll and place their trust (and their hard-earned cash) in the institution before knowing all of its staff and faculty members, or one another. Would it pass to say the institution meets its promise that students will become successful critical thinkers and communicators, because Professor A knows that her students are, and Professor B also knows that her students are, and Professor C noticed some weren’t but has since added the following supports, and so forth? Perhaps that should pass as collective evidence, without seeking a way to standardize and aggregate across classes and programs. I’d gladly exchange my title if it meant devoting adequate resources to supporting instructors in making reliable evaluations (however they design them) and adjusting their approaches to meet different students’ needs.

    Or we could go further and ask each student to showcase how they’ve succeeded, in their own voice [1,2]. As in Scott’s Macbeth example, the student’s account still might not be taken at face value, whereas instructors are presumed to have expertise to mediate students’ claims about their own learning (thank you Erin for starting a conversation here about positionality [3]).

    Yet as Greene and many others lament, most institutions have put their chips into measuring some common denominator, aggregating measurements across the institution, then disaggregating by various attributes/identities/backgrounds/experiences. This approach strives for a coherent picture, and sacrifices nuance for simplification and reduction. Many claim this is what our accreditors (and by extension our peers who review us) demand, and I don’t have experience in enough regions of the U.S. to know where this is true and where it is not.

    Since I’ve come full circle to the original complaint, I’ll wonder aloud: if all the compliance and administrative bloat just vanished, would we naturally arrive at something like the instructor-centered or student-centered states above? Or would they need some kind of propping up? And what would that be?

    [1] https://www.centerforengagedlearning.org/eportfolio-as-high-impact-practice/
    [2] https://www.aacu.org/trending-topics/eportfolios
    [3] https://teachingmadevisible.com/2023/01/positionality-perceptions-and-power-dynamics-in-educational-spaces/#comment-11

    • Scott Seagroves

      I think what you describe is almost a sort of accountability that “passes,” but not quite. Professors A, B, and C have some students in common. I’d like to see all the professors get together to discuss what they’re noticing, maybe have an a-ha moment here and there, and possibly collectively decide to add some of C’s supports across the university. I’m not sure it should be *required* that there be some collective action, but I think discussion among the community of faculty is necessary to at least allow for the possibility of collective action. I’m not sure you can get there with A, B, and C each teaching in their own vacuum, never discussing with anyone else, never hearing how others approach the same issues, etc.

      What do you think?

      • Nicholas McConnell

        That seems reasonable, and ideally something close to this is already happening within departments. Two things seem like challenges for both this approach and the reductive/quantitative status quo: (1) bringing in more than the “same ten people (STP)” who embrace reflecting on and developing their practice with others but aren’t necessarily who students encounter at critical junctures. To this point (and Anne’s), compensation and scope/prioritization of instructor duties have to be part of the solution, for both full- and part-time folks. And (2) maintaining accountability to students by including them in a meaningful way.