Hallway conversations about teaching and learning

Author: Scott Seagroves


Don’t make the same mistakes I keep making

Spoiler (the tl;dr advice):

There’s almost never a good reason to give students a mass of 1 kilogram, and doing so can reinforce misconceptions about forces and units.


If you teach high school or introductory college physics (or just know the subject well), I wonder if you can tell what I’ve done wrong here:

A mass hanging from two strings at arbitrary angles, with tension protractors reading out the tension in each string and its angle from horizontal

OK, well, I haven’t given you a lot of context.

Early in our study of forces, I make sure students see a lot of examples of objects that are not accelerating, where the forces balance out, and conversely, a lot of examples of objects that are accelerating, where the forces do not balance out. Sometimes we don’t even have to quantify the forces; we just have to know that some are bigger than others. Here are some examples that students see:

A person rides an elevator from the ground floor to the top floor.

a diagram showing the relative balance/imbalance of forces on a person during an elevator ride

An effectively frictionless car is released on an inclined track.

force diagram for a frictionless object sliding down an incline

In the distant past, I noticed that some students left these exercises with an insidious misconception: From the kinds of examples we often look at, it’s easy to think that there’s always an up force to go with a down force, a left to go with a right, etc. And if there are any funny angles, then there’s always acceleration (because that left-right-up-down balance is upset).

So I make sure to emphasize the example in the photograph above, specifically because it involves an odd number of forces at “funny angles” that still balance out, giving zero acceleration.

Here it is again, but with the force vector information:

A mass hanging from angled strings. One string has 4.9 N of tension pulling up and to the left at 26 degrees above horizontal; the other has 8.85 N of tension pulling up and to the right at 59.5 degrees above horizontal.

Now can you see (or calculate) what I’ve done wrong?

This course does not require that a student have taken high school physics. But of course, some have. And somewhere students learn this sing-songy mantra:

“Gravity’s always nine point eight.”

When students drill a bunch of projectile motion kinematics problems, they learn (correctly) that the acceleration of gravity here at the surface of Earth is 9.8 m/s², downward. But the precision of that statement is lost, and it gets filed away as “gravity’s always 9.8” (there are never any units in this mantra, and that’s part of the problem).

This (more-or-less correct) idea that the acceleration of gravity is always the same number transfers to an incorrect idea that the force of gravity is the same on every object. And what is that force of gravity? “Nine point eight,” of course! So on the above diagram, where I’ve labeled “force of gravity on mass,” many students have “9.8” written on their drawing. What they should have is Fgrav = mg, where m is the mass of the object in kg, and g = 9.8 N/kg, the strength of the gravitational field here at the surface of the Earth.
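To make the contrast concrete (just my own quick arithmetic with round example masses, not anything from the course materials): g stays the same, but the force of gravity scales with the mass.

```latex
% Same g, different masses: the force of gravity is not "always 9.8."
\begin{align*}
F_{\text{grav}} &= m g, \qquad g = 9.8\ \mathrm{N/kg} \\
m = 0.5\ \mathrm{kg}: \quad F_{\text{grav}} &= (0.5\ \mathrm{kg})(9.8\ \mathrm{N/kg}) = 4.9\ \mathrm{N} \\
m = 2\ \mathrm{kg}: \quad F_{\text{grav}} &= (2\ \mathrm{kg})(9.8\ \mathrm{N/kg}) = 19.6\ \mathrm{N}
\end{align*}
```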

So what mistake did I make? Never give students a 1 kilogram mass, because with m = 1 kg the force of gravity really does come out to 9.8 newtons, and that inadvertently reinforces the misconception that the force of gravity is always 9.8 newtons.
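If you want to check the numbers in the photo yourself, here is the arithmetic: summing the vertical components of the two tension readouts gives the weight of the hanging object, and dividing by g gives the mass. (Any small mismatch is just rounding in the protractor readouts.)

```latex
% Vertical force balance using the tension-protractor readouts in the photo.
\begin{align*}
F_{\text{grav}} &= T_1 \sin\theta_1 + T_2 \sin\theta_2 \\
                &= (4.9\ \mathrm{N})\sin 26^\circ + (8.85\ \mathrm{N})\sin 59.5^\circ \\
                &\approx 2.1\ \mathrm{N} + 7.6\ \mathrm{N} \approx 9.8\ \mathrm{N} \\
m &= \frac{F_{\text{grav}}}{g} \approx \frac{9.8\ \mathrm{N}}{9.8\ \mathrm{N/kg}} = 1\ \mathrm{kg}
\end{align*}
```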

I have had this realization before, but I’m forgetful, and dumb. Learn from my mistake!

Summer slowdown

Things have slowed a bit here at TMV.

I was going to write about the Supreme Court’s affirmative action decision. I have some sketchy notes; I was thinking about the fact that the vast majority of US students don’t attend the selective sorts of institutions where affirmative action ever applied. I was thinking about how this is a disappointing decision, but it could also be an opportunity to re-envision the whole process. I like Timothy Burke’s take on this: We should stop working so hard on various schemes to allocate the limited spaces on the lifeboats — we should get more lifeboats instead.

Then I was going to zoom in on the odd footnote 4 in the majority opinion. This is where Roberts off-handedly exempts the military service academies from the ruling. I was going to point out that the US’s amicus brief and oral arguments talked about the importance of a diverse officer corps, which is not the same thing as diversity at the service academies. In fact, fewer than 20% of military officers come from the service academies. The majority of officers go to “regular” colleges through ROTC programs. I think it’s interesting that the fraction who go to college, then later decide to join up through OCS or direct commission, is roughly comparable to the fraction who come through the service academies. All of this seems like research one of Roberts’s clerks could’ve done. The footnote strikes me as a real weakness.

Anyway. That’s what I would have written about.

But a week ago, on July 2, my Dad died. So if you’re reading this, I encourage you to let go of some notes that haven’t coalesced into whatever they were supposed to form. Go hug your kids. Or go hug your parents. Or go hug whoever’s important to you and you’re not sure you’ve let them know. I don’t have some Important Wisdom to Impart; I don’t think it works like that. It’s not like my Dad died and I was granted some epiphany and now I understand what’s really important. But I don’t think you can go wrong with some hugs (with consent, of course).

Scott and Wallace Seagroves on a cruise on Lake Superior, summer 2018

A short round-up on ungrading

I am having lots of trouble keeping up with various commitments. Being here is one of them! But when I think of this place as a conversation rather than a set of settled statements, it helps. So, this is like the quick text I might send to a friend just to keep things going, even when I wish I had time to do more.

Rafael brought up ungrading.

There’s an entire issue of Zeal devoted to ungrading, with contributions from Jesse Stommel and Robert Talbert, whom I follow, and several voices who are new to me. Talbert also recently published a “stop/start/continue” for the ungrading community.

One of Stommel’s simplest suggestions — and yet one I’m ashamed to have often skipped — is to have explicit conversations with students about grades and grading practices. I have a couple of questions about this on my latest feedback questionnaire for my students. I linked to Stommel’s piece and to Kohn’s “The Case Against Grades” in the context for these questions, so students are invited to read those but not required to.

Here’s a (long) question and some of the responses so far:

Let's look specifically at Lab Practicals and the points/grades that go with them. Here's the 2nd lab practical from the fall semester — please review it briefly to remind yourself how points worked.
My goals were for everyone to accomplish enough on the lab practical to demonstrate they'd learned the relevant ideas. Putting points on different aspects is a way for me to signal to you what is important. It's not that I actually care about the grade. In fact, almost everyone passes the 100% line.
Would it be possible to describe what is important, lay out requirements, or some other idea that does not involve points, and achieve the same outcomes? Or are the points an essential motivation for making this work? 

- I am very motivated by points because I pride myself in getting good grades
- I disagree, With my major I am very busy. If I saw something with no points to it I would add it to the last thing to do in my pile of work.
- honestly not sure
- I find the points to be motivating
- I like the points for motivation, I felt like I would look at the material more and review it more before the lab practical. I felt that this made me feel more comfortable with the material more. I wish we did more of these lab practical's. for example, if we did 4, all 4 would equal the 2 we did in points.
- the points are pretty important

I have lots of thoughts about why these (few) responses do not seem to support an ungrading approach. But what do you all think?

(Side note: If you’d like to click through to the Lab Practical assignment, you’ll see that it is very, well, “alternative graded,” I guess? There are more points available than needed for the assignment. So I’m getting some of these ideas in even if it’s not via true ungrading.)

So, what do you think about these students’ thoughts on points? What would your own students say?

Let’s not throw the baby out with the bathwater

Whew, a lot of pressure on the first “real” blog post. And there are so many things I could talk about! 

  • For now, John Warner’s take is about where I’m at regarding ChatGPT. I don’t teach a course that’s likely to be very affected by AI until next spring — at which point, no doubt, the technology will be very different from today. Maybe I’ll have to work out my thoughts more carefully before then.
  • I don’t know if this is such big news everywhere, or just here in Minnesota; anyway, no one needs my hot take on what happened at Hamline. I’ll defer to nuanced takes from Muslim organizations and commenters (unpaywalled link).
  • This article in The Verge is a good review of the whole Twitter fiasco of the last few months.

I had a strong reaction as I read “The Terrible Tedium of ‘Learning Outcomes’” (unpaywalled link). All I could muster at the time was a cliché. Maybe here I can develop my reaction more.

This essay is the first place I’ve encountered Gayle Greene. She is apparently an accomplished scholar and professor emerita. It’s important to point out that her essay in the Chronicle is adapted from her current book, Immeasurable Outcomes, which I haven’t read. I’m sure the book has room for much more nuance and qualification than the essay does. It looks like the book is a strong defense of liberal education ideals — I bet there is a lot in there I would agree with.

I find it striking that there is a positive blurb for the book from Lynn Pasquerella of the AAC&U. That organization articulated the essential learning outcomes of a liberal education and promotes a method of assessing student learning of those outcomes. Yet Greene’s essay is a protest against ideas like those.

Maybe her essay is a deliberate provocation. Consider me provoked (cautiously).

The air is abuzz with words like models and measures, performance metrics, rubrics, assessment standards, accountability, algorithms, benchmarks, and best practices. Hyphenated words have a special pizzazz — value-added, capacity-building, performance-based, high-performance — especially when one of the words is data: data-driven, data-based, benchmarked-data. The air is thick with this polysyllabic pestilence, a high-wire hum like a plague of locusts. Lots of shiny new boilerplate is mandated for syllabi, spelling out the specifics of style and content, and the penalties for infringements, down to the last detail.

Gayle Greene, “The Terrible Tedium of ‘Learning Outcomes’”

I get it. There are some of these corporate-ish words that set my teeth on edge, too. “Scale” is one of my pet peeves. It always feels like a way to dismiss anything that’s good as not good enough; “Yes, that’s great, but how does it scale?”

Greene’s thesis is that the learning that takes place in college is ineffable, unmeasurable, “matters of the spirit, not the spreadsheet.” Her characterization of the current machinery of learning outcomes and their assessment as “pernicious nonsense” captures a feeling that I know many in higher education share. When these processes are approached from a perspective of box-checking, of compliance, then I agree, it is not a good use of anyone’s precious time. But what if the ways that these processes work are the bathwater, and the purpose these processes ought to serve is the baby?

In passing, Greene links to this comment: “… while we are agonizing about whether we need to change how we present the unit on cyclohexane because 45 percent of the students did not meet the learning outcome, budgets are being cut, students are working full-time jobs, and debt loads are growing.” I’d suggest that these are real problems and that learning outcomes assessment has nothing to do with them. In fact, learning outcomes assessment is how you know that 45% of your (I presume organic chemistry) class doesn’t understand cyclohexane — and isn’t that useful information?

A response to Greene’s essay from @MarcSchaefferGD

When she mentions these real problems in passing, I suspect assessment is just the punching bag taking the brunt of the criticism for the fact that higher education today is not like the halcyon days of yore. But let’s disrupt those nostalgic sepia-toned images of the past to also remember that higher education then served a much wealthier and far less diverse student body. Higher education today must learn to serve much greater diversity, families that are not so well-connected, and students who come with a greater variety of goals. Data — yes, some from assessment processes — are tools for helping us do a better job working toward those worthwhile goals.


I’m not being snarky here: I wonder what Greene would do with a student’s essay if they claimed they “understand Shakespeare’s use of light and dark in Macbeth.” Wouldn’t she ask the student to elaborate further, to demonstrate their understanding with examples, with (dare I say it) evidence? Why, then, is it any different when we look at our own claims? If we claim that students are learning things in college, then shouldn’t we be able to elaborate further, to demonstrate how we know they learn those things?

I think maybe a major stumbling block is the issue of objectivity. She writes, “But that is the point, phasing out the erring human being and replacing the professor with a system that’s ‘objective.’ It’s lunacy to think you can do this with teaching, or that anyone would want to.” I teach physics, so my humanities colleagues might expect me to be a major proponent of “objective” and quantifiable measures. But surprise! I think this is a misunderstanding of the assessment process.

Surely mentors read and commented on the chapters of Greene’s dissertation. That feedback was assessment but no one claimed it had to be objective. In fact, one of the most common complaints of graduate students is that different mentors on their dissertation committees give contradictory feedback. That’s just the way it goes.

I wonder if thinking of the dissertation helps in another way: Some faculty just seem convinced that critical thinking skills are, by their very nature, not assessable. But what were your mentors doing when they commented on your writing? Greene ends by saying, “We in the humanities try to teach students to think, question, analyze, evaluate, weigh alternatives, tolerate ambiguity. Now we are being forced to cram these complex processes into crude, reductive slots, to wedge learning into narrowly prescribed goal outcomes, to say to our students, ‘here is the outcome, here is how you demonstrate you’ve attained it, no thought or imagination allowed.'” Did she feel there was no thought or imagination allowed when her mentors clarified what they wanted to see from her, when she was a student?

Introduction

Welcome

a TMV logo

I had an idea — well, several ideas, I guess. I wanted a project that could be an excuse to work with some of the great people I know. I wanted a low-stakes venue to write as a form of working out my own thinking. I wanted a kind of writing that could be a joy rather than a dreaded item on my to-do list.

I started to loosely outline an idea for an old-school group blog. I envisioned essays and posts in each person’s distinct voice. I pictured us reading each other’s pieces, commenting, creating conversations. Some of my friends and colleagues were skeptical, to say the least. For one thing, who blogs anymore, in the era of social media?

A tweet about blogging from @scalzi

But one time, I described this idea of the most informed, engaged, inspiring educators I know, writing informally — maybe even tentatively — in conversation with each other and the wider worlds of education and pedagogy. And I got just the affirmation that I needed:

“That would be like listening to the conversations I wish I had with colleagues.”

(I’ll keep who said that to myself, to protect her colleagues.)

So I’m founding Teaching Made Visible. We’ll talk about teaching and learning construed broadly. If the team of voices I’ve put together here can be generalized, I think it’s fair to say we’re all interested in social justice, the interplay between theory and practice, and a certain balanced respect for both scholarly, research-based understandings and practical, experience-based ones. Maybe, if we live up to the affirmation above, it’ll be like virtual hallway conversations among educators.

Team of contributors

The “groupness” of this project has remained the one non-negotiable for me. I had no interest in just starting a blog of my own. I’m excited to showcase some great voices from important perspectives. And I get the joy of having a project I’m working on with them. Let me introduce them in my own words here, and then I can give them the floor from here on out:

profile pic of Scott Seagroves

I’m Scott Seagroves. I teach physics and direct the liberal arts general education program at The College of St. Scholastica. A lot of my career has been connected to the Institute for Scientist & Engineer Educators, where I recently led a collection of publications. I’m pursuing an Ed.D. studying “teacher identity” in higher education faculty.


Anne Metevier

I’ve known Anne Metevier since around 2000, and I started working more closely with her around 2003. She has led and/or been invaluable for much of our work at the Institute for Scientist & Engineer Educators; for example, she led this recent publication. She teaches astronomy at Sonoma State University and Santa Rosa Junior College.

Anne is there in many of my best career and personal memories. She was part of the small group that helped my wife and me elope!


Linda Strubbe

I first met Linda Strubbe when she participated in our ISEE Professional Development Program around 2009. She is an independent educator and educational developer and, oh, also, she co-founded the Pan-African School for Emerging Astronomers. I’ve been circling around a vague idea that the way professional development is done in higher education is all wrong, and I keep coming back to some points she and her collaborators make here.

If you have a serious case of impostor syndrome — not me, just, you know, someone — you defensively tell yourself that people who study astrophysics at CalTech and Berkeley might be smarty-smart but they’re not, you know, well-rounded whole people, right? And then you hang out with Linda.


Megan Perry-Spears

Around 2012, I became fast friends with CSS’s Dean of Students, Megan Perry-Spears. I kept hearing students talk about the W curve, and she explained it to me. Knowing Megan is how I came to understand that student affairs staff are educators, and I wish every higher education faculty member understood that. As a matter of fact, she might be more thoughtful about student learning than the average faculty member.

Megan and her wife were — unbeknownst to them — listed as the emergency contacts for my son at school. Obviously there’d never be a time when the school couldn’t reach me or my wife, right? Oops.


Rafael Palomino

Rafael Palomino joined our ISEE Professional Development Program around 2013, and within a couple of years he was an instrumental member of the development team. Most recently we worked together on the Equity & Inclusion “theme” within that program. He is a senior instructional designer at Cepheid. He has a remarkable command of numerous frameworks and theories; I’m excited to hear more from the corporate education/training perspective.

I can never tell if Rafael’s being sincere or patronizing when he compliments my music tastes.


Sarah Stewart

I met Sarah Stewart around 2015. She is the associate director of CSS’s Office of Equity, Diversity, and Inclusion. Somebody asked a dismissive question about the invite-only opt-in first-year seminar for students of color that she teaches, and I found that it demonstrably improves graduation rates for those students. She’s also my classmate in our Ed.D. program.

When I first met her, Sarah said something about a short, unpleasant stint in North Carolina, and I’ve felt that somehow it’s my job to make it right ever since.


Christine O'Donnell

I met Christine O’Donnell when she joined the ISEE Professional Development Program around 2018. Recently I had the privilege of working as her editor on this article; that really got me thinking about her experiences and perspective. She is an Education and Diversity Program Manager at the American Physical Society.

I can’t find it now, but I swear I’ve seen a collage of photos of Christine in hardhats visiting cool facilities.


Erin Karlgaard

I’ve only known Erin Karlgaard since 2021, when we joined an Ed.D. program in the same cohort. She is a 3rd-grade teacher, a Racial Equity Advocate, and a 2022 Minnesota Teacher of the Year finalist. She consistently asks an equity question at times when I’m distracted by something less important.

At one point I asked a “hypothetical” question: If I ask someone from my Ed.D. cohort to blog here, do I have to ask everyone from my cohort? It wasn’t hypothetical, obviously — I wanted to ask Erin!

What now?

Our plan, for now, is for someone to blog here approximately every week. Several of us will always be “on duty” to converse in comments. We’re going to see how this goes for a little while, and then pause to check in. I’m excited to get started!