If you were to ask me to name one thing that underlies all of the cultural problems we have in STEM, I’d say it’s perverse incentives.
What are these cultural problems I’m talking about? I imagine these are as obvious as a toddler’s faceful of peanut butter, but okay let’s spell it out:
Expectations to produce an absurd volume of scientific publications, at every career stage
Expectations that research trainees pursue academic careers in elite exclusionary institutions
An epidemic of misogyny and sexual harassment that actively pushes women out of every field in science
A prevailing atmosphere of classism and racism (which you’ll learn about really quickly by spending time in a regional public university that principally serves non-white students), along with other -isms to boot
The employee/student dichotomy for grad students that allows institutions to extract maximum labor from students with minimum institutional investment
Undervaluation of scientists who do community-engaged research and discipline-based educational research
Treating undergraduates enrolled at research universities as an afterthought instead of the raison d'être.
I could go on, but we all get the idea.
Can we make things better? The answer is yes. We can, we have to, and this change is eminently possible, even as our democracy is failing and the federal government has chosen war against universities and science itself. We are in a moment of disequilibrium, which means that our institutions are going to emerge from this crisis different than we were before all of this sh*t broke loose. I suppose the clichéd way of saying this is that our crisis is also an opportunity.
Where do we focus? At this moment of rapid change, where do we act to fix our culture? For those of us who are deep enough in our institutions to have a say in these matters, I suggest that we look at revising our tenure criteria so that our expectations become more aligned with our values.
Keep in mind that you might not think you have control over tenure criteria; lots of folks underestimate their own power. Please don't make this mistake.
At this moment, I think it's only reasonable for every science department to revisit tenure criteria in light of federal disinvestment in scientific research. It's going to get harder and harder to land the funding that our institutions are expecting from us, which means that a bunch of us are going to be doing science on the cheap compared to what we are used to. This means that we had better start adjusting our institutional expectations! While we're cracking into the criteria for research funding and productivity, how about we use this opportunity to reconsider what else is in there?
As you might expect, I have some suggestions. These suggestions come from the innovation incubators of higher education, which are regional public universities. (Whenever you see a new progressive trend or practice or standard or idea about making our universities better, you'll usually see it being attributed to some think tank or program in the Ivy League or another big research institution. But every single time, if you scratch below the surface, you'll see that RPUs did it first. We innovate out of necessity, because we don't have the resources or the access or the visibility or the structural power.)
Here are some ideas:
Include mentoring as an explicit tenure and promotion criterion. There are a lot of ways to do this, and I think it's possible to focus on the quality and process of mentorship rather than the amount of product. Some ways of doing this are soliciting confidential letters from former mentees; expecting undergraduate students to attend conferences and give first-author presentations; and requiring mentorship training.
Increase accountability for teaching effectiveness in research institutions. While it's well known that lackluster teaching isn't a dealbreaker for tenure at R1s, I've seen some R1s change tenure criteria at the department level to require ongoing, substantial engagement in professional development in pedagogy when teaching evaluations indicate ample room for improvement. (Also, this needs to be implemented with careful attention to disparities in student perceptions of teaching effectiveness with respect to gender and other aspects of identity.)
Provide a mechanism for counting the products of community-engaged scholarship, transdisciplinary projects, and effective outreach activities as part of the job in tenure criteria. Doing these things well takes plenty of effort and the development of expertise, and they are typically seen as valuable to the institution. Most people who do this work are so committed that they do it without getting 'credit' for it, but if we value it, then we'll see more of it, and more structures to ensure that it's done well.
Place scholarship of teaching and learning on equal footing with disciplinary research. A lot of departments don't know what to do with people who do research on effective pedagogy, and a lot of places treat this work as less valuable than other kinds of research. About 20 years ago, my institution passed a university-wide policy that publications in discipline-based education research must be considered as valid as other disciplinary papers. This gets hard because most scientists are not well positioned to evaluate the quality or even the validity of this work. I mean, I've seen talented scientists denigrate some of the highest-quality DBER work out there, purely from a position of bias. And unfortunately, some scholars take advantage of this ignorance by doing shoddy pedagogical research and publishing it in journals of questionable validity, but this doesn't mean that we can't count good DBER as genuine scholarship.
When soliciting external letters, institutions can ask reviewers to indicate whether they are aware of incidents or investigations of misconduct or inappropriate behavior. Departments are often unaware of toxic behavior by their own colleagues at conferences, in the field, or sometimes even in their own laboratories, while colleagues at other institutions who work in the same field know all about it. I know some places have buyer's remorse after giving tenure to absolutely the wrong guy, and this can help address that situation.
Institutions can place higher value on mentoring arrangements that support social mobility. Faculty can recruit students who have been given every kind of opportunity and are well prepared to be highly productive with minimal mentoring investment, or they can choose to build a more equitable community by selecting talented students who haven't had as much opportunity and privilege. For example, some REU programs have a 100% success rate in sending students on to doctoral programs, while others might have 50% or fewer move on. How much of this is about what happens during the program, and how much is about the program's priorities in recruiting and selecting participants? How about aligning tenure criteria with institutional values? For example, my university's mission explicitly says that social justice is a top priority, and many departments have mission statements that say things like this. Can we operationalize this in tenure criteria?
Should we be outsourcing internal evaluations of research quality to the heavily stressed peer review system? Institutions can emphasize the quality of research rather than the quantity by limiting the number of publications that a person puts in their tenure file. This isn't as big of a deal as it sounds. For example, when you submit your biosketch to NSF, you're only allowed to list 10 papers. That's it. (Some people try to game the system by citing a jazillion of their own papers in the proposal itself, but that never comes off well, in my opinion.) It doesn't matter how much you've published; the program and the reviewers are focused on your best and most relevant work. Imagine what this could do for improving the quality of papers and reducing the evolutionary arms race of publishing so many! Another approach is redacting the name of the journal during the review process, or removing impact factor from the evaluation, so that the quality of the work stands for itself. This requires that colleagues be prepared to evaluate work on its own merits. In the past, the name of the journal could be a good surrogate for the quality of the work, but those days are past us. Some researchers keep playing the game of shooting incredibly high and working their way down the tiers to put papers in the highest-impact-factor journal possible. That's a perverse incentive of its own.
Some of these changes are simple, and some of them may well be seen as revolutionary. Perhaps the fix your institution needs is to pare down tenure criteria instead of adding to them! You know what's wrong with the culture of your own university and your own department. (Or maybe you don't! I've been to a lot of highly problematic places, and there are folks who get through blissfully unaware of the pain they are causing other people.) Either way, change the incentive structure and you can change behavior. Humans are not necessarily rational beings, but when folks are trying to get tenure and promotion, they work to check off the boxes. This sets up a pattern of behavior and expectations that drives the field.