I’m reflecting on teaching. Which in itself can be good, as long as it isn’t so much reflection that you end up in a mirror maze with copies of your thoughts to infinity and beyond.
But, in this case, the reflection was prompted by a post from John Horgan, where he speculated that the current reproducibility crisis might be traced back to practices in teaching labs: supposedly, students fudge the data they obtain so that it comes out "correct" on reports, in order to get good grades.
Fudging data points is not something that would get you better grades in psychology. There is really no point in doing so for grade purposes (at least where I have been), because we know damned well how hard it is to get significant results, so grading is never based on whether your experiment worked out or not. One of my profs actually told me I could get my PhD based on a non-significant dissertation, as long as the reasoning and method were sound. (With my 12 x 30000 data points, that was not a problem!) I usually tell my own students something similar.
I have engaged in some sinful things like fishing for p-values (usually with the embarrassed caveat that this is not really how you should do it) so that my undergraduate students can practice writing up results sections with something we can pretend is meaningful. And, yes, those beautiful euphemisms for p > .05 (which surely God loves just as much) that Matthew Hankins so nicely collated for us – I've used them.
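To see why that fishing is sinful, a minimal simulation helps. The sketch below (my own illustration, not anything from the classes or posts mentioned here; the numbers – 20 measures, 30 participants – are arbitrary assumptions) generates pure-noise "experiments" and tests many outcome measures in each, reporting how often at least one comes up significant at the .05 level.

```python
import math
import random

random.seed(1)

def p_value(sample):
    """Two-sided z-test of 'mean == 0', assuming known sigma = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

experiments = 2000
tests_per_experiment = 20   # e.g. 20 unrelated outcome measures
n_participants = 30
false_positives = 0

for _ in range(experiments):
    # The null is true everywhere: every "measure" is pure noise.
    ps = [p_value([random.gauss(0, 1) for _ in range(n_participants)])
          for _ in range(tests_per_experiment)]
    if min(ps) < .05:       # fish out the one test that "worked"
        false_positives += 1

rate = false_positives / experiments
print(f"At least one p < .05 in {rate:.0%} of pure-noise experiments")
# With 20 independent tests, the nominal 5% error rate grows toward
# 1 - .95**20, i.e. roughly 64%.
```

The point is not the exact percentage but the shape of the problem: pick the best of twenty noise tests and "significance" is nearly guaranteed, which is exactly why a results section built that way is only pretend-meaningful.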
I have wondered about the role of teaching in the problems abounding, contemplating, as I am right now, its low status. When I was coming up, stats was usually taught by people who had done it for a long time, but methods were relegated to the grad students and the recent docs who needed something to do. In fact, at my grad institution, the methods and APA style course was handed to all the graduate students in year three, as part of their education. Education-wise, that was actually nice for us. The rest of methods was very much learned on the job as a lab assistant. Informally, it was suggested this was done because faculty did not want to be stuck with methods, and why not make it a learning experience for the new grad students? (True? Not sure.) It doesn't say much about the value of teaching, especially teaching of the basic mechanics of doing research.
The other day, “Simply Statistics” (in this post) argued that the squabbling against p is misguided. The problem isn’t really the p-value, to be solved by switching to effect sizes or other statistical procedures. The problem is a lack of analytical skill, and a lack of teaching analysis. Statistics is, after all, just a tool – something that should help us keep track of our data and sort out our results. It cannot provide a magical line that demarcates results from non-results. But this is how it has been used.
What he suggests is that, beyond statistics and probability, there needs to be teaching on how to analyze – the part of the theory and ideas that is not just in the statistical package. This I agree with. I’d like to add that it would be good if this were maintained throughout the schooling. Statistics and analysis of data is a skill that must be trained and repeated, not something to suffer through, get a good-enough grade in, and then forget. To do that, of course, you need to plan the teaching in a more overarching way. Did students learn statistics and methods last semester? Great – now that we have an area course, make sure that students keep using their fledgling skills in reading papers, planning research, and analyzing results in any possible project. Do that in the next course also. This is not something you acquire overnight and once and for all.
You start, as pedagogic texts will tell you, by learning some rule about “how it is”, because, really, that is probably the best way to start. A rule – we look for rules (if I believe Tomasello and Bloom, and I do), and tend to treat them as laws. Then, in the next stage, you confuse the students with exceptions. Slowly, you build the skill necessary to understand and analyze what acquired data may actually mean, which is more than pushing buttons in SPSS. This learning does not end. I’m still at it, and my PhD is over a decade old.
Teaching Takes Time. And Skill.
In the chapter on culture and cognition in Viren Swami’s Evolutionary Psychology, which I used for my course in Evolutionary Psychology, the authors spend a lot of time trying to understand the evolution of the learning mechanisms that have allowed us to accumulate and transfer our technical knowledge across time, distributed across people. There’s imitation, there is emulation, and there is direct teaching. This teaching has a cost to the teacher: the teacher must take the time to create learning situations that fit the stage of the learner. Having gone through the process is not enough. We forget how we learned, and what it was like not to know – the Curse of Knowledge. This teaching exists in other species also. I was charmed by the sequence in which meerkats teach their young to hunt scorpions: from dead, to disabled, to fully functional prey.
Also, you cannot transfer your current level of knowledge directly to the students, even at college level (until Gibson’s Sim Stims become a reality, which is never). It has to be adjusted to allow learning to occur. This adjustment is in itself a learning process. So loopy.
The time you teach, and prepare to teach those new to the field, is time you cannot spend working on other projects, keeping abreast of the literature, or writing grants. There is a possibility for synergy: the act of teaching helps consolidate and deepen knowledge. It is well known, of course, that the best way to really learn something is to teach it. (One of the reasons I keep designing new courses.)
But it doesn’t bring glory, it doesn’t bring grants, it doesn’t bring status, as things are set up right now.
But perhaps John Horgan is right that this is what has undermined the quality of the science that is done: the new scientists do not get sufficient training to be able to do really good work (which is such a waste, considering the passion and cleverness of this group), and the glory comes with publishing papers. (I just read through this paper by Charles Lambdin, which brings this up and more. Either Fred Hasselman or Hans Ijserman linked it on Twitter. Paywall, alas.)
Another lesson from evolutionary psychology (and also Richerson & Boyd’s work on cultural transmission) is that knowledge can become watered down and disappear if it is not carried forward. In their example, from Tasmania, it was because the population shrank, and key knowledge-bearers died without having transferred their knowledge sufficiently. The population in science is not under threat of shrinking, but the lowered status of those involved in knowledge transfer (adjuncts, anyone?) may have a similar effect. (Here I’m wildly speculating – I’d like to think up a way of empirically testing that conjecture.)
I teach too much, at least from the perspective of current incentives, and research and publish way too little, and I sometimes feel my effort is invisible. In fact, I have no clear idea whether I’m any good at it. I like thinking about what students of psychology need to know and learn in order to be astute consumers and possibly producers of knowledge, and then building that into my courses. There is an element of self-interest here, because I’d like to feel valued, and teaching constantly gets dissed.
But, if not enough time is set aside to properly train the new generations – and this can be done in an apprentice setting – we will produce what?