Continued musings inspired by the Shanks lecture

Whatever my quibbles with David Shanks, I agree with the main point of his talk, which is a clear demonstration of the problems we face with reliability when more positive than null results are published, and replications end up in the file drawer (not to speak of fraudsters). There needs to be an adjustment back (back?) towards ensuring reliability rather than novelty. Perhaps also towards puzzling together bigger pictures (by focusing on effects?).

As I was writing yesterday, I was becoming all insecure about the effects I spoke about, which I have learned mainly through textbooks – not by looking at data, effect sizes, etc. Narrative claims about directions. Some graphs. Sure, there have also been a lot of papers I've skimmed. But most of the stuff I actually work on and analyze involves small effects and is not highly powered (well, except the 25 hours' worth of reaction-time collection across 12 participants, resulting in over 30,000 data points each). Emotional effects on cognitive tasks (the emotion induction works well; the effect on the cognitive task is wobbly). Individual differences in preference for ethnicity in judgment/perception – small effect. Even real? My hands-on research has basically given me coal in the stocking. Not much to hang on my CV.

But it worries me now that I don't know how robust those narrative summaries of research are. They sound robust. Perhaps they are like those hooks Smilla describes in Smilla's Sense of Snow: they look robust but can be picked apart with your nails.

I wrote elsewhere about my adventures in priming, which at least ended up in a publication. At the time, while we were struggling to get this to work, we (well, my advisor and co-workers) were going, "I can't believe Bargh is getting those effects! How does he do it?" There really was a buzz of incredulity (never of suspected fraud, I hasten to say – just incredulity).

I saw him speak at APS right around that time, and listening to him, I felt very sympathetic to the ideas. We don't really always know what we are about and what influences us, and lots of processing happens without us thinking or knowing about it, as if awareness were a kind of final, need-to-know station. I still don't think that is implausible, although perhaps it is not priming that does it.

There were these nagging doubts. But in the end, things like his elderly prime were cited so often, by so many different sources, that I figured there was enough evidence to overcome skepticism. The 1000 Elvises who can't be wrong (now, why that particular version of social proof sticks in my head…).

And now I wonder: is it simply social proof, or mere exposure? Should I doubt my field?

I recently read Lee Jussim's Social Perception and Social Reality, where he drives home his thesis that we are frequently fairly good at social perception, and that stereotypes are not always hopelessly biased. He pulls together a lot of data and is very careful in advancing his argument. He starts with the Pygmalion effect. I had heard before (anecdotally) that it was hard to replicate. He goes through the original research and the few follow-ups. There is an effect, but it is rather small. Mostly, teachers are fairly accurate in perceiving the abilities of their students.

He goes through a piece of research that I had been taught in class, and which I was under the impression had been replicated in many versions – the work where a man talks to a woman whom, based on a photograph, he believes to be either plain or attractive. Evidently the effect of the good-looking picture can be detected even in the woman's answers, in a positive way. It is considered a halo effect of beauty. Dion, Berscheid, and Walster. But, evidently, it has not been replicated. So nobody knows if it can be.

In some ways, the whole book is a meta-analysis, because he pulls in and compares a great deal of data and their effect sizes. He actually tries to put the effects into proportion with each other, rather than giving simple directions (which one can then exaggerate at will). There need to be more books of this type, pulling together a lot of research to answer rather simple questions. It is also very engagingly written, and I'd recommend it to anybody, but it is very expensive (I got it as an e-book loan). I ended up reading the book after reading his plea, on the Pigee blog, for reporting effect sizes and using them in your reasoning.

There really is a need for putting in the proportions, the effects. I come across confused undergraduates all the time, because the research seems contradictory. Positive emotions make you rely more on heuristics and become more stereotypical – except, of course, when they make you more engaged and go out of your way with the task. Stereotypes are biasing, but you can extract information about people from a thin slice. It is similar when arguing on the internet: people throw out directions, not effects, and in fact we don't know – often within the tiresome nature–nurture discussions. We need to know. We need to integrate these findings. Is there a bigger picture to be found? (Yes, I know, meta-analyses. I probably should do one. At least I wouldn't have to scare up students to collect data for me. A minimal sketch of the core calculation is below.)
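Since I keep telling myself "meta-analysis," here is a minimal sketch of what the core calculation amounts to: fixed-effect, inverse-variance pooling of effect sizes. The numbers are made up purely for illustration – they come from no study mentioned here.

```python
import math

# Hypothetical (Cohen's d, variance of d) pairs from a handful of studies.
# These numbers are invented for illustration only.
studies = [
    (0.45, 0.040),
    (0.10, 0.015),
    (0.30, 0.060),
    (0.05, 0.010),
]

# Fixed-effect inverse-variance pooling: each study is weighted by the
# inverse of its variance, so more precise studies count for more.
weights = [1.0 / var for _, var in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled d = {pooled_d:.2f}")
print(f"95% CI = [{pooled_d - 1.96 * pooled_se:.2f}, "
      f"{pooled_d + 1.96 * pooled_se:.2f}]")
```

The point of the exercise is exactly the one Jussim makes: a pooled estimate with a confidence interval tells you proportion, not just direction.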

The data I will present on Wednesday is not very strong. It has been lying around for years, in part because it is hard to interpret, in part because it doesn't seem novel and exciting, and I'll just be rejected again (something I have to get over). In some ways it was perhaps fortunate, because I discovered a calculation error, which changed the results. Had I already published, I would have had to retract, and I would have been mortified. There are still results – in some instances clearer, in some not as fun.

It isn't awfully sexy, but I'm beginning to think that the tack I need to take is instead to present it within the small body of research looking at this issue, trying to estimate an effect. I have far more participants and data points than the earlier research, so I should have power on my side (a quick back-of-the-envelope check is sketched below). But in some ways, publishing this and the other less sexy things is really what science ought to be about. Plodding, puzzling – that normal science Kuhn talks about (yes, I have started listening to Structure). Because we want a science we can trust. Of course, nobody's health rides on whether or not people's attitudes really bias their perception of affect in other faces, but it could be played up, considering the heated discussions about racism right now.
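To make "power on my side" concrete, here is a minimal power calculation using statsmodels. The effect size and sample sizes are placeholder guesses, not figures from my actual study.

```python
from statsmodels.stats.power import TTestIndPower

# Power to detect a small effect (Cohen's d = 0.2, a placeholder guess)
# in a two-sided, two-sample t-test at alpha = .05, for two sample sizes.
analysis = TTestIndPower()
for n_per_group in (20, 200):
    power = analysis.solve_power(effect_size=0.2, nobs1=n_per_group,
                                 alpha=0.05, ratio=1.0,
                                 alternative='two-sided')
    print(f"n = {n_per_group} per group: power = {power:.2f}")
```

With a small effect, twenty per group gives you almost nothing, while hundreds per group at least gives a fighting chance – which is the whole argument for leaning on the larger dataset.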

We need the checking and rechecking, if only to exclaim, "Perfect ice cubes, again!"

[Image: perfect ice cubes]

I'm thinking of that scene at the end of Amadeus – I had recalled it wrongly; it is both milder and more disturbing than I remembered. Salieri as the patron saint of mediocrity. We can't all be Wolfie Mozart. I think we need the Salieris, even in science, for the Mozarts to be there.
