Tracking Srull & Wyer (1979): Bargh & Pietromonaco, 1982

I’m working my way through the papers that have cited Srull & Wyer (1979), which is rather illuminating. Right now I’m going through the papers that can be considered direct extensions of that research (and hence warrant a more careful look at the experiments). I’m planning on writing this up more comprehensively, but paper 3 on the list is Bargh and Pietromonaco’s 1982 paper, which uses the same Donald vignette and trait measures as the Srull & Wyer paper from 1979, but attempts to prime hostility outside awareness. I think this is a classic paper also. Here’s a proto-writeup.

Bargh & Pietromonaco, 1982.

The first extension of the priming task not done by Srull & Wyer themselves is Bargh and Pietromonaco’s 1982 paper, which I would think is also a classic.

They take the Donald vignette and the trait rating task directly from Srull & Wyer (1979). (It doesn’t seem like they take both vignettes, just the one that is published.)

But the priming setup here is different. They are really after the “outside awareness” priming idea, and I’m fairly impressed by the amount of care they take in demonstrating that individuals are not aware of the priming.

Making sure people are not aware that they are primed.

The priming task here is a vigilance task. Participants are brought into the lab and told their task is to indicate on which side of the screen they see a flash by pressing one of two buttons, not surprisingly labeled “left” and “right”. The flashes are really words, presented for 100 ms. Some of them are related to hostility, and some are neutral. All fifteen hostile words come from Srull & Wyer (1979). The fifteen neutral words are also carefully selected according to standard criteria. (Yes, words do get presented more than once.) The words are presented parafoveally to ensure that people don’t become aware of their meaning. A great deal of space is taken up by describing the details, which, for my cognitively trained self, is very, very nice. There are visual angles, and distance from the monitor, and all that. No chin rest, though, but as they point out, should participants move closer, the presented words will just land further from the fovea.

In experiment 1, there are actually three conditions that are solely geared towards probing whether participants are aware of the meaning of the flashed words. In the first two, the “test” conditions, participants are exposed to either a 20% or an 80% hostile word mix. After the vigilance task, participants are given a recognition memory test of 60 words: half hostile, half control. Of these, half appeared in the vigilance task (targets), and the other half did not (distractors). The task is to indicate which words they recognize from the vigilance task.

In the third, the “guess” condition, participants were presented with the 80% hostile mix. The guess participants did not do the vigilance task. Instead, they were told that the flashes were words, and were asked to guess what each word said.

Experiment 2 is essentially a repeat of the “guess” condition and the 80% mix “test” condition, but with even tighter measures. For the “test” task, participants’ recognition memory was tested after each trial in a three-word choice task, and in the “guess” task, participants are no longer allowed to pass: they must guess something.

It actually doesn’t much matter. If we take the two “guess” runs first, participants were really bad at guessing correctly, even with (what the authors say are) rather lax coding rules. In experiment 1, 16 out of 900 trials (9 participants, 100 trials each) are correctly guessed words. Four of these are hostile, the rest neutral. Participants are no better when forced to guess: out of 1000 trials, 10 hostile and 6 neutral words were correctly guessed.
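Those hit rates are tiny. For my own notes, the raw arithmetic, with the counts taken straight from the paper:

```python
# Guess-condition hit rates, counts as reported in the paper.
# Experiment 1: 9 participants x 100 trials (passing allowed).
exp1_hits, exp1_trials = 4 + 12, 9 * 100    # 4 hostile + 12 neutral correct
# Experiment 2: 10 participants x 100 trials (forced guessing).
exp2_hits, exp2_trials = 10 + 6, 10 * 100   # 10 hostile + 6 neutral correct

print(f"Exp 1 guess rate: {exp1_hits / exp1_trials:.1%}")  # 1.8%
print(f"Exp 2 guess rate: {exp2_hits / exp2_trials:.1%}")  # 1.6%
```

So under 2% correct either way, and forcing a response buys essentially nothing.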

For the test conditions, recognition in both versions was not different from chance.

These are a lot of repetitions, and I agree that most likely participants were not consciously aware of the content of the primes. I’m quite impressed by the care they took here.
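The paper only reports that recognition was at chance, so this is just a sketch of the kind of check involved: an exact two-sided binomial test, with made-up counts (32 “old” judgments out of 60 items against a 50% baseline) standing in for the real ones.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = probs[k]
    return sum(q for q in probs if q <= obs + 1e-12)

# Hypothetical recognition score: 32 of 60 items called "old"
# against a 50% chance baseline.
print(f"p = {binom_two_sided_p(32, 60):.2f}")  # comfortably above .05
```

Anything hovering this close to 30/60 is indistinguishable from guessing, which is the pattern they report.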

The Priming experiment

Then comes the main experiment, the reason for all of this care: the three “rate” conditions. Here participants start with the vigilance task with either a 0%, a 20%, or an 80% mix of hostile words. The idea, following the results from S&W (1979), is that the more you are exposed to the hostile words, the more likely you are to rate Donald higher on traits related to hostility.

But, before they get into the results of this one-shot rating task, they investigate responses in the vigilance task to detect whether there is some difference in processing depending on the proportion of hostile words in the mix. They call this the “amount of processing” measure. What they are after is evidence that the hostile words have activated some kind of processing, which then spills over into the rating task. What they claim (quoting directly):

“…direct support for the proposed mediating process of automatic category activation would be provided by poorer performance on the vigilance task by the 80% hostile word group relative to the 20% group, and by the 20% group relative to the 0% group.”


“the subject would have less of his limited processing capacity for the demands of the vigilance task”.

Then, combining this with lack of awareness, they think it would be compelling evidence for automatically activating these categories.

Perhaps. I kinda buy it.

To test this, they looked at percent correct, percent incorrect, non-responses, and reaction times. (The RTs included both correct and incorrect responses.) They did not look at these collapsed across the entire experiment, but divided them up into chunks of 20 responses. That way, they get a timeline of errors and reaction times. Interesting.

The reaction times yield nothing of interest. The non-responses are too few to analyze (which doesn’t surprise me; I have not bothered with non-responses in my own analyses), and the error rates end up, not surprisingly, being the complement of the proportion correct.

The last measure is the one they do spend some time analyzing. There seems to be an effect of proportion of hostile words. From my own calculation (transcribing their graph into numbers and taking means), it looks like the more hostile the mix, the fewer correct responses:

Mix   Prop. correct
0%    .972
20%   .968
80%   .958

These are % correct rates that I would be fine with in an RT task under speed-accuracy tradeoff rules.
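Reading my transcribed means back as a quick check on the size of the drop (treat these as approximate, since they come off the graph):

```python
# My graph-transcribed proportion-correct means per hostile-word mix.
prop_correct = {"0%": 0.972, "20%": 0.968, "80%": 0.958}

baseline = prop_correct["0%"]
for mix, p in prop_correct.items():
    print(f"{mix} hostile mix: {p:.3f} correct, drop vs 0%: {baseline - p:.3f}")
```

So the 80% mix costs about 1.4 percentage points of accuracy relative to the 0% baseline. Small, but in the predicted direction.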

Proportion correct also varies across blocks, which is not surprising. There are fewer correct responses in the first and last blocks than in the middle blocks. There is also a marginal interaction (p = .07). Looking at the pattern, performance in the vigilance task without hostile words looks flat, while for the mixes with hostile words there is a pronounced drop in performance in the fifth block.


Interesting. I would have liked to see something similar for the test task.

And now, finally, how did the primed participants do on the Donald rating? Remember, it is on a 0–10 Likert scale, the same as the one used by S&W (1979).

Mix   Descriptively related   Evaluatively related
80%   7.47                    5.94
20%   6.75                    5.77
0%    6.99                    4.95

Let’s compare that to the immediate ratings from Srull & Wyer (1979). Note that “60 80%” etc. refers to the length of the sentence-unscrambling task and the prime proportion: in this case, 60 sentences to unscramble, 80% of which were related to hostility.

Condition   Descriptively related   Evaluatively related
60 80%      9.7                     7.9
30 80%      8.5                     6.8
60 20%      6.7                     5.0
30 20%      5.7                     3.2

And, finally, the immediate ratings from Srull & Wyer (1980). (Note: the reason there are two 70% and two 30% rows here is that they were testing type of delay, either between priming and reading the vignette, or between the vignette and the judgment. These are only the immediate conditions, so they should be the same.)

Mix   Descriptively related   Evaluatively related
70%   6.9                     5.5
30%   5.2                     4.6
70%   7.1                     6.2
30%   5.5                     4.3

(In all cases, I have transformed graphs into numbers, so they could be mildly off).

The contrast between high and low proportions of priming is not as strong, but there are some effects. (The paper actually provides the output for the ANOVA comparing the descriptively related and the evaluatively related ratings.)
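Using my transcribed numbers, the high-minus-low contrast on the descriptive ratings can be put side by side across the three papers (approximate, since everything is read off graphs):

```python
# High-minus-low priming contrasts on the descriptively related ratings,
# from my graph-transcribed means above.
bp_1982 = 7.47 - 6.99                        # B&P: 80% vs 0% mix
sw_1979 = (9.7 + 8.5) / 2 - (6.7 + 5.7) / 2  # S&W 1979: 80% vs 20%, averaged over list length
sw_1980 = (6.9 + 7.1) / 2 - (5.2 + 5.5) / 2  # S&W 1980: 70% vs 30%, averaged over delay type

for label, d in [("B&P 1982", bp_1982), ("S&W 1979", sw_1979), ("S&W 1980", sw_1980)]:
    print(f"{label}: {d:+.2f} scale points")
```

About half a scale point here, versus roughly three and one-and-a-half points in the two Srull & Wyer papers. That matches my impression that the subliminal contrast is noticeably weaker.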

But, again, this is a one-shot measure. They have 25 individuals in each cell, compared to the 8 per cell in both Srull & Wyer papers. (Yes, those ratings I show above are based on the ratings of 8 individuals per cell. High likelihood of over-estimation.)

I’m not sure what to think about it. I do think there is a priming effect, but it should be replicated with more than 25 participants per cell. (Still, that is an improvement over 8 per cell.) The effect looks small. I’m more impressed with the work assuring that the words are presented outside awareness than with the evidence for the prime itself.
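To put those cell sizes in perspective, here is a back-of-envelope power sketch (normal approximation to the two-sample t test; the d = 0.5 “medium” effect is my assumption, not anything from the paper):

```python
from statistics import NormalDist

def approx_power(d, n_per_cell, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample comparison
    with n_per_cell observations per group and standardized effect d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_cell / 2) ** 0.5  # noncentrality of the test statistic
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

for n in (8, 25):
    print(f"n = {n} per cell, d = 0.5: power ~ {approx_power(0.5, n):.2f}")
```

Both numbers come out well short of the conventional .80, which squares with my worry about over-estimated effects in these cell sizes.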

Bargh, John A. & Pietromonaco, Paula (1982) Automatic information processing and social perception: The influence of trait information presented outside of conscious awareness on impression formation. Journal of Personality and Social Psychology, 43, 437-449.

N per cell – because I think this will be interesting to track.

Experiment     Condition   N    N/cell
Experiment 1   Rate        75   25
               Test        24   12
               Guess       9    9
Experiment 2   Test        10
               Guess       10
