Higgins, Bargh & Lombardi: more on the trace.

I’m skipping a couple of papers (just for the blog series, I will get back to them) to post about this one.

Higgins, Bargh & Lombardi (1984) is definitely one that extends the priming literature.

Three words first:

5 per cell.

FIVE PER CELL

Yes, granted, they actually collapse over those cells, which then ups it to 15 per cell, but still.

Let me get back to the purpose, and how they go about the experiment.

What they want to do is to distinguish between 3 models that can account for the priming effect on categorization.

They consider two types first: the mechanistic model and the excitation transmission model. The first is a very computational one (from Srull & Wyer), whereas the second is more electric. They subdivide the transmission model further into two: the battery model and the synapse model.

And I think I'll leave it there, because the models are perhaps not that important for what I'm trying to pursue. I like that they set up models and derive alternative predictions that can then be tested, of course. None of those "differs from null" things here. But I'm not entirely sure how well this ends up working in the end.

Instead, I think I will focus on what they did, and what the results are. I’m a bit Nassim Taleb inspired here. Theory/schmeory, look at the damned phenomenon.

In this work, they think the crucial dividing point between the models is whether something has been frequently primed or more recently primed.

The Srull & Wyer work suggests that frequency matters. So far, nobody has really looked at recency, although, squinting enough, one could think the Fazio et al. result, with the puzzle placed in the 7th position (which due to duplication becomes the 7th and 17th positions over 20 presentations), could possibly be considered a mild recency effect. But then again, I'm not sure that effect actually happened.

So, how do they go about this?

The general template is the Donald paradigm: a priming sentence-unscrambling task, followed by judgment of an ambiguously described individual.

But they didn't want just one ambiguous trait dimension. They wanted more, to see if the effect generalizes. So they created ambiguous stories that could read as either independent/aloof, adventurous/reckless, or persistent/stubborn. This factor is actually not analyzed separately, so I'm not sure what happened there. For simplicity's sake, I'm using the adventurous/reckless example to describe the priming manipulation.

The idea is to see whether the more frequently primed or the more recently primed construct will influence the subsequent judgment of the ambiguous character. And, of course, the frequently primed and the recently primed constructs need to have opposite valence. That is, in the adventurous/reckless example, positive synonyms for adventurous are presented more frequently (bold, courageous, brave), whereas a negative synonym for reckless is presented as the last prime (foolhardy, to pick one of their synonyms). And vice versa. It is nicely crossed.

The priming task was a sentence-unscrambling task, 4 words presented on the screen, same specification as in the original Srull & Wyer. Participants are to say their sentence out loud.

First they go through two 20-sentence practice blocks (they don't know it is practice). Then they go through the 20-sentence priming block. The 7th, 12th and 15th trials contain the synonyms for one valence, and the 20th a synonym for the opposite valence.
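Just to make the structure concrete, here is a little Python sketch of one priming block. Only the prime positions (7, 12, 15 and 20) come from the paper; the filler labels and the function name are mine, and the synonyms are the examples mentioned above.

```python
# Sketch of one 20-trial priming block. Only the prime positions (7, 12,
# 15, 20) come from the paper; the filler labels are illustrative.
frequent_primes = ["bold", "courageous", "brave"]  # adventurous synonyms
recent_prime = "foolhardy"                         # opposite-valence synonym

def build_priming_block(frequent, recent, n_trials=20,
                        frequent_positions=(7, 12, 15)):
    """Neutral fillers everywhere except the frequent primes at trials
    7/12/15 and the opposite-valence recent prime at trial 20."""
    block = ["filler_%d" % i for i in range(1, n_trials + 1)]
    for pos, word in zip(frequent_positions, frequent):
        block[pos - 1] = word       # positions are 1-indexed trials
    block[n_trials - 1] = recent    # the 20th, most recent, prime
    return block

block = build_priming_block(frequent_primes, recent_prime)
print(block[6], block[11], block[14], block[19])
# -> bold courageous brave foolhardy
```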

Once they are done with this, they are asked to count backwards by threes from some large number, for either 15 seconds or 120 seconds. This is an interference task, and the two delays are selected so that the models can be distinguished.

Finally they are presented with what they are supposed to judge, and the method here is actually – ambiguous.

They are presented with a series of ambiguous descriptions that they are supposed to label with one word (written). In the first series they get descriptions of animals, and in the third they get the description of an individual who behaves in a way that is either adventurous/reckless or one of the other combinations.

The description is very unclear. They talk about series, and I can't make out whether all participants get to label all of the ambiguous persons, or only the one particularly fitting the prime. I think the latter makes more sense (I'm not sure the priming would work across dimensions like this), which means that, again, this is a one-shot measure.

That is the full extent of the participants' involvement (there is a probe for suspicion, and they get rid of 3 participants, who are then replaced).

The labels are then rated by judges on a 6-point scale as to how synonymous they are with the primed traits. A 1 indicates that the word coincides with one of the negative synonyms ("same as negative alternative construct") and a 6 that it coincides with the positive.

So, get that? An additional layer of judgment, done by others.

So what do we have here, design-wise? We have two types of priming (positive frequent/negative recent vs. negative frequent/positive recent). We have 2 types of delay. And we have 3 types of traits. 12 cells. Five in each.

All of this is thrown into an ANOVA, but only the 2 x 2 is reported.

                      Delay
                      Brief   Long
+frequent/-recent      3.1     3.4
-frequent/+recent      4.8     2.9

The interaction here is significant, F(1, 48) = 4.84, p < .05.

They then look at how often participants classify the ambiguous person using either the recent construct or the frequent construct (they throw in an ambiguous category too, because I guess not everyone was that clear in their labeling).

             Delay
             Brief   Long
Recent         21     11
Ambiguous       1      3
Frequent        8     16

They test this with a chi-square: χ²(2, N = 60) = 6.79, p < .05.
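That 6.79 is easy to check from the counts in the table above. Here is a quick pure-Python recomputation (my own sketch; nothing from the paper beyond the six cell counts):

```python
# Recomputing the chi-square for the 3 x 2 label table, pure Python.
observed = [
    (21, 11),  # recent:    brief, long
    (1, 3),    # ambiguous: brief, long
    (8, 16),   # frequent:  brief, long
]

n = sum(sum(row) for row in observed)                           # 60
col_totals = [sum(row[j] for row in observed) for j in (0, 1)]  # 30, 30

chi2 = 0.0
for row in observed:
    row_total = sum(row)
    for j, obs in enumerate(row):
        expected = row_total * col_totals[j] / n  # independence model
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))  # -> 6.79, matching the reported value
# df = (3 - 1) * (2 - 1) = 2; the .05 critical value is 5.99, so p < .05
```

So the reported statistic checks out exactly against the frequency table.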

Looks like recency wins at the brief delay, while frequency wins at the long delay: a reversal over time.

This supposedly discriminates between the three models (only the synapse model predicts this pattern, they say; I won't evaluate that claim).

What I'm much more baffled by is the very low N. Even when collapsing over the different trait types, there are only 15 individuals in each condition. That leaves plenty of openings for uncertainty and noise to creep in, and I'm not sure how replicable this is.
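To put a rough number on that worry, here is a minimal Monte Carlo power sketch. The cell means come from the table above, but the common within-cell SD of about 1.94 is something I backed out of the reported F(1, 48) = 4.84, so treat that (and the approximate critical t) as my assumptions, not figures from the paper:

```python
import random
import statistics

random.seed(1)

# Rough power check for the 2 x 2 interaction, collapsing over trait type.
# Cell means are from the table above; the common within-cell SD (~1.94)
# is backed out of the reported F(1, 48) = 4.84 -- an assumption, not a
# figure given in the paper.
means = {("freq+", "brief"): 3.1, ("freq+", "long"): 3.4,
         ("freq-", "brief"): 4.8, ("freq-", "long"): 2.9}
sd, n = 1.94, 15  # 15 participants per cell after collapsing the 3 traits

def interaction_significant():
    """Simulate one study and test the interaction contrast."""
    cells = {k: [random.gauss(m, sd) for _ in range(n)]
             for k, m in means.items()}
    contrast = (statistics.mean(cells[("freq+", "brief")])
                - statistics.mean(cells[("freq+", "long")])
                - statistics.mean(cells[("freq-", "brief")])
                + statistics.mean(cells[("freq-", "long")]))
    pooled_var = statistics.mean(statistics.variance(v)
                                 for v in cells.values())
    se = (4 * pooled_var / n) ** 0.5
    return abs(contrast / se) > 2.0  # approx .05 critical t, df about 56

power = sum(interaction_significant() for _ in range(2000)) / 2000
print(round(power, 2))  # roughly .5-.6 on these assumptions
```

On these assumptions the study has only around coin-flip power to detect its own reported interaction, which does not bode well for replication.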

Methodologically I think it is interesting, and even the reasoning is interesting. But I think there are parts in here that are, well, open to noise, so the results may not be as robust as they appear.

About asehelene

... because if I'm in a room with a second person, I want to be reasonably sure I'm the crazier one.