Posing the Face – an overview of early Laird research

Let me start with this link to a lovely blog by Lynneguist on the meaning of frowns.


Because, evidently, it varies! I always took it to mean that you pull your eyebrows down in a somewhat angry expression – frowning on something you disapprove of. But clearly (and I had come across this before) there are those who consider the frown a sad face.

Now, the term “frown” is used in the scientific literature on emotional expressions – as you will see below. And I’ll tell you up front that in most cases it was used as a synonym for the angry face, but in one case it was used for the sad face. Let that be a warning about using folk-psychological terms, because they may indicate very different things.

Nevertheless, I won’t heed my own advice in the work below, but I will let you know if the frown means a sad face.



In all the brouhaha that the non-replication of Strack brought, someone linked to a relatively early paper by Laird, in which he responded to another non-replication of facial feedback from surreptitiously posed faces.

The paper is “The real role of facial response in the experience of emotion: A reply to Tourangeau and Ellsworth, and others,” published in JPSP in 1984. On the first page, he lists a simple nose-count of papers that have replicated the face-posing effects. As we should know, at least since Meehl’s asterisk paper, simple nose-counts are not good enough evidence that an effect exists, considering that we now understand there are as many uninteresting ways to get significant results as there are uninteresting ways of failing to get a result, and only a file drawer separates the two.

So, I figured as a warmup for a longer review I should find those nose-counted papers and look at what they say (there were not that many).

It starts with his 1974 paper “Self-attribution of emotion: The effects of expressive behavior on the quality of emotional experience.” The paper was, in part, based on his doctoral dissertation. Is this important? I don’t know. People talk so much about expertise as being a factor. It was early-career work, at least research-wise.

His theoretical background (which I’m less interested in – I want to look at what was measured, how it was measured, and the results thereof – theories develop, one would hope) is grounded in Bem’s self-perception theory and Schachter’s work on arousal and external cues. He thinks that changes in physical arousal and changes in patterns of bodily expression are both parts that will change self-attribution of emotion. If one knows that there may be an external reason for arousal, one can then discount this effect.

Experiment 1

Let’s start with experiment 1. Sixty-five undergraduate males participated. Not all (as we will see) were in the experimental group, and even among those, some were excluded.

The experiment is quite elaborate. There is a cover story: participants were told that it was about “the activity of facial muscles under various conditions.” This was backed up by the presence of scientific-looking apparatus, and by placing electrodes between the eyebrows and at the corners of the jaw. The electrodes seem to have had a function – but not as electrodes. That was a complete sham. Instead, they were used to direct the participants to pose their faces so they appeared like facial expressions of emotion, without letting on that that was being done. Here are the quotes from what they were told:

For the “angry” position:

[Touching lightly the electrodes between the eyebrows] Now I’d like you to contract these muscles. [If this was unsuccessful, then ] Contract them by drawing them together and down [and if this was unsuccessful, then ] Pull your brows down and together. [Whenever the experimenter was satisfied, he said ] Good, now hold it like that. [Now touching lightly the electrodes at the corners of the jaw] Now contract these [if this was unsuccessful, then ] Contract them by clenching your teeth.


For the “happy” expression:

[Touching lightly the electrodes near the corners of the mouth] Now I’d like you to contract these muscles under here [If this was unsuccessful, then ] Contract them by drawing the corners of your mouth back and up [When satisfied, the experimenter said] Good, now hold it like that.

In the discussion he actually notes the mean number of instruction steps needed to get the expressions right: 2.80 for the smiles and 2.63 for the angry expression (but they did not differ).

But, to back up slightly: while the electrodes were being placed on the face, the experimenter explained that there could be some subtle emotional changes, so after each trial the participant would rate their emotional experience so that this could be controlled for.

So, what we have here is: placing fake electrodes, explaining that emotional experience could be a confound (to justify measuring their emotional state), stating that the experiment involved tensing and relaxing facial muscles, and instruction in how to do that tensing.

We are ready for the experiment.

Once the face was positioned, participants were shown a picture for 15 seconds, before filling in the mood-adjective questionnaire. There were 4 pictures in total – two of Ku Klux Klan members and two of playing children. The participants saw all 4 pictures: one KKK and one kid picture while “smiling,” and the other KKK and kid picture while looking angry.

The mood checklist

The mood adjective list was adapted from the Nowlis-Green Mood Adjective Check List (Nowlis, 1968). It contained 40 mood words, related to factors indicating Aggression, Anxiety, Remorse, Elation, Social Affection, and Surgency. (Interesting to look at the names of the factors, actually.) The interesting sets of adjectives would be those related to Aggression, Elation, and Surgency (which is reasonable). Each adjective was rated on a 5-point scale ranging from “did not feel” to “strongly felt.” Then, to get an index, the ratings for all adjectives indicating Aggression were averaged. Fairly standard procedure (it is what I used with the BMIS when we measured emotion).
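Since this indexing step comes up in nearly every paper below, here is a minimal sketch of it (the adjectives and ratings are hypothetical stand-ins, not the actual Nowlis-Green items):

```python
def mood_index(ratings):
    # ratings: adjective -> score on the 1-5 "did not feel" .. "strongly felt" scale
    # the factor index is simply the average over that factor's adjectives
    return sum(ratings.values()) / len(ratings)

# hypothetical Aggression adjectives with made-up ratings
print(mood_index({"angry": 2, "annoyed": 3, "defiant": 1}))  # -> 2.0
```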


He also performed an interesting control for experimenter bias. As much as possible, participants were run in pairs. One participant got his (they were all dudes) face manipulated, whereas the other one didn’t. The two subjects were separated by a screen so they could not see each other, but the experimenter could see both of them. The idea here is that if the researcher inadvertently indicated what was intended, both participants would show this particular bias; but, as only one received the manipulation, the bias could possibly be detected by looking at how similar the mood scores were between the two participants.

This pairing didn’t work perfectly. There were only 20 instances where both participants showed up, and 25 where the subject was alone. In total, then, there were 45 manipulated participants and 20 controls.

Seven of the manipulated participants seemed to be aware of a connection between the facial manipulation and their mood, and were then excluded from the analysis.

To recap: the point of interest is whether facial feedback results in an emotion signal even if you don’t realize that your face is posed into an expression, and that the supposed control questionnaire is the actual dependent measure.

The emotional content of the pictures does not seem to have been of main interest here, but it is analyzed, and, not surprisingly, people rated themselves as more aggressive after viewing the KKK pictures and more elated after viewing the kid pictures.

This is how I translated the table of the results into graphs for the manipulated participants.


Laird reports the F-values, so I actually took those and the degrees of freedom and stuck them into Schimmack’s nifty R-index sheet so I could get some p-values.
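For completeness: with df1 = 1, an F-value is just a squared t, so the p-values can be recovered with nothing more exotic than the t distribution. Here is a rough stand-in for what the sheet does (my own sketch, numerically integrating the t density rather than calling a stats library):

```python
import math

def t_pdf(x, v):
    # Student-t density with v degrees of freedom
    c = math.gamma((v + 1) / 2) / (math.sqrt(v * math.pi) * math.gamma(v / 2))
    return c * (1 + x * x / v) ** (-(v + 1) / 2)

def p_from_F(F, df2, steps=20000, upper=60.0):
    # For df1 = 1, F = t^2, so the two-tailed p is twice the
    # t upper-tail area beyond sqrt(F)
    t = math.sqrt(F)
    h = (upper - t) / steps
    # trapezoidal integration of the density from t out to a (large) cutoff
    area = 0.5 * (t_pdf(t, df2) + t_pdf(upper, df2))
    for i in range(1, steps):
        area += t_pdf(t + i * h, df2)
    return 2 * area * h

print(round(p_from_F(8.18, 37), 3))  # expression main effect -> 0.007
```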

Study 1, experimental group (Aggression):

                                    N    F      df1  df2  p
Expression main effect              38   8.18   1    37   .007
Expression × picture interaction    38   4.18   1    37   .048
Expression main effect              38   7.21   1    37   .011
Expression × picture interaction    38   4.50   1    37   .041
Expression main effect              38   5.91   1    37   .020

Study 1, control group:

Aggression × picture interaction    20   3.26   1    19   .087
Expression main effect              20   1.66   1    19   .213
Expression × picture interaction    20   1.54   1    19   .230

Note that in the control condition (where the participants didn’t screw up their faces), no effects were expected. Laird notes down some of the F-values (those that are not less than 1), so I stuck them in here just for completeness.

He also does a manipulation check between the experimental and observer participants (there end up being only 16 pairs), and finds that they do differ as expected, though quite weakly – but I won’t discuss it here. In fact, I would recommend people read his own discussion, because it is quite detailed and thoughtful.

Some commentary

He uses a completely within-subjects design, with an interesting control. He measures their emotions quite openly, but most participants think this is not what is of interest. They have to hold their facial expressions for quite a long time. He actually asked if it was distracting or uncomfortable, and some said it was: three for both expressions, six for the anger pose, and four for the smile. Most of them didn’t.

Experiment 2

In experiment 2 he addresses what happens when the situational cue is ambiguous (the pictures in experiment 1 weren’t ambiguous). To do so, he uses cartoons. He argues that the participants will attribute the source of their emotion to the cartoons. The selected cartoons had received moderate humor ratings.

The setup was similar to that in experiment 1, with a few differences. It was (again) a within-subjects design, but it appears there were only two trials – one in the happy condition, one in the angry condition. The main measure was the rating of the cartoon (though it was, as in the earlier experiment, passed off as a control measure rather than the main measure), this time on a 9-point scale ranging from “not at all funny” to “funniest ever.” The mood checklist had been shortened to just 6 items: 3 from Aggression and 3 from Elation. The same post-experimental questionnaire was used. No observer subjects this time.

There were 32 undergraduates this time (no mention of gender). Six were excluded because they guessed the hypothesis.

And here are the results, copied from the paper, with Cohen’s d added (using Daniel Lakens’ nifty effect-size spreadsheet).
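The d column is consistent with the standard within-subjects conversion d = t/√n (I assume that is what the Lakens spreadsheet is doing for these paired contrasts):

```python
import math

def cohens_d_within(t, n):
    # effect size for a paired contrast from its t statistic: d = t / sqrt(n)
    return t / math.sqrt(n)

print(round(cohens_d_within(2.80, 26), 2))  # humor rating -> 0.55
print(round(cohens_d_within(2.46, 26), 2))  # aggression   -> 0.48
```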

              Angry  Happy  t     p     d
Humor rating  4.42   5.50   2.80  .01   0.55
Elation       4.11   4.42   <1.0
Aggression    2.81   1.88   2.46  .021  0.48
N = 26

Laird & Crosby (1974): Individual differences in the self-attribution of emotion

Laird & Crosby’s work was a chapter in the book “Thinking and Feeling: The Cognitive Alteration of Feeling States.”

I’ll focus on the results of the face-manipulation only.

They started out with 32 undergraduates, but removed 6 because they were aware of the hypothesis.

The cover story and face manipulation were the same as in Laird 1974. The stimuli were cartoons, and for the emotion measure they used 3 adjectives for Elation (carefree, elated, pleased) and three for Aggression (angry, annoyed, defiant), rated on 5-point scales. The scores for each factor were summed. Then the Aggression score was subtracted from the Elation score, resulting in a single score for emotional experience.
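So each trial boils six ratings down to one bipolar number; a minimal sketch (the ratings here are made up):

```python
def emotion_score(elation_ratings, aggression_ratings):
    # sum each factor, then subtract Aggression from Elation:
    # positive -> net pleasant, negative -> net unpleasant
    return sum(elation_ratings) - sum(aggression_ratings)

# hypothetical 1-5 ratings for (carefree, elated, pleased) and (angry, annoyed, defiant)
print(emotion_score([3, 4, 2], [1, 2, 1]))  # -> 5
```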

The participants went through the procedure in two separate sessions, with a 2–3 day delay. In each session they were asked to do both poses, each while presented with a cartoon.

        Smile  Frown  t     p     p (one-tailed)  d
Day 1   2.23   2.38   ns    ns
Day 3   3.19   1.04   1.78  .087  .044            0.35
N = 26


The first day, the manipulation did not make any difference in the ratings of emotional state. And, in some ways, it didn’t the second day either, as the test is one-tailed. They proceed to divide people into those who rated their emotions as negative both days, positive both days, and those who switched, in order to investigate individual differences. It is interesting, but less interesting for a review of whether there is good evidence that posing the face in emotional expressions gives rise to emotional feelings. But, it turns out that some of the subsequent papers use the results from this part to divide participants into internal-cue sensitive and external-cue sensitive.

Paper 3.

The Duncan & Laird (1977) paper is much more elaborate, but I think one can simply look at the face-manipulation part. The title of the paper is “Cross-modality consistencies in individual differences in self-attribution,” and it involves a very complex setup where people are first tested on their attitudes, then about a month later asked to make a counter-attitudinal video, which has some snags in it, so – oh, by the way – could you help with this other work on perception while tensing and relaxing facial muscles?

As in the Laird & Crosby paper above, I’ll only focus on the results of posing the face.

They started out with 40 undergraduates (men and women, but, as in paper 2, they found no gender differences). In the end, they removed 14 subjects, because they were aware of aspects of the two different paradigms.

The setup for posing the face is the same as above. But, rather than pictures or cartoons, participants are told the experimenters are interested in the perspective reversals of the Necker cube. They also added a neutral condition, in order to make a clearer baseline comparison.

All participants did two smile and two frown trials (and, presumably, also neutral trials), properly counterbalanced. After each trial, they filled in a mood adjective list, as always. This time it consisted of 26 descriptive adjectives from that same Nowlis-Green Mood Adjective Check List, again rated on a 5-point scale (0–4). They used 6 items from Aggression, 5 from Surgency, and 4 from Elation, plus some fillers. They summed the scores within each factor and averaged them across the two trials of each type.

            Frown  Neutral  Smile  t (frown vs. neutral)  p     t (smile vs. neutral)  p
Elation     1.9    3.0      4.4    3.49                   .002  2.83                   .008
Surgency    3.7    4.9      6.2    2.10                   .022  2.14                   .041
Aggression  6.3    2.4      2.3    2.43                   .011  0.21                   .835


Paper 4

The next study, Laird, Wagener, Halal, and Szegda, “Remembering what you feel: The effects of emotion on memory” (JPSP, 1982), also uses the same face-posing procedure, but using it in a p-curve is – a stretch. I will, but using emotion-congruent recall as a measure of facial feedback is several processing steps away.

As the title says, they were interested in emotion-congruent recall. Half of the participants started out reading a couple of Woody Allen anecdotes (positive stories), the other half a couple of editorials (anger-inducing stories).

Then, as in the earlier studies, participants had their faces posed in frowns and smiles, and there is a casual mention that felt emotion may bias the results, so could they please fill in this questionnaire after each pose. They even have a pre-measure of emotion before the facial poses start.

The perceptual stimuli this time are four abstract paintings that have been given titles with an emotional connotation: for happy, “Spring” and “Dancing”; for angry, “Rip-off” and “Betrayal.” And the little twist here is that participants were shown the angry-titled pictures while their faces were screwed up in smiles, and the happy-titled pictures while they were frowning.

They do a rather elaborate summing of the mood scores (which I don’t want to go into). What they want to do is sort the participants into two separate groups – the self-produced-cue group (those for whom the facial expression seems to dominate the mood measure) and the situational-cue group (those who take their mood cue from the pictures rather than from their faces). This results in 19 people who seem to take their cues from facial feedback, and 32 in the situational-cue group.

There is no report on the results from this section. The outcome is simply used as a separator for individual differences.

Instead, they proceed to the next stage, where people again get their faces posed, and then recall as much as they can of each story (written response). In the self-produced group, nine recalled the Woody Allen anecdotes, once while smiling and once while frowning. Ten recalled the editorials, once while smiling and once while frowning. The cell numbers for the situational-cue group were 17 and 15, respectively.

Their dependent variables were the number of correctly recalled facts and the number of errors (assessed by two independent judges). Everybody recalled more from the editorials, but that was, in part, because they had more statements to recall. Thus, that is not terribly interesting.

What they were more interested in was whether there was evidence for more emotion-congruent recall for the self-producing participants when comparing them to the situational-cue group. (This was a planned comparison).

So they do a planned comparison on number of facts recalled, expression × passage × individual differences, and it reaches significance: F(1,47) = 4.31, p = .043 (per p-checker). They do the same for number of errors, and the result here is F(1,47) = 18.76, p < .001 (is that one weirdly high?).

For the situational cue group, there is no interaction between the posed expression and either correct recall or errors.


Self-produced-cue group
                 Woody Allen (n = 9)   Editorials (n = 10)   Passage × expression
                 smile    frown        smile    frown        F       p
Correct recall   3.3      2.2          6.7      8.3          9.98    .0065
Errors           0.6      1.2          2.1      1.6          4.13    .0602
df per paper: (1,15). Some must have dropped out, as this should have a df of (1,17).


Here are the results, but, as I have noted, there are some issues. The listed df in the paper is (1,15), but they do not note any drop-outs; the df really should be (1,17). There are also discrepancies between the reporting of the first F-value in the text and in the table. It is small (9.96 vs. 9.98). In addition, there is a second discrepancy in the reported p-value for the errors. In the text, the p-value is reported as < .05, but the p-value I get from Schimmack’s R-index is higher than that (I report the one from df (1,15)).

They claim the results are as expected, but somewhat ambiguous (in that we don’t know if the supposed emotion-congruent recall is due to actual congruence, or to a general positivity/negativity effect), which they then attempt to address in experiment 2.

Experiment 2

They note three major changes, and I quote:

“a) to use different expressions during the memory and mood parts of the procedure, b) to employ only material and expressions of negative emotions, which were fear, anger, and sadness, and c) to manipulate expressions during the initial encounter with the material as well as during recall.”

This time, there were 22 undergraduates – two were removed for awareness.

In the first part, they went through the same expression manipulation as in experiment 1 (I think to separate out those who produce an emotional feeling from those who don’t).

Then they were to judge “72 different slides on a variety of emotional scales.” I actually don’t know what was on those slides, because it is not described. What was more interesting (to the researchers) were the two sentences read prior to each slide – one read by a woman, the other by a man. The sentences were read with emotional intonation, and also had emotional content, such as “did you hear that noise?” (for fear). During this part, the participants’ faces were also manipulated, but this time into fearful, angry, and sad positions. All participants had their faces placed in all three positions.

To be more precise: the sentences/slides were presented in 6 blocks. During each block, the participant held their face in one of the three positions (so they held each position for two blocks). Each block contained 24 sentences, 8 of each emotion – so a total of 144 sentences (which really should be considered the trials).

The blocks were about 3.5 minutes long – which is a long time to hold a static facial expression. In fact, they state that an experimenter watched the participants so they could be reminded to hold their face in position.

What they were really interested in was the recall of the sentences, while participants had their faces (again) positioned in the same three expressions (also within subjects). The subjects thought this was just a manipulation check. The researchers didn’t want them to spend any effort trying to memorize the sentences. The recall took place after each block.

The recall was scored for correctness (and they were fairly generous with that).


          Self-produced            Situational cue
          fear   anger  sad       fear   anger  sad
Fearful   4.9    3.6    3.2       3.9    4.0    3.7
Angry     2.9    5.7    2.6       3.3    5.6    4.9
Sad       2.5    3.0    5.5       2.9    4.1    5.2
n = 10 in each group.
Overall interaction: F(4,72) = 3.68


If participants were recalling everything more or less correctly, they would have gotten 16 in each cell.

I’ll post the means here, because I can’t really make heads or tails of the dfs for the various sub-analyses. The one I post above seems correct, df-wise, anyway. When one throws in all of the data and analyzes it in a mixed ANOVA, with the between factor being self-produced vs. situationally cued participants (2) and the two within-subjects factors being posed face (3) and emotional tone of sentence (3), those dfs make sense.

They do a planned comparison (hey, maybe that is the df problem) to check the difference in the sentence/face interaction between the two groups, and come up with a non-significant result, F(1,72) = 3.38, p = .066 – but that was in the days when this was still considered not significant.

Then they report the self-produced and the situational-cue groups separately, and I think they mess up the dfs here again. There is a significant interaction between story and expression for the self-produced group, F(4,72) = 2.78 – but I think it should be F(4,36), as they are only testing half the subjects here.

For the situational-cue group, the same effect was not significant, F(4,72) = 1.03, ns (again, I think it should be F(4,36)).
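My df bookkeeping here follows the standard repeated-measures formulas (effect df = (a−1)(b−1); error df = effect df × (N − g), for g between-subjects groups), which can be sketched as:

```python
def within_interaction_dfs(a, b, n_subjects, n_groups):
    # dfs for an a x b within-subjects interaction and its error term
    df_effect = (a - 1) * (b - 1)
    df_error = df_effect * (n_subjects - n_groups)
    return df_effect, df_error

print(within_interaction_dfs(3, 3, 20, 2))  # all 20 subjects, 2 groups -> (4, 72)
print(within_interaction_dfs(3, 3, 10, 1))  # one group of 10 subjects  -> (4, 36)
```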

So, what they are claiming is that there is emotion-congruent recall, which is emotion specific, but only for those that are sensitive to facial feedback.

I have no idea how I should go about coding this into my R-index and p-curve data sheets. At least not now.

And I really don’t know whether this should be interpreted as a replicated instance of facial feedback. They are actually assuming that facial feedback occurs (at least in some of the participants), which then spills over into emotion-congruent recall. For both types of participants, there appears to be more correct recall for the emotion-congruent sentences. It is just not significant for the situational-cue group.

But is the recall a reasonable measure of whether facial feedback works (as in, giving rise to an emotion that corresponds to the expression)? In this work, it is simply assumed that facial feedback does exactly that. The measures where they ask how participants feel are simply used for sorting people into two types, and in that measure they are receiving two conflicting types of information – from their face, and from the label.

Kellerman & Laird

In the last Laird paper, Kellerman & Laird, “The effect of appearance on self-perception,” there is no data to scrape! They did the facial positioning, had people rate their emotions, went through an elaborate scoring, and then used it simply to sort people into self-produced and situational-cue responders. Evidently, they could do that, but it provides nothing that I can use to keep assessing whether we have decent evidence for some sort of facial feedback.


I now move on to the papers that he cites but didn’t also co-author.

Rhodewalt & Comer

The first one is Rhodewalt & Comer (1979), “Induced-compliance attitude change: Once more with feeling.”

A total of 60 participants, divided across 4 conditions. Well, they started out with 69, but there were drop-outs as usual.

It is all very elaborate, in order to get to their research question, but I’ll gloss over the parts that are not directly about measures of facial feedback.

They start with a pretest session, done in groups, which is mainly about information, but where – oh, by the way – another researcher needs help with filling out an opinion survey on 18 issues.

A week later they return (individually) for the experimental sessions. There are 4 groups: smile, frown, neutral, and control. In the first 3, participants are asked to write counter-attitudinal essays while holding the expression. In the fourth, they simply copy down some written material.

The posing instructions are taken from Laird 1974. As before, possible changes in mood are explained away as an artifact to control for (hence the measure). Mood was measured with the Nowlis-Green Mood Adjective Check List, using 18 adjectives measuring Elation, Surgency, Social Affection, Anxiety, Remorse, and Aggression (3 adjectives each).

Each participant had 7 minutes to write the counter-attitudinal essay (the topic, of course, coming from the pre-session), while keeping their face frozen in whatever expression was assigned to them. There was an observer present to make sure they kept their face in the pose. (Boy, that is a long time!)


For the mood measure, they created a single composite score for the positive factors, and a single composite score for the negative factors. Plus, they calculated a difference score.

          Positive mood  Negative mood  Difference
Smile     3.80           1.91            1.89
Neutral   2.43           2.54           -0.11
Frown     1.71           3.58           -1.87
Control   1.84           1.71            0.13
F(3,56)   3.21           3.32            4.93
p         .030           .026            .004
n = 15 in each group.


I’m not reporting the attitude change data. It is just too many steps away to say anything interesting about facial feedback.

Zebrowitz McArthur et al

Next up is Zebrowitz McArthur, Solomon & Jaffe (1980) Weight differences in Emotional Responsiveness to Proprioceptive and Pictorial Stimuli

The topic here is differences in emotional responsiveness between overweight and normal-weight participants. For this they recruit 24 overweight participants and 36 normal-weight participants.

They use the Laird paradigm – with some changes. The smile instruction was the same, but here the “frown” means the sad face: they place the mouth area into a sadness expression. Here is the instruction:

Please contract your lips by drawing them together and down. Now push out your lower lip a little… Good, now hold it like that.

They also had a neutral instruction:

Please relax your face, keeping your mouth closed…. Good, now hold it like that

Now, on to the setup. Each participant went through 9 trials. In each trial, they were shown a picture for 15 seconds. The pictures depicted humans in sad postures (3), animals in “happy” postures (3), and microorganisms (also 3, which I presume are considered neutral).

The first three pictures were shown while the face was in one configuration, the next three in the next, and the final three in the final expression. And, of course, there was a happy, a sad, and a neutral picture for each expression. Nice counterbalancing and all.

After each projection, the participant rated how they felt on a subset of the MACL. The target emotions were two adjectives for elation and two for sadness, plus 4 fillers. Instead of a 5-point scale, they used a 9-point scale.

So, it is a within-subject manipulation, all possible combinations.

An additional control: they tested participants in pairs, where each individual in the pair posed a different expression – to control for possible experimenter influence. They also tried, as much as they could, to counterbalance the seats used by males and females, as well as by overweight and normal-weight participants. The participants could not see each other. (Of course, because they all viewed the stimuli together, the order of the pictures was the same for everyone.)


I’ll do the matrix of mean scores first (the mean score is a composite of the elation and sadness ratings: more negative means sadder, more positive means happier).


                        Posed facial expression
                        Smile    Neutral   Sad      All
Positive  normal        6.611    5.917     4.806    5.778
          overweight    6.583    6.750     6.042    6.458
Neutral   normal        0.361    0.333    -0.083    0.204
          overweight    0.500   -0.500     0.333    0.111
Negative  normal       -4.722   -4.528    -5.917   -5.056
          overweight   -4.750   -3.625    -3.458   -3.944
All       normal        0.750    0.574    -0.398
          overweight    0.778    0.875     0.972
n overweight = 24 per cell (72 in row and column totals).
n normal weight = 36 per cell (108 in row and column totals).

The interesting rows (for us) are the two bottom ones, because they indicate the net effect of posing the faces. Clearly, for the overweight participants there is none (which is what they were checking), but there is some effect for the normal-weight participants.

And, luckily, they have a planned simple effects analysis for that:

Expression effect for normal weight: F(2,72) = 4.77, p = .0113

For overweight, F was less than 1, so nothing is reported.

They go further into the results and see that this seems to be a sadness result – most of the effect being driven by that expression. In addition, they look at the scores for the other emotions, but find no effect. So the suggestion is that the feedback effect is expression-specific. There were picture effects too, but those are not so important here.

Kleinke & Walton

The Kleinke & Walton (1982) paper is different from those above. The title is “Influence of reinforced smiling on affective responses in an interview.” They claim that the results support a facial feedback theory, and possibly they do, but it is rather messy. It doesn’t involve posing faces into different expressions. Instead, subjects in the experimental condition are reinforced every time they smile (they get a nice green light when they smile, and their task is to try to get as much green light as possible). They weren’t told that it was smiling that was being reinforced. In the two yoked groups, participants were told to smile whenever a light came on (on the same schedule as the reinforcement). I won’t go deeper into the experiment, because I think there are many other possible explanations for the results besides some kind of facial feedback (though the paper would be important in a larger meta-analysis).


The final article that Laird cites is Barbara Edelman’s 1984 paper, “A multiple-factor study of body weight control.” And, again, there is nothing here that I can use. The facial manipulation (which, they claim, closely follows Laird & Crosby) is simply used to separate participants into self-perceivers and situationally sensitive.

Some brief comments

I think this is interesting to note: early on, Laird and Crosby use the facial manipulation to separate participants who are sensitive to facial feedback from those who are not, and in some of the subsequent articles that is simply what it is used for, with no possibility for anyone to evaluate how strong the facial feedback effect was.

Of course, this notion of individual differences in facial feedback sensitivity was not part of the original Strack study; it was not part of the work I did with Niedenthal, where we interpreted the effects of pen-holding as mimicry disruption; it was not considered for the Strack replication; and, as far as I have gotten in the review of Strack’s list of conceptual replications, it is not considered there either.



I did a p-curve using one focal test from each experiment (I also did it with all of the tests, but that throws in repeats of the same manipulation – just slightly different ways of doing it). My shiny-app scripts are listed below.

It does suggest some evidential value, but we only have 8 data-points, and some of them are rather oblique (mood-congruent recall).

This is my first pass at this. I’ll do some better coding/clean-up. The posing is very similar across studies. The measures aren’t always fully reported. There are several repeated-measures designs, where there really are repeats. Frequently participants are asked to hold the pose for a long time: 15 seconds to 7 minutes! (Yikes.)

In all of this, the only measure of facial feedback is self-reported mood after each trial. There are theoretical accounts, but they are rather abstract: self-perception, perception of arousal, situational cues.

This is just a small sliver of the literature, but I’m interested in whether there is work connecting this more closely to biological function (physiological measures, brain imaging, etc.), and that may very well exist. Or not. It is also hard to know whether participants really, truly didn’t realize it was about emotion. Emotion is measured, after all, even if participants are told it is measured to control for unwanted affect. There are just a lot of questions that one needs to see whether the literature has answered.


Duncan, J. W., & Laird, J. D. (1977). Cross-modality consistencies in individual differences in self-attribution. Journal of Personality, 45, 191-206.

Edelman, B. (1984). A multiple-factor study of body weight control. Journal of General Psychology, 40, 363-369.

Kellerman, J., & Laird, J. D. (1982). The effect of appearance on self-perception. Journal of Personality, 50, 296-315.

Kleinke, C. L., & Walton, J. H. (1982). Influence of reinforced smiling on affective responses in an interview. Journal of Personality and Social Psychology, 42, 557-565.

Laird, J. D. (1974). Self-attribution of emotion: The effects of expressive behavior on the quality of emotional experience. Journal of Personality and Social Psychology, 475-486.

Laird, J. D. (1984). The real role of facial response in the experience of emotion: A reply to Tourangeau and Ellsworth, and others. Journal of Personality and Social Psychology, 47, 909-917.

Laird, J. D., & Crosby, M. (1974). Individual differences in the self-attribution of emotion. In H. London & R. Nisbett (Eds.), Thinking and feeling: The cognitive alteration of feeling states. Chicago: Aldine.

Laird, J. D., Wagener, J. J., Halal, M., & Szegda, M. (1982). Remembering what you feel: The effects of emotion on memory. Journal of Personality and Social Psychology, 42, 646-657.

McArthur, L. A., Solomon, M. R., & Jaffee, R. H. (1980). Weight and sex differences in emotional responsiveness to proprioceptive and pictorial stimuli. Journal of Personality and Social Psychology, 39, 308-319.

Rhodewalt, F., & Comer, R. (1979). Induced-compliance attitude change: Once more with feeling. Journal of Experimental Social Psychology, 15, 35-47.







My shiny app data – two versions. In the first, I throw everything in, so to speak, although several entries are separate measures of the same thing. In the second, I comment out the duplicates; that is, I select only one measure per experiment (and, when possible, the one that seems strongest).


# Easy mode (‘#’ starts a comment)

#Paper 1 Laird 1974

#Experiment 1

F(1,37) = 8.18 #Aggression

F(1, 37) = 7.21 # Elation

F(1, 37) = 5.91 #Surgency

#Experiment 2

t(25) = 2.8 #Humor rating

t(25) = 2.46 #Aggression

#Laird Crosby 1974

t(25) = 1.78 #Day 3 effect

#Duncan Laird

t(39) = 3.49 #Elation, frown vs neutral

t(39) = 2.83 #Elation, Neutral vs. Smile

t(39) = 2.1 #Surgency, frown vs neutral

t(39) = 2.14 #Surgency, Neutral vs. Smile

t(39) = 2.43 #Aggression, frown vs neutral

t(39) = 0.21 #Aggression, Neutral vs. Smile

#Laird Wagener Halal Szegda

#Experiment 1

F(1,15) = 9.98 #correct recall, self perceivers

F(1,15) = 4.13 # Errors, self perceivers

#Experiment 2

F(4,72) = 2.78 #Interaction sentence, expression self perceivers

#Rhodewalt and Comer

F(3,56) = 3.21 #Positive index

F(3,56) = 3.32 #Negative Index

F(3,56) = 4.93 #Difference score

#Zebrowitz et al

F(2,72) = 4.77 #Facial expression effect of normal weights



# Easy mode (‘#’ starts a comment)

#Paper 1 Laird 1974

#Experiment 1

F(1,37) = 8.18 #Aggression

#F(1, 37) = 7.21 # Elation

#F(1, 37) = 5.91 #Surgency

#Experiment 2

t(25) = 2.8 #Humor rating

#t(25) = 2.46 #Aggression

#Laird Crosby 1974

t(25) = 1.78 #Day 3 effect

#Duncan Laird

t(39) = 3.49 #Elation, frown vs neutral

t(39) = 2.83 #Elation, Neutral vs. Smile

#t(39) = 2.1 #Surgency, frown vs neutral

#t(39) = 2.14 #Surgency, Neutral vs. Smile

#t(39) = 2.43 #Aggression, frown vs neutral

#t(39) = 0.21 #Aggression, Neutral vs. Smile

#Laird Wagener Halal Szegda

#Experiment 1

F(1,15) = 9.98 #correct recall, self perceivers

#F(1,15) = 4.13 # Errors, self perceivers

#Experiment 2

F(4,72) = 2.78 #Interaction sentence, expression self perceivers

#Rhodewalt and Comer

#F(3,56) = 3.21 #Positive index

#F(3,56) = 3.32 #Negative Index

F(3,56) = 4.93 #Difference score

#Zebrowitz et al

F(2,72) = 4.77 #Facial expression effect of normal weights
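The statistics above are pasted straight into the p-curve shiny app, but the first step of what the app does – converting each focal test to a p-value – is easy to reproduce oneself. Here is a minimal sketch (my own hypothetical script, not the app’s code, and assuming scipy is installed) that runs the conversion on the de-duplicated list above:

```python
# Convert the focal F and t tests to p-values, as the p-curve app does
# internally. This is an illustrative sketch, not the app's actual code.
from scipy import stats

# (kind, value, df1, df2, label) -- one focal test per experiment,
# matching the second (de-duplicated) input block above
focal_tests = [
    ("F", 8.18, 1, 37, "Laird 1974, Exp 1, Aggression"),
    ("t", 2.80, 25, None, "Laird 1974, Exp 2, Humor rating"),
    ("t", 1.78, 25, None, "Laird & Crosby 1974, Day 3 effect"),
    ("t", 3.49, 39, None, "Duncan & Laird 1977, Elation, frown vs neutral"),
    ("t", 2.83, 39, None, "Duncan & Laird 1977, Elation, neutral vs smile"),
    ("F", 9.98, 1, 15, "Laird et al. 1982, Exp 1, correct recall"),
    ("F", 2.78, 4, 72, "Laird et al. 1982, Exp 2, interaction"),
    ("F", 4.93, 3, 56, "Rhodewalt & Comer 1979, difference score"),
    ("F", 4.77, 2, 72, "McArthur et al. 1980, facial expression effect"),
]

def p_value(kind, value, df1, df2):
    if kind == "F":
        return stats.f.sf(value, df1, df2)   # F tests are directional already
    return 2 * stats.t.sf(abs(value), df1)   # two-tailed p for a t statistic

ps = [p_value(k, v, d1, d2) for k, v, d1, d2, _ in focal_tests]
for (_, _, _, _, label), p in zip(focal_tests, ps):
    print(f"{label}: p = {p:.4f}")

# p-curve only uses the significant results; here that is 8 of the 9 tests
# (the Laird & Crosby t(25) = 1.78 is not significant two-tailed)
print("significant at .05:", sum(p < .05 for p in ps))
```

The count of significant tests (8) is where the “8 data points” above comes from: the p-curve itself is built only from the tests with p < .05.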




