Social psychology textbooks like colorful laboratory experiments that illustrate a theoretical point. As the famous social psychologist Daryl Bem put it, he considered his experiments to be illustrations of what could happen rather than empirical tests of what actually happens. Unfortunately, social psychology textbooks rarely make it clear that the results of these highlighted studies should not be generalized to real life.
Myers and Twenge (2019) tell the story of fishy smells.
In a laboratory experiment, exposure to a fishy smell caused people to be suspicious of each other and cooperate less—priming notions of a shady deal as “fishy” (Lee & Schwarz, 2012). All these effects occurred without the participants’ conscious awareness of the scent and its influence.
They do not even mention some other fun facts about this study. To rule out a general mood effect of unpleasant odors, the fishy smell was contrasted with fart smells, and the effect appeared to be specific to the fishy smell.
The article was published in the top journal for experimental social psychology (Journal of Personality and Social Psychology: Attitudes and Social Cognition) and is relatively highly cited.

However, the studies reported in this article smell a bit fishy themselves and should be consumed with a grain of salt and a lot of lemon. The problem is that all of the reported results are statistically significant, which is highly unlikely unless the studies had very high statistical power (Schimmack, 2012). For example, if each of the seven studies had 50% power, the probability that all of them would produce a significant result is only .5^7, less than 1%.


The article even reports that the effect works the other way around: making people think about suspicion also makes them think about fish (at least in theory), and suspicion makes people more sensitive to fishy smells. Every one of these results was statistically significant.
Undergraduate students may not realize what the problem with these studies is. After all, they all worked out; that is, they all produced a p-value less than .05, which is supposed to ensure that no more than 1 out of 20 studies produces a false-positive result. As all of these studies are significant, it is extremely unlikely that all of them are false positives. So, we would have to infer that suspicion really is linked to fishy smells in our minds.
However, since 2012 it has been clear that we have to draw a different conclusion. The reason is that results in social psychology articles like this one smell fishy themselves: they suggest that the authors are telling us a fun story, but not what really happened in their lab. It is extremely unlikely that the authors reported all of the studies and data analyses that they conducted. Instead, they may have used a variety of so-called questionable research practices that increase the chances of reporting a significant result. Questionable research practices are also known as fishing for significance. These practices have the undesirable effect of inflating the type-I error rate. Thus, while the reported p-values are below .05, the actual risk of a false-positive result is not capped at 5% and could be as high as 100%.
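To make this concrete, here is a small simulation sketch (my own illustration, not an analysis from the article) of one such practice, optional stopping. It assumes a simple two-group design with no true effect and relies on numpy and scipy; the point is only that repeatedly peeking at the data and stopping at the first p < .05 pushes the actual false-positive rate well above the nominal 5%.

```python
# A small simulation sketch (not from the article) of one questionable
# research practice -- optional stopping -- to illustrate how "fishing
# for significance" inflates the nominal 5% type-I error rate.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
n_sims = 10_000
false_positives = 0

for _ in range(n_sims):
    # No true effect: both groups are drawn from the same distribution.
    group_a = rng.normal(size=50)
    group_b = rng.normal(size=50)
    # "Peek" after 20, 30, 40, and 50 participants per group and stop
    # (and report) as soon as p < .05.
    for n in (20, 30, 40, 50):
        p = ttest_ind(group_a[:n], group_b[:n]).pvalue
        if p < .05:
            false_positives += 1
            break

print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
```

In this setup the simulated rate typically comes out above .10, roughly double the nominal rate, even though every p-value a reader would see is below .05.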
To demonstrate that researchers used questionable research practices, we can conduct a bias test. The most powerful bias test for small sets of studies is the Test of Insufficient Variance (TIVA). When most p-values are just barely significant (p < .05 but p > .005), yet every single result is significant, the results are not trustworthy, because sampling error alone should produce more variability in the evidence than we see.
The table below lists the test statistics, converts the two-tailed p-values into z-scores, and computes the variance of the z-scores. The variance is expected to be approximately 1, but the actual variance is only 0.14. A chi-square test shows that this deviation is significant, p = .01 (a sketch of the computation follows the table). Thus, we have scientific evidence to claim that these results smell a bit fishy.
| Study | Test | Value | df | p | z |
|-------|------|-------|------|------|------|
| 1 | t | 2.22 | 42 | .032 | 2.15 |
| 2 | t | 2.01 | 79 | .048 | 1.98 |
| 3a | χ² | 4.27 | 1 | .039 | 2.07 |
| 3b | χ² | 6.28 | 1 | .012 | 2.51 |
| 3c | χ² | 7.77 | 1 | .005 | 2.79 |
| 5 | F | 8.24 | 116 | .005 | 2.82 |
| 6 | F | 3.93 | 1614 | .048 | 1.98 |

Var(z) = 0.14; TIVA p = .01
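For readers who want to verify these numbers, here is a minimal sketch of the TIVA computation, assuming the two-tailed p-values listed in the table and using numpy and scipy. (The z-values in the table were computed from the test statistics, so they differ slightly from the values obtained from the rounded p-values, but the variance and the TIVA p-value come out essentially the same.)

```python
# Minimal sketch of the Test of Insufficient Variance (TIVA) for the
# seven results in the table above (two-tailed p-values assumed).
import numpy as np
from scipy.stats import norm, chi2

p_values = np.array([0.032, 0.048, 0.039, 0.012, 0.005, 0.005, 0.048])

# Convert two-tailed p-values to z-scores.
z = norm.isf(p_values / 2)

# Without selection for significance, the z-scores should vary with a
# variance of roughly 1; selection compresses them into a narrow band.
k = len(z)
var_z = np.var(z, ddof=1)

# Left-tailed chi-square test for insufficient variance:
# (k - 1) * Var(z) / 1 follows a chi-square distribution with k - 1 df.
tiva_p = chi2.cdf((k - 1) * var_z, df=k - 1)

print(f"Var(z) = {var_z:.2f}, TIVA p = {tiva_p:.2f}")
# Prints approximately: Var(z) = 0.14, TIVA p = 0.01
```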
Unfortunately, these are not the only fishy results in social psychology textbooks. Thus, students of social psychology should read textbook claims with a healthy dose of skepticism. They should also ask their professors about the replicability of textbook findings: Has this study been replicated in a preregistered replication attempt? Do you think you could replicate this result in your own lab? It is time to get rid of the fishy smell and let the fresh wind of open science clean up social psychology.
We can only hope that, sooner rather than later, articles like this will sleep with the fishes.
Quote from above: “It is time to get rid of the fishy smell and let the fresh wind of open science clean up social psychology.”
“Open Science” is all fine and good, but I reason it is not a solution to a lot of problems in Science, possibly including cleaning up Social Psychology.
To try and illustrate my point, here are the data from the “famous” “False Positive Psychology” paper by Simmons, Nelson, & Simonsohn (2011): https://openpsychologydata.metajnl.com/articles/10.5334/jopd.aa/.
Now imagine they didn’t write their “False Positive” paper, but instead p-hacked away and presented their finding that listening to The Beatles’ “When I’m 64” makes you younger (or whatever they found), and made sure their data were “open” (just like in the link above). Would that mean their findings are “trustworthy” and/or “correct”? I would reason not.
Also, I reason that Science should work so that the best and brightest people are working on/in it. I view it as making sure planet Earth selects the best people to run the 4 x 100 meter relay at the “Intergalactic Olympics”. Just like doping controls make it possible not to select (or reward) doping users, Open Science could make sure that people who p-hack, selectively report things, etc. are not selected. However, that is just one part of trying to select the best people (runners or scientists). Or, to use a different example: if we were to let 7-year-old kids perform Social Psychological studies (or studies from other fields) in an “open” and “transparent” manner, would that lead to “good” science? I would reason not.
In other words:
1) I agree with Gelman, who wrote that honesty and transparency are not enough: https://statmodeling.stat.columbia.edu/2017/05/09/honesty-transparency-not-enough/
2) I also worry that (some? many?) people will think something is or is not “good” science simply because (some? many?) people associate it with “Open Science”. This could lead to not really thinking about matters anymore, or about why something is or is not “open science”, and whether that is important and why.
For instance, can you point me to (some? many?) “Open Science” people who spotted, and rang the alarm bell, when many “Registered Reports” did not even provide the reader access to the crucial “pre-registration” in their paper? (Also see “Mapping the universe of Registered Reports” by Hardwicke & Ioannidis: https://osf.io/preprints/metaarxiv/fzpcy/)
Quote from above: “This could lead to not really thinking about matters anymore, or about why something is or is not “open science”, and whether that is important and why.”
Oh, and can you point me to “Registered Replication Reports” that pre-registered the exact labs that will perform the replication study?
That seems very important (and transparent) to me, because otherwise people could just leave out the results of (some? many?) labs if they do not like the results of those labs.
To me, pre-registering the exact labs that will perform the study seems even more important than pre-registering the analyses of “Registered Replication Reports”. This is because I reason that the analysis is already “registered”, in a way, in the “original” paper that is being replicated.
Can you point me to (some? many?) “Open Science” people who spotted that, and rang the alarm bell about it?
I don’t know the sources offhand, but many lab replication projects are preregistered and well documented on the OSF.
You wrote: “I don’t know the sources offhand, but many lab replication projects are preregistered and well documented on the OSF.”
It was sort of a rhetorical question, as my investigation of these “Registered Replication Reports” has not turned up any pre-registration information about the exact labs that would participate. But please correct me if I am wrong.
The “Registered Replication Reports” are also pre-registered and well documented on the OSF, but the point I am trying to make is that this apparently does not mean they are “open” or “transparent” concerning (what I think are) crucial things.
For example, here is the “pen-in-mouth” Strack et al. “Registered Replication Report” (we all know and love :P). On page 5 of the paper there is a link to the pre-registration: https://osf.io/h2f98/. That leads to a page where you can download a few things, but the document that (I assume) is the actual pre-registration seems to me to only talk about the analyses and does not mention the exact labs that would participate: https://osf.io/4bnxm/
What I tried to make clear with the two posts above is that 1) leaving out crucial information in, and access to, the pre-registration and 2) simply assuming, without even thinking about it, that something is or is not “open science” both seem highly problematic.
Just because (some? many?) people associate something with “open science” does not mean it is “good” science, nor does it (apparently) even mean it is “open” and “transparent”.
Interesting observation. You could do a bias analysis of the results. I did so for the facial feedback studies to investigate Strack’s accusation of reverse p-hacking and found no bias in either direction.