Personality psychologists have conducted hundreds of studies that relate various personality measures to each other. The good news about this research is that it is relatively easy to do and doesn’t cost very much. As a result, sample sizes are big enough to produce stable estimates of the correlations between these measures. Moreover, personality psychologists often study many correlations at the same time. Thus, statistical significance is not a problem because some correlations are bound to be significant.
A key problem with personality psychology is that many studies are mono-method studies. This often produces spurious correlations that are caused by method factors (Campbell & Fiske, 1959). For example, self-report measures often correlate with each other because they are all influenced by socially desirable responding. It is therefore valuable to find articles that used multiple methods, making it possible to separate method factors from personality factors.
One common finding from multi-method studies is that the Big Five personality traits often appear correlated when they are measured with self-reports, but not when they are measured with multiple methods (i.e., multiple raters) (Anusic et al., 2009; Biesanz & West, 2004; DeYoung, 2006). Furthermore, the correlations among self-ratings of the Big Five are explained by an evaluative or desirability factor.
Despite this evidence, some personality psychologists argue that the Big Five are related to each other through substantive higher-order traits. One model assumes that there are two higher-order factors: one factor produces a positive correlation between Extraversion and Openness, and another factor produces positive correlations between Emotional Stability (low Neuroticism), Agreeableness, and Conscientiousness. These two factors are supposed to be independent (DeYoung, 2006). Another model proposes a single higher-order factor called the General Factor of Personality (GFP). This factor was originally proposed by Musek (2007) and then championed by the late psychologist J. Philippe Rushton.

Planck suggested that bad theories die only after their champions do, but in this case Dimitri van der Linden has taken it upon himself to keep the GFP alive. I met Dimitri at a conference many years ago and discussed the GFP with him, but evidently my arguments fell on deaf ears. My main point was that you need to study factors with factor analysis. A simple sum score of Big Five scales is not a proper way to examine the GFP because this sum score also contains variance of the specific Big Five factors. Apparently, he is either unable or unwilling to learn structural equation modeling and use CFA in studies of the GFP.
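To see why this matters, here is a minimal simulation (my own Python sketch, not the authors’ data or code): five uncorrelated traits, no general factor, and a criterion predicted by Openness alone. The sum score nevertheless correlates with the criterion because it carries each trait’s specific variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Five uncorrelated trait factors -- by construction there is no general factor.
traits = rng.normal(size=(n, 5))                     # columns: N, E, O, A, C
criterion = 0.4 * traits[:, 2] + rng.normal(size=n)  # predicted by O alone

gfp_sum_score = traits.sum(axis=1)                   # the "GFP" as a simple sum score

r = np.corrcoef(gfp_sum_score, criterion)[0, 1]
print(f"r(sum score, criterion) = {r:.2f}")          # ~ .17, despite no GFP existing
```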

Instead, he computes weighted sum scores as indicators of the higher-order factors and uses these sum scores to examine how the higher-order factors relate to intelligence.
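The basic problem can be previewed with a small simulation (hypothetical numbers, not taken from the paper): even when Extraversion and Openness share no variance at all, so that no Plasticity factor exists, their composite correlates with IQ whenever Openness alone does.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

E = rng.normal(size=n)             # Extraversion
O = rng.normal(size=n)             # Openness, independent of E: no shared variance
iq = 0.3 * O + rng.normal(size=n)  # IQ is related to Openness only

plasticity_score = (E + O) / 2     # the composite treated as "Plasticity"

print(np.corrcoef(plasticity_score, iq)[0, 1])  # ~ .20, driven entirely by O
print(np.corrcoef(E, iq)[0, 1])                 # ~ .00
```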

The authors then find that the Plasticity scale is related to self-rated and objective measures of intelligence and interpret this as evidence that the Plasticity factor is related to intelligence. However, the Plasticity scale is just an average of Extraversion and Openness, and it is possible that this correlation is driven by the unique variance in Openness rather than by the shared variance between Openness and Extraversion that corresponds to the Plasticity factor. In other words, the authors fail to examine how higher-order factors are related to intelligence because they never model the factors themselves, which requires structural equation modeling.

Fortunately, they provided the correlations among the measures in their two studies, and I was able to conduct a proper test of the hypothesis that Plasticity is related to intelligence. I fitted a multiple-group model to the correlations among the Big Five scales (different measures were used in the two studies), the self-report of intelligence, and the scores on Cattell’s IQ test. Overall model fit was acceptable, CFI = .943, RMSEA = .050. Figure 1 shows the model.

First of all, there is no evidence of Stability and Plasticity as higher-order factors, which would produce correlations between Extraversion (EE) and Openness (OO) and correlations among Neuroticism (NN), Agreeableness (AA), and Conscientiousness (CC). Instead, there was a small positive correlation between Neuroticism and Openness and between Agreeableness and Conscientiousness. There was evidence of a general factor that influenced self-ratings of the Big Five (N, E, O, A, C) and self-ratings of intelligence (sri), although the effect size for self-rated intelligence was surprisingly small. This might be because intelligence was actually tested in these studies, which may have led to more honest self-reports. Most important, the general factor (h) was unrelated to performance on Cattell’s test. This shows that the factor is unique to the method of self-ratings and supports its interpretation as a method factor (Anusic et al., 2009). Finally, self-ratings of intelligence and objective test scores reflect a common factor, which shows that self-ratings contain some valid variance. This has been reported before (Borkenau & Liebler, 1992). The intelligence factor was related to Openness, but not to Extraversion, which is also consistent with other studies that examined the relationship between personality and IQ scores. Evidently, intelligence is not related to Plasticity: Plasticity is the shared variance between Extraversion and Openness, there is no evidence that this shared variance exists, and there is no evidence that Extraversion is related to intelligence.
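For readers who want to check the logic, here is a simplified, single-group sketch of the Figure 1 model written for the Python package semopy, whose lavaan-style model syntax I am assuming here. The actual analysis was a multiple-group model fitted to the published correlation matrices; the data below are simulated under the model only to make the example self-contained, and the sketch omits the multiple-group structure and the small N–O and A–C trait correlations.

```python
import numpy as np
import pandas as pd
import semopy  # assumed API: Model, fit, inspect, calc_stats

rng = np.random.default_rng(0)
n = 1_000

h = rng.normal(size=n)  # evaluative/halo method factor
i = rng.normal(size=n)  # valid intelligence factor, independent of h

df = pd.DataFrame({
    "E":   0.3 * h + rng.normal(size=n),            # self-rated Extraversion
    "O":   0.3 * h + 0.3 * i + rng.normal(size=n),  # self-rated Openness
    "A":   0.3 * h + rng.normal(size=n),            # self-rated Agreeableness
    "C":   0.3 * h + rng.normal(size=n),            # self-rated Conscientiousness
    "N":  -0.3 * h + rng.normal(size=n),            # self-rated Neuroticism
    "sri": 0.2 * h + 0.4 * i + rng.normal(size=n),  # self-rated intelligence
    "iq":  0.6 * i + rng.normal(size=n),            # objective test score
})

model_desc = """
h =~ E + O + A + C + N + sri
i =~ sri + iq
i ~~ O
h ~~ i
"""

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())           # the h ~~ i covariance should be near zero
print(semopy.calc_stats(model))  # fit indices such as chi-square, CFI, RMSEA
```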

These results show that van der Linden and colleagues came to the wrong conclusion because they did not analyze their data properly. To make claims about higher-order factors, it is essential to use structural equation modeling. Structural equation modeling shows that the Plasticity and Stability higher-order factors are not present in these data (i.e., the pattern of correlations is not consistent with this model), and it shows that only Openness is related to intelligence, which can also be seen by simply inspecting the correlation tables.

Finally, the authors misinterpret the relationship between the general factor and self-rated intelligence: “First, their [high GFP individuals] intellectual self-confidence might be partly rooted in their actual cognitive ability as SAI and g shared some variance in explaining Plasticity and the GFP” (p. 4). This is pure nonsense. As is clearly visible in Figure 1, the general factor is not related to scores on Cattell’s test, and as a result it cannot be related to the shared variance between test scores and self-rated intelligence that is reflected in the i factor in Figure 1. There is no path linking the i factor with the general factor (h). Thus, individuals’ standing on the h factor is independent of their actual intelligence. A much simpler interpretation of the results is that self-rated intelligence is influenced by two independent factors: one is rooted in accurate self-knowledge and correlates with objective test scores; the other is rooted in the general tendency to rate oneself overly positively on desirable traits. Although this plausible interpretation of the results is based on a published theory of personality self-ratings (Anusic et al., 2009), the authors simply ignore it. This is bad science, especially in correlational research, which requires the testing of alternative models.
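This two-factor account can be written down as a toy generative model (hypothetical parameters, not estimates from the data): self-rated intelligence reflects an accuracy factor that is shared with test scores and a halo factor that is not.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

i = rng.normal(size=n)  # accurate self-knowledge (valid intelligence factor)
h = rng.normal(size=n)  # halo / self-enhancement factor, independent of i

iq_test = 0.7 * i + rng.normal(size=n)        # objective test score
sri = 0.4 * i + 0.4 * h + rng.normal(size=n)  # self-rated intelligence

print(np.corrcoef(sri, iq_test)[0, 1])  # ~ .20: some valid variance in self-ratings
print(np.corrcoef(h, iq_test)[0, 1])    # ~ .00: halo is unrelated to actual IQ
```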
In conclusion, I was able to use the authors’ data to support an alternative theory that they deliberately ignored because it challenges their prior beliefs. There is no evidence for a General Factor of Personality that gives some people a desirable personality and others an undesirable one. Instead, some individuals exaggerate their positive attributes in self-reports. Even if this positive bias (self-enhancement) were beneficial, it is conceptually different from actually possessing these attributes. Being intelligent is not the same as thinking that one is intelligent, and thinking that one understands personality factors is different from actually understanding personality factors. I am not the first to criticize personality psychologists’ lack of clear thinking about factors (Borsboom, 2006).
“In the case of PCA, the causal relation is moreover rather uninteresting; principal component scores are “caused” by their indicators in much the same way that sumscores are “caused” by item scores. Clearly, there is no conceivable way in which the Big Five could cause subtest scores on personality tests (or anything else, for that matter), unless they were in fact not principal components, but belonged to a more interesting species of theoretical entities; for instance, latent variables. Testing the hypothesis that the personality traits in question are causal determinants of personality test scores thus, at a minimum, requires the specification of a reflective latent variable model (Edwards & Bagozzi, 2000). A good example would be a Confirmatory Factor Analysis (CFA) model.”
In short, if you want to talk about personality factors, you need to use CFA and examine the properties of latent variables. It is really hard to understand why personality psychologists do not use this statistical tool when most of their theories are about factors as causes of behavior. Borsboom (2006) proposed that personality psychologists dislike CFA because it can disprove theories, and psychologists seem to have an unhealthy addiction to confirmation bias. Doing research to find evidence for one’s beliefs may feel good and may even lead to success, but it is not science. Here I show that Plasticity and Stability do not exist in a dataset, and the authors did not notice this because they treat sum scores as if they were factors. Of course, we can average Extraversion and Openness and call this average Plasticity, but this average is not a factor. To study factors, it is necessary to specify a reflective measurement model, and there is a risk that a model may not fit the data. Rather than avoiding this outcome, it should be celebrated, because falsification is the root of scientific progress. Maybe the lack of theoretical progress in personality psychology can be attributed to an unwillingness to disconfirm existing theories.
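To make the falsification point concrete, here is a sketch (again assuming semopy’s API, as above): fit a single “Plasticity” factor to indicators of Extraversion and Openness that in truth share no variance; the fit statistics then flag the misfit instead of confirming the theory.

```python
import numpy as np
import pandas as pd
import semopy  # assumed API, as in the earlier sketch

rng = np.random.default_rng(3)
n = 2_000

E = rng.normal(size=n)
O = rng.normal(size=n)  # independent of E: no Plasticity factor exists

df = pd.DataFrame({
    "e1": E + rng.normal(scale=0.5, size=n),
    "e2": E + rng.normal(scale=0.5, size=n),
    "o1": O + rng.normal(scale=0.5, size=n),
    "o2": O + rng.normal(scale=0.5, size=n),
})

# A reflective model that forces one higher-order factor on all four indicators.
model = semopy.Model("plasticity =~ e1 + e2 + o1 + o2")
model.fit(df)
print(semopy.calc_stats(model))  # large chi-square and RMSEA reveal the misfit
```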