Sunday, 7 July 2013

Respect your elders: First, you watch Meehl's videotaped philosophy of psychology lectures - then we'll discuss your "pseudo-intellectual bunch of nothing"

I've never understood how it is possible that a reviewer or editor of a scientific journal could write something like: "This subject matter is too difficult and complex for our reader audience (to be interested in)." I once even heard a colleague exclaim that a mathematical psychology journal had found his mathematics too complex.

That can only mean the audience does not want to get educated on things they do not yet know about, which is strange behaviour for scientists. An editor should instead invite the author to write a primer, possibly as supplementary material. I've seen some examples of that recently; the p-curve article to appear in JEP: General is one of them.

More often than not, psychological theories and their predictions are evaluated for their descriptive value, which means: can the reviewer relate what the theory is about to his own preferred theories? This should not matter in science. Theories should (as long as they do not claim a rewrite of well-established theories based on some statistical oddities; Bem, 2011) be evaluated for the precision of the predictions they make, their empirical accuracy, and their logical structure.

The problem is, we do not get educated on these matters in psychology. Whether you do or not seems to depend trivially on whether there happens to be a professor at your university who knows about these things.

(How lucky they were in Minnesota!)

It's plain and simple: if we really want psychology to be taken seriously as a scientific endeavour, we need to discuss it at the level of metatheory: how do we evaluate theories, what is their verisimilitude, what are their similarities, so that we can hope to unify them?

We need to discuss it at the level Paul Meehl discussed it.

Now, his list of publications is long, the publications themselves are long, and the list of quotes I would like to paste here is endless; besides, going by the popular journals, our generation of scientists is likely to doze off at anything longer than 5,000 words anyway.

How about some video then? 

Twelve lectures of about 1.5 hours each, and you'll know all you need to know to have a proper discussion about the credibility of the theory you use to study the phenomena you are interested in.

(You do know TED talks last only 20 minutes or so?)

OK, get through the first 7 at least (this will not be a difficult task; I even enjoyed hearing him speak about the practicalities of the course).

Recommendations of Meehl's work by others:
"After reading Meehl (1967) [and other psychologists] one wonders whether the function of statistical techniques in the social sciences is not primarily to provide a machinery for producing phony corroborations and thereby a semblance of ‘scientific progress’ where, in fact, there is nothing but an increase in pseudo-intellectual garbage." (Lakatos, 1978, pp. 88–9)

Just one quote sums it up for me

Whenever I try to evaluate what someone is claiming about the world based on their data, or evaluate their "theory" from the perspective of theory evaluation, they look at me like a dog who has just been shown a card trick. It is so unreal that I cannot use a word like ontology or epistemology, or ask about the measurement theory or rules of inference someone used to make a claim about the way the universe works, that I have considered leaving academia. But I guess leaving without trying to change the world is not how I was raised, or genetically determined. The quote below summarises how I feel almost exactly:
"I am prepared to argue that a tremendous amount of taxpayer money goes down the drain in research that pseudotests theories in soft psychology and that it would be a material social advance as well as a reduction in what Lakatos has called “intellectual pollution” (Lakatos, 1970, fn. 1 on p. 176) if we would quit engaging in this feckless enterprise. 
I think that if psychologists would face up to the full impact of the above criticisms, something worthwhile would have been achieved in convincing them of it. Besides, before one can motivate many competent people to improve an unsatisfactory cognitive situation by some judicious mixture of more powerful testing strategies and criteria for setting aside complex substantive theory as “not presently testable,” it is necessary to face the fact that the present state of affairs is unsatisfactory. 
My experience has been that most graduate students, and many professors, engage in a mix of defense mechanisms (most predominantly, denial), so that they can proceed as they have in the past with a good scientific conscience. The usual response is to say, in effect, “Well, that Meehl is a clever fellow and he likes to philosophize, fine for him, it’s a free country. But since we are doing all right with the good old tried and true methods of Fisherian statistics and null hypothesis testing, and since journal editors do not seem to have panicked over such thoughts, I will stick to the accepted practices of my trade union and leave Meehl’s worries to the statisticians and philosophers.” 
I cannot strongly fault a 45-year-old professor for adopting this mode of defense, even though I believe it to be intellectually dishonest, because I think that for most faculty in soft psychology the full acceptance of my line of thought would involve a painful realization that one has achieved some notoriety, tenure, economic security and the like by engaging, to speak bluntly, in a bunch of nothing." (Meehl, 1990, emphasis and markup added)


Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66(1), 195–244. doi:10.2466/PR0.66.1.195-244