
Wednesday, 3 July 2013

Respect your elders: Fads, fashions, and folderol in psychology - Dunnette (1966)


Some reflections on novelty in psychological science

In the discussion on open data that I commented on recently, the following results on data sharing were reported:
Because the authors were writing in APA journals and PLoS One, respectively, they had agreed at the time of submitting that they would share their data according to the journals' policies. But only 26% and 10%, respectively, did. (I got the references from a paper by Peter Götzsche; there may be others of which I am unaware.)
Yes, interestingly, there are other studies in the historical record: plus ça change, plus c'est la même chose.

To stress the importance of efforts to change these statistics, here is an excerpt from Dunnette (1966), who reports on a 1962 study in which only 13.5% of authors complied with data requests. The reasons given for being unable to comply sound familiar; this is not an issue of "modern" science, it seems. (I can recommend the entire article.)

THE SECRETS WE KEEP
We might better label this game "Dear God, Please Don't Tell Anyone." As the name implies, it incorporates all the things we do to accomplish the aim of looking better in public than we really are. The most common variant is, of course, the tendency to bury negative results.
I only recently became aware of the massive size of this great graveyard for dead studies when a colleague expressed gratification that only a third of his studies "turned out"—as he put it.
Recently, a second variant of this secrecy game was discovered, quite inadvertently, by Wolins (1962) when he wrote to 37 authors to ask for the raw data on which they had based recent journal articles.
Wolins found that of 32 who replied, 21 reported their data to be either misplaced, lost, or inadvertently destroyed. Finally, after some negotiation, Wolins was able to complete seven re-analyses on the data supplied from 5 authors.
Of the seven, he found gross errors in three—errors so great as to clearly change the outcome of the results already reported. Thus, if we are to accept these results from Wolins' sampling, we might expect that as many as one-third of the studies in our journals contain gross miscalculations.

Thirty percent gross miscalculations might have been a high estimate, but as a 50-year prospective prediction it is not bad: Bakker & Wicherts (2011) found the "number of articles with gross errors" across 3 high-impact and 3 low-impact journals to range from 9% to 27.6%.
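As an aside, a back-of-the-envelope illustration of my own (not a calculation from either paper): with only 7 re-analyses, the uncertainty around Wolins' error rate is enormous, which is worth keeping in mind before treating "one-third" as a precise prediction. A minimal sketch in Python, using a Wilson score interval:

from math import sqrt

def wilson_interval(successes, n, z=1.96):
    # Approximate 95% Wilson score interval for a binomial proportion.
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Wolins (1962): gross errors in 3 of the 7 completed re-analyses
low, high = wilson_interval(3, 7)
print(f"point estimate: {3 / 7:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")

The interval runs from roughly 16% to 75%, so both Dunnette's "one-third" extrapolation and the 9% to 27.6% range reported by Bakker & Wicherts (2011) are entirely compatible with Wolins' tiny sample.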

In the light of these (and other) historical facts and figures, maybe it's time for a historical study; there are lots of recommendations in those publications.


Again Dunnette (1966):

THE CAUSES
[…]
When viewed against the backdrop of publication pressures prevailing in academia, the lure of large-scale support from Federal agencies, and the presumed necessity to become "visible" among one's colleagues, the insecurities of undertaking research on important questions in possibly untapped and unfamiliar areas become even more apparent. 
THE REMEDY 
[…]
1. Give up constraining commitments to theories, methods, and apparatus!
2. Adopt methods of multiple working hypotheses!
3. Put more eclecticism into graduate education!
4. Press for new values and less pretense in the academic environments of our universities!
5. Get to the editors of our psychological journals! 
THE OUTCOME: UTOPIA  
How do I envision the eventual outcome if all these recommendations were to come to pass? What would the psychologizing of the future look like and what would psychologists be up to? Chief among the outcomes, I expect, would be a marked lessening of tensions and disputes among the Great Men of our field.
I would hope that we might once again witness the emergence of an honest community of scholars all engaged in the zestful enterprise of trying to describe, understand, predict, and control human behavior.



References

Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666–678. doi:10.3758/s13428-011-0089-5
Dunnette, M. D. (1966). Fads, fashions, and folderol in psychology. American Psychologist, 21(4), 343–352. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/5910065
Wolins, L. (1962). Responsibility for raw data. American Psychologist, 17, 657–658. doi:10.1037/h0038819



Saturday, 15 June 2013

Why did you teach me to corrupt the scientific method? An open letter to my professors of 1993 and beyond.

Nijmegen,  15-06-2013

"There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."
- Daniel Dennett


Esteemed prof. / dr. / PhD. / drs. / (emeritus),

Almost exactly 20 years ago, in the summer of 1993, I was preparing for an exciting new chapter in my life: I was about to start my education as a student of Psychological Science at the same university at which I am currently employed as a lecturer and researcher. You probably do not remember me; I now know from personal experience what a lecture hall filled with 300+ students looks like from the professor's perspective. I followed your lectures with great interest.

During that first year you may have heard me ask some questions prompted by the books I was reading at the time: Gödel, Escher, Bach by Douglas Hofstadter and Consciousness Explained by Daniel Dennett. Of course you were right to point out that, although interesting, such philosophical questions were not the things empirical scientists like yourself cared much about. I know that now. Also, it was pointed out to me that there was a philosophy course in the second year that pretty much covered all those things.

(That course was indeed about philosophy, but not so much about the philosophy of psychological science. So in my second year I decided to write an essay about a story that appeared in a book edited by Hofstadter and Dennett. This you may remember: it won me first prize in the 1995 university essay contest. Included was a short meet and greet with Daniel Dennett, who was visiting in 1996 to promote his new book, Darwin's Dangerous Idea, from which I took the quote above this letter. By the way, do we still have an essay contest at our university?)


My question is about the following: 

In that same summer of 1993 an article by Ronald Carver appeared in the Journal of Experimental Education, entitled: The Case Against Statistical Significance Testing, Revisited.

"Revisited" refers to the fact that Carver had to conclude that all the issues with statistical inference and the lack of replication of phenomena he had identified in a 1978 article in the Harvard Educational Review with the same title (save "Revisited", of course) were still around in 1993.

In fact, if you did not know the date of publication, the excerpt from the article I copied below could very well have been published in the summer of 2013. The information in the abstract alone contains all the answers one would need in order to change the corruption of the scientific method.

My question is: Why didn't you change? Your scientific work, the content of your lectures?

Why didn't we set up a study to quantify how bad this corruption really was? 
That would have been an excellent topic for a Master's thesis.

Why didn't we at least discuss these issues during my formative years as a scientist?



Why?

Sincerely, 
Your former student,

Fred Hasselman




ps. Carver is one of many scholars who have been warning us about these problems. If you missed his work, here's a selection of just a few of the publications that appeared just before and during the time I was a student and that could, maybe should, have caught your attention:

Cohen, J. (1990). Things I Have Learned (So Far). American Psychologist, 45(12), 1304–1312.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003.
Freeman, W. J. (1997). Three centuries of category errors in studies of the neural basis of consciousness in intentionality. Neural Networks, 10(7), 1175–1183.
Kugler, P. N., Shaw, R. E., Vicente, K. J., & Kinsella-Shaw, J. (1990). Inquiry into intentional systems I: Issues in ecological physics. Psychological Research, 52(2), 98–121.
Meehl, P. E. (1990). Why Summaries of Research on Psychological Theories Are Often Uninterpretable. Psychological Reports, 66(1), 195.
Michell, J. (1997). Bertrand Russell’s 1897 Critique of the Traditional Theory of Measurement. Synthese, 110(2), 257–276.
Van Orden, G. C., & Paap, K. R. (1997). Functional neuroimages fail to discover pieces of mind in the parts of the brain. Philosophy of Science, 64(S1), 85–94.


pps. I have to exclude the advisors on my Master's thesis, who taught me the important lesson to publish data only when you are more than 100% sure they are reliable. Some of the pronunciation errors I had categorised could not be labelled unambiguously; in other words, another rater would arrive at a different conclusion. So we did not publish.

Friday, 14 June 2013

Truths, Glorified Truths and Statistics (II)

(part 2: "To boldly go...")

First a disclaimer: I love the work on the p-curve and the estimation of effect sizes. I support the disclosure initiatives (4 questions, 21 words) and the call for more quality and less quantity (however, also see part 1, in which I remind the reader that there are many scientists for whom there has been no life before p-hacking, and that a claim of ignorance on these matters is at the very least disrespectful to those scholars).



Let me be the one to spoil all the fun: There is no true effect!

It does not exist as an entity in reality; it is not one of the constituents of the universe. It should be a measurement outcome observed in a measurement context that was predicted by a theory about a specific domain of reality.

As was pointed out by Klaus Fiedler at the Solid Science Symposium: "What does it mean that there is an effect?" (I am quoting from memory; this may be incorrect.)

According to the live tweet feed earlier that day:

Solid Science Symposium Tweet Feed - Excellent!

If you believe this is possible, that a true effect can somehow be discovered out there in reality, like a land mass across the ocean where everyone said there would be dragons, or a new species of silicon-based life forms at the other end of the wormhole, then you show one of the symptoms of participating in a failing system of theory evaluation and revision that I dubbed the [intergalactic] explorer delusion.

This refers to the belief expressed by many experimental psychological scientists that the purpose of scientific inquiry is to go where no man has gone before and observe the phenomena that are "out there" in reality, waiting to be uncovered by clever experimental manipulation and perhaps some more arbitrary poking about as well.

A laboratory experiment is, however, not a field study or an excursion beyond the Neutral Zone. Even if it were, I would argue that wherever you go as a scientist, boldly or otherwise, you will be guided, and quite possibly even blinded, by a theory or a mathematical formalism about reality that is in most cases only implicitly present in your theorising.


Let's analyse this delusion by scrutinising a recent paper by Greenwald (2012) entitled "There is nothing so theoretical as a good method", which is a reference to the famous quote by a giant of psychological science, Kurt Lewin (1951). This also allows me to comment on what Platt actually meant by the term "strong inference" in his 1964 paper.

Greenwald is explicit about his position towards theory: he is not anti-theoretical, as he acknowledges that theories achieve parsimonious understanding and guide useful applications (but he does not specify… of what?). The author is, however, also skeptical of theory, because he has noticed the ability of theory to restrict open-mindedness. This is indeed a proper description of a theory: it is a specific tunnel vision, but from the perspective of the Structural Realist (forgive me, I will explain this position more precisely in the near future), this tunnel vision is only temporary.

It will be no surprise that I disagree with the following:
“When alternative theories contest the interpretation of an interesting finding, researchers are drawn like moths to flame. J. R. Platt (1964) gave the approving label “strong inference” to experiments that were designed as crucial empirical confrontations between theories that competed to explain a compellingly interesting empirical result.” (Greenwald, 2012, pp. 99–100, emphasis added)
That is not at all what Platt meant by strong inference, but incidentally we find here another symptom of a failing system of theory evaluation, the interpretation fallacy I mentioned in part 1: theories do not compete on their ability to provide an understandable description or explanation of empirical phenomena. They compete on their ability to predict measurement contexts in which phenomena may be observed, and on the accuracy with which measurement outcomes are predicted. And J. R. Platt agrees with this perspective, as he describes very clearly:


“Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly:


1) Devising alternative hypotheses;

2) Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;

3) Carrying out the experiment so as to get a clean result;

1') Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain; and so on.”

(Platt, 1964, p. 347, emphasis added)
Strong inference starts with devising alternative hypotheses to a problem in science, not with an interesting finding. Platt comments that steps 1 and 2 require intellectual invention, which I take the liberty to translate as ‘theorizing about reality’. That is what you do when you devise a method.
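To make the shape of Platt's cycle explicit, here is a toy sketch of my own (purely illustrative; the hypothesis and experiment objects are placeholders, not anyone's actual research workflow). The input is a set of alternative hypotheses about a problem; an 'interesting finding' only enters as the outcome that prunes that set:

from typing import Callable, Dict, List

def strong_inference(hypotheses: Dict[str, Callable[[str], bool]],
                     crucial_contexts: List[str],
                     run_experiment: Callable[[str], bool]) -> Dict[str, Callable[[str], bool]]:
    # Step 1 has already happened: 'hypotheses' maps a name to the outcome
    # it predicts for a given measurement context.
    surviving = dict(hypotheses)
    for context in crucial_contexts:          # Step 2: crucial experiment(s)
        outcome = run_experiment(context)     # Step 3: get a clean result
        # Exclude every hypothesis whose prediction is contradicted.
        surviving = {name: predicts for name, predicts in surviving.items()
                     if predicts(context) == outcome}
        if len(surviving) <= 1:
            break                             # Step 1': refine the survivors and recycle
    return surviving

The point of the sketch is only the order of operations: no alternative hypotheses, no crucial experiment; and a result counts by what it excludes, not by how interesting it is.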

One source of evidence for his argument concerns 13 papers, listed in a table, that started controversies in psychological science on average 44 years ago, but which still have no resolution. The author claims that the method of strong inference was applied in order to resolve the controversies, and obviously failed. It is also claimed that the philosophy of science provides no answers to resolve the controversies, because it discusses (apparently endlessly) whether such issues can be resolved empirically in principle. It is clear that Greenwald is referring to the resolution of these controversies as a resolution about the ‘reality’ of the ontology of a theory. This is again a matter of interpretation and is not what formal theory evaluation is about. The constituents of reality posited to exist by a theory are irrelevant in theory evaluation. As long as everything behaves according to the predictions of the theory, we should just accept those constituents as temporary vehicles for understanding. I believe the theories involved in these controversies were not properly evaluated for their predictive power and empirical accuracy. I don't know if they can be evaluated in that way; if they cannot, the conclusion must be that the theories are trivial.

The impression that ontology evaluation is the problem here is indeed supported by the descriptions provided for the 13 controversies: it is primarily a list of clashes of ontology, e.g., Spreading activation vs. Compound cueing. Further support comes from the examples provided to argue that even if philosophy had an answer, this would not keep scientists from continuing the debate. The fact that scientists do not do this implies, to the author, that there must be a way other than strong inference to resolve controversies in science. This is illustrated by examples in which a scientific community was able to achieve consensus about a problem in its discipline (the classification of Pluto as a dwarf planet, HIV as the cause of AIDS, and the influence of human activity on global warming). The author suggests that controversies in psychology could be resolved if only a reasonable consensus could be achieved.

I cannot disagree with the author in his wish for a science that works towards reaching consensus about the phenomena in its empirical record, instead of wasting energy on definitive existence proofs for the ontologies of competing theories. Recall the history of the quantum formalism: two very different theoretical descriptions of reality (wave vs. particle ontologies) were found to be the same for all intents and purposes. I am certain that scientists in cosmology, virology, and climatology used strong inference to work towards those consensus resolutions, but I did not check it. Strong inference and consensus formalism science go hand in hand.

What I can say is that Platt's recycling procedure (step 1') suggests replication attempts should be carried out, and apparently there is somewhat of a problem with the replication of phenomena in psychological science. This again makes it very unlikely that any strong inference has been applied to resolve theoretical disputes in psychological science. Indeed, one of the authors listed as having caused a controversy that was unresolved by strong inference recently challenged the discipline to start replicating the ‘interesting findings’ in its empirical record (e.g., Yong, 2012).

(There must be some proverb about dismissing something before its merits have been properly examined...)


As a second source of evidence to support his suspicions about the benefits of theorising, Greenwald examines the descriptions of Nobel Prizes to see whether they were awarded for theoretical or methodological contributions. The [intergalactic] explorer delusion is obvious here; Greenwald attaches great value to the appearance of the word ‘discovery’:
“Most “discovery” citations were for methods that permitted previously impossible observations, but in a minority of these, “discovery” indicated a theoretical contribution.” 
He concludes that theory was important for the development of methods, and that novel methods produced inconceivable results that prompted new theory.

I am quite certain that the inconceivable results referred to were predicted by a theory or considered as an alternative hypothesis. They concern measurement contexts one does not just accidentally stumble upon. If outcomes were surprising given the predicted context, an anomaly to the theory was found, and in that case, naturally, a new theory would have to be created. It was, however, due to an anomaly to a theoretical prediction, not due to a ‘discovery’ of a phenomenon by a method! The Large Hadron Collider (or any other billion-dollar instrument of modern physics) was not built as a method, a vehicle to seek out previously unknown phenomena like the starship U.S.S. Enterprise. Theory very strongly predicted a measurement context in which a boson should be observable that completed the Standard Model of particle physics. The methods scientists use for obtaining knowledge about the structure of reality are the result of testing predictions by theories, without exception. Satellites are not sent into space equipped with multi-million-dollar X-ray detectors just to see what they will find when they get there.

I conclude by commenting on the way the author describes why Michelson won the Nobel Prize for Physics in 1907. This involves a recurring theme in a paper I am about to submit: the luminiferous Æther. Experimental physicists like Michelson and Morley spent most of their academic careers (and most of their money) on experiments that tested the empirical accuracy of theories that predicted a very specific observable phenomenon called Æther-dragging. Their most famous experiment, reported in “On the Relative Motion of the Earth and the Luminiferous Ether” (Michelson & Morley, 1887), showed very accurately and consistently that there was no such thing as an Æther, or at least that its influence on light and matter was not as large as the Æther-dragging hypothesis predicted it would be. This of course harmed the precision and accuracy of Æther-based theories of the cosmos, but to hint, as Greenwald seems to do, that the method ‘caused’ Einstein to create special relativity theory is far-fetched.

Michelson won the Nobel Prize for Physics in 1907 for the very consistent null result (yes, psychological science, such things can be important) and for the development of the interferometer instruments that meticulously failed to measure any trace of the Æther (cf. Michelson, 1881). Their commitment to the Æther was adamant, though. To be absolutely certain that the minute interferences that were occasionally measured were indeed due to measurement error, instruments of increasing accuracy and sensitivity were built. The largest were many meters wide and placed at high altitude on heavy slabs of marble floating on quicksilver, in order to prevent vibrations from interfering with the measurement process. Now that is a display of ontological commitment! It was, however, as much motivated by theoretical prediction as the construction of the Large Hadron Collider. Not a theory-less discovery by some clever poking about.

Greenwald admits that the word theory is often used in Michelson and Morley's 1887 article, so theory must have played an important role in the design of the instruments. The role was not just 'important': without the theory there would have been no method at all. In fact, if a theory of special relativity had been published 20 years before 1905 (physicists knew something like relativity was necessary), no instruments would have been constructed at all, because:
"Whether the ether exists or not matters little - let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the ether will be thrown aside as useless." (Poincaré, 1889/1905, p. 211). 
And indeed, the Æther was thrown aside as useless, because a method devised to test a prediction by a theory yielded null results. Strong inference means this repeated null result has consequences for the credibility of the theory that predicted the phenomenon. Apparently, in psychological science, this is a difficult condition to achieve.

The Structural Realist's take home message is: 

  1. We should believe what scientific theories tell us about the structure of the unobservable world, but
  2. We should be skeptical about what they tell us about the posited ontology of the unobservable world. 
In this quote by Poincaré may lie the answer to Greenwald's interpretation of the current practice of psychological science (which is in fact a very accurate description of the problems we have with theory evaluation; I just do not agree with the interpretation): Why does Poincaré reserve a special place for the hypothesis about material objects, which will never cease to be so?


Still believe it is possible to use a method that was not predicted to yield measurement outcomes by a theory about reality? 

Ok.

I'll think of some more examples.



References

Greenwald, A. G. (2012). There Is Nothing So Theoretical as a Good Method. Perspectives on Psychological Science, 7(2), 99–108. doi:10.1177/1745691611434210

Michelson, A. A. (1881). The Relative Motion of the Earth and the Luminiferous Ether. American Journal of Science, 22(128), 120–129. Retrieved from http://www.archive.org/details/americanjournal62unkngoog

Michelson, A. A., & Morley, E. W. (1887). On the Relative Motion of the Earth and the Luminiferous Ether. American Journal of Science, 34(203), 333–345. Retrieved from http://www.aip.org/history/gap/PDF/michelson.pdf

Platt, J. (1964). Strong Inference. Science, 146(3642), 347–353. Retrieved from http://clustertwo.org/articles/Strong Inference (Platt).pdf

Poincaré, H. (1905). Science and Hypothesis. New York: The Walter Scott Publishing Co., LTD. Retrieved from http://www.archive.org/details/scienceandhypoth00poinuoft

Yong, E. (2012). Nobel laureate challenges psychologists to clean up their act. Nature. Retrieved from http://www.nature.com/doifinder/10.1038/nature.2012.11535








Truths, Glorified Truths and Statistics (I)


(part 1: Just for the record)



The Appendix should probably be skipped by anyone who reads this


[Just for the record] {

I did not.
Engage in p-hacking, or any other exploitation of researcher degrees of freedom.
(OK, maybe once, but I did not inhale, or have any relations with the degrees, or the freedoms involved. None that are worth mentioning, or that have been caught on tape anyway: see the Appendix below.)
Some would have us believe that we all studied Cohen, but did not act appropriately and just ignored it all in our daily practice of science (this was almost literally exclaimed at some point during this very interesting symposium).

I do not understand how such a thing can happen to a scientist. It appears to me as a post-meditated case of pathological science; or was it just a little sloppy and careless? When you learn about something that should be implemented immediately, then why don't you implement it? Or: who else will? There is no high council of scientists that will decide such things for you.


On the other hand, maybe Cohen was studied very well, as evidenced by the conclusion of the paper entitled Things I Have Learned (So Far): "Finally, I have learned that there is no royal road to statistical induction, that the informed judgment of the investigator is the crucial element in the interpretation of data, and that things take time."


Cohen here commits a very serious error against formal theory evaluation, but he is in good company, as this is the most common flaw in theory evaluation as it is practised in the social sciences. In a genuine science, the informed judgement of the investigator plays NO role whatsoever in the evaluation of the accuracy of a prediction by a theory. Quantum physical theories are the best scientific theories ever produced by human minds, and there are over 20 informed judgements on how the theory should be interpreted, but that does not have any influence on the empirical accuracy of the theory: the highest ever!

Something that I'm picking up in how people are talking about this worries me. There seems to be a tendency to spin all the wrongdoing of the past as a necessary evil that was inescapable. As if to say: forgive our ignorance, let's show some penance and go about our business as usual.

I'm not bringing this up because I feel it does not apply to me personally: it is simply not true.

A scientist can never feign ignorance about his or her theorising about the way the universe works. It's either the best and most thorough and profound thinking you can possibly achieve, or it is not solid enough to share with other scientists.


Moreover, what about all those scholars who:

- have spoken out against questionable research practices in the past;
- argued against the reluctance of scientists to abide by the rules of the scientific method;
- out of sheer frustration gave up because their colleagues would not accept falsification in the face of anomalies;
- criticised our preferred model of inference, or pointed out that the rules of NHST are not obeyed at all;
- complained about the logical inconsistencies in psychological theorising and the lack of a proper foundations debate.


To claim ignorance about these matters is at the very least disrespectful to those who dared to speak out, often at the risk of being marginalised and ridiculed for doing so. I believe it is more than disrespectful, and I find the idea that there could be some kind of cleansing p-hack penance waiting to happen simply outrageous.


To whom this may concern: You did not listen, and you should have!

That is what happened: you did not bother to spend the time and energy to be educated on important matters of philosophy, mathematics, measurement theory, statistics, or whichever discipline of science is somewhat relevant to helping you answer your research questions.

Science is not "that which you can get away with in peer review." It is about doing everything in your power to get it as right as inhumanly possible, and we should not settle for anything less. The point is lucidly made here: this will take time and should bring down the number of studies published. There is no excuse for not being on top of the most relevant developments from all disciplines of science that could potentially help you get closer to answering the research questions you have.

So let me be clear: There will be no feigning of ignorance tolerated on my watch.

To summarise:

I did not have a life before p-hacking.

}



-------------------------------

[Appendix] {

Want proof?
Of course you do, you're the proud owner of a scientific mind!

1. I did not publish a single paper in a peer-reviewed journal as first author before 2013. It just took me a long time to find out exactly what it was I could contribute.
(note: this usually has nothing to do with the importance of those thoughts as perceived by others)
2. Before 2013, I submitted a paper as first author only twice, and both submissions concerned the same study. At the first journal, they loved the theory but not the experimental design, so it was rejected. Then I revised it and submitted it to another journal. They saw merit and wanted me to resubmit: again, they loved the theory, but asked me if I could lose 66% of the words I had used. That pretty much settled it.
(I will not relate here all the encouraging advice I received over the years to become less precise, to engage more often in the practice of “huis tuin en keuken” science [which probably translates to “middle of the road” science], or to “just send it in and see what reviewers say, because you never know in advance what they will say, and they will be pissed off anyway because you cite work that is over 2 years old. Here’s a list of 10 journals, start at the top.”)
3. Even so, I have a decent number of publications to which I made substantial contributions, either in study design, by performing the data analysis, or even in the theoretical part; imagine that! I disseminate the work that I do not publish and even teach about it, and this is the best way to learn about all the things on which I still need to be educated. Such a résumé will not impress any research institute or funding agency. Thank the goddess I have a permanent teaching job.
("oh, one of those guys who can only teach and does not know how to write a proper scientific paper")
4. I decided not to defend my dissertation until I could stand 100% behind every word I wrote.
(but that was already the case more than 5 years ago, and the defence still hasn't happened)
Almost; I am just awaiting some additional results.


I did postpone, yes, mainly because I seriously considered leaving science until about a year ago. Things have changed recently, as you may have noticed.



Before the change, I wanted to leave because I realised that a game was being played in which the winners were the ones who interpreted the "facts" of their scientific inquiries in a way that maximally served their own cause instead of the cause of science, which is to uncover the structure of reality. Decisions about funding, positions, and courses in the curriculum are not based on quality, but on politics. Good luck with that strategy.



I have seen too many gifted young students who understood that this was the game they were supposed to play if they wanted to become scientists, and who therefore could not be saved for science.



If I had wanted to be engaged in an endeavour that interprets facts whichever way the wind blows, I would have chosen a career in politics or finance and would have made a much better living in the process. Science is for nerds who want to figure things out, not for bullies who take over the playground by loudly shouting incoherent authoritative arguments to prove they are never wrong about anything.



}