More on the Science of Psychology
More thoughts from Mike
The public’s perspective
• Early on (and I mean way back) science made bold claims and spoke in absolute terms
• However, the only thing that was consistently discovered was that there were as many exceptions as rules
• In fact, if there is any rule in science it’s that outcomes are probabilistic, not deterministic, something the general public (and new students) seem to have a difficult time understanding
• However, the public, seeing any fallibility, began in some cases to dismiss science outright as being no better than other modes of knowledge acquisition
• In fact, I think it’s safe to say America is in a scientific dark age now. Sure, we do science, but our scientific education is appalling; much of the non-scientific populace seems to distrust ‘science’ in general (though oddly they don’t mind accepting it without question when it comes to computers, airplanes, cooking and many other aspects of daily life); politicians who know nothing of science go out of their way to limit some areas of research; and pseudoscience makes the news before real science does
• In short, if it weren’t for our sheer population numbers we wouldn’t even be competing at the international level
Economics of research
• The fact is, money drives research, but more than monetary resources are at play for the scientist
• Intellectual resources, the scientist’s own as well as those of their assistants and colleagues, have a significant impact on the realm of ideas, including what one studies and how one studies it
• It is also here that methodological knowledge can make or break research
• If you don’t know how to study some problem, odds are you won’t
• Physical resources come into play as well
• For example, someone without access to an MRI scanner is not likely to do any brain imaging studies
Scientists as individuals
• Scientists, after all, are human
• Many of the more visible ones have tremendously huge egos, but in general a typical scientist:
• Does esoteric research that only a relative few in the world, and perhaps even in their own discipline, care about
• Does research that will do little by itself to advance the field
• Does research that will likely be forgotten soon after they stop doing it
• Engages in confirmation bias, selective memory and the like on a regular basis
• Uses poor methods
• In short, scientists, despite what they may think of themselves, are working stiffs just like everyone else (i.e. their work is not important just because they are doing science), and they can be lazy in both thought and action
However…
• The typical scientist doing typical work is a requisite for science as a whole to progress
• Not everyone can be an Einstein1
• With dozens to thousands of such individuals in regular communication, what eventually emerges from the individual, disparate molecules of specific and mundane research are trees, raging rivers, and planets of coherent thought that enrich our lives in their application
• Science thus proceeds as an emergent process arising from the work that everyone does, and while history may favor some names over others, no one works in isolation
• Many scientists, like our own founder Wundt, are noted not because what they did was novel, but because they tied ideas together into an understandable whole that paved new directions for research to follow
Theories
• Theories do not advance simply due to falsification or verification, exploration or confirmation, but through any of these techniques, and theories undergo modification, trimming, expansion, etc.
• Some theories require verification
• Existential-type claims (does X exist?)
• Some require falsification
• “All or none” type theories (universals)
• Some require exploration
• When dealing with the unknown
• Some require confirmation
• Does a model based on prior findings hold for this sample?
• The typical problems of science are too complex for a single simple approach, and old theories never die1, they just fade away or are absorbed by new ones
Statistics
• The problem with common approaches to analysis is that the theory is often (usually) not what is tested statistically
• At least this holds for 99% of psych and much of social science research, as well as much of the supposed ‘hard’ science work I’ve seen
• As an example, say you want to examine whether guys and gals differ on a score of verbal ability1
• Your theory is that they are different
• First issue: your theory is confirmed by any collection of data, i.e. they’ll never score exactly the same with adequate sampling
• Theory revision: guys and gals are statistically significantly different
• Second issue, and the deathblow: the hypothesis tested statistically is…
• That they are the same
• What???
• Third issue: the result is probabilistic; there is no definitive answer
• The probability is stated as follows
• Assuming guys and gals are the same, what is the probability that I would see a difference like the one here?
• If low, reject the hypothesis that they are the same
• If high… “fail to reject”, or incorrectly conclude they are the same2
• Fourth issue: the whole reasoning process is illogical (the sketch below walks through this logic with simulated data)
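To make the logic above concrete, here is a minimal simulation sketch (Python with NumPy/SciPy assumed; the group means, spread, and sample sizes are invented for illustration, not real verbal-ability data). It shows that the hypothesis actually tested is “they are the same,” that the p-value is a probability computed under that assumption, and that with a large enough sample even a trivial difference will come out “statistically significant.”

```python
# Minimal sketch of the null-hypothesis testing logic described above.
# The data are simulated; the 1.5-point (0.1 SD) difference is hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups whose true means differ only trivially.
guys = rng.normal(loc=100.0, scale=15.0, size=5000)
gals = rng.normal(loc=101.5, scale=15.0, size=5000)

# The hypothesis actually tested is that the groups are THE SAME (the null).
t_stat, p_value = stats.ttest_ind(guys, gals)

# p_value answers: "assuming the groups are the same, how probable is a
# difference at least this large?"  With enough data, even a trivial true
# difference yields a tiny p, so the revised theory 'they are statistically
# significantly different' is all but guaranteed by adequate sampling.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject 'they are the same'" if p_value < .05 else "fail to reject")
```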
Science prevails
• Despite all this, science progresses
• How?
• While perhaps a small minority, some researchers do very good work: they use a correct approach, collect the right data, use appropriate methods of analysis, know what they can and cannot conclude, understand the probabilistic nature of the process, know that their theory is wrong before they even start and so are open to contrary interpretations, and they ignore poor research that doesn’t do this sort of thing
Psychological Science
• What’s odd is that, though psychology started so strongly as a science, and while the cognitive revolution was necessary to progress beyond what was largely (but not always) an overly simplistic behaviorist approach, concomitant with it came a non-scientific side of psychology, and even hostility toward its science (e.g. humanists)
• For the better, psychological science moved out of the lab; for the worse, it kept its methods the same over time
• While there have been many advances in statistics, psychology has been very slow to adopt them, and a typical study is still doing analyses used in the 1950s (and usually inappropriately)
Psychological Science
• There are many critics of “psychological science”, mostly ignorant types who think psychology is merely private-practice therapy1
• However, those with knowledge of its methods do have plenty to criticize
• The APA, often slow to act and inadequate when it has acted, is perhaps part of the problem
• Despite putting together a task force in 1996 to examine the problems of methodology, no enforcement of the policy comes about except through the journals themselves2
• A recent study examining what had changed since that task force’s recommendations were given showed some improvement, but very modest
• Result: research that routinely does not test the assumptions of the statistical procedure, uses inappropriate methods, poorly implements correct ones, is overly simplistic (e.g. univariate), and is in general far behind where it should or could be
Example of good requirements/standards
• General
• First, that they have specific statistical standards at all
• Uses masked review
• Helps with the problem of nepotism
• Has clear guidelines for measurement information
• Journal of Consulting and Clinical Psychology1
• “JCCP requires the statistical reporting of measures that convey clinical significance. Authors should report means and standard deviations for all continuous study variables and the effect sizes for the primary study findings. (If effect sizes are not available for a particular test, authors should convey this in their cover letter at the time of submission.) JCCP also requires authors to report confidence intervals for any effect sizes involving principal outcomes.
• In addition, when reporting the results of interventions, authors should include indicators of clinically significant change. Authors may use one of several approaches that have been recommended for capturing clinical significance, including (but not limited to) the reliable change index (i.e., whether the amount of change displayed by a treated individual is large enough to be meaningful; see Jacobson et al., Journal of Consulting and Clinical Psychology, 1999), the extent to which dysfunctional individuals show movement into the functional distribution (see Jacobson & Truax, Journal of Consulting and Clinical Psychology, 1991), or other normative comparison… ”
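As a rough illustration of the “clinically significant change” indicators the quoted JCCP guidelines mention, here is a minimal sketch of the Jacobson & Truax (1991) reliable change index. The pre/post scores, standard deviation, and reliability value are hypothetical placeholders, not data from any study.

```python
# Minimal sketch of the Jacobson & Truax (1991) reliable change index (RCI).
import math

def reliable_change_index(pre: float, post: float,
                          sd_pre: float, reliability: float) -> float:
    """RCI = (post - pre) / S_diff, where S_diff is the standard error of
    the difference between two scores on the measure."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)            # SE of a difference score
    return (post - pre) / s_diff

# Hypothetical example: a symptom scale where lower scores are better.
rci = reliable_change_index(pre=30.0, post=18.0, sd_pre=7.5, reliability=0.88)
print(f"RCI = {rci:.2f}")   # |RCI| > 1.96 is the conventional cutoff for change
                            # unlikely to be mere measurement error
```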
Good
• Psychological Science1
• “Effect sizes should accompany major results. In addition, authors are encouraged to use prep rather than p values (see the article by Killeen in the May 2005 issue of Psychological Science, Vol. 16, pp. 345-353). Thus, typical statistical reports would follow formats like these:
• t(50) = 2.68, prep = .95, d = 0.76; F(1, 30) = 4.69, prep = .90, η2 = .135; or r = .61, prep = .99, d = 1.56.
• When relevant, bar and line graphs should include distributional information, usually confidence intervals or standard errors of the mean.”
• Still issues
• Lack of anonymous review
• Effect sizes, to be used appropriately, should accompany any result, not just the major ones (the sketch below shows how they follow directly from the test statistics)
• The prep is not an improvement over the p-value2
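As a side note on the effect-size reporting above, the quoted values can be recovered from the test statistics themselves with standard conversion formulas. The sketch below (Python assumed) simply re-derives the journal’s own illustrative numbers as a check; it is not part of the guidelines.

```python
# Standard conversions from common test statistics to effect sizes.
import math

def d_from_t(t: float, df: int) -> float:
    """Cohen's d for two independent groups of roughly equal size."""
    return 2 * t / math.sqrt(df)

def eta_sq_from_f(f: float, df_effect: int, df_error: int) -> float:
    """Eta-squared from a one-way F statistic."""
    return (f * df_effect) / (f * df_effect + df_error)

print(round(d_from_t(2.68, 50), 2))           # ~0.76, matching the quoted d = 0.76
print(round(eta_sq_from_f(4.69, 1, 30), 3))   # ~0.135, matching the quoted η2 = .135
```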
Bad
• Most journals out there; and ‘impact factor’, a suspect index to begin with, doesn’t necessarily help you weed the good from the bad
• Just because a journal is popular doesn’t mean people are citing worthy research, or that the study you have in front of you was done appropriately
• Is McDonald’s good food just because it’s the most popular ‘restaurant’ on the planet?
• Just because it’s not a ‘top’ journal doesn’t mean the paper within isn’t good
• Gist: you have to decide for yourself whether something is good or not based on its scientific merit, not on who the author is, which journal it’s in, or because the theory sounds ‘neat’
Psychological science, statistics, and students
• What is not clear to new psychologists (or perhaps most) is that increasing your methodological knowledge increases your ‘idea power’
• The scientific temperament allows you to think logically about a problem and about what can and cannot be generalized
• Knowing different means of analyzing information allows you to think about the research problem itself differently, and this leads to new ideas, theories and interpretations of the information that are not otherwise available to you
• Knowing more about methods allows you to be a better consumer of research
• Knowing what makes for a good measure (e.g. reliability; see the sketch below) ensures that you won’t waste time collecting poor data, or reading a study that has
• Having good standards will allow you to quickly move among the many articles that feign relevancy but may not be good research
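As one small example of the reliability point above, here is a minimal sketch of Cronbach’s alpha, a common internal-consistency index for a multi-item measure. The item scores are simulated purely for illustration.

```python
# Minimal sketch of Cronbach's alpha for a respondents x items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 5-item measure: a common 'true score' plus item-specific noise.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=0.8, size=(200, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")
```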