From John Ray's shorter notes
November 09, 2019
Blinding Themselves: The Cost of Groupthink in Social Psychology
The article below notes that the Leftist bias in academe can damage social psychological research. I saw a vivid example of that during my research career in social psychology. I noted that when my colleagues designed a scale to measure conservatism, it normally showed little if any correlation with vote. So, apparently, lots of Leftists voted for conservative political candidates!
Basically, the scale designers never talked to conservatives, so they had only stereotyped and incorrect ideas of what conservatives actually thought. Insofar as one can summarize it, the Leftist researchers saw conservatives as brutes, whereas in reality conservatives are basically cheerful, relaxed people. They rarely hit back the way Mr Trump does -- which is one reason why Mr Trump so shocks the Left. Gentlemen like Ronald Reagan and George Bush II are much more representative of the conservative mainstream. So the scales devised by Leftists measured something that existed only in their own heads.
By contrast, my scales of conservatism predicted vote very solidly, with correlations as high as .50. How come? I am a conservative, so I was intimately familiar with what conservatives actually think. My scales were valid. The Leftists' scales were not.
So did Leftists start using my scales? No way! They preferred to continue using their own invalid scales, thus leaving the meaning of their findings unknown.
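For readers unfamiliar with how predictive validity is assessed, the sketch below shows roughly what "predicting vote with a correlation of .50" means in practice: each respondent's total scale score is correlated with a binary vote variable. All the data, items, and numbers here are invented purely for illustration; they are not the author's scales or results.

```python
# Minimal sketch of checking a scale's predictive validity, using made-up data.
# A Pearson correlation with a dichotomous criterion (vote / no vote) is the
# point-biserial correlation. Real attitude-scale data rarely exceed ~.50;
# these toy numbers separate the groups cleanly, so r comes out much higher.

import numpy as np

# Hypothetical responses: 8 respondents x 5 items, each scored 1 (disagree) to 5 (agree)
responses = np.array([
    [5, 4, 5, 4, 5],
    [4, 5, 4, 5, 4],
    [2, 1, 2, 2, 1],
    [1, 2, 1, 1, 2],
    [4, 4, 3, 5, 4],
    [2, 2, 1, 3, 2],
    [5, 5, 4, 4, 5],
    [1, 1, 2, 2, 1],
])

# Total conservatism score per respondent (higher = more conservative answers)
scores = responses.sum(axis=1)

# Hypothetical vote: 1 = voted for the conservative candidate, 0 = did not
vote = np.array([1, 1, 0, 0, 1, 0, 1, 0])

# Point-biserial correlation between scale score and vote
r = np.corrcoef(scores, vote)[0, 1]
print(f"Scale-vote correlation: r = {r:.2f}")
```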
The social sciences have a problem: If their scholars think too much alike, they will be blinded to the flaws and gaps in their research. Rather than explaining how individuals in society act and think, academics can sometimes slip blinders on themselves and the public.
Polling shows broad agreement within some disciplines. For instance, recent data from the Society for Personality and Social Psychology’s Diversity and Climate Survey revealed that almost 90 percent of its members who took the survey self-identify as liberals, while fewer than 5 percent identify as conservatives. This imbalance seems to affect how welcome conservative academics feel in scientific environments: They report feeling more excluded, they feel less free to express their ideas at SPSP events, and they do not believe that SPSP lives up to its diversity values.
And a study by Yoel Inbar and Joris Lammers of Tilburg University in the Netherlands showed that more than a third of the American scholars surveyed would be willing to discriminate against a conservative job candidate in hiring decisions, all else being equal.
In theory, the lack of political diversity shouldn’t affect research quality. Western civilization developed scientific methodologies to make sure that knowledge is universal and shareable. If the methods and analyses are adequate, the data openly available, and the conclusions justified, then any qualified investigator could evaluate the merits of a study. Ideally, scientific validity does not depend on the political or moral values of the scientist, but on the soundness of the research process.
However, personal values and biases can influence researchers in multiple ways: they shape how scientific ideas are conceived, developed, and tested. One of the biggest effects is on which research questions get asked.
How does a social scientist decide what to study? Undoubtedly, personal preferences push academics toward some topics over others. Similarly, scholars are embedded in a research hierarchy (laboratories, advisors, mentors, colleagues, assistants) that might make decisions for them—especially early in their careers. But those communities are usually committed to specific goals. Members of a laboratory studying the effects of smoking on the academic performance of college students probably are not indifferent to policies regarding smoking on campus. Those who study economic development want to find ways to ameliorate poverty. And those who study depression want to treat it better.
Funding agencies bring their own values to research, too: Grants are given to advance scientific knowledge in specific areas chosen by the values of the funding agency. Grant recipients, in turn, need to adjust their research interests to the funder’s vision.
Thus, the personal views of researchers shape research programs, because researchers investigate what they judge to be most important. For example, the last three issues of the Journal of Social Issues (the flagship journal of the Society for the Psychological Study of Social Issues) were special issues dedicated to “neoliberalism,” “ableism,” and “immigration and identity multiplicity.” These topics and the language used are clearly aligned with specific left-leaning views, which express what those scientists believe needs to be studied. It’s unlikely that special issues investigating entrepreneurship, the benefits of patriotism, or gender complementarity will follow.
It’s important to note, however, that choosing some topics over others is not a sign of low-quality science per se. Scientific studies on ableism might be as rigorous as any. The problem is that the ideological imbalance among researchers means equally valid research questions that enrich the understanding of society are left uninvestigated.
For example, for decades it was taken as common psychological knowledge that conservatives were more intolerant and prejudiced than liberals. However, psychologist Jarrett Crawford showed, in a series of studies, that those results depended on which groups were the target of prejudice. While right-wingers showed more prejudice and intolerance toward blacks, LGBT individuals, and welfare recipients, left-wingers showed similar levels of intolerance toward those with right-wing political values.
In other words, what is being researched depends on personal, social, and institutional values, considerations that are not necessarily rational or objective. Liberal scholars studying prejudice might focus their research on victimized groups rather than more-secure ones, which is a noble objective and a valid scientific decision. Yet their research can lead to activists or other academics claiming more than is scientifically valid.
In psychological terms, it is not that conservative ideologies are necessarily linked to prejudice, as had been suggested since the 1950s. Crawford showed—and the psychological establishment has come to accept—that prejudice can be found across the political spectrum, but targeted at different groups. That is the way science makes progress—testing the accepted consensus and foundational knowledge.
In another domain, a 2014 study showed that women with unplanned pregnancies did not change their decision about having an abortion after looking at an ultrasound. Those findings can be—and have been—used as scientific evidence for specific policy views and partisan agendas. However, the study was conducted at Planned Parenthood Los Angeles, where about 9 in 10 incoming patients were reported to be “highly certain” about their decision to terminate the pregnancy. This study is valuable in and of itself, but it should not be stretched to imply that all women with unplanned pregnancies will be unaffected by looking at an ultrasound.
And in a unique instance, one professor discovered his own research was biased because he didn’t have anyone around to challenge his assumptions. In the late 1990s Keith Stanovich, a prominent cognitive psychologist at the University of Toronto, and his colleagues published a scale to measure “actively open-minded thinking”: the disposition to rely on reasoning rather than impulses, to revise one’s beliefs, and to tolerate ambiguity. Recent studies showed that this trait was strongly negatively correlated with religious belief: the more religious someone is, the less open-minded they are. Those findings were consistent with previous literature about the relationship between religious beliefs and analytical thinking.
However, in a highly unusual publication, Stanovich himself revised his own scales and realized that they might be intrinsically skewed against religious individuals. Evidence showed that once the bias in the open-minded scale is corrected, the correlation decreases noticeably. Reflecting on this, Stanovich wrote: “It never occurred to us that these items would disadvantage any demographic group, let alone the religious minded. No doubt it never occurred to us because not a single member of our lab had any religious inclinations at all.”
The above examples show how the ideological imbalance in the social sciences has a cost. Some questions don’t get asked. Then, established “knowledge” does not get challenged for inaccuracy because academics do not have another way to frame the issue. Since the demographics of academia are not likely to change in the short-term, how can this issue be addressed by researchers?
The key is dialogue: In the early stages of a research project, social scientists could reach out to scholars in departments that traditionally do not hold dominant liberal views (such as business schools, health sciences, or engineering departments). Even a non-technical discussion of research questions could yield valuable insights about potential blind spots. Academic institutions could promote these dialogues to improve scientific research—which is the very reason they exist in the first place.
SOURCE