Pacific Sociological Review, Vol. 17, No. 3, July 1974, pp. 370-376.
SHOULD SOCIOLOGY REQUIRE STATISTICS?
JOHN RAY
University of New South Wales
Most published accounts of what should be taught in sociology courses include statistics (e.g., Timms and Zubrzycki, 1971; Emery, 1970; Blalock, 1969). The debate is not over whether they should be taught but over how they should be taught. It seems to the present writer, however, that this state of affairs does not well represent what many sociologists and advanced sociology students actually feel -- in particular, it is not representative of what sociologists actually do teach. In conversation, one detects a hostility to anything statistical. Statistics are referred to as "decorations" for journal articles and are condemned as a fetish. Worst calumny of all, they are, outside the United States, sometimes referred to as "American" (see also Berger, 1963). To take the example of actual teaching closest to the present author, a perusal of the 1971 course outlines in the School of Sociology at the University of New South Wales reveals that, over all the first-, second-, third-, and fourth-year courses, only one lecture on statistics is given [1]. This practice is so far from the recommendations of the writers quoted above that there is clearly a need for some basic debate over the place and necessity of statistics before detailed course recommendations such as those by Emery (1970) can be placed in context. The situation in American universities is probably seldom as extreme as the above instance, but it is certainly true that student agitation has on some campuses succeeded in having statistics removed as a degree requirement.
The paramount thing to keep in mind in respect of both probability statistics and psychometric measurement procedures is that they represent precautions. They are precautions we take before we risk generalizations from incomplete data. They are, moreover, precautions of a standardized nature. They enable us better to compare the results of one study with the results of another. The unwillingness to take such simple precautions (reflected in criticisms directed at journal editors who request statistical treatment of empirical data) does then seem to represent a dangerous sort of irresponsibility. What is the good of a "relevant" science (Walker, 1970) if its findings cannot be relied on to any known degree? Statistics are sometimes said in fact to militate against sociology being a "relevant" science. There may be some truth in this, but it certainly cannot be said that a statistical science must be non-relevant, nor does it follow that a non-statistical science will be more relevant than a statistical one. (Nor is "relevance" in its turn any guarantee of truth or accuracy. At best, it is a demand for applied research; at worst, it is an adolescent impatience with enterprises of a necessarily long-term or step-by-step nature.) In fact, the choice between a statistical sociology and a non-statistical sociology would seem to be a choice between science and non-science. Without the precaution orientation of traditional scientific method, we are reduced to something little different from "what Mrs. Jones said over the backyard fence." Selvin (1957) or Labovitz (1970) is sometimes quoted to justify the failure to use probability tests, but see here Gold (1969) and the symposium in the April 1971 issue of this journal.
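A minimal sketch of such a precaution, in Python and with survey figures invented purely for illustration, might run as follows: a 95% confidence interval around a sample proportion states the degree to which a generalization from incomplete data can be relied upon, and does so in a form that is standardized from study to study.

```python
# Illustrative sketch only: a standardized precaution before generalizing
# from incomplete data -- a 95% confidence interval for a sample proportion.
# The survey figures below are invented for the example.
import math

n = 400          # hypothetical sample size
agree = 248      # hypothetical number of respondents agreeing with an item

p_hat = agree / n                          # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error of the proportion
margin = 1.96 * se                         # 1.96 is the approximate 95% normal quantile

print(f"sample proportion = {p_hat:.3f}")
print(f"95% CI = ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```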
Many anti-statistical sociologists will not of course be at all dismayed by the observations made so far. They will be saying, "We teach social criticism, not social science." They see themselves as having taken over the role of social critic -- once the field of academic philosophers but now largely abandoned by them (Williams, 1970). The present author believes that the need for social criticism of this sort cannot be questioned, and he also believes that sociology is the proper place for it. It is also true, however, that sociologists have no monopoly on social criticism. The most prolific and the most widely heard social critics in our society are without a doubt journalists, columnists, broadcasters, newspaper editors, and, last but certainly not least, economists (such as J. K. Galbraith, Milton Friedman, and, in Australia, Colin Clark). The question then arises: "What has the sociologist to offer that someone like Malcolm Muggeridge has not?" In many cases, it seems the honest answer must be: "Nothing." A social critic who is also a social scientist, however, does have something to offer that the layman does not: specialized knowledge of a known degree of accuracy and techniques for acquiring more of the same.
The "social criticism" defense of an anti-statistical orientation, however, is perhaps made in an awareness that sociology is characterized by what Mannheim has called a "debunking" style of thought (see also Berger, 1963). Surely, however, such a defense cannot hope to imply that one can teach "criticism" in isolation from its social subject matter. Sociology may be characterized by a critical approach to data gathering and interpretation, but this critical approach is applied in the process of doing social science. Probability statistics in fact are one way of forcing oneself (and one's colleagues) to be critical of what any body of data may be taken to imply. They do have the advantage of standardizing the level of our skepticism. Skepticism must stop somewhere if we are not to end up in solipsism. The p < .05 level is such a "somewhere" -- perhaps the only "somewhere" that can be standardized from occasion to occasion.
To take another possible tack, perhaps the enterprise of "social criticism" mentioned above is meant to comprehend making diagnoses of social ills and subsequently giving recipes for their cure (via social change). In this case, the necessity for a base in social scientific knowledge should be obvious. A person who offers to cure bodily ailments without being trained in medicine is called a quack. A person who offers cures for social ills without being trained in social science is called (to be polite) a layman.
Yet another type of defense that is sometimes given by "sociologists" with little enthusiasm for empirical work is: "Sociology also includes expanding your consciousness" -- i.e., it is a legitimate part of sociology simply to generate new sensitivity to personal and social phenomena. Even if this laudable goal can somehow be systematically achieved (by Titchenerian introspection? by psychomimetic drugs?), it is not clear that the matter should ever be allowed to rest there. When the new ideas and conceptualizations are generated, they are of little worth and perhaps even of little plausibility until they have been subjected to an empirical check. In any case, the present author suspects that an older discipline has somewhat beaten these sociologists to the gun in developing new ways of perceiving and dealing with "reality." Writing of the value in a study of English literature, Leavis (1952: 194) says:
Without the sensitizing familiarity with the subtleties of language, and the insight into the relations between abstract or generalizing thought and the concrete of human experience, that the trained frequentation of literature alone can bring, the thinking that attends social and political studies will not have the edge and force it should.
To the present author, Leavis' claim seems a highly defensible one. Surely no one, however, would claim that a study of English literature (or any study with equivalent effect) was alone enough to entitle one to call oneself a sociologist.
That the attempt to find a defensible role as a non-social-scientific sociologist should simply turn one into a tyro in a field long covered by others holds equally if one believes one has a peculiar competence to teach critical thought per se (i.e., without the "social" adjective). If there were in fact anyone who could claim particular competence to teach this, it would, of course, be the philosopher.
Another variant on the anti-statistical theme is: "I teach social theory -- not mindless fact-grubbing." Perhaps the archetypical utterance of such a person is: "But what Weber really said was . . . ." Such people seem oblivious of the fact that what Weber himself was concerned with was facts, data. Using such data as he had, Weber produced theories which would explain them. Instead of parsing Weber's sentences, a potentially more profitable venture would be to emulate Weber: Don't merely praise him, but also gather more and better data. Theories by their very nature exist to explain data, and the data-less theorist is a solipsist.
Nothing in this paper, moreover, can be construed as demeaning the importance of theory. Indeed, a description of any science as a theory-to-data and data-to-theory interaction would seem to be axiomatic. Parenthetically, it is sad to note that the sort of research produced or encouraged by capital 'T' Theorists (when system pressures make it obligatory for them to do so) very often tends toward the "shotgun empiricism" type -- where theory is at best an afterthought. Prior to his moving from a psychology to a sociology department, it had never occurred to the present author that an academic could do a piece of research that was not a test of some theory. For many sociologists, however, theory and research seem two separate and only accidentally related realms. This demeaning of theory in research planning explains the suggestion of one colleague that some sociology departments are merely second-rate market research organizations in which the only skill required is the ability to add up. The sorts of research topics one has in mind as examples of this tendency are "home units" or "air pollution." Relevance is a poor apology here. It may indeed be the very worst trivialization of all.
The statistics section of an actual course might well be taught according to the syllabus proposed by Emery (1970). The present author feels, however, that Emery assumes a depth of statistical understanding on the part of his colleagues which would normally be found only in professional statisticians. A more realistic goal would be to say that we should teach our students how and under what conditions to use the tools that the statisticians have already given us. To take an analogy: If every factory worker in industry had to understand the principles involved in constructing the tools he used, very little work would ever be done. The main thing is that he be informed how and when to use those tools. There is no point or hope in trying to develop our sociology students to the level where they can do the statistician out of a job.
The measurement section of the course would comprise as a minimum an introduction to the rationale and practice of the three major sorts of attitude scaling: Likert, Thurstone, and Guttman. This is sometimes thought to be the province of the psychologist, but the present author's impression is that it is in fact the sociologist whose work is most likely to require their use. The psychologist always has his Skinner box, his visual illusions, and his dissection table to occupy him. It is the opinion of La Piere (1969), however, that the empirical sociologist would have very little left indeed if you took away his attitude scales. Whatever the truth of that contention may be, it is at least evidence that attitude scaling does have a place in a course such as the one being discussed.
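A minimal sketch of Likert-style summated scoring, in Python, with responses and a reverse-keyed item invented purely for illustration; Cronbach's alpha is added as the customary internal-consistency check, though it is by no means the only precaution an attitude-scaler would want to take.

```python
# Illustrative sketch only: Likert-style summated scoring with a reverse-keyed
# item, plus Cronbach's alpha as a reliability check. Responses are invented.
import numpy as np

# Rows = respondents, columns = items, responses on a 1-5 agree/disagree scale.
responses = np.array([
    [5, 4, 2, 5],
    [4, 4, 1, 4],
    [2, 1, 5, 2],
    [3, 2, 4, 3],
    [5, 5, 1, 4],
])

reverse_keyed = [2]                     # hypothetical: the third item is worded negatively
scored = responses.copy()
scored[:, reverse_keyed] = 6 - scored[:, reverse_keyed]   # reverse the 1-5 scoring

totals = scored.sum(axis=1)             # each respondent's summated scale score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = scored.shape[1]
item_vars = scored.var(axis=0, ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / totals.var(ddof=1))

print("scale scores:", totals)
print(f"Cronbach's alpha = {alpha:.2f}")
```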
One point of debate which has not been treated so far is the often-heard contention that while statistics may be necessary for social science, we do not need to teach them. We can always find a "friendly statistician" to help us out when needed. This is analogous to saying that a motor-mechanic does not need to know about gearboxes -- he can always call in a gearbox-specialist when needed. Both statements are, of course, perfectly true. Both social scientists and motor mechanics can carry on with only a part of the skills they need to use in carrying out their vocation. Whether they should expect to do so is another question. There is the additional consideration that a consultant statistician in combination with a non-statistical social scientist will never be as effective or as efficient as the one man who combines both skills. This is because statistics, as well as providing a probability test, also have important analytical and descriptive functions. Data as complex as those needed to deal with human subjects will always be analyzable in a variety of ways. One type of analysis may reveal a relationship that another type of analysis failed to detect. Only an intimate familiarity with both the sociological subject-matter and the statistical techniques can hope to yield the optimal combination of data characteristics and analytical method.
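A minimal sketch of that last point, in Python and on a deliberately constructed, wholly artificial data set: a pooled correlation suggests one relationship between two variables, while an analysis within subgroups reveals quite another, and only someone who knows both the subject matter and the techniques is likely to think of running both analyses.

```python
# Illustrative sketch only: the same (invented) data analysed two ways.
# A pooled correlation can obscure a relationship that a within-group analysis detects.
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical subgroups: within each, x and y rise together, but the group
# means run in the opposite direction, so pooling reverses the apparent association.
x_a = rng.normal(0, 1, 200); y_a = x_a + rng.normal(0, 1, 200) + 3
x_b = rng.normal(3, 1, 200); y_b = x_b + rng.normal(0, 1, 200) - 3

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

print(f"pooled correlation:  {np.corrcoef(x, y)[0, 1]:.2f}")
print(f"group A correlation: {np.corrcoef(x_a, y_a)[0, 1]:.2f}")
print(f"group B correlation: {np.corrcoef(x_b, y_b)[0, 1]:.2f}")
```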
As an extension of this point, it is desired to argue that data analysis is a creative enterprise in its own right. The present author has on several occasions been the "friendly statistician" who has undertaken to analyze other people's data. The degree to which an analyst can advance the status of a project is evidenced by the fact that on three of those occasions joint authorship was obtained and on two out of three of those occasions the present writer was in fact accorded the status of senior author. Surely, then, we do not want the training we give our students to reduce them to the fate of being perennial tributaries to some statistical grandee.
NOTE
1. I understand, however, that 1971 is to some extent an atypical year in this respect.
REFERENCES
Berger, P. L. (1963) Invitation to Sociology. Harmondsworth, Eng.: Penguin.
Blalock, H. M. (1969) "On graduate methodology training." Amer. Sociologist 4, 1: 5-6.
Emery, F. E. (1970) "The teaching of methodology in sociology." In J. Zubrzycki (ed.) The Teaching of Sociology in Australia and New Zealand. Melbourne: Cheshire.
Gold, D. (1969) "Statistical tests and substantive significance." Amer. Sociologist 4, 1: 42-46.
Labovitz, S. (1970) "The nonutility of significance tests: the significance of tests of significance reconsidered." Pacific Soc. Rev. 13: 141-148.
La Piere, R. T. (1969) "Comment on Irwin Deutscher's looking backwards." Amer. Sociologist 4, 1: 41-42.
Leavis, F. R. (1952) The Common Pursuit. Harmondsworth, Eng.: Penguin.
Selvin, H. C. (1957) "A critique of tests of significance in survey research." Amer. Soc. Rev. 22 (October): 519-527.
Timms, D. W. G. and J. Zubrzycki (1971) "A rationale for sociology teaching in Australia." A.N.Z. J. of Sociology 7, 1: 3-20.
Walker, E. L. (1970) "Relevant psychology is a snark." Amer. Psychologist 25, 12: 1081-1086.
Williams, G. (1970) "Philosophers at war." Australian (Dec. 18): 11.