This article reviews the history of Public Opinion Quarterly from 1972 through 1986, with brief digressions on its prehistory at Princeton and Columbia universities and some reflections on its present and future.
Public Opinion Quarterly, designated as AAPOR’s official journal in 1948 and now celebrating its 75th anniversary, almost came to an untimely end in 1967, when Princeton University, its home since its inception, decided it could no longer afford this act of charity. (Princeton University Press actually stopped underwriting the journal’s publication costs in 1954, and a special “committee for the POQ,” organized by Paul Lazarsfeld, persuaded some 25 organizations interested in public opinion research to become “sustaining subscribers,” initially at a rate of $100 a year.) It was rescued by W. P. Davison, then a professor at Columbia University’s School of Journalism but a resident of Princeton, who literally packed up the journal’s files, loaded them into his station wagon, and drove them to his (fortunately spacious) office suite at Columbia. (A more detailed account of the journal’s early years can be found in Davison [1987, 1992].) The suite was shared with Fred T. C. Yu, another Professor of Journalism, and it included an extra room between their two offices with a couple of desks and space for an extra filing cabinet or two. From then until 1972, Davison edited the journal from this office without pay, and I, a newly minted Columbia sociology PhD with a long prehistory as an editor, then teaching in the School of General Studies, was ensconced in the middle office as managing/copy editor. By the grace of Robert K. Merton, who chaired the Sociology Department and who, along with Paul F. Lazarsfeld, Fred Yu, and Allen H. Barton, constituted the journal’s Advisory Committee, this earned me credit for teaching one course (then worth approximately $2,000).
Bernard Roshco, a former journalist earning his PhD at the School of Journalism at the time, took over as editor of the journal in 1972. His tenure continued until 1975, when, having completed his degree, he began a career at the U.S. Information Agency in Washington, D.C. When Roshco left, I was invited by the Advisory Committee to edit the journal, a position I held for the next 11 years. In 1985, AAPOR assumed ownership of POQ from Columbia University—a change leading to the journal’s more independent and stable existence. It was Albert E. Gollin, President of AAPOR in 1984–85 and a master parliamentary tactician, who successfully shepherded this transition (Davison 1992).
In the absence of a formal content analysis, what follows provides a very brief impression of the kinds of articles POQ published from 1972 to 1986. The current editors of the journal, Nancy Mathiowetz and James Druckman, kindly provided help to the former editors in the form of two measures of articles’ impact: number of citations in the ISI, and number of times printed or viewed in JSTOR. My crude attempt at classification suggests that from 1972 through 1986, about 50 percent of the 10 articles cited most frequently were methodological in nature, whereas 80 percent or more of the 10 articles downloaded most frequently were primarily substantive. (Of course, this may simply mean that substantive articles were read more frequently than methodological articles by non-AAPOR members, who did not have easy access to the journal itself.) A readership survey in 1979 found an overwhelming preference for articles on research methods, followed by articles on attitudes, communication/mass media, and theory (Singer and Glassman 1980). Accordingly, this account of the 15-year period 1972 through 1986 will focus on two broad areas: substantive research, and research on methods.
Four substantive areas stand out from those years: attitudes and public opinion; mass communication, including mass media, newspapers, and television, and their effects on public opinion; polling and the polls; and voting behavior. In terms of number of articles, probably the largest area of emphasis from 1972 to 1986 was research on attitudes, both substantive and methodological, and studies of public opinion. (The distinction between studies of attitudes and studies of public opinion is not easy to make. In indexing the journal—each Winter issue through 2003 carried an index for the preceding four issues—we tried to distinguish in-depth examinations of attitudes from opinions measured by only one or a few questions, but were not always consistent.) Here, I can give only a few examples of the wide range of attitude and opinion objects considered. Over the years, POQ has published numerous articles about racial and political attitudes, religion and the religious right, prejudice (racial as well as religious), adolescent attitudes, aggression, gender roles, environmental concern, foreign policy beliefs, the nature of public opinion support for American presidents, and many, many more. It published several articles by Milton Rokeach on the relation among values, attitudes, and behavior. In 1972, it published Schuman’s “Attitudes vs. Actions Versus Attitudes vs. Attitudes,” a cogent explanation (and persuasive empirical test) of why research on the predictive power of attitudes often fails to find a relationship, and how that can be remedied.
Mass communication, the second large area of emphasis, includes agenda setting, starting with the seminal article by McCombs and Shaw (1972) (which included the famous quote, from Cohen’s The Press and Foreign Policy , that “the press may not be successful much of the time in telling people what to think, but it is stunningly successful in telling its readers what to think about”). That article was the one most often cited and viewed in POQ’s entire history; it has been cited more than 700 times and downloaded almost 49,000 times! Also included are Davison’s 1983 article on “The Third-Person Effect in Communication” (cited 244 times), which notes that people often ascribe a great deal more influence to the effect of mass communication on other people than they do to its effect on themselves, and the related concept of “pluralistic ignorance,” with articles by O’Gorman (1975), O’Gorman and Garry (1976), and Fields and Schuman (1976), among others.
Whereas research on newspapers and the press tended to be subsumed under the topic of “mass media” or “mass communication,” research on television received considerable separate coverage throughout the 15-year period. In 1972, POQ’s editors invited Leo Bogart, Executive Vice President and General Manager of the Advertising Bureau of the American Newspaper Publishers Association, to prepare an extended review article of the recently published Surgeon-General’s Study of Television and Social Behavior. When the 10-year follow-up study was released in 1982, the editors asked Thomas D. Cook, a psychologist noted for his research on television, to perform the same function; the result was a 40-page analysis of the Report’s emphases and methodology (Cook, Kendzierski, and Thomas 1983). Both bear rereading today, and indeed both were among the 10 most frequently downloaded articles of their time. The journal also contributed to some of the major theoretical controversies of the era, namely Gerbner’s theory of television effects, known as “cultivation analysis” (Gerbner et al. 1980), and Noelle-Neumann’s theory of the “spiral of silence” (1977).
A third area of emphasis was polling, especially its relationship to the press. I distinguish here between in-depth studies of public opinions or attitudes (already discussed) and studies of public opinion polls per se. Included under the latter heading is a special issue on “Polling and the Press,” edited by Albert E. Gollin (1980), which considered recurring as well as new issues in the relation between polls and the press—the uses and effects of polls, the legitimacy of “newspapers making their own news” by sponsoring public opinion polls and publishing their results, and the relationship between journalism and social science. In Spring 1986, POQ published “A Symposium on Polls: Is There a Crisis of Confidence?,” an early warning of what has since come to pass with respect to response rates and public cooperation, with articles by Andy Kohut, Bud Roper, Stephen Schleifer, and John Goyder. Also included under this heading are articles such as Declercq’s “The Use of Polling in Congressional Campaigns” (1978) and Perry’s “Certain Problems in Election Survey Methodology” (1979), though this might well be classified under methodology, as well as historical surveys of public opinion, such as Smith’s “America’s Most Important Problem—A Trend Analysis” (1980). It also, of course, includes a feature of most issues of the journal then and now: a collection of public opinion poll results on some substantive topic of current interest (e.g., unions and strikes [de Boer 1977a], nuclear energy [de Boer 1977b], health insurance [Erskine 1975], homosexuality [de Boer 1978], China [de Boer 1980], U.S. military intervention [Benson 1982], and the Arab-Israeli conflict [de Boer 1983]). Plus ça change …
The fourth area of emphasis in the journal during 1972–1986 was turnout and voting behavior as a preeminent expression of public opinion. For example, the Spring 1975 issue included six articles on “The American Polity and Public Opinion,” including articles on party identification in the South from 1952 to 1972 (Gatlin 1975), changes in voter turnout during the same period (Hout and Knoke 1975), and social class and party support in 1972 (Glenn 1975). Traugott and Katosh examined response validity in surveys of voting behavior in Fall 1979, and Traugott and Tucker looked at the prediction of turnout and election outcomes in Spring 1984—both of these, of course, could also be considered examples of methodological investigations. Elections, in addition to voting and turnout, were also a matter of continual concern, and this was an area to which Kurt and Gladys Lang contributed extensively (e.g., Lang and Lang 1978, 1980).
I can’t resist mentioning three idiosyncratic areas that received early, scattered attention: “Theory” shows up occasionally in the Index, as do “Anonymity and Confidentiality,” and, surprisingly often, “Ethics.”
Research on Methods
As noted earlier, about half of the 10 most frequently cited articles in 1972–1975 and 1976–1986 were methodological in nature, though the word “methodology” rarely appeared during those years. Instead, the journal published research on “research methods,” “survey research,” and the more specific headings discussed below, as well as articles that were truly methodological but examined the logic of research design in relation to specific applications (e.g., the Surgeon General’s reports on television and social behavior, the prediction of behavior from racial attitudes, the prediction of voting from public opinion polls, and many similar topics).
Nevertheless, a real change was apparent between 1972 and 1975. In 1972, about a dozen Index entries were related to research methods (including an entry under Mail Questionnaires by Dillman, “Increasing Mail Questionnaire Response in Large Samples of the General Public,” and one entry under Lost-Letter Technique by Georgoff, Hersker, and Murdick). By 1975, there were at least two dozen such entries, including “Nonresponse” (though for many years that heading carried only a cross-reference, “see Response Rate”). As early as 1976, POQ carried an article by Goudy, “Nonresponse Effects on Relationships Between Variables.” The 1975 Index also featured the term “Methodology,” indexing under that category an article on Guttman scales (McConaghy 1975), another on studying rare populations by means of secondary analysis (Reed 1975), and still a third on estimating public opinion with the randomized response technique (Wiseman, Moriarty, and Schafer 1975). (A cautionary note: As should be clear from some of these examples, an unknown proportion of apparent change in emphasis over time is due to editorial inconsistency in indexing.)
The journal’s attention to research methods continued to increase after 1975. In the annual Indexes, it’s possible to trace the rise and fall of interest in specific problems and techniques for solving them. Bandwagon effects, for example, received some attention in the early 1980s but none in more recent issues. “House effects” share a similar fate. Methods become increasingly differentiated: By 1979, for example, there are separate entries for mail surveys, self-administered surveys, and telephone surveys, but no longer one for survey research in general; by 1983, all these modes have separate entries under the general heading of “Survey Research.” But by 1984, alas, there remains only the general heading, “Survey Research,” and one other, “Survey Research. Telephone Surveys.” Sic transit gloria.
Whether the increased attention to articles on methods research was due to a changing emphasis in the field or the interests of changing editors, or both, is difficult to determine, and it’s likely that both were at play. My impression (and that of Davison) is that the number of manuscripts coming in “over the transom” increased over time, whereas those solicited by the editor decreased. At the same time, the review process became more routinized and rigorous, with the editor having less discretion about acceptance or rejection.
Whatever the reason, examination of the contents suggests that at the beginning as well as at the end of the 1972–1986 period, POQ was publishing the leading methodological research in the survey field, exclusive perhaps of research that was primarily statistical in nature. From 1972 to 1975, the article with the largest number of citations after “The Agenda-Setting Function of the Mass Media” (McCombs and Shaw 1972) was Linsky’s review article “Stimulating Responses to Mailed Questionnaires” in Spring 1975, with 225 citations. Linsky examined research in several fields, including education, sociology, psychology, and business, over a 30-year period, concluding that number of contacts and cash rewards were the most effective stimulants of response. Also among the most frequently cited methodological articles during this period were Armstrong’s (1975) review of the effect of prepaid monetary incentives in mail surveys, which attempted to estimate the relationship between size of incentive and reduction in nonresponse from 18 published studies, and Dillman’s (1972) article, already cited, on increasing response to mail questionnaires.
In the decade 1976–1986, the most frequently cited methodological articles reflect increasing attention to modes other than mail in conducting surveys. Steeh’s groundbreaking article, “Trends in Nonresponse Rates, 1952–1979,” in the Spring 1981 issue, traces the increase in nonresponse in two national surveys then conducted face-to-face by the same survey organization during the entire 27-year period. Access to field records clearly established that an increase in nonresponse had taken place in both surveys, fueled mainly by an increase in refusals rather than noncontacts and other reasons. Also among the 10 articles cited most frequently from 1976 to 1986 are Kiesler and Sproull’s analysis of response effects in “electronic surveys” in Fall 1986, which suggested that such surveys might produce less socially desirable answers than mail surveys; Aneshensel et al.’s (1982) comparison of telephone and personal interviews in measuring depression in the general population, which found no significant differences between the two modes; and Traugott and Katosh’s (1979) study of the validity of vote reports, which compared records of registration and voting with survey reports of these behaviors in the 1976 election. During this decade, too, POQ began publishing the work of younger scholars who were to become leaders in the new field of survey methodology, among them Groves, Tourangeau, Presser, and Traugott.
Methodology Then and Now
Despite the journal’s continuous interest in the methods of survey research, changes are apparent even within the 15-year period under examination here. Most research on methods reported earlier in the period was undertaken in the context of research on attitudes or opinions. For a variety of reasons, including changes in funding sources, the focus on attitude and opinion research has declined in recent years. This decline has implications both for the kinds of research reported and for how closely research on methods is tied to substantive research.
I believe—without having done the rigorous comparison—that more “pure” methodological research is being published today than was the case in the earlier period—for example research on nonresponse bias, the use of paradata, synthetic data to protect against disclosure, and the like. Of course, all such research is done in a certain context—a particular mode, sponsor, sample, topic, response rate, and the like—but the context is often reported only as an acknowledgment of the study’s limitations.
What these changes portend for the future of the journal is difficult to say. Its founders envisioned a close collaboration between researchers from different disciplines as well as government, commercial, and academic settings, doing high-quality research on attitudes and behavior in many spheres of life. Whether such collaborations are still possible and, more important, relevant in an age of increasing specialization is one of the most important questions facing AAPOR as an organization and Public Opinion Quarterly as the organization’s official journal. How to adapt to a changing environment for survey research is another. Writing in the Introduction to POQ’s Fiftieth Anniversary Issue in Winter 1987, I said, “Where the POQ will be 50 years from now—or if indeed there will be such a thing—is anybody’s guess.” Writing in the same issue, Davison (1987, p. S6) notes that as editor in 1948 he had volunteered POQ as AAPOR’s official journal, “fearing that establishment of a new publication by AAPOR would drain off good articles … and that the POQ would die of starvation.” My hope is that AAPOR’s pending decision to launch a second journal, devoted to statistical methodology, will not have the same feared effect.