Show simple item record

Feeling Thermometers Versus 7-Point Scales

dc.contributor.authorAlwin, Duane F.en_US
dc.date.accessioned2010-04-14T14:11:42Z
dc.date.available2010-04-14T14:11:42Z
dc.date.issued1997en_US
dc.identifier.citationAlwin, Duane F. (1997). "Feeling Thermometers Versus 7-Point Scales." Sociological Methods & Research 25(3): 318-340. <http://hdl.handle.net/2027.42/68989>en_US
dc.identifier.issn0049-1241en_US
dc.identifier.urihttps://hdl.handle.net/2027.42/68989
dc.description.abstractThis study addresses the issue of the relation between the number of response categories used in survey questions and the quality of measurement. Several hypotheses, derived from relevant theory and research, are tested through a comparison between 7- and 11-category rating scales used in the 1978 Quality of Life Survey. One hypothesis derived from information theory, that rating scales with more response categories transmit a greater amount of information and are therefore inherently more precise in their measurement, is strongly supported. A second hypothesis, that questions with greater numbers of response categories are more vulnerable to systematic measurement errors or shared method variance, is rejected. This study supports the conclusion that questions with more categories are both more reliable and more valid.en_US
dc.format.extent3108 bytes
dc.format.extent2272387 bytes
dc.format.mimetypetext/plain
dc.format.mimetypeapplication/pdf
dc.publisherSAGE PERIODICALS PRESSen_US
dc.titleFeeling Thermometers Versus 7-Point Scalesen_US
dc.typeArticleen_US
dc.subject.hlbsecondlevelSociologyen_US
dc.subject.hlbtoplevelSocial Sciencesen_US
dc.description.peerreviewedPeer Revieweden_US
dc.contributor.affiliationumUniversity of Michiganen_US
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/68989/2/10.1177_0049124197025003003.pdf
dc.identifier.doi10.1177/0049124197025003003en_US
dc.identifier.citedreferenceAlwin, D. F. 1974. “Approaches to the Interpretation of Relationships in the Multitrait-Multimethod Matrix.” Pp. 79-105 in Sociological Methodology 1973-74, edited by H. L. Costner. San Francisco: Jossey-Bass.en_US
dc.identifier.citedreferenceAlwin, D. F. 1988. “Structural Equation Models in Research on Human Development and Aging.” Pp. 71-170 in Methodological Issues in Aging Research, edited by K. W. Schaie, R. T. Campbell, W. Meredith, and S. C. Rawlings. New York: Springer.en_US
dc.identifier.citedreferenceAlwin, D. F. 1989. “Problems in the Estimation and Interpretation of the Reliability of Survey Data.” Quality and Quantity 23:277-331.en_US
dc.identifier.citedreferenceAlwin, D. F. 1991. “Research on Survey Quality.” Sociological Methods & Research 20:3-29.en_US
dc.identifier.citedreferenceAlwin, D. F. 1992. “Information Transmission in the Survey Interview: Number of Response Categories and the Reliability of Attitude Measurement.” Pp. 83-118 in Sociological Methodology 1992, edited by P. V. Marsden. Oxford, UK: Basil Blackwell.en_US
dc.identifier.citedreferenceAlwin, D. F. and D. J. Jackson. 1979. “Measurement Models for Response Errors in Surveys.” Pp. 68-119 in Sociological Methodology 1980, edited by P. V. Marsden. San Francisco: Jossey-Bass.en_US
dc.identifier.citedreferenceAlwin, D. F. and J. A. Krosnick. 1985. “The Measurement of Values in Surveys: A Comparison of Ratings and Rankings.” Public Opinion Quarterly 49:535-552.en_US
dc.identifier.citedreferenceAlwin, D. F. and J. A. Krosnick. 1991. “The Reliability of Attitudinal Survey Measures: The Role of Question and Respondent Attributes.” Sociological Methods & Research 20:139-181.en_US
dc.identifier.citedreferenceAndrews, F. M. 1984. “Construct Validity and Error Components of Survey Measures: A Structural Modeling Approach.” Public Opinion Quarterly 48:409-442.en_US
dc.identifier.citedreferenceAndrews, F. M. 1990. “Construct Validity and Error Components of Survey Measures: A Structural Modeling Approach.” Pp. 15-51 in Evaluation of Measurement Instruments by Meta-Analysis of Multitrait-Multimethod Studies, edited by W. E. Saris and A. van Meurs. Amsterdam, Netherlands: North-Holland.en_US
dc.identifier.citedreferenceAndrews, F. M. and S. B. Withey. 1976. Social Indicators of Well-Being: Americans' Perceptions of Life Quality. New York: Plenum.en_US
dc.identifier.citedreferenceBenson, P. H. 1971. “How Many Scales and How Many Categories Shall We Use in Consumer Research? A Comment.” Journal of Marketing 35:59-61.en_US
dc.identifier.citedreferenceBentler, P. M. and D. G. Bonett. 1980. “Significance Tests and Goodness of Fit in the Analysis of Covariance Structures.” Psychological Bulletin 88:588-606.en_US
dc.identifier.citedreferenceBirkett, N. J. 1986. “Selecting the Number of Response Categories for a Likert-Type Scale.” In Proceedings of the American Statistical Association (Section on Survey Research Methods). Washington, DC: American Statistical Association.en_US
dc.identifier.citedreferenceBradburn, N. and C. Danis. 1984. “Potential Contributions of Cognitive Research to Questionnaire Design.” Pp. 101-129 in Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, edited by T. B. Jabine, M. L. Straf, J. M. Tanur, and R. Tourangeau. Washington, DC: National Academy Press.en_US
dc.identifier.citedreferenceBrowne, M. W. 1984. “The Decomposition of Multitrait-Multimethod Matrices.” British Journal of Mathematical and Statistical Psychology 37:1-21.en_US
dc.identifier.citedreferenceCampbell, A. and P. E. Converse. 1980. The Quality of American Life: 1978 Codebook. Ann Arbor: University of Michigan, Inter-University Consortium for Political and Social Research.en_US
dc.identifier.citedreferenceCampbell, D. T. and D. W. Fiske. 1959. “Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix.” Psychological Bulletin 56:81-105.en_US
dc.identifier.citedreferenceCenter for Political Studies. 1994. Continuity Guide to the American National Election Studies, 1952-1993. Ann Arbor: University of Michigan, Institute for Social Research.en_US
dc.identifier.citedreferenceChampney, H. and H. Marshall. 1939. “Optimal Refinement of the Rating Scale.” Journal of Applied Psychology 23:323-331.en_US
dc.identifier.citedreferenceConverse, J. M. and S. Presser. 1986. Survey Questions: Handcrafting the Standardized Questionnaire. Newbury Park, CA: Sage.en_US
dc.identifier.citedreferenceConverse, J. M. and H. Schuman. 1984. “The Manner of Inquiry: An Analysis of Survey Question Form Across Organizations and Over Time.” Pp. 283-316 in Surveying Subjective Phenomena, Vol. 2, edited by C. F. Turner and E. Martin. New York: Russell Sage.en_US
dc.identifier.citedreferenceCostner, H. L. 1969. “Theory, Deduction, and Rules of Correspondence.” American Journal of Sociology 75:245-263.en_US
dc.identifier.citedreferenceCox, E. P., III. 1980. “The Optimal Number of Response Alternatives for a Scale: A Review.” Journal of Marketing Research 17:407-422.en_US
dc.identifier.citedreferenceCronbach, L. J. 1946. “Response Sets and Test Validity.” Educational and Psychological Measurement 6:475-494.en_US
dc.identifier.citedreferenceCronbach, L. J. 1950. “Further Evidence on Response Sets and Test Design.” Educational and Psychological Measurement 10:3-31.en_US
dc.identifier.citedreferenceCronbach, L. J. 1951. “Coefficient Alpha and the Internal Structure of Tests.” Psychometrika 16:297-334.en_US
dc.identifier.citedreferenceCunningham, W. H., I.C.M. Cunningham, and R. T. Green. 1977. “The Ipsative Process to Reduce Response Set Bias.” Public Opinion Quarterly 41:379-384.en_US
dc.identifier.citedreferenceDavis, J. A. and T. W. Smith. 1995. General Social Surveys, 1972-1995: Cumulative Codebook. Chicago: National Opinion Research Center.en_US
dc.identifier.citedreferenceFeather, N. T. 1973. “The Measurement of Values: Effects of Different Assessment Procedures.” Australian Journal of Psychology 25:221-231.en_US
dc.identifier.citedreferenceFerguson, L. W. 1941. “A Study of the Likert Technique of Attitude Scale Construction.” Journal of Social Psychology 13:51-57.en_US
dc.identifier.citedreferenceFinn, R. H. 1972. “Effects of Some Variations in Rating Scale Characteristics on Means and Reliabilities of Ratings.” Educational and Psychological Measurement 32:255-265.en_US
dc.identifier.citedreferenceGarner, W. R. 1960. “Rating Scales, Discriminability, and Information Transmission.” Psychological Review 67:343-352.en_US
dc.identifier.citedreferenceGarner, W. R. and H. W. Hake. 1951. “The Amount of Information in Absolute Judgements.” Psychological Review 58:446-459.en_US
dc.identifier.citedreferenceGreen, P. E. and V. R. Rao. 1970. “Rating Scales and Information Recovery: How Many Scales and Response Categories to Use?” Journal of Marketing 34:33-39.en_US
dc.identifier.citedreferenceGuilford, J. P. 1954. Psychometric Methods. New York: McGraw-Hill.en_US
dc.identifier.citedreferenceHamilton, D. L. 1968. “Personality Attributes Associated With Extreme Response Set.” Psychological Bulletin 72:406-422.en_US
dc.identifier.citedreferenceHeise, D. R. 1969a. “Separating Reliability and Stability in Test-Retest Correlations.” American Sociological Review 34:93-101.en_US
dc.identifier.citedreferenceHeise, D. R. 1969b. “Some Methodological Issues in Semantic Differential Research.” Psychological Bulletin 72:406-422.en_US
dc.identifier.citedreferenceHeise, D. R. and G. W. Bohrnstedt. 1970. “Validity, Invalidity, and Reliability.” Pp. 104-129 in Sociological Methodology 1970, edited by E. F. Borgatta and G. W. Bohrnstedt. San Francisco: Jossey-Bass.en_US
dc.identifier.citedreferenceJabine, T.B., M. L. Straf, J. M. Tanur, and R. Tourangeau. 1984. Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines (report of the Advanced Research Seminar on Cognitive Aspects of Survey Methodology, Committee on National Statistics, and Commission on Behavioral and Social Sciences Education, National Research Council). Washington, DC: National Academy Press.en_US
dc.identifier.citedreferenceJacoby, J. and M. S. Matell. 1971. “Three-Point Scales Are Good Enough.” Journal of Marketing Research 8:495-500.en_US
dc.identifier.citedreferenceJahoda, M., M. Deutsch, and S. W. Cook. 1951. Research Methods in Social Relations. New York: Dryden.en_US
dc.identifier.citedreferenceJenkins, G. D., Jr. and T. D. Taber. 1977. “A Monte Carlo Study of Factors Affecting Three Indices of Composite Scale Reliability.” Journal of Applied Psychology 62:392-398.en_US
dc.identifier.citedreferenceJöreskog, K. G. 1971. “Statistical Analysis of Sets of Congeneric Tests.” Psychometrika 36:109-133.en_US
dc.identifier.citedreferenceJöreskog, K. G. and D. Sörbom. 1986. LISREL: Analysis of Linear Structural Relationships by the Method of Maximum Likelihood (user's guide, Version 6). Chicago: Scientific Software Inc.en_US
dc.identifier.citedreferenceKomorita, S. S. 1963. “Attitude Content, Intensity, and the Neutral Point on a Likert Scale.” Journal of Social Psychology 61:327-334.en_US
dc.identifier.citedreferenceKomorita, S. S. and W. K. Graham. 1965. “Number of Scale Points and the Reliability of Scales.” Educational and Psychological Measurement 25:987-995.en_US
dc.identifier.citedreferenceKrosnick, J. A. and D. F. Alwin. 1989. “Response Strategies for Coping With the Cognitive Demands of Survey Questions.” Unpublished manuscript, Institute for Social Research, University of Michigan.en_US
dc.identifier.citedreferenceLehman, D. R. and J. Hulbert. 1972. “Are Three-Point Scales Always Good Enough?” Journal of Marketing Research 9:444-446.en_US
dc.identifier.citedreferenceLikert, R. 1932. “A Technique for the Measurement of Attitudes.”Archives of Psychology, No. 140.en_US
dc.identifier.citedreferenceLissitz, R. W. and S. B. Green. 1975. “Effect of the Number of Scale Points on Reliability: A Monte Carlo Approach.” Journal of Applied Psychology 60:10-13.en_US
dc.identifier.citedreferenceLord, F. M. and M. R. Novick. 1968. Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley.en_US
dc.identifier.citedreferenceMatell, M. S. and J. Jacoby. 1971. “Is There an Optimal Number of Alternatives for Likert Scale Items? Study I: Reliability and Validity.” Educational and Psychological Measurement 31:657-674.en_US
dc.identifier.citedreferenceMatell, M. S. and J. Jacoby. 1972. “Is There an Optimal Number of Alternatives for Likert Scale Items? Effects of Testing Time and Scale Properties.” Journal of Applied Psychology 56:506-509.en_US
dc.identifier.citedreferenceMcKennell, A. 1974. “Surveying Attitude Structures.” Quality and Quantity 7:1-96.en_US
dc.identifier.citedreferenceMessick, S. 1968. “Response Sets.” Pp. 492-496 in The International Encyclopedia of the Social Sciences, Vol. 13, edited by D. L. Sills. New York: Macmillan.en_US
dc.identifier.citedreferenceMiller, G. A. 1956. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Psychological Review 63:81-97.en_US
dc.identifier.citedreferenceMiller, W. E. 1982. American National Election Study, 1980. Ann Arbor, MI: Inter-University Consortium for Political and Social Research.en_US
dc.identifier.citedreferenceMurphy, G. and R. Likert. 1938. Public Opinion and the Individual: A Psychological Study of Student Attitudes on Political Questions, With a Retest Five Years Later. New York: Russell & Russell.en_US
dc.identifier.citedreferenceOsgood, C. E., G. J. Suci, and P. H. Tannenbaum. 1957. The Measurement of Meaning. Urbana: University of Illinois Press.en_US
dc.identifier.citedreferencePeabody, D. 1962. “Two Components in Bi-Polar Scales: Direction and Extremeness.” Psychological Review 69:65-73.en_US
dc.identifier.citedreferenceRamsay, J. O. 1973. “The Effect of Number of Categories in Rating Scales on Precision of Estimation of Scale Values.” Psychometrika 38:513-532.en_US
dc.identifier.citedreferenceSaris, W. E. 1988. Variation in Response Functions: A Source of Measurement Error in Attitude Research. Amsterdam, Netherlands: Sociometric Research Foundation.en_US
dc.identifier.citedreferenceSaris, W. E. and A. van Meurs. 1990. Evaluation of Measurement Instruments by Meta-Analysis of Multitrait-Multimethod Studies. Amsterdam, Netherlands: North-Holland.en_US
dc.identifier.citedreferenceScherpenzeel, A. 1995. “A Question of Quality: Evaluating Survey Questions by Multitrait-Multimethod Studies.” Ph.D. thesis, University of Amsterdam.en_US
dc.identifier.citedreferenceSchuman, H. and S. Presser. 1981. Questions and Answers in Attitude Surveys. New York: Academic Press.en_US
dc.identifier.citedreferenceShannon, C. and W. Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.en_US
dc.identifier.citedreferenceSherif, C. W., M. Sherif, and R. E. Nebergall. 1965. Attitude and Attitude Change. Philadelphia: W. B. Saunders.en_US
dc.identifier.citedreferenceSudman, S., N. M. Bradburn, and N. Schwarz. 1996. Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.en_US
dc.identifier.citedreferenceSymonds, P. M. 1924. “On the Loss of Reliability in Ratings Due to Coarseness of the Scale.” Journal of Experimental Psychology 7:456-461.en_US
dc.identifier.citedreferenceTourangeau, R. 1984. “Cognitive Sciences and Survey Methods.” Pp. 73-100 in Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, edited by T. B. Jabine, M. L. Straf, J. M. Tanur, and R. Tourangeau. Washington, DC: National Academy Press.en_US
dc.identifier.citedreferenceTurner, C. F. and E. Martin. 1984. Surveying Subjective Phenomena. New York: Russell Sage.en_US
dc.identifier.citedreferenceWeisberg, H. F. and A. H. Miller. n.d. Evaluation of the Feeling Thermometer: A Report to the National Election Study Board Based on Data From the 1979 Pilot Survey. Ann Arbor, MI: Center for Political Studies, Institute for Social Research.en_US
dc.identifier.citedreferenceWerts, C. E. and R. L. Linn. 1970. “Path Analysis: Psychological Examples.” Psychological Bulletin 74:194-212.en_US
dc.identifier.citedreferenceWoelfel, J. and E. Fink. 1980. The Measurement of Communication Processes: Galileo Theory and Method. New York: Academic Press.en_US
dc.owningcollnameInterdisciplinary and Peer-Reviewed
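To make the information-theory hypothesis in the abstract concrete: a rating scale with k response categories can transmit at most log2(k) bits per answer (about 2.81 bits for 7 categories versus roughly 3.46 bits for 11), and the information actually conveyed is further limited by how responses spread across the categories and by measurement error. The short Python sketch below only illustrates that ceiling and the Shannon entropy of a response distribution; the response counts are invented for the example and are not data from the 1978 Quality of Life Survey.

from math import log2

def entropy_bits(counts):
    # Shannon entropy (in bits) of an observed response distribution;
    # it bounds how much information the item's answers can carry.
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical response counts, for illustration only (not survey data).
seven_point = [40, 55, 80, 120, 90, 70, 45]                   # 7 categories
eleven_point = [20, 25, 35, 45, 60, 75, 60, 55, 45, 40, 40]   # 11 categories

for name, counts in [("7-point", seven_point), ("11-point", eleven_point)]:
    k = len(counts)
    print(f"{name}: ceiling = {log2(k):.2f} bits, "
          f"observed entropy = {entropy_bits(counts):.2f} bits")

Entropy only bounds what a question can transmit; the study's actual comparison rests on reliability and validity estimates from multitrait-multimethod models, not on entropy alone.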

