The Relationship Of Constrained Free‐Response To Multiple‐Choice And Open‐Ended Items
dc.contributor.author | Bennett, Randy Elliot | en_US |
dc.contributor.author | Rock, Donald A. | en_US |
dc.contributor.author | Braun, Henry I. | en_US |
dc.contributor.author | Frye, Douglas | en_US |
dc.contributor.author | Spohrer, James C. | en_US |
dc.contributor.author | Soloway, Elliot | en_US |
dc.date.accessioned | 2014-10-07T16:09:52Z | |
dc.date.available | 2014-10-07T16:09:52Z | |
dc.date.issued | 1989-12 | en_US |
dc.identifier.citation | Bennett, Randy Elliot; Rock, Donald A.; Braun, Henry I.; Frye, Douglas; Spohrer, James C.; Soloway, Elliot (1989). "The Relationship Of Constrained Free‐Response To Multiple‐Choice And Open‐Ended Items." ETS Research Report Series 1989(2): i-37. <http://hdl.handle.net/2027.42/108688> | en_US |
dc.identifier.issn | 2330-8516 | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/108688 | |
dc.description.abstract | This study examined the relationship of a machine‐scorable, constrained free‐response computer science item that required the student to debug a faulty program to two other types of items: (1) multiple‐choice and (2) free response requiring production of a computer program. Confirmatory factor analysis was used to test the fit of a three‐factor model to these data and to compare the fit of this model to three alternatives. These models were fit using two random‐half samples, one given a faulty program containing one bug and the other a program with three bugs. A single‐factor model best fit the data for the sample taking the 1‐bug constrained free response and a two‐factor model fit the data for the second sample. In addition, the factor intercorrelations showed this item type to be significantly related to both the free‐response items and the multiple‐choice measures. | en_US |
dc.publisher | Lawrence Erlbaum Associates | en_US |
dc.publisher | Wiley Periodicals, Inc. | en_US |
dc.title | The Relationship Of Constrained Free‐Response To Multiple‐Choice And Open‐Ended Items | en_US |
dc.type | Article | en_US |
dc.rights.robots | IndexNoFollow | en_US |
dc.subject.hlbsecondlevel | Education | en_US |
dc.subject.hlbtoplevel | Social Sciences | en_US |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/108688/1/ets200147.pdf | |
dc.identifier.doi | 10.1002/j.2330-8516.1989.tb00147.x | en_US |
dc.identifier.source | ETS Research Report Series | en_US |
dc.identifier.citedreference | Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the study of self‐concept: First and higher order factor models and their invariance across groups. Psychological Bulletin, 97, 562–582. | en_US |
dc.identifier.citedreference | Ackerman, T. A., & Smith, P. L. (1988). A comparison of the information provided by essay, multiple‐choice, and free‐response writing tests. Applied Psychological Measurement, 12, 117–128. | en_US |
dc.identifier.citedreference | Bennett, R. E., Gong, B., Kershaw, R. C., Rock, D. A., Soloway, E., & Macalalad, A. (In press). Assessment of an expert system's ability to automatically grade and diagnose students' constructed‐responses to computer science problems. In R. O. Freedle (Ed.), Artificial intelligence and the future of testing. Hillsdale, NJ: Lawrence Erlbaum Associates. | en_US |
dc.identifier.citedreference | Bleistein, C., Maneckshana, B., & McLean, D. (1988). Test analysis: College Board Advanced Placement Examination Computer Science 3JBP (SR‐88‐63). Princeton, NJ: Educational Testing Service. | en_US |
dc.identifier.citedreference | Braun, H. I. (1988). Understanding scoring reliability: Experiments in calibrating essay readers. Journal of Educational Statistics, 13, 1–18. | en_US |
dc.identifier.citedreference | Braun, H. I., Bennett, R. E., Frye, D., & Soloway, E. (In press). Developing and evaluating a machine‐scorable, constrained constructed‐response item. Princeton, NJ: Educational Testing Service. | en_US |
dc.identifier.citedreference | Birenbaum, M., & Tatsuoka, K. K. (1987). Open‐ended versus multiple‐choice response formats–It does make a difference for diagnostic purposes. Applied Psychological Measurement, 11, 385–395. | en_US |
dc.identifier.citedreference | Johnson, W. L., & Soloway, E. (1985). PROUST: An automatic debugger for Pascal programs. Byte, 10(4), 179–190. | en_US |
dc.identifier.citedreference | Jöreskog, K., & Sörbom, D. (1986). PRELIS: A program for multivariate data screening and data summarization. Mooresville, IN: Scientific Software, Inc. | en_US |
dc.identifier.citedreference | Jöreskog, K., & Sörbom, D. (1988). LISREL 7: A guide to the program and applications. Chicago, IL: SPSS Inc. | en_US |
dc.identifier.citedreference | Loehlin, J. C. (1987). Latent variable models. Hillsdale, NJ: Erlbaum. | en_US |
dc.identifier.citedreference | Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Erlbaum. | en_US |
dc.identifier.citedreference | McNemar, Q. (1962). Psychological statistics. New York: Wiley. | en_US |
dc.identifier.citedreference | Mazzeo, J., & Bleistein, C. (1986). Test analysis: College Board Advanced Placement Examination Computer Science 3IBP (SR‐86‐105). Princeton, NJ: Educational Testing Service. | en_US |
dc.identifier.citedreference | Mazzeo, J., & Flesher, R. (1985). Test analysis: College Board Advanced Placement Examination Computer Science 3HBP (SR‐85‐180). Princeton, NJ: Educational Testing Service. | en_US |
dc.identifier.citedreference | Sobel, M. E., & Bohrnstedt, G. W. (1985). Use of null models in evaluating the fit of covariance structure models. In N. B. Tuma (Ed.), Sociological methodology (pp. 152–178). San Francisco: Jossey‐Bass. | en_US |
dc.identifier.citedreference | Sternberg, R. J. (1980). Factor theories of intelligence are all right almost. Educational Researcher, 9, 6–18. | en_US |
dc.identifier.citedreference | Traub, R. E., & Fisher, C. W. (1977). On the equivalence of constructed‐response and multiple‐choice tests. Applied Psychological Measurement, 1, 355–369. | en_US |
dc.identifier.citedreference | Tucker, L. R., & Lewis, C. (1973). A reliability coefficient for maximum likelihood factor analysis. Psychometrika, 38, 1–10. | en_US |
dc.identifier.citedreference | Ward, W. C. (1982). A comparison of free‐response and multiple‐choice forms of verbal aptitude tests. Applied Psychological Measurement, 6, 1–11. | en_US |
dc.identifier.citedreference | Ward, W. C., Frederiksen, N., & Carlson, S. B. (1980). Construct validity of free‐response and machine‐scorable forms of a test. Journal of Educational Measurement, 17, 11–29. | en_US |
dc.owningcollname | Interdisciplinary and Peer-Reviewed |