The Qualitative Method of Impact Analysis
dc.contributor.author | Mohr, Lawrence | en_US |
dc.date.accessioned | 2010-04-13T19:09:31Z | |
dc.date.available | 2010-04-13T19:09:31Z | |
dc.date.issued | 1999 | en_US |
dc.identifier.citation | Mohr, Lawrence (1999). "The Qualitative Method of Impact Analysis." American Journal of Evaluation 20(1): 69-84. <http://hdl.handle.net/2027.42/67113> | en_US |
dc.identifier.issn | 1098-2140 | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/67113 | |
dc.description.abstract | Consider the qualitative approach to evaluation design (as opposed to measurement) to be typified by a case study with a sample of just one. Although there have certainly been elaborate and emphatic defenses of the qualitative approach to program evaluation, such defenses rarely attempt to qualify the approach explicitly and rigorously as a method of impact analysis. The present paper makes that attempt. The problem with seeking to advance a qualitative method of impact analysis is that impact is a matter of causation, and a non-quantitative approach to design is apparently not well suited to the task of establishing causal relations. The root of the difficulty is located in the counterfactual definition of causality, which is our only broadly accepted formal definition of causality for social science. It is not, however, the only definition we use informally. Another definition, labeled “physical causality,” is widely used in practice and has recently been formalized. Physical causality can be applied to the present problem. For example, it explains the persuasiveness of Scriven’s “modus operandi” approach and makes a tailored case study design with a sample size of one in principle as strong a basis for making inferences about program impact as a randomized experiment. Crucial for program evaluation is the finding that people’s “operative reasons” for doing what they do are the physical causes of their actions. Finally, it is shown that external validity achieved using this qualitative approach would have exceptional strengths. | en_US |
dc.format.extent | 3108 bytes | |
dc.format.extent | 1477684 bytes | |
dc.format.mimetype | text/plain | |
dc.format.mimetype | application/pdf | |
dc.publisher | Sage Publications | en_US |
dc.title | The Qualitative Method of Impact Analysis | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Education | en_US |
dc.subject.hlbsecondlevel | Social Sciences (General) | en_US |
dc.subject.hlbtoplevel | Social Sciences | en_US |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | SPP-Larch Hall, University of Michigan, Ann Arbor, MI 48109-1220, lmohr@umich.edu | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/67113/2/10.1177_109821409902000106.pdf | |
dc.identifier.doi | 10.1177/109821409902000106 | en_US |
dc.identifier.source | American Journal of Evaluation | en_US |
dc.identifier.citedreference | Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally. | en_US |
dc.identifier.citedreference | Chen, H. T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage. | en_US |
dc.identifier.citedreference | Cook, T. D., & Reichardt, C. S. (Eds.). (1979). Qualitative and quantitative methods in evaluation research (pp. 49-67). Beverly Hills, CA: Sage. | en_US |
dc.identifier.citedreference | Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass. | en_US |
dc.identifier.citedreference | Davidson, D. (1980). Essays on actions and events. Oxford: Clarendon Press. | en_US |
dc.identifier.citedreference | Edwards, P. K., Acock, A. C., & Johnston, R. L. (1985). Nutrition behavior change: Outcomes of an educational approach. Evaluation Review, 9(4), 441-460. | en_US |
dc.identifier.citedreference | George, A. L., & McKeown, T. J. (1985). Case studies and theories of organizational decision making. In L. S. Sproull & P. D. Larkey (Eds.), Advances in information processing in organizations. R. F. Coulam & R. A. Smith (Eds.), Research on public organizations. Vol. 2. (pp. 21-58). Greenwich, CT: JAI Press. | en_US |
dc.identifier.citedreference | Goddard, A., & Powell, J. (1994). Using naturalistic and economic evaluation to assist service planning. Evaluation Review, 18(4), 472-492. | en_US |
dc.identifier.citedreference | Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 530-544). Thousand Oaks, CA: Sage. | en_US |
dc.identifier.citedreference | Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Program Evaluation, No. 74. San Francisco, CA: Jossey-Bass. | en_US |
dc.identifier.citedreference | Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco, CA: Jossey-Bass. | en_US |
dc.identifier.citedreference | Huberman, A. M., & Miles, M. B. (1985). Assessing local causality in qualitative research. In D. N. Berg & K. K. Smith (Eds.), The self in social inquiry (pp. 351-381). Newbury Park, CA: Sage. | en_US |
dc.identifier.citedreference | Hume, D. (1955). An inquiry concerning human understanding. C. W. Hendell (Ed.). New York, NY: Liberal Arts Press. | en_US |
dc.identifier.citedreference | King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press. | en_US |
dc.identifier.citedreference | Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. In D. D. Williams (Ed.), Naturalistic evaluation (pp. 73-84). New Directions for Program Evaluation, No. 30. San Francisco, CA: Jossey-Bass. | en_US |
dc.identifier.citedreference | Mackie, J. L. (1980). The cement of the universe: A study of causation. Oxford: Clarendon. | en_US |
dc.identifier.citedreference | Maxwell, J.A. (1996). Using qualitative research to develop causal explanations. Working Papers. Cambridge, MA: Harvard project on schooling and children. | en_US |
dc.identifier.citedreference | Mohr, L. B. (1995). Impact analysis for program evaluation (second ed.). Newbury Park, CA: Sage. | en_US |
dc.identifier.citedreference | Mohr, L. B. (1996). The causes of human behavior: Implications for theory and method in the social sciences. Ann Arbor, MI: University of Michigan Press. | en_US |
dc.identifier.citedreference | Mohr, L. B. (1999). One hundred theories of organizational change: The good, the bad, and the ugly. In H. G. Frederickson & J. M. Johnston (Eds.), Public administration in a time of turbulence: The management of reform, reinvention, and innovation. Birmingham, AL: University of Alabama Press, forthcoming. | en_US |
dc.identifier.citedreference | Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 7-32). Beverly Hills, CA: Sage. | en_US |
dc.identifier.citedreference | Scriven, M. (1976). Maximizing the power of causal investigations: The modus operandi method. In G. V. Glass (Ed.), Evaluation studies review annual, Vol. 1 (pp. 101-118). Beverly Hills, CA: Sage. | en_US |
dc.identifier.citedreference | Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage. | en_US |
dc.identifier.citedreference | Stake, R. E. (1981). Case study methodology: An epistemological advocacy. In W. Welch. (Ed.), Case study methodology in educational evaluation. Minneapolis, MN: Minnesota Research and Evaluation Center. | en_US |
dc.identifier.citedreference | Strawson, P. F. (1985). Causation and explanation. In B. Vermazen & M. B. Hintikka (Eds.), Essays on Davidson: Actions and events (pp. 155-136). Oxford: Clarendon. | en_US |
dc.owningcollname | Interdisciplinary and Peer-Reviewed |