Division of Research
Graduate School of Business Administration
The University of Michigan
February, 1981

ORGANIZATIONAL STRUCTURE OF ACADEMIC DEPARTMENTS: HOMOGENEITY, AGGREGATION, AND LEVEL OF ANALYSIS

Working Paper No. 256

V. Jean Ramsey, Western Michigan University
L. Delf Dodge, The University of Michigan

FOR DISCUSSION PURPOSES ONLY
None of this material is to be quoted or reproduced without the express permission of the Division of Research.

ABSTRACT

This study investigated the homogeneity of faculty members' responses to measures of organizational structure, environmental uncertainty, and task routineness, and determined the legitimacy of aggregating those responses to create departmental-level variables. Analysis of variance indicated that one could cautiously proceed to aggregate structure and environmental uncertainty subscales, but not task routineness scales.

Organizational Structure of Academic Departments: Homogeneity, Aggregation, and Level of Analysis

Organizational researchers make several basic research choices which influence the results of their studies. Whether to measure at the individual, subunit, or organizational level, and how to investigate and demonstrate homogeneity of responses prior to aggregating individual responses, are among the more critical issues.

There appears to be growing recognition of the importance of distinguishing among at least three levels of measurement and analysis: those of (a) the individual organizational member, (b) the work group or subunit, and (c) the larger organization (Comstock and Scott, 1977; Fry, 1978). Research at the organizational level (cf. Woodward, 1965; Khandwalla, 1974) rests on the implicit assumption of homogeneity, that is, uniformity of work and structural forms across participants and departments. Complex organizations, however, are more commonly characterized as highly differentiated (Hall, 1962; Lawrence and Lorsch, 1967).

The assumption of homogeneity is perhaps most tenable at the subunit level, where a narrower range of variance might be expected. But even here, distortion may occur. Subunit-level measures are often developed by averaging characteristics of the tasks performed by individual workers. Comstock and Scott (1977) suggest that concentrating on the characteristics of individual tasks in order to arrive at some modal task measure may cause the researcher to overlook technology characteristics peculiar to the subunit level, such as measures of work variety or characteristics of the group's or department's overall workflow.

Comstock and Scott (1977) demonstrated that characteristics of technology at one level were not necessarily reflected in organizational structure at other levels. They found little effect of individual task characteristics on subunit structure, and little effect of characteristics of subunit technology (in this instance, workflow predictability) on either the qualifications or the specialization of staff members. Fry's (1978) review of major technology and structure studies strengthens Comstock and Scott's caution to formulate studies which relate variables at the same level of analysis. He found a significant relationship between a study's level of analysis and whether its findings were supportive or nonsupportive of technology-structure relationships.

It would appear important, then, to consciously choose a specific level (or levels) of analysis before conducting a study. Equally important is demonstrating, through some measure of response homogeneity, that the level one intended to measure was, indeed, the level one measured.

Aggregation Problems

In 1967, Hage and Aiken stated that the aggregation of data to reflect organizational properties "presents methodological problems for which there are as yet no satisfactory solutions" (p. 75). Unfortunately, their statement holds equally true today, some thirteen years later.

Importance of Demonstrating Homogeneity

Any aggregation of data measured at the individual level is based on the assumption of homogeneity of organizational members' responses. As Lynch (1974) pointed out, the aggregation problem is two-sided: from which individuals do we gather the data if we want to measure an organizational-level variable, and how do we aggregate their individual scores? Others (Duncan, 1971;

Schneider and Bartlett, 1970) have suggested that careful consideration be given to determining whether individuals have reported the same phenomenon before aggregating scores, especially if measures are perceptual. Studies in which group averages are used as summary statistics implicitly assume that individuals' perceptions agree with one another.

It is curious that more studies do not test this assumption of homogeneity before aggregating their data points into broader measures. Sathe (1978) is not unusual in defending aggregation by simply citing the practices of others: "Following Hall (1963), Duncan (1971), Pennings (1973), and Lynch (1974), departmental scores for this study were obtained by assigning equal weight to the score of each individual" (pp. 231-232). Leifer and Huber (1977), on the other hand, are only one example of researchers who simply aggregate without explanation or apology: "The unweighted mean scores on each measure were then averaged to obtain an organicness score for each individual's work unit. The average of these organicness scores for all individuals in a unit was used as the organicness score for that unit" (p. 240). One reason for the general omission of tests of homogeneity may be that there is no simple, well-agreed-upon method of demonstrating it.

Methods of Demonstrating Homogeneity

While the importance of demonstrating homogeneity is agreed upon in principle, there is little discussion of methods for investigating the degree of response homogeneity in a particular sample. That is, how does one demonstrate that an organization- or subunit-level variable is not simply a random collection of individual responses?

The most common method used to demonstrate that aggregated scores of organizations and departments reflect properties of the described organization,

and not merely idiosyncratic responses, is analysis of variance. It has, however, been used in somewhat different ways. Duncan (1972) used one-way analysis of variance, computed across individuals in a given decision unit, to look for significant differences among individual perceptions before individual scores were pooled to produce total decision unit scores. Pennings (1973) used one-way analysis of variance on each measure of organizational structure, as well as multivariate analysis of variance on clusters of scales, to demonstrate homogeneity. He also established response homogeneity by computing rank-order correlations between the mean scores of supervisor and subordinate subsamples in each of the ten organizations. Lynch (1974) likewise combined analysis of variance and rank-order correlations to demonstrate homogeneity: analysis of variance was used to test whether a person's social position or length of time in that position influenced responses, and a rank ordering of departments based on the aggregated responses of only those respondents holding master's degrees in library science was compared with the rank ordering obtained when all respondents were aggregated.

Irrespective of the technique used, it would seem equally important to establish homogeneity prior to aggregation when conducting research in any setting. To our knowledge, when the academic department has been used as the unit of analysis, homogeneity of responses has been assumed but never demonstrated (Dressel, Marcus and Johnson, 1970; Lodahl and Gordon, 1973; Peterson, 1969; Ryan, 1972).

The purpose of this study was, first, to investigate the homogeneity of faculty members' responses to measures of organizational structure, environmental uncertainty, and task routineness, and second, to determine whether

it was legitimate to aggregate those responses to departmental-level variables.

The Study

Data were collected in twenty-one academic departments of two colleges in a single university. Questionnaires were mailed to all faculty within these departments. A response rate of 52 percent resulted in 152 faculty members' responses being included in the data analysis.

Measures

Van de Ven and Delbecq's (1974) measures of task routineness and Duncan's (1971) measures of environmental uncertainty and structure were used. Four subscales measured structure (hierarchy of authority, impersonality, participation in decision making, and rules and procedures), two measured environmental uncertainty (lack of information and knowledge of outcomes), and two measured task routineness (task difficulty and task variability).

Two sets of measures were used for each of the three major concepts of interest in the study. Faculty were asked about the predictability and variability of their "job" of teaching and their "job" of research. Faculty were also asked to respond to questions concerning environmental uncertainty in two contexts: once in connection with the factors outside the department they took into consideration in making decisions about their teaching activities, and once in connection with the factors outside the department they took into consideration in making decisions about their research activities. Similarly, faculty were asked to respond to questions concerning the department's decision-making structure twice: once in connection with decisions made regarding teaching activities and a second time in connection with decisions made in the course of their research activities.
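The departmental-level variables at issue are formed, as in the studies cited earlier, by assigning equal weight to each respondent, i.e., taking the unweighted mean of individual scores within each department. A minimal sketch of that aggregation step; the department names and scores are invented for illustration:

```python
from statistics import mean

# Hypothetical individual-level scale scores keyed by department.
responses = {
    "Accounting": [4.0, 4.5, 3.5],
    "Marketing":  [2.0, 2.5, 3.0],
}

# Equal-weight aggregation: each member's score counts the same, and the
# departmental score is simply the unweighted mean of member scores.
department_scores = {dept: mean(scores) for dept, scores in responses.items()}
print(department_scores)  # {'Accounting': 4.0, 'Marketing': 2.5}
```

Whether such means are legitimate departmental measures is exactly what the homogeneity tests below are meant to establish.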

Response categories for all items varied along a seven-point scale. Items were reverse-scored where necessary, so that in all cases a higher score indicates a more certain environment or a higher degree of structure.

Scale Development

Item-to-total correlations were run on sample data to determine which items could be included in each of the scales. Then correlational analyses (in the manner of Campbell and Fiske, 1959) were conducted to determine which subscales could be combined into overall structure, environmental uncertainty, and task routineness scales.

The structures and environments surrounding teaching and research decisions were perceived by faculty to be quite different from one another. There was little difference in routineness, however, between the task of teaching and the task of research: measures of research and teaching task difficulty correlated, as did measures of research and teaching task variability. The dimensions of lack of information and knowledge of outcomes correlated with one another within the research and teaching classifications to form overall indices of environmental uncertainty. The structural dimensions of hierarchy of authority, impersonality, and participation in decision making clustered and could be combined into overall indices of departmental structure. The rules and procedures dimension separated out from the other structure subscales and was kept separate in subsequent analyses. Reliability coefficients (Cronbach's alpha) of the final scales were acceptable (Table 1). Analysis of variance was conducted on items and scales to examine the homogeneity of departmental members' responses.
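Cronbach's alpha, the reliability coefficient reported in Table 1, can be computed from the item variances and the variance of respondents' total scores. A self-contained sketch with invented data (three items, four respondents), not the study's actual responses:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical responses to a three-item scale on a seven-point format.
items = [[5, 6, 4, 7], [4, 6, 3, 6], [5, 7, 4, 6]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # high alpha: the items move together across respondents
```

Note that alpha speaks only to internal consistency; as the Results below show, a scale can be reliable and still fail to discriminate between departments.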

TABLE 1
Scale Reliabilities

Scale                                       Alpha   # of Items
Structure 1* (teaching)                      .833        9
Structure 1 (research)                       .877        9
Structure 2 (teaching)                       .677        2
Structure 2 (research)                       .782        2
Environmental Uncertainty (teaching)         .874        7
Environmental Uncertainty (research)         .846        7
Task Difficulty (teaching and research)      .567        8
Task Variability (teaching and research)     .632        4

*Structure 1 includes the hierarchy of authority, impersonality, and participation in decision making dimensions. Structure 2 is the dimension of rules and procedures.

RESULTS

The intent of this study was to test the viability of aggregating items and subscales to create full scales of organizational structure, environmental uncertainty, and task routineness in an academic setting. Homogeneity of subject responses within and between departments was examined using analysis of variance. A nonsignificant F ratio denotes that the variance in responses within departments is equal to or greater than the variance between departments. This could be interpreted as an indication that departmental members' responses are not homogeneous, i.e., that there is a great deal of variation in departmental members' responses to the same questions. Alternative interpretations are that the variables are unable to discriminate between departments or that significant differences simply do not exist across departments on these variables.
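The within/between comparison just described can be sketched directly: compute the between-departments and within-departments mean squares and form their ratio. A minimal sketch with invented seven-point responses from three hypothetical departments:

```python
from statistics import mean

def one_way_anova(groups):
    """Return (ms_between, ms_within, F) for a list of groups of scores."""
    k = len(groups)                          # number of departments
    n = sum(len(g) for g in groups)          # total respondents
    grand = mean([x for g in groups for x in g])
    # Between-departments: how far department means sit from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-departments: how far individuals sit from their own department mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between, ms_within, ms_between / ms_within

# Hypothetical departments whose members largely agree with one another
# but differ across departments, so F is large.
departments = [[6, 6, 5, 7], [2, 3, 2, 3], [4, 5, 4, 4]]
ms_b, ms_w, f_ratio = one_way_anova(departments)
print(round(f_ratio, 2))  # → 29.4
```

Had members disagreed as much within departments as between them, the two mean squares would be comparable and F would fall near 1, the nonsignificant case discussed above.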

One-way analysis of variance was first used on each of the items contained in the structure, environmental uncertainty, and task routineness scales. Results of the analysis of variance of the structural variables were mixed (Table 2).

TABLE 2
One-Way Analysis of Variance: Structural Variables

                               Mean Square   Mean Square
                                 between       within
                        Item   Departments   Departments     F    Prob.    d.f.
Teaching hierarchy        1        3.6           2.2       1.62    .06    20;125
                          4        1.9           1.1       1.70    .04    20;124
Teaching impersonality    1        2.0           2.3        .87    .63    20;124
                          3        3.5           1.8       1.92    .02    20;125
Teaching participation    1        3.3           1.1       2.85    .00    20;124
                          2        1.8           1.9        .96    .51    20;124
                          3        4.8           3.6       1.36    .16    20;123
                          4        1.9           1.5       1.31    .19    20;125
                          5        2.5           2.0       1.21    .26    20;123
Teaching rules            3        2.1           2.7        .79    .72    20;122
                          4        2.1           2.3        .92    .56    20;118
Research hierarchy        3        2.4           1.8       1.33    .17    20;117
                          4        4.7           2.8       1.66    .05    20;117
Research impersonality    1        3.9           2.0       1.94    .02    20;117
                          3        4.1           2.1       1.94    .02    20;118
Research participation    1        5.6           2.2       2.58    .00    20;118
                          2        3.2           2.3       1.40    .14    20;118
                          3        3.7           3.3       1.12    .34    20;116
                          4        5.3           1.7       3.16    .00    20;117
                          5        4.9           3.0       1.64    .05    20;118
Research rules            3        3.7           2.5       1.48    .10    20;117
                          4        2.3           2.4        .94    .54    20;115

In general, the structural variables did not seem to discriminate between departments well. Only the variables included in the teaching hierarchy of authority dimension and the research impersonality dimension were consistently significant. The variables included in the teaching rules and procedures dimension were consistently nonsignificant. The research structural measures

did, however, seem to discriminate between departments slightly better than the teaching structural variables.

The results of one-way analysis of variance using the variables measuring environmental uncertainty are similarly mixed (Table 3).

TABLE 3
One-Way Analysis of Variance: Environmental Uncertainty Variables

                                  Mean Square   Mean Square
                                    between       within
                           Item   Departments   Departments     F    Prob.    d.f.
Teaching lack of information 1        3.8           1.6       2.41    .00    20;117
                             2        4.4           2.0       2.23    .00    20;118
                             3        3.2           2.7       1.18    .28    20;117
                             4        4.4           2.2       2.04    .01    20;117
                             5        2.1           2.1       1.01    .46    20;117
Teaching knowledge of outcomes 1     2.8           2.3       1.26    .22    20;117
                             2        2.3           1.8       1.31    .19    20;115
Research lack of information 1        3.5           2.2       1.59    .07    20;106
                             2        2.2           2.2        .98    .49    20;109
                             3        2.3           2.8        .82    .68    20;109
                             4        1.8           3.0        .60    .91    20;108
                             5        2.3           2.2       1.09    .37    20;108
Research knowledge of outcomes 3     3.0           2.1       1.42    .13    20;103
                             4        2.5           2.2       1.13    .33    20;108

It would seem that either the variables measuring environmental uncertainty are unable to discriminate between departments or that there are few significant differences across departments on these variables.

Finally, the results of the analysis of variance using the task routineness items are displayed in Table 4. Again, results are mixed. Only about half of the variables show significant differences between departments.

TABLE 4
One-Way Analysis of Variance: Task Routineness Variables

                                Mean Square   Mean Square
                                  between       within
                         Item   Departments   Departments     F    Prob.    d.f.
Teaching task difficulty   1        3.9           2.3       1.67    .05    20;123
                           2        1.5           1.8        .83    .68    20;124
                           3        3.1           1.5       2.10    .01    20;123
                           4        2.1           1.4       1.49    .10    20;124
Research task difficulty   1        3.9           2.2       1.75    .03    20;116
                           2        4.5           1.4       3.31    .00    20;116
                           3        2.6           1.5       1.81    .03    20;116
                           4        3.5           1.7       2.06    .01    20;117
Teaching task variability  2        1.5           1.6        .95    .53    20;123
                           3        2.4           2.1       1.14    .32    20;123
Research task variability  2        2.0           1.5       1.33    .18    20;111
                           3        3.9           1.8       2.14    .01    20;113

Because of the mixed results of the one-way analyses of variance of single variables, and because these variables are not independent, the next step was to examine one-way analyses of variance of the scales (Table 5). Overall, the combined scales do a better job of differentiating the departments than do the subscales.

Discussion

It would seem warranted to aggregate (to the departmental level) faculty responses to the three structure dimensions. Measures of hierarchy of authority, impersonality, and participation in decision making, combined in a scale, do seem to constitute a departmental-level measure of structure which differentiates between departments. That rules and procedures do not differentiate departments is not unexpected: the college or university, rather than the department, may be the source of operating rules and procedures (Ramsey and White, 1980).
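As an aside on reading the tables, each tabled probability is the upper-tail area of the F distribution at the observed F ratio and degrees of freedom. A sketch, assuming SciPy is available, using the Table 2 row with mean squares 3.5 and 1.8 on 20;125 degrees of freedom (reported F = 1.92, p = .02):

```python
from scipy.stats import f

# One row of Table 2 (mean squares as tabled, rounded to one decimal).
ms_between, ms_within = 3.5, 1.8
df_between, df_within = 20, 125   # departments - 1; usable responses - departments

f_ratio = ms_between / ms_within  # ~1.94 here; the table's 1.92 reflects
                                  # unrounded mean squares
p_value = f.sf(f_ratio, df_between, df_within)  # upper-tail probability
print(round(f_ratio, 2), round(p_value, 2))
```

The recovered probability rounds to the tabled .02, which is a useful consistency check when transcribing results of this kind.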

TABLE 5
One-Way Analysis of Variance: Indices

                                       Mean Square   Mean Square
                                         between       within
                                       Departments   Departments     F    Prob.    d.f.
Structure 1* (teaching)                    1.4            .8       1.75    .04    20;117
Structure 1 (research)                     2.5           1.1       2.22    .01    20;112
Structure 2 (teaching)                     1.6           1.9        .84    .66    20;117
Structure 2 (research)                     2.5           1.9       1.28    .21    20;115
Environmental Uncertainty (teaching)       2.6           1.1       2.39    .00    20;112
Environmental Uncertainty (research)       1.5           1.2       1.34    .17    20;96
Task Difficulty                             .6            .4       1.37    .15    20;112
Task Variability                            .9            .9       1.00    .47    20;106

*Structure 1 includes the dimensions of hierarchy of authority, impersonality, and participation in decision making. Structure 2 is the dimension of rules and procedures.

Environmental uncertainty surrounding teaching decisions also seems to be a departmental-level scale. Responses concerning the environmental uncertainty surrounding research decisions should, however, be aggregated only with great caution. The research environment appears to be perceived by faculty at an individual rather than a departmental level. Individual faculty vary in their perceptions of the uncertainty of their research environments, even though they are members of the same department.

Task routineness dimensions do not appear to be measuring departmental-level phenomena. Again, faculty may view their tasks of teaching and research

as individual rather than departmental endeavors. The dimensions do not seem to discriminate between departments.

Analysis of variance indicated, then, that one could cautiously aggregate the combined scales of structure and environmental uncertainty, but not the task routineness subscales. These results were surprising, since the appropriateness of the academic department as a level of analysis has not been seriously questioned. Nevertheless, the measures used in this study, although reliable, did not differentiate departments well. Further work needs to be done, perhaps using profile analysis, in investigating similarities and differences between departments. If similar departments were clustered or combined on the basis of some theoretical premise, the measures might do a better job of discriminating between department groups. Lodahl and Gordon's (1972) use of the concept of paradigm development, rather than the arbitrary boundaries of academic departments as defined by the university's organization chart, is a step in this direction.

Further thought and research need to be devoted to the issue of investigating and demonstrating level of analysis before aggregating individual responses in organizational research. Contingency theories of organization design suggest that organizational or departmental structure is (or should be) a function of environmental uncertainty and task routineness. These theories are often tested at the departmental level. Only if our measures differentiate departments from individual responses, and can legitimately be aggregated to the departmental level, is it appropriate to use them to test contingency theories. It is, therefore, imperative that additional work be done in the area of homogeneity assessment in organizational research.

REFERENCES

Billings, R. S.; R. J. Klimoski; and J. A. Breaugh. "The Impact of Change in Technology on Job Characteristics: A Quasi-Experiment." Administrative Science Quarterly 22 (1977): 318-339.

Campbell, D. T. and D. W. Fiske. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix." Psychological Bulletin 56 (1959): 81-105.

Comstock, D. E. and W. R. Scott. "Technology and the Structure of Subunits: Distinguishing Individual and Work Group Effects." Administrative Science Quarterly 22 (1977): 177-202.

Dressel, P. L.; F. C. Johnson; and P. M. Marcus. The Confidence Crisis: An Analysis of University Departments. San Francisco: Jossey-Bass, 1970.

Duncan, R. B. "The Effects of Perceived Environmental Uncertainty on Organization Decision Unit Structure." Ph.D. Dissertation, Yale University, 1971.

Duncan, R. B. "Characteristics of Organizational Environments and Perceived Environmental Uncertainty." Administrative Science Quarterly 17 (1972): 313-327.

Forehand, G. A. and B. von Haller Gilmer. "Environmental Variation in Studies of Organizational Behavior." Psychological Bulletin 62 (1964): 361-382.

Fry, L. M. "Unrationalized Categorization and Its Impact on Technology-Structure Research." Unpublished manuscript, Texas A & M University, 1978.

Hage, J. and M. Aiken. "Relationship of Centralization to Other Structural Properties." Administrative Science Quarterly 12 (1967): 72-92.

Hall, R. H. "Intraorganizational Structural Variation: Application of the Bureaucratic Model." Administrative Science Quarterly 7 (1962): 295-308.

Hall, R. H. "The Concept of Bureaucracy: An Empirical Assessment." American Journal of Sociology 69 (1963): 32-40.

Hrebiniak, L. G. "Job Technology, Supervision, and Work-Group Structure." Administrative Science Quarterly 19 (1974): 395-410.

Khandwalla, P. N. "Mass Output Orientation of Operations Technology and Organizational Structure." Administrative Science Quarterly 19 (1974): 74-97.

Lawrence, P. R. and J. W. Lorsch. Organization and Environment. Boston: Harvard Business School, Division of Research, 1967.

Leifer, R. P. and G. P. Huber. "Relations Among Perceived Environmental Uncertainty, Organization Structure, and Boundary-Spanning Behavior." Administrative Science Quarterly 22 (1977): 235-247.

Lodahl, J. B. and G. Gordon. "The Structure of Scientific Fields and the Functioning of University Graduate Departments." American Sociological Review 37 (1972): 57-72.

Lodahl, J. B. and G. Gordon. "Differences Between Physical and Social Sciences in University Graduate Departments." Research in Higher Education 1 (1973): 191-213.

Lynch, B. D. "An Empirical Assessment of Perrow's Technology Construct." Administrative Science Quarterly 19 (1974): 338-356.

Pennings, J. M. "Measures of Organizational Structure: A Methodological Note." American Journal of Sociology 79 (1973): 686-704.

Peterson, M. W. "An Organizational Study of University Departments: Openness and Structural Complexity." Ph.D. Dissertation, The University of Michigan, 1969.

Ryan, D. W. "The Internal Organization of Academic Departments." Journal of Higher Education 43 (1972): 464-482.

Sathe, V. "Institutional Versus Questionnaire Measures of Organizational Structure." Academy of Management Journal 21 (1978): 227-238.

Schneider, B. and C. J. Bartlett. "Individual Differences and Organizational Climate II: Measurement of Organizational Climate by the Multi-trait, Multi-rater Matrix." Personnel Psychology 23 (1970): 493-512.

Van de Ven, A. H. and A. L. Delbecq. "A Task Contingent Model of Work-Unit Structure." Administrative Science Quarterly 19 (1974): 183-197.

Woodward, J. Industrial Organization: Theory and Practice. London: Oxford University Press, 1965.