Division of Research
School of Business Administration
August 1987

USER ACCEPTANCE OF INFORMATION SYSTEMS:
THE TECHNOLOGY ACCEPTANCE MODEL (TAM)

Working Paper #529

Fred Davis
University of Michigan

FOR DISCUSSION PURPOSES ONLY
None of this material is to be quoted or reproduced without the expressed permission of the Division of Research.

Copyright 1987
University of Michigan School of Business Administration
Ann Arbor, Michigan 48109


Abstract

Lack of user acceptance has long been a major roadblock to the success of information systems efforts. A new theoretical model is introduced to address (1) why end-users accept or reject information systems and (2) how user acceptance is affected by the design features of the system. The proposed model, termed the "technology acceptance model" (TAM), specifies the causal interrelationships between system design features, perceived usefulness, perceived ease of use, attitude toward using, and actual usage behavior. TAM was derived by integrating previous research from three distinct traditions: Management Information Systems (MIS) attitude research, MIS laboratory research and Human-computer Interaction (HCI) research. Pertinent issues from the psychology of attitudes were taken into account in order to place TAM on a solid theoretical foundation. Valid, reliable measures were used to operationalize the model's variables. A field experiment of 112 users and 2 end-user systems was conducted to test the hypothesized causal structure of TAM. The proposed model accounted for 36% of the variance in usage behavior. Perceived usefulness was 50% more influential than ease of use in determining usage. Implications for future research and practice are discussed.


1. INTRODUCTION Corporations are investing in management information technology hand over fist. At the same time, many companies are beginning to question whether the technology really contributes to organizational performance. Industry observers specifically mention the problem of underutilization, pointing out that a significant fraction of installed office systems, personal computers and terminals gather dust (e.g., Lee, 1986; Young, 1984). In general, the goal of an organizationally-based information system is to improve performance on the job. Unfortunately, performance impacts are lost whenever people decide not to use the system. User acceptance is often the pivotal factor determining the success or failure of an information system project (Lucas, 1975a). The present research introduces and empirically tests a new behavioral model of user acceptance, referred to as the "technology acceptance model" (TAM). TAM is designed to address two key research questions: (1) Why do users accept or reject information technology? Information systems researchers have wrestled with this question for over a decade, attempting to model the underlying cognitive and affective determinants of user behavior (e.g., Baroudi, Olson & Ives, 1986; DeSanctis, 1983; Ginzberg, 1981; Lucas, 1975b; Robey, 1979; Schultz & Slevin, 1975; Swanson, 1974). Progress has been steady but slow, and results have been mixed. Fortunately, the accumulated research on user acceptance provides a valuable point of departure for new theoretical initiatives. Rather than disregard the existing base of findings, the present research builds upon and extends previous work, using it to identify key variables and relationships for further investigation. (2) What is the impact of a system's design features on user acceptance? 
From a practical standpoint, we are interested not only in explaining why a system is unacceptable to a set of users, but also in understanding how to increase user acceptance through the design of the system. The choice of functional and interface characteristics of a new system is largely under the control of MIS practitioners such as system designers, developers, selectors and managers. Whether a new end-user system is an in-house development, a canned package purchased from an outside vendor, or an adaptable modeling and analysis tool, MIS practitioners generally have significant influence over the

features and capabilities to be included in the system. Whereas most MIS attitude research has studied the flow of causality from perceptions and attitudes forward to usage, the present research attempts to go further upstream and investigate how these perceptions and attitudes are influenced by the system's objective characteristics. The proposed model is shown in Figure 1, with arrows representing causal relationships.

Figure 1. Technology Acceptance Model (TAM)
[Figure: System Design Features (external stimulus) -> Perceived Usefulness and Perceived Ease of Use (cognitive response) -> Attitude Toward Using (affective response) -> Actual System Use (behavioral response); Perceived Ease of Use also influences Perceived Usefulness]

A prospective user's overall attitude toward using a given system is hypothesized to be a major determinant of whether or not he actually uses it. Attitude toward using, in turn, is a function of two major beliefs: perceived usefulness and perceived ease of use. Perceived ease of use has a causal effect on perceived usefulness. System design features directly influence perceived usefulness and perceived ease of use. System design features thus have an indirect effect on attitude toward using and actual usage behavior through their direct effect on perceived usefulness and perceived ease of use.

2. RATIONALE FOR TECHNOLOGY ACCEPTANCE MODEL (TAM)

TAM was derived by integrating, within a single model, key variables and relationships that have been studied within three distinct paradigms: (1) MIS attitude research, (2) MIS laboratory

research and (3) Human-computer Interaction (HCI) laboratory research. These three paradigms offer complementary perspectives which, when brought together, provide a more complete picture of the interplay between objective system features, user beliefs and attitudes, and usage behavior than any of them provides in isolation. MIS attitude researchers (e.g., Baroudi, Olson & Ives, 1986; DeSanctis, 1983; Ginzberg, 1981; Lucas, 1975b; Robey, 1979; Schultz & Slevin, 1975; Swanson, 1974) have made important progress toward understanding the beliefs and attitudes that determine usage, but have generally overlooked the impact of system design features on user beliefs and attitudes. MIS laboratory researchers (e.g., Benbasat & Dexter, 1986; Benbasat, Dexter & Todd, 1986; Dickson, DeSanctis & McBride, 1986; Dickson, Senn & Chervany, 1977) have addressed how the objective characteristics of systems, such as tabular versus graphic information displays, influence decision making effectiveness. Although the emphasis has clearly been on objective measures of effectiveness, subjective measures are getting increasing attention. In particular, MIS laboratory researchers have studied the impact of system characteristics on perceived usefulness and ease of use. Within the HCI tradition (e.g., Brosey & Shneiderman, 1978; Card, Moran & Newell, 1980; Gould, Conti & Hovanyecz, 1983; Miller, 1977; Roberts & Moran, 1983; Thomas & Gould, 1975), laboratory studies have addressed the effect of a system's design features (such as commands versus menus) on its usability. Although the bulk of the emphasis has been placed on objective usability criteria, such as task time and errors, increased emphasis is being placed on subjective measures of usability, or what the present research terms "perceived ease of use." A smaller number of researchers within the HCI tradition have also looked at perceived usefulness as a dependent variable.
Figure 2 shows the causal relationships studied by MIS attitude researchers, MIS laboratory researchers, and HCI laboratory researchers, respectively. These three research paradigms have evolved quite independently, but are beginning to converge on a common set of issues. The proposed model bridges these three perspectives by integrating laboratory research findings on the impact of design characteristics on perceptions with field research findings on the relationships between perceptions, attitudes and usage. TAM is therefore built upon regularities discovered in many previous studies, recognizing that variables and relationships repeatedly observed to be significant across several

studies are more likely to be fundamental, and that, in turn, a model composed of such variables and relationships is likely to be more general across a range of systems and user populations.

Figure 2. Relationship of TAM to Previous Research
[Figure: matrix mapping six causal relationships (numbered 1-6) to the three research paradigms, MIS attitude studies, MIS laboratory studies, and HCI laboratory studies, indicating which relationships each paradigm has investigated]

The Impact of Attitudes on Usage

The attitude-usage relationship is one of the most widely studied areas in MIS research. The MIS attitude studies published by Swanson (1974) and Lucas (1975b) were influential in the early development of this area. Swanson (1974) measured an attitude construct he called "MIS appreciation," finding it to be significantly related to system usage. Lucas (1975b) proposed and tested a model that linked attitudes to use and use to performance. A field survey of 398 subjects provided substantial support for the attitude-use relationship, and mixed support for the use-performance relationship. Schewe (1976) surveyed 79 computer users from 10 companies; assessing a very wide array of attitude and belief measures, several significant relationships with usage were

observed. Lucas (1978) found a significant attitude-usage relationship in a study of a medical research information system. Maish (1979) surveyed 62 federal employees with a broad-ranging attitude questionnaire. Several of his attitude scales were significantly correlated with use. In a longitudinal field study of 44 portfolio managers, Ginzberg (1981) found users' "realism of expectations" significantly correlated with use (r = .22). Fuerst and Cheney's (1982) study of 64 DSS users in eight oil companies found significant relationships between attitudes and self-reported use. Perhaps the most extensively studied MIS attitude scale is the "computer user satisfaction" instrument originally introduced by Bailey and Pearson (1983). Four semantic differential scales were used to measure a user's response to each of 39 "factors," and an importance weight was measured for each factor as well. Although Bailey and Pearson did not examine the satisfaction-use relationship, questionnaire responses from 29 subjects were used to assess the reliability of the 4-item scale for each factor, which averaged .93 across the 39 factors of the instrument. Ives, Olson and Baroudi (1983) performed further psychometric analyses of Bailey and Pearson's (1983) scale. A sample of 200 managers completed the questionnaire, as well as a second follow-up questionnaire containing a separate overall measure of satisfaction. (The correlation between the 2 satisfaction measures was .55.) Inter-item reliabilities were found to be above .9 for 30 out of 38 scales. Factor analysis revealed 5 factors underlying the satisfaction index. Using the same database reported by Ives, Olson and Baroudi (1983), Baroudi, Olson and Ives (1986) studied the interrelationship between user involvement, user information satisfaction and system usage.
Baroudi, Olson and Ives (1986) equate user satisfaction with "attitude toward the object," where the object is the system, and specify two rival models. The first model, based on dissonance theory (Festinger, 1957), argues that usage determines satisfaction. The second model, based on Fishbein and Ajzen's (1975) attitude theory, argues the reverse: that attitudes cause behaviors. Baroudi et al.'s (1986) data are more consistent with a significant causal effect of attitudes on usage (path coefficient = .24), which agrees with findings reported in the psychology literature (Bentler & Speckart, 1981).

Barki and Huff (1985) also observed a significant correlation between satisfaction and use (r = .39), measuring satisfaction using Ives, Olson and Baroudi's (1983) abbreviated version of the Bailey and Pearson (1983) user satisfaction measure. Srinivasan (1985) found no significant correlations between frequency of use and the five dimensions from Jenkins and Ricketts' (1979) user satisfaction scale in a survey of 29 users. Swanson (1982) summarizes the status of the MIS attitude area and makes prescriptions for future research: "Briefly, what has been learned to date from the research on MIS user attitudes? First, MIS attitudes are related to MIS use, broadly speaking. Variations in the attitude concept itself appear to explain much of the variation in research results to date. In some cases, this variation is due to distinctions drawn (or not drawn) between attitudes and beliefs...Secondly, the usage-relevant components of user attitudes are not well understood. Identification of these usage-relevant components is much needed to advance further research in the field...a refinement of the MIS attitude concept is called for. New construct(s) should be introduced which relate more closely to user behavior. Measure(s) should be developed for these constructs, and theoretical and empirical validation should follow (Swanson, 1982, p. 161)." Although a broad mix of different MIS attitude assessment procedures has been used, nine of the ten reviewed studies assessing the attitude-usage link found a significant relationship.

Attitude Theory

Swanson (1982) recommends adopting principles from the Fishbein and Ajzen (1975) attitude paradigm from psychology to put the MIS attitude area on a more solid theoretical foundation.
This paradigm helps resolve several theoretical issues in the MIS attitude literature by (1) specifying how to measure the behavior-relevant components of attitudes, (2) distinguishing between beliefs and attitudes, and (3) specifying how external stimuli, such as the objective features of an attitude object, are causally linked to beliefs, attitudes and behavior. Fishbein and Ajzen (1975, p. 31) draw the distinction between two attitude constructs: attitude toward the object (Ao), which refers to a person's affective evaluation of a specified attitude object, and attitude toward the behavior (AB), which refers to a person's evaluation of a specified behavior involving the object. It has been shown that AB relates more strongly to a specified behavior than does Ao (Ajzen & Fishbein, 1977). The Ao-AB distinction is relevant for MIS attitude research,

which has generally focused on attitudes toward the system (Ao) as opposed to attitude toward using the system (AB). Hence, within the proposed technology acceptance model (TAM), attitude toward using the system will be employed. Adapting the general AB definition, attitude toward using is defined as: "the degree of evaluative affect that an individual associates with using the target system in his or her job." The present research will employ Fishbein and Ajzen's recommended attitude measurement scales to operationalize attitude toward using (e.g., Ajzen & Fishbein, 1980, Appendix A; see also Fishbein & Ajzen, 1975, Chapters 3 and 4). The standard measures employ 7-point rating scale formats anchored with evaluative semantic differential adjective pairs (such as 'good-bad'), and typically possess high reliability and validity (e.g., Bagozzi, 1981; Fishbein & Raven, 1962). Why not use one of the existing MIS attitude scales? The highly studied satisfaction scale developed by Bailey and Pearson (1983; see also Ives, Olson & Baroudi, 1983) is perhaps the best candidate from the MIS literature. There are several drawbacks to using it in the present research, however. First, as Baroudi, Olson and Ives (1986) point out, the scale is regarded as an operationalization of Ao and not AB. Although Baroudi et al. (1986) observed a significant Ao-usage relationship, according to Fishbein and Ajzen, AB should relate even more strongly to usage. Second, although user satisfaction has been treated as a single variable (Bailey & Pearson, 1983; Barki & Huff, 1985; Baroudi, Olson & Ives, 1986), the scale has been shown to have multiple underlying dimensions (Ives, Olson & Baroudi, 1983). This violates convergent validity (Campbell & Fiske, 1959), and obscures the interpretability of empirical relationships observed between the satisfaction scale and other variables. In contrast, the standard Fishbein and Ajzen measures are unidimensional (e.g., Bagozzi, 1981).
Third, the satisfaction scale is quite long: the original scale comprises almost 200 items (Bailey & Pearson, 1983), and even the abbreviated version has more than 60 items (Ives, Olson & Baroudi, 1983), whereas the standard scales recommended by Ajzen and Fishbein (1980) achieve high reliability and validity using 4 to 6 items. Clearly, there are several advantages to using the standard attitude scales in preference to the Bailey and Pearson satisfaction index.

Fishbein and Ajzen draw the distinction between beliefs and attitudes, a distinction which has frequently been overlooked in the MIS attitude domain (Swanson, 1982). Whereas a person's belief about a behavior (also called a perceived consequence of the behavior) refers to his or her subjective likelihood that performing the behavior will lead to a specified outcome event (Fishbein & Ajzen, 1975, p. 233), attitude toward the behavior is an affective evaluation of the behavior. According to Fishbein and Ajzen (1975, p. 233), attitude toward a behavior is determined by an expectancy-value model in which beliefs about consequences are weighted by evaluations of those consequences:

AB = Σ(i=1 to n) bi ei

where bi is the belief that performing the behavior leads to consequence i, and ei is the evaluation of consequence i. Although Fishbein and Ajzen recommend using a self-stated evaluation term ei, this has become a point of considerable debate in psychology. First, multiplying two variables together to form an index assumes they are scaled at the ratio level of measurement. However, psychological ratings such as bi and ei generally achieve only the interval level of measurement (Hauser & Shugan, 1980). The multiplication of interval-scaled measures introduces systematic error of unknown magnitude into the resulting product term (Schmidt, 1973). Second, using hierarchical regression methods to circumvent the ratio-scaling problem, expectancy-value theorists have observed that people generally do not combine expectancies and values multiplicatively (e.g., Bagozzi, 1984; Stahl & Harrell, 1981). Moreover, statistically estimated (e.g., via regression) weights often predict as well as or better than their self-stated counterparts (e.g., Shoemaker & Waid, 1982) and provide a representative picture of the more detailed cognitive activity underlying judgmental processes (e.g., Einhorn, Kleinmuntz & Kleinmuntz, 1979), while avoiding the ratio-scaling problem discussed above. In light of these developments, TAM will employ statistically estimated as opposed to self-stated value weights.
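The contrast between self-stated and statistically estimated weights can be sketched as follows (a hypothetical illustration; the simulated data, sample size, and variable names are assumptions, not taken from any of the studies cited): instead of forming the product bi x ei from a self-stated evaluation ei, the weight on each belief is estimated by regressing attitude scores directly on the belief scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical data: n respondents rate two beliefs about using a system
# on 7-point scales, e.g. b1 = "improves my performance", b2 = "is easy to use".
beliefs = rng.integers(1, 8, size=(n, 2)).astype(float)

# Simulated "true" attitude: a linear function of the beliefs plus noise
# (the weights 0.6 and 0.3 are arbitrary for the illustration).
true_weights = np.array([0.6, 0.3])
attitude = beliefs @ true_weights + rng.normal(0, 0.5, size=n)

# Statistically estimated weights: regress attitude on the belief scores,
# avoiding the ratio-scale assumption implicit in a self-stated bi * ei product.
X = np.column_stack([np.ones(n), beliefs])        # intercept + belief columns
coefs, *_ = np.linalg.lstsq(X, attitude, rcond=None)
intercept, est_weights = coefs[0], coefs[1:]

print(est_weights)  # regression recovers weights close to the simulated ones
```

The estimated weights play the role of the evaluation terms, but are derived from the data rather than elicited, which is the approach TAM adopts.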
Fishbein and Ajzen (1975) also discuss how external stimuli influence beliefs, attitudes and behavior. According to their theory, external stimuli influence a person's attitude toward a behavior indirectly by influencing his or her salient beliefs (perceptions) about the consequences of performing the behavior (p. 396). Since system design features are external stimuli, they should influence

specific beliefs about using a system. By identifying the particular beliefs that are operative in the context of computer user behavior, the proposed model should provide diagnostic insight into how system characteristics influence user attitudes. Two specific beliefs, perceived usefulness and perceived ease of use, have repeatedly surfaced across numerous studies as significant correlates of attitudes and usage on the one hand and system design features on the other. We now review some of these previous studies.

The Impact of Perceived Usefulness on Attitudes and Usage

The present research defines perceived usefulness as "the degree to which an individual believes that using a particular system would enhance his or her job performance." The importance of expected performance impacts for user acceptance was suggested in a series of studies by Schultz and Slevin (1975), Robey and Zeller (1978) and Robey (1979). Schultz and Slevin (1975) developed an MIS attitude instrument by generating an initial set of 81 items from the literature, pilot testing the items on MBA students and pruning the questionnaire down to 67 Likert-scaled items. The resulting questionnaire was administered to 94 managers. Exploratory factor analysis yielded 7 factors, which Schultz and Slevin labeled: performance; interpersonal; changes; goals; support/resistance; client/researcher; and urgency. The performance dimension, interpreted as the perceived "effect of model on manager's job performance," was the factor most strongly correlated with a separate measure of self-predicted use. Using the Schultz and Slevin instrument, both Robey and Zeller (1978) and Robey (1979) found the performance dimension to be most strongly linked to actual usage behavior. Robey and Zeller (1978) used Schultz and Slevin's questionnaire to diagnose why one department in a company adopted and used an information system, while a similar department in the same company rejected the same system.
Across the 7 factors of the instrument, 2 were significantly different between departments: performance and urgency. Robey (1979) found the "performance" subscale was most highly correlated with use of a computerized record-keeping system by 66 salespeople (Spearman correlations = .79 and .76 for two use measures). The empirical importance of the performance factor led Robey (1979) to amplify Vertinsky, Barth and Mitchell's (1975) earlier

suggestion to interpret the link between expected performance and use within the context of expectancy theory models of work motivation (e.g., Porter & Lawler, 1968; Vroom, 1964). Work motivation models often include the "effort-performance" expectancy as a key determinant of job effort. Vertinsky, Barth and Mitchell (1975) and Robey (1979) both recommended adapting work motivation models to the explanation of MIS usage by substituting the "use-performance" expectancy in place of "effort-performance" in order to predict and explain usage behavior. Following the suggestions of Vertinsky, Barth and Mitchell (1975) and Robey (1979), DeSanctis (1983) tested an expectancy theory model of decision support system use in a business simulation context. Use-performance and performance-outcome expectancies, and outcome valences, were measured and combined to form a measure of "motivational force" according to Vroom's (1964) specification. Force measured at one time period of the simulation was significantly correlated with use measured in the subsequent time period (r = .23). Robey and Zeller (1978), Robey (1979) and DeSanctis (1983) addressed the direct effect of perceived usefulness on usage, but did not test for the intervening role of attitudes. Their data are consistent with a usefulness-attitude-usage chain of causality, however, and we will appeal to Fishbein and Ajzen's (1975) theory, which posits that beliefs impact behavior indirectly through their effect on attitude. Other researchers have examined variables similar to perceived usefulness. Fuerst and Cheney (1982) observed a significant correlation between "perceived relevancy" and usage. Ginzberg (1981) found perceptions of "importance" and "value" to be significantly correlated with satisfaction. King and Epstein (1983) measured "decision relevance" as one of ten attributes in a multiattribute model of information systems value.
The model as a whole was highly correlated with an overall value rating, but the relevance-value relationship was not separately computed. Barrett, Thornton and Cabe (1968) obtained a correlation of .38 between importance and satisfaction. Ives, Olson and Baroudi (1983) assessed the correlation between each of the 39 factors comprising the Bailey and Pearson (1983) instrument and a separate overall satisfaction measure. Three of the factors are similar to perceived usefulness: "perceived utility" (r = .42); "relevance of output" (r = .41), and "job effects of computer support" (r = .48). In a factor analytic study, Swanson (1987) found the following

"usefulness-like" items to load highly on a single dimension, which he labeled "value": important; meaningful; relevant; useful; and valuable. Using hierarchical regression, Swanson found "value" to be more strongly linked to usage (beta = .27) than two other derived factors (technique and accessibility). Thus, nine previous empirical studies have found constructs conceptually similar to perceived usefulness significantly linked to either attitudes or usage.

The Impact of Perceived Ease of Use on Attitudes and Usage

Perceived ease of use is defined as "the degree to which an individual believes that using a particular system would be free of physical and mental effort." Three factor analyses reported in the literature suggest that perceived ease of use and perceived usefulness are statistically distinct dimensions. Larcker and Lessig (1980) found items tapping "usableness" and "importance" of information reports to load on distinct factors. The former is similar to ease of use, while the latter is similar to usefulness. Hauser and Shugan's (1980) factor analysis of user perceptions toward telecommunication alternatives revealed two dimensions: "ease of use" and "effectiveness." In Swanson's (1987) factor analysis, items such as "convenient," "controllable," "easy," "failure-free," "flexible," and "unburdensome" loaded on a single factor, which he labeled "accessibility." This factor, which is very similar to our definition of perceived ease of use, is statistically distinct from Swanson's "value" factor described above, which is similar to usefulness. These three factor analytic studies suggest that perceived usefulness and perceived ease of use are separate theoretical constructs.
Ives, Olson and Baroudi (1983) found significant relationships between an overall satisfaction measure and several of Bailey and Pearson's (1983) factors that are like perceived ease of use: user's understanding of system (r = .30); convenience of access (r = .36); personal control (r = .37), and flexibility of system (r = .46). King and Epstein (1983) included an attribute called "understandability" in their multiattribute model. Schewe (1976) observed a significant link between attitudes and "conversion effort," defined as "the effort and difficulty in changing to a computer system from a manual system." Barrett, Thornton and Cabe (1968) obtained a .54

correlation between user ratings of ease of use and satisfaction. Both Culnan (1983) and Swanson (1987) observed a significant link between perceived accessibility and usage of an information source. Insofar as a more accessible source requires less effort to use, perceived accessibility is conceptually akin to perceived ease of use. Since these latter two studies did not assess attitudes, we once again appeal to Fishbein and Ajzen's (1975) theory to postulate that attitudes mediate between perceived ease of use and usage. Previous research, therefore, is consistent with the hypothesis that perceived usefulness and perceived ease of use are distinct constructs (three studies) and that ease of use is significantly linked to attitudes and usage (five studies).

The Impact of Perceived Ease of Use on Perceived Usefulness

Perceived ease of use is hypothesized to have a significant direct effect on perceived usefulness. If two systems perform the identical set of functions, a user should find the one that is easier to use more useful. Brooks (1977) provides an example of how the reorganization of menus for accessing the same set of functions had a dramatic effect on user performance. A designer should be able to enhance perceived usefulness either by adding new functional capabilities to a system, or by making it easier to invoke the functions which already exist. Given that some fraction of a user's total job content is devoted to physically using the system per se, if the user becomes more productive in that fraction of his or her job via greater ease of use, then he or she can reallocate the saved time to other activities, becoming more productive overall. Alternatively, the additional time may be spent performing a more detailed or extensive job on a task, such as examining more alternatives when using a decision support system (Keen, 1981).
Aside from pure time savings, if a system is easier to use, less of the user's cognitive capacity should be required to operate the system, allowing the user to be more attentive to the task for which the computer is being used, which could reduce errors and increase task effectiveness (e.g., Olson, 1987). Additionally, if a user finds a particular system easy to use, over time he or she is likely to become a more advanced user of the system, better able to take advantage of the range of capabilities it has to offer (e.g., Goodwin, 1982). Thus, for a variety of reasons, a system's

ease of use should influence its usefulness. In the words of Goodwin (1987, p. 229): "There is increasing evidence that effective functionality of a system depends on its usability." Three studies provide empirical evidence of an ease of use-usefulness link. Swanson (1987) observed a correlation of .55 between his "accessibility" (perceived ease of use) and "value" (perceived usefulness) factors. Schewe's (1976) hierarchical regression analysis found "conversion effort" to be a significant predictor of "job productivity," defined as "the effect on the manager's own productivity." Barrett, Thornton and Cabe (1968) observed a striking .95 correlation between the ease of use and importance ratings of eight information sources, averaged across subjects. Why doesn't perceived usefulness have a direct impact on perceived ease of use? Perceived usefulness concerns the expected overall impact of system use on job performance (process and outcome), whereas ease of use pertains only to those performance impacts related to the process of using the system per se. Hence, the performance impacts concerning ease of use are a logical subset of those comprising usefulness. Making a system easier to use, all else held constant, should make the system more useful. The converse does not hold, however. Consider a hypothetical new forecasting model which, although just as easy to use as the model it supersedes, provides a more accurate forecast. Moving from the old model to the new one, usefulness goes up with no effect on ease of use.

The Impact of Design Features on Perceived Usefulness and Perceived Ease of Use

To a great extent, MIS research is a science of design (e.g., Simon, 1981).
Academics and practitioners alike wish to better understand how to choose, from among the multitude of possibilities afforded by information technology, those particular design features that will contribute most to user acceptance and performance (e.g., Goslar, 1986; Klein & Beck, 1987; Moriarty, 1985; Reiman & Allen, 1985). Two distinct research traditions have focused on the impact of design characteristics: MIS laboratory research and HCI laboratory research. MIS laboratory research (e.g., Benbasat & Schroeder, 1977; Benbasat & Dexter, 1986; Benbasat, Dexter & Todd, 1986; Dickson, DeSanctis & McBride, 1986; Dickson, Senn & Chervany, 1977) has investigated how the format in which information is presented to a decision maker (e.g., color vs. monochrome; detailed vs. summarized; tabular vs. graphic)

influences decision making effectiveness. Parenthetically, researchers in this tradition have also studied the impact of cognitive style on performance (Dickson, Senn and Chervany, 1977; Zmud, 1979), a line of inquiry that Huber (1983) argues is unlikely to yield important insight regarding MIS and DSS design. As dependent variables, MIS laboratory researchers have emphasized objective performance criteria, such as profit earned in a business simulation game or decision making time, although increased emphasis on user perceptions and attitudes is evident (e.g., Benbasat & Dexter, 1986; Ghani & Lusk, 1982). Several MIS laboratory researchers have examined perceived usefulness and perceived ease of use as dependent variables. Benbasat and Dexter (1986) studied the impact of color (vs. monochrome) and information format (graphical, tabular and combined). Although color had no effect on perceptions, format had a significant effect on perceived usefulness (as measured by three items: "report usefulness," "report relevance," and "useful for choosing alternatives"), and had a nearly significant effect on "report understandability," which is similar to perceived ease of use. Benbasat, Dexter and Todd's (1986) subjects rated graphs as more relevant and useful than tables and rated multicolor reports as more understandable than monochrome reports. Dickson, DeSanctis and McBride (1986) found that subjects rated tabular reports "easier to read and understand" than graphical reports. Lucas (1981) found that subjects rated tabular CRT hardcopy output more useful than tabular CRT output, and tabular CRT output more useful than graphical CRT output. Whereas empirical MIS research has its roots in the management sciences and focuses on the use of information and decision support systems for decision making, HCI research is more strongly influenced by computer science and human factors engineering, and has emphasized the design of text editors and command language interfaces.
Although the bulk of HCI research employs objective usability criteria, such as task performance time and error rate (e.g., Brosey & Shneiderman, 1978; Reisner, Boyce & Chamberlin, 1975; Card, Moran & Newell, 1980, 1983; Lochovsky & Tsichritzis, 1977; Roberts & Moran, 1983; Thomas & Gould, 1975), increased attention has recently been given to self-reported ease of use ratings as well.

Bewley et al. (1983) reported on a set of experiments that were done in designing the Xerox "Star" office workstation. Subjects rated four sets of icons for "ease of picking out of a crowd," although no differences were found. In comparing a document processing system to a standard typewriter, Good (1982) found no differences in perceived ease of use as measured by the adjective pair "unfriendly-friendly." Magers (1983) found that an on-line help facility had a significant effect on perceived ease of use. Miller (1977) observed that, although a system's average response time had no impact on either perceptions or performance, the variability of response time had an adverse effect on actual performance, perceived ease of use and perceived usefulness. Poller and Garter's (1983) subjects found "modeless" text editing easier to use than "moded" text editing. Thus, a significant impact of system design features on both perceived usefulness (four studies) and perceived ease of use (five studies) has previously been observed by MIS and HCI researchers.

Summary

Attitude theory from psychology provides a rationale for the flow of causality from system design features through perceptions to attitude and finally to usage. For each causal relationship within TAM, several previous MIS or HCI studies have found significant links between similar constructs. At the same time, none of the previous studies has addressed all of the relationships represented in TAM. In this sense, the proposed model builds upon, consolidates, and extends existing research. The previous research provides theoretical and empirical justification for the hypotheses expressed in TAM.

3. EMPIRICAL TEST OF MODEL

A field experiment was conducted to test hypotheses regarding the causal structure of TAM. A questionnaire was administered to 120 users, asking them to rate two different software systems, an electronic mail system and a text editor, which are widely available in their organization. Respondents were screened to make sure they had previously used the target systems, so that the attitudes and beliefs measured were formed based on direct behavioral experience with the attitude object (Fazio & Zanna, 1981). The technology acceptance model is expressed by the following structural equations (Duncan, 1975):

    EOU  = β11 System + ε1
    USEF = β21 System + β22 EOU + ε2
    ATT  = β31 EOU + β32 USEF + ε3
    USE  = β41 ATT + ε4

where "System" is a dummy variable taking on the value 0 for electronic mail and 1 for the text editor, USE refers to intensity of system usage, ATT refers to attitude toward using, USEF refers to perceived usefulness and EOU refers to perceived ease of use. Ordinary least-squares regression is used to test this structural equation model (Land, 1973). The statistical significance of the proposed TAM relationships, expressed in hypotheses 1-5 below, will be assessed using the t-statistic corresponding to each estimated parameter.

H1: Attitude toward using will have a significant positive effect on actual system use.
H2: Perceived usefulness will have a significant positive effect on attitude toward using, controlling for perceived ease of use.
H3: Perceived ease of use will have a significant positive effect on attitude toward using, controlling for perceived usefulness.
H4: Perceived ease of use will have a significant positive effect on perceived usefulness, controlling for system.
H5: System will have a significant effect on perceived usefulness and perceived ease of use.
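Because the model is recursive, each equation can be estimated separately by ordinary least squares. The sketch below illustrates this on simulated data (the study's raw data are not available here); the simulation coefficients are arbitrary illustration values, and only numpy is used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# "System" dummy: 0 = electronic mail, 1 = text editor (the paper's coding)
system = rng.integers(0, 2, n).astype(float)

# Simulate data following TAM's recursive structure (coefficients below are
# illustrative only, not the study's estimates).
eou  = -0.5 * system + rng.normal(0, 1, n)               # EOU  = b11*System + e1
usef =  0.0 * system + 0.6 * eou + rng.normal(0, 1, n)   # USEF = b21*System + b22*EOU + e2
att  =  0.1 * eou + 0.5 * usef + rng.normal(0, 1, n)     # ATT  = b31*EOU + b32*USEF + e3
use  =  0.2 * att + rng.normal(0, 1, n)                  # USE  = b41*ATT + e4

def ols(y, *xs):
    """OLS with an intercept; returns the slope coefficients only."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1:]

# Equation-by-equation estimation, one regression per structural equation
b11, = ols(eou, system)
b21, b22 = ols(usef, system, eou)
b31, b32 = ols(att, eou, usef)
b41, = ols(use, att)
```

With a large simulated sample the OLS slopes recover the generating coefficients, mirroring how the paper obtains a t-statistic for each structural parameter.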

The data will also be used to test whether the causal relationships hypothesized to be indirect have no significant direct effect. These tests are expressed in hypotheses 6 and 7 below. Hierarchical regression and associated F-tests of the significance of the increase in R2 due to the additional variables will be used for these hypotheses.

H6: Perceived usefulness, perceived ease of use, and system will not have significant direct effects on actual system use, controlling for attitude toward using.
H7: System will not have a significant direct effect on attitude toward using, controlling for perceived usefulness and perceived ease of use.

In addition to testing for the significance vs. non-significance of the hypothesized relationships, the data will also be used to estimate the magnitudes of the causal parameters. The estimates will be the standardized regression coefficients, expressed both as point and confidence interval estimates.

Method

Subjects and Procedure

Subjects were 112 professional and managerial employees of a large North American corporation. A questionnaire was circulated to 120 users on one day and collected from 112 on the following day, yielding a response rate of 93.3%.

Questionnaire

The questionnaire contained questions regarding two systems that are widely used in the company: an electronic mail system and a text editor. In order to screen out respondents without experience with one or the other of the systems, instructions in the questionnaire asked subjects not to fill out the section regarding a given system if they had not used it. Of the 112 participants, 109 completed the section of the questionnaire pertaining to electronic mail and 76 completed the section pertaining to the editor. For each system, respondents were asked to rate their perceived ease of use (EOU), perceived usefulness (USEF), attitude toward using (ATT) and actual current use of the system (USE).
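The hierarchical F-test for H6 and H7 compares a restricted and an unrestricted regression. A minimal sketch follows; for illustration it uses the pooled R2 values the paper reports for USE with attitude alone (.308) and with all predictors added (.365), at n = 185:

```python
def r2_increase_F(r2_full, r2_reduced, n, k_full, q):
    """F statistic for the increase in R^2 when q predictors are added.
    k_full is the number of predictors in the full (unrestricted) model;
    the statistic has (q, n - k_full - 1) degrees of freedom."""
    numerator = (r2_full - r2_reduced) / q
    denominator = (1.0 - r2_full) / (n - k_full - 1)
    return numerator / denominator

# Does adding System, EOU and USEF improve prediction of USE beyond ATT alone?
# (R^2 values from the paper's Tables 1 and 2; pooled sample n = 185.)
F = r2_increase_F(r2_full=0.365, r2_reduced=0.308, n=185, k_full=4, q=3)
```

The resulting F (about 5.4 on 3 and 180 degrees of freedom) would then be compared against the F distribution's critical value to judge whether the added direct paths matter.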

Attitude toward using was measured using standard 7-point semantic differential rating scales as suggested by Ajzen and Fishbein (1980):

    All things considered, my using electronic mail in my job is:
                      Neutral
    Good ___:___:___:___:___:___:___ Bad

In addition, the adjective pairs Wise-Foolish, Favourable-Unfavourable, Beneficial-Harmful and Positive-Negative were used, for a total of five items making up the attitude scale. These are all adjective pairs found to load on the evaluative dimension of the semantic differential (Osgood, Suci & Tannenbaum, 1957). The attitude toward using scale had a Cronbach alpha reliability of .96. Perceived ease of use and perceived usefulness were measured using the measurement scales given in the Appendix. The scales were highly reliable, with Cronbach alpha coefficients of .97 for perceived usefulness and .91 for perceived ease of use. Subjects were instructed to circle the number corresponding to their responses on rating scales having the following format:

    Strongly Agree          Neutral          Strongly Disagree
    I find the electronic mail system cumbersome to use.   1 2 3 4 5 6 7

Two items were used to obtain a self-reported measure of actual system use. The first one, a measure of the frequency of use of the system, had the following format:

    On the average, I use electronic mail (pick most accurate answer):
        Don't use at all
        Use less than once each week
        Use about once each week
        Use several times each week
        Use about once each day
        Use several times each day
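The Cronbach alpha reliabilities cited above can be computed from the item-level responses with the standard formula. A generic sketch (not the author's software):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.
    items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For perfectly parallel items (every respondent gives identical answers across items) the formula returns 1.0; uncorrelated items drive it toward zero.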

The second usage measure asked subjects to specify about how many hours they normally spend each week using the target system. Frequency of use and amount of time spent using a target system are typical of the usage metrics employed in MIS research (e.g., Ginzberg, 1981; Robey, 1979). Available evidence suggests that such self-reported time estimates, although not necessarily precise in an absolute sense, are accurate as relative indicants of the amount of time spent on activities (Hartley, et al., 1977). Given that the methodologies applied to the data require measures scaled at no more than the interval level of measurement, the self-reported items should be adequate. The hours-per-week question exhibited a highly right-skewed distribution of responses, and was rescaled by taking logarithms in order to make the distribution more symmetric. A linear transformation was then performed on the rescaled hours-per-week question to give it the same range as the frequency of use question. Averaging the use items yielded a reliability of .70.

Results

Regression analyses were performed on data pooled across the two target systems (n = 185). Table 1 contains the results of OLS regressions applied to the hypothesized equations of the model, and Table 2 contains the unrestricted regressions needed to carry out the hierarchical regression test of the insignificance of those causal relationships hypothesized to be nonsignificant. Figure 3 shows the results of the regression analyses. Most of the hypotheses were confirmed by the data. Attitude had a significant effect on usage (β = .21, p < .05). Perceived usefulness had a significant and strong effect on attitude (β = .65, p < .001). Ease of use had a smaller, but also significant, effect on attitude (β = .13, p < .05), and a strong effect on usefulness (β = .63, p < .001). Ease of use was significantly affected by the system, with electronic mail being perceived as easier to use than the text editor (β = -.21, p < .01).
Among the hypothesized direct effects, only the system-usefulness link was disconfirmed (β = -.01, n.s.). Hence, aside from the differences in ease of use, the electronic mail system and the text editor are perceived to have similar impacts on one's job performance. System and ease of use had no direct effect on usage, as hypothesized. Counter to expectation, however, usefulness had a strong direct effect on use over and above attitude (β = .44, p < .001).

Table 1. TAM Regression Tests

Dep. Var.   R2     Independent Variable   b       S.E.(b)   beta    t-stat.   sig. level
EOU         .044   Constant               3.323   .155              21.463    .000
                   System                 -.581   .201      -.210   -2.890    .004
USEF        .400   Constant               .933    .214              4.356     .000
                   System                 -.036   .151      -.014   -.238     .812
                   EOU                    .591    .055      .630    10.661    .000
ATT         .550   Constant               .224    .134              1.668     .097
                   EOU                    .100    .049      .134    2.037     .043
                   USEF                   .531    .054      .650    9.893     .000
USE         .308   Constant               4.192   .283              14.802    .000
                   ATT                    .220    .025      .555    8.792     .000
USE         .361   Constant               3.411   .323              10.565    .000
(w/ USEF           USEF                   .077    .016      .435    4.913     .000
included)          ATT                    .089    .039      .205    2.316     .022

System did not have a direct effect on use, but, unexpectedly, had a small but significant effect on attitude over and above usefulness and ease of use (β = -.16, p < .01). Table 3 gives the point estimates and confidence intervals for the standardized regression coefficients. The parameters enable one to compute the relative importance of USEF and EOU in influencing USE. USEF has both a direct effect (.44) plus an indirect effect via ATT (.65 × .21). Combined, this equals .58. EOU has an effect on USE through ATT: .13 × .21; plus an effect through USEF: .63 × .58 (.58 from the above calculation of USEF's effect on USE). This totals .39. Comparatively, therefore, USEF is about 1.5 times as important as EOU in influencing USE.
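The relative-importance calculation above can be reproduced directly from the estimated path coefficients:

```python
# Path coefficients from the fitted model (values reported in the paper)
b_usef_use = 0.44   # USEF -> USE (direct)
b_att_use  = 0.21   # ATT  -> USE
b_usef_att = 0.65   # USEF -> ATT
b_eou_att  = 0.13   # EOU  -> ATT
b_eou_usef = 0.63   # EOU  -> USEF

# Total effect of USEF on USE: direct effect plus indirect effect via ATT
usef_total = b_usef_use + b_usef_att * b_att_use             # about .58

# Total effect of EOU on USE: via ATT, plus via USEF (which in turn
# affects USE both directly and via ATT)
eou_total = b_eou_att * b_att_use + b_eou_usef * usef_total  # about .39

ratio = usef_total / eou_total                               # about 1.5
```

Multiplying coefficients along each path and summing across paths is the standard decomposition of total effects in a recursive path model, and it yields the "about 1.5 times as important" comparison stated in the text.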

Table 2. Hierarchical Regression Tests of Indirect Relationships

Dep. Var.   R2     Independent Variable   b       S.E.(b)   beta    t-stat.   sig. level
ATT         .574   Constant               .484    .155              3.121     .002
                   System                 -.323   .103      -.159   -3.133    .002
                   EOU                    .077    .049      .104    1.599     .112
                   USEF                   .532    .052      .651    10.155    .000
USE         .365   Constant               3.010   .416              7.235     .000
                   System                 .366    .278      .084    1.314     .191
                   EOU                    .147    .130      .092    1.129     .261
                   USEF                   .669    .175      .380    3.829     .000
                   ATT                    .449    .201      .206    2.233     .027

Table 3. TAM Parameter Estimates and 95% Confidence Intervals

Causal Link            Point       Std.    Sig.    95% Confidence Interval
Ind. Var.  Dep. Var.   Estimate    Error   Level   Lower Bound   Upper Bound
System     EOU         -.210       .073    .004    -.352         -.068
System     USEF        -.014       .059    .812    -.129         .101
EOU        USEF        .630        .059    .000    .515          .745
System     ATT         -.159       .051    .002    -.259         -.059
EOU        ATT         .134        .049    .043    .038          .230
USEF       ATT         .651        .064    .000    .524          .778
USEF       USE         .435        .090    .000    .258          .612
ATT        USE         .205        .090    .022    .029          .381
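Each interval in Table 3 is a normal-theory interval of the form estimate ± 1.96 × standard error. A one-line sketch, checked against the System → EOU row (the table's printed bounds match to within rounding):

```python
def ci95(estimate, se):
    """95% normal-theory confidence interval for a regression coefficient."""
    half_width = 1.96 * se
    return estimate - half_width, estimate + half_width

# System -> EOU row of Table 3: estimate -.210, standard error .073
lo, hi = ci95(-0.210, 0.073)   # about (-.353, -.067)
```

Applying the same formula row by row reproduces the remaining bounds in the table, up to rounding in the third decimal place.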

Figure 3. Causal Diagram of Model Evaluation Results

[Path diagram. Hypothesized links: System to Perceived Ease of Use (-.21**); System to Perceived Usefulness (-.01, n.s.); Perceived Ease of Use to Perceived Usefulness (.63***); Perceived Ease of Use to Attitude Toward Using (.13*); Perceived Usefulness to Attitude Toward Using (.65***); Attitude Toward Using to Actual Use (.21*). Dotted links, hypothesized insignificant but found significant: Perceived Usefulness to Actual Use (.44***); System to Attitude Toward Using (-.16**).]

Note: Regression beta coefficients are shown. *p<.05  **p<.01  ***p<.001

4. DISCUSSION

The TAM motivational variables (attitude toward using, perceived usefulness and perceived ease of use), taken together, fully mediate between system design features and usage. That is, the characteristics of the system appear to influence behavior entirely through these motivational variables and have no additional direct effect on use. The powerful effect of usefulness on actual use, both directly and indirectly through attitude ((.65 × .21) + .44 = .58), is perhaps the most striking of our findings. The fact that usefulness exerts more than twice as much direct influence on use as does attitude toward using (with regression coefficients of .44 and .21 for usefulness and attitude, respectively) underscores the importance of the usefulness variable. In addition, usefulness exerts more than four times as much direct influence on attitude as does ease of use (.65 vs. .13).

The direct effect of a perception on behavior over and above its indirect effect through attitude, such as the observed usefulness-use link, is inconsistent with Fishbein and Ajzen's theory (e.g., 1975, p. 314), but has been addressed elsewhere in psychology. An alternative model specified by Triandis (1977, p. 194) views cognitions as having a direct effect on behavioral intentions. Bagozzi (1982) found that beliefs have both a direct effect on intentions and an indirect effect through attitude. Therefore, there is some theoretical and empirical precedent for an effect of beliefs on behavior over and above their indirect effect via attitude. Compared to usefulness, perceived ease of use has a fairly small direct effect on attitude, primarily influencing attitude indirectly via its relatively strong effect on usefulness (.63 × .65 = .41). The effect of ease of use on usage operates almost entirely through its effect on usefulness, which is .63 × (.44 + .65 × .21) = .36. Comparatively, the effect of ease of use on use through its direct effect on attitude is only .13 × .21 = .03. Why does ease of use operate primarily through usefulness? The usefulness construct may reflect considerations of both the "benefits" and the "costs" of using the target system (e.g., Einhorn & Hogarth, 1981; Johnson & Payne, 1985). Ease of use (or, more appropriately, its opposite, effort of use) may be seen as part of the cost of using the system from the user's perspective. One possible direction for future research would be to attempt to define and measure the "benefits" of use in a particular context. The small but significant direct influence of system characteristics on attitude toward using (-.16) suggests that perceived usefulness and perceived ease of use may not be the only beliefs mediating between system and attitude. This leads us to consider possible beliefs that should be added to the model.
The previous discussion has emphasized the importance of perceived usefulness, arguing that ease of use operates through this variable. Thus the model views computer usage behavior as largely extrinsically motivated, being driven by concern over performance gains and associated rewards. Malone (1981) pointed out that intrinsic motives also play an important role in determining usage of computer systems. That is, people use systems in part because they enjoy the process of using them per se (and thereby gain intrinsic reward), not just because they are being extrinsically
rewarded for the performance impacts of usage. Intrinsic motivation may be one mechanism underlying the observed direct effect of system characteristics on attitude toward using. From this perspective, an individual's affect toward using a given system would be expected to be jointly determined by the extrinsic and intrinsic rewards of using the system. A similar point is made by Turner (1984), who observes that productivity improvements may vary inversely with the quality of work life for users. Malone (1981) discusses design characteristics of systems which are expected to influence intrinsic motivation. Future research is needed to address the role of intrinsic motivation within TAM. Hence, TAM represents an evolutionary step in the development of a valid theory of user acceptance, and provides a foundation for further research that builds upon and extends TAM as necessary to account for a broader range of phenomena. Additional directions for elaboration include linking up with the literature on organizational context factors (Ein-Dor & Segev, 1978), political factors (e.g., Kling, 1980; Markus, 1983) and design methodologies (King & Rodriguez, 1981; Mason & Carey, 1983) that exert an influence on user acceptance. The interplay between such variables as top management support, user involvement, centralization and power of the MIS function, and system development methodology, and the variables in TAM is not currently well understood. Baroudi, Olson and Ives (1986) theorize that user involvement directly influences both satisfaction and usage. What are the mechanisms underlying this relationship? Robey and Farrow (1982) observe that user involvement increases user influence and conflict, which leads to greater conflict resolution. Is it possible that user involvement operates, at least partially, by affecting the objective characteristics of the resulting system?
King and Rodriguez (1981) found that participation in design affects attitudes but not usage or performance. Usage was linked to the kinds of system design inputs provided by the users. Alavi's (1984) data show that prototyping increases user involvement and improves attitudes and perceived ease of use. Alavi and Henderson (1981) show that, holding constant the objective characteristics of the system, user involvement leads to greater use. Can user involvement affect usefulness perceptions independently of system characteristics? Franz and Robey (1986) show that
involvement is linked to perceived usefulness, and that other organizational factors, such as MIS department maturity, also influence usefulness. Although the jigsaw puzzle of user acceptance remains a challenge, many pieces of the puzzle are beginning to fall into place. Future research should systematically investigate how the puzzle fits together to form an overall picture of user acceptance.

References

Ajzen, I. & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin, 84, 888-918.
Ajzen, I. & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.
Alavi, M. (1984). An assessment of the prototyping approach to information systems development. Communications of the ACM, 27, 556-563.
Alavi, M. & Henderson, J.C. (1981). An evolutionary strategy for implementing a decision support system. Management Science, 27, 1309-1323.
Bagozzi, R.P. (1981). Attitudes, intentions and behaviors: A test of some key hypotheses. Journal of Personality and Social Psychology, 41, 607-627.
Bagozzi, R.P. (1982). A field investigation of causal relations among cognitions, affect, intentions and behavior. Journal of Marketing Research, 19, 562-584.
Bagozzi, R.P. (1984). Expectancy-value attitude models: An analysis of critical measurement issues. International Journal of Research in Marketing, 1, 295-310.
Bailey, J.E. & Pearson, S.W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 530-545.
Barki, H. & Huff, S.L. (1985). Change, attitude to change, and decision support system success. Information and Management, 9, 261-268.
Baroudi, J.J., Olson, M.H. & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29, 232-238.
Barrett, G.V., Thornton, C.L. & Cabe, P.A. (1968). Human factors evaluation of a computer based information storage and retrieval system. Human Factors, 10, 431-436.
Benbasat, I. & Dexter, A.S. (1986). An investigation of the effectiveness of color and graphical presentation under varying time constraints. MIS Quarterly, March, 59-84.
Benbasat, I., Dexter, A.S. & Todd, P. (1986).
An experimental program investigating color-enhanced and graphical information presentation: An integration of the findings. Communications of the ACM, 29, 1094-1105.
Benbasat, I. & Schroeder, R.G. (1977). An experimental investigation of some MIS design variables. MIS Quarterly, 1, 37-49.
Bentler, P.M. & Speckart, G. (1981). Attitudes "cause" behaviors: A structural equation analysis. Journal of Personality and Social Psychology, 40, 226-238.
Bewley, W.L., Roberts, T.L., Schroit, D., & Verplank, W.L. (1983). Human factors testing in the design of Xerox's 8010 "Star" office workstation. CHI '83 Human Factors in Computing Systems (Boston, December 12-15, 1983), ACM, New York, 72-77.
Brosey, M. & Shneiderman, B. (1978). Two experimental comparisons of relational and hierarchical database models. International Journal of Man-Machine Studies, 10, 625-637.
Brooks, F.P. (1977). The computer scientist as "toolsmith": Studies in interactive graphics. In B. Gilchrist (Ed.), Information Processing 77. New York: Elsevier North-Holland, 625-634.
Campbell, D.T. & Fiske, D.W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Card, S.K., Moran, T.P., & Newell, A. (1980). The keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23, 396-410.
Culnan, M.J. (1983). Environmental scanning: The effects of task complexity and source accessibility on information gathering behavior. Decision Sciences, 14, 194-206.
DeSanctis, G. (1983). Expectancy theory as an explanation of voluntary use of a decision support system. Psychological Reports, 52, 247-260.
Dickson, G.W., DeSanctis, G. & McBride, D.J. (1986). Understanding the effectiveness of computer graphics for decision support: A cumulative experimental approach. Communications of the ACM, 29, 40-47.
Dickson, G.W., Senn, J.A., & Chervany, N.L. (1977). Research in management information systems: The Minnesota experiments. Management Science, 23, 913-923.
Duncan, O.D. (1975). Introduction to structural equation models. New York: Academic Press.
Ein-Dor, P. & Segev, E. (1978). Organizational context and the success of management information systems. Management Science, 10, 1064-1077.
Einhorn, H.J. & Hogarth, R.M. (1981). Behavioral decision theory: Processes of judgement and choice. Annual Review of Psychology, 32, 53-88.
Einhorn, H.J., Kleinmuntz, D.N. & Kleinmuntz, B. (1979). Linear regression and process-tracing models of judgment. Psychological Review, 86, 465-485.
Fazio, R.H. & Zanna, M.P. (1981). Direct experience and attitude-behavior consistency. Advances in Experimental Social Psychology, 14, 161-202.
Festinger, L.A. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.
Fishbein, M. & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fishbein, M. & Raven, B.H. (1962). The AB scales: An operational definition of belief and attitude. Human Relations, 15, 35-44.
Franz, C.R. & Robey, D. (1986). Organizational context, user involvement, and the usefulness of information systems.
Decision Sciences, 17, 329-356.
Fuerst, W.L. & Cheney, P.H. (1982). Factors affecting the perceived utilization of computer-based decision support systems in the oil industry. Decision Sciences, 13, 554-569.
Ghani, J.A. & Lusk, E.J. (1982). The impact of a change in representation and a change in the amount of information on decision performance. Human Systems Management, 3, 270-278.
Ginzberg, M.J. (1981). Early diagnosis of MIS implementation failure: Promising results and unanswered questions. Management Science, 27, 459-478.
Good, M. (1982). An ease of use evaluation of an integrated document processing system. CHI '83 Human Factors in Computing Systems (Boston, December 12-15, 1983), ACM, New York, 142-147.
Goodwin, N.C. (1982). Effect of interface design on usability of message handling systems. In Proceedings of the Human Factors Society (Norfolk, VA, Oct 10-14). Human Factors Society, Santa Monica, CA, 69-73.
Goodwin, N.C. (1987). Functionality and usability. Communications of the ACM, 30, 229-233.
Goslar, M.D. (1986). Capability criteria for marketing decision support systems. Journal of Management Information Systems, 3, 81-95.
Gould, J.D., Conti, J., & Hovanyecz, T. (1983). Composing letters with a simulated listening typewriter. Communications of the ACM, 26, 295-308.

Hartley, C., Brecht, M., Pagerly, P., Weeks, G., Chapanis, A. & Hoecker, D. (1977). Subjective time estimates of work tasks by office workers. Journal of Occupational Psychology, 50, 23-36.
Hauser, J.R. & Shugan, S.M. (1980). Intensity measures of consumer preference. Operations Research, 28, 279-320.
Huber, G.P. (1983). Cognitive style as a basis for MIS and DSS design: Much ado about nothing? Management Science, 29, 567-582.
Ives, B., Olson, M.H., & Baroudi, J.J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26, 785-793.
Jenkins, A.M. & Ricketts, J.A. (1979). Development of an instrument to measure user satisfaction with management information systems. Unpublished discussion paper, Indiana University, Bloomington, Indiana.
Johnson, E.J. & Payne, J.W. (1985). Effort and accuracy in choice. Management Science, 31, 395-414.
Keen, P.G.W. (1981). Value analysis: Justifying decision support systems. MIS Quarterly, 5, 1-15.
King, W.R. & Epstein, B.J. (1983). Assessing information system value: An experimental study. Decision Sciences, 14, 34-45.
King, W.R. & Rodriguez, J.I. (1981). Participative design of strategic decision support systems: An empirical assessment. Management Science, 27, 717-726.
Klein, G. & Beck, P.O. (1987). A decision aid for selecting among information system alternatives. MIS Quarterly, 177-186.
Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical research. Computing Surveys, 12, 61-110.
Land, K.C. (1973). Identification, parameter estimation, and hypothesis testing in recursive sociological models. In Goldberger, A.S. & Duncan, O.D. (Eds.), Structural equation models in the social sciences. New York: Seminar Press, 19-49.
Larcker, D.F. & Lessig, V.P. (1980). Perceived usefulness of information: A psychometric examination. Decision Sciences, 11, 121-134.
Lee, D.M.S. (1986). Usage pattern and sources of assistance for personal computer users.
MIS Quarterly, December 1986, 313-325.
Lochovsky, F.H. & Tsichritzis, D.C. (1977). User performance considerations in DBMS selection. Proceedings of the ACM SIGMOD, 128-134.
Lucas, H.C. (1975a). Why information systems fail. New York: Columbia University Press.
Lucas, H.C. (1975b). Performance and the use of an information system. Management Science, 21, 908-919.
Lucas, H.C. (1978). Unsuccessful implementation: The case of a computer-based order entry system. Decision Sciences, 9, 68-79.
Magers, C.S. (1983). An experimental evaluation of on-line help for non-programmers. CHI '83 Human Factors in Computing Systems (Boston, December 12-15, 1983), ACM, New York, 277-281.
Maish, A.M. (1979). A user's behavior toward his MIS. MIS Quarterly, 3, 39-52.
Malone, T.W. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 4, 333-369.
Markus, M.L. (1983). Power, politics, and MIS implementation. Communications of the ACM, 26, 430-444.

Mason, R.E.A. & Carey, T.T. (1983). Prototyping interactive information systems. Communications of the ACM, 26, 347-354.
Miller, L.H. (1977). A study in man-machine interaction. National Computer Conference, 409-421.
Moriarty, M.M. (1985). Design features of forecasting systems involving management judgments. Journal of Marketing Research, 22, 353-364.
Olson, J.R. (1987). Cognitive analysis of people's use of software. In Carroll, J. (Ed.), Interfacing thought: Cognitive aspects of human-computer interaction. Cambridge, MA: MIT Press.
Pindyck, R.S. & Rubinfeld, D.L. (1981). Econometric models and economic forecasts. New York: McGraw-Hill.
Poller, M.F. & Garter, S.K. (1983). A comparative study of moded and modeless text editing by experienced editor users. CHI '83 Human Factors in Computing Systems (Boston, December 12-15, 1983), ACM, New York, 166-170.
Porter, L.W. & Lawler, E.E. (1968). Managerial attitudes and performance. Homewood, IL: Dorsey.
Reimann, B.C. & Waren, A.D. (1985). User-oriented criteria for the selection of DSS software. Communications of the ACM, 166-179.
Reisner, P., Boyce, R.F. & Chamberlin, D.D. (1975). Human factors evaluation of two database query languages: SQUARE and SEQUEL. Proceedings of the National Computer Conference. Montvale, NJ: AFIPS Press, 447-452.
Roberts, T.L. & Moran, T.P. (1983). The evaluation of text editors: Methodology and empirical results. Communications of the ACM, 26, 265-283.
Robey, D. (1979). User attitudes and management information system use. Academy of Management Journal, 22, 527-538.
Robey, D. & Farrow, D. (1982). User involvement in information system development: A conflict model and empirical test. Management Science, 28, 73-85.
Robey, D. & Zeller, R.L. (1978). Factors affecting the success and failure of an information system for product quality. Interfaces, 8, 70-75.
Schewe, C.D. (1976). The management information system user: An exploratory behavioral analysis. Academy of Management Journal, 19, 577-590.
Schmidt, F.L. (1973). Implications of a measurement problem for expectancy theory research. Organizational Behavior and Human Performance, 10, 243-251.
Schultz, R.L. & Slevin, D.P. (1975). In Schultz, R.L. & Slevin, D.P. (Eds.), Implementing operations research/management science. New York: American Elsevier, 153-182.
Schoemaker, P.J.H. & Waid, C.C. (1982). An experimental comparison of different approaches to determining weights in additive utility models. Management Science, 28, 182-196.
Simon, H.A. (1981). The sciences of the artificial (Second Edition). Cambridge, MA: The MIT Press.
Srinivasan, A. (1985). Alternative measures of system effectiveness: Associations and implications. MIS Quarterly, September, 243-253.
Srinivasan, A. & Kaiser, K.M. (1987). Relationships between selected organizational factors and systems development. Communications of the ACM, 30, 556-562.
Stahl, M.J. & Harrell, A.M. (1981). Modeling effort decisions with behavioral decision theory: Toward an individual differences model of expectancy theory. Organizational Behavior and Human Performance, 27, 303-325.

Swanson, E.B. (1974). Management information systems: Appreciation and involvement. Management Science, 21, 178-188.
Swanson, E.B. (1982). Measuring user attitudes in MIS research: A review. OMEGA, 10, 157-165.
Swanson, E.B. (1987). Information channel disposition and use. Decision Sciences, 18, 131-145.
Thomas, J.C. & Gould, J.D. (1975). A psychological study of Query by Example. Proceedings of the National Computer Conference. Montvale, NJ: AFIPS Press, 439-445.
Triandis, H.C. (1977). Interpersonal behavior. Monterey, CA: Brooks/Cole.
Turner, J.A. (1984). Computer mediated work: The interplay between technology and structured jobs. Communications of the ACM, 27, 1210-1217.
Vertinsky, I., Barth, R.T. & Mitchell, V.F. (1975). A study of OR/MS implementation as a social change process. In R.L. Schultz & D.P. Slevin (Eds.), Implementing operations research/management science. New York: American Elsevier, 253-272.
Vroom, V.H. (1964). Work and motivation. New York: Wiley.
Young, T.R. (1984). The lonely micro. Datamation, 30(4), 100-114.
Zmud, R.W. (1979). Individual differences and MIS success: A review of the empirical literature. Management Science, 25, 966-979.

Appendix

Perceived Usefulness of Electronic Mail

(Each item was rated on a 7-point scale running from Strongly Agree through Neutral to Strongly Disagree; subjects circled a number from 1 to 7.)

1. Using electronic mail improves the quality of the work I do.
2. Using electronic mail gives me greater control over my work.
3. Electronic mail enables me to accomplish tasks more quickly.
4. Electronic mail supports critical aspects of my job.
5. Using electronic mail increases my productivity.
6. Using electronic mail improves my job performance.
7. Using electronic mail allows me to accomplish more work than would otherwise be possible.
8. Using electronic mail enhances my effectiveness on the job.
9. Using electronic mail makes it easier to do my job.
10. Overall, I find the electronic mail system useful in my job.

Perceived Ease of Use of Electronic Mail

(Each item was rated on a 7-point scale running from Strongly Agree through Neutral to Strongly Disagree; subjects circled a number from 1 to 7.)

1. I find the electronic mail system cumbersome to use.
2. Learning to operate the electronic mail system is easy for me.
3. Interacting with the electronic mail system is often frustrating.
4. I find it easy to get the electronic mail system to do what I want it to do.
5. The electronic mail system is rigid and inflexible to interact with.
6. It is easy for me to remember how to perform tasks using the electronic mail system.
7. Interacting with the electronic mail system requires a lot of mental effort.
8. My interaction with the electronic mail system is clear and understandable.
9. I find it takes a lot of effort to become skillful at using electronic mail.
10. Overall, I find the electronic mail system easy to use.