RESEARCH SUPPORT
SCHOOL OF BUSINESS ADMINISTRATION
JUNE 1995

CAN YOU FIND WHAT YOU'RE LOOKING FOR?: SUCCESSFUL AND UNSUCCESSFUL INFORMATION SEARCH STRATEGIES OF MANAGERS USING EXECUTIVE INFORMATION SYSTEMS

WORKING PAPER #9503-12
HANS P. BORGMAN
UNIVERSITY OF MICHIGAN


"Can you find what you're looking for?": Successful and unsuccessful information search strategies of managers using Executive Information Systems

Hans P. Borgman

Michigan Business School, University of Michigan
701 Tappan Street, Ann Arbor, MI 48109-1234
phone: (313) 764-6110, fax: (313) 763-5688
email: hborgman@umich.edu

Rotterdam School of Management, Erasmus University Rotterdam
P.O. Box 1738, 3000 DR Rotterdam, The Netherlands
phone: +31 10 4081908, fax: +31 10 452 3565
email: hborgman@fac.fbk.eur.nl

June 23, 1995


Managers' information search behavior using Executive Information Systems

Which characteristics of managers' information search behavior distinguish successful from less successful use of (executive) information systems? This article addresses this question by employing a new process-tracing approach in an exploratory laboratory study with fifty managers as participants. Building upon and extending existing theoretical models, three main factors are identified relating to information search behavior that together explain 31% of the variance in the objectively assessed outcome performance. Key characteristics are the initial use of a broad and structured 'checklist' scanning approach, followed by a limited number of strongly focused deeper investigations. An explanatory model relating the above and other characteristics of decision-making processes to various aspects of outcome performance, and its implications for both IS research and the design and use of (executive) information systems, are discussed.

1. Introduction
Over the last few years we have seen a rapid increase in access to and use of large amounts of data by managers and/or their professional staff through the use of (executive) information systems. With a focus on information search, i.e., data retrieval and manipulation, these systems aim primarily at providing support for the critical predecisional phases of situation assessment and diagnosis (Kirschenbaum, 1992; Watson, Rainer and Koh, 1991). Providing constructive guidance for the design and use of these systems requires a thorough understanding of the underlying cognitive processes. Although this need has often been recognized (see e.g., Todd & Benbasat, 1987, 1992; Ford et al., 1989), few studies that examine these process aspects have emerged in the IS literature.
Most studies of decision making and the role of IS focus on the effects of manipulating certain IS characteristics on either objective or perceived decision-making effectiveness and efficiency. Using a black-box input-output model in studying these situations enables us to "replace the decision maker

with a model" (Libby, 1981, p. 3), thereby, in principle, providing a good basis for the design of prescriptive improvements. However, the resulting mathematical model does not provide insight into how and why the underlying decision process takes place. Using an accounting example, Wright (1977) demonstrated that seemingly minor changes in decision task or decision maker can lead to radical differences between predicted (mathematical model) and actual behavior (see also Kotteman and Remus, 1990). Knowing how and why decision processes take place will allow us to develop a more robust, generalizable model as a basis for prescriptive improvement. This illustrates the need to "open the black box of decision making" (Todd & Benbasat, 1987).

This article reports on research that takes a process view to explore the characteristics of successful information search behavior for situation assessment and problem diagnosis by users of information systems. Drawing on concepts and findings from a variety of disciplines and employing an exploratory laboratory study, its primary aim is to build an explanatory model that relates information search behavior to decision-making performance. Before describing the design of the laboratory study, the next section will introduce some of the relevant concepts from behavioral decision-making theory.

2. Decision-making processes
Procedural aspects of decision making have formed a major research area within cognitive and behavioral psychology. What emerges from this (empirical) research is a fairly consistent and robust picture of the processes and shortcomings involved, a picture which is summarized in Figure 1 (our main interest is in the inner workings of the middle box and its relation with decision-making performance).

[Fig. 1 Contingencies of the decision-making process. Components: decision-maker characteristics (e.g., job title, work experience, task domain knowledge); information acquisition (depth, content, sequence); information processing & hypothesis formation (use of heuristics/strategies, occurrence of biases); decision environment (e.g., time pressure, task importance, disturbances); performance and perceived performance.]

In essence, the decision-making process itself can be described as repeated cycles of hypothesis formation, active, expectancy-guided information acquisition, and information processing (Johnson et al., 1992). These cycles are guided by the general (contingent) heuristics and biases model (Tversky and Kahneman, 1974). The contingencies deal with characteristics of the decision-making task, the decision environment and the decision maker, influencing factors such as the use of specific decision-making strategies, the selective use of heuristics and the existence and role of biases, as well as specific information search behavior (Payne, Bettman and Johnson, 1988). To give an example: the presence or absence of time pressure in preferential choice tasks causes people to accelerate their information processing, focus on only a subset of information and change their decision strategy from alternative-based (i.e., looking at different attributes of one alternative before switching to the next alternative) to attribute-based processing (ibid.). Information processing and hypothesis formation both relate to the use of heuristics and biases and can be explicated by looking at reasoning steps and patterns, the use of specific reasoning strategies and the occurrence of biases. For example, Newell and Simon (1972) found that people

deal with task complexity by factoring into subdecisions and by 'chunking' sets of related observations. For a description of (twelve) decision strategies applicable to preferential choice problems, see Svenson (1979). For an overview and critical discussion of biases and their role in decision making, see Fraser, Smith & Smith (1992). Despite the large body of research devoted to behavioral decision making, it should be noted that it predominantly addresses the 'choice' phase of decision making and that much less is known about information acquisition and situation assessment. Following Jacoby et al. (1987), information acquisition or search behavior can be characterized by depth ('how much information was accessed'), content ('just which information') or sequence ('in what order'). Kirschenbaum (1992) used the proportion of information searched as one measure of depth and found that a much higher score on proportion searched differentiated novices from experts. Apparently novices, "not knowing exactly what is important, employ a search strategy that is not well focused, but aim at examining as much information as possible" (ibid., p. 345). This study attempts to build on the above theories and frameworks (empirically based mostly on simple, highly structured and controlled 'toy' decision tasks; see the Ford et al. [1989] review) in the domain of IS decision making.

Objectives
As Johnson and Payne (1985) argue, understanding the nature of the decision strategies employed by decision makers will aid in building systems that support specific decision tasks. The contribution of the current study can then best be described as the unraveling of the factors and relations that link different aspects of information search behavior to aspects of outcome quality. These factors and relations may then form the basis for the design of experiments that attempt to establish the existence and direction of a possible causality.
Practical application of these experiments may then lead to:
- Guidelines on recommended information search behavior: should we conclude from this study that a certain 'inquiry style' (the distinguishing characteristics of typical information search behavior) correlates with better results, we may then be able to translate this style into a set of instructions for system usage. Should these instructions, in subsequent experimentation, actually lead to improved performance, they can then be published as 'recommended good practice'. 'Recommended good practice' is a term well established in areas like accounting and medicine, where the use of specific procedures, checklists or techniques is endorsed by professional organizations and published in handbooks as a standard of recommended good practice.
- Guidelines on how to provide support for situation assessment and diagnosis, i.e., the design of the IS interface: should we be able to formulate the aforementioned instructions (the 'recommended good practice'), we may then be able to translate these into specific design characteristics of the (E)IS. Checklists or specific recommended procedure sequences can be made visible and integrated into the interface style, either promoting the recommended search behavior or steering users away from pitfalls and errors.

Expected research contributions are an increased understanding of the problem recognition and diagnosis process of individual managers using an (E)IS, in relation to measures of outcome quality. Looking at how and why decision processes take place is likely to lead to a richer and more generalizable model of what goes on, and may provide a better understanding of why people are either successful, make errors, or miss important issues. It may also provide more insight into the relationship between actual performance and perceived performance (or confidence). One of the reasons that published decision-making process studies are scarce is the labor intensiveness of process-tracing methods (Todd & Benbasat, 1987).
Ericsson and Simon (1984) estimate that coding and analysis of a protocol take several hours for each minute of verbalized data, and studies using these methods are typically unable to accommodate more than about a dozen subjects (Ford et al., 1989). The experiences reported here with the development and application of a process-tracing approach with several novel elements, which enabled a set-up with 783 pilot subjects and 50 subjects for the final analysis, can therefore be seen as a methodological spin-off.

3. Method
The above objectives, the relative lack of studies focusing on process aspects of situation assessment and diagnosis, and the highly contingent nature of the decision-making process together call for an exploratory, hypothesis-generating approach with a reasonable amount of control. To this end a laboratory study was used. In selecting subjects and fixing contingencies regarding decision task and decision environment, special care was taken to incorporate as much realism as possible. Fine-tuning the decision task and validating the operational measures was done over the course of a large number of pilot tests involving a total of 783 subjects.

Subjects
Subjects for the final study were 50 middle- to senior-level managers from 45 different companies in a variety of functional areas and industry sectors: service 36%, industry 32%, public 19%, and the finance sector 13%. All reported job titles indicated either a managerial or professional staff position, with various 'consulting' type positions dominating the list (34%). Other jobs were classified as general management 26%, marketing 21%, finance 6%, information systems 4% and a 'rest' category of 9%. Reported work experience averaged 8.0 years (σ=6.3 years). Computer experience, measured as the self-reported average number of hours spent each week using a computer, averaged 14.0 hours/week (σ=9.9 hours/week). The experiment was presented to subjects as an EIS workshop/game (with a secondary research goal) and held as part of a larger evening program with invited speakers. Subjects were motivated by the competition element of the 'game' and appeared highly motivated, with many being very fanatical. The top performer in each group (there were four different sessions) received a small prize (a bottle of champagne).

Decision task
Subjects in the laboratory study were placed in the role of 'senior business analyst' at the headquarters of a hypothetical company, 'MultiLevel'.
The subjects were told that dissatisfaction at the headquarters with the financial results of one of the daughter companies in Japan (disputed

problems about product 'Defto' were mentioned) had led to the resignation of the Japanese CEO. As a replacement, a new outside CEO had been found, who had to be briefed on what to expect. In this role, the subjects were to prepare one of these briefings, concerning an analysis of the exact situation (covering both negative and positive aspects) the Japanese daughter company, 'Tano', found itself in. For this, the subjects had access to all corporate data on marketing, sales and finance of the Japanese company. These data were accessible as graphs through a specially developed computer program (referred to as an Executive Information System, following the functional classification scheme of Watson, Rainer and Koh [1991]). A total of 960 graphs was available for any combination of variable, product, region, and time period. The EIS enabled the subjects not only to access any requested graph of the data on Tano in a very user-friendly way, but also offered a continuously accessible notepad for writing down intermediate findings and an editable 'briefing' screen (see Fig. 2). Filling in this 'briefing' screen (called 'conclusions' in the program) was central to the assignment, and both oral and written instructions stressed this. The analysis was individual. The unfamiliar setting of the case (none of the subjects was expected to have any knowledge of Tano's business: the paint industry in Japan) and the breadth of the assignment ('assessing Tano's situation') practically guaranteed that the task would be perceived as unstructured. For the same reason (to avoid the use of overly simple decision-making strategies) no real data were used. Trends in the underlying data for the case were manipulated such that a balance was maintained between complexity on the one side and the unambiguity of a normative 'correct' assessment of the case on the other.

[Fig. 2 Sample screens from the laboratory task. Clockwise, starting from top left: Inquiry, Graph, Conclusion and Finding.]

Measures
As explained earlier, the contingencies regarding decision task and decision environment were controlled by fixing them at realistic levels (and checking this using, among other things, a post-test questionnaire). Computer proficiency was also measured (pre-test questionnaire) to consider a potential indirect effect, through better system use, on performance (this effect did not occur). Specific variables were selected for inclusion based on the research described in Section 2 and are presented below, following the model from Figure 1.

Decision-maker characteristics: the only non-controlled variable was 'task domain knowledge' (inferred from job title/company/industry data).

Information acquisition: depth was assessed by measuring the proportion of information retrieved during the task (labeled 'proportion searched'), taking into account repeats or overlaps between all retrieved graphs. Content was measured as the distribution of accessed graphs over the various attributes (variance of relative frequencies) and labeled 'unbalance'. Sequence was measured in four ways: (1) as the average number of dimension changes between successively retrieved graphs ('switching', with low switching signaling a systematic browsing approach and high switching a more directed hunting approach); (2) the changes in degree of 'switching' over time (signaling a 'depth first' or 'breadth first' approach); (3) the distribution (skewness) of 'proportion searched' over time (an early or late 'search intensity focus'); and (4) the percentage of successive retrievals that are grouped closely together, indicating the use of a 'chunking' strategy.

Hypothesis formation and information processing: moving from screen to screen through the program, subjects left a (logged) trace of their reasoning process. In addition, prompts asked them for explanations (comparable to a concurrent 'thinking-aloud' recording) of their intentions and actions.
The prompts were designed such that they were actually functional (e.g., incorporated into the notepad) and therefore unobtrusive to the task. Manual analysis of these traces was used to search for the occurrence of biases (following Fraser et al.'s [1992] definitions). Given the (only partially correct) lead in the case description regarding 'Defto', the overrepresentation of data points retrieved

concerning Defto ('Defto focus') was also measured as an explication of a potential confirmation bias (Ford et al., 1989). The use of heuristics/strategies was measured by looking at how often subjects decided to move their search into a new area (because of specific prompts, this could be derived from the log files). 'Inquiry grainsize' concerned the average time between successive conceptually new searches; 'inquiry openness' concerned what fraction of inquiries was defined open or broad ('I would like to know more about...') rather than narrow ('Is it true that...').

Decision-making performance: using a Delphi panel approach, three external experts developed a scoring template containing a weighted list of all issues they considered relevant. Two raters independently compared this template with each subject's final briefing, which resulted in an objective 'score', a measure of the ability to actively discover the most important issues. In addition, subjects completed a post-test multiple choice quiz that essentially measured whether they were able to passively recognize the most important issues (based on the template), resulting in an 'mc-score'. Issues reported by a subject that contradicted an issue on the experts' template were considered 'errors'. 'Potential score' reflected the score a subject could have achieved based on an optimal use of the graphs actually retrieved during the task. Important issues that were reported but for which it could be determined that the necessary underlying data were never retrieved were counted as 'unsupported findings'. 'Score', 'errors' and 'potential score' were each standardized on a 0-100 scale. 'Perceived analysis quality' was measured using post-test questionnaire items (5-point Likert scale). As an illustration of the process-tracing approach, figures 3 and 4 show a sample process trace of an individual subject.
In addition to information acquisition, log files provided insight into the reasoning process (strategies, biases, etc.) by capturing all time-stamped responses to prompts, entries to notepads, final conclusions, et cetera. Interrater reliability was high, r=0.96.
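To illustrate how measures of this kind can be derived from such log files, the following sketch computes 'proportion searched', 'switching' and 'unbalance' for a logged sequence of graph retrievals. This is hypothetical code, not the study's instrumentation; all function names, the log format and the example data are invented, and 'unbalance' is implemented under one possible reading (variance of relative access frequencies along a single dimension).

```python
# Hypothetical sketch of the information-acquisition measures; each retrieval
# is a tuple (variable, product, region, period). Names are invented.
from collections import Counter

TOTAL_GRAPHS = 960  # one graph per combination of variable, product, region, period

def proportion_searched(log):
    """Depth: fraction of the 960 available graphs retrieved at least once
    (repeats and overlaps counted only once)."""
    return len(set(log)) / TOTAL_GRAPHS

def switching(log):
    """Sequence: average number of dimension changes between successively
    retrieved graphs. Low values signal systematic browsing; high values
    signal a more directed hunting approach."""
    changes = [sum(a != b for a, b in zip(g1, g2))
               for g1, g2 in zip(log, log[1:])]
    return sum(changes) / len(changes)

def unbalance(log, dimension):
    """Content: variance of the relative frequencies with which the values
    of one dimension (e.g. product, index 1) were accessed."""
    counts = Counter(graph[dimension] for graph in log)
    freqs = [c / len(log) for c in counts.values()]
    mean = sum(freqs) / len(freqs)
    return sum((f - mean) ** 2 for f in freqs) / len(freqs)

# A tiny invented log: three retrievals, two of them about 'Defto'.
log = [("sales", "Defto", "west", 1988),
       ("sales", "Defto", "west", 1989),
       ("price", "Chrom", "east", 1989)]
```

For the invented log above, only the time period changes between the first two retrievals (one dimension change), while three dimensions change in the last step, giving an average switching of 2.0.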

[Fig. 3 Sample process trace of accessed graphs over the dimensions region (west, east, central, north), time period (1988-1990), product (e.g., Defto, Chrom, each, all) and variable (e.g., avg. sales price, sales in $).]

[Fig. 4 Sample trace of an individual subject's flow through the program (inquiry, finding, remark, conclusion) over the duration of the task. The graph at the right shows the cumulative time for each screen/activity. 'Findings' ending in a 'conclusion' are counted as 'conclusions'.]

Summarizing, our model contains a single contingency variable (task domain knowledge), nine decision-making process variables (proportion searched; unbalance; switching; depth first/breadth first; search intensity focus; chunking; occurrence of biases, including a Defto focus; inquiry grainsize; and inquiry openness) and six performance variables (score; mc-score; potential score; errors; unsupported findings and perceived analysis quality).

Procedure
Each subject was randomly assigned to his/her own PC, seated such that it was difficult to view someone else's monitor. The experimenter briefed subjects about the case and their specific task ('assessing Tano's situation'). A tutorial then introduced them to the program (using meaningless data), after which the subjects started with their task. To control time pressure, a strict time limit of one hour was used, forcing subjects to be selective in their searches. Remaining time was announced at fixed intervals. After one hour, a post-test questionnaire and a multiple choice quiz concluded the session. As to the briefing, subjects were restricted to a maximum of four (prioritized) positive and negative findings.

4. Results and discussion
The exploratory analysis approach set forth in Tukey (1977) and in Kleinbaum, Kupper and Muller (1992) was used, focusing primarily on partial multiple correlation analysis and subsequent multiple stepwise regression. Descriptive statistics are presented in Table 1.

Table 1 Descriptive statistics

variable                                    mean    std.dev.   min.     max.
DECISION-MAKER CHARACTERISTICS (control variables)
- task domain knowledge (1-5)               3.30    0.65       2.00     5.00
DECISION-MAKING PROCESS CHARACTERISTICS
depth
- proportion searched (0-1)                 0.26    0.12       0.07     0.56
content
- unbalance                                 0.10    0.03       0.03     0.20
sequence
- switching                                 1.27    0.16       1.03     1.86
- depth first/breadth first                 0.00    0.01      -0.04     0.04
- search intensity focus                   -0.12    0.75      -1.96     1.57
occurrence of biases
- total number of biases                    0.24    0.52       0.00     2.00
- Defto focus                               1.06    0.21       0.71     1.89
reasoning steps and patterns
- inquiry grainsize (minutes)               9.32    7.64       2.26     31.44
- initial inquiry openness                  0.87    0.15       0.38     1.00
DECISION-MAKING PERFORMANCE CHARACTERISTICS
performance
- score (0-100)                            58.81   16.74      18.60     90.70
- mc-score (0-100)                         61.00   14.94      19.00     91.00
- errors (0-100)                            2.36    4.94       0.00     22.00
- potential score (0-100)                  91.46    8.90      52.71    100.00
- # of unfounded conclusions                1.20    1.80       0.00     10.00
perceived performance
- perceived analysis quality (1-5)          2.93    0.92       1.00      4.67
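The forward stepwise regression used in the analysis can be sketched as follows. This is an illustrative implementation on synthetic data (all variable names and values are invented, not the study's data): at each step the predictor that most improves adjusted R² is added, and the procedure stops when no remaining predictor helps.

```python
# Sketch of forward stepwise regression on synthetic data (not the study's data).
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an OLS fit of y on the columns of X (plus intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    n, k = len(y), X1.shape[1] - 1
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(predictors, y):
    """Greedily add the predictor that most improves adjusted R^2."""
    chosen, best = [], -np.inf
    while True:
        gains = {}
        for name in predictors:
            if name in chosen:
                continue
            X = np.column_stack([predictors[n] for n in chosen + [name]])
            gains[name] = adjusted_r2(X, y)
        if not gains:
            return chosen, best
        name, value = max(gains.items(), key=lambda kv: kv[1])
        if value <= best:
            return chosen, best
        chosen.append(name)
        best = value

# Synthetic example: 50 'subjects', three invented process variables, and an
# outcome driven mainly by 'switching' and 'inquiry_openness'.
rng = np.random.default_rng(0)
n = 50
predictors = {"switching": rng.normal(size=n),
              "proportion_searched": rng.normal(size=n),
              "inquiry_openness": rng.normal(size=n)}
outcome = (0.6 * predictors["switching"]
           + 0.4 * predictors["inquiry_openness"]
           + rng.normal(scale=0.5, size=n))
chosen, r2 = forward_stepwise(predictors, outcome)
```

On this synthetic data the procedure picks up the informative predictors and reports the adjusted R² of the final model, mirroring the per-column R² values reported in Table 2.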

Subsequent analysis and interpretation will focus first on the set of decision-making performance characteristics, after which their relation with the process variables will be discussed.

Decision-making performance
Whereas potential score was almost uniformly high for all subjects (avg.=91.5, σ=8.9, meaning almost everyone at some point retrieved most of the key information), the large variance in scores indicates that retrieving the 'right' information is not sufficient to achieve success. Errors (rarely made, average 2.36 on a 0-100 scale) apparently play only a marginal role in explaining this difference (no significant correlation with score, p>0.05). Correlation analysis further shows that objective score is not significantly correlated (p>0.05) with perceived performance, a conclusion that should worry us, since 'objective' feedback on decision-making performance in real-life situations is often so delayed, indirect, distorted or even absent that decisions are likely to be guided primarily by outcome confidence. In addition, these same delays and distortions may inhibit subsequent learning from experience, thus potentially leading to structurally inferior decision making.²

Correlation analysis within the set of performance variables reveals the following interesting points:
* Score (the ability to discover issues on your own), mc-score (the ability to passively recognize issues when they are presented to you) and perceived analysis quality have no statistically significant relation with each other.
* Whereas perceived analysis quality has no statistically significant relation to either score or mc-score, it does have a statistically significant curvilinear (inverted-U) relation with potential score (R²=0.23, p<0.01). This translates to the conclusion that although retrieving more relevant 'key' information is not necessarily associated with better objective performance, it is, beyond a certain point, correlated with decreased perceived performance. A potential explanation for this may be that retrieving increasing amounts of relevant information in ill-structured situations (more subtleties are revealed) beyond a certain point leads to feelings of a lack of closure (see Cats-Baril and Huber [1987] for a related discussion).

Perceived analysis quality also has a statistically significant correlation with task domain knowledge (r=0.31, df=50, p<0.05) which, combined with the above findings, suggests that perceived analysis quality depends much more on the existence of relevant work experience (possibly boosting, or in fact inflating, self-confidence regarding performance for this task) than on how well the person actually did. The implications of these points will be taken up more closely in the next section.

² Compare Davis (1989) and Davis, Bagozzi and Warshaw (1989) for a related discussion of the relation between user acceptance of computer-based tools and users' perceptions of how much the tool improves performance.

Relating process to performance
Multiple partial regression analysis (summarized in Table 2), combined with partial correlation analysis (Fig. 4), results in the identification of a number of clear differences between successful and unsuccessful information search behavior.
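The curvilinear (inverted-U) relations reported in this study can be tested by regressing the outcome on a predictor together with a centered squared term, analogous to the '(prop. searched-0.35)²' term in Table 2. The sketch below does this on synthetic data; all names and values are invented for illustration, and a negative coefficient on the squared term indicates an inverted U.

```python
# Quadratic-regression check for an inverted-U relation, on synthetic data
# (invented values, not the study's data).
import numpy as np

rng = np.random.default_rng(1)
n = 50
potential = rng.uniform(50, 100, size=n)      # stand-in for 'potential score'
# Synthetic inverted U: perceived quality peaks at a mid-range potential score.
quality = 3 - 0.002 * (potential - 80) ** 2 + rng.normal(scale=0.3, size=n)

# Center the predictor before squaring (as with the centered term in Table 2)
# to reduce collinearity between the linear and quadratic columns.
centered = potential - potential.mean()
X = np.column_stack([np.ones(n), centered, centered ** 2])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

fitted = X @ beta
r2 = 1 - ((quality - fitted) ** 2).sum() / ((quality - quality.mean()) ** 2).sum()
```

Here `beta[2]`, the coefficient on the squared term, comes out negative, which is the signature of an inverted-U relation; `r2` plays the role of the R² reported for such relations in the text.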

Table 2 Overview of all regression results, showing the amount of explained variance (R²), F and β (p<0.05). Potential score is added as an alternative process variable, indicated by a *.

[Table body garbled; recoverable values: total adjusted R² of the regressions 0.08, 0.16, 0.11, 0.25, 0.31, n/a and 0.21, with F values 6.3, 10.3, 6.8, 9.3, 8.4 and 4.9. Significant β coefficients: task domain knowledge 0.34; proportion searched 0.42; (prop. searched-0.35)² -.38; potential score -.34*; switching 0.37, -.41 and 0.28; depth first/breadth first 0.28 and 0.29; search intensity focus 0.40; inquiry grainsize -.27; initial inquiry openness 0.33. The column headings (the predicted performance variables) are not recoverable.]

[Fig. 4 Overview of all significant partial correlations (at p<0.05 level) between the process, outcome and control variables; positive, negative and curvilinear (R²) relations are distinguished.]

Taken together, these data suggest that successful information search behavior is characterized by the combination of a focused and a breadth-first approach:

a. Focus: knowing what to look for and where to find it. The overall search activity of successful managers is characterized by large conceptual differences between successive queries (high switching), suggesting that successful searchers know what they are looking for. A directed search may also foster better processing of retrieved information, as indicated by the correlation between high switching and a small number of overlooked findings. These findings are in general consistent with the theories presented in Section 2, providing support for their extension into the less structured domain of this study. Specifically, the association found between high switching and decision-making performance is in line with the use of chunking to cope with complexity (Newell and Simon, 1972), as well as Kirschenbaum's (1992) observations regarding the use of a set strategy. The correlation between high switching and a small number

of overlooked findings builds upon the findings by Johnson et al. (1992) regarding the structure of decision-making processes for situations in which a pattern-matching approach is not possible.

b. Breadth first: working along an initial broad 'checklist' (systematic browsing) and subsequently using selective searches. The reported findings suggest that, initially, systematic browsing is used to methodically scan the data and build an overall picture. This is followed by selective search to investigate those areas that were deemed important based on the scan (together, a breadth-first approach). The 'checklist' nature of the initial stages of the search is demonstrated by the positive correlation between total outcome quality (score - errors) and initial inquiry openness, in combination with a negative correlation between initial inquiry openness and initial Defto focus. What these correlations illustrate is that poorly performing decision makers typically started their information search with a confirmatory analysis (low initial inquiry openness, meaning they preferentially selected inquiries starting with "Is it true that...?") while focusing on Defto (a strong initial Defto focus). As explained in Section 2, this strategy potentially leads to (perceived) confirmation biases. Successful information search behavior, on the other hand, was characterized by less initial Defto focus and more initial inquiry openness, implying a broad initial information search. The above is in line with the 'recommended good practice' guidelines mentioned earlier, as endorsed by professional organizations of auditors, doctors and institutions in various other sectors. These recommendations typically include the use of checklists.
The findings summarized above indicate that the use of checklists may also be beneficial to information search behavior, since checklists are likely to:
- structure the search, thereby providing an explicit frame for interpreting retrieved information;
- make sure that no important areas are overlooked;
- avoid the occurrence of a (perceived) confirmation bias.

Unsuccessful information search behavior, on the other hand, is characterized by users plunging into the decision process and employing a quantity strategy, possibly worsened (more so than with other search strategies) by poor time management.

a. 'Plunging in': this is essentially the opposite of the breadth-first strategy described above as a characteristic of successful information search behavior. A depth-first approach with a focus on confirming preconceived ideas is associated with poor decision-making performance. Plunging in is what Russo and Schoemaker (1989, p. xvi) call the "most dangerous decision trap."

b. Quantity strategy: beyond a certain point, retrieval of additional information (a large proportion searched) is indicative of reduced performance (information overload). One explanation for this is that the increased complexity revealed by additionally retrieved information causes confusion and therefore decreases performance. However, Cats-Baril and Huber (1987) found that this negative influence only applies to perceived performance and that actual performance increases with the availability of additional (possibly confusing) information. A different and more likely explanation is that people employing a quantity strategy focus more on the retrieval of information than on its processing (or simply do not have enough time left for processing because of their focus on retrieval, see [c]). Although the positive relation found between proportion searched and perceived time pressure was not statistically significant, the positive relation between proportion searched and the ability to passively recognize important findings (as measured by mc-score) does support this second explanation.

c. Poor time management: the efficient use of time may also be a factor in explaining performance differences.
The number of overlooked findings, for instance, is highest for those managers who concentrated their searches near the end of the task (a high search intensity focus), possibly because they were pressed for time. Another indicator is that the use of many smaller steps in the analysis process (a small inquiry grainsize) is a predictor of total outcome quality (score - errors).
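The three search-behavior measures named above (proportion searched, search intensity focus, and inquiry grainsize) can be illustrated with a short sketch. The formulas below are plausible reconstructions from the definitions given in the text, not the study's actual operationalizations, and the sample session data are invented:

```python
# Illustrative search-behavior metrics, reconstructed from the descriptions
# in the text. The metric names come from the study; the exact formulas and
# the sample data below are assumptions for illustration only.

def proportion_searched(retrieved_items, all_items):
    """Fraction of the available information items the user retrieved."""
    return len(set(retrieved_items)) / len(all_items)

def search_intensity_focus(query_times):
    """Mean normalized timestamp of the queries (0 = task start, 1 = end):
    values near 1.0 indicate a search concentrated near the end of the task."""
    return sum(query_times) / len(query_times)

def inquiry_grainsize(n_items_retrieved, n_queries):
    """Average number of items retrieved per query; many small steps
    yield a small grainsize."""
    return n_items_retrieved / n_queries

# Hypothetical session: 12 of 40 items retrieved in 10 queries,
# mostly late in the allotted time.
times = [0.55, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 0.92, 0.95, 0.98]
print(proportion_searched(range(12), range(40)))  # 0.3
print(round(search_intensity_focus(times), 2))    # 0.8
print(inquiry_grainsize(12, 10))                  # 1.2
```

A session like this one would score high on search intensity focus and low on grainsize, the combination the text associates with time pressure and overlooked findings.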

In the above descriptions of the characteristics of 'successful' and 'unsuccessful' search behavior, success is defined by an objective criterion. As to perceived analysis, this study points only to a relationship with task domain knowledge, a troubling conclusion that should clearly be investigated more deeply. The next section translates the above results and interpretations into possible implications for IS practice and research.

Implications and conclusions

Discussing this research's implications for managerial practice involves stretching the research, regardless of the levels of internal and external validity, beyond the limits of its exploratory design. If we cannot, by design, assess causality, we also cannot make recommendations that involve manipulations to effect a desired change. The recommendations here, therefore, should be seen as explicated hypotheses: concrete, to-be-tested procedures for improving the use and design of IS.

a. Guidelines on recommended information search behavior for problem recognition and diagnosis using an EIS. The characteristics of a successful search strategy were outlined above: initially broad and systematic browsing (a 'checklist' scanning approach), later focused on the basis of the results of that scan, with large conceptual distances between successive queries. These characteristics can be translated directly into procedural recommendations (including a concrete checklist) for users of an (E)IS (see [b]). Subsequent testing should answer the following questions: (1) does the procedural recommendation indeed change search behavior in the intended direction; (2) does it increase average performance; and (3) does it avoid systematically decreasing performance for any group of users who, if not influenced, would have followed a different search strategy?
Should the answer to each of the above questions indeed be positive, the procedural recommendations can then take the form of recommended good practice and be incorporated into training and/or handbooks.

b. Guidelines on how to provide support for problem recognition and diagnosis, i.e., the design of the (E)IS interface. Should the formulation of the aforementioned instructions (the 'recommended good practice') prove worthwhile, the instructions may then be translated into specific design characteristics of the EIS (or, for that matter, of the component of any information system where information search plays a prominent role). IS interfaces can be seen as more or less compatible with specific search strategies. Earlier this was illustrated by an example in which a particular clustering of information items in menus was suggested by Kirschenbaum (1992) as a means to facilitate 'chunking'. Assuming a 'conservation of effort' strategy (Einhorn and Hogarth, 1978; Todd and Benbasat, 1992) on the part of users, this would indeed promote the search strategy. Kirschenbaum further speculated (ibid., p. 351) that this clustering "might help develop the deeply structured schema necessary for expertlike performance," and concluded by hypothesizing "that information clustering facilitates situation understanding, and hence decision making." Whereas the above may support a selective search (primarily of value in the later part of a search), it would have no promotional influence on an initial systematic browsing strategy. One possible option is to provide checklists on paper, through training, or implemented as checkmarks in program menus listing attributes and dimensions for retrieval requests. Unchecked items then flag uncharted waters and point users in the direction of a more balanced initial search. The structure of the program may also influence users: allowing users to 'tag' retrieved information items in a first scanning round so they can return to them later, or offering them a prestructured memory aid to help them set up an agenda for their search, may also influence their behavior in the intended direction.
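The checkmark idea can be sketched in a few lines: track which attributes and dimensions a user has already queried, and surface the unchecked ones as 'uncharted waters'. The class and the item names below are hypothetical illustrations, not part of any actual EIS:

```python
# Minimal sketch of the proposed checklist-in-menu mechanism. All names
# here (ChecklistMenu, the sample menu items) are invented for illustration.

class ChecklistMenu:
    def __init__(self, items):
        # Every attribute/dimension starts out unvisited.
        self.visited = {item: False for item in items}

    def retrieve(self, item):
        # Mark the item as checked; actual data retrieval would happen here.
        self.visited[item] = True

    def unexplored(self):
        """Items not yet retrieved: the 'uncharted waters' to flag
        before the user moves on to deeper, focused analysis."""
        return [item for item, seen in self.visited.items() if not seen]

menu = ChecklistMenu(["sales by region", "margins", "inventory", "headcount"])
menu.retrieve("sales by region")
menu.retrieve("margins")
print(menu.unexplored())  # ['inventory', 'headcount']
```

A real interface would render `unexplored()` as unchecked boxes in the menu, nudging the user toward a balanced initial scan before any depth-first drill-down.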
The above implications directly address issues of (recommended) information search behavior. Several implications of this research that are less direct, but perhaps equally important, focus on the role of biases and errors as well as on the merits of the process-tracing technique employed:

Biases and errors: an important part of the research questions formulated at the outset of this study concentrates on the role of biases and errors in explaining the variance in decision-making performance. The results presented show that errors have only a minimal influence on total performance (defined as score - errors). The distance between the actual score and the theoretical maximum score of 100 is much larger (on average 41.2) than the number of errors (on average 2.4, expressed on the same scale as score). This suggests that for improving performance we should focus not so much on the avoidance of errors but more on a reduction of missed score. As Fraser et al. (1992) point out, biases occurring during a decision-making process do not necessarily have a negative influence on decision-making outcome performance (with some studies even indicating the opposite). Following this line of reasoning, biases were investigated here only as a potential source of explanation of errors that showed up in the final decision-making outcome. Given the minimal influence of errors on total performance, it logically follows that the explanatory role of biases regarding overall decision-making performance is also, at best, minimal. However, this study did not examine the role of biases as a possible explanation of overlooked findings or of missed score in general, and the results show that biases do play a potential role in explaining that crucial part. For example, it may be that biases such as selective perception and a (perceived) confirmation bias account for a large share of the overlooked findings.

Process-tracing: key to this study's data acquisition is the development and use of process-tracing techniques that are, to a large extent, automated.
Supplemented with 'manual' protocol analysis of the log files (e.g., to trace the sources of errors), this technique allows for a detailed inside view of the process that proved both veracious and efficient to obtain. It enabled a total sample size of 783 subjects over the various pilot tests and 50 managers for the final study, permitting the application of quantitative analysis techniques in addition to the mostly qualitative techniques previously possible for studies employing process-tracing.
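The score-versus-errors decomposition reported above (a theoretical maximum of 100, an average missed score of 41.2, and an average of 2.4 errors) can be made concrete in a small sketch; the averages come from the study, but the helper function itself is only an illustrative assumption:

```python
# Illustrative decomposition of outcome performance as described in the text:
# total performance = score - errors, and the gap to the theoretical maximum
# is the 'missed score'. The function is invented; the averages are the
# study's reported means.

MAX_SCORE = 100  # theoretical maximum score in the task

def decompose(score, errors):
    """Split outcome performance into total, missed score, and errors."""
    return {
        "total": score - errors,
        "missed_score": MAX_SCORE - score,
        "errors": errors,
    }

# Using the reported averages: mean missed score 41.2, mean errors 2.4.
avg = decompose(score=100 - 41.2, errors=2.4)
print(round(avg["total"], 1))         # 56.4
print(round(avg["missed_score"], 1))  # 41.2
# Errors account for only a small fraction of the shortfall from the maximum:
print(round(avg["errors"] / (avg["missed_score"] + avg["errors"]), 2))  # 0.06
```

The last line makes the paper's point numerically: roughly 6% of the average shortfall is due to errors, so interventions should target missed score rather than error avoidance.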

Many studies argue for the importance of a process-oriented approach to the study of decision-making processes, note the relative absence of such studies, and see efficiency concerns as a major reason for this absence (Todd and Benbasat, 1987). The increased efficiency that enables an extended applicability of process-tracing techniques, developed and demonstrated in this study, can therefore be seen as an important outcome.

Summarizing, this study provides descriptive and explanatory insight into the problem recognition and diagnosis process of individual managers using an (E)IS. Its result is a set of hypotheses that build toward constructive guidance for decision makers faced with similar tasks and for information systems designers who build systems to support this process. These hypotheses, presented and discussed above, embody the direct implications of this study. Testing them in a confirmatory replication study is a logical next step.

References

CATS-BARIL, W.L. and G.P. HUBER (1987). 'Decision Support Systems for ill-structured problems: an empirical study'. Decision Sciences, 18, 350-372.
EINHORN, H.J. and R.M. HOGARTH (1978). 'Confidence in Judgment: Persistence of the Illusion of Validity'. Psychological Review, 85/3, 395-416.
ERICSSON, K.A. and H.A. SIMON (1984). Protocol Analysis. Cambridge, Mass.: MIT Press.
FORD, J.K., N. SCHMITT, S.L. SCHECHTMAN, B. HULTZ and M.L. DOHERTY (1989). 'Process Tracing Methods: Contributions, Problems, and Neglected Research Questions'. Organizational Behavior and Human Decision Processes, 43, 75-117.
FRASER, J.M., P.J. SMITH and J.W. SMITH (1992). 'A Catalog of Errors'. International Journal of Man-Machine Studies, 37/3, 265-307.
JACOBY, J., J. JACCARD, A. KUSS, T. TROUTMAN and D. MAZURSKY (1987). 'New Directions in Behavioral Process Research: Implications for Social Psychology'. Journal of Experimental Social Psychology, 23, 146-175.

JOHNSON, E. and J. PAYNE (1985). 'Effort and Accuracy in Choice'. Management Science, 31/4, 395-415.
JOHNSON, P.E., S. GRAZIOLI, K. JAMAL and I.A. ZUALKERNAN (1992). 'Success and Failure in Expert Reasoning'. Organizational Behavior and Human Decision Processes, 53, 173-203.
KIRSCHENBAUM, S.S. (1992). 'Influence of experience on information-gathering strategies'. Journal of Applied Psychology, 77/3, 343-352.
KLEINBAUM, D.G., L.L. KUPPER and K.E. MULLER (1992). Applied regression analysis and other multivariable methods. Boston, Mass.: PWS-Kent.
KOTTEMAN, J.E. and W.E. REMUS (1990). 'The performance effects of what-if analysis in a production planning task'. Proceedings of Environments to Support Decision Processes. Budapest.
LIBBY, R. (1981). Accounting and Human Information Processing: Theory and Applications. Englewood Cliffs, N.J.: Prentice Hall.
NEWELL, A. and H.A. SIMON (1972). Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall.
PAYNE, J.W., J.R. BETTMAN and E.J. JOHNSON (1988). 'Adaptive strategy selection in decision making'. Journal of Experimental Psychology: Learning, Memory and Cognition.
RUSSO, J.E. and P.J.H. SCHOEMAKER (1989). Decision traps. New York: Doubleday/Currency.
SVENSON, O. (1979). 'Process descriptions of decision making'. Organizational Behavior and Human Performance, 23, 88-112.
TODD, P. and I. BENBASAT (1987). 'Process Tracing Methods in Decision Support Systems Research: Exploring the Black Box'. MIS Quarterly, 11/4, 493-512.
TODD, P. and I. BENBASAT (1992). 'The Use of Information in Decision Making: An Experimental Investigation of the Impact of Computer-Based Decision Aids'. MIS Quarterly, 16/3, 373-393.
TUKEY, J.W. (1977). Exploratory data analysis. Reading, Mass.: Addison-Wesley.
TVERSKY, A. and D. KAHNEMAN (1974). 'Judgment under uncertainty: heuristics and biases'. Science, 185, 1124-1131.

WATSON, H.J., R.K. RAINER and C.E. KOH (1991). 'Executive Information Systems: A Framework for Development and a Survey of Current Practices'. MIS Quarterly, 15/1, 13-30.
WRIGHT, P. (1977). 'Financial Information Processing: An Empirical Study'. The Accounting Review, 52/3, 676-689.
