Tools to Facilitate Transitions into Unfamiliar Writing Contexts

Priscilla S. Rogers, University of Michigan
Jone Rymer, Wayne State University

Priscilla S. Rogers
University of Michigan Business School
701 Tappan Street
Ann Arbor, MI 48109-1234
O 734-764-9779
F 734-936-6631
psr@umich.edu

Jone Rymer
3701 Middleton Drive
Ann Arbor, MI 48105
H 734-996-1521
F 734-662-6199
jone.rymer@wayne.edu

AUTHORS' NOTE: This research was partly supported by a grant from the Graduate Management Admissions Council (GMAC). We thank Jane Thomas and John Morrow for their contributions as members of our primary research team, and extend special thanks to Thomas for her contributions to the coherence tool. We also thank Linda Zaddach, Kathy Welch, and Pam Russell, who supported and facilitated this work. Thanks also to John Sherblom, who generously reviewed an earlier draft. Portions of the results were presented at the Association for Business Communication Conventions (1995, 1996, 1998) and at the Graduate Management Admissions Council Annual Conference (1995). We contributed equally to this article and have listed our names in alphabetical order.


Is writing in a new context largely dependent on "being there" over time? Can transitions into unfamiliar contexts be facilitated by diagnosing writers' performance in a type of writing they already know? With little information about transferable writing skills and no valid and reliable tools to facilitate such transitions, these questions remain unanswered. This field study identified four traits in academic essays (task, coherence, reasoning units, and error interference) that are significant for quite a different writing context, namely MBA programs in business schools. Analyzing more than 400 essays from the GMAT Analytical Writing Assessment, the research team identified transferable traits and developed tools describing them from a communicative perspective highlighting reader needs. The tools were tested through essay readings (including blind scoring), followed by consultations with 50 matriculating MBA students. Results from the readings, student surveys, and consultations showed that the tools enabled students with diverse needs to identify writing deficiencies relevant to their new writing environment. The findings suggest that these four traits and the communicative focus of the tools may be applicable to writer transitions into other unfamiliar contexts, including the workplace.

Tools to Facilitate Transitions into Unfamiliar Writing Contexts

For some years, social constructionist theory, research, and practice have demonstrated that there is no valid way of defining what makes a good writer without asking about the writing's context, discourse community, genre, and social circumstances (Faigley, 1992; Fairclough, 1993; Brown & Herndl, 1986; Rogers, 1989; Smeltzer & Fann Thomas, 1994). This robust line of research has demonstrated that — to a greater or lesser degree — writing goals, practices, processes, expectations, and standards all differ across disciplines (Bazerman, 1988; Swales, 1998), types of writing (Northey, 1990; Swales, 1990), and professional/organizational contexts (Driskill, 1989; Thornton, 1992). No wonder that individuals often have difficulty communicating with people in other groups and problems making the transition into new roles and writing contexts (Freedman, Adam, & Smart, 1994; Hill, 1992; Mathes, 1986). Indeed, research suggests that writers' competence varies as they move from one context to another (Nystrand, 1989; Webster & Ammon, 1994; Witte & Cherry, 1994). Individuals may fully learn discourse practices only after actively participating in a community; even expert writers sometimes require extended periods to gain facility in new writing contexts (Berkenkotter & Huckin, 1995; Winsor, 1996). In short, one of the current orthodoxies of our discipline holds that effective written communication is contingent on context and challenges assumptions about commonalities across writing communities (Suchan, 1998).

If writing like an expert depends in large part on "being there" for some period of time and learning the habits of a particular community, how can we help individuals move into new writing contexts? Beyond basic textual skills like writing grammatically correct sentences (Anson & Forsberg, 1990; Kent, 1993), there is little evidence regarding what kind of writing deficiencies evident in writers' familiar discourse might be relevant elsewhere — such as the writing required of students entering a professional program or of managers confronting multiple new constituencies in fast-changing businesses.

All these issues about transferable skills across contexts confronted us in a very practical way with the addition of essays to the Graduate Management Admissions Test (GMAT), the entrance examination for graduate business schools worldwide. Although the holistic scores on the GMAT essays (called the Analytical Writing Assessment or AWA) cannot be used to diagnose students' particular writing needs, our preliminary study showed that the essays themselves promised to be a rich source of information about incoming MBA students' writing abilities (Rogers & Rymer, 1996a & b). The problem was that the AWA essays represent a very different type of written discourse than students face in their MBA writing assignments, so determining what the AWA essays might tell us was neither straightforward nor simple.

The AWA essays were promoted by the Graduate Management Admissions Council (GMAC) and the Educational Testing Service (ETS) as a kind of writing relevant to MBA and business writing (GMAC, n.d.). In fact, the essays actually represent a generic kind of analytical writing associated with academe, especially papers assigned in introductory humanities courses (Rogers & Rymer, 1995a & b). Some scholars call this classroom writing "essayist literacy," describing it as rational and decontextualized text in which both writers and readers are largely invisible (Farr, 1993; Farr & Nardini, 1996). In contrast, MBA writing is highly contextualized in the culture of the business school and typically requires writers to persuade particular audiences from managerial and disciplinary perspectives, for example, as a financial analyst, human resources manager, or operations specialist (Bridgeman & Carlson, 1984; Forman & Rymer, 1999a & b; Freedman, Adam, & Smart, 1994; Rogers & Rymer, 1995a & b; Russell, 1991).

So despite some similarities as classroom analytical writing tasks that can demonstrate critical thinking ability (Ackerman, 1991, 1993; Durst, 1987; Penrose, 1992), the AWA essays and MBA writing represent quite different discourses. How, then, might the AWA essays be used to help new students transition into the discourse practices of MBA writing? More to the point, could the AWA academic essays offer an opportunity to help students identify deficiencies that might interfere with their development into effective MBA writers? This became the central question of this study.

To bridge these two contexts, we conducted a field study with writing experts and MBA students in two business schools. First, we identified traits reflected in the AWA essays that writing experts considered relevant to successful performance in MBA writing. Second, we described these traits and developed them into analytical tools that could be used to facilitate writers' transitions into the unfamiliar discourse practices of graduate business school. Results from scoring essays and student consultations show that the analytical tools are relevant to widely diverse students' development as MBA writers, useful in helping them identify and address their own potential deficiencies. In practical terms, therefore, the study provides a way for MBA programs to tap the enormous potential of the relatively unused AWA (Knight, 1999; Noll & Stowers, 1998), a writing test taken by over 200,000 students annually. In theoretical terms, the study's significance lies in the approach it explores for linking discourses, providing a way for writing experts and non-experts alike to use performance in one context and community to facilitate their development in the practices of a new world of writing.

Literature Review and Theoretical Background

Searching for a means by which to bridge these two writing contexts, we determined to develop "analytical tools" as a practical vehicle for assisting learners. Most relevant for this study, several tools have been developed and tested for both management writing and oral presentations by Rogers and her colleagues (e.g., Quinn, Hildebrandt, Rogers, & Thompson, 1991; Rogers & Hildebrandt, 1993; Rogers, 1994, 2000). Having been used with some success to assist both business students and professionals in adjusting to new communicative demands, these tools suggested procedures and prospects for tool development for this study.

To construct tools for linking AWA and MBA writing, we devised an analytic scoring method because it can evaluate specific factors or traits of writing (Purves, Gorman, & Takala, 1988). Rather than merely evaluating overall text quality like the holistic AWA score, an analytic method can offer several significant reasons for an evaluation of text quality and diagnose specific problems (Huot, 1993; Pula & Huot, 1993; Yeh, 1998). Thus, tools based on an analytic scoring method could potentially help writers to identify deficiencies in their AWA essays that might inhibit their MBA writing tasks.

Procedures for developing an analytic scoring method to evaluate specific factors of written text are well established (Wolcott with Legg, 1998). Developing an analytic scoring instrument involves two major steps: identifying the particular factors, sometimes called "traits," relevant to the goals and appropriate for the type of writing to be evaluated; and developing the scoring explanation and scale descriptors. Based on a long tradition in both research and practice, analytic scoring was championed by ETS researcher Paul Diederich in his "Five Factors" for essay writing (that is, Ideas, Form, Flavor, Mechanics, and Wording; Diederich, French, & Carlton, 1961; Diederich, 1974).

Indeed, a variety of analytic scoring schemes has been devised for many significant research projects, especially those determining what factors particularly impact overall quality in a holistic score (e.g., Connor, 1990; Connor & Lauer, 1985; Hoetker, 1982; Yeh, 1998). The foundation of some influential classroom instruction (Elbow, 1993), analytic scoring (though typically considered too expensive) has also been adapted for some large-scale assessments. The Vermont state assessment, for example, scores writing on purpose, organization, details, tone, and grammar (DeRemer, 1998).

Despite the fact that analytic scoring offers ample precedent for developing a system to evaluate individual writing traits, it presents significant problems for a study attempting to bridge two discourses. First and foremost, analytic scoring developed for one discourse type does not necessarily apply to another (Faigley, Cherry, Jolliffe, & Skinner, 1985). Considerable research shows that analytic scoring schemes must be adjusted to each new type of writing (Lloyd-Jones, 1977; Walvoord & Anderson, 1998). Indeed, the inability to span discourse types with one scheme caused the failure of perhaps the most ambitious analytic scoring assessment ever attempted, the study by the International Association for the Evaluation of Educational Achievement. Researchers eventually stopped the international study because, except for handwriting, evaluators could not consistently score writing across discourse types (Purves, 1992).

Clearly, analytic scoring has tended to focus on texts; thus, there is no model for applying a single analytic scheme to two discourse types that may define and value particular features of writing quite differently. In addition, analytic and holistic scoring alike have tended to ignore the communicative purpose of text, specifically factors of writing as they facilitate interaction between writers and readers. Not only do the scoring rubrics say little or nothing about audience, but the standard scoring technology designed to achieve interrater reliability does not permit true "reading" as we know it — that is, reading from a communicative perspective by which readers' interpretive needs for understanding actually form the basis for evaluation. Indeed, both analytic and holistic scoring methods imply a decontextualized view of writing, with evaluators acting as "raters" who narrowly assess texts according to a testing rubric rather than as "readers" who seek to understand writer intent (Haswell, 1998; Huot, 1996; Yancey, 1999). Yet we, like others (Kuriloff, 1996), wondered if focusing on the reader-writer relationship might enable us to identify writing traits of concern across discourses.

Attempting to associate and thus bridge discourses by viewing texts through the lens of "communicative interaction" (Bakhtin, 1986, p. 68), we developed and applied analytical tools in a social constructionist fashion. Spivey cogently described this point of departure, in which meaning does not reside in the text but is constructed collaboratively by writers and readers:

Writers construct meaning when they compose texts, and readers construct meaning when they understand and interpret texts.... [Readers] build a representation of meaning in response to discourse goals, using previously acquired knowledge to operate on, and to embellish, the minimal cues provided by the text.... A reader strives, as a writer does, to create a representation that is not only internally coherent, in that it makes sense and hangs together, but is also externally coherent, in that it is appropriate for the communicative context. (Spivey, 1990, pp. 256-257)

Such a genuine or authentic reader tries to understand what the writer is trying to say — that is, what the writing is seeking to communicate. An authentic reader reaches out to the writer and plays an active role, participating by filling in details from the textual cues and from acquaintance with the context. Such authentic reading does not, of course, constitute a single method for reading all types of texts; as readers we use different methods for different genres, and in our various roles we may read texts of the same genre quite differently depending on our purposes — simply using some texts as disposable drafts while engaging fully in others, including assignments written by our students whose needs we are attempting to understand. In this way, authentic reading contrasts sharply with reading for the purpose of rating a text according to a scoring scheme. Authentic reading represents the kind of reading we mean in referring to "writer-reader interaction" in this study.

In short, rather than focusing the analytic tools on the AWA essays purely as texts, we sought to identify reader-writer concerns in reading the essays that would also be relevant to readers of MBA writing. As readers of AWA essays — anticipating interaction with MBA writers in consultations about their writing deficiencies — we constructed the scoring scheme to reflect significant reader-writer concerns in both types of writing, emphasizing those factors that would elucidate needs in MBA writing.

Similarly, in applying the analytical tools, we eschewed the role of "raters" dictated by a scoring rubric and adopted the broader perspective of "reader-evaluators," that is, readers considering factors necessary for understanding the writer's intent. This kind of authentic reading relates all aspects of writing to the central rhetorical relationship; everything, from larger matters of context to surface matters of text, is contingent upon the interaction between writer and reader.

Adopting a "reader-evaluator" perspective in preference to the more limited evaluator role in constructing and using an analytic scoring system has strong support among writing assessment scholars. Many are challenging both the validity and the reliability of scoring procedures (Haswell, 1998; Hayes, Hatch, & Silk, 2000). Many are calling for new forms of assessment that go beyond the notion of the written text as a "right answer," which inevitably must constrain writers' choices as well as the reading processes of raters (Moss, 1994; Williamson, 1993). The reader-evaluator perspective is also in harmony with current assessment trends in the wider psychometric community (Broad, 1997; Huot, 1996; Moss, 1994). Theorists Guba and Lincoln (1989), for example, call for replacing the psychometric model of assessment with a decision-making process that involves sharing views with all stakeholders through discussion and negotiation, seeking to reach a relative consensus. Valuing agreement but not requiring it (with some similarities to the portfolio assessment movement) represents a "shift from a desire for the uniform replication of scoring practice to an assumed negotiation and acceptance of different readings" (Yancey, 1999, p. 493) because real readers are responding to real writers, not to texts in isolation.

Thus, during the initial development and the final testing of analytical tools in this study, we adopted the most authentic role possible, that of reader-evaluators trying to understand the student-writers and their needs. However, we also employed a more traditional rater role during the middle phase of the study. Here we rated the essays according to some standard assessment procedures, using blind scoring according to a rubric to achieve interrater agreement (White, 1994; Wolcott with Legg, 1998). We wanted to demonstrate some reliability for this research for those adhering to traditional psychometric principles, including many communication scholars and some stakeholders in business schools. Overall, our greater goal was to function as reader-evaluators, developing tools that could engage novice MBA writers in a dialogue about the demands of their new writing environment. This multi-faceted methodology appeared effective in developing tools that help writers make the transition to the new discourse practices of MBA programs.

Method

Conducted by a research team drawn from two business schools, this exploratory field study involved three overlapping stages: first, reading essays to identify the specific factors or traits of writing — that is, recurring writing problems in AWA essays that are consequential for MBA writing; second, developing analytical tools — that is, explanations of the traits together with a scoring rubric; and third, testing the analytical tools by blind scoring of the essays followed by authentic reading and consultations with MBA students.

After describing the research team, study sites, and sample data, we discuss our research process, involving trait identification, analytical tool development, and testing.

Research Team

Eight writing experts participated in this study, a primary group consisting of four communication faculty and a secondary group of four writing tutors. All had long-term appointments at one or both of the graduate business schools in this study, all were familiar with the requirements and standards of MBA writing in a wide variety of regular business classes and field studies, and all were experienced consultants on MBA writing. Conducting the bulk of the research, the primary research group consisted of two faculty colleagues working together with the two of us, the co-principal investigators. All four primary researchers had conducted local writing assessments, served as ETS raters for the GMAT AWA, and taught various MBA communication courses at one or both of the research sites. The secondary group of writing tutors participated independently by reading essays for a trait identification exercise.

As the principal investigators in this research, both of us participated in each facet of the study. We also acted as outside observers, configuring the project, studying what happened, interpreting it, and reporting it here. We acknowledge our immersion in all these activities and, as many others in the interpretive tradition have admitted, we believe in the power of subjective, informed perspectives and see our multiple roles as a source of synergy, not as a conflict of interest (Cross, 1994; Doheny-Farina, 1993; Geertz, 1983; Lincoln & Guba, 1985; Ray, 1993).

Research Sites

The research sites comprised two sharply contrasting MBA programs and business school environments that provided significant diversity in the study population. Located in large Midwestern public universities, one is a top-ranked or "elite" business school, enrolling a select few among many applicants worldwide, whereas the other is an urban "mainstream" school, enrolling full-time career employees in area businesses. The academic profile of students at the elite institution predictably showed very high GMAT scores, with a third of the class ranked in the top 5 percent on the AWA. In contrast, records of the students at the mainstream program showed both GMAT and AWA scores just slightly above the average of all test-takers.1

Sample Selection

The first two stages in the study, identifying key traits and developing the analytical tools, involved reading hundreds of AWA essay samples from the two schools. The third stage, testing the analytical tools, focused on evaluating the essays with the tools and conducting writing conferences with students at both schools. Here we describe selecting the essay sample and the student sample.

To identify the traits and develop the analytical tools, we collected an overall sample of essays at all scoring levels, that is, half-point intervals on the 6-point scale (except for the lowest AWA score of 1.0, which was unavailable at either school). The resulting sample comprised 230 sets of AWA essays (that is, 460 essays, because each GMAT test-taker writes two essays), 100 sets from the elite school and 130 from the mainstream school (which afforded more mid- and low-scoring essays). Through initial random holistic readings and discussions of these essays, the primary researchers determined that the goals of the study could best be met by concentrating on essays with mid- and low-level scores.

We found these essays revealed frequent deficiencies representing critical problems for students' MBA writing. (See Appendix A for sets of AWA essays with the GMAT holistic scores.) From the overall sample we subsequently selected four smaller, comparable samples, each consisting of approximately 12 essays with scores clustered at the middle and lower range (AWA 4.0 and below at the elite school and AWA 3.5 and below at the mainstream school). Each of these smaller essay samples was used at some point in the research process, to identify traits or to develop the analytical tools.

To test the analytical tools, we constructed a complementary sample of matriculating students from the two business schools to participate in consultations based on their AWA essays. The total sample was 50 students. After engaging eight students in pilot consultations, we invited 20 students from the elite business school and 22 from the mainstream school to participate in writing consultations. Using a complementary (rather than proportional) student sample allowed for a reasonable distribution of AWA scores, ranging from 2.5 to 4.0 with a mean score of 3.3. The student sample from the elite school reflected the AWA scores in the entering MBA class, but we weighted the mainstream sample with scores of AWA 3.0 and below, especially at the 3.0 and 2.5 levels (Table 1). This weighting ensured a sufficiently diverse group of low-scoring students whose writing would display deficiencies exemplified in all GMAT test-takers worldwide, a quarter of whom scored AWA 3.0 and below. Because only 12 percent of the students at the mainstream school had scored AWA 3.0 and below (and the elite school only 4 percent), a proportional sample of participants would have inadequately represented the lower-scoring students, the target group for our study.

[Table 1]

The complementary student sample also reflected substantial diversity by GMAT score (elite mean of 620; mainstream mean of 462), undergraduate major, gender (31 percent women), race/ethnicity (17 percent African American, the largest group), and proportion of international students (33 percent). Although the number of internationals was much higher than in the matriculating classes at either school (elite at 20 percent; mainstream at 9 percent), one would expect English as a Second Language (ESL) speakers to be disproportionately represented among the lower-scoring ranks on an English writing test.

Trait Identification

To identify the traits for building the analytical tools, the four primary researchers read, discussed, and evaluated AWA essays, defining writing deficiencies likely to impact student-writers' initiation into the new writing environment. Assuming the role of authentic readers attempting to understand writers' meaning, we read each set of sample essays aiming to answer the question: "What are the major communication problems in these essays that I would discuss with this student-writer in order to help him/her with MBA writing assignments?" After discussing and classifying our findings, we selected the most critical recurring problems in both types of writing to form the bases of analytical tools. Our goal was to develop an extensive list of communication problems and then build a consensus on those that might critically affect performance in MBA writing.

In effect, we were searching for problems in the genre of the academic analytical essay to identify potential problems in highly contrasting types of writing such as case analyses. Based on researchers' readings and discussions of many sets of essays, we developed tentative conclusions about the most significant communicative traits.

To serve as a check on our conclusions, we devised a separate analysis involving four MBA writing tutors. Following the same technique used by the primary researchers, each of these experts read ten sets of essays in the context of their tutorial obligations — that is, when reviewing the essays, they listed the major problems that they as tutors would discuss with a student to help him/her write successful MBA assignments. Operating independently, these tutors attempted to review the essays as much as possible as they would review students' writing assignments for MBA consultations (that is, determining what the student needs to do to meet the professor's assignment). After reporting their individual findings in writing (Appendix B), the tutors met as a focus group with us so that we could explore their views further.

Negotiating among the tutors' and primary researchers' conclusions, we reached a consensus on the deficiencies in the AWA essays most likely to affect MBA writing performance significantly. On the basis of strong agreement among the primary researchers and independent confirmation from the tutors, we identified three provisional areas of concern: addressing the assigned task, reasoning logically, and controlling errors. With more modest agreement, we identified two other issues, organization and clarity. After preparing descriptions of these components, the primary researchers applied them in analyzing new sample essays and eventually recast the organization and clarity descriptions into a single factor, "coherence."

Thus, we identified four essential traits relevant to the communicative contexts of AWA essays and of MBA writing: addressing the writing task, coherence, logical reasoning, and controlling error.

Interestingly, considerable assessment research verifies the validity of three of the four traits as significant components of analytical writing quality — the error and reasoning traits and, to a lesser degree, coherence. (Assessment research on the concept of task is conflicted because of the frequent discrepancies between the assignment and how the writer construes it; Cherry & Witte, 1998; Ruth & Murphy, 1988.) Control of error rivals logical reasoning as the most important predictor of overall argumentative essay writing quality (Yeh, 1998), and it is particularly salient in predicting lower-scoring papers (Haswell, 1998). Logical reasoning (the structure of argument from the Toulmin model) is the most important predictor of overall writing quality as evidenced in holistic scores of argumentative essays, while coherence is an important but lesser predictor, frequently intertwined with reasoning (Bracewell, 1998; Connor, 1990; Connor & Lauer, 1985; Crammond, 1998; Durst, Laine, Schultz, & Vilter, 1990; Yeh, 1998).

Tool Development

Based on the traits of task, coherence, reasoning, and error, we constructed each analytical tool as a separate, stand-alone sheet including an explanation of the trait as it relates to the reader-writer interaction and a standard 6-point evaluative scale with descriptors.
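To make this structure concrete, the sketch below models one such stand-alone sheet as a simple data structure. It is our illustration only: the trait summary and the scale wording are hypothetical shorthand we invented for the example, not the study's published descriptors.

```python
# Illustrative sketch only: a stand-alone analytical tool modeled as a
# trait explanation plus a 6-point scale with descriptors. The wording
# here is hypothetical shorthand, not the study's published descriptors.
from dataclasses import dataclass

@dataclass
class AnalyticalTool:
    trait: str             # e.g., "coherence"
    explanation: str       # how the trait figures in reader-writer interaction
    scale: dict[int, str]  # descriptors for scores 6 (high) through 1 (low)

coherence_tool = AnalyticalTool(
    trait="coherence",
    explanation=("Does the text hang together and make sense to its "
                 "intended readers, point to point, with little rereading?"),
    scale={
        6: "Reader proceeds without rereading; ideas connect clearly.",
        5: "Reader occasionally pauses but recovers the thread easily.",
        4: "Reader rereads in places to fill gaps the writer left.",
        3: "Reader must supply missing context to follow the ideas.",
        2: "Reader frequently loses the thread between points.",
        1: "Reader cannot reconstruct what the writer is trying to say.",
    },
)

print(coherence_tool.trait, "->", coherence_tool.scale[4])
```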

Tapping the research team's expert knowledge of students' problems in preparing MBA writing assignments, the analytical tools reflect the writer-reader interaction involved in the review of the AWA essays and anticipate the demands of MBA writing. The coherence tool, for example, describes what the writer might do (fail to provide sufficient context and cohesive devices) and what the reader might do as a consequence (perhaps reread, attempting to fill in the gaps). At first glance, the tools may appear to be a type of checklist or "text-focused" evaluation system so familiar in textbooks (Schriver, 1989), but upon closer examination it becomes apparent that they explicitly concern readers' evaluations of textual features in the process of attempting to understand the writer's message.

Guided by the research literature and a recursive process of piloting the tools in individual writing conferences, we designed the tools to be used both by writing experts (for example, instructors and consultants) and by non-expert students conducting self-assessments. Thus, we attempted to describe skills that contribute to effective performance on each trait using language that non-experts might readily understand and apply, even without the benefit of instruction. In constructing the reasoning tool, for example, we found that the MBA students in our study required some definition of "claim" and that the term "support" worked better than "data" in Toulmin's (1958) model of informal logic. Moreover, the tool simplifies Toulmin's model in accordance with recent research suggesting that the concept of warrant is difficult even for sophisticated writers to grasp and that claim and data are the essential components (Bracewell, 1998; Crammond, 1998; Yeh, 1998).

Testing the Tools

The four analytical tools were tested by scoring essays and by conducting individual student consultations. Each consultation was conducted by a member of the primary research team using a consultation protocol designed for the study. Additional data regarding the workability of the tools were obtained by anonymously surveying the students after their consultations and by recording extensive research notes immediately after each consultation.

To test the tools, the primary research team scored all students' essays "blind" — independent of fellow researchers' scores and without any knowledge of the students' identities. Thus, we followed several of the conventions in traditional writing assessments aiming for interrater reliability (White, 1994; Wolcott with Legg, 1998). However, we deliberately eschewed standard rater training and monitoring to achieve conformity. We considered that these procedures would undermine our efforts to tap the knowledge of our readers and to fully understand the needs of the student-writers (Haswell, 1998), the primary goal of this study. Moreover, since the AWA is a set of two essays, the test provided two writing samples that we used in a complementary way to interpret a writer's needs. Instead of aiming for interrater reliability on the text by scoring one of the essays, we read the two essays together, attempting to understand the writer's strengths and weaknesses on each of the tools.

To prepare for the writing consultations, each research team member rescored the essays of his/her designated students. Here we each assumed the role of an authentic reader preparing to meet our student-writers and help them diagnose their communicative strengths and weaknesses with the aid of the analytical tools. For this reading, each researcher investigated the identities of his/her individual students and reviewed pertinent background information (AWA holistic scores, GMAT test scores, native language, educational background).

Finally, each researcher considered fellow researchers' comments and scores on the individual student's essays. In light of this fuller picture, each rescored the essays of his/her assigned students. In addition, the researcher identified one or two essential tools along with scores, and constructed a "writing profile" listing the student's major writing strengths along with explicit suggestions to improve (Appendix C).

Individual Consultations

Consultations with student-writers, averaging 40 minutes, were conducted face-to-face with over two-thirds of the students (69 percent) and by telephone with the remainder, who were all full-time employees at the mainstream school. (For the telephone conferences, students received copies of their AWA essays and the analytical tools by mail so that they could review them during the writing conference.) These writing consultations were structured by a protocol that incorporated the tools in a systematic way (Appendix D). Built on research on writing tutorials, user-testing for document revision, and discourse-based interviewing (Anson, 1989; Beach, 1989; Harris, 1995; Harris & Silva, 1993; Hynds & Rubin, 1990; Odell, Goswami, & Herrington, 1983; Straub, 1996; Schriver, 1992; Walker & Elias, 1987), our consultation protocol asked students to recollect their writing of the AWA essays, connecting their recollections to the traits in the analytical tools to help them consider communication issues that might impact their writing in the new MBA environment. Writers can reliably recall the details of past writing experiences, especially if the writing was significant to them and if they review and discuss actual passages of the text (Chin, 1994; DiPardo, 1994; Green & Higgins, 1994; Smagorinsky, 1994). Indeed, most students in this field study — expressing concern or anger about their low AWA scores — readily remembered their experiences in writing the essays some months or even a year earlier.

Prodded to explore their choices in composing the essays, the students remembered much about the writing experience and articulated thoughts they may never have considered before.

Student Survey

After the writing consultations, students completed an anonymous survey. This post-consultation survey solicited information on students' attitudes toward the discussion of their essays and on whether they felt that the consultation with the analytical tools furthered their understanding of writing concerns that needed to be addressed as they entered the MBA writing environment (Appendix E). The survey response rate was 88 percent; only two students at the elite school and three at the mainstream school failed to respond.

Researcher Notes

Post-consultation researcher notes and the "writing profiles" (Appendix C) constitute the major qualitative data in this study. Ranging from 200 to 500 words of narrative and analysis, the notes describe the student as a writer and his/her participation, recount student comments and reactions to the tools, characterize the overall discussion of the essays, and analyze the session in light of the goals outlined in the consultation protocol. The writing profiles provide the researcher's observations on the student's strengths and opportunities to improve.

The Analytical Tools

This section presents the primary outcome of this study, the four analytical tools: task, coherence, reasoning units, and error interference. The tools form a logical progression for discussing the writers' and the readers' actions (that is, their respective processes in attempting to communicate).

The task tool contextualizes the communication event for writers and readers, examining what the writing is for. The coherence and reasoning tools concern writers' development of content and readers' success in comprehending it. The error tool explores the extent to which textual errors may distract the reader or hurt the writer's credibility.

The tools are presented here in their final form as tested for this study. Each tool synthesizes a significant body of research, so the references are gathered in a note with each figure. In addition, a revised generic version of the task tool — developed for use in MBA courses, executive education, and company training programs — is provided in Appendix F. (The task tool for the study was tailored to the AWA argument essay, so it is not suited to other applications.)

Task Tool

The task tool (Figure 1) treats the text as the writer's action in response to a social need involving reader expectations and situational obligations.2 The task tool highlights the writer's responsibility for figuring out what the writing is for and what the readers expect — whether the assignment is spelled out (as in the AWA), described somewhat vaguely (as in some MBA assignments), or left up to the writer to determine. Fulfilling the task involves determining the genre required, what to say and how to say it, and how to address reader expectations. The task for the AWA argument essay, for instance, is narrowly academic, asking a writer to analyze the logical structure of the argument, critiquing major flaws. In contrast, the overriding task for much MBA writing is to pinpoint a critical business problem, analyze it, and recommend a feasible solution for the circumstances.

[Figure 1]

The task tool can intensify writers' awareness that writing is interactive, involving readers who come to a piece of writing with certain expectations and requirements. Indeed, the tool explicitly acknowledges that there may be some discrepancy between the assigned task and the writer's interpretation of it. An AWA assessor evaluates the extent to which an essay critiques the particular argument statement in the test; an MBA professor determines if a student develops logical solutions to a business case by using the particular theories recently learned in class. Thus, "writing to the task," as we call it, involves both negotiating and fulfilling the needs and expectations of readers.

The task tool further suggests that the nature of tasks and reader expectations can differ radically across contexts; writers cannot assume that choices appropriate for one situation will be applicable to another. Misunderstanding the nature of the task of critique can influence AWA writing in significant ways, as we have seen in reading hundreds of essays. For example, some GMAT test-takers, interpreting the writing task as a request to demonstrate their knowledge of Standard English, focus on crafting a syntactically and grammatically correct response without any solid analysis. Others revert to their own workplace writing experiences or impose their own expectations for a workplace audience on the AWA. Instead of writing an academic essay that critiques the argument, they recast the argument as a business problem and propose a practical solution for management. Applying the task tool to the AWA argument essay thus provides the perfect opportunity to discuss different writing contexts with very different audiences and contrasting expectations for writing.

Coherence Tool

The coherence tool (Figure 2) examines whether the writing makes sense to the intended reader.3 A coherent text allows the reader to proceed with little or no re-reading, successfully interpreting what the writer is trying to say, point to point. Without explicitly mentioning psycholinguistic principles such as the "given-new contract," the coherence tool makes it clear that understanding the text depends upon shared writer-reader knowledge. According to the tool, the fundamental determinant of a text's coherence resides in the simple question: Does it hang together and make sense to its readers?

Despite this seeming simplicity, the coherence tool is very ambitious in scope and basic assumptions. Combining the discourse-structuring concept of coherence and the linguistic devices of cohesion, the tool covers skills in developing a thesis, organizing content, constructing paragraphs, and applying relevant grammatical rules on pronoun references. The basic assumptions challenged by the coherence tool are also ambitious, especially the deeply held belief that clarity is a property of the text itself rather than of communicative interaction. For example, the tool asserts that the effectiveness of cohesive ties depends on the writer-reader relationship and communication rather than on rules. Even though several well-known linguistic measures value frequent use of cohesive devices, large numbers of devices may not correlate with high coherence. In fact, experts' highly admired writing may display relatively few cohesive devices, while novices' awkward texts may be loaded with them. In short, the tool specifies that coherence truly depends on the writer's effective communication with the intended reader.

[Figure 2]

Derived from this communicative perspective, the coherence tool depends less upon writers' abilities to use textual devices than upon their skill in creating text for the context, appropriating sufficient textual features to make the text accessible to intended readers. As the coherence tool specifies, cohesive devices that connect one item to the next (such as transitions, references, repetition, and substitutions) and various types of metadiscourse (features that offer commentary on the text itself) may contribute to coherence, but the ultimate test is how readily the reader can comprehend the continuity of the writer's ideas, sentence to sentence, and the relevance and relationship of the text to the context ("Oh, yes," the reader may recall, "this comment refers to that meeting we had on Tuesday."). From this perspective, achieving coherence depends as much on what readers infer from the text as on what the text explicitly says. Accurately predicting reader inferences is critical to achieving coherence.

Taking this reader-based view, the coherence tool presents writing as highly dependent on the writing context, including the discourse practices and rules of the community and the type of writing involved. Indeed, what might be coherent to one set of readers in one context could be unintelligible to those in other contexts. Moreover, individual readers have variable standards and criteria for judging a text's coherence; they apply different sets of standards and criteria in different circumstances, partly on the basis of their individual preferences and partly because of requirements in the various communities to which they belong.

The coherence tool specifies that the writer's responsibility is to create a text specifically for those who need to comprehend it. This requires learning the conventions and rules for communicating, including what kinds of expectations readers have and what they know, as well as how they typically use and read texts of various kinds. For example, in writing case analyses, novice MBA students must learn how to use the case to develop their analysis without retelling the story. All in all, the coherence tool represents an acknowledgement that there is no reliable method for fully explaining why readers may find one text coherent and another not; indeed, readers' responses to a text represent the only reliable method for evaluating coherence.

Reasoning Units Tool

Evaluating effectiveness from a communicative perspective, the reasoning units tool (Figure 3) posits that a unit of discourse is not logical in and of itself but only as it has validity for an intended reader in a particular context.4 The reasoning tool simplifies a model of informal logic that defines a unit of discourse as consisting of two irreducible components: claim (a conclusion the writer puts forward) and support (data and explanation for the reader). The tool focuses on these two components without explicitly discussing "warrants" that show connections between them because, as noted earlier, warrants are difficult even for experts to identify. Still, the concept of warrant is implied by the tool's name, "reasoning unit," and its description. Clearly, to be a "unit," a claim and its support must be logically joined together.

[Figure 3]

According to the reasoning tool, the credibility and sufficiency of claims and support, and the adequacy of the links used to form them into units, all depend on reader interpretation and the communicative context. Readers distinguish claims from supporting evidence even in lengthy text and make decisions regarding the validity of claims based on the kind, relevance, and amount of supporting evidence provided. Moreover, claims and support vary in nature, density, and use — all depending on the writer's knowledge of and relationship with the reader and the expectations inherent in the particular writing situation. For example, test evaluators reading an AWA essay for the argument question expect to find claims in the form of writers' personal opinions, observations, and knowledge of history and current events. In contrast, professors reading MBA students' field study projects expect claims-as-recommendations supported with disciplinary principles, models, extensive statistical analyses, and various calculations. In these and other writing contexts, readers have very specific assumptions and expectations that writers must consider to compose units of discourse that are sufficient and credible.

In contrast with the coherence tool, the reasoning tool tends to be user-friendly, even for novices. The model of informal logic with claim and support is straightforward and well known. For writing experts, claim and data have long been used to evaluate writing quality, both in conjunction with holistic evaluation (e.g., criteria for the AWA issue question include "development of a position with insightful reasons and/or persuasive examples") and as the basis for analytical tools focusing on traits of discourse ranging from English essays of international students to managerial memoranda.

Error Interference Tool

The error interference tool (Figure 4) takes the communicative perspective that errors are not simply "there" in the text, objective phenomena that can be counted reliably.5 Instead, errors are defined by the extent to which they interfere with communication between the writer and reader. The error tool assumes that when reading for meaning, most readers neither read for nor perceive errors unless the errors are severe and/or frequent, or unless (like writing instructors and editors) they expect to find errors. The error tool encourages users to read for meaning, noting errors only as they impede the interpretive process. In other words, the error tool discourages hunting for errors. Instead it asks users to assess whether textual features inhibit the writer's efforts to communicate with the reader.

[Figure 4]

By confronting the whole notion of error as a dynamic component in the writer-reader interaction, the error tool aims to strike a balance between the "cult of correctness" and the notion that textual distractions do not matter. Although there is no consensus among language style panels or researchers on how to define or measure error, errors can be usefully classified for writers by how potentially disruptive they may be to readers' attempts to relate to the writer and to understand his/her meaning. The error tool discriminates among errors' disruptive power by suggesting that some textual transgressions might receive no notice from target readers; others may distract readers, affecting the writer's credibility; others may result in misreadings; still others may disrupt the reading process altogether, causing the reader to wonder what the writer meant to say.

Classifying errors by their intrusion on the reading process underscores the idea that writers must adapt their editing procedures to different contexts and readers. Business and content-area academic readers, for example, may respond to fewer errors than writing instructors. In addition to providing a communicative perspective for evaluating errors, the error tool also suggests the value of classifying errors to diagnose the writer's developmental needs. For example, error patterns may demonstrate a writer's current stage in learning to control textual conventions and thus furnish an individual learning agenda. In contrast, random textual errors or "mistakes" resulting from drafting under pressure or inadequate editing may be within the writer's power to correct. Indeed, many textual conventions traditionally classified as "errors" may be perceived as errors by only an elite few, particularly many usage and mechanical violations of "Standard English." Devaluing such errors can attenuate some writers' anxieties about producing error-free text, helping them focus on severe and critical errors while concentrating on the communicative issues. In effect, the error tool attempts "to put error in its place," relegating error control to an editing skill that matters as it impedes communication or damages the writer's credibility.

Results from Testing the Tools

This section reports quantitative and qualitative results from testing the tools. Blind scoring of the AWA essays with the tools, post-consultation survey data, and evidence from primary researchers' consultation notes suggest that the tools are effective in facilitating writers' transitions into a new writing context.

Blind Scoring

Data from blind scoring of the essays show that all of the analytical tools function effectively according to standard assessment procedures. Each of the tools correlates fairly well with the holistic score on the AWA essays, but none correlates highly (see Table 2). This result suggests that the analytical tools identify components of writing distinct from overall writing ability.

[Table 2]

Blind scoring also shows that the analytical tools discriminate among the four traits of writing. The correlations in Table 3 indicate that error correlates only weakly with task and reasoning, whereas it correlates reasonably with coherence — not unexpected, because errors can weaken cohesion. These results suggest that the error tool represents a fairly distinct textual skill and that error-free writing is not construed as effective writing. This is significant because research shows that error-free text tends to have a halo effect on readers' evaluations, raising their scores on overall quality; conversely, error-ridden text tends to have a negative effect on readers' impressions, lowering their scores on a text's overall quality (Bloom, 1956; Haswell, 1998; Raforth & Rubin, 1984).

[Table 3]

Apart from error, the task, coherence, and reasoning tools correlate quite highly with each other, but none is identical to any other. These results suggest that each of these tools does, at least to a limited degree, differentiate a distinctive component of writing. The relatively high correlations are not surprising; considerable research shows that evaluators cannot easily differentiate such traits, particularly those as closely related as logical reasoning and coherence (Durst et al., 1990; Raforth & Rubin, 1984; Witte & Flach, 1994; Yeh, 1998).
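For readers who wish to run this kind of analysis on their own scoring data, the sketch below computes tool-to-holistic and tool-to-tool correlations of the sort summarized in Tables 2 and 3. It is a hypothetical illustration: the score arrays are invented, and Pearson's r is an assumption, since the study does not name the statistic used.

```python
# Hypothetical illustration of the correlational analysis behind Tables 2
# and 3: invented analytic-tool scores for a handful of essay sets,
# correlated with each other and with the holistic AWA score.
# Pearson's r is an assumption; the study does not specify the statistic.
from itertools import combinations
from scipy.stats import pearsonr

scores = {  # one invented 6-point score per essay set, per tool
    "task":      [4, 3, 5, 2, 4, 3, 5, 2],
    "coherence": [4, 2, 5, 3, 4, 3, 4, 2],
    "reasoning": [3, 3, 5, 2, 4, 2, 5, 3],
    "error":     [5, 2, 4, 4, 3, 5, 4, 3],
}
holistic = [3.5, 2.5, 5.0, 2.5, 4.0, 3.0, 4.5, 2.5]  # invented AWA scores

# Table 2 analogue: each tool against the holistic score.
for tool, vals in scores.items():
    r, _ = pearsonr(vals, holistic)
    print(f"{tool:9s} vs holistic: r = {r:+.2f}")

# Table 3 analogue: each pair of tools against each other.
for a, b in combinations(scores, 2):
    r, _ = pearsonr(scores[a], scores[b])
    print(f"{a:9s} vs {b:9s}: r = {r:+.2f}")
```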

Furthermore, interrater reliability of researchers' scores for this study demonstrates that each tool comprises a distinct writing trait and discriminates among levels of performance relevant to it (Table 4). Confirming previous research on scoring error, the highest agreement among raters was achieved on the error tool (Haswell, 1998). Whereas the scoring agreement on the task and reasoning tools is decidedly "fair to good" (according to Fleiss's [1981] interpretation of Cohen's Kappa), agreement is just above the level of "poor" for coherence.6 Still, the interrater reliability is comparable to or better than rates achieved in many analytical assessments (DeRemer, 1998; and see Hayes et al., 2000). Indeed, the rates of agreement are rather high considering that readers were not trained or monitored to achieve conformity in their judgments and that they scored the two AWA essays as a unit (a more complex task than in a conventional assessment, in which readers score each sample separately).

[Table 4]
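The sketch below shows how agreement of this kind can be computed and interpreted against Fleiss's benchmarks. It is illustrative only: the two raters' score vectors are invented, and we assume the conventional unweighted Cohen's Kappa as implemented in scikit-learn, which may differ from the study's exact computation.

```python
# Illustrative sketch: Cohen's Kappa for two raters' 6-point tool scores,
# interpreted against Fleiss's (1981) benchmarks cited in the text.
# The rating vectors are invented; the study's computation may differ.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 5, 2, 4, 3, 5, 2, 3, 4]  # hypothetical scores, one per essay set
rater_b = [4, 3, 4, 2, 4, 2, 5, 2, 3, 5]

kappa = cohen_kappa_score(rater_a, rater_b)

# Fleiss's rule of thumb: below .40 poor, .40-.75 fair to good, above .75 excellent.
if kappa < 0.40:
    label = "poor"
elif kappa <= 0.75:
    label = "fair to good"
else:
    label = "excellent"

print(f"kappa = {kappa:.2f} ({label})")
```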

Scoring as Reader-Evaluators

Subsequent to the blind scoring, each researcher prepared for the writing consultations by rescoring the essays as an authentic reader-evaluator, taking into account background information about the student as well as the blind scores. The resulting scores correlate quite highly with the original blind scores, but less so than might be expected were the goal of the reader-evaluators simply to achieve interrater agreement. As Table 5 shows, although the individual readers' scores on all tools except reasoning correlate with the blind scores in the "excellent" range (according to Fleiss, 1981), none is higher than .85. Clearly, in reading and rescoring the essays prior to the writing consultations, the researchers did do something different in attempting to respond authentically to the full individual and anticipate his/her needs.

[Table 5]

Student Post-consultation Survey

Post-consultation survey data demonstrate that the student-writers involved in this study found that the consultations with the analytical tools enhanced their learning about writing in the MBA environment. Overall, the students explicitly said that they found the consultation worthwhile (mean = 4.6 on the 5-point Likert-type scale), and their answers to most of the other survey questions supported this view (Table 6).

[Table 6]

Responses on several questions show that students believed the consultations helped them begin to differentiate the academic writing represented by the AWA essays from the MBA writing they were beginning to experience in their business school classes. In effect, their survey responses ratify the notion that the traits presented in the analytical tools can be applied in two different discourses. For example, most students agreed strongly that some of the problems they were experiencing with their MBA assignments had been identified by discussing their AWA essays (mean = 4.7). Most also acknowledged that the consultations went another step by helping them learn ways to go about improving some problem areas that would be important for their MBA studies (mean = 4.4). Though with somewhat less enthusiasm, students also affirmed the value of the analytical tools — both for learning about their problem areas in the AWA essays and for understanding how to improve for their MBA writing assignments (mean = 4.2).

Finally, about three-quarters of the students agreed that as a result of their participation in the consultations, they felt "more confident about doing writing assignments in the MBA program" (mean = 3.8 at the elite school; 4.2 at the mainstream school). Since lack of confidence can undermine writing performance (Pajares & Johnson, 1994), any claimed increase in confidence is beneficial.

Notably, post-consultation survey results were quite similar between the two business school sites for this study. Mean responses to four survey items directly concerning learning about writing (getting answers to their questions, learning about their problems, learning how to improve, and finding value in the consultation) were 4.61 (n = 18, SD = .40) at the elite school and 4.63 (n = 19, SD = .46) at the mainstream school, a nonsignificant difference, t(35) = -.14, p = .88.
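As a check on this comparison, the sketch below recomputes the two-sample t test directly from the summary statistics reported above; the equal-variance (pooled) form is an assumption on our part, though it is consistent with the reported 35 degrees of freedom.

```python
# Recomputing the reported between-school comparison from the summary
# statistics in the text: elite school mean 4.61 (n = 18, SD = .40),
# mainstream school mean 4.63 (n = 19, SD = .46). An equal-variance
# (pooled) t test is assumed, consistent with the reported df = 35.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=4.61, std1=0.40, nobs1=18,   # elite school
    mean2=4.63, std2=0.46, nobs2=19,   # mainstream school
    equal_var=True,
)

# t matches the reported t(35) = -.14; p comes out near the reported .88.
print(f"t(35) = {t:.2f}, p = {p:.2f}")
```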
Researcher Consultation Notes

The consultation notes comprise the primary researchers' attempts to record students' reactions to the consultations and the tools so that they are available for review (Chin, 1994; Cross, 1994; Smagorinsky, 1994). Researcher notes of meetings with individual students strongly suggest that the analytical tools facilitate a collaborative discussion around issues of consequence for student-writers making the transition into the new writing context. Indeed, these notes show that the traits presented in the tools proved significant not only in regard to the graduate business school environment but also in regard to the students' various workplace environments (Table 7).
[Table 7]

The students in our sample typically found the analytical tools relevant to many of their individual writing concerns and meaningful for evaluating their writing in unfamiliar contexts, both in the business school and at work. While some of the concepts presented in the tools were entirely new to them, researcher notes indicate that students were able to use the tools with little or no instruction. Indeed, the tools prompted students to apply the traits to their current writing tasks, to raise questions about recurring problems they had experienced or about possible alternatives to their writing habits, and to draw conclusions about the usefulness of the tools for evaluating their MBA and workplace writing.

Discussion

This study demonstrates that students find the analytical tools helpful in analyzing their AWA essays and can readily apply the tools to their MBA assignments. Indeed, the tools facilitate students' discussion and personal assessment of their own writing, enabling them to recognize significant opportunities to improve their writing in the unfamiliar context of the MBA program. Although our student sample size is too small to generalize from the results, the schools in this exploratory field study do represent sharply contrasting educational contexts, and the participating students represent a wide range of overall academic abilities and writing deficiencies. Therefore, the results may be applicable to quite diverse contexts and users.

In practical terms, the tools offer an opportunity for MBA programs worldwide to incorporate the AWA into their communications curricula, taking advantage of this largely
ignored resource. The task, reasoning units, and error interference tools work effectively to assist student writers transitioning into the new writing context of the business school, and the coherence tool works reasonably well. The correlational data suggest that the tools represent highly interrelated traits in terms of overall text quality, yet each constitutes a distinctive concern. This allows the tools to function as an interactive set. Moreover, the isolation of error in a separate tool attenuates evaluators' conventional tendency to be unduly influenced by errors when judging other components of writing (Haswell, 1998), and thereby frees readers to focus, say, on the development of reasoning units. Indeed, the power of the tools as a set derives partly from the limited array and composition of the traits. Since the tools were developed to reflect traits in academic essays relevant to MBA writing, they do not explicitly address some important requirements for business-school writing (such as visual formatting), nor do they cover criteria that are part of the AWA evaluation (such as syntactical variety, style, and diction; GMAC, n.d.).

In more theoretical terms, the study suggests that the tools may be applicable to writers making other transitions besides the move from the AWA academic essays to MBA writing. As the researcher consultation notes show, many students participating in this study related the tools to their own workplace writing experiences. Subsequently, we have extended these applications to other genres and contexts (see the pedagogical section below). Moreover, given the communicative perspective governing the tools, the particular traits are likely to be of central significance to many discourse practices, including those in the workplace. Rather than focusing on textual components, the tools raise such matters
as they are relevant to the writer-reader dialogue: the task tool focuses on whether the writer fulfills the assigned task for readers and the situation; the coherence tool on whether writers present a message that "hangs together" and makes sense for readers; the reasoning tool on whether writers develop claims and support that readers find logical and credible; the error tool on whether writers control errors sufficiently so that readers are not disrupted in trying to grasp the meaning. In effect, viewed through the lens of "communicative interaction," the traits may be applied in quite different ways to various writing contexts. This focus on the communicative activity shifts attention to observations regarding what the writer and the reader do with the text. The communicative focus enables us to use the familiar writing of the academic essay to help student-writers identify potential problems in the unfamiliar writing of MBA programs and, informally, in their workplace writing as well. As Witte argues, studies of writing in particular settings cannot be limited "to the study of printed and spoken linguistic utterances" (1992, p. 240). Discussing Toulmin's claim-data structure for logical argument, Bracewell explains that the "terms refer not to characteristics of an argument text, but to how information functions in the text, and as such implicate interpretive processes by the reader" (1998, p. 153), and, we might add, they implicate processes by the writer as well.

One might further argue that the writer-reader dynamic bridges different genres, contexts, and discourse communities. As numerous studies have shown, for example, argument structure can be appropriated to very specific disciplinary demands and to practices of diverse discourse communities; what is convincing to an attorney, for
example, may be quite unconvincing to an engineer (Miller & Selzer, 1985; Spilka, 1990; Winsor, 1996). Focusing on reader needs, the reasoning tool accounts for such contextual variation. Similarly, coherence and error may be perceived quite differently by management professors, their peers in English departments, or managers (Hairston, 1981, 1992; Leonard & Gilsdorf, 1990, 1999). Construing the task, reasoning logically, and developing coherent and relatively error-free text all matter to these readers across the board, although each reader is likely to perceive and operationalize each trait quite differently from context to context. By focusing on reader needs, the tools allow for such differences; they can be variously applied.

Implications for Research

As an exploratory study, this project suggests a variety of research opportunities involving the individual tools, the tool set, and the larger notion of transcending writing contexts. Of the four tools, task and coherence cry out for investigation. Since the task tool was designed to focus on the argument essay in the AWA, we adapted it into a generic, analytic task tool for use in classrooms and training programs (Appendix F). While we find that the generic version works well for us, it has not been tested. In addition, we found the task tool somewhat difficult to operationalize. Not only does it embrace a wide array of writing skills (including audience analysis), but its focal point lies in that often hidden factor in evaluating writing: the potential discrepancy between the assigned writing task and the notion of "task" that writers individually construe for themselves (Cherry & Witte, 1998; Ruth & Murphy, 1988). In short, rather than assuming that all writers "get" the same supposedly real assignment, the tool evaluates writers' interpretations of the assigned task.

The coherence tool could also benefit from further exploration. Our scoring data on the coherence tool are not as complete as the data collected for the rest of the tools because
this tool was derived later from two earlier drafts and thus tested on a smaller sample. Embracing both coherence and cohesion in one tool also means that this tool covers a very wide range of skills. To focus non-expert users on essential matters, we omitted a number of concepts, including the "given-new contract" (see Kent, 1984). After considerable experience using the coherence tool, however, we believe that the notion of presenting familiar before unfamiliar information would enrich this tool considerably, and we have added this concept to the version we use in our own programs.

Regarding the tools as a set, we know that after using them students feel more confident that they can succeed in the unfamiliar writing environment of the business school. Unpublished exit assessment results at the elite school further indicate that the tools actually work to help students with their MBA writing. Determining the extent to which the tools facilitate writer transitions into the new environment would require follow-up research tracing student progress to see if, in fact, the tools contributed to their writing performance in the MBA program. Furthermore, we have presented the tools at various conferences and distributed them along with the program we developed for GMAC (Rogers & Rymer, 1996c, 1996d), but we have only anecdotal evidence regarding how the tools are being applied in other business schools. Thus, some sort of follow-up survey might prove informative. Most significant, future research might explore the applicability of the traits to other transitions and the notion that some "fundamental traits" might transcend contexts, genres, and media.

Implications for Pedagogy

Our experience using the analytical tools for writing consultations (Rogers & Rymer, 1996d), classroom exercises, and exit assessments suggests that the analytical tools can be used successfully by non-experts alone or with the guidance of a writing expert.
The tools can be employed for individual goal setting and revising of writing as well as for training activities, writing consultations, and writing assessment programs. For all such applications, we have found that the tools heighten writer awareness of how interaction with the reader affects all aspects of writing. Indeed, the tools become a means by which to impress upon writers that they are obligated to consider reader and situational needs.

Scoring the AWA essays to diagnose individual writers' potential problems in MBA writing is an obvious application of the analytical tools.8 The Eccles School at the University of Utah has used the tools for writing consultations during MBA orientation (Walker, 2000), as does the University of Michigan Business School, where the tools are also employed for follow-up assessments that allow students to measure their improvement. In a course required of all matriculating MBAs who score less than 3.5 on the AWA, students at Wayne State University assess their own GMAT essays with the analytical tools and write a memo to set goals for improving their MBA writing.

Although scoring activities of this kind are useful, we have found that the actual analytical scores become much less important than what can be learned via the process of scoring. Scoring sessions involving comparison and discussion can dramatically confront writers with the necessity of considering how and why readers respond as they do. If, for example, peers score one piece of writing higher than another, the writer with the lower score invariably wants to discuss why; peers typically draw concepts from the tools to explain their judgments.

Comparative analysis of one's own and others' scores is also very productive. For example, we frequently ask individuals to score their own writing on the tools and,
simultaneously, have the same piece of writing scored blind by another colleague, class member, writing consultant, or instructor. When the blind scoring process is completed, we compare writer scores with those of the other evaluator, noting the degree of agreement as the basis for follow-up discussion and evaluation. (It is sometimes helpful if this blind scoring is preceded by a brief introduction to the tools, along with sample papers for practice scoring.)

Another, more ambitious use of comparative scoring involves pre- and post-training evaluations. Simply put, the tools are used to evaluate a writing task at the beginning of training and, subsequently, to evaluate a similar task near the end. The resulting pre-post comparison provides writers with some evidence of improvement on specific aspects of their writing; it can also accentuate areas where continued attention may assist future development. Indeed, using comparative scoring in our classes over the last several years, we have found that scores tend to improve, and in those few instances where scores stayed the same or fell, follow-up discussion, typically instigated by the writer anxious to know why, contributed to the learning process. In this way, the tools become a benchmarking mechanism, yielding not only descriptive but also quantitative results, something business school students relish and have even been known to use in job interviews as "proof" of writing ability.

Conclusion

We regard the tools as reflective and planning devices as well as instructional and evaluative ones: summations of communicative concerns that are as valuable for guiding the decisions of an individual writer as for assisting an instructor in determining writer
needs. In other words, although the tools can be applied to diagnose specific writing problems in holistically scored texts (which may be of interest to assessment practitioners and researchers), our experience in using the tools in training reveals that they support a wide range of writing activities, from individual writer decision-making to collaborative analyses. Even more significantly, the analytical tools are, in effect, "tools" with the potential to enable learning across discourses. They may further understanding about the communicative core of writing across contexts, namely, the dialogue between writers and readers. The tools then can become a basis for discussions about the specific goals, expectations, and standards particular to a situation, enabling a conversation about communicative issues that may prove consequential for writing in unfamiliar contexts.

Notes

1. This study was conducted in the 1995-96 academic year and includes AWA scores mainly from March and June 1995. The mean GMAT score of the entering daytime class at the elite school was 642, and the mean AWA score was 4.4 (with 6.0 being the top score). The mean GMAT score at the mainstream school was 512, and the mean AWA score was 3.9. (The mean GMAT score for all test-takers was 503, and the mean AWA score was 3.8.)

2. Literature consulted in developing the task tool included the following: Bitzer, 1968; Berkenkotter & Huckin, 1995; Cherry & Witte, 1998; Couture & Rymer, 1993; Farr, 1993; Flower & Hayes, 1980; Forman & Rymer, 1999a, 1999b; Miller, 1984; Penrose, 1992; Ruth & Murphy, 1988; Swales, 1990; Witte & Cherry, 1994; Witte & Flach, 1994.

3. Literature consulted in developing the coherence tool included the following: Bamberg, 1983; Barton, 1995; Cheng & Steffensen, 1996; Clark & Haviland, 1977; Connor, 1990; Connor & Johns, 1990; Connor & Lauer, 1985; Durst et al., 1990; Fairclough, 1993; Forman & Rymer, 1999; Halliday & Hasan, 1976; Haswell, 1989; Huckin, 1992; Kent, 1984; Kolln, 1996; Lorch & O'Brien, 1995; Markels, 1984; Thomas, 1995; Williams, 1981; Yeh, 1998.

4. Literature consulted in developing the reasoning units tool included the following: Barton, 1993; Chambliss, 1995; Chambliss & Garner, 1996; Connor, 1990; Crammond, 1998; Fulkerson, 1996a, 1996b; Hairston, 1992; Locker & Keene, 1983; Purves, Gorman, & Takala, 1988; Rogers, 1994; Toulmin, 1958; van Dijk & Kintsch, 1983; Williams, 1991; Walter, 1966; Yeh, 1998.

5. Literature consulted in developing the error interference tool included the following: Barton, Halter, McGee, & McNeilley, 1998; Beach, 1989; Connors & Lunsford, 1988; Faigley et al., 1985; Hairston, 1981; Harris & Silva, 1993; Hillocks, 1986; Horning, 1987; Leonard & Gilsdorf, 1990; Leki, 1992; Ochsner, 1990; Reid, 1993; Rymer, Beard, & Williams, 1990; Shaughnessy, 1977; Sloan, 1990; Wall & Hull, 1989; Williams, 1982; Yeh, 1998.

6. These designations follow Fleiss's (1981, p. 218) interpretation of Cohen's Kappa, the statistic used for calculating agreement: above .75, excellent; .40 to .75, fair to good; and below .40, poor agreement beyond chance. Cohen's Kappa, a conservative method for analyzing interrater reliability, requires that agreement be calculated for all pairs of raters and the resulting values then averaged. As is customary in writing assessments, scores were considered in agreement if they were the same or one point adjacent; that is, discrepant scores lay more than one point apart on the 6-point scale (see Cherry & Meyer, 1993; White, 1994). (A sketch of this computation follows these notes.)

7. Passages in Table 7 are condensations from researchers' consultation notes; quotations are their verbatim notations. Neither are audiotaped transcriptions.

8. Tool users should be guided in how to interpret scoring results. Typically in standard scoring, a difference of more than one point on any tool is significant (that is, a 3 versus a 5, but not a 4 versus a 5). Also, scores that fall in the lower half of the scale (3 and below) merit particular attention.
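To make the computation in note 6 concrete, the following Python sketch, our illustration only, offers one plausible reading of the procedure: kappa is computed for each pair of raters, with scores that are identical or one point apart counted as agreement, and the pairwise values are then averaged. The rater data are invented for illustration.

from itertools import combinations
from collections import Counter

def kappa_adjacent(a, b):
    """Cohen's Kappa for two raters, with agreement defined as scores
    that are identical or one point adjacent on the 6-point scale."""
    n = len(a)
    # Observed proportion of same-or-adjacent scores.
    p_obs = sum(abs(x - y) <= 1 for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal score distribution.
    ca, cb = Counter(a), Counter(b)
    p_exp = sum((ca[i] / n) * (cb[j] / n)
                for i in ca for j in cb if abs(i - j) <= 1)
    return (p_obs - p_exp) / (1 - p_exp)

def mean_pairwise_kappa(raters):
    """Average kappa over all rater pairs, as note 6 describes."""
    pairs = list(combinations(raters, 2))
    return sum(kappa_adjacent(a, b) for a, b in pairs) / len(pairs)

# Invented data: four readers each scoring five essay sets (1-6 scale).
ratings = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 3, 3],
    [5, 3, 4, 2, 4],
    [3, 3, 5, 2, 5],
]
print(round(mean_pairwise_kappa(ratings), 2))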

References

Ackerman, J. (1991). Reading, writing, and knowing: The role of disciplinary knowledge in comprehension and composing. Research in the Teaching of English, 25, 133-178.

Ackerman, J. (1993). The promise of writing to learn. Written Communication, 10, 334-370.

Anson, C. M. (Ed.). (1989). Writing and response: Theory, practice, and research. Urbana, IL: National Council of Teachers of English.

Anson, C. M., & Forsberg, L. L. (1990). Moving beyond the academic community: Transitional stages in professional writing. Written Communication, 7, 200-231.

Bakhtin, M. (1986). Speech genres and other late essays (V. W. McGee, Trans.; C. Emerson & M. Holquist, Eds.). Austin: University of Texas Press. (Original work published 1952-53.)

Bamberg, B. (1983). What makes a text coherent? College Composition and Communication, 34, 417-429.

Barton, E. L. (1993). Evidentials, argumentation, and epistemological stance. College English, 55, 745-769.

Barton, E. L. (1995). Contrastive and non-contrastive connectives. Written Communication, 12, 219-239.

Barton, E. L., Halter, E., McGee, N., & McNeilley, L. (1998). The awkward problem of awkward sentences. Written Communication, 15, 69-98.

Bazerman, C. (1988). Shaping written knowledge: The genre and activity of the experimental article in science. Madison: University of Wisconsin Press.

Beach, R. (1989). Showing students how to assess: Demonstrating techniques for response in the writing conference. In C. M. Anson (Ed.), Writing and response: Theory, practice, and research (pp. 127-148). Urbana, IL: National Council of Teachers of English.

Berkenkotter, C., & Huckin, T. N. (1995). Genre knowledge in disciplinary communication: Cognition/culture/power. Hillsdale, NJ: Lawrence Erlbaum.

Bitzer, L. F. (1968). The rhetorical situation. Philosophy and Rhetoric, 1, 1-14.

Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives. Vol. 1: Cognitive domain. New York: McKay.

Bracewell, R. J. (1998). Commentary on 'Validation of a scheme for assessing argumentative writing of middle school students'. Assessing Writing, 5, 151-157.

Bridgeman, B., & Carlson, S. B. (1984). Survey of academic writing tasks. Written Communication, 1, 247-280.

Broad, B. (1997). Reciprocal authorities in communal writing assessment: Constructing textual values within a 'new politics of inquiry'. Assessing Writing, 4, 133-167.

Brown, R. L., & Herndl, C. G. (1986). An ethnographic study of corporate writing: Job status as reflected in written text. In B. Couture (Ed.), Functional approaches to writing: Research perspectives (pp. 11-28). Norwood, NJ: Ablex.

Chambliss, M. J. (1995). Text cues and strategies successful readers use to construct the gist of lengthy written arguments. Reading Research Quarterly, 30, 778-807.

Chambliss, M. J., & Garner, R. (1996). Do adults change their minds after reading persuasive text? Written Communication, 13, 291-313.

Cheng, X., & Steffensen, M. S. (1996). Metadiscourse: A technique for improving student writing. Research in the Teaching of English, 30, 149-181.

Cherry, R. D., & Meyer, P. R. (1993). Reliability issues in holistic assessment. In M. Williamson & B. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 109-141). Cresskill, NJ: Hampton Press.

Cherry, R. D., & Witte, S. P. (1998). Direct assessments of writing: Substance and romance. Assessing Writing, 5, 71-87.

Chin, E. (1994). Ethnographic interviews and writing research: A critical examination of the methodology. In P. Smagorinsky (Ed.), Speaking about writing: Reflections on research methodology (Sage Series in Written Communication, Vol. 8, pp. 247-272). Thousand Oaks, CA: Sage.

Clark, H. H., & Haviland, S. (1977). Comprehension and the given-new contract. In R. O. Freedle (Ed.), Discourse production and comprehension. Norwood, NJ: Ablex.

Connor, U. (1990). Linguistic/rhetorical measures for international persuasive student writing. Research in the Teaching of English, 24, 67-87.

Connor, U., & Johns, A. (1990). Coherence in writing. Alexandria, VA: Teachers of English to Speakers of Other Languages.

Connor, U., & Lauer, J. (1985). Understanding persuasive essay writing: Linguistic/rhetorical approach. Text, 5, 309-326.

Connors, R., & Lunsford, A. (1988). Frequency of formal errors in current college writing. College Composition and Communication, 39, 395-409.

Couture, B., & Rymer, J. (1993). Situational exigence: Composing processes on the job by writer's role and task value. In R. Spilka (Ed.), Writing in the workplace: New research perspectives (pp. 4-20). Carbondale, IL: Southern Illinois University Press.

Crammond, J. G. (1998). The uses and complexity of argument structures in expert and student persuasive writing. Written Communication, 15, 230-268.

Cross, G. A. (1994). Ethnographic research in business and technical writing. Journal of Business and Technical Communication, 8, 118-134.

DeRemer, M. L. (1998). Writing assessment: Raters' elaboration of the rating task. Assessing Writing, 5, 7-29.

Diederich, P., French, J., & Carlton, S. (1961). Schools of thought in judging excellence in English themes. Princeton, NJ: Educational Testing Service.

Diederich, P. B. (1974). Measuring growth in English. Urbana, IL: National Council of Teachers of English.

DiPardo, A. (1994). Stimulated recall in research on writing. In P. Smagorinsky (Ed.), Speaking about writing: Reflections on research methodology (Sage Series in Written Communication, Vol. 8, pp. 163-181). Newbury Park, CA: Sage.

Doheny-Farina, S. (1993). Research as rhetoric. In R. Spilka (Ed.), Writing in the workplace: New research perspectives (pp. 253-267). Carbondale, IL: Southern Illinois University Press.

Driskill, L. (1989). Understanding the writing context in organizations. In M. Kogen (Ed.), Writing in the business professions. Urbana, IL: National Council of Teachers of English.

Durst, R. K. (1987). Cognitive and linguistic demands of analytic writing. Research in the Teaching of English, 21, 347-376.

Durst, R., Laine, C., Schultz, L. M., & Vilter, W. (1990). Appealing texts: The persuasive writing of high school students. Written Communication, 7, 232-255.

Elbow, P. (1993). Ranking, evaluating, and liking: Sorting out three forms of judgment. College English, 55, 187-206.

Faigley, L. (1992). Fragments of rationality: Postmodernity and the subject of composition. Pittsburgh: University of Pittsburgh Press.

Faigley, L., Cherry, R. D., Jolliffe, D. A., & Skinner, A. M. (1985). Assessing writers' knowledge and processes of composing. Norwood, NJ: Ablex.

Fairclough, N. (1993). Discourse and social change. Cambridge, UK: Polity Press.

Farr, M. (1993). Essayist literacy and other verbal performances. Written Communication, 10, 4-38.

Farr, M., & Nardini, G. (1996). Essayist literacy and sociolinguistic difference. In E. M. White, W. M. Lutz, & S. Kamusikiri (Eds.), Assessment of writing: Politics, policies, practices. New York: The Modern Language Association.

Fleiss, J. L. (1981). Statistical methods for rates and proportions. New York: Wiley.

Flower, L. S., & Hayes, J. R. (1980). The cognition of discovery: Defining a rhetorical problem. College Composition and Communication, 31, 21-32.

Forman, J., & Rymer, J. (1999a). Defining the genre of the case write-up. The Journal of Business Communication, 36, 103-133.

Forman, J., & Rymer, J. (1999b). The genre system of the Harvard case method. Journal of Business and Technical Communication, 13, 373-400.

Freedman, A., Adam, C., & Smart, G. (1994). Wearing suits to class: Simulating genres and simulations as genre. Written Communication, 11, 193-226.

Fulkerson, R. (1996a). Teaching the argument in writing. Urbana, IL: National Council of Teachers of English.

Fulkerson, R. (1996b). The Toulmin model of argument and the teaching of composition. In B. Emmel, P. Resch, & D. Tenney (Eds.), Argument revisited: Argument redefined (pp. 45-72). Thousand Oaks, CA: Sage.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.

GMAC (Graduate Management Admission Council). (n.d., distributed 1994, May). The GMAT analytical writing assessment: An introduction. Santa Monica, CA: Author.

Green, S., & Higgins, L. (1994). Once upon a time: The use of retrospective accounts in building theory in composition. In P. Smagorinsky (Ed.), Speaking about writing: Reflections on research methodology (Sage Series in Written Communication, Vol. 8, pp. 115-140). Newbury Park, CA: Sage.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Hairston, M. (1981). Not all errors are created equal: Nonacademic readers in the professions respond to lapses in usage. College English, 43, 794-806.

Hairston, M. (1992). Successful writing (3rd ed.). New York: Norton.

Halliday, M., & Hasan, R. (1976). Cohesion in English. London: Longman.

Harris, M. (1995). Talking in the middle: Why writers need writing tutors. College English, 57, 27-42.

Harris, M., & Silva, T. (1993). Tutoring ESL students: Issues and options. College Composition and Communication, 44, 525-537.

Haswell, R. (1989). Textual research and coherence: Findings, intuition, and application. College English, 51, 305-319.

Haswell, R. H. (1998). Rubrics, prototypes, and exemplars: Categorization theory and systems of writing placement. Assessing Writing, 5, 231-268.

Hayes, J. R., Hatch, J. A., & Silk, C. M. (2000). Does holistic assessment predict writing performance? Written Communication, 17, 3-26.

Hill, L. A. (1992). Becoming a manager. Boston, MA: Harvard Business School Press.

Hillocks, G., Jr. (1986). Research on written composition. Urbana, IL: National Conference on Research in English.

Hoetker, J. (1982). Essay examination topics and students' writing. College Composition and Communication, 33, 377-392.

Horning, A. S. (1987). Teaching writing as a second language. Carbondale, IL: Southern Illinois University Press.

Huckin, T. N. (1992). Context-sensitive text analysis. In G. Kirsch & P. A. Sullivan (Eds.), Methods and methodology in composition research (pp. 84-104). Carbondale, IL: Southern Illinois University Press.

Huot, B. A. (1993). The influence of holistic scoring procedures on teaching and rating student essays. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 206-236). Cresskill, NJ: Hampton Press.

Huot, B. A. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549-566.

Hynds, S., & Rubin, D. (Eds.). (1990). Perspectives on talk and learning. Urbana, IL: National Council of Teachers of English.

Kent, T. L. (1984). Paragraph production and the given-new contract. The Journal of Business Communication, 21, 45-66.

Kent, T. L. (1993). Paralogic rhetoric: A theory of communicative interaction. Lewisburg, PA: Bucknell University Press.

Knight, M. (1999). Management communication in US MBA programs: The state of the art. Business Communication Quarterly, 62, 9-32.

Kolln, M. (1996). Rhetorical grammar: Grammatical choices, rhetorical effects (2nd ed.). Boston: Allyn & Bacon.

Kuriloff, P. C. (1996). What discourses have in common: Teaching the transaction between writer and reader. College Composition and Communication, 47, 485-501.

Leki, I. (1992). Understanding ESL writers: A guide for teachers. Portsmouth, NH: Boynton/Cook.

Leonard, D. J., & Gilsdorf, J. W. (1990). Language in change: Academics' and executives' perceptions of usage errors. The Journal of Business Communication, 27, 137-158.

Leonard, D. J., & Gilsdorf, J. W. (1999, November). Executives' and academics' perceptions of questionable language usage elements. Paper presented at the meeting of the Association for Business Communication, Los Angeles.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Lloyd-Jones, R. (1977). Primary trait scoring. In C. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 33-69). Urbana, IL: National Council of Teachers of English.

Locker, K. O., & Keene, M. L. (1983). Using Toulmin logic in business and technical writing classes. In W. K. Sparrow & N. A. Pickett (Eds.), Technical and business communication in two-year programs (pp. 103-110). Urbana, IL: National Council of Teachers of English.

Lorch, R. F., Jr., & O'Brien, E. J. (1995). Sources of coherence in reading. Hillsdale, NJ: Erlbaum.

Markels, R. B. (1984). A new perspective on cohesion in expository paragraphs. Carbondale, IL: Southern Illinois University Press.

Mathes, J. C. (1986). Three Mile Island: The management communication role. Engineering Management International, 261-268.

Miller, C. R. (1984). Genre as social action. Quarterly Journal of Speech, 70, 151-167.

Miller, C. R., & Selzer, J. (1985). Special topics of argument in engineering reports. In L. Odell & D. Goswami (Eds.), Writing in nonacademic settings (pp. 309-341). New York: Guilford.

Moss, P. (1994). Can there be validity without reliability? Educational Researcher, 23, 5-12.

Noll, C. L., & Stowers, R. H. (1998). How MBA programs are using the GMAT's Analytical Writing Assessment. Business Communication Quarterly, 61, 66-71.

Northey, M. (1990). The need for writing skill in accounting firms. Management Communication Quarterly, 474-495.

Nystrand, M. (1989). A social-interactive model of writing. Written Communication, 6, 66-85.

Ochsner, R. S. (1990). Physical eloquence and the biology of writing. Albany: State University of New York Press.

Odell, L., Goswami, D., & Herrington, A. (1983). The discourse-based interview: A procedure for exploring the tacit knowledge of writers in nonacademic settings. In P. Mosenthal, L. Tamor, & S. A. Walmsley (Eds.), Research on writing: Principles and methods (pp. 221-236). New York: Longman.

Pajares, F., & Johnson, M. J. (1994). Confidence and competence in writing: The role of self-efficacy, outcome expectancy, and apprehension. Research in the Teaching of English, 28, 313-331.

Penrose, A. M. (1992). To write or not to write: Effects of task and task interpretation on learning through writing. Written Communication, 9, 465-500.

Pula, J. J., & Huot, B. A. (1993). A model of background influences on holistic raters. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 237-265). Cresskill, NJ: Hampton Press.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26, 108-122.

Purves, A. C., Gorman, T. P., & Takala, S. (1988). The development of the scoring scheme and scales. In T. Gorman, A. C. Purves, & R. E. Degenhart (Eds.), The IEA's study of written composition: The international writing tasks and scoring scales (Vol. 5, pp. 41-58). New York: Pergamon.

Quinn, R. E., Hildebrandt, H. W., Rogers, P. S., & Thompson, M. P. (1991). A competing values framework for analyzing presentational communication in management contexts. The Journal of Business Communication, 28, 213-232.

Raforth, B. A., & Rubin, D. L. (1984). The impact of content and mechanics on judgments of writing quality. Written Communication, 1, 446-458.

Ray, R. E. (1993). The practice of theory: Teacher research in composition. Urbana, IL: National Council of Teachers of English.

Reid, J. M. (1993). Teaching ESL writing. Englewood Cliffs, NJ: Regents/Prentice Hall.

Rogers, P. S. (1994). Analytic measures for evaluating managerial writing. Journal of Business and Technical Communication, 8, 380-407.

Rogers, P. S. (2000). CEO presentations in conjunction with earnings announcements: Extending the construct of organizational genre through competing values profiling and user-need analysis. Management Communication Quarterly, 13, 426-485.

Rogers, P. S., & Hildebrandt, H. W. (1993). Competing values instruments for analyzing written and spoken management messages. Human Resource Management, 32, 121-142.

Rogers, P. S., & Rymer, J. (1995a). What is the relevance of the GMAT analytical writing assessment for management education? Management Communication Quarterly, 8, 347-367.

Rogers, P. S., & Rymer, J. (1995b). What is the functional value of the GMAT analytical writing assessment for management education? Management Communication Quarterly, 8, 477-494.

Rogers, P. S., & Rymer, J. (1996a). The GMAT analytical writing assessment: Opportunity or threat for management communication? Business Communication Quarterly, 60, 7-25.

Rogers, P. S., & Rymer, J. (1996b). "What shall we do with these essays?" Using the analytical writing assessment for diagnostic purposes. Selections, 39 (Spring), 25-39.

Rogers, P. S., & Rymer, J. (1996c). The Analytical Writing Assessment: Using the test results for diagnostic purposes (131 pp.). Prepared for the Graduate Management Admissions Council, McLean, VA.

Rogers, P. S., & Rymer, J. (1996d). The Analytical Writing Assessment diagnostic program (69 pp.). Prepared for the Graduate Management Admissions Council, McLean, VA.

Russell, D. R. (1991). Writing in the academic disciplines, 1870-1990. Carbondale, IL: Southern Illinois University Press.

Ruth, L., & Murphy, S. (1988). Designing writing tasks for the assessment of writing. Norwood, NJ: Ablex.

Rymer, J., Beard, J. D., & Williams, D. L. (1990, November). Correlating correctness and functionality in MBA writing assessments. Paper presented at the meeting of the Association for Business Communication, San Antonio.

Schriver, K. A. (1989). Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication, 32, 238-255.

Schriver, K. A. (1992). Teaching writers to anticipate readers' needs: What can document designers learn from usability testing? In H. Pander Maat & M. Steehouder (Eds.), Studies of functional text quality (pp. 141-157). Amsterdam: Rodopi.

Shaughnessy, M. (1977). Errors and expectations: A guide for the teacher of basic writing. New York: Oxford University Press.

Sloan, G. (1990). Frequency of errors in essays by college freshmen and by professional writers. College Composition and Communication, 41, 299-308.

Smagorinsky, P. (1994). Speaking about writing: Reflections on research methodology (Sage Series in Written Communication, Vol. 8). Newbury Park, CA: Sage.

Smeltzer, L. R., & Thomas, G. F. (1994). Managers as writers: A meta-analysis of research in context. Journal of Business and Technical Communication, 8, 186-211.

Spilka, R. (1990). Orality and literacy in the workplace: Process- and text-based strategies for multiple-audience adaptation. Journal of Business and Technical Communication, 4, 45-67.

Spivey, N. N. (1990). Transforming texts: Constructive processes in reading and writing. Written Communication, 7, 256-287.

Straub, R. (1996). The concept of control in teacher response. College Composition and Communication, 47, 223-251.

Suchan, J. (1998). The effect of high-impact writing on decision making within a public sector bureaucracy. The Journal of Business Communication, 35, 299-327.

Swales, J. (1990). Genre analysis. Cambridge: Cambridge University Press.

Swales, J. M. (1998). Other floors, other voices. Mahwah, NJ: Lawrence Erlbaum.

Thomas, J. P. (1995, November). Coherence: Toward a logic for effective management documents. Paper presented at the meeting of the Association for Business Communication, Orlando.

Thornton, G. C. (1992). Assessment centers in human resource management. Reading, MA: Addison-Wesley.

Toulmin, S. E. (1958). The uses of argument. Cambridge: Cambridge University Press.

van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.

Walker, C., & Elias, D. (1987). Writing conference talk: Factors associated with high- and low-rated writing conferences. Research in the Teaching of English, 21, 226-285.

Walker, R. (2000). Improving writing and analytic skills: The development of a student-centered system. Unpublished manuscript.

Wall, S. V., & Hull, G. A. (1989). The semantics of error: What do teachers know? In C. M. Anson (Ed.), Writing and response: Theory, practice, and research (pp. 261-292). Urbana, IL: National Council of Teachers of English.

Walter, O. M. (1966). Speaking to inform and persuade. New York: Macmillan.

Walvoord, B. E., & Anderson, V. J. (1998). Effective grading: A tool for learning and assessment. San Francisco: Jossey-Bass.

White, E. M. (1994). Teaching and assessing writing. San Francisco: Jossey-Bass.

Williams, J. M. (1981). Style: Ten lessons in clarity and grace. Glenview, IL: Scott, Foresman.

Williams, J. M. (1982). The phenomenology of error. College Composition and Communication, 32, 152-168.

Williams, J. M. (1991). Rhetoric and informal reasoning: Disentangling some confounding effects in good reasoning and good writing. In J. F. Voss, D. N. Perkins, & J.
W. Segal (Eds.), Informal reasoning and education (pp. 225-246). Hillsdale, NJ: Lawrence Erlbaum.

Williamson, M. M. (1993). An introduction to holistic scoring: The social, historical and theoretical context for writing assessment. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 1-43). Cresskill, NJ: Hampton Press.

Winsor, D. (1996). Writing like an engineer: A rhetorical education. Mahwah, NJ: Lawrence Erlbaum.

Witte, S. P. (1992). Context, text, intertext: Toward a constructivist semiotic of writing. Written Communication, 9, 237-308.

Witte, S. P., & Cherry, R. D. (1994). Think-aloud protocols, protocol analysis, and research design. In P. Smagorinsky (Ed.), Speaking about writing: Reflections on research methodology (pp. 69-85). Newbury Park, CA: Sage.

Witte, S. P., & Flach, J. (1994). Notes toward an assessment of advanced ability to communicate. Assessing Writing, 1, 207-246.

Wolcott, W., with Legg, S. M. (1998). An overview of writing assessment: Theory, research, and practice. Urbana, IL: National Council of Teachers of English.

Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483-503.

Yeh, S. S. (1998). Validation of a scheme for assessing argumentative writing of middle school students. Assessing Writing, 5, 123-150.

Task Tool for AWA Argument Essay

The Task tool evaluates how well the writing fulfills the assigned task and reader expectations. The Task tool applies to the argument essay, focusing on how well the writer analyzes the line of reasoning and the use of evidence in the argument. It also includes how well the writer interprets the assignment, figuring out what is appropriate for the type of writing (an analytical essay), the reader, and the situation.

The task for the GMAT argument essay requires the writer to critique the argument by analyzing the line of reasoning, especially by identifying and analyzing significant flaws. An effective critique identifies questionable assumptions, lack of evidence, or logical fallacies (e.g., making hasty generalizations, oversimplifying, and drawing inadequate cause/effect conclusions). A critique may also present alternative explanations, raise counterarguments, or suggest what would make the argument more persuasive or facilitate evaluation of its conclusions. To complete the argument essay successfully, the writer must focus on critiquing the argument rather than on analyzing it for some other purpose (e.g., to propose a business solution to a problem described in the argument).

Score  Explanation
6  Critiques the argument thoroughly, identifying several significant flaws and exploring them in some depth.
5  Critiques the argument quite thoroughly, identifying and explaining some significant flaws.
4  Critiques the argument, identifying some significant flaws and providing some explanation.
3  Begins to critique the argument, but may identify flaws without explaining them adequately; or may identify flaws that are not of central significance; or may narrowly identify and analyze a single significant flaw; or may analyze several aspects of the argument without directly critiquing it.
2  Solves the business problem inherent in the argument without directly critiquing it; or may focus on insignificant aspects of the argument. May attempt to critique but without a clear understanding of the argument.
1  No critique of the argument; or may discuss the business problem without any critique.

Figure 1: Task tool for the AWA argument essay

Coherence Tool

The Coherence tool focuses on how well the piece of writing forms a meaningful whole for the assumed reader. A coherent analytical essay is built around a controlling idea that is logically developed, with each passage moving clearly to the next. A coherent essay holds together. It is a logical, unified, and complete text that the reader can readily comprehend.

A writer facilitates the reader's sense of coherence in a text by providing sufficient context for interpreting the piece of writing (such as "in my training at the X Company"); indicating the relevance of the context to the situation (such as "causes for this problem"); and explaining the relationships among the ideas in the text (such as "most significant of all"). A writer enhances coherence by using cohesive devices that connect words, sentences, and paragraphs together, devices such as transitional words (such as "therefore," "yet," and "however"); appropriate pronoun references (such as "it," "this," "she"); synonyms (such as "Mary," "my manager," "the boss"); and repeated words and phrases. However, textual devices by themselves do not ensure that the underlying meaning will be clear to the reader. In fact, although the factors contributing to coherence may be difficult to pinpoint, readers can intuitively distinguish texts that are not fully coherent (often responding with "I don't quite get it," or "It doesn't all hang together"). A coherent piece of writing provides all the essential information the assumed reader will need at every point to comprehend the meaning, usually without rereading. If many passages remain unclear even after rereading, the text usually lacks coherence.

Score  Explanation
6  Text forms a meaningful whole with a controlling idea that is logically developed, each passage clearly related to the next. Rereading is unnecessary, even if content is complex.
5  Text largely forms a meaningful whole with a controlling idea that is logically developed, each passage clearly related to the next. Rereading is rarely necessary, and there are no unclear passages.
4  Text forms some overall sense of meaning around a central idea and a generally logical movement from one passage to the next. Occasional rereading may be necessary, but unclear passages are few and minor.
3  Some passages hold together, but do not form a meaningful whole; context may be missing, and parts may be unclear, inconsistent, or unrelated. Some cohesive devices may be used appropriately, but some may be inappropriate, and cohesive devices may not compensate for the lack of overall meaning. Frequent rereading of passages may be required; some passages may remain unclear.
2  Many passages do not hold together; there is no overall sense of a meaningful whole. Cohesive devices may be missing where needed or used inappropriately. Passages may require rereading; many may remain unclear.
1  Passages are disjointed, and there is no overall sense of meaning. Text requires rereading; many passages remain unclear.

Figure 2: Coherence tool

Reasoning Units Tool

The Reasoning Units tool examines how logically convincing the reader finds the claims and support presented in the writing. Specifically, the Reasoning tool focuses on logical units of discourse in which the writer presents claims and establishes their merit with supporting evidence (e.g., explanation, examples, comparison).

A claim is a viewpoint or position statement, an opinion that the writer is trying to support, or a conclusion that the writer is trying to establish. Claims are frequently stated using a form of the verb "to be," modal verbs ("could," "should," or "must"), or opinion words ("fails," "hinders," "demonstrates"). Support provides evidence or proof for a claim, defining, explaining, and substantiating it. Support in analytical essays may rely heavily on analysis with explanation and examples. Support may be drawn from personal experience, observations, and reading, and may include real or hypothetical examples, comparisons/analogies, citations/quotations of authorities, survey data, and statistics.

To convince the reader that the piece of writing is logical and reasonable, the writer provides "reasoning units." Each unit should consist of a significant claim that is explained and demonstrated with several kinds of supporting evidence. Packaged together to form a "data set," the evidence should be sufficient and relevant for the reader and the situation.

Score  Explanation
6  Reasoning units, consisting of claims and support, are logical, credible, and complete. Claims are explicitly stated, explained, and substantiated with supporting evidence. Support is relevant to claims, varied, concrete, and engaging.
5  Reasoning units, consisting of claims and support, are logical and credible. Claims are explicitly stated and sufficiently explained. Support is relevant to claims and varied.
4  Reasoning units, consisting of claims and support, are adequate. Claims are relatively apparent. Support is reasonably relevant to claims.
3  Reasoning units are incomplete or inadequate. Claims may be undeveloped with little or no support. Support may be insufficient; or seem simplistic (the "everyone knows" type); or may be inappropriate to the claim.
2  Claims may be very vague, difficult to find, insignificant, or irrelevant. Little or no support for claims; or support may be presented without claims.
1  No claims; no support.

Figure 3: Reasoning units tool

Error Interference Tool

The Error Interference tool assesses whether errors interfere with the writer's communication and/or damage his/her credibility. A writer's communication may be judged partly on how closely the language follows conventions in sentence structure, grammar, usage, mechanics, spelling, etc. Severe and frequent errors (and in some circumstances even milder forms and degrees of error) can negatively affect the writer's communication and/or credibility. Not all errors are equally intrusive or offensive to all readers, but they can be differentiated into categories by their typical impact on readers who are attempting to read for meaning rather than hunt for errors.

Disruptive Errors (e.g., unintelligible sentences, omitted words/phrases, unclear pronoun references, incorrect verb forms, run-on sentences, wrong words) tend to make the reader's task more difficult, even intruding on the reading process. Disruptive errors may also interfere with communication, preventing the reader from comprehending what the writer means.

Credibility Errors (e.g., faulty subject/verb agreements, some punctuation errors, spelling errors) do not usually disrupt communication, but they tend to reflect negatively on the writer's credibility, reducing the reader's confidence in what a writer has to say. Credibility errors become serious if they cause the reader to judge a writer's character or management ability by the frequency or mere presence of certain violations of Standard English.

Etiquette Errors (e.g., substituting "I" for "me" after prepositions; writing "someone left their report" instead of "his/her report"; misplacing apostrophes; confusing "it's" and "its") are errors that many readers, but not all, hardly notice, especially if reading quickly for meaning. However, etiquette errors can reduce the writer's credibility, especially with those readers who are concerned about professional image or those who believe that critical thinking is reflected in the observance of grammar rules.

Accent Errors (e.g., missing or wrong articles, wrong prepositions, incorrect use of idioms) commonly characterize the writing of non-native speakers. Accent errors, which are nearly impossible for non-native speakers to correct in the short term, will often be ignored by readers. Accent errors rarely interfere with communication, and they usually do not seriously damage the writer's credibility.

Score  Explanation
6  No errors interfere with communication or damage credibility. No disruptive errors, but may have a few credibility and/or etiquette errors, and/or accent errors.
5  Errors do not interfere with communication or damage credibility. No disruptive errors, but may have occasional credibility and/or etiquette errors; may have some accent errors.
4  Errors do not seriously interfere with communication or damage credibility. Occasional disruptive errors and/or some credibility errors and etiquette errors; may have frequent accent errors.
3  Errors interfere with communication and/or damage credibility. Some disruptive errors and/or frequent credibility and etiquette errors; may have numerous accent errors.
2  Errors interfere seriously with communication and damage credibility. Frequent disruptive errors and numerous credibility and etiquette errors; may have numerous accent errors.
1  Numerous errors of many types severely interfere with communication and damage credibility.

Figure 4: Error interference tool

Table 1
Mid- and Low-Scoring Students: Percentages for Consultation Sample and Nationally

AWA Scores Below 4.5   Consultation Sample (N = 42)   National Data a (N = 197,368)
4.0                    26%                            34%
3.5                    31%                            28%
3.0                    26%                            18%
2.5                    14%                            11%
2.0                     2%                             6%
(1.5)                  NA b                            3%
(1.0)                  NA b                            0%
Total                  100%                           100%

Note. a National summary statistics are reported for the first year of AWA administration (10/15/94 to 06/17/95), the testing period during which most participants in this study took the AWA. b The 1.5 and 1.0 scores are not applicable to this field study because the number of students scoring below 2.0 at the site schools is so low.

Table 2
Correlations Between Multiple Readers' a Analytical Tool Scores and Holistic Scores

Analytical Tool   Sample of Essay Sets b   Correlation with Holistic Score
Task Tool         n = 58                   .67
Coherence Tool    n = 36                   .68
Reasoning Tool    n = 58                   .71
Error Tool        n = 58                   .65

Notes: p < .05. a Most essays were scored by four independent readers. However, in ten cases there were only two or three readers. b The sample included the essays of students participating in the consultations, as well as the essays used for the last stage of developmental scoring. Except for the task tool, all tool scores reflect scoring both essays in each student's AWA essay set as a unit. The sample was smaller for the coherence tool because it was developed later in the study.

Table 3
Correlations Between Pairs of Analytical Tools Scored by Multiple Readers a

Analytical Tool Pairs     Sample of Essay Sets b   Correlation Between Readers' Mean Scores on Two Tools
Task and Coherence        n = 37                   .84
Coherence and Reasoning   n = 37                   .84
Task and Reasoning        n = 59                   .82
Error and Coherence       n = 37                   .65
Error and Reasoning       n = 59                   .54
Error and Task            n = 59                   .50

Notes: p < .05. a Most essays were scored by four independent readers. However, in ten cases there were only two or three readers. b The sample included the essays of students participating in the consultations, as well as the essays used for the last stage of developmental scoring. Except for the task tool, all tool scores reflect scoring both essays in each student's AWA essay set as a unit. The sample was smaller for the coherence tool because it was developed later in the study.

Table 4
Interrater Reliability Between Multiple Readers' a Analytical Tool Scores

Analytical Tool        Sample of Essay Sets b   Mean   Median
Task Tool              n = 58                   .56    .63
Coherence Tool         n = 36                   .47    .42
Reasoning Units Tool   n = 58                   .59    .59
Error Tool             n = 58                   .66    .71

Notes: Interrater reliabilities were calculated using weighted Cohen's Kappa (with "agreement" considered to be the same score or one point adjacent). a Most essays were scored by four independent readers. However, in ten cases there were only two or three readers. b The sample included the essays of students participating in the consultations, as well as the essays used for the last stage of developmental scoring. Except for the task tool, all tool scores reflect scoring both essays in each student's AWA essay set as a unit. The sample was smaller for the coherence tool because it was developed later in the study.

Table 5
Correlations Between Multiple Readers' a and Individual Consultants' b Analytical Tool Scores

Analytical Tool        Sample of Essay Sets c   Correlation Between Readers' Mean Score and Consultant's Score
Task Tool              n = 34                   .84
Coherence Tool         n = 31                   .80
Reasoning Units Tool   n = 34                   .71
Error Tool             n = 34                   .84

Notes: p < .05. a Most essays were scored by four independent readers. However, in ten cases there were only two or three readers. b Individual consultants' analytical tool scores were assigned by one of the readers in the original multiple scoring who, in preparing for a consultation, reviewed the original scores for a student's essays and scored the essays again. c The sample included the essays of students participating in the consultations during the 1996 winter semester only. Except for the task tool, all tool scores reflect scoring both essays in each student's AWA essay set as a unit. The sample was smaller for the coherence tool because it was developed later in the study.

Table 6
Survey Responses on Writing Consultation With Analytical Tools

(Scale: 1 = Highly Disagree, 5 = Highly Agree)

Writing Problems a
  Elite School     1: 0%       2: 0%       3: 0%        4: 6 (33%)    5: 12 (67%)
  Mainstream       1: 0%       2: 0%       3: 1 (5%)    4: 3 (16%)    5: 15 (79%)
  Total Sample     1: 0%       2: 0%       3: 1 (3%)    4: 9 (24%)    5: 27 (73%)

Ways to Improve b
  Elite School     1: 0%       2: 1 (6%)   3: 2 (11%)   4: 5 (28%)    5: 10 (56%)
  Mainstream       1: 0%       2: 0%       3: 1 (5%)    4: 7 (37%)    5: 11 (58%)
  Total Sample     1: 0%       2: 1 (3%)   3: 3 (8%)    4: 12 (32%)   5: 21 (57%)

Tools Helpful c
  Elite School     1: 0%       2: 0%       3: 0%        4: 9 (60%)    5: 6 (40%)
  Mainstream       1: 0%       2: 0%       3: 3 (27%)   4: 5 (46%)    5: 3 (27%)
  Total Sample     1: 0%       2: 0%       3: 3 (12%)   4: 14 (54%)   5: 9 (35%)

More Confidence d
  Elite School     1: 1 (6%)   2: 1 (6%)   3: 0%        4: 14 (78%)   5: 2 (11%)
  Mainstream       1: 0%       2: 0%       3: 6 (32%)   4: 3 (16%)    5: 10 (53%)
  Total Sample     1: 1 (3%)   2: 1 (3%)   3: 6 (16%)   4: 17 (46%)   5: 12 (32%)

Consultation Worthwhile e
  Elite School     1: 0%       2: 0%       3: 0%        4: 6 (33%)    5: 12 (67%)
  Mainstream       1: 0%       2: 0%       3: 2 (11%)   4: 4 (21%)    5: 13 (68%)
  Total Sample     1: 0%       2: 0%       3: 2 (5%)    4: 10 (27%)   5: 25 (68%)

Notes. Analysis was done using Fisher's Exact Test, since sample sizes were too small for chi-square (Agresti, 1990). By Fisher's test, there was no significant difference between the elite and mainstream schools in terms of ratings: a p = .352; b p = .834; c p = .138; e p = .484.

Table 7
Researcher Notes: What Students Gain by Using the Tools

Identify Deficiencies

An international student from Belgium with high GMAT scores (710, with a Verbal score of 41) believed his AWA 4.0 score did not reflect his writing abilities. Encouraged to read through his essays in light of the tools, he began identifying many writing problems. After reading the second paragraph in his argument essay, for example, he immediately recognized that he had not followed through on the contract he originally established with the reader. I confirmed his observations and encouraged him to review his MBA writing, looking for connections from paragraph to paragraph and sentence to sentence, as described in the coherence tool.

Defending his "list-like" AWA essays, this student recalled workplace proposals and technical reports, including communications with Japanese colleagues. He explained that he was expected to communicate with clarity and authority, using a "bullet format" to present the end product, without much elaboration or analysis. After being introduced to concepts in the tools, the student said that he wished he had had them before taking the AWA. He noted that although he was already writing reports for his MBA classes and felt confident about his writing, the consulting session was helpful, especially the reasoning units tool, which had given him a new sense of what it meant to support his position.

An international student from India read portions of his argument essay aloud. I jotted down errors that the student corrected orally as he read, such as substituting the correct word "there" for "they" in the text. Gradually this student became aware of what was happening and expressed his astonishment at this phenomenon: he himself was correcting his errors without realizing it. Using this discovery of his potential to edit his own writing, I helped him apply the error tool to classify patterns of significant errors (such as omitted and confused words).

Table 7 (continued)

Learn About New Discourses

A student described her writing of the AWA with some chagrin: "I didn't get a good grasp of the questions," she admitted, adding that she just got caught up in the ideas, letting one idea lead to the next without even thinking about what she was trying to say. Then, with increased embarrassment, she noted the similarities between the deficiencies in her essays and her first MBA writing assignment, which had recently been returned to her. According to her, the professor's comment essentially said that her paper didn't address the assignment and that she lacked adequate writing skills for a graduate program. Focusing on the needs for writing in her MBA courses, I used the task and coherence tools as the basis for discussing ways to help her figure out the demands of the assignment and to develop a coherent message.

An accountant for a bank, this student explained that she rarely writes in her job (only an occasional memo with bullet points), but her writing at school was always "okay." When I asked her to assess her essays from a reader's perspective, she soon responded that she had not quite answered the AWA questions, that her points were not quite clear, and that she seemed to repeat herself. Using the reasoning tool, she saw how the examples in her argument essay needed additional explanation to clarify their connection to her major claim. Immediately she began providing those connections orally. Then we discussed the kinds of explanation required for various MBA assignments she was preparing.

With high scores on the GMAT (including the Verbal section), this student's AWA holistic score of 3.0 and analytical tool scores of 2.5 to 3.0 indicated that he knew the rules for textual conventions (his error score was 4.5) but did not understand how to analyze the task or develop a coherent argument. When using the tools to examine his essays, he responded strongly to each one, as if a light bulb were going on, and at the end of the session commented: "These [tools] provide a very interesting template for thinking about writing. I've never thought about it in this kind of organized way."

Facilitate Workplace Writing

Wanting to explore the reasoning tool in light of his experience marketing products for a pharmaceutical company, the student asked: "How do you know when the amount of support is sufficient?" Building upon the notion of data as described in the reasoning tool, we discussed the need to package support based on the situation and the readers. After the consultation, the student said he would use the tools not only for himself but also for reviewing other people's documents, including some of the "poor" memos at work.

In this student's AWA essays, the assertions lacked development and support; in fact, his essays scored 2.0 on the reasoning tool. As an engineer, he wrote procedural manuals. This involved giving specific answers, the student explained, to questions in "brief, not flowery, communication." All he had to do was "explain the facts," he said. Using the notion of support in the reasoning tool, he began to reflect on what constitutes "a fact" and to reinterpret his workplace writing. Noting that the amount of support he had to provide depended upon the type of communication involved, he finally admitted that for the more technical aspects of his projects he did need to elaborate his positions more to be convincing.

Appendix A
Student Sample Essays Scored 3.0

Essay 1: Argument

The directors of a security and safety consulting service recommend the use of identification badges to all of its clients believing this is the reason why none of their clients have reported any incident of employee theft. There are a lot of uncertainties proving this claim.

The directors are assuming the badges are the reason no theft is reported. The problem may be that unreported company theft is a possibility. Companies may have company theft but choose not to report it because it may result in higher theft insurance rates or promote a bad reputation of the company name. Companies may also not be aware of any employee theft. It took Comerica Bank Incorporated years to discover that tellers were internally withdrawing money from stagnant customer accounts into their own accounts.

Several weaknesses are evident with the theory of identification badges. The identification badges may be fraudelent with false names or pictures. The practice of checking identification badges is also questionable. Entry into a company may be possible as long as the person has an identification badge. The badge might not rightfully belong to this individual. Comerica Bank Incorporated checks badges of all employees entering their buildings and allows entry if only a badge is in possession.

The directors would have a stronger argument with additional evidence of their security practices. They should show that the identification badges are further checked for authenticity during the day. Further practices such as renewal of badges periodically would prove that employees have the appropriate badge.

Essay 2: Issue

I agree that "there are essentially two forces that motivate people: self-interest and fear." These two forces occur daily in the workplace and our personal lives.

Self-interest is a motivation for many people. In the corporate world, there is strong competition which strengthens an individual's self-interest. Successful people want to be ahead of everyone else and motivate themselves to be the best they can be. To be the best you have to further yourself as much as possible. Examples of improving yourself can be taking on more challenges at work, furthering your education, and improving yourself person??

Fear is also strong motivation for many people. In this world, a lot of emphasis is placed on success. The world judges people by their jobs and relationships. Many people are afraid of not having a successful job and this motivates people to strive to successful. Many people are afraid that if they do not take the steps to be successful they will not be able to support themselves or their families. As a result, they fear they will not fit. Many people are afraid of being alone and not in love and this motivates them to take the steps needed to accomplish their goals.

The forces of self-interest and fear are very strong factors for motivation. Motivation may not always be easy but the individual desire to succeed and the fear of not succeeding forces motivation for many people.

Appendix A (continued)
Student Sample Essays Scored 3.5

Essay 1: Argument

The directors of a security and safety consulting service reported that from their research it was found that requiring employees to wear photo identification badges would prevent employee theft. This advice is not very sound because their argument has many flaws.

First, from my own personal experience identification badges do not prevent theft. At my company all employees must wear a badge and yet we have items stolen all the time.

Second, the security force must check the badges and personal belongings of people leaving a build in order for them to work efficiently.

Finally, the security consulting service should conduct more research to find more simularities between the ten companies mentioned. I am sure that they will find other practices that all the companies perform. The security service should recommend these as well.

Essay 2: Issue

"There are essentially two forces that motivate people: self-interest and fear." I find this statement to be very compelling. There are examples throughout society to support this statement. It can be seen in the laws we make, at work, and in schools.

Many of the laws we have rely on either self-interest or fear. For example we have tax laws that let you deduct money from your income if you makes donations to the United Way. This motivates people to give money away so that they will save on their taxes. Fear is used in laws when we want to control ones actions. For example, people fear going to jail so they will not commit crimes.

At work fear is used to get people to perform. Someone might fear losing their job if they do not finish a certain report. At the same time people are paid to work. This envokes ones self-interest to make money and buy something that they desire.

These two forces can also be found in school. One might fear getting a low grade on a test because they will be punished. Someone else might be trying to get the best grade because they think that it will mean a larger paycheck later in life.

Therefore, self-interest and fear are the two forces that motivate people. We use both of them in the laws we enact, at the work place, and in our schools to motivate people.

Appendix B
Major Problems MBA Writing Tutors Identified in Essays

Student A
Reader 1: Organization of ideas; cohesion (transitions between ideas); focus (staying on topic a problem); development of essay as a whole; grammatical & usage errors.
Reader 2: Lack of logical flow; organizational approach rough; incomplete paragraph development; too many grammar & punctuation mistakes.
Reader 3: Failure to take a position; simplistic & superficial understanding & argumentation; issues are approached only superficially, especially in Essay #2.
Reader 4: Failure to do what the instructions said in Essay #1; Essay #2 contradicts itself.

Student B
Reader 1: Organization generally strong; addresses reasoning and use of evidence in prompt; content/depth of analysis: useful ideas, could use more examples; content/evidence weak in Essay #2.
Reader 2: Tighter organization/argumentation needed: a workable proposal is needed and examples/reasoning too vague; clearer evidence/support for statements needed; point in last paragraph of Essay #2 unclear.
Reader 3: From the evidence in these essays, I would not worry about the student; shows a grasp of logic and ability to focus; has not come to grips w/issue in Essay #2; position statement buried in Essay #2.
Reader 4: Most ideas are explored on a superficial level; needs work developing ideas; no concrete support for statements in either essay; vague language.

Student C
Reader 1: Superficial analysis; problems with clarity; errors in grammar, spelling, usage; awkward use of idioms; reasoning seems circular (Essay #2).
Reader 2: Logical flow, coherence need improvement, especially in Essay #1; depth of analysis and conclusions drawn need work; problems w/word choice, syntax, style (probably non-native speaker).
Reader 3: Needs to write in active voice; many nominalizations bury meaning; paragraph two in both essays wends its way to a conclusion; try to write topic sentences that focus; spelling mistakes.
Reader 4: Narrowness of argument: in each essay, only one example or piece of reasoning is given; convoluted logic; language control problems (word choice, grammatical structure, use of idiom).

Student D
Reader 1: Tighter organizational strategy needed (Essay #1); serious organizational problems (Essay #2); contradictory ideas (Essay #2); overall cohesion a problem; rough, disjointed style; grammatical/spelling errors.
Reader 2: Flow of ideas, logical argumentation a problem; "supporting" evidence not clearly connected to claims made; style, syntax problems; needs more detailed development of ideas.
Reader 3: Essay #1 failed to complete the task of analyzing the argument; Essay #2 much better; great imprecision of language; many spelling, grammar, punctuation errors.
Reader 4: Internal contradictions; complexity: arguments (especially in Essay #2) limit discussion to the superficial.

Appendix B (continued)

Student E
Reader 1: Seems not to understand instructions re: providing support; tone is general, lacking a clear position statement (Essays #1 & #2); serious problems with clarity and organization (Essay #2); problems w/word choice, sentence structure, use of idioms; problems w/cohesion/flow.
Reader 2: Order of ideas, logical flow a problem (especially in Essay #2); support, evidence, examples not well organized; vague, too general (especially in Essay #1).
Reader 3: Essay #1 fails to complete the assigned task; does little more than repeat the statement in the prompt; main problem in both essays is lack of substance; student develops the whole piece around a trivial point.
Reader 4: Development of ideas: both essays restate the prompt and seem to take a position, but never explain why; examples must relate to the main point, rather than being a "laundry list"; explanation of "how" and "why" consistently missing.

Student F
Reader 1: Organizational problems; rambling style; problems w/sentence structure: direct repetition of words & phrases; superficial analysis in Essay #2.
Reader 2: Needs better connection between statement/claim & examples/evidence; clear intro and/or conclusion needed; make sure the question/statement is explicitly answered/addressed.
Reader 3: Lack of focus a problem in both essays: stick to a topic, figure out what to say about it, identify what doesn't fit; topic sentences would help; Essay #2 has a clear, focused argument, with some analysis.
Reader 4: Essay #1 bases its conclusions on generalizations; awkward words and phrases, in Essay #1 especially.

Student G
Reader 1: Serious language control problems; problems w/content & depth of analysis; problems w/expressing content succinctly; lack of clarity throughout Essay #2; had trouble understanding Essay #2 at all.
Reader 2: Needs clear connection between claim & support/evidence; logical flow of ideas needed; grammatical problems.
Reader 3: Essay #1: very undeveloped thoughts; Essay #2: had to read it many times before understanding what it meant; work on word choice and other language problems.
Reader 4: Organization a problem: although superficial markers like "first," "second," etc. are used, connection between paragraphs is tenuous at best; support for main points must be more than scattered, anecdotal examples; language control: writer cannot express ideas clearly; repetition w/o advancement of ideas.

Student H
Reader 1: Organizational problems; language control problems, awkward sentence structure, word choice problems; does not attempt to provide two examples that are relevant; needs help focusing in Essay #2.
Reader 2: Responses off-target: need to recognize & focus on what the prompt is addressing; support/evidence/examples unconvincing or not central; grammar, syntax problems.
Reader 3: Lack of focus a major problem in Essay #1, though not in #2; many language problems, both mistakes and imprecision; all of Essay #2 is one paragraph; organization could be enhanced if there were three, one to frame the issues and one for each part.
Reader 4: Explanation of why examples support claims needed in Essay #1; needs to come to grips w/the main question in Essay #2.

Appendix B (continued)

Student I
Reader 1: Essay #1 well organized, addresses questions in some depth; Essay #1 has clear and flowing style, but might need to work on transitions; Essay #2 is a strong "narrative" piece; Essay #2 needs more content development.
Reader 2: Clarify intro to relate more clearly to the prompt; be sure to address all parts of the prompt; should state the opposing case in Essay #2 and give the argument some context w/the reader; Essay #1 is a little "thin" in terms of development (but since this is a timed exercise, wouldn't assume it's a problem).
Reader 3: This student is in good shape: no flaws in logic, excellent sentence structure and language control.
Reader 4: Writer attempts to contextualize the argument in both essays w/marginal success; needs to make the intro work w/the main point rather than just getting attention; use of vague terms hinders meaning; Essay #2 has a position and some clear reasoning, but it, too, fails to "nail down" the evidence in a convincing manner.

Student J
Reader 1: Lacks clear position statement; serious problems with sentence structure, word choice, general language control; superficial analysis; needs topic sentences; needs to tighten paragraph focus.
Reader 2: Superficial treatment; complexity of issues not addressed; support missing or unconvincing.
Reader 3: Logic relatively sound in both essays; trouble with language control: abrupt shift of subject w/in sentences, from one sentence to another, and from one paragraph to another; some sentences don't fit; vague & imprecise words; opening of Essay #2 does not provide clear focus (is it discussing the specific issue or legislative change in general?).
Reader 4: Use of "charged language" (especially in Essay #2) hurts credibility; repetition of claims, rather than development, gives the impression of bullying more than arguing.

Appendix C
Writing Profile for a Phone Consultation

FROM:
SUBJ: Writing Profile
DATE: January 28, 1996

This is a profile of your writing skills based on our review of your GMAT essays. The profile indicates some of your strengths and some suggestions to improve your writing for the MBA program. Also included is a brief survey (and a stamped envelope) for you to let us know your views about the AWA consultation. I enjoyed talking with you and will look forward to receiving a copy of your term paper from the fall semester management class. After I have reviewed it, I will contact you to set up a telephone consultation. If you ever want to contact me directly, please call.

STRENGTHS:
- Some good ideas and concrete examples.
- Adequate vocabulary, control of Standard English and language conventions.
- Engaged and committed to the subject.

SUGGESTIONS TO IMPROVE:
- Figure out what's wanted in terms of the writing assignment before you get started. Read/listen to the task, considering carefully what is assigned. What are the expectations? What are the specific tasks and/or questions posed? Then describe for yourself (paraphrase in your own words) what you are supposed to do.
- Determine your purpose and major message responding to the assignment and develop pertinent ideas. Plan your overall statement to make sure that everything fits. Outline if you have time, making sure you are staying on target to achieve the task.
- Develop each point adequately before moving on, providing evidence to support it.
- After drafting, review/revise to make certain that you have made your message clear from the start and that one idea leads logically to the next.

Appendix C (continued)
Writing Profile for a Face-to-Face Consultation

TO:
FROM:
SUBJ: Writing Profile
DATE: Feb. 18, 1996

This is a profile of your writing skills based on a review of your GMAT essays. The profile indicates some of your strengths and some suggestions to improve your writing for the MBA program. Also included is a brief survey for you to give your views about the AWA consultation.

STRENGTHS:
- Essay 2 has a clear overall organization with a beginning, middle, and end, providing coherence to the essay.
- Essay 2 begins to address an important issue in the argument essay.
- Essay 2 indicates a smooth writing style.

SUGGESTIONS TO IMPROVE:
Task focus could be strengthened by the following:
- Identifying as many of the major issues as possible in the situation
- Offering more explanation and support for each issue identified

Reasoning in each essay could be strengthened by the following:
- Stating claims/points up front
- Providing more support, with more complete explanations, reasons, and evidence
- Using specific statements for each point; providing more context for each.

Appendix D
Consultation Protocol

* Establish rapport. Begin the consultation by engaging the student in conversation about the MBA program and about him/herself. Establish rapport and develop a sense of his/her situation and circumstances. Ask questions about his/her background and experience in writing, and then ask specifically for his/her opinions regarding the GMAT writing test. Aim to develop some perspective on the writer, including his/her readiness to explore needs for improvement for the MBA program, and on what approaches seem most suited to the discussion.

* Define the consultation's purpose and forecast the session's contents. The goal of this consultation is to review your GMAT writing test to identify your writing strengths and any needs that are important for performing well in the MBA program at this business school. We will discuss your experiences as a writer, including your writing of the AWA essays; we will review your essays; and we will identify some ways you might want to work to improve your writing. Please feel free to ask questions, make comments, and ask for clarification. The session will last from 20 to 30 minutes. (If the student is engaged in meaningful conversation about writing, the session may continue up to 45 minutes or even an hour.) Motivate the student's interest in discussing the AWA essays by noting that the goal is not to find fault with the essays, but to explore how the essay writing might help him/her with MBA writing.

* Discuss the student's attitude toward the AWA essays and their score. Do you recall writing the AWA essays? How did you feel about your performance on the test that day? How did you feel about your AWA score? Some students want to unburden themselves about an "off day," etc., and it is wise to let them do this early. Also, get the student to talk about the AWA score because it is helpful to know the student's attitude, and it may provide some idea of his/her agenda for the consultation. Did the AWA score simply confirm the student's perspective of mediocre skills, shatter a belief that he/she was a good writer, or prompt anger without altering a belief in the superiority of his/her skills?

(If the student asks about the AWA scoring: The AWA score represents at least four different blind readings, two raters for each of the essays. The score is the average of the two essays, rounded up to the nearest half-point interval; a brief sketch of this rule appears below.) Reviewing your AWA essays can be very useful to you in beginning your MBA career. Although these essays are quite different from MBA writing, we can evaluate features in your essays that are important for preparing assignments in the MBA program. For most students, a review of the AWA essays can indicate some deficiencies that represent opportunities to improve in key ways necessary to succeed in MBA writing assignments.

* Introduce copies of the student's AWA essays. You may recall that you wrote two essays for the GMAT writing test, one to critique an argument and the other to analyze an issue. Show the student the set of essays and the AWA questions for their test administration, allowing him/her sufficient time to review the essays and bring them back to memory. Both essays require analytical writing, but the issue essay is relatively open compared with the argument essay, which required you to identify the important flaws of the stated argument and analyze those flaws. In effect, the argument essay is much more narrowly defined or circumscribed than the issue essay. The way to understand why you got a particular AWA score is to review the essays themselves. Each essay is scored holistically on the basis of certain criteria, but the specific elements in your writing which influenced the evaluators to give you the overall score are never recorded. Therefore, the essays are provided to your school so that a diagnostic consultation/review is possible.

* Read/review one essay. Read one essay with the student at this point, unless it is clearly inappropriate because of the student's attitude or the circumstances. Asking the student to read the essay aloud has several benefits: she/he becomes actively involved and reacquainted with the essay and begins to remember actually writing it; you can learn through the oral reading what problems the student sees for him/herself; and frequently reading aloud causes students to recall what they were trying to do in constructing the essay, as well as notice problems in the writing themselves. Encourage the student to stop and comment during the reading. Paraphrase your own reading of sections of one essay, indicating your responses and any difficulties in reading.
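The averaging-and-rounding rule in the scoring note above can be stated compactly. The following is a minimal sketch in Python, assuming the "round up to the nearest half-point interval" rule exactly as described in this protocol (the function name and sample scores are illustrative only):

    import math

    def awa_score(essay1: float, essay2: float) -> float:
        # Average the two essay scores (each itself derived from two blind
        # ratings, per the note above), then round up to the nearest half point.
        average = (essay1 + essay2) / 2
        return math.ceil(average * 2) / 2

    # Example: essays scored 3.0 and 3.5 average to 3.25,
    # which rounds up to an AWA score of 3.5.
    print(awa_score(3.0, 3.5))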

* Engage the student in conversation about the essay, soliciting his/her views about the writing and attempting to understand what he/she was trying to do. Do you have any comments to make about your essays? What were you trying to do in this essay (passage)? Identify problematic areas and ask the student questions about them so that you can better understand how to help him/her.

* Provide positive feedback. When I was reading your essays, I found some strengths in your writing: Discuss strengths, alluding to anything of relevance to MBA writing, even if not included in the AWA criteria or the analytical tools.

* Use the analytical tools to discuss some critical aspects of MBA writing that can be observed in the student's AWA essays. Use the analytical tools for the specific areas of strength and of weakness in the AWA essays. Discuss the tools one by one, either according to the natural progression of the writing process (with the task tool first) or according to your priorities for the student. Read the description of the analytical tool, followed by the description of the scoring levels, focusing on the scores at and adjacent to your evaluation of the student's performance. Refer to specific passages in the student's essays that demonstrate strengths or areas to improve as identified by discussion of the tool. Then discuss suggestions for specific improvement and procedures for making those improvements, referring the student to particular passages in one or both of the essays. In making suggestions for improvement, refer to the student's writing processes and indicate links with typical kinds of MBA writing that would require similar skills. If the student seems more attuned to workplace writing, refer to it as well. If the student does not seem to accept or understand the weaknesses in his/her writing, show samples of high-scoring essays. Point out some of the strengths in these essays that correlate with weaknesses you are aiming to explain in the student's essays. (You might consider showing models of AWA essays scored 5 to 6 at any point during the consultation to clarify what a strong response might be.)

* Summarize the student's strengths and the suggestions to improve. You might provide a writer profile to the student. You may prepare the profile prior to the consultation, or you may write it out with the student during the consultation.

* Suggest local resources in the MBA program, business school, university, and/or community for helping students improve their writing. Adapt the suggestions to your school, providing concrete options and focusing them on the needs of the individual student. Talk about ways the student can take advantage of writing courses, consultants, tutors, and workshops. Also discuss using regular courses to improve writing, for example by asking for instructors' feedback, by encouraging peers' feedback, and by participating as a writer in team assignments rather than by crunching the numbers and letting someone else do the writing. Discuss courses, workshops, English-as-a-second-language facilities, tutorial services, community college courses, whatever might be available.

* Conclude by answering any questions and by offering encouragement. Ask if the student has any questions or comments. Encourage the student to take advantage of the opportunities available to improve his/her writing, both for the MBA program and for a career in management. Give the student the AWA essays and the analytical tools used in the session.

Appendix E
Post-consultation Survey

Please respond on the scale of 1 to 5 (where 1 is highly disagree and 5 is highly agree) by circling the number most closely representing your opinion. When you have completed the survey, please put it into the stamped envelope and mail it to us. Thanks for your response.

1. I received answers to my questions about the GMAT writing test during the consultation.
   1 (highly disagree)  2  3  4  5 (highly agree)

2. During the consultation, I learned about some problem areas of my writing that need improvement for the MBA program.
   1  2  3  4  5

3. During the consultation, I learned ways to go about improving some problem areas of my writing for the MBA program.
   1  2  3  4  5

4. I found the tools (e.g., task tool, coherence, reasoning, error) to be helpful in understanding how to improve my writing.
   1  2  3  4  5

5. During the consultation, I learned about resources available at my school to help me improve my writing for the MBA program.
   1  2  3  4  5

6. As a result of the consultation, I feel more confident about doing writing assignments in the MBA program.
   1  2  3  4  5

7. Overall, I found the consultation worthwhile.
   1  2  3  4  5

Please add your comments, suggestions, and questions regarding the consultation or the GMAT writing test below:

Appendix F
Generic Version of the Task Tool

Task Tool

The Task Fulfillment Tool evaluates how well the writing meets the reader expectations and situational demands necessary to fulfill the assigned task. Accomplishing the task requires the writer to understand why the writing is necessary, what generic form it should take, and how it will be read and used. To fulfill the task, the writer must successfully interpret it, figuring out what the writing needs to accomplish, the content that should and should not be included, and the type of writing required for the context. The reader-evaluator of a GMAT Analytical Writing Assessment argument question, for example, expects the writer to provide an essay that describes flaws in the argument; the reader-evaluator does not expect the writer to provide a memo outlining solutions to the business problem suggested in the argument question. Or, for example, a professor may reward an MBA student's case write-up because it analyzes key issues relevant to improving the performance of the business in the case, but fault another student's paper that merely retells the case without a clear recommendation. Different again, an employee may expect a manager to clarify whether the company's hiring policy has changed, because recent practices seem to break that policy, and consequently dismiss as irrelevant the manager's e-mail giving reasons for recent hiring decisions. In such a situation, the managerial writer's task analysis was flawed and the resulting e-mail did not meet reader expectations.

Score  Explanation

6  Fulfills the task completely by addressing reader expectations and meeting situational requirements for the writing. Content addresses reader requirements, issues, concerns, or questions; form is fully appropriate for the context and situation.

5  Fulfills the task to a great degree, addressing most reader expectations and meeting situational requirements for the writing. Content is largely relevant, and form is sufficiently appropriate for the context and situation.

4  Fulfills the task to some degree, addressing some reader expectations and, for the most part, complying with situational requirements for the writing. The content may meet reader expectations to some extent, but not completely. Form may depart from the contextual and situational norms in some respects.

3  Begins to fulfill the task. Includes some content that the reader finds relevant; however, the content leaves reader expectations largely unfulfilled.

2  Questionable as to whether the writing fulfills the task. Content may seem somewhat relevant to the task but may address it in a roundabout fashion. The reader may need to hunt hard to determine the writer's reason for writing or what the writing is actually for.

1  Does not fulfill the task. Content evidences writer misunderstanding regarding what the writing needs to accomplish. It may be irrelevant to the task. It may surprise or disappoint the reader. Can be characterized as "off-task."