Division of Research
School of Business Administration
May 1992
Revised January 1993

ANALYTIC MEASURES FOR EVALUATING MANAGERIAL WRITING

Working Paper #647

Priscilla S. Rogers
The University of Michigan

FOR DISCUSSION PURPOSES ONLY
This material may not be quoted or reproduced without the expressed permission of the Division of Research.

COPYRIGHT 1992
The University of Michigan School of Business Administration
Ann Arbor, Michigan 48109-1234


ANALYTIC MEASURES FOR EVALUATING MANAGERIAL WRITING

ABSTRACT

To address the need for writing assessment tools, this article presents two analytic measures for evaluating managerial writing: the Analysis of Argument Measure, based on Toulmin's (1958) elements of an argument, and the Persuasive Adaptiveness Measure, which draws on the Delia, Kline, and Burleson (1979) ranking system for scoring the degree of social perspective-taking. Developed through a series of pilots, the Analysis of Argument and Persuasive Adaptiveness Measures were designed to evaluate documents that are deliberately persuasive and "directorial" in nature, particularly documents written to manage organizational activities. To test reliability and validity, the measures were employed to assess a selected sample of managerial memoranda that had been scored holistically. Interrater reliability using Cohen's kappa achieved good agreement beyond chance, and correlations using Stuart Tau-C revealed a positive association between the analytic and holistic scores. Successful employment of the analytic measures for research and training exercises, samples of which are included, demonstrates their functionality.


ANALYTIC MEASURES FOR EVALUATING MANAGERIAL WRITING*

*This research was supported by the University of Michigan School of Business Administration. Special thanks to Kathleen Welch, Consultant at the University of Michigan Center for Statistical Consultation and Research Lab, who guided decisions regarding the application of statistical instruments; to Carol Mohr, who prepared the final manuscript; to Ulla Connor and Jone Rymer, whose work on assessment provided impetus for this study; and to the individuals serving as Michigan Business School Senior Writing Consultants (Edna Brenner, Beth Chlarucci, Janise Honeyman, Cynthia Koch, Martina Kohl, Mark McPhail, Carol Mohr, Bethany Spotts, Jane Thomas, and Leslie Wilhelm), who participated in the various pilots used to develop the analytic measures presented here.

Researchers and trainers in Communication and Composition readily acknowledge the need for writing assessment tools for research and pedagogy. As Hoetker (1982) contended, we do not currently have evaluative instruments for quantifying ratings of document quality that are fine-grained enough for statistical research, and holistic assessment, while useful for evaluating general writing ability, is inadequate because it yields only a single, non-descriptive score. Such a result is insufficient for certain experimental and ethnographic studies, particularly when the assessment of document quality plays a central role. Moreover, researchers including Connor (1988), Rymer (1989), and Huot (1990) have individually articulated the need for assessment tools that allow us to test the validity of holistic assessment. As for pedagogy, White (1985) summarized the inadequacy of holistic assessment, noting that we can test writing holistically but we cannot teach writing holistically: a single holistic score neither specifies the extent to which documents are rhetorically effective nor provides a means for pushing analysis beyond observations concerning grammar and syntax.

At the same time, evidence suggests that mechanical correctness may be less important to readers than content and organization. For example, Diederich (1974) analyzed the evaluative responses of over 50 readers, including editors, lawyers, and business executives, and found that they were most influenced by the richness, soundness, clarity, development, and relevance of the ideas expressed. Along similar lines, Breland and Jones's (1984) study of the criteria that raters use when making holistic judgments of essays revealed

that content was most significant in determining raters' scores; organization was second. As Irmscher (1979) observed, without sufficient means to articulate and quantify various rhetorical concerns (including the selection, organization, and development of content for intended readers), writing instruction can be reduced to proofreading, suggesting that "error-free writing" is synonymous with "good writing."

In an effort to address these concerns, researchers have proposed the development of analytic evaluative measures (Purves, Gorman, and Takala, 1988; Connor, 1988). Connor characterized analytic assessment tools as "discourse-structuring measures" for evaluating a writer's ability to effectively organize text, in contrast to "text/linguistic measures" for evaluating a writer's ability to produce texts with appropriate spelling, punctuation, and grammar. Basically, analytic measures break things down into constituent parts in order to examine how they work (White, 1985); therefore, analytic evaluation involves two basic activities: (1) determining a list of desired characteristics for the rhetorical situation and (2) devising a scoring scale with low, middle, and high rankings for rating each characteristic (Cooper & Odell, 1977; Perkins, 1983).

This study presents two analytic measures for evaluating managerial writing: the Analysis of Argument Measure and the Persuasive Adaptiveness Measure. Both measures were inspired by Connor (1988), who employed Toulmin's "elements of argument" and Clark and Delia's (1977) "adaptiveness scale" to evaluate English essays written by international students. Using Connor's (1988) work as a starting point, we developed the Analysis of Argument and Persuasive Adaptiveness Measures through a series of pilots involving the assessment of business documents. The goal of this piloting process was to formulate measures that could be employed to evaluate managerial documents of a deliberately directorial nature, a focus based on the conviction that managers frequently write persuasive documents to produce

cooperation, approval, compliance, and sales (Driskill, 1989; Northey, 1990; Paradis, Dobrin, and Miller, 1985). We subsequently employed the piloted measures to assess a selected sample of managerial memoranda that had been previously scored holistically. Drawing correlations between the holistic and the analytic scores allowed us to test the validity and potential usefulness of the analytic measures.

METHODOLOGY

The Analysis of Argument and the Persuasive Adaptiveness Measures were developed through three pilots following the approach outlined by Perkins (1983). For each pilot, senior writing consultants at the University of Michigan School of Business Administration used the measures to score several kinds of persuasive documents written by managers and MBA students in response to management communication cases. Basically, the process for each pilot ran like this: evaluators gathered in a room, reviewed the measure to be employed, and then individually used that measure to evaluate five to fifteen documents. Subsequently, the evaluators compared scores, discussed the workability of the measure, and refined the scoring levels and corresponding descriptions. We repeated this process until the evaluators were satisfied with the measure and could apply it with 95 to 100 percent agreement. We completed two of the three pilots during the 1989-1990 school year. The following year, writing consultants used the measures for consulting sessions with students. Discussions about the usefulness of the measures during these sessions prompted additional revisions. A third pilot, in the summer of 1991, resulted in the analytic measures presented here.

After the piloting process, we employed the measures to evaluate a selected sample of persuasive memoranda written in response to the Crown Regent Case (Appendix A), one of the cases used for a managerial writing assessment at the

University of Michigan School of Business Administration. Two hundred seventy entering MBA students took this assessment. Their assessment memoranda were scored holistically and filed according to scoring level. To compile the sample for this study, we pulled every fifth memorandum, covered identifying names and holistic scores, and made copies. These copied documents comprised a sample of 54 memoranda, with an appropriate number of documents at each holistic scoring level. Subsequently, two evaluators independently scored each memorandum. In cases where the evaluators' scores differed, a third evaluator independently scored the memorandum. Results from this process served to test the reliability and validity of the measures.

The following discussion presents the Analysis of Argument and Persuasive Adaptiveness Measures in conjunction with the theory upon which each is based. Sample exercises suggest ways each can be employed for training. Findings from the use of the measures to evaluate the selected document sample provide interrater reliability results, which are discussed in conjunction with the measure descriptions. Correlations between the analytic and holistic scores are presented in a special section before the conclusion. Throughout the discussion, cuttings from Crown Regent sample memoranda serve as illustrative material.

THE ANALYSIS OF ARGUMENT MEASURE

In 1958, Toulmin identified three irreducible attributes for the existence of an argument: claims, data, and warrants. According to Toulmin, claims are conclusions whose merits one is seeking to establish, data are the facts appealed to as a foundation for a claim, and warrants are connectors that justify or register the legitimacy of the step from the data to the claim. Toulmin contended that "establishing conclusions by the production of arguments" required these three elements (1958, 97).

In 1988, Connor employed Toulmin's elements of an argument to construct a scale for scoring the claims, data, and warrants in student essays. Simply summarized, Connor's "Analysis of Reasoning Measure" defined claims as interrelated major and minor topic statements that support a particular point of view. Using Connor's measure, an essay lacking a specific topic statement and consistent viewpoint received a low claim score of one; whereas, an essay including a specific topic statement, several well-developed supporting statements, and a consistent point of view received a high claim score of three. Connor scored data and warrants similarly: an essay containing no, or very little, data received a low data score of one; whereas, an essay containing well-developed and varied data received a high score of three.

The Analysis of Argument Measure presented here works somewhat differently, largely because it was designed for evaluating persuasive documents written for a variety of managerial situations; that is, documents intended to promote or defend specific conclusions or recommendations regarding an idea, an object, or an action, such as a proposal for a new product marketing strategy, a letter presenting reasons why a loan has been denied, a memorandum encouraging sales representatives to promote a particular product, or a press release defending company procedures during a crisis. Based on the notion that a manager must often write documents that very clearly state and substantiate his or her conclusions or recommendations, the Analysis of Argument Measure treats claims as the recommendations or conclusions that a writer wants his or her readers to accept. Evaluators assign a score ranging from a low of one to a high of four on each of the following: claims (conclusions or recommendations), data (evidence supporting those conclusions or recommendations), and warrants (explanation of the connection between the data and the claims). A document lacking conclusions or recommendations and with no

consistent point of view receives the low claim score of one; whereas, a document receives the high score of four if the recommendations or conclusions are clear, interrelated, and highly relevant for the rhetorical situation. Scoring for data and warrants works in kind, as seen in Figure 1.

Figure 1: Analysis of Argument Measure for Managerial Writing

Directions: To administer the Analysis of Argument Measure, an evaluator must identify the claims, data, and warrants in the document. These elements appear in various arrangements; sometimes warrants come before claims, data before warrants, etc. Not every document includes all three. Some evaluators find it useful to identify the claims first, then to locate the data supporting those claims, and finally to look for the warrants. Some evaluators also find that labeling the claims, data, and warrants (C, D, W) when reviewing a document facilitates final scoring. Ultimately, the evaluator's task is to find the scoring-level description for claim, for data, and for warrant that is most representative of how each is employed in the document under review.

Claim (C): Conclusions or recommendations the writer wants believed, followed, or adopted. Claims may also take the form of assertions or propositions.

C1  Conclusions/recommendations not stated. No consistent point of view.

C2  Conclusions/recommendations clearly stated, but general rather than concrete. Conclusions/recommendations may be difficult to actually apply. What they mean may not be absolutely clear. Conclusions/recommendations may not be relevant or may not relate to the key issues. May be multiple unrelated conclusions/recommendations. No consistent point of view.

C3  Conclusions/recommendations stated and somewhat relevant. They address some of the key issues. Conclusions/recommendations are somewhat specific and useful. Some relationship between conclusions/recommendations. Document somewhat focused around the conclusions/recommendations. Conclusions/recommendations begin to have organizational force in the document. Somewhat consistent point of view.

C4  Conclusions/recommendations are specific, highly relevant, and useful. Conclusions/recommendations address key issues. Conclusions/recommendations are related and compatible. Document focused around the conclusions/recommendations. Conclusions/recommendations have organizational force in the document. Consistent point of view.

Data (D): Evidence for the claim in the form of facts, statistics, examples, quotations or opinions of experts, comparisons, etc. Evidence is not a plan of action or procedure, but rather justification or support for conclusions/recommendations.

D1  Data is not used. No facts, statistics, examples, quotations, comparisons, or data of other kinds.

D2  Data used minimally. Amount and/or quality of data insufficient to support conclusions/recommendations. Data may not directly relate to major recommendations/conclusions. Data may be general or of the "everyone knows" type.

D3  Data used to some extent. Amount and/or quality of data somewhat sufficient to support some of the conclusions/recommendations. Data generally related to major conclusions/recommendations. Some variety of data types.

D4  Data used extensively. Specific, well-developed data to support every conclusion/recommendation. Data explicitly relates to each major conclusion/recommendation. Variety of data types.

Warrant (W): Explanation of the relationship/connection between claim and data; a bridge from the data to the claim (rather than new information). Answers the question: How do the data support the claim? Warrants indicate explicitly or implicitly that the data support the claim "because of" or "since" or "given that" some explanation is the case.

W1  Warrants not used. No attempt to relate data to conclusions/recommendations. Relationship between data & conclusions/recommendations is not clear. Logical gaps.

W2  Warrants minimally used. No deliberate attempt to relate data to conclusions/recommendations. Connection between conclusions/recommendations & data is more intuitive than obvious. May have to hunt for warrants. Warrants used may include logical fallacies. Warrants not always used when needed. Warrants included, but because of a lack of data they do not function as connectors. Arguments are not complete claim-data-warrant units.

W3  Warrants used to some extent. Somewhat deliberate attempt to relate data to conclusions/recommendations. Specific connection between the data & conclusions/recommendations not always clear. May include one claim-data-warrant unit, which demonstrates some deliberate argumentation.

W4  Warrants used when needed. Relationship between data & conclusions/recommendations consistently clear & obvious. Deliberate connecting of data & conclusions/recommendations. Individual arguments are claim-data-warrant units.
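For readers who tabulate scores across many documents, the rubric in Figure 1 can be captured in a small data structure. The following Python sketch is illustrative only: the record format, field names, and sample scores are hypothetical choices made for this example, not part of the measure itself.

    from dataclasses import dataclass

    # The Analysis of Argument Measure scores three elements, each from 1 (low) to 4 (high).
    ELEMENTS = ("claim", "data", "warrant")
    VALID_LEVELS = range(1, 5)

    @dataclass
    class ArgumentScore:
        """One evaluator's Analysis of Argument scores for a single document."""
        document_id: str
        claim: int
        data: int
        warrant: int

        def __post_init__(self) -> None:
            for name in ELEMENTS:
                level = getattr(self, name)
                if level not in VALID_LEVELS:
                    raise ValueError(f"{name} score must be between 1 and 4, got {level}")

    # Hypothetical example: a memorandum with fairly focused claims but thin data and warrants.
    memo = ArgumentScore(document_id="memo_17", claim=3, data=2, warrant=2)
    print(memo)

Keeping claim, data, and warrant as separate fields, rather than reducing them to a single total, mirrors the measure's descriptive intent.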

In using the Analysis of Argument Measure, evaluators found claims and data relatively easy to identify and score. Claims (conclusions or recommendations) frequently appeared as propositional or declarative statements, such as the following samples from Crown Regent memoranda:

"Our occupancy rate is worse than previously determined."
"Our analysis of our situation has led us to the conclusion that we are losing market share."
"We need to become more of a boutique style hotel."
"I recommend that we immediately begin a staff education program...."
"There is a communication problem between department heads and their staffs."

Sometimes claims were implied in an explanation of a plan, as in the following Crown Regent memorandum cutting: "We will establish a nationwide toll-free hotline to receive customer feedback and complaints.... The hotline will be staffed by every employee, on a rotating basis" (Appendix C). Here, the writer is clearly recommending the establishment of a nationwide hotline to be operated by the hotel staff.

Like claims, data (e.g., facts obtained from surveys or interviews, statistics, examples, quotations, and comparisons) were also readily flagged. Frequently, writers actually introduced data, as in the following phrases from Crown Regent memoranda:

"The results of the questionnaire indicate...."
"Earlier this month, my department conducted a lengthy survey of customer satisfaction..."
"I reviewed all the guest complaints and found that..."
"After analyzing our current occupancy position...."

Consequently, during the pilots evaluators located and scored claims and data with relative ease. Evaluators achieved an interrater reliability using Cohen's kappa of 0.736 for claim and of 0.656 for data on the sample documents for this study, as shown in Table 4 later in the article.

Scoring warrants proved more challenging, at least during the initial pilots. Toulmin characterized warrants as "general, hypothetical statements, which can act as bridges, and authorize the sort of step to which our particular argument commits us" (1958, 98). One might say that warrants are to claims and data what cement is to building blocks, glue is to the pieces of a broken tea cup, or a hook is to a picture on a wall. "The warrant is, in a sense, incidental and explanatory," Toulmin stated, "its task being simply to register explicitly the legitimacy of the step involved and to refer back to the larger class of steps whose legitimacy is being presupposed" (1958, 100). Toulmin suggested that warrants "service" or connect claims and data; therefore, warrants may be intentionally obscured and readily overlooked.

So then, what textual components does one search for when seeking to locate warrants? Toulmin's example provided some direction: "'Data such as D entitle one to draw conclusions, or make claims, such as C,' or alternatively 'Given data D, one may take it that C'" (1958, 98). Using Toulmin's explanation as a guide, during one pilot we asked evaluators to mark portions of text they were treating as warrants. When their marked documents were compared, we discovered that these evaluators uniformly regarded the following features as warrants: 1) transitional phrases linking claims and data or vice versa, 2) passages in which either "because of" or "since" appeared or was implied, 3) infinitives suggesting the purpose of the claims or serving to rationalize, justify, or explain the claims, and 4) explanations justifying or solidifying the claims.

Additional scholarly work is needed to describe the specific nature of warrants in management texts. Although we did achieve an interrater reliability of 0.672 on warrant when scoring the sample documents for this study, we have not obtained a similar level of agreement among participants using the Analysis of Argument Measure in our managerial training programs. Although participants in our training programs readily understand the concept of warrants, they have difficulty uniformly identifying them in actual text.

A Sample Management Training Exercise Using the Analysis of Argument Measure

We have asked participants in our MBA course titled Managerial Writing and in our Executive Communication Program to score various persuasive documents using the Analysis of Argument Measure. Scoring just the claims and data (which we find provides the most clear-cut and striking results), participants frequently discover they have written documents with unfocused paragraphs that are packed with claims but entirely data-free. Such documents tend to receive low scores on all the elements. Our use of the Analysis of Argument Measure to evaluate job application letters illustrates the point.

Taking the collective wisdom of several well-known business communication textbooks (Locker, 1992; Murphy and Hildebrandt, 1991), we can say that job application letters are written with a major objective in mind: to persuade the reader to offer the writer a position. Given this persuasive goal, we would expect such letters to include one or more paragraphs asserting conclusions (or claims) regarding particular qualities or competencies that recommend the writer as a strong candidate for the available position. One would also expect such claims to be supported with data, as suggested in the second and third paragraphs of the letter outline in Figure 2.

Figure 2: Possible Content Outline for a Job Application Letter

First Paragraph
Reason for the letter: "I would like a job interview with you."
Identification with reader: "We met at..." "Your name was given to me by...." "Your job ad appeared in..."
Personal introduction: "I am a second-year MBA student at X majoring in X."

Second Paragraph
Claim #1: "I have a thorough knowledge of accounting procedures for..."
Data: "I managed a $X budget for company X..."
Data/warrant: "I developed a cost-savings program which saved $X..."
Warrant: "This experience is directly related to the job requirements at X."

Third Paragraph
Data: "I was promoted to X..."
Data: "I received the X opportunity to direct the..."
Warrant: "These responsibilities, I believe, resulted from..."
Claim #2: "...my ability to..."

Fourth Paragraph
Specific Request: "Please include me in your interviewing schedule."
Action writer will take: "I will call you next week to discuss this possibility."

However, we frequently find that paragraphs from actual MBA student job application letters are filled with unfocused claims and lack data; in other words, such letters contain few names, dates, and numbers, the data that brings authority to a text and credibility to a writer. In the case of job application letters, data may ultimately distinguish one job candidate from another, for in the data one discovers personal differences and unique experiences that may recommend an applicant. Applying the Analysis of Argument Measure to documents such as job application letters causes writers and evaluators to identify the presence or absence of data and to inspect the adequacy of claims and warrants as well. The paragraphs in Figure 3, taken from actual MBA application letters, serve to illustrate this point. Using the Analysis of Argument Measure, paragraphs A and B receive low scores of one or two; whereas, paragraph C, which includes interrelated claims and substantial data, scores high.

Figure 3: Sample Second Paragraphs from Job Application Letters

Paragraph A

Please be kind enough to consider me for a position in your firm. My strengths are: a depth of cultural background, an ability to lead projects (or be a team member), and a facility in conferencing. Having considerable maturity and a good knowledge of New York, I am comfortable entertaining groups of visiting clients. My diverse background helps me develop working relationships with a wide variety of people on many levels.

Claims                                                                     Data to Support Claims
1. I have a deep cultural background.                                      None.
2. I have the ability to lead projects.                                    None.
3. I have the ability to work in a team.                                   None.
4. I can facilitate conferencing.                                          None.
5. I am mature.                                                            None.
6. I know my way around New York.                                          None.
7. I am comfortable entertaining groups.                                   None.
8. I have the ability to develop good working relationships with
   all kinds of people.                                                    None.

Paragraph B

Dow Corning's leadership in the silicone industry, recent sales achievement, continued emphasis on growth, and marketing focus are reasons why I would like to work for the company. As a second-year MBA student at the University of Michigan School of Business, I believe that my qualifications for a summer position with Dow Corning include a good base of business and marketing knowledge, as well as analytical and interpersonal skills, which will enable me to work effectively in a team-oriented environment.

Claims                                                  Data to Support Claims
1. I have a good base of business knowledge.            Completed one year of Michigan MBA Program.
2. I have a good marketing knowledge.                   Completed one year of Michigan MBA Program.
3. I have analytical skills.                            Completed one year of Michigan MBA Program.
4. I have interpersonal skills.                         None.
5. I can work well in a team-oriented environment.      Completed one year of MBA studies.

Paragraph C

I am currently a second-year MBA student at the University of Michigan School of Business. Before returning to school I worked for the innovative retailer R. H. Macy. During my two years at Macy's I managed a $1.6 million domestics (sheets, towels, blankets, pillows) business. I was subsequently promoted to the Assistant Buyer for Macy's $20 million bed linens business. Both positions involved a great deal of responsibility, including a sense of urgency and an ability to prioritize. These are important attributes needed to effectively participate in brand management at your company.

Claims                                        Data to Support Claims
1. I can handle a lot of responsibility       Manager for two years at Macy's
2. I can handle urgent tasks                  Managed Macy's $1.6 million domestics
3. I can prioritize                           Promoted to Assistant Buyer at Macy's
                                              Buyer for Macy's $20 million linens business

"The simplicity, completeness, and heuristic power of Toulmin logic," wrote Locker and Keene, "make it especially valuable for courses in business and technical writing" (1983, 104). We find Locker and Keene's observation relevant to the Analysis of Argument Measure, particularly because the measure provides users with a specific scoring scheme that facilitates detailed discussion about persuasive document content. Participants in our management training programs have responded enthusiastically: "This makes more sense than anything I've ever heard about persuasive writing," remarked one individual. His response is typical. THE PERSUASIVE ADAPTIVENESS MEASURE Writers are routinely instructed to compose documents that answer two basic reader questions: 1) Why are you sending me this message? and 2) What does this message have to do with me? (Connor, 1988). Those who use documents as a means to encourage actions that get work done, such as managerial writers, may want their readers to ask a third question: 3) What must I do in response to this message? In fact, persuading readers to act as directed may be the manager's primary motivation for writing. The Persuasive Adaptiveness Measure is a tool for evaluating the extent to which a document addresses these key reader questions. Recognizing the merits of appealing to intended receivers, particularly through oral channels, several Communication researchers developed a system for scoring persuasive messages according to the extent to which those messages assumed the receiver's perspective. Clark and Delia (1976) originated a four-level, hierarchical ranking system to score what they characterized as the "degree of social perspectivetaking" in messages. Delia, Kline and Burleson (1979) expanded this prototypic ranking system into a nine-level hierarchy with levels ranging from a low score of one, for messages with "no discernible recognition of the receiver's perspective," to a high 13

score of nine for messages with "explicit recognition and adaptation to the receiver's perspective." Subsequent empirical work (Shepherd and O'Keefe, 1984) confirmed the validity of the Delia, Kline, and Burleson (1979) instrument by demonstrating a relationship between messages receiving high scores on the social perspective-taking scale and messages that were effective.

Delia, Kline, and Burleson (1979) devised their instrument to score the extent of listener adaptation in oral, interpersonal interchanges; however, in 1988, Connor successfully used their scale to score student essays, and thus illustrated the applicability of the instrument for written messages. Connor's (1988) work suggested the potential applicability of the Delia, Kline, and Burleson (1979) instrument for evaluating the degree of social perspective-taking in persuasive managerial documents. We subsequently used the Delia, Kline, and Burleson (1979) instrument as a starting point for the first of several pilots that produced the Persuasive Adaptiveness Measure presented here.

The Persuasive Adaptiveness Measure is distinct from the Delia, Kline, and Burleson (1979) instrument in several significant ways. For one thing, the piloting process allowed us to collapse the Delia, Kline, and Burleson (1979) scale from three levels with nine scores, ranging from a low of zero to a high of eight (Level I: 0-2; Level II: 3-5; Level III: 6-8), to a scale with two levels and six scores, ranging from a low of one to a high of six (Level I: 1-3; Level II: 4-6), as seen in Figure 4. Consequently, the Persuasive Adaptiveness Measure consists of a six-point (rather than nine-point) hierarchical scale, which is organized into two (rather than three) major levels.

First, we eliminated both the zero and level five scores. The zero score was confusing because the original scale consisted of nine possible levels, yet the highest possible score was only eight (scores ranged from 0 to 8). Moreover, in our initial pilot no documents received a zero score, suggesting its inappropriateness in our case. The zero scoring

level may have been appropriate for the Delia, Kline, and Burleson (1979) scheme because the intent was to examine the development of persuasive communication strategies for children as well as for adults. For our purposes, the zero score proved unnecessary.

Figure 4: Persuasive Adaptiveness Measure for Managerial Writing

Directions: To apply the Persuasive Adaptiveness Measure, read the document with the scoring scale in mind. Look for expressions of need, desirability, or usefulness and/or for discussion of consequences. You might write "need" or other key words in the margin. Then assign the document a score on the scale from one to six.

Level I: Not reader focused. No discernible adaptation of writer conclusion/recommendation to reader's perspective.

1  Writer's conclusion/recommendation not apparent or deliberately stated.

2  Writer's conclusion/recommendation stated but not explained.

3  Writer's conclusion/recommendation stated with some elaboration.

Level II: Reader focused to some extent. Adaptation of writer conclusion/recommendation to reader's perspective.

4  Writer suggests the necessity, desirability, or usefulness of the conclusion/recommendation for the reader.

5  Writer focuses on the necessity, desirability, or usefulness of the conclusion/recommendation. This may include one or some combination of the following:
   - some dealing with reader objections/concerns regarding the conclusion/recommendation,
   - some suggestions for implementing the conclusion/recommendation, or
   - some effort to demonstrate how the reader benefits by accepting the conclusion/recommendation.

6  Writer takes the reader's perspective in articulating the necessity, desirability, or usefulness of the conclusion/recommendation. This may include one or some combination of the following:
   - dealing with reader objections/concerns regarding the conclusion/recommendation,
   - explaining how to implement the conclusion/recommendation, or
   - demonstrating how the reader benefits by accepting the conclusion/recommendation.
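The adaptiveness scale in Figure 4 can be recorded in the same spirit as the argument rubric. The sketch below is a hypothetical illustration: the level labels follow Figure 4, the one-point agreement check reflects how evaluator differences are discussed later in this article, and the function names are invented for this example.

    def adaptiveness_level(score: int) -> str:
        """Map a Persuasive Adaptiveness score (1-6) to its major level per Figure 4."""
        if score not in range(1, 7):
            raise ValueError(f"adaptiveness score must be between 1 and 6, got {score}")
        return "Level I: not reader focused" if score <= 3 else "Level II: reader focused to some extent"

    def within_one_point(score_a: int, score_b: int) -> bool:
        """True when two evaluators' scores differ by no more than one point."""
        return abs(score_a - score_b) <= 1

    # Hypothetical pair of evaluator scores for one memorandum.
    print(adaptiveness_level(5))      # Level II: reader focused to some extent
    print(within_one_point(5, 4))     # True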

Findings from previous Communication research indicated the appropriateness of removing the level five score, intended to assess the extent to which the communication dealt with anticipated receiver counter-arguments. When empirically testing the Delia, Kline, and Burleson (1979) scoring scheme, Shepherd and O'Keefe found no correlation between effective documents and documents containing counter-arguments. They explained this result by suggesting that counter-arguing may be "intrinsically face-threatening" to the receiver of a message because it denies the legitimacy of the receiver's objections (1984, 148-9), a conclusion recalling Brown and Levinson's (1978) work on politeness, which suggests that requests are intrinsically face-threatening acts involving some imposition on the receiver. Expanding upon Brown and Levinson's notion, Shepherd and O'Keefe concluded that counter-arguing "adds insult to imposition, since in counterarguing a message producer denies the legitimacy of the objections the target may have and does so preemptively" (1984, 149). The Shepherd and O'Keefe analysis, and the fact that the level five score did not add value or clarity to the document assessment process, prompted us to drop the level five "counter-argument category."

The decision to remove the Delia, Kline, and Burleson (1979) major category titled "Level II: Implicit Recognition of and Adaptation to the Target's Perspective" occurred in our first pilot. Evaluators found the category inappropriate given our intent to assess managerial documents in which deliberate or "explicit" directives are desirable. The descriptions for the scoring levels were similarly modified. As a result, the Persuasive Adaptiveness Measure shown in Figure 4 is considerably different from the Delia, Kline, and Burleson (1979) instrument.

Evaluators found the Persuasive Adaptiveness Measure highly functional for document evaluation. On the sample documents for this study, calculated agreement between evaluators' scores using Cohen's kappa achieved an interrater reliability of

0.806, as seen in Table 4. Moreover, when the evaluators' scores differed, there was only one case in which that difference exceeded one point, even though the Adaptiveness Measure allows for six distinct scores.

A Sample Management Training Exercise Using the Persuasive Adaptiveness Measure

One of our most successful exercises using the Persuasive Adaptiveness Measure involved participants in a comparative analysis. The responses of 18 middle managers who participated in our March 1992 Executive Communication Program illustrate the approach. In this particular case, we employed three documents from the Crown Regent sample (Appendices B, C, D). These documents had received a range of high, middle, and low holistic and adaptiveness scores when they were evaluated for the research reported here and therefore represented varied approaches and quality (Memorandum #33 received a holistic score of 4 and an adaptiveness score of 4; Memorandum #35 received a holistic score of 1 and an adaptiveness score of 3; Memorandum #52 received a holistic score of 3 and an adaptiveness score of 6). Unaware of the scores awarded these documents earlier, participating managers were asked to rank order them in terms of overall quality. We then recorded participants' choices on a transparency, as shown in Table 1.

TABLE 1
Summary of Managers' Overall Quality Choices

                              Document Identification #
                                  33      35      52
First Quality Choice              12       1       5
Second Quality Choice              5       3      10
Third Quality Choice               1      14       3

Twelve, or two-thirds, of the participating managers selected Crown Regent document #33 as superior in quality. Document #35 was ranked low by an even greater number, 14 managers. Document #52 received mixed reviews, with just over half of the managers placing it second. After discussing document features that contributed to these rankings, we went to lunch. (Sometimes it is useful to take a break of some sort at this point in the exercise so that participants may return to the same documents with some measure of freshness.)

After the break, participating managers learned the Persuasive Adaptiveness Measure through a process involving scoring and discussing a diverse set of persuasive memoranda, letters, and short proposals. When it was apparent that the managers felt comfortable applying the Persuasive Adaptiveness Measure, they were directed to use it to score the three Crown Regent documents. Their adaptiveness scores were then recorded on a transparency, as shown in Table 2.

TABLE 2
Managers' High, Medium, and Low Adaptiveness Scores

                              Document Identification #
                                  33      35      52
High Adaptiveness Score            8       1       9
Medium Adaptiveness Score          9       1       8
Low Adaptiveness Score             0      16       2

In this case, managers' high adaptiveness scores were largely split between documents #33 and #52. Discussion revealed that managers found bits of useful information in both documents, although they thought neither was satisfactory from the readers' point of view. Actually, only one participant awarded the highest possible adaptiveness score of 6 to document #52; however, this participant later remarked that

her scores were "all somewhat inflated." On the low end, document #35 received consistently low adaptiveness scores from all but two of the participating managers.

The third step of this exercise involved comparing the managers' adaptiveness scores with their overall quality choices. The transparency with the adaptiveness scores (Table 2) was then placed on top of the transparency with the overall quality scores (Table 1) to visualize the comparison, as shown in Table 3.

TABLE 3
Comparison of Managers' Quality Choices and High-Low Adaptiveness Scores

                              Document Identification #
                                  33      35      52
First Quality Choice              12       1       5
High Adaptiveness Score            8       1       9
Second Quality Choice              5       3      10
Medium Adaptiveness Score          9       1       8
Third Quality Choice               1      14       3
Low Adaptiveness Score             0      16       2

In this instance, document #35 received both low quality and low adaptiveness scores; whereas, documents #33 and #52 received higher scores in both cases. It is particularly interesting that more than half of the participants selected document #33 as their first choice on overall quality, yet less than half gave this document a high adaptiveness score. This result invited lively discussion about a number of questions: 1) What is the reader looking for in the Crown Regent memorandum? 2) To what extent is overall document quality and reader adaptation each important in this situation? 3) To what extent are overall document quality and reader adaptation related? 4) Can a document possess low overall quality and yet be highly (or

somewhat) adapted to the reader? 5) Can a document be poorly written in some respects and yet provide the reader with the needed information or prompt the reader to take the appropriate actions? 6) Can a document be well written and yet fail to provide the kind of information that the reader needs or fail to persuade the reader?

Such questions raise issues about the nature of functional writing and dramatically illustrate the complexities involved. Moreover, since no one (not even the training leader) can absolutely predict the outcome of a comparison between the overall quality scores and the adaptiveness scores, this exercise possesses an element of spontaneity that can enliven a training session. Perhaps the overall quality choices and adaptiveness scores will correlate; perhaps they will not. Whatever the result, we find that the comparative analysis of the quality choices and adaptiveness scores stimulates engaging discussions concerning fundamental questions about the function of written messages as management tools.

STATISTICAL ANALYSES OF THE ANALYTIC MEASURES

Statistical analyses suggest that the Analysis of Argument and Persuasive Adaptiveness Measures are reliable and valid indicators of managerial document quality. In every case interrater reliability achieved "good" agreement beyond chance. Using Cohen's kappa, the lowest value was 0.656, for interrater reliability on data, and in every case the P value was .0000, well beyond chance, as seen in Table 4. As Fleiss specifies regarding the use of Cohen's kappa for interrater reliability in Statistical Methods for Rates and Proportions: "For most purposes, values greater than .75 or so may be taken to represent excellent agreement beyond chance, values below .40 or so may be taken to represent poor agreement beyond chance, and values between .40 and .75 may be taken to represent fair to good agreement beyond chance" (1981, 218).
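Researchers who want to run the same reliability check on their own score sets can compute Cohen's kappa with standard statistical libraries. The sketch below uses scikit-learn's cohen_kappa_score on invented rater scores and applies the Fleiss (1981) interpretation bands quoted above; the numbers are hypothetical, not the study's data.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical claim scores (1-4) assigned by two evaluators to the same ten memoranda.
    rater_1 = [3, 2, 4, 1, 3, 2, 4, 3, 2, 1]
    rater_2 = [3, 2, 4, 2, 3, 2, 4, 3, 1, 1]

    kappa = cohen_kappa_score(rater_1, rater_2)

    # Fleiss (1981) bands for interpreting agreement beyond chance.
    if kappa > 0.75:
        band = "excellent"
    elif kappa >= 0.40:
        band = "fair to good"
    else:
        band = "poor"

    print(f"Cohen's kappa = {kappa:.3f} ({band} agreement beyond chance)")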

TABLE 4
Interrater Reliability Using Cohen's Kappa

                          Value    Asymptotic         T Value    P Value
                                   Standard Error
Adpt. R1 & Adpt. R2       0.806    0.069              11.681     .0000
Clm R1 & Clm R2           0.736    0.086               8.558     .0000
Data R1 & Data R2         0.656    0.086               7.628     .0000
War R1 & War R2           0.672    0.086               7.814     .0000

Descriptions of Statistical Categories
Value: The extent to which the variables are associated.
Asymptotic Standard Error: The certainty of the Value.
T Value: The ratio of the Value to its Standard Error.
P Value: The degree to which the results could have been due to chance alone. (The smaller the P Value, the less likely the Value is due to chance.)

Correlations between the analytic and holistic measures were determined using Stuart Tau-C, which is designed to calculate the degree of "concordance" as opposed to "discordance," or, in this case, the degree of positive association between the holistic and analytic scores (Liebetrau, 1983). Stuart Tau-C was most appropriate for this analysis for several reasons: 1) the data are categorical (having a limited number of values) rather than continuous, 2) the scoring scales are ordinal and impressionistic rather than exact, and 3) the analytic and holistic measures do not possess equal numbers of scoring levels. (Had the analytic and holistic measures consisted of the same number of scoring levels, Kendall Tau-B would have been appropriate. Moreover, regression analysis did not best meet the criteria for analyzing these data because it assumes that the variables are continuous rather than categorical.)
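The correlation step can be reproduced in the same way. SciPy's kendalltau function computes Stuart's tau-c when called with variant="c", which suits paired ordinal scales with unequal numbers of categories; the holistic and claim scores below are invented for illustration only.

    from scipy.stats import kendalltau

    # Hypothetical paired scores for ten memoranda: holistic (1-4) and claim (1-4).
    holistic = [1, 2, 2, 3, 3, 3, 4, 4, 2, 1]
    claim    = [1, 2, 3, 3, 2, 4, 4, 3, 2, 2]

    # variant="c" requests Stuart's tau-c rather than Kendall's tau-b.
    tau_c, p_value = kendalltau(holistic, claim, variant="c")
    print(f"Stuart tau-c = {tau_c:.3f}, p = {p_value:.4f}")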

As displayed in Table 5, the Stuart Tau-C revealed a positive association between the holistic and analytic scores in every category except data, for which the P value exceeded .05, the typical cut-off point for analyses of this nature. On the other hand, the low P values for claim, warrant, and adaptiveness demonstrated positive association between the holistic and analytic scores.

TABLE 5
Correlations of Holistic and Analytic Scores Using Stuart Tau-C

                           Stuart Tau-C    Asymptotic         T Value    P Value
                           Value           Standard Error
Holistic/Claim             0.238           0.091              2.615      .0089
Holistic/Data              0.185           0.099              1.869      .0616
Holistic/Warrant           0.298           0.114              2.614      .0089
Holistic/Adaptiveness      0.340           0.089              3.820      .0001

See Table 4 for descriptions of the statistical categories.

The Tau-C on data indicates that when evaluators scored the Crown Regent memoranda using holistic evaluation, they did not require high-scoring documents to possess strong data as articulated in the Analysis of Argument Measure. One explanation for this finding may rest with the fact that the Crown Regent Case provided the writer with very little data. Therefore, to compose a memorandum with substantial data would have required the writer to fabricate. Given that for a number of years our case prompts for large-scale holistic assessment contained almost no data (including the Crown Regent Case), we were not inclined to award low holistic scores to documents simply for lack of data. Recently, however, we have included several types of data in our assessment case prompts and have more deliberately

evaluated data use when applying holistic evaluation. (A future research project might involve analyzing a selected sample of the documents written in response to case prompts that include data and comparing the results with those reported here.)

In contrast to data, we found a strong association between the adaptiveness and holistic scores, as evidenced by the Stuart Tau-C value of 0.340 and low P value of .0001. This result suggests that documents receiving high adaptiveness scores also tended to receive high holistic scores. Since audience analysis comprised one of the four main criteria for the holistic evaluation originally administered on these sample documents, this correlation was expected.

In every case, correlations between the holistic and analytic scores could have been stronger; however, the Analysis of Argument and Persuasive Adaptiveness Measures are not intended to tell the whole story, but rather to evaluate particular rhetorical features. By contrast, holistic assessment is inherently inclusive, designed to assess a broad range of features in major areas, such as audience awareness, organizational strategies, content development, and language control. The positive associations between the holistic and analytic scores on the sample documents for this study demonstrated that the Analysis of Argument and Persuasive Adaptiveness Measures explain some, but not all, of the holistic score. This is an appropriate outcome.

CONCLUSION

The Analysis of Argument and the Persuasive Adaptiveness Measures are intended as straightforward tools for researchers, teachers, and writers; tools that facilitate both the composition and evaluation of documents that are deliberately persuasive and directorial in nature. In our experience, the measures have tremendous heuristic value. For example, we use them for post-assessment

consultations to explain students' holistic assessment scores. The measures allow us to articulate and quantify students' holistic scores, and few students appeal for reassessment as a result. In MBA classes and executive education seminars, the analytic measures have provided an appealing means to introduce concepts and vocabulary that may be unfamiliar to many managerial writers, concepts such as claims, data, and reader adaptiveness. Even more significantly, the measures illustrate the multiple and complex choices involved in crafting documents to achieve goals in diverse organizational settings. Moreover, comparative use of the measures allows participants to explore, in very specific terms, the strengths and weaknesses of various documents; for example, a document may score high on the Analysis of Argument Measure but fail to appeal to the readers for which it is written and therefore score low on the Persuasive Adaptiveness Measure.

For research, the measures have proven functional for projects that depend upon the evaluation of document quality. For example, Horton, Rogers, McCormick, and Austin (1991-2) employed these measures to score document quality for an experimental study comparing collaborative writing with and without computer technology. In the past, such studies have had to rely on holistic evaluation, which greatly reduces the clarity and impact of the results. Furthermore, positive correlations between the analytic and holistic scores on the sample documents evaluated for this study indicate that these analytic measures may also be useful for future research addressing what Huot has characterized as "the neglect of validity" in relationship to holistic assessment (1990, 205).

The Analysis of Argument and Persuasive Adaptiveness Measures should be used with some caution, however. As Shepherd and O'Keefe noted, "constructing an effective message is not a matter of generating just any message to fit some abstract pattern, but rather of exploiting the information available in the situation to construct

the most effective specific appeal" (1984, 151). High scores on analytic measures do not ensure success. Rather, the analytic measures serve as instruments that may sensitize writers to important rhetorical choices.

APPENDIX A: CROWN REGENT CASE

The Crown Regent is a 150-room, deluxe hotel on fashionable Michigan Avenue in Chicago, Illinois. The hotel caters to business travelers and upscale tourists. Over the past six months, the hotel's occupancy has been down by 50%. This is a surprising drop given that it occurred during the peak season, when the hotel normally operates at full capacity. The low occupancy, coupled with numerous guest complaints, led General Manager Carolyn McDonnell to believe that service at the hotel is not what it has been in the past and certainly not what it should be to compete with the growing number of upscale hotels in the city.

You are Director of Sales and Marketing at Crown Regent. Two weeks ago you met with Carolyn, who is your boss, to discuss the declining occupancy. Carolyn feels that although the staff is aware of the drop in business, they are not aware of the part they play in retaining and gaining customers. The Crown Regent staff must become more aware of customer needs and develop a new attitude about serving customers. Carolyn wants the Crown Regent staff to operate totally from a customer point of view.

Carolyn wants you and your Sales & Marketing Department to come up with recommendations for developing a strong sales-and-marketing orientation among the staff. Employees need to realize they have the opportunity within their own jobs to sell the hotel. With this in mind, she asked you to review and work with all hotel departments: Front Office (Reservations, Front Desk, Bell Staff, Concierge); Housekeeping; Food and Beverage; Maintenance; and Sales and Marketing. She wants your recommendations in writing.

Instructions

Write a persuasive memo to your boss, Carolyn McDonnell, recommending specific actions for improving customer service. Convince Carolyn that your recommendations are valid. Make up details not included in the case. Your writing should be clear and direct as opposed to vague and official sounding. You have 50 minutes. Work for the entire 50 minutes. No time is allowed for recopying. Edit your memo for correctness. Don't worry about erasures or crossouts; however, illegible writing will impact your score. Please return the CROWN REGENT case with your memo.

Appendix B: Crown Regent Sample Memorandum #33

CROWN REGENT

TO: Carolyn McDonnell, General Manager
FROM: [writer name removed]
DATE: 9 Sept. 1989
SUBJECT: Staff Sales and Marketing Orientation - Recommendations for Action

After discussions with the supervisors concerned (Front Office, Housekeeping, Food and Beverage, Maintenance), plus discussions with selected employees, and our own internal review and audit, we have developed specific recommendations for a course of action to reverse our declining occupancy.

1) Re-establish the philosophies that the customer is always right, and service with a smile. Crown Regent is not a "No-Tell motel," but an up-market hotel. We must act like one; our clients expect it. If they want a 6-course meal at 4am, so be it. If they want tickets to a sold-out opera, get them. If necessary, management should be called in, and their influence used, if not their tickets. We all must work to our philosophies from top to bottom.

2) Redesign staff uniforms. The last revision was in 1959. Our uniforms look dated, they are too hard to clean, and they are not exactly comfortable. We propose calling in a designer to review our design, and develop new uniforms which are stylish, compatible with our image, easy to maintain and clean, and comfortable. It's hard to smile in wool trousers during a Chicago summer when you're working the curb. As necessary, the company must subsidize the cost of uniforms for the staff.

3) Re-training of all staff, on a rotating basis. Staff members must be kept up to date on all our systems, and they must be fast and efficient with them. Nothing impresses clients more than fast, efficient service.

4) Perform a market survey, including sending our own people undercover to our competition, not only in Chicago, but in other selected cities in the U.S. This will give us ideas on how to improve our service, and give us a measure to compare ourselves against.

5) Profit-sharing for line employees. Our employees must be made to feel that they have a direct stake in this hotel. And that they, rather than management, will get the first cut of the pie.

6) Staff orientation meetings, on company time to the extent possible. Upper management must be in attendance at all meetings, even during the graveyard shift. If the hotel is not important enough for us to lose a little sleep for, it won't be important enough for our employees to expend a little effort for. In this meeting, we must orient them to our situation, inform them of our plans for improvement, and tell them what we expect from them. We must not hold back how serious the situation is, but we must promise them to stay the course. Lay-offs must be our last resort.

Upon your approval, my department will begin developing detailed plans for the above activities, with a goal for have the staff meeting 6 weeks from approval.

Appendix C: Crown Regent Sample Memorandum #35

TO: Carolyn McDonnell, General Manager
FROM: [writer name removed], Director of Sales and Marketing
SUBJECT: Recommendation to improving occupancy

In response to our previous talk about declining occupancy I asked my staff to scrutinize our operation and to find the reasons for our shrinking business. It was not easy for our staff to check all of our operations within the limited two-week's period, however, I recommend that services in the Reservations desk and restaurants should be improved.

With regard to our services in the Reservations desk, it has been said that our attitude toward an individual visitor is too cool. One reason for this reputation is our reservation system. Namely, the desk would not accept any reservation made by a new individual more than 30 days prior to the actual stay, and this desk is required to hold 50 rooms in order to meet one-week prior requirement from our loyal customers listed in our Repeaters' list. I recommend that we should remove the former 30 days restriction and decrease the number of the latter reserved rooms for repeaters to 30 rooms. I also recommend that this decrease in number of rooms for loyal customers should be compensate for

Appendix D: Crown Regent Sample Memorandum #52

TO: Carolyn McDonnell, General Manager
FROM: [writer name removed], Director of Sales and Marketing
DATE: 08 September, 1989
SUBJECT: Customer Satisfaction Awareness Proposals

As you requested in our meeting two weeks ago, I have given serious consideration to the subject of increasing customer satisfaction awareness amongst our employees. This memo outlines the programs I would like to implement in order to improve our customer service. A summary of the proposals is included at the end of this memo.

Customer Service Hotline

We will establish a nationwide toll-free hotline to receive customer feedback and complaints. I prefer a phone system to the current comment card system because it is frequently more convenient for customers to contact us after they have rushed out of the hotel on business. The comment cards will be retained, but they will no longer be processed by my staff.

The key elements of the hotline are training and staffing. Every full time employee will receive training from my staff on how to handle incoming calls. Quick reference cards will be provided for most "procedural" issues, to enhance employee memory and usability. The training will be approximately two hours for each employee.

The hotline will be staffed by every employee, on a rotating basis. The hotline will be open 24 hours, seven days per week. Based on a similar program at a comparably sized facility in Boston, this should require one employee for each shift. Overload on the first shift will be handled by the customer relations manager in my staff. He will also serve as the scheduling coordinator for the hotline and comment cards.

A key factor to the success of this program will be in departmental scheduling. Given our current staff, each employee will monitor the hotline only once per year. By careful schedule rotation, each department will get regular and continuous exposure to the program. This program association with peers and workmates will raise the entire organizations awareness on a continuous basis. The operators will be empowered to offer discounts on future visits, thus giving us an opportunity to win back a discontent customer.

Comment Cards

The comment cards will be reviewed and entered to a database by hotline operators. This will allow my customer relations manager to concentrate on scheduling the hotline (a 10% job) and following up with customers. This additional follow-up will be a shared activity with the hotline operators. By coordinating the two activities, the operator benefits from a "training program" of one half day with our customer relations expert. He has agreed to begin working a split shift to maximize his in-house impact.

Staff Reaction

I have reviewed the elements of this proposal with all of the department heads. At first a concern was raised on the manpower required. Each staff head did agree that with the rotational schedule they could absorb the workload. After scheduling was addressed, each staff head seems to have warmly embraced the idea. Dave Clark, the head chef, has even expressed personal interest in helping to develop the training.

Competitive Programs

I previously mentioned a similar program in Boston. Steve Wertz, a long time associate of mine, developed the concept for the Hyatt Cambridge. Since its inception they have seen a 50% reduction in

negative comments during a period when the occupancy rose from 57% to 89%. Also, they have an astounding 75% redemption rate on 35% (list price) discount coupons. Steve has already been asked to broaden the program to all Hyatt Northeastern Region facilities. Interestingly, the profit margin on the discount coupons alone pays for all system overhead and staffing for one and a half shifts.

Summary

A toll free hotline will provide increased customer satisfaction and by using existing staff on a rotational basis employee awareness will be tremendously improved. All current staff heads have agreed that, based on rotational scheduling, the workload can be absorbed without additional headcount. The Hyatt Cambridge has seen tremendous results with a similar program which is now being implemented in all Northeastern Region Hyatt facilities.

I look forward to our September 25 staff meeting where this proposal will be discussed with the entire staff present.

cc: All Staff Heads

REFERENCES

Breland, Hunter M., and Robert J. Jones. "Perceptions of Writing Skills." Written Communication 1.1 (1984): 101-119.

Brown, Penelope, and Stephen Levinson. "Universals in Language Usage: Politeness Phenomena." Questions and Politeness. Ed. Esther N. Goody. Cambridge: Cambridge University Press, 1978. 56-232.

Clark, Ruth Anne, and Jesse G. Delia. "The Development of Functional Persuasive Skills in Childhood and Early Adolescence." Child Development 47 (1976): 1008-14.

Clark, Ruth Anne, and Jesse G. Delia. "Cognitive Complexity, Social Perspective-Taking and Functional Persuasive Skills in Second- to Ninth-Grade Children." Human Communication Research (1977): 128-34.

Connor, Ulla. "Linguistic/Rhetorical Measures for International Persuasive Student Writing." Indiana University in Indianapolis Working Paper, 1988.

Cooper, Charles R., and Lee Odell. Evaluating Writing: Describing, Measuring, Judging. Buffalo: State University of New York, 1977.

Delia, Jesse G., Susan L. Kline, and Brant R. Burleson. "The Development of Persuasive Communication in Kindergartners Through Twelfth-Graders." Communication Monographs 46 (1979): 241-56.

Diederich, Paul. Measuring Growth in English. Urbana, IL: National Council of Teachers of English, 1974.

Driskill, Linda. "Understanding the Writing Context in Organizations." Writing in the Business Professions. Ed. Myra Kogen. Urbana, IL: National Council of Teachers of English, 1989. 125-145.

Fleiss, Joseph L. Statistical Methods for Rates and Proportions. 2nd ed. New York: Wiley, 1981.

Hoetker, James. "Essay Examination Topics and Students' Writing." College Composition and Communication 33 (1982): 377-392.

Horton, Marjorie, Priscilla Rogers, Michael McCormick, and Laurel Austin. "Exploring the Impact of Face-to-face Collaborative Technology on Group Writing." Journal of Management Information Systems 8.3 (Winter 1991-2): 27-48.

Huot, Brian. "Reliability, Validity, and Holistic Scoring: What We Know and What We Need to Know." College Composition and Communication 41.2 (1990): 201-213.

Irmscher, William F. Teaching Expository Writing. New York: Holt, Rinehart and Winston, 1979.

Liebetrau, Albert M. Measures of Association. Newbury Park, CA: Sage, 1983. Vol. 32 of Quantitative Applications in the Social Sciences.

Locker, Kitty O. Business and Administrative Communication. 2nd ed. Homewood, IL: Irwin, 1992.

Locker, Kitty O., and Michael L. Keene. "Using Toulmin Logic in Business and Technical Writing Classes." Technical and Business Communication in Two-Year Programs. Ed. W. Keats Sparrow and Nell Ann Pickett. Urbana, IL: National Council of Teachers of English, 1983. 103-110.

Murphy, Herta A., and Herbert W. Hildebrandt. Effective Business Communications. 6th ed. New York: McGraw-Hill, 1991.

Northey, Margot. "The Need for Writing Skill in Accounting Firms." Management Communication Quarterly 3.4 (1990): 474-495.

Paradis, James, David Dobrin, and Richard Miller. "Writing at Exxon ITD: Notes on the Writing Environment of an R&D Organization." Writing in Non-Academic Settings. Ed. Lee Odell and Dixie Goswami. New York: Guilford, 1985. 281-307.

Perkins, Kyle. "On the Use of Composition Scoring Techniques, Objective Measures, and Objective Tests to Evaluate ESL Writing Ability." TESOL Quarterly 17.4 (1983).

Purves, Alan C., Thomas P. Gorman, and Sauli Takala. "The Development of Scoring Scheme and Scales." The IEA's Study of Written Composition I: The International Writing Tasks and Scoring Scales. International Studies in Educational Achievement, v. 5. Eds. Thomas P. Gorman, Alan C. Purves, and R. E. Degenhart. New York: Pergamon, 1988.

Rymer, Jone. "Scoring Methods." Presentation at the Association for Business Communication International Conference. Las Vegas, 1989.

Shepherd, Gregory J., and Barbara J. O'Keefe. "The Relationship Between the Developmental Level of Persuasive Strategies and Their Effectiveness." Central States Speech Journal 35 (Fall 1984): 137-152.

Toulmin, S. E. The Uses of Argument. Cambridge: Cambridge University Press, 1958.

White, Edward M. Teaching and Assessing Writing. San Francisco: Jossey-Bass, 1985.