ENGN UMR0132
RSD-TR-15-85

COORDINATED RESEARCH IN ROBOTICS AND INTEGRATED MANUFACTURING

FINAL REPORT
August 1, 1984 - July 31, 1985

Co-Principal Investigators
D. E. Atkins
R. A. Volz

Contract F49620-82-C-0089

Robot Systems Division
Center for Research on Integrated Manufacturing
College of Engineering
The University of Michigan
Ann Arbor, Michigan 48109 USA

REPORT DOCUMENTATION PAGE

Report Security Classification: Unclassified
Distribution/Availability: Unlimited distribution
Performing Organization Report Number: RSD-TR-15-85
Performing Organization: College of Engineering, The University of Michigan, Ann Arbor, Michigan 48106
Monitoring Organization: USAF, AFSC, Air Force Office of Scientific Research, Building 410, Bolling AFB, D.C. 20332
Procurement Instrument Identification Number: F49620-82-C-0089
Source of Funding: Program Element 61102F, Project/Task 2306/A3
Title: Coordinated Research in Robotics and Integrated Manufacturing
Personal Authors: D. E. Atkins, R. A. Volz
Type of Report: Final
Date of Report: November 1985
Page Count: 106
Subject Terms: Manufacturing Science, Robotics, Artificial Intelligence, Sensors, Robot Control, Machine Vision, Special Purpose Processors, Motion Planning

Abstract: The research procured under this contract is oriented toward the understanding and development of the flexible robot-based manufacturing cells or "islands" which will increasingly become basic blocks for the building of modern parts production and assembly facilities. Present work spans a hierarchy of sub-systems oriented toward the development and integration of high performance manipulators into flexible manufacturing cells. The following items have been accomplished during the past year.
* Theoretical conditions fully characterizing the class of control laws which decompose and allow simplified robot controls have been developed.
* A robust nonlinear controller was developed which allows tracking control with the use of a general error measure and tight bounds on the asymptotic error.
* Simulation studies have predicted that new control designs will significantly lessen the detrimental effects of flexible robot arms.


SUBJECT TERMS (continued): Requirements Analysis, Robot Programming, Optical Processing, Manufacturing Database, Manufacturing Cell Architecture

ABSTRACT (continued):
* A general mathematical treatment of the effect of force feedback and load distribution on the local compliance of a manipulator and end effector has been developed.
* Optimal control solutions have been obtained for simultaneous force/motion control of manipulator end effectors and applied to a deburring operation.
* The trajectory planning techniques have been extended to include jerk constraints and dynamic uncertainties.
* Two new approximate methods have been developed for generating minimum-time geometric paths for robots in the absence of obstacles.
* Our 8x8 tactile array is capable of resolution exceeding 8 bits, and the operating range can be varied by several orders of magnitude.
* A prototype 32 element imager has been fabricated; 2-3 K temperature difference resolution in scanning mode and 0.01 K in nonscanning mode are expected.
* A new technique for recovering structure from motion using long sequences of frames (not less than five and perhaps up to 30) allows the detection of motion events (e.g. change in velocity).
* A strategy has been developed for reconstructing and recognizing an object and its pose from multiple views (important distinguishing features, e.g. a set screw in a fan blade, may not be initially visible).
* The optimal salient boundary segment matching technique installed on the Apollo workstation has been favorably benchmarked against other techniques and delivered to the sponsor.
* Two new techniques for efficiently recognizing 3-dimensional parts have been designed; performance tests are currently underway.
* A new incoherent light filtering technique has been developed utilizing a pair of grating diffractions. The quality of the resulting image compares very favorably with images one would expect from coherent light filtering. The new method has a number of advantages over previous incoherent light techniques.
* A rapid prototyping vision system has been designed and the major initial components put in place.
* Run-time interference testing between an object to be grasped and nearby obstacles has been completed and added to the basic experimental model-driven system integrating a robot, vision system and CAD database.
* The grip determination software has been placed in exportable form and is available.
* A new grip position selection technique has been developed which eliminates the constant pressure approximation of the original method.
* The graphical programming system for robots has been extended to handle binary sensors and actuators.
* An analysis of communication requirements among different parts of a distributed sensor-based multi-robot system has yielded a set of communication primitives.
* The heuristic problem solving technique developed during the first two years of the contract has been applied to five well known problems. The efficiency of the general results compares favorably with the individually tailored techniques.

AUTHORS (continued)

PROFESSORS:
Elmer G. Gilbert
Robert M. Howe
Keki B. Irani
Emmett N. Leith
Arch W. Naylor
N. Harris McClamroch
Kensall D. Wise

ASSOCIATE PROFESSORS:
Ramesh Jain
Trevor N. Mudge
Anthony C. Woo

ASSISTANT PROFESSORS:
A. Galip Ulsoy

COORDINATED RESEARCH IN ROBOTICS AND INTEGRATED MANUFACTURING FINAL REPORT August 1, 1984 - July 31, 1985 Co-Principal Investigators D. E. Atkins R. A. Volz Contract F49620-82-C-0089

EXECUTIVE SUMMARY

FINAL REPORT ON COORDINATED RESEARCH IN ROBOTICS AND INTEGRATED MANUFACTURING
THE UNIVERSITY OF MICHIGAN

Background

In August of 1982, the Air Force Office of Scientific Research awarded a contract to the Center for Robotics and Integrated Manufacturing (CRIM) at the University of Michigan as a "Center of Excellence" in manufacturing science. The work supported has been principally conducted in the Robot Systems Division of CRIM, with a relatively smaller level of effort being supported within the Management Systems Division. This is the third annual report on the research activities of CRIM supported under this AFOSR contract.

It should also be reported that there have been synergistic and leveraging effects of the contract which have gone far beyond the research directly supported. The award of this contract came at a time when the College of Engineering had made a commitment to focus its resources in certain key areas based upon its existing strengths and build world class research activities in these areas. The contract has been most helpful in indirect, as well as direct, ways in building toward these goals. The existence of this award played a role in the establishment of major research partnerships with industrial organizations such as General Motors, IBM, and General Dynamics. It has also helped us receive a major gift to assist in the establishment of a Computer Aided Engineering Network (CAEN), a prototype of the next generation distributed computing/communication environment. In the conviction that it is the "wave of the future", the College is also vigorously developing a powerful engineering computer workstation oriented environment for all of its students, faculty and staff; CAEN is the delivery mechanism for these workstations. Further, it has helped us in liaison activities with the Industrial Technology Institute (such as joint recruiting and a joint seminar series). These related accomplishments have had a significant positive feedback on the AFOSR supported work during this past year, and the effect will continue to grow in the future.

Overview of AFOSR Sponsored Component of CRIM

The research program has been broadly oriented toward the understanding and development of intelligent, flexible robot systems for manufacturing application. The research topics being pursued range from highly sophisticated and accurate control algorithms for robot arm control, new types of sensors, and analysis and use of advanced sensor information, to higher level languages for robot control, integration of robot systems with CAD databases, and heuristic problem solving techniques. Figure E.1 shows how individual subproject areas relate to the overall project activity. This breadth of activity has been achieved through the

participation of a substantial number of faculty supervising students in a wide range of areas.

Figure E.1. AFOSR supported task areas: sensor subsystems; high-performance manipulators (structure and control); special purpose computers and languages; robot systems; problem solving; robot based manufacturing cell; integrated factory information system.

Third Year Activities

The principal activities of the third year of the project can be classified into three categories:
* There has been considerable progress in each of the research areas under investigation.
* The research facilities available have seen continued growth during the past year, and plans have been laid for major new developments during the coming year.
* The graduate student education activities have begun to bear fruit. A significant number of Ph.D's were graduated during the past year and several more are anticipated during the next six months.
The following sections will review these accomplishments briefly.

Facility Development

Our approach to research in sensors, robotics and manufacturing is heavily based upon experimental demonstration and simulation of the ideas developed. A strong laboratory environment is thus essential to the success of our endeavors. Of particular importance is the availability of adequate computational resources, both hardware and software, since advances in our areas of research are closely linked with the computational capabilities available.

The Robotics Research Laboratory facilities have seen continued growth during this past year. Several Apollo nodes have been added and the Laboratory's configuration now includes one DN660, three DN320's and a DSP-80. A second DSP-80 will be added shortly. The Electrical Engineering and Computer Science Department has recently taken delivery of a hypercube processing system with 64 processors in the initial configuration, expandable to 1024. This system will be networked to the CIPRNET and Robotics facility via an Ethernet.

We also have received a gift of a General Electric robot and Optimation vision system. The GE robot will be used for welding applications, together with a unique linear temperature sensing array which has been developed under AFOSR sponsorship. The temperature sensor will allow the temperature distribution across the weld puddle to be determined in arc welding applications. This work will be begun during the next year.

A new computer vision facility has been started based around the Apollo systems. CCD cameras have been interfaced to the Apollo system. Both Datacube image conversion and Mercury image processing boards are being interfaced to the Apollos to provide rapid image processing. Our goal is to develop a rapid prototyping system for the development of new vision algorithms.

Research Accomplishments

The following paragraphs briefly overview the principal accomplishments in each of the major research areas.

Simulation and Control of High-Performance Manipulators

This year of the project has seen the completion of some of the subprojects, the use of the results from some areas in others, and a shift in emphasis to activities essential for assembly operations. Specific research tasks investigated were the following:
* Dynamic simulation of robots
* Development of nonlinear decoupling theory for robot control
* Analysis and control of manipulators with compliance and force constraints
* Development of path/trajectory planning algorithms.

Achievements during the third year include:
* The simulation of a 6 link robot on the AD-10 computer has been augmented to include graphic visualization of the robot, and the whole simulation reconfigured to execute on an AD-100 computer. Simulations up to 14 times faster than real time have been achieved.
* Theoretical conditions fully characterizing the class of control laws which decompose and allow simplified robot controls have been developed.
* A robust nonlinear controller was developed which allows tracking control with the use of a general error measure and tight bounds on the asymptotic error.
* The small laboratory robot built last year for use in control studies has been interfaced to an IBM personal computer and fully instrumented with accelerometers and strain gages.
* Simulation studies have predicted that the appropriate modification to the control software will significantly lessen the detrimental effects of flexible robot arms.
* A general mathematical treatment of the effect of force feedback and load distribution on the local compliance of a manipulator and end effector has been developed.
* The controller design tradeoff between speed of response and end effector compliance has been explicated.
* Optimal control solutions have been obtained for simultaneous force/motion control of manipulator end effectors, and applied to a deburring operation.
* A discrete trajectory planning scheme for straight line paths with smoothness and torque constraints has been developed.
* The trajectory planning techniques based upon the phase plane and dynamic programming techniques developed last year have been extended to include jerk constraints and dynamic uncertainties.
* Two new approximate methods have been developed for generating minimum-time geometric paths for robots in the absence of obstacles.
* The path finding problem has been formulated as an optimal control problem with minimum distance constraints between the robot and obstacles, and conditions determined under which solutions exist. The two dimensional problems solved last year have been extended to 3-D. Encouraging studies on efficient computational techniques for 3-D objects have been carried out.

Sensor and Vision Subsystems

Among the many challenges confronting the realization of advanced robots, the need for improved sensor subsystems is one of the most critical. With increasing levels of automation, greater emphasis must be placed on sensor

accuracy and reliability, as well as simple existence. Furthermore, great effort is yet required in analysis techniques for processing sensor derived data.

Specific topics being investigated in sensor and vision subsystems include:
* High performance tactile imagers
* Silicon thermal imagers for process control or safety applications
* High performance edge detection
* Dynamic scene analysis
* 3-D surface reconstruction via vision
* Optimal salient feature matching for (occluded) object recognition
* Application of optical processing to robotic vision.

Specific achievements during this reporting period are:
* Several 8x8 tactile arrays have been fabricated in our laboratory with readout circuitry implemented with commercial integrated circuits.
* The tactile array is capable of resolution and reproducibility exceeding 8 bits, and the operating range can be varied by several orders of magnitude.
* A prototype 32 element imager has been fabricated; 2-3 K temperature difference resolution in scanning mode and 0.01 K in nonscanning mode are expected.
* The thermal array will operate over a wide ambient temperature range with broadband spectral response (< 40 µm) and a 10^6:1 dynamic range.
* A new technique for recovering structure from motion using long sequences of frames (not less than five and perhaps up to 30) allows the detection of motion events (e.g. change in velocity).
* A strategy has been developed for reconstructing and recognizing an object and its pose from multiple views (important distinguishing features, e.g. a set screw in a fan blade, may not be initially visible).
* The optimal salient boundary segment matching technique installed on the Apollo workstation has been favorably benchmarked against other techniques and delivered to the sponsor.
* Two new techniques for efficiently recognizing 3-dimensional parts have been designed; performance tests are currently underway.
* A new incoherent light filtering technique has been developed utilizing a pair of grating diffractions. The quality of the resulting image compares very favorably with images one would expect from coherent light filtering. The new method has a number of advantages over previous incoherent light techniques.
* A rapid prototyping vision system has been designed and the major initial components put in place.

Robot-Based Manufacturing Cell

The development of new sensors and sensor data processing techniques brings with it the need for greatly increased computational capabilities in the robot systems upon which they will be installed. Both the hardware and software systems necessary to meet these requirements will have greatly increased complexity. Future systems will certainly use multi-processing, with a number of the processors being special purpose devices designed to achieve the needed processing speed, and the software systems necessary to operate these systems will be very large and complex in comparison to today's systems. To address these issues requires a combination of systems integration, distributed computing systems, robot programming, planning and artificial intelligence research.

Specific task areas being investigated include:
* Multiprocessor computer architectures to support complex configurations of equipment in manufacturing cells
* The use of object based computer systems for robotics
* Graphical programming of robots
* Algorithms for extracting information from CAD/CAE databases to aid in planning and programming robot and sensor actions
* Generalized problem-solving using heuristics.

Accomplishments during the past year include:
* Run-time interference testing between an object to be grasped and nearby obstacles has been completed and added to the basic experimental model-driven system integrating a robot, vision system and CAD database via object oriented hardware (the Intel iAPX 432) and software (Ada).
* The grip determination software has been placed in exportable form and is available.
* A new grip position selection technique has been developed which eliminates the constant pressure approximation of the original method.
* The graphical programming system for robots has been extended to handle binary sensors and actuators.
* A rapid prototyping vision system based upon the Apollo computer nodes has been begun. Special purpose image processing boards will be used to provide rapid image acquisition. Development of a software vision library is under way, and control of several laboratory robots will soon be added.
* Based upon an analysis of communication requirements among different parts of a distributed sensor-based multi-robot system, a set of communication primitives has been designed.
* The heuristic problem solving technique developed during the first two years of the contract has been applied to five well known problems. The efficiency

of the general results compares favorably with the individually tailored techniques.

Graduation, Publication and Staffing Levels

During the past year there has been a significant increase in the number of papers appearing in review journals while the number of papers presented at significant conferences has remained consistently high. Participation in national meetings and workshops has also been high. The number of students supported by AFOSR funding decreased somewhat from the first year due to lower turnover among students as project personnel stabilized. There were fewer Ph.D. graduates than expected, but a large number of doctoral students are expected to complete their degree requirements within the next six months. Summary statistics in the foregoing areas are as follows:
* 25 papers presented at significant conferences
* 14 papers appeared in review journals
* 13 papers are currently under review or have been accepted for journal publication during the coming year
* participation in 4 national meetings and workshops
* 8 Ph.D. graduates
* 2 Ph.D. graduates expected within the next six months
* 4 M.S. graduates
* Direct involvement of 33 students in AFOSR research projects.

In the category of accomplishments that have been influenced by the AFOSR contract are the following:
* 12 students have full or partial fellowship support from U.S. industry to work in our program.
* 6 students who have at least partial internal fellowship support, their own support, or teaching assistant support are participating in our program.
* 3 new members have been added to the Industrial Affiliates Program.
* Major research support has been obtained from one industrial sponsor and additional major support obtained from a current industrial sponsor.


TABLE OF CONTENTS

1. Introduction .... 1
   1.1 Report Organization .... 2
2. Facility Development .... 2
   2.1 Robotics Research Laboratory Facilities .... 2
   2.2 Computer Aided Engineering Network (CAEN) .... 5
   2.3 Construction Projects .... 6
3. Research Activities .... 6
   3.1 The Control of Mechanical Manipulators .... 7
       3.1.1 Real-Time Simulation .... 7
       3.1.2 Nonlinear Feedback Control .... 9
       3.1.3 Optimal Control Techniques to Enhance Robot Performance .... 11
             3.1.3.1 Trajectory Planning, Obstacle Avoidance, and Control of Two Robots in a Common Workspace .... 11
       3.1.4 Optimal Control Techniques to Enhance Robot Performance .... 13
             3.1.4.1 Status of the Research Effort on Path and Trajectory Planning .... 14
             3.1.4.2 Optimal Path Planning in the Presence of Obstacles .... 17
       3.1.5 Control of a Flexible Robot Arm .... 20
       3.1.6 Manipulator Compliance .... 25
             3.1.6.1 Control of Electrohydraulic Robots .... 25
             3.1.6.2 Motion and Force Control of the End Effector of a Manipulator in Contact with a Constraint .... 29

   3.2 Integrated Solid-State Sensors for Closed-Loop Robot Control .... 31
       3.2.1 A High-Performance Silicon Tactile Imager Based on a Capacitive Cell .... 31
       3.2.2 A Silicon-Based Thermal Imager .... 38
   3.3 Vision for Robots .... 43
       3.3.1 Dynamic Scene Analysis: Event Recognition and Correspondence .... 43
       3.3.2 Partially Occluded Part Recognition .... 46
       3.3.3 Applications of Optical Processing to Robotic Vision .... 49
       3.3.4 Object Recognition and World Modelling from Multiple Views .... 57
       3.3.5 Machine Vision Workbench .... 60
   3.4 Architecture of Robot Based Manufacturing Cells .... 61
       3.4.1 Object Based Experimental Robot Cell .... 61
             3.4.1.1 Status of Research on Integrated Multi-Robot Systems .... 64
       3.4.2 Automatic Determination of Gripping Positions .... 69
       3.4.3 Graphical Programming of the Geometric and Logic Aspects of a Robotic Cell .... 74
   3.5 Methodology for Solving Problems .... 83
4. Publications .... 85
   4.1 Journal Publication .... 85
   4.2 Under Review .... 86
   4.3 In Preparation .... 87
   4.4 Conference Presentations .... 87
   4.5 Technical Reports .... 89
5. Personnel .... 91
   5.1 Faculty .... 91
   5.2 Students .... 92
   5.3 Degrees Awarded .... 93
   5.4 Graduate Student Placement .... 94
   5.5 Permanent Staff .... 95

6. Coupling Activities .... 97
   6.1 Intra Project Interactions .... 97
   6.2 University Interactions .... 97
   6.3 Interactions with Industry .... 99
   6.4 DoD Interaction .... 101
7. Distribution List .... 103


FINAL REPORT ON COORDINATED RESEARCH IN ROBOTICS AND INTEGRATED MANUFACTURING
THE UNIVERSITY OF MICHIGAN

1. Introduction

Manufacturing Science is not a single narrow discipline. Rather, it is a synthesis of techniques from a broad range of areas, from materials and processes through computer science. The treatment of manufacturing science by researchers and academicians must be correspondingly broad. Indeed, some problems will only be solved by bringing together expertise from multiple areas. Moreover, the applied nature of the subject argues for a tight interaction among research, education and applications, for it is a synergism among these three that is necessary to achieve the technological advance and training so badly needed by U.S. industry. It is in this spirit that the work on this contract is being conducted.

There are three goals to the project:
* To build a strong research environment in manufacturing science,
* To train graduate students, particularly at the Ph.D. level, and
* To carry out research in manufacturing science.

Significant advancements have been made toward all three goals during the past year. The research environment has grown in several ways. First, the installation of research facilities begun during the first two years of the contract has been completed and augmented by a number of additional new acquisitions. Second, we have successfully attracted new faculty in machine vision and in kinematics and dynamics. Third, our growing research environment has enabled us to attract a number of new contracts and grants. The Robotics Research Laboratory is now operating at an annual level of approximately $2,750,000 in sponsored research, and the total manufacturing research level in the College of Engineering has exceeded $5,000,000 per year.

The typical time required for a student to complete a Ph.D. after receiving a master's degree is in the vicinity of three to four years. It thus can be expected to take a new program this long to reach a steady state level of Ph.D. production. The number of graduate students supported in the Robotics Research Laboratory has grown to approximately 60, of whom approximately 1/3 have received Air Force support, with an additional 10 or 15 students supporting themselves via teaching assistantships or other means. An important milestone is the growth in the number of Ph.D. graduates. There have been 8 Ph.D.'s granted to students working in the Robotics Laboratory during the past year, with 2 more expected within the next six months.

The focus of the research carried out lies broadly in the area of robotics, encompassing research topics ranging from highly sophisticated and accurate control algorithms for arm motion, development of new types of sensors, use of

advanced sensor information, to higher level languages for robot control, and the integration of robot systems with CAD databases. Figure 1.1 shows how individual subproject areas relate to the overall project activity. The nodes in this figure represent individual areas of research and the arcs represent the flow of results from one area to another. For example, a tactile sensor developed in our Solid-State Electronics Laboratory is currently being installed on one of our robots.

Figure 1.1. AFOSR supported task areas: sensor subsystems; high-performance manipulators (structure and control); special purpose computers and languages; robot systems; problem solving; robot based manufacturing cell; integrated factory information system.

1.1. Report Organization

Section 2 describes the additions and modifications to the computational laboratory facilities available within the Division. Section 3 reviews all of the research activities supported by the contract. Publications and conference papers are listed in Section 4. Personnel supported by the project are given in Section 5, and coupling activities among participants in the project and with other organizations are presented in Section 6.

2. Facility Development

The Robotics Research Laboratory facilities have seen continued growth during this past year. Most of the computer installations begun during the two

years have been completed, new equipment has been obtained from industry, and several new local construction projects have been begun. Of particular importance to the future research work in the Laboratory is the development of the College of Engineering Computer Aided Engineering Network, which has added major facilities to the Laboratory and allows network access to even greater facilities throughout the College. Section 2.1 will review the current status of the equipment in the Robotics Research Laboratory, and section 2.2 will describe the College of Engineering Computer Aided Engineering Network (CAEN) and its relation to the Robotics Research Laboratory.

2.1. Robotics Research Laboratory Facilities

Figure 2.1 shows the major equipment in the Robotics Research Laboratory and its relation to the other principal facilities used by researchers in the Laboratory. The VAX computers and the DeAnza image processing system have been the mainstay of the Laboratory's computational facilities, and the two PUMA, ASEA and IBM robots have formed the core of the experimental robot facilities. This past year has seen significant additions to both the computational and robot facilities.

Several Apollo nodes have been added during the past year and the Laboratory's configuration now includes one DN660, three DN320's and a DSP-80. A second DSP-80 will be added shortly. In addition, there are two Apollo workstations which are shared between researchers in the Laboratory and other researchers in Mechanical and Industrial Engineering. The DN660 is a full color system with hardware floating point and a bit mapped screen, and has the processing power of a VAX 780. The DN320's are similar, but slower and have black and white screens. The DSP-80's are similar to the DN320's but have no keyboard or screen. The DSP-80's will be used to control interfaces to various laboratory equipment and to operate an Ethernet interface which will allow fast communications to the VAX systems in the laboratory. The computational power available in the Apollos now exceeds that available in the VAX system, though the Apollos do not yet have all of the standard service facilities available on the VAXes. The trend toward powerful workstations is clear, however, and the future will be based more upon networked workstations than upon centralized computer facilities.

Though not directly part of the equipment of the Robotics Research Laboratory, the Electrical Engineering and Computer Science Department has recently taken delivery of a hypercube processing system with 64 processors in the initial configuration, expandable to 1024. This system will be networked to the CIPRNET and Robotics facility via an Ethernet. The department has also received a gift of a Data General MV1000 computer, upon which the Rolm Ada compiler is being installed. Research work on the integrated part sorting cell, the code for which was written in Ada and which was performed under AFOSR support, was instrumental in the negotiation

of this gift. Also partially due to research performed under AFOSR sponsorship, we have received a gift of a General Electric robot and Optimation vision system. The GE robot will be used for welding applications, together with a unique linear temperature sensing array which has been developed under AFOSR sponsorship. The temperature sensor will allow the temperature distribution across the weld puddle to be determined in arc welding applications. This temperature distribution is believed to be a critical parameter in the control of arc welding. This work will be begun during the next year.

Figure 2.1. Major equipment in the Robotics Research Laboratory and its relation to other facilities (VAX and image processing computers, LSI-11/23 and Microbots, iAPX 432 attached processor, digitizing equipment, Fairchild CCD and G.E. cameras, Apollo DN660, DN320 and DSP-80 nodes on an Apollo ring, PUMA, IBM and G.E. robots, Optimation vision system, and the hypercube machine); AFOSR purchases and industrial gifts are distinguished.

The introduction of powerful workstations, such as the Apollo, with bit-mapped displays and much improved price/performance ratios over current mid-sized computers opens new potentials for machine vision systems. In order to explore these potentials, a new computer vision facility has been started based around the Apollo systems. CCD cameras have been interfaced to the Apollo system. Both Datacube image conversion and Mercury image processing boards are being interfaced to the Apollos to provide rapid image processing. In conjunction with this, an extensive set of computer vision software will be developed on the Apollo systems. Our goal is to develop a rapid prototyping system for the development of new vision algorithms. Also, our PUMA and ASEA robots are being interfaced to Apollos to provide a more complete workstation, and the tactile sensor built in our Microelectronic Laboratory will be mounted on the robot hands.

2.2. Computer Aided Engineering Network (CAEN)

The College of Engineering is making the integration of significant engineering computer workstations into all phases of its activities one of its top priorities. Funded by student fee assessments, open clusters of workstations have been placed in all areas of the campus frequented by engineering students. Any student registered in engineering may use the clusters for whatever purpose he/she wishes. All clusters are being integrated into the campus wide computer network which will provide access from any workstation to any major computer facility on the campus. With funding provided by a large gift from General Motors Corporation, workstations have been provided to all major research laboratories in the College. In addition, each faculty member in the College has been given a sizable allocation of funds from which he/she may purchase a workstation for his/her office. Three levels of workstations are being acquired, as appropriate for each individual need: IBM XTs, Apple Macintoshes, and various levels of Apollo workstations.

The College is dedicated to upgrading the CAEN as necessary to keep state-of-the-art workstations available to all faculty and students, and to make them available in sufficient quantity that computing resources are not a bottleneck for general student and research computations. The current goals are to have the top level workstations of the 4M category (Million instructions per second, Megabyte of memory, Megapixel display, and Megabit per second communications).

The development of CAEN has been important to the Robotics activity in a number of ways. First, it has significantly added to the computational resources of the laboratory in a direct way. Second, since the research students supported under the contract have access to workstations in the open clusters as well, and these are being networked together, they have been able to extend the domain in which they do their computations far beyond the Robotics Research Laboratory and yet maintain communication with the facilities within the laboratory. Third, the support staff of the CAEN will be acquiring and developing a large set of generally useful software which will be readily available to the research

laboratory, e.g. the TeX text processing system, graphics software, etc.

2.3. Construction Projects

One of the principal activities to be emphasized in our future work is machine vision. Two new construction projects were begun during the year, and a major project is planned for the next year, based upon the Apollo workstations mentioned above, to support this area.

Three dimensional information and multiple images with different spatial relations between the camera and the objects being viewed are critical types of sensor data required for several of the research projects in progress. To provide 3D data, a laser light striping source is being built to work with one of our existing area CCD cameras. The camera output will be connected to a Colorado Video frame grabber, which is in turn attached to one of the VAX computers.

A servo controlled camera system is being built to allow a variety of multiple image sequences to be obtained. The camera will be under the computer control of one of the VAX computers, and will be capable of movement in 6 dimensions. Initially, it is being built around the pair of PUMA robots. The project was started late in the contract year and is expected to be completed around the middle of the next contract year.

The introduction of powerful workstations, such as the Apollo, with bit-mapped displays and much improved price/performance ratios over current mid-sized computers opens new potentials for machine vision systems. In order to explore these potentials, a new computer vision facility will be started during the next year based around the Apollo systems. Initially, one of our area CCD cameras will be interfaced to the Apollo system to provide data acquisition. Both multibus connection and possible direct infusion of the data into the screen buffer will be investigated. In conjunction with this, an extensive set of computer vision software will be developed on the Apollo systems. Our goal is to develop a rapid prototyping system for the development of new vision algorithms.

3. Research Activities

The research objectives during the past year were the development of new techniques and methodologies in robotics and manufacturing databases to enhance the capabilities of a broad range of industrial production processes. In robotics, the activity focused on the development of intelligent sensor-based robots whose operation is driven by data contained in CAD databases. Specific subprojects involve sophisticated robot control and simulation, new sensor development, techniques for several aspects of machine vision, special computer architectures for robotics, the use of CAD to drive robot operations, programming languages for robotics, and artificial intelligence problem solving techniques. The database research had two principal components: the distribution of the data itself across distributed nodes, and the development of tools to allow shop

floor users to have easy-to-use access to the data they needed to perform their jobs. The following sections describe the progress in each of these areas during the past year.

3.1. The Control of Mechanical Manipulators

Current manipulator applications such as material handling (pick and place), spray painting, spot/arc welding and machine loading/unloading place only moderate demands on control technology. Future tasks such as sensor driven assembly and use of manipulators as part of an industrial process operation require that much more attention be given to manipulator structure, dynamics and control. Absolute accuracy rather than repeatability will become one of the key performance measures. The current treatment of each joint as a simple servomechanism is inadequate because it neglects gravitational loading of the links, the dynamic interactions among the links, all sources of mechanical compliance and the effects of external loads. The result is reduced servo response speed and damping, which in turn limits the precision and speed of the end-effector. Significant gains in manipulator performance require the consideration of structural dynamic models, sophisticated control techniques, and the exploitation of advanced computer technology. These problems have been addressed on several different, but coordinated fronts, including
* Accurate modeling and simulation of robot arm kinematics and dynamics: different formulations of rigid-body equations, vibrational dynamics, discontinuous nonlinearities, actuator effects, development of special software packages, utilization of ultra-high-speed computers, graphics display techniques.
* Nonlinear multivariable control: decoupling methods, stability improvement, treatment of discontinuous nonlinearities.
* Optimization of robot arm motion: constraints on control and state variables, different optimization criteria, numerical methods, practical implications, good suboptimal motions, obstacle avoidance issues.
* Mechanical compliance: characterization of the sources of mechanical compliance and control of their effects on manipulator motion control.
* Manipulator force control: insertion tasks and associated force control issues; formulation of force control problems using surface constraints.

3.1.1. Real-Time Simulation

Investigator: Professor R. M. Howe

Research Objective

The objective of the research is to develop a computer simulation of a six-degree-of-freedom robot arm that can be executed on AD-10 or AD-100 computers at faster than real time.
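For context, the quantities timed in the status discussion below correspond to the standard rigid-body formulation of the arm dynamics. The sketch that follows uses generic notation and is an assumed summary of the model structure, not a transcription of the equations actually coded on the AD-10/AD-100.

```latex
% Rigid-body arm model assumed in this sketch (generic notation):
%   q              six joint positions
%   M(q)           6 x 6 joint-space inertia matrix (Orin-Walker style evaluation)
%   b(q, \dot q)   Coriolis, centrifugal and gravity torques (Newton-Euler pass)
%   \tau           applied joint torques
\[
   M(q)\,\ddot{q} + b(q,\dot{q}) = \tau ,
   \qquad
   \ddot{q} = M(q)^{-1}\bigl[\,\tau - b(q,\dot{q})\,\bigr].
\]
% Each simulation frame evaluates b and M, solves the six simultaneous linear
% equations for \ddot q, and then advances q and \dot q with the chosen
% numerical integration routine.
```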

Status of Research

The six-degree-of-freedom robot arm simulation using the AD-10 computer has been completed and verified. During the verification process it had been noted that the accuracy of the solution, especially for joint accelerations, was limited by the 16 bit precision available on the AD-10. Also noted was the inappropriateness of certain numerical integration techniques in the context of dynamic system simulation with sampled-data components. The inherent discontinuities present in mixed-data systems necessitate the use of self-starting integration techniques, that is, methods which do not rely on past evaluations of the derivatives of states. This topic has been the subject of additional investigations as described in a subsequent paragraph of this section.

The AD-10 model has also been augmented with a line segment representation of the robot arm. These graphic routines permit the visualization of a simulated three-dimensional maneuver as projected onto the XZ, YZ, or XY planes. The execution of the graphics portion of the model requires 1.375 msec. This portion of the model is invoked much less frequently than the routines for robot arm state evaluations and therefore its inclusion does not significantly alter the overall running speed. Accurate solutions are thus still possible at up to 5 times real-time.

Beyond the AD-10 work, the complete six-degree-of-freedom model including the graphics routines has been reconfigured to execute on the AD-100 computer. The transfer to this newer computer system was motivated by a factor of 3 faster execution speed and the availability of a floating point processor with up to 65 bit precision. This work has been completed and simulation speeds up to 14 times faster than real-time were realized without accuracy loss. Some example execution times for the major portions of the simulation on the AD-100 were:
- Newton-Euler (torque computation): 124.5 µsec
- Orin-Walker (6x6 inertia matrix evaluation): 139.8 µsec
- Solution of 6 simultaneous algebraic equations: 48.2 µsec
- Graphics: 186.5 µsec

As a result of this work, either the AD-10 or the AD-100 robot models have been used to demonstrate real-time or faster than real-time simulation of complex dynamic systems to visiting scholars, staff from AFOSR, staff at Applied Dynamics Inc., and representatives of the Evans & Sutherland Company.

As mentioned above, an investigation of specialized numerical integration techniques for use in the robot arm model has been initiated. From the study of the solution of the robot arm dynamics in the presence of a sampled-data controller, the desirable features of such specialized algorithms were identified as 1) self-starting capability, 2) a low number of derivative evaluations per state update, i.e., fast execution, and 3) accuracy equal to or above that of presently available numerical integration methods.

With these as guidelines, a number of combinations of the classical integration methods executed in a predetermined sequence was investigated. Through

this analysis two combinations were identified for experimentation using a non-real-time PDP 11/34 simulation of a linear second order system. These combinations were: a second order Runge-Kutta routine, followed by a second order Adams-Moulton routine, followed by a modified single pass second order Adams-Moulton routine, followed by a single pass third order Adams-Moulton routine; and a comparable sequence started with a third order Runge-Kutta routine.

Experiments conducted using these hybrid algorithms to integrate for the velocity and position of a linear second order system with piece-wise constant forcing have revealed that the hybrid routines accomplish the following:
* yield an asymptotic characteristic root error that has one order higher dependence on the stepsize than expected based on the order of the algorithms starting the sequence. This results from careful mixing of lower and higher order algorithms to preserve the dependence of the error on the higher order of stepsize.
* exhibit better accuracy than that of comparable-order Runge-Kutta methods for equivalent execution time.
* exhibit convergence characterized by the convergence properties of the least stable algorithm in the sequence.

Based on the positive results from the simulation of a second order system, work has begun on the adaptation of the hybrid integration methods to the AD-100 robot arm model. It is expected that the mixed algorithms will exhibit superior performance within this nonlinear and computationally-intensive simulation.

Dynamics of the joint actuators will be incorporated into the six-degree-of-freedom AD-100 model. Also anticipated is the addition of a Coulomb friction model to represent more realistically the friction effects at the robot arm joints. These augmentations will increase the realism of the simulation and will provide a more rigorous test of the controller performance. The effects of these additions will be studied with intent to improve the controller performance.
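The four-routine startup sequences above are specific to this work; the fragment below is only a minimal Python sketch of the underlying idea being tested: after each control-sample discontinuity the derivative history is discarded, the first step is taken with a self-starting Runge-Kutta routine, and subsequent steps within the sample interval reuse a stored derivative in an Adams-Bashforth/Adams-Moulton predictor-corrector. The plant model and all parameter values are assumed for illustration.

```python
import numpy as np

# Assumed example (not the AD-10/AD-100 code): a self-starting hybrid
# integrator for a linear second-order plant driven by a piecewise-constant
# (sampled-data) control input.  The first step after each control update is
# a 2nd-order Runge-Kutta (Heun) step; later steps use an Adams-Bashforth
# predictor with an Adams-Moulton (trapezoidal) corrector.

WN, ZETA = 4.0, 0.2          # plant natural frequency and damping (assumed)

def f(y, u):
    """State derivative of x'' + 2*zeta*wn*x' + wn^2*x = u, with y = [x, x']."""
    x, v = y
    return np.array([v, u - 2.0 * ZETA * WN * v - WN ** 2 * x])

def integrate(y0, u_samples, t_sample, h):
    """Advance the plant over one control sample per entry of u_samples."""
    y = np.array(y0, dtype=float)
    steps = int(round(t_sample / h))
    history = [y.copy()]
    for u in u_samples:                      # u held constant over the sample
        f_prev = None                        # discard history: discontinuity
        for _ in range(steps):
            if f_prev is None:               # self-starting RK2 (Heun) step
                k1 = f(y, u)
                k2 = f(y + h * k1, u)
                y_new = y + 0.5 * h * (k1 + k2)
            else:                            # AB2 predictor / AM2 corrector
                f_n = f(y, u)
                y_pred = y + 0.5 * h * (3.0 * f_n - f_prev)
                y_new = y + 0.5 * h * (f_n + f(y_pred, u))
            f_prev = f(y, u)                 # derivative at the old state
            y = y_new
            history.append(y.copy())
    return np.array(history)

if __name__ == "__main__":
    traj = integrate([0.0, 0.0], u_samples=[1.0, 0.5, -0.25],
                     t_sample=0.05, h=0.005)
    print(traj[-1])                          # final [position, velocity]
```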

3.1.2. Nonlinear Feedback Control

Investigator: Professor E. G. Gilbert

Research Objective

The research objective is to study nonlinear feedback theory and its application to the control of mechanical manipulators. The research has been carried out with I. J. Ha, a student in the Computer, Information and Control Engineering Program, who has now completed all requirements for the doctorate and in August began a position with the General Motors Research Laboratories.

Status of Research

Most of the research is now complete and much of the activity during the year was devoted to writing down the various results. This includes the Ph.D. dissertation of Ha [1], a report [2] and three papers [3, 4, 5]. The following is mainly a brief synopsis of the finished work, but it also indicates a few items which remain to be completed.

The dissertation [1] concerns nonlinear decoupling theory and its applications to manipulator control. The theoretical contributions include: a clarification of the concepts of decoupling and decomposition, conditions for a system to be decouplable or decomposable, full characterizations of the class of control laws which decouple or decompose, and a form of approximate decoupling which allows control law simplification. It is shown that the theory applies to a wide class of manipulator control problems including those with hand coordinate control and a very general form of first order nonlinear actuator dynamics. Two papers [4, 5] have been completed and several more will be prepared by I. J. Ha.

The work on nonlinear robust control begun last year was finished. It was first reported in [2], which was later revised slightly for a journal and conference submission [3]. The research was motivated by the desire to obtain robust nonlinear feedback laws for the sort of robot tracking problems considered in [6]. Two main concepts are combined in a novel fashion: the transformation of the dynamic equations to a linear system by a nonlinear feedback of the type considered in [6], and the robust control ideas of Corless and Leitmann [7] and others. Contributions include: robust tracking with the use of a general error measure, tight bounds on the asymptotic error, a detailed parameterization of the saturating robust controller, and useful measures for bounding the errors in describing the manipulator dynamics. Presently, computer simulations of the robust control scheme are being conducted on generalizations of the manipulator control problem described in [8]. They will be reported later.
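As background on the two ingredients named above, their combination can be sketched as follows in generic notation; this is an assumed illustration of the general approach, not the exact formulation of [2, 3].

```latex
% Sketch (generic notation): linearizing feedback plus a robust tracking term.
% M(q), b(q, \dot q) : inertia matrix and remaining arm dynamics;
% hats denote the nominal models available to the controller.
\[
  M(q)\ddot{q} + b(q,\dot{q}) = \tau ,
  \qquad
  \tau = \hat{M}(q)\,v + \hat{b}(q,\dot{q}) .
\]
% With exact models the closed loop reduces to the linear system \ddot q = v.
% For a desired trajectory q_d and tracking error e = q - q_d one may take
\[
  v = \ddot{q}_d - K_1\dot{e} - K_0 e + \Delta v(e,\dot{e}),
\]
% where K_0, K_1 give stable linear error dynamics and \Delta v is a bounded,
% saturating term of the Corless-Leitmann type chosen so that the tracking
% error remains ultimately bounded despite bounded errors in \hat M and \hat b.
```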

References

[1] Ha, I. J., "Nonlinear Decoupling Theory with Applications to Robotics," Ph.D. Dissertation in Computer, Information and Control Engineering and RSD-8-85, Robot Systems Division, Center for Research in Integrated Manufacturing, College of Engineering, University of Michigan, Ann Arbor, MI, May 1985.

[2] Ha, I. J. and E. G. Gilbert, "Robust Tracking in Nonlinear Systems and Its Application to Robotics," RSD-11-84, Robot Systems Division, Center for Research in Integrated Manufacturing, College of Engineering, University of Michigan, Ann Arbor, MI, October 1984.

[3] Ha, I. J. and E. G. Gilbert, "Robust Tracking in Nonlinear Systems and Its Application to Robotics," submitted to IEEE Transactions on Automatic Control; accepted for presentation at the 24th IEEE Conference on Decision and Control, December 1985.

[4] Ha, I. J. and E. G. Gilbert, "A Complete Characterization of Decoupling Control Laws for a General Class of Nonlinear Systems," presented at the 1985 Conference on Information Sciences and Systems, Johns Hopkins University, March 1985; appears in the conference Proceedings; submitted to the IEEE Transactions on Automatic Control.

[5] Ha, I. J. and E. G. Gilbert, "The Standard Decomposed System and the Noninteracting Control of Nonlinear Systems," submitted to the SIAM Journal on Control and Optimization.

[6] Gilbert, E. G. and I. J. Ha, "An Approach to Nonlinear Feedback Control with Applications to Robotics," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-14, pp. 879-884, 1984.

[7] Corless, M. J. and G. Leitmann, "Continuous State Feedback Guaranteeing Uniform Ultimate Boundedness of Uncertain Dynamic Systems," IEEE Transactions on Automatic Control, Vol. AC-26, pp. 1139-1144, 1981.

[8] Corless, M. J., G. Leitmann and E. P. Ryan, "Tracking in the Presence of Bounded Uncertainties," Proceedings of the Fourth International Conference on Control Theory, Cambridge University, September 1984.

3.1.3. Optimal Control Techniques to Enhance Robot Performance

Research Objectives

While productivity gains may be made using current robots, such robots are often underutilized, and many future applications will depend upon high performance. The methods of optimal control and optimal design have demonstrated great potential for improving the efficiency and quality of robot and process behavior. The control problem for manipulators is to determine the torques or forces which should be exerted at the joints of the robot so as to drive the robot in the most economical manner, while meeting certain constraints on the robot's position, velocity and actuator torques. There are several performance measures one can use: minimum time of motion of the robot arm, minimum energy consumption, or distance of the path from obstacles. Furthermore, there is recent evidence that an optimal control approach with minimum distance constraints between elements of the robot arm and obstacles may provide an effective means of path planning, which is usually approached purely geometrically and causes severe difficulties for more than two dimensions.

The objectives in these areas are to determine effective methods for path and trajectory determination. Two aspects of the problem are being investigated. In the first, movement along the path and the nominal control to minimize an appropriate cost function are sought. In the second, the path itself is sought as well.

3.1.3.1. Trajectory Planning, Obstacle Avoidance, and Control of Two Robots in a Common Workspace

Investigator: Professor C. S. G. Lee

Research Objectives

This research focuses on various issues in assembly task operations involving the control and coordination of two cooperative robots in an assembly system. The assembly system is a small manufacturing system in which there are sequences of time-coordinated operations. These time-coordinated operations

require the robot arms to coordinate their movements from one point in the Cartesian space to another in finite time. Current industrial practice employs simple time-space coordination which does not allow more than one robot to work in a common workspace efficiently and effectively. Such coordination and control results in under-utilization of robot arms. Any improvement in the performance of controlling two robot arms in a common workspace will require further investigation of sophisticated off-line straight line trajectory planning, collision-free path planning, and an accurate control strategy. Thus, the principal objectives of this research are:
(1) Off-line planning of straight line manipulator trajectories in Cartesian space with torque constraints.
(2) Wrist collision avoidance by preplanning a collision-free path using the above trajectory planning results.
(3) Controlling the manipulators to follow the preplanned trajectory faithfully so that two robots in a common workspace can assemble a workpiece correctly without any collision.
The ultimate goal of the research is to provide solutions for current industrial robots in performing more intelligent assembly tasks. This research is being carried out jointly with B. H. Lee.

Planning of Straight Line Manipulator Trajectory

A discrete time trajectory planning scheme for a given straight line path has been developed. The planning scheme determines the trajectory set points which satisfy both smoothness and torque constraints. A real time servo interval instead of a normalized time interval was used for planning all the joint trajectories. The planning of the straight line trajectory was performed in the Cartesian space and requires maximization of the speed between two consecutive Cartesian set points on the straight line path. An iterative forward and backward search algorithm coupled with a modified forward search algorithm was developed. They determine the joint values at each servo interval within the bounds imposed by the smoothness and torque constraints and the straight line requirements with high accuracy. The resultant straight line trajectory is composed of three segments: an acceleration portion, a deceleration portion, and a relaxation portion. The control set points on the acceleration and deceleration portions of the trajectory satisfy the straight line requirement while those on the relaxation portion do not. These sets of joint position, velocity and acceleration can be used as a reference input to the manipulator controller for joint-variable space control. Also, they can be used to determine the corresponding Cartesian position, velocity, and acceleration for the proposed Cartesian space adaptive control.
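The planner just described relies on an iterative forward and backward search over the joint bounds; the short sketch below is a much simpler, assumed Python illustration of the form of its output only: one Cartesian set point per servo interval along the straight line, with the speed between consecutive points limited by acceleration and velocity bounds. The function name and all parameter values are hypothetical.

```python
import numpy as np

# Simplified illustration (assumed example): Cartesian set points at a fixed
# servo interval along a straight-line path, using acceleration and velocity
# limits.  This is NOT the iterative forward/backward search described above.

def straight_line_set_points(p0, p1, v_max, a_max, dt):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    if length == 0.0:
        return np.array([p0])
    direction = (p1 - p0) / length
    s, v = 0.0, 0.0                         # distance travelled and speed
    points = [p0.copy()]
    while s < length:
        stop_dist = v * v / (2.0 * a_max)   # distance needed to stop
        if length - s <= stop_dist:
            v = max(v - a_max * dt, 0.0)    # deceleration segment
        else:
            v = min(v + a_max * dt, v_max)  # acceleration / cruise segment
        s = min(s + v * dt, length)
        points.append(p0 + s * direction)
    return np.array(points)

if __name__ == "__main__":
    pts = straight_line_set_points([0.0, 0.0, 0.0], [0.3, 0.4, 0.0],
                                   v_max=0.5, a_max=2.0, dt=0.01)
    print(len(pts), pts[-1])
```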

potential wrist collisions can be done by checking the distance between the centers of the two spheres with respect to a global coordinate system. A collision-free path of a moving robot in the presence of a stationary robot is found from connected straight-line segments selected subject to a performance measure. Various collision situations for two moving robots are identified. Notions of a collision map, which provides the location and corresponding servo time information of the robots simultaneously, and of time scheduling are introduced and utilized. Collision-free paths of two moving robots are obtained by considering the paths and trajectories of the two robots simultaneously. When path modification is not allowed for collision avoidance, rescheduling of the traveling time on the existing path is performed to realize collision-free motion planning. When path modification is allowed for collision avoidance, a collision-free path is obtained subject to a performance measure.

Control of a Manipulator in Cartesian Space

An adaptive control in Cartesian space, which adopts the ideas of resolved motion rate control and resolved motion acceleration control, is proposed to control the manipulator hand to track the collision-free trajectory as closely as possible. Equations of motion of the manipulator hand are developed in Cartesian coordinates, from which an adaptive control scheme is derived. The adaptive control scheme is based on the linearized perturbation equations along a desired hand trajectory. The controlled system is characterized by feedforward and feedback components which can be computed separately and simultaneously. The feedforward component computes the nominal torques from the Newton-Euler equations of motion using the joint trajectory information from the trajectory planner. The feedback component, consisting of a recursive least squares identification scheme and an optimal adaptive self-tuning controller for the linearized system, computes the perturbation torques which reduce the manipulator hand position and velocity errors along the nominal hand trajectory. A feasibility study of implementing the adaptive control for a six-joint PUMA manipulator using present-day low-cost microprocessors is being explored.

3.1.4. Optimal Control Techniques to Enhance Robot Performance

Research Objectives

While productivity gains may be made using current robots, such robots are often underutilized, and many future applications will depend upon high performance. The methods of optimal control and optimal design have demonstrated great potential for improving the efficiency and quality of robot and process behavior. The control problem for manipulators is to determine the torques or forces which should be exerted at the joints of the robot so as to drive the robot in the most economical manner, while meeting certain constraints on the robot's position, velocity and actuator torques. There are several performance measures one can use: minimum time of motion of the robot arm, minimum energy

consumption, or distance of the path from obstacles. Furthermore, there is recent evidence that an optimal control approach with minimum distance constraints between elements of the robot arm and obstacles may provide an effective means of path planning, a problem which is usually approached purely geometrically and which causes severe difficulties in more than two dimensions. The objectives in these areas are to determine effective methods for path and trajectory determination. Two aspects of the problem are being investigated. In the first, the path is given, and the motion along it and the nominal control which minimize an appropriate cost function are sought. In the second, the path itself is sought as well.

3.1.4.1. Status of the Research Effort on Path and Trajectory Planning

Investigator: Professor Kang G. Shin

Past Work

The previous report described several methods of trajectory planning as follows.

* The phase plane method uses phase plane plots to generate a set of joint torques which would drive a robot along a given spatial path in minimum time [1]-[3].

* A dynamic programming method has been developed to remedy two important drawbacks of the phase plane method: the phase plane method works only for minimum-time trajectory planning, and it does not work if the robot's actuators interact. The use of parametric functions to describe the robot's geometric paths has reduced the trajectory planning problem to a two-dimensional problem, thereby making dynamic programming attractive [4].

* We have also taken a different approach to the path planning problem, which is based on traditional nonlinear programming [5],[6].

Current Status of Research

Over the last year, major effort has been devoted to (i) the study of the robustness of our trajectory planning methods, and (ii) the generation of a geometric path which is used as input to the trajectory planning methods above.

A. Trajectory Planning

During the past year, two important trajectory planning results have been obtained. One is a new trajectory planning method called the Perturbation Trajectory Improvement Algorithm, or PTIA [7], and the other is a modification to PTIA which allows dynamic uncertainties to be taken into account at the trajectory planning stage. The perturbation trajectory improvement algorithm is related to both dynamic programming (DP) and gradient optimization techniques. It can be thought of as a repeated application of dynamic programming in which the grid

is varied each time the DP algorithm is applied. By making the grid points closer together at each iteration, successively finer approximations to the optimal solution of the minimum-time trajectory planning problem can be obtained. Since the DP grid can have a relatively small number of points, each iteration of the algorithm is very inexpensive. Even though the algorithm may require many iterations, the computation times required to achieve a given accuracy are smaller than the computation times for conventional dynamic programming by several orders of magnitude. PTIA differs from conventional dynamic programming in that it does not flood the entire space with trajectories; it works only with information gathered from the general vicinity of an approximation to the optimal trajectory. In this respect it is similar to gradient optimization. It should be noted that the justification for the use of the PTIA is, at this point, heuristic; the theory which underlies its use has yet to be completely developed, so its precise convergence properties are not yet known. However, the development of such a theory appears to be feasible.

The PTIA has two major advantages over other trajectory planning techniques. First, the algorithm is very straightforward, and can be programmed in a matter of hours. Second, it enforces actuator constraints by making a simple go/no-go check which can be written as a single function, called the constraint function. This function can be almost arbitrarily complex, and can include, for example, jerk constraints as well as constraints on power supply voltages and saturation torques. The perturbation algorithm has been tested on a three-degree-of-freedom cylindrical robot with DC torque motor drives. With only motor drive voltage and torque constraints, the results agree almost perfectly with those obtained by other methods. When jerk constraints are considered, the results appear to be correct, though this cannot be verified by other trajectory planning methods since most other trajectory planners cannot account for jerk constraints. (See [7] for a detailed description.)

By a relatively simple modification of the perturbation technique, it is possible to determine trajectories which are optimal for a given range of robot dynamic characteristics. By this it is meant that the resulting trajectory is the fastest trajectory which is realizable, i.e., does not violate any actuator constraints, for all payloads which fall into some range. Such planning is needed in practice, since no robot's dynamics or payload characteristics are known exactly. This modification involves a simple change to the constraint function: the constraint function must check a range of torques, rather than a single torque, for realizability. The procedure is made very simple by the fact that the dynamic coefficients of the robot are linear in the payload and link masses, and so it does not radically increase the computational requirements of the PTIA. (See [8] for a detailed discussion.)
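The constraint-function idea lends itself to a very compact implementation. The following Python sketch is illustrative only and is not the code developed in this work: the single-joint dynamic model, torque and jerk limits, and payload interval are hypothetical placeholders. Its only purpose is to show how a candidate trajectory point can be accepted or rejected by a single go/no-go test evaluated over an assumed payload range, exploiting the fact that the dynamic coefficients are linear in the payload.

```python
import numpy as np

# Hypothetical single-joint model: the torque required at a trajectory point
# depends linearly on the (uncertain) payload mass, as noted in the text.
def required_torque(q, qdd, payload):
    I_link, r, g = 0.5, 0.3, 9.81                      # assumed link inertia and geometry
    return (I_link + payload * r**2) * qdd + payload * g * r * np.cos(q)

TORQUE_LIMIT = 8.0            # assumed actuator saturation torque (N*m)
JERK_LIMIT = 50.0             # assumed jerk bound (rad/s^3)
PAYLOAD_RANGE = (0.0, 2.0)    # assumed payload uncertainty interval (kg)

def constraint_ok(q, qdd, qdd_prev, dt):
    """Go/no-go check for one trajectory point over the whole payload range."""
    if abs((qdd - qdd_prev) / dt) > JERK_LIMIT:        # jerk constraint
        return False
    # Linearity in the payload means checking the two interval endpoints suffices here.
    for payload in PAYLOAD_RANGE:
        if abs(required_torque(q, qdd, payload)) > TORQUE_LIMIT:
            return False
    return True

print(constraint_ok(q=0.2, qdd=3.0, qdd_prev=2.5, dt=0.01))   # True for these values
```

Because the whole test is a single function, additional constraints (supply voltages, saturation effects) can be added without changing the rest of the planner, which is the property emphasized above.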

B. Geometric Path Planning

Some progress has also been made on the problem of selecting geometric paths with short traversal times. Though these results do not consider obstacle avoidance, they do give a feeling for what path characteristics are required if a path is to have a short traversal time. Two approximate methods have been developed for generating minimum-time geometric paths for robots. One involves computation of lower bounds on traversal times for given geometric paths; the curve which minimizes the lower bound is then obtained. The second approach is to find velocity limits for given geometric paths and maximize the velocity limits. For motions in free space, these two methods give the same curves, namely geodesics in inertia space, a Riemannian space which has the robot's mass matrix as its metric tensor. Simulation results, again with a three-degree-of-freedom cylindrical arm, appear to be promising. The traversal times of the inertial geodesics (the curves generated by the approximations described above) were compared to those of Cartesian straight lines and straight lines in joint space. As a general rule, the inertial geodesics gave the shortest traversal times, though there were some exceptions. (See [9] for a detailed account of the geometric path planning.)

Future Research

We will make an effort to extend the above geometric path planning to the case of collision avoidance. At the same time, other approaches (e.g. the "piano movers" problem) will be explored as alternatives. Additionally, a computer program for generating path planners, which takes the form of an expert system, is under construction. This system will take as input a description of the robot's links and actuators, generate the robot's dynamic equations, and generate a trajectory planner of either the phase plane or the dynamic programming type.

References

[1] Shin, K. G., and McKay, N. D., "An Efficient Robot Arm Control Under Geometric Path Constraints," Proc. 22nd Conf. on Decision and Control, San Antonio, TX, pp. 1449-1457, Dec. 1983.
[2] Shin, K. G., and McKay, N. D., "Minimum-Time Control of Robotic Manipulators with Geometric Path Constraints," IEEE Trans. on Automatic Control, vol. AC-30, no. 6, pp. 531-541, June 1985.
[3] Shin, K. G., and McKay, N. D., "Open-Loop Minimum-Time Control of Mechanical Manipulators and Its Application," Proc. 1984 American Control Conference.
[4] Shin, K. G., and McKay, N. D., "Robot Path Planning Using Dynamic Programming," Proc. of the 23rd CDC.
[5] Kim, B. K., and Shin, K. G., "An Efficient Minimum-Time Robot Path Planning under Realistic Conditions," Proc. 1984 American Control Conference.
[6] Kim, B. K., and Shin, K. G., "Minimum-Time Path Planning for Robot Arms with Their Dynamics Included," IEEE Trans. on Systems, Man, and Cybernetics, vol. SMC-15,

no. 2, pp. 213-223, March/April 1985.
[7] Shin, K. G., and McKay, N. D., "Minimum-Time Trajectory Planning with General Torque Constraints," in preparation.
[8] Shin, K. G., and McKay, N. D., "Selection of Near Minimum-Time Geometric Paths for Robotic Manipulators," to appear as a regular paper in IEEE Trans. on Automatic Control.
[9] Shin, K. G., and McKay, N. D., "Incorporation of Payload Uncertainties into Robot Trajectory Planning," submitted to IEEE Trans. on Automatic Control.

3.1.4.2. Optimal Path Planning in the Presence of Obstacles

Investigator: Professor E. G. Gilbert

Research Objective

The research objective is to develop a general computational methodology for off-line determination of optimal robot motion in the presence of obstacles. The formulation takes into account: the dynamics of the robotic manipulator, constraints on actuator effort, suitable measures of performance such as energy expended by the actuators or the time of travel, general descriptions of initial and terminal velocities (e.g., the interception of a moving object), and the avoidance of collisions between the parts of the manipulator and its spatial environment or with the parts of a cooperating manipulator. The research is being carried out jointly with D. W. Johnson, a student in the Department of Aerospace Engineering who should complete his doctoral work this calendar year.

Status of the Research

The basic approach was reviewed in last year's annual report and is described in some detail in conference presentations (e.g., [1]) and a published journal article [2]. Path planning problems are stated as problems in optimal control where obstacle avoidance is expressed by inequality conditions on the distances between potentially colliding parts of the robotic manipulators and their environment. Since the distances are functions of the configuration variables for the manipulators, they become state-variable constraints in the optimal control problems. Such constraints are usually difficult to treat computationally, but the special structure of the path planning problems allows an effective penalty function approach which uses spline-function representations for the configuration variables. A key development in last year's work was a detailed characterization of the distance functions and their derivatives. The formulation of the path planning problem is much more general than those considered in the prior literature, where either obstacles are neglected or the form of the geometric path is specified a priori (see, e.g., the research of Professors Lee and Shin and their students which is described elsewhere in this report). The advantage of the general approach is the attainment of high performance in the execution of difficult robotic tasks. The disadvantage is a very large computational burden.

Research during the previous year centered on the basic ideas and feasibility studies. This year the emphasis has been on developing techniques for

handling more general problems and for improving the efficiency of the numerical computations. Specific topics which were investigated include: minimum-time problems, the solution of three-dimensional examples, polyhedral shape models and the numerical computation of distances between them, adaptive schemes for time-grid reduction, and methods for obtaining derivatives of the functions describing manipulator dynamics.

To minimize the time of travel it is necessary to characterize manipulator motion over a period of variable length. This was achieved in the numerical computations by time scaling of the spline basis functions. For minimum-time problems the constraints on actuator effort are crucial. It was found that their implementation in the minimization algorithm was most effectively handled through the use of exterior penalty functions. Interior penalty functions caused numerical difficulties (during descent the controls tended to jump out of the feasible region). Figure 3.1.4.2.1 shows a minimum-time path for the three-degree-of-freedom manipulator treated in [2]. The optimal control is both singular and nonunique. These features did not seem to affect the performance of the algorithm. It is also worth noting that the optimum time (23.5) is considerably less than the travel time (33.1) for a reasonable guess at the optimum path. Presently, additional minimum-time examples involving the cooperative interaction of two cylindrical manipulators (total number of degrees of freedom: four) are being programmed.

To solve problems which involve complex three-dimensional environments and manipulators with irregularly shaped links and payloads it is necessary to determine numerically the distances between component parts of the manipulators and their environment. As has been pointed out in [2], it is possible to make the component parts convex even though the manipulators and their environment are nonconvex. The modelling of complex convex parts which are translated and rotated in space was studied. The simplest description appears to be convex polyhedra, determined by lists of their vertices. Algorithms for efficiently computing the distance between two such polyhedra were investigated. The general class of procedures described in [3,4] appears most promising because these procedures do not require the explicit characterization of the algebraic difference of the two sets. In particular, it has been found that variants of [4], when appropriately specialized to three dimensions, have low operation counts and initialization features which greatly speed the computations when the distances are determined for successive small changes in the translations and rotations. Comparative numerical experiments using the algorithms will be run in the near future.

Another way of reducing appreciably the overall computational cost is to reduce the number of points in the time grid on which the distances are evaluated. In [2] a rather crude scheme for doing this was described. A better scheme has now been developed. It makes use of a simple formula for evaluating the time derivatives of the distances on an initial coarse grid. The time derivatives allow an accurate determination of where the time grid must be fine in order to compute the collision-avoidance penalty function.
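The flavor of these distance computations can be illustrated with a small sketch. The following Python code is not the specialized algorithm of [3] or [4]; it is a simple conditional-gradient (Frank-Wolfe) iteration for the minimum distance between the convex hulls of two vertex lists, included only as a hedged illustration of the kind of computation involved. The vertex data and iteration count are arbitrary.

```python
import numpy as np

def polytope_distance(P, Q, iters=200):
    """Approximate min ||x - y|| for x in conv(P), y in conv(Q).

    P, Q: (num_vertices, 3) arrays of polyhedron vertices.
    Uses a conditional-gradient (Frank-Wolfe) iteration: at each step the
    vertex most aligned with the negative gradient is blended into the
    current point by an exact line search.
    """
    x, y = P[0].astype(float), Q[0].astype(float)
    for _ in range(iters):
        w = x - y                                   # gradient of 0.5*||x - y||^2 w.r.t. x
        p = P[np.argmin(P @ w)]                     # best vertex of P in direction -w
        q = Q[np.argmax(Q @ w)]                     # best vertex of Q in direction +w
        v = (p - x) - (q - y)                       # feasible descent direction
        denom = v @ v
        if denom < 1e-12:
            break
        t = np.clip(-(w @ v) / denom, 0.0, 1.0)     # exact line search on [0, 1]
        x, y = x + t * (p - x), y + t * (q - y)
    return np.linalg.norm(x - y)

# Hypothetical example: a unit cube and a copy translated mostly along the x axis.
cube = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
print(polytope_distance(cube, cube + np.array([3.0, 0.5, 0.0])))   # roughly 2.0
```

The specialized procedures referred to above gain their speed from warm-starting: when the bodies move only slightly between time-grid points, the previous solution is an excellent initial guess, which is exactly the "initialization feature" noted in the text.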

Figure 3.1.4.2.1. A minimum-time path for the three-degree-of-freedom Cartesian manipulator of reference [2]; m_i = 1, |u_i| ≤ 1, i = 1, 2, 3.

Finally, the work begun last year on manipulator dynamics has been completed. In order to determine the penalized cost it is necessary to compute the derivatives of the generalized inertia matrix, M(q), and the gravitational-Coriolis-centrifugal function, F(q, q̇), with respect to the components of the configuration vector, q, and its time derivative, q̇. Numerically effective formulas for doing this have been derived. The formulas should also be of interest in other areas such as the computation of the sensitivity of manipulator motions with respect to physical parameters of the manipulator. Some of the research described above will appear in a conference paper [5]. The rest, along with extensions yet to be completed, will appear in Johnson's dissertation and other papers.
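As a point of reference for the quantities involved, the sketch below is not the numerically effective formulas derived in this work; it simply builds the mass matrix of a textbook two-link planar arm with made-up parameters and approximates dM/dq_k by central differences, one naive alternative to analytic formulas.

```python
import numpy as np

# Made-up link parameters for a planar two-link arm (illustrative only).
m1, m2 = 4.0, 2.0        # link masses
l1 = 0.5                 # length of link 1
lc1, lc2 = 0.25, 0.2     # centroid distances
I1, I2 = 0.05, 0.02      # centroidal inertias

def mass_matrix(q):
    """Standard 2x2 mass matrix M(q) of a planar two-link revolute arm."""
    c2 = np.cos(q[1])
    m11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2.0 * l1 * lc2 * c2)
    m12 = I2 + m2 * (lc2**2 + l1 * lc2 * c2)
    m22 = I2 + m2 * lc2**2
    return np.array([[m11, m12], [m12, m22]])

def dM_dq(q, h=1e-6):
    """Central-difference approximation of dM/dq_k, one 2x2 matrix per joint."""
    derivs = []
    for k in range(len(q)):
        dq = np.zeros_like(q)
        dq[k] = h
        derivs.append((mass_matrix(q + dq) - mass_matrix(q - dq)) / (2.0 * h))
    return derivs

q = np.array([0.3, 1.1])
print(dM_dq(q)[1])   # dM/dq2; dM/dq1 is (numerically) zero since only q2 enters M(q)
```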

References

[1] Gilbert, E. G. and D. W. Johnson, "The Application of Distance Functions to the Optimization of Robot Motion in the Presence of Obstacles," Proceedings of the 23rd IEEE Conference on Decision and Control.
[2] Gilbert, E. G. and D. W. Johnson, "Distance Functions and Their Application to Robot Path Planning in the Presence of Obstacles," IEEE Journ. of Robotics and Automation, Vol. RA-1, pp. 21-30, 1985.
[3] Wolfe, P., "Finding the Nearest Point in a Polytope," Mathematical Programming, Vol. 11, pp. 128-149, 1976.
[4] Barr, R. O., "An Efficient Computational Procedure for a Generalized Quadratic Programming Problem," SIAM Journ. on Control, Vol. 7, pp. 415-429, 1969.
[5] Johnson, D. W. and E. G. Gilbert, "Minimum-Time Robot Path Planning in the Presence of Obstacles," to be presented at the 24th IEEE Conference on Decision and Control, Dec. 1985.

3.1.5. Control of a Flexible Robot Arm

Investigator: Professor A. G. Ulsoy

Research Objectives

Ideally robots should have a lightweight yet rigid structure to meet the high performance requirements of demanding industrial applications such as the manufacture and assembly of precision parts. Applications requiring high speed and high accuracy of the robot arm require consideration of link flexibility effects in the mechanical design of the robot arm as well as in the design of the controller. The objective of this research project is to investigate the influence of link flexibility on performance, and to consider methods to improve the performance of flexible robot arms through the controller design. This research is being carried out with N. Chalhoub, a doctoral student in the Department of Mechanical Engineering and Applied Mechanics.

Status of the Research

The flexible robot arm control problem is being investigated using analytical methods, simulation, and laboratory experimentation. A three-degree-of-freedom spherical coordinate laboratory robot is being used as the focus of this study; two of its degrees of freedom are driven by leadscrews and the third is driven by a gear train. A dynamic model of the robot, including the effects of the flexibility of the final link and the transmission mechanism constraints, is formulated as the basis for analytical and simulation studies.

In the mathematical model, the leadscrew is assumed to be a self-locking drive; that is, once the leadscrew is subjected to an axial load it will not rotate unless the control torque applied by the joint actuator that drives the leadscrew is large enough to overcome a certain resistive torque. The latter, which depends on the axial force, is due to thread friction and the thread geometry. In this

study, dry Coulomb friction is assumed, where the friction force is proportional to the absolute value of the normal force exerted on the thread's surface and always opposes the motion. The effect of the resistive torque becomes critical when the system comes to a complete halt. The driving torque (or the control torque) must be large enough to overcome the static friction, or else the system remains at rest waiting for the control effort to build up. This situation is encountered whenever the controller leads to an oscillatory system response (see Figure 3.1.5.1). Note that the leadscrew transmission mechanism, one which is not backdriveable, does not completely eliminate the interaction between rigid and flexible motions. This behavior arises from the axial force exerted on the leadscrew. The axial force would include, in general, gravitational and inertial terms due to the combined rigid and flexible motions. Therefore, to be consistent with the self-locking assumption, in the absence of the control torque or force the axial load, regardless of its magnitude, would not cause the leadscrew to rotate. However, the control torque supplied by the actuator must still overcome the loading torque induced by the axial force.

A detailed model of the leadscrew and its housing has been added to the dynamic model of the robot to simulate the nonbackdriveable behavior. This simulation is used to investigate the interactions between the controller design and the structural design of the robot arm, the influence of the leadscrew constraints, the interaction between rigid-body and flexible motions, and the trade-off between speed and accuracy.

Figure 3.1.5.1 (oscillatory system response referred to in the text; horizontal axis: time, t [second]).

These relationships will be used as the basis for

designing and evaluating controllers for the flexible as well as the rigid-body motions of the robot arm. The simulation results are discussed in the paper [1]; for a detailed discussion the reader is referred to the related technical report [2]. This study has led to a better understanding of the interaction between the structural design and the controller design for robot arms. The dynamic modeling and simulation of the laboratory robot, including the flexibility of the final link and the transmission mechanism constraints, are completed. Also completed is a review of the literature on control of flexible robot arms.

The controller design employed here, which is typical of current industrial practice, can be improved significantly by including the effect of the flexible motion in the control action. This can be done in three ways:

(1) Design a controller based on a detailed dynamic model of the manipulator which includes the effect of flexibility. This is the simplest approach but has many drawbacks. The dynamic model would be very complex even for low-order beam models. Any discrepancy in the dynamic model or disturbances in the system would considerably affect the overall performance of the robot, since no actual measurements of the flexible motion are being fed back to the controller. In other words, the robot would behave as an open-loop system as far as the flexible motion is concerned. This approach requires only software modifications.

Figure 3.1.5.2 (lumped-parameter model used for the estimator; see text).

(2) A full kinematic state measurement at the end point of the robot arm is fed back to the controller. The latter activates the actuators at the joints to

correct for the malpositioning of the end effector. This approach compensates for the inaccuracy of the dynamic model as well as for any disturbances in the system. It does not require any additional actuators. The major drawback of this method is the difficulty of obtaining accurate measurements of the position and orientation of the end effector over a large operating volume. The controller design is also complex, and will require careful study.

(3) Another approach is to sense and correct the flexible motion by using additional sensors and actuators. This is expected to improve the robot performance over the previous two controller designs, but at greater cost and complexity. This third approach is basically the micro-manipulator system that is being studied by Cannon and his co-workers at Stanford [3,4].

As a first step towards the inclusion of the flexible motion in the control action, a three-lumped-mass system connected by springs has been studied. This simple system represents a lumped model for one flexible link, where the first mass represents the joint rigid-body degree of freedom while the second and third masses represent the first and second elastic modes. The rationale is to scale down the complexity of the dynamic model of the robot arm, to get better insight into the controller performance while working with a simple linear model, and to reduce the cost of computer runs. As in the physical system, the control effort is restricted to the rigid-body degree of freedom (i.e. to the first lumped mass). Three runs have been made:

(1) Rigid-body controller: Only the position and the velocity of the first mass are fed back to the controller.

(2) Rigid and flexible controller: A single-input multiple-output controller with integral action applied to the first mass is used. The position and velocity of the first two masses are fed back to the controller. This run shows the effect of control spillover on the third mass.

(3) Rigid and flexible controller with estimator: The same controller as in the second run is used. However, the position and velocity of the second mass are estimated from the model shown in Figure 3.1.5.2. The dashpot is introduced to account for the damping in the closed-loop system introduced by the controller.

A comparison of the results obtained from the three runs is shown in Table 3.1.5.1 below.
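To make the lumped model concrete, the following Python sketch builds a three-mass spring chain of this kind with the control force restricted to the first mass and simulates the Case 1 (rigid-body) feedback. The masses, stiffnesses, and gains are invented, so the number it prints is only qualitative and is not comparable to the measured values in Table 3.1.5.1, which follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented parameters for a three-lumped-mass model of one flexible link:
# mass 1 = joint rigid-body DOF, masses 2 and 3 = first and second elastic modes.
m = np.array([1.0, 0.2, 0.1])
k = np.array([50.0, 50.0])           # springs m1-m2 and m2-m3
x_ref = 1.0                          # commanded rigid-body position
Kp, Kd = 40.0, 12.0                  # Case 1: PD feedback of the first mass only

def dynamics(t, s):
    x, v = s[:3], s[3:]
    u = Kp * (x_ref - x[0]) - Kd * v[0]              # control acts on mass 1 only
    f = np.array([
        -k[0] * (x[0] - x[1]) + u,
         k[0] * (x[0] - x[1]) - k[1] * (x[1] - x[2]),
         k[1] * (x[1] - x[2]),
    ])
    return np.concatenate([v, f / m])

sol = solve_ivp(dynamics, (0.0, 15.0), np.zeros(6), max_step=0.005)
tail = sol.y[2, sol.t > 10.0]                        # "tip" (third mass) after the transient
print("residual peak-to-peak tip oscillation:", tail.max() - tail.min())
```

The point of the sketch is structural: with feedback of the first mass only, the elastic masses are damped only indirectly, which is why Cases 2 and 3 of the study add feedback (measured or estimated) of the second mass.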

                                               Transient Response              Steady-State Oscillation (peak to peak)
Case 1: rigid-body controller                  very oscillatory                0.372 mm
Case 2: rigid & flexible motion controller     less oscillatory than Case 1    0.238 mm
Case 3: rigid & flexible motion controller     less oscillatory than Case 1    0.297 mm
        with estimator

Table 3.1.5.1. A Comparison of the Characteristics of the Three Controllers for the 3-Lumped-Mass System.

The construction of a small spherical coordinate laboratory robot, along with its interface to an IBM PC microcomputer, is complete. Three DC servo motors driven by power amplifiers are used as actuators. The control torques are transmitted to the links using leadscrews and a gear train. One tachometer generator and one encoder are mounted on each link to measure its position and velocity. The encoder signals are read by the computer through digital counters. Each joint has a different number of counters associated with it according to its range. The contents of any counter can be strobed into a tri-state buffer. This buffer can be accessed by the computer through differential drivers. Differential receivers are also used for the same purpose on the control lines originating with the microcomputer. The flexible link consists of a thin aluminum rod with a 50 g payload. Two accelerometers are used to measure the vibratory motion in both transverse directions. The acceleration signal is then integrated twice using an analog double integrator; this is done to devote most of the sampling period to the computation of the control action. Strain gages are also mounted on the flexible rod, and the instrumentation is tested and calibrated. The software to control the robot using the IBM PC is also essentially complete.

We are in the process of modifying the rigid and flexible motion controller used in the three-lumped-mass study to apply it to the robot arm (i.e., moving from a single-input single-output controller to a multiple-input multiple-output controller). We will then be in a position to evaluate the three controller design approaches described earlier using simulations and experiments. This study will be conducted to assess the potential benefits obtainable by each approach, as well as the potential difficulties. As the three-lumped-mass study suggests, it is expected that a significant reduction in the detrimental effects of link flexibility can be achieved by modification of the control software only, and that perhaps more improvement can be obtained by the addition of sensors but no additional actuators. This should provide a means to reduce the detrimental effects of link flexibility in many applications without the need for both additional sensors and actuators (i.e., micro-

manipulators).

References

[1] Chalhoub, N. and A. G. Ulsoy, "Dynamic Simulation of a Flexible Robot Arm and Controller," Proceedings of the 1984 American Control Conference, San Diego, CA.
[2] Chalhoub, N. and A. G. Ulsoy, "Dynamic Simulation of a Flexible Robot Arm and Controller," RSD-TR-13-83, Center for Research on Integrated Manufacturing, Ann Arbor, Michigan, September 1983.
[3] Cannon, R. H., et al., First Annual Report of the Center for Automation and Manufacturing Science, Stanford University, pp. 31-41, November 1983.
[4] Cannon, R. H., and E. Schmitz, "Initial Experiments on the End-Point Control of a Flexible One-Link Robot," International Journal of Robotics Research (submitted for publication), 1983.

3.1.6. Manipulator Compliance

General Research Objectives

The general objective is to characterize the sources of manipulator compliance and their effects on manipulator motion control. Sources of manipulator compliance include:

* Link flexibility: this is due to the elasticity of the components of the robot arm structure.

* The actuators' internal characteristics: some mechanical compliance arises from the use of electrical servo motors as control actuators; substantial mechanical compliance occurs if either electrohydraulic or (especially) electropneumatic actuators are used to control manipulator motion.

* Transmission of forces/torques from the actuators to the links: mechanical compliance can arise if transmission is through either gear trains or cables, especially if the cables are elastic.

The manipulator compliance effects lend extra complexity to the equations of motion when link flexibility, actuator dynamics and transmission compliances are included; control design taking such effects into account is exceedingly difficult.

3.1.6.1. Control of Electrohydraulic Robots

Investigator: Professor N. H. McClamroch

Research Objectives

An important source of compliance in many manipulators is due to actuation effects. A study has been made of the dynamics of manipulators controlled by electrohydraulic servo motors or actuators. Closed-loop stabilization issues have been addressed in the case where both displacement and force feedback to the servo valves are used; the results of this study have been presented in [1]. The

effects of force feedback and load disturbances on the local compliance of a manipulator at the end effector have long been recognized; a general mathematical treatment of these issues has been developed [2]. The trade-off between speed of response and end-effector compliance to external loads has been clarified, especially as affected by the control.

Status of the Research

A general form for the equations of motion of a wide class of robots with n links and n joints is

\[ M(\theta)\ddot{\theta} + H(\theta,\dot{\theta}) = T_m + T_L \]

where θ ∈ R^n is the vector of link angles, T_m ∈ R^n is the vector of joint torques produced by the actuators, and T_L ∈ R^n is the vector of joint torques due to an external load at the end effector. Here M(θ) represents an inertia matrix function and H(θ, θ̇) represents a vector function which defines the Coriolis and gravitational terms. The generalized position, possibly including orientation, of the end effector of the robot, in a fixed coordinate frame, is denoted by the vector z ∈ R^m and depends on the geometry of the robot and the link position vector θ through z = Z(θ). Consequently, the load torque vector T_L is given by

\[ T_L = \left[\frac{\partial Z(\theta)}{\partial \theta}\right]^{T} F_L \]

where F_L ∈ R^m is the generalized load force, possibly including a load torque, on the end effector of the robot, in the fixed coordinate frame. It is assumed that there is a direct relation, including all gear trains, between the angular displacement vector of the motor rotors θ_m ∈ R^r and the angular displacement vector θ of the robot links through θ_m = Y(θ). Consequently, the motor torque vector T_m is given by

\[ T_m = \left[\frac{\partial Y(\theta)}{\partial \theta}\right]^{T} \tau \]

where τ ∈ R^r is the vector of torques generated by the hydraulic servo motors. A general mathematical model for a wide class of electrohydraulic servo motors is given by

\[ K_m^{-1}\dot{\tau} + \dot{\theta}_m = F(u,\tau). \]

Here u represents the vector of normalized currents to the servo valve torque motors. This model includes the effects of fluid compressibility in the motors through the hydraulic stiffness matrix K_m, which is a positive diagonal matrix; the effects of fluid leakage in the hydraulic motors and the nonlinear pressure-

flow relation through the servo valves are included in the nonlinear vector function F(u, τ). Thus, the open-loop model of a robot driven by multiple electrohydraulic joint servo motors is given by the vector equations

\[ M(\theta)\ddot{\theta} + H(\theta,\dot{\theta}) = \left[\frac{\partial Y(\theta)}{\partial \theta}\right]^{T}\tau + \left[\frac{\partial Z(\theta)}{\partial \theta}\right]^{T}F_L \]

\[ K_m^{-1}\dot{\tau} + \frac{\partial Y(\theta)}{\partial \theta}\,\dot{\theta} = F(u,\tau) \]

\[ z = Z(\theta). \]

These equations are representative of a large class of robots driven by electrohydraulic servo motors and actuators.

Consider the problem of characterizing the closed-loop "local" static compliance of an electrohydraulically driven robot. If the nonlinear link dynamic equations are linearized about equilibrium values of the joint angles θ and motor torques τ, the resulting equations are of the form

\[ M\,\Delta\ddot{\theta} + K\,\Delta\theta = B^{T}\Delta\tau + C^{T}F_L \]

where B and C denote the Jacobian matrices ∂Y(θ)/∂θ and ∂Z(θ)/∂θ evaluated at the equilibrium. In general the matrices M and K are symmetric and M is positive definite. The corresponding linearized equations for the motor torques are

\[ K_m^{-1}\Delta\dot{\tau} + G_p\,\Delta\tau + B\,\Delta\dot{\theta} = G_u\,\Delta u. \]

Based on properties of the hydraulic servo motors, the matrix G_p is a positive diagonal matrix representing the leakage parameters in the hydraulic motors and G_u is a positive diagonal matrix representing the servo valve parameters. The change in the generalized position of the end effector, based on the linearized approximation, is necessarily

\[ \Delta z = C\,\Delta\theta. \]

In the above equations Δθ, Δτ, Δu, and Δz denote the deviations of θ, τ, u, and z = Z(θ) from their equilibrium values. Now suppose that the closed-loop control is of the form

\[ u = \bar{u} + G_u^{-1}\left[\,G_d(\bar{\theta}_m - \theta_m) + G_f(\bar{\tau} - \tau)\,\right] \]

so that on linearization

\[ \Delta u = -\,G_u^{-1}\left[\,G_d B\,\Delta\theta + G_f\,\Delta\tau\,\right]. \]

The matrices G_d, G_f are referred to as feedback gain matrices. Consequently, the closed-loop equations are

\[ M\,\Delta\ddot{\theta} + K\,\Delta\theta = B^{T}\Delta\tau + C^{T}F_L \]

\[ K_m^{-1}\Delta\dot{\tau} + (G_p + G_f)\,\Delta\tau + G_d B\,\Delta\theta + B\,\Delta\dot{\theta} = 0 \]

\[ \Delta z = C\,\Delta\theta. \]

Thus, the conditions for equilibrium are that

\[ K\,\Delta\theta = B^{T}\Delta\tau + C^{T}F_L \]

\[ (G_p + G_f)\,\Delta\tau + G_d B\,\Delta\theta = 0. \]

Hence, the variation in the end-effector position can be expressed in terms of the load force F_L as

\[ \Delta z = C\left[K + B^{T}(G_p + G_f)^{-1}G_d B\right]^{-1}C^{T}F_L. \]

Thus, the constant matrix

\[ \bar{C} = C\left[K + B^{T}(G_p + G_f)^{-1}G_d B\right]^{-1}C^{T} \]

defines the "local" closed-loop static compliance of the end effector. Note that this end-effector compliance matrix depends on the mechanical parameters of the robot, the hydraulic leakage parameters and the feedback gain matrices, but does not depend explicitly on the hydraulic stiffness parameters. Suppose that the feedback gain matrices G_f and G_d are selected as diagonal matrices so that K + B^T(G_p + G_f)^{-1}G_d B is positive definite; then the local closed-loop static compliance matrix is necessarily symmetric and positive definite. Its largest and smallest eigenvalues define the maximum and minimum end-effector compliances, in directions defined by the corresponding eigenvectors. It is clear that to reduce the robot compliance, the feedback gain matrices should be chosen so that (G_p + G_f)^{-1}G_d is as large as possible. But the feedback gain matrices must also be chosen so that the closed loop is locally asymptotically stable, and this stabilization constraint limits the possible reduction in the robot compliance. Sufficient conditions for local asymptotic stability of the closed loop are satisfaction of

(a) K + B^T(G_p + G_f)^{-1}G_d B is positive definite

(b) K_m > (G_p + G_f)^{-1}G_d.

It is clear that the selection of the feedback gain matrices G_f and G_d involves consideration of a trade-off between closed-loop stability (or speed of response) and closed-loop compliance reduction.

References

[1] McClamroch, N. H., "Displacement Control of Flexible Structures using Electrohydraulic Servo Actuators," ASME Journal of Dynamic Systems, Measurement and Control, Vol. 107, March 1985, pp. 34-39.
[2] McClamroch, N. H., "Compliance at the End Effector of an Electrohydraulically Controlled Robot," Proceedings of the 1985 Conference on Decision and Control, Ft. Lauderdale, Fla., December 1985.

3.1.6.2. Motion and Force Control of the End Effector of a Manipulator in Contact with a Constraint

Investigator: Professor N. H. McClamroch

Research Objectives

This research has been carried out jointly with Mr. H. P. Huang, a doctoral student in the Department of Electrical Engineering and Computer Science. The research has been concerned with control of a manipulator whose end effector is in contact with a constraint surface. If the manipulator dynamics are adequately modeled to take into account the effect of continuous contact, significant control improvement may be achieved. Such dynamics are characterized by the existence of a closed kinematic chain. A closed kinematic chain manipulator can be regarded as a series of linkage mechanisms where the tip of the manipulator is in contact with a fixed rigid constraint manifold. The closed chain manipulator model provides a basis for the study of:

* motion/force planning
* (force) sensor-based servo control
* the interface between a manipulator and manufacturing tasks

Status of the Research

A brief summary of the research is now given. Let T ∈ R^n be the generalized input force vector and q ∈ R^n be the robot link position vector. M(q) denotes the inertia matrix, which is symmetric and nonsingular. F(q, q̇) comprises Coriolis terms, centrifugal terms, and gravitational terms. We define p ∈ R^n as the position vector of the end effector in workspace coordinates. The relation between robot coordinates and workspace coordinates can be expressed as

\[ p = H(q) \]

where H: R^n → R^n is twice continuously differentiable. Now we impose the holonomic path constraints

\[ \phi_j(p) = 0, \qquad j = 1, \ldots, m \]

where φ: R^n → R^m is twice continuously differentiable. These constraints define an (n − m)-dimensional manifold, and hence the effective number of degrees of freedom of the manipulator is (n − m). Let r ∈ R^n be the generalized reaction force vector required to maintain satisfaction of the path constraints; then the dynamic equations for a closed chain manipulator, taking into account the reaction force, are

\[ M(q)\ddot{q} + F(q,\dot{q}) = T + r \]

where

\[ r = J^{T}(q)\,D^{T}(p)\,\lambda \]

and λ ∈ R^m represents the multiplier vector required to maintain satisfaction of the constraints. Here

\[ J(q) = \frac{\partial H(q)}{\partial q}, \qquad D(p) = \frac{\partial \phi(p)}{\partial p}. \]

This model is represented by a set of ordinary differential equations and by a set of algebraic constraint equations. Such systems are said to be in singular or descriptor form, and the mathematical techniques associated with them are quite difficult. Some basic properties, particularly important in robot applications, have been developed. These results were announced in the last annual report and have recently been presented in [1]. During the last year, work has continued on these problems, with important results being obtained in the following areas:

* fundamental conditions for the well-posedness of constrained manipulator models, as described previously in terms of singular equations;

* methods for the reduction of the singular robot model described above to a nonsingular form; such reduction methods are useful both in developing the theory and in numerical control computations;

* formulation and development of optimal planning problems for end-effector-constrained manipulators when simultaneous control of the end-effector motion on the constraint manifold and the contact force is desired; existing optimization software can be used to solve such problems;

* extension of the computed torque control method to end-effector-constrained manipulators;

* simulation of responses of end-effector-constrained manipulators; special numerical integration algorithms, e.g. the use of backward difference formulas, are required for effective simulation;

* extension of the dynamic model to cases where the end effector may or may not contact the constraint; transition between configurations is important,

including modelling of possible impact forces between the end effector and the constraint;

* application to the problem of using a manipulator to perform a deburring operation along a circular arc on a spherical object; such an application would represent a major advance in achieving increased use of robotic technology in manufacturing.

References

[1] N. H. McClamroch and H. P. Huang, "Dynamics of a Closed Chain Manipulator," Proceedings of the American Control Conference, Boston, MA, June 1985.

3.2. Integrated Solid-State Sensors for Closed-Loop Robot Control

Investigator: Professor K. D. Wise

The development of improved sensors is an essential key to the eventual realization of robots capable of performing sophisticated tasks in nonstructured environments. Sensor development represents a primary focus for research in the Solid-State Electronics Laboratory at Michigan, where most past and present work has concentrated on sensors based on solid-state process technology. These devices typically combine transducers and interface circuits on a single monolithic sensor chip which provides high-level bus-compatible data to the host processor. As part of the total system, these sensors must be looked on as system elements and must be designed to allow the best in system-level performance. Thus, our approach to sensor development is increasingly aimed at the development of automated sensing systems. While the design of improved transducing structures and high-performance interface circuitry remains important, system development must also include features at the sensor to enhance its system role and must include the development of processor algorithms to allow machine interpretation of the measured data. Thus, sensor development is becoming a coordinated effort across several disciplines. Work to include self-test and remote-test capability on the sensor chip itself reflects this increasing emphasis on sensing systems.

A primary emphasis for our work in sensor development is provided by applications in integrated manufacturing: specifically, in sensors for robotics and, under a separate contract funded by the Semiconductor Research Corporation, for use in the automated manufacturing of VLSI semiconductor circuits. This section will summarize research conducted under the third and final year of the current Air Force contract. This research focuses on two specific robotic sensors: a tactile imaging array and a broadband non-contact thermal imager. These two projects are described below.

3.2.1. A High-Performance Silicon Tactile Imager Based on a Capacitive Cell

A number of tactile array sensors are now being developed to provide industrial robots with information on the distribution and amount of contact

force (pressure) between the workpiece and the robot gripper. While vision sensors have existed for several years and are becoming increasingly sophisticated, tactile imagers are only now emerging for use in applications ranging from assembly tasks to surface texture measurement. The significant drawbacks of robot systems without such sensors are insufficient flexibility, open-loop control, and an inability to detect and correct errors.

Figure 3.2.1.1. Cross-Section of the Silicon-Based Tactile Imager (deformable pad and outer skin, access hole, silicon sensor array, seal, reference cavity, glass plate, and support base).

The earliest approaches to tactile sensors were based on the use of carbon-impregnated foam-rubber pads, which change their resistance when compressed by applied force. These sensors were relatively simple but exhibited serious stability problems, both over temperature and over time. The majority of more recent development efforts are based on the use of piezoelectric polymeric films. These structures are still relatively simple; however, they lack DC response, are difficult to scale to different force ranges, and are not noted for high stability. In most such approaches, the stability of the polymeric film or pad sets the operating performance limits of the overall sensor, and sensor development requires a simultaneous optimization over both the electrical and the mechanical characteristics of the pad material. This typically forces compromises in the performance of the pad and is generally more difficult than optimization of the electrical or mechanical characteristics separately. Pad instabilities have been a serious problem in the development of high-performance tactile imagers.

These problems motivated a new approach for the tactile imager array which offers high resolution, high sensitivity, and high stability, and which can be easily scaled in its operating pressure over a wide range with no sacrifice in relative

performance. Typical performance requirements for tactile sensors intended for the high-performance end of the applications spectrum include at least 6 bits of resolution (grey scale) per pixel, an element spacing of 1-2 mm, high stability, and ruggedness in a difficult operating environment. A wide operating temperature range and fast response time are also required.

Figure 3.2.1.1 shows a cross-section of the proposed tactile imaging cell. A force/pressure-variable capacitor is formed between a selectively metallized glass plate (below) and a metal layer on the oxidized silicon wafer (above). The wafer is selectively thinned from the back to produce a silicon diaphragm above the capacitor. Deflection of this diaphragm in response to applied force varies the plate separation and hence the cell capacitance. The silicon wafer is electrostatically bonded to the glass plate. Force can be applied to the cell via a variety of pad designs, one of which is shown in Figure 3.2.1.1. A perforated cover plate is overlaid on the array for protection, and this plate is in turn covered by a compliant, replaceable pad and outer skin.

Figure 3.2.1.2. Block Diagram of the Readout System for the Tactile Imager (tactile transducer matrix, row select and drive circuits, column detection amplifiers, sample-and-hold circuits, column addresses, and control clocks).

This pad can be slit to decrease blooming

of applied local force if the pad compliance is high. The access holes coupling force to the cell are filled with a substance (such as Silastic) which acts as a force transmitter. While pad designs vary widely for applications which range from enveloping grasp to surface texture measurement, the important feature of this basic cell structure is that the pad is used only for force transmission and plays only a relatively minor mechanical role. The dominant structure in determining overall force sensitivity is the silicon diaphragm, whose properties are well controlled, easily scaled for various applications, and known to be stable over time and free of hysteresis. Since the total capacitive gap is very small, there is very little actual motion in the lower portion of the access chamber.

The capacitive cells are naturally fabricated in an X-Y array form. Row conductors run horizontally across the silicon wafer in slots which are simple extensions of the capacitive gap recess. Metal column lines run vertically on the glass under recesses in the silicon, expanding to form capacitor plates over the cell areas. Thus, a simple X-Y capacitive keyboard is formed which has precisely controlled dimensions and a force sensitivity set by the thickness of the silicon plate.

Figure 3.2.1.3. A Switched-Capacitor Column Readout Amplifier (V_out = V_m (C_x - C_R)/C_F).

The row-column capacitance change with applied force is [1]

\[ \Delta C = C - C_0 = \frac{\epsilon_0 A\,W}{S\,(S - W)} \]

where ε_0 is the free-space dielectric constant, A is the area of the metallized plates, S is the zero-pressure plate separation, and W is the diaphragm deflection due to the applied force or pressure (strictly, the deflection W(x,y) varies over the diaphragm, and the cell capacitance is obtained by integrating over the plate area). Since deflection is proportional to applied pressure, ΔC is linear with pressure for small displacements, becoming quite nonlinear for larger W. When the plates touch at the diaphragm center (W(0,0) = S), a natural strain relief is provided which significantly extends the damage threshold of the cell. Note that electrically the cell capacitance is unchanged with force scaling, i.e., the cell capacitance is not directly dependent on the force range, which can be set by the diaphragm thickness h.

One of the most important features of this approach to tactile imaging is the ease with which the capacitive array structure can be read out. Figure 3.2.1.2 shows a block diagram of the tactile transducer matrix and its interface circuitry along with the signal conditioning electronics. Data acquisition is controlled by 6 clock bits from the microprocessor. A counter generates row and column addresses which are fed to a row decoder and a multiplexer, respectively. The row addresses select one of the eight rows, and following the read operation the eight outputs from the row cells are simultaneously available from the column detection amplifiers using MOS switched-capacitor interface circuitry [2].

Figure 3.2.1.4. A Wire-Wrap Implementation of the Readout System.

The data during

one frame time is read by the NSC 800 microprocessor, calibrated for any nonuniformities in offset or sensitivity, and then stored in memory for subsequent real-time data processing by a host machine.

Since the readout amplifiers must be able to detect small variations in capacitance (a few femtofarads), high-performance readout circuitry is needed. The basic organization of the readout scheme is shown in Figure 3.2.1.3 for one cell. To read the array, an input digital address selects the desired row and steers clock phase φ1 to it. The complementary clock phase, φ2, is applied to a dummy row included at one end of the array. This dummy (reference) row has thick silicon behind the plates and is not force sensitive; however, its row-column capacitance is C_0, matching the active rows.

Figure 3.2.1.5. The Output Characteristic of a Typical Tactile Cell for Two Different Force Contact Areas (output voltage versus applied force in grams for circular loads of diameter 2.5 mm and 1 mm).

The two clocks induce complementary

charges onto the column lines, and the net column charge following clock switching is integrated by the readout amplifiers as shown. The output voltage after integration is

\[ V_{out} = V_m\,\frac{C_x - C_R}{C_F} \]

where V_m is the amplitude of the row drive clock, C_x and C_R are the transducer and reference capacitors, and C_F is the integration capacitor. The suppression of the parasitic capacitance C_ps is very important since it allows electronics to be eliminated from the array itself, simplifying the fabrication process.

Results to Date

Several 8 × 8-element tactile imaging arrays have been fabricated, implementing the readout electronics with commercial integrated circuits as shown in Figure 3.2.1.4. The clock and address generating circuitry on the left side of the board is physically separated from the data processing circuitry. A wire-wrap board is used to reduce parasitic capacitance and to simplify the connections between the array and the electronics. This implementation is being used to measure the performance of the fabricated arrays for testing purposes, and a new design is being developed for mounting on robot grippers. In the new design, only a tactile array and readout amplifiers are mounted on the support base inside the gripper. The diaphragm size is 1.2 mm × 1.2 mm × 10 µm, and the active array area is 1.6 × 1.6 cm.

Figure 3.2.1.5 shows the output voltage from a typical cell as a function of applied force. A localized force at the center of the diaphragm gives higher sensitivity and lower dynamic range than a uniform force applied to both the diaphragm and the support rim. Silicone rubber (400 µm thick) was used to couple force to the array. The force sensitivity is 15 mV/g at low load and 100 mV/g at maximum load, with a maximum operating force of 45 g per element for this design. The array is capable of a resolution and reproducibility exceeding 8 bits, and the operating force range can be varied over several orders of magnitude by controlling the diaphragm thickness.

Since rubber transmits force in every direction, coupling between adjacent cells is an important parameter in the pad design. Figure 3.2.1.6 shows the ratio of the crosstalk measured on an adjacent cell (with no load on that cell) to the primary output of the loaded cell, with the force range normalized to the maximum operating force. The crosstalk is only a few percent of the primary output, and while it represents one type of noise source for vertical force detection, it can also provide useful information regarding shear forces and slip.
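The per-cell calibration mentioned above (removal of offset and sensitivity nonuniformities before the frame is passed on for processing) is simple arithmetic; the Python sketch below is illustrative only, and the offset, sensitivity, and raw-frame values are hypothetical rather than measured.

```python
import numpy as np

ROWS, COLS = 8, 8

# Hypothetical per-cell calibration data, as would be obtained from a
# no-load frame (offsets) and a known-load frame (sensitivities).
offset = np.full((ROWS, COLS), 0.10)         # volts at zero force
sensitivity = np.full((ROWS, COLS), 0.015)   # volts per gram (~15 mV/g at low load)

def frame_to_force(raw_frame):
    """Convert one raw 8x8 voltage frame to a force image in grams."""
    force = (raw_frame - offset) / sensitivity
    return np.clip(force, 0.0, None)          # small negative values are noise around zero

# Example: a synthetic frame with a small contact near the center of the array.
raw = np.full((ROWS, COLS), 0.10)
raw[3:5, 3:5] += 0.30                         # roughly 20 g equivalent on four cells
print(frame_to_force(raw).round(1))
```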

Figure 3.2.1.6. The Ratio of Nearest-Neighbor Crosstalk to Primary Output (crosstalk ratio in percent versus normalized dynamic force, for circular loads of diameter 2.5 mm and 1 mm).

References

[1] Chun, K., and K. D. Wise, "A High-Performance Silicon Tactile Imager Based on a Capacitive Cell," IEEE Transactions on Electron Devices, Vol. ED-32, No. 7, pp. 1196-1202, July 1985.
[2] Park, Y., and K. D. Wise, "An MOS Switched-Capacitor Readout Amplifier for Capacitive Pressure Transducer," Proc. Custom IC Conference, pp. 383-384, May 1983.

3.2.2. A Silicon-Based Thermal Imager

Infrared imaging is potentially very useful for a number of important applications in robotics and integrated manufacturing [1]. However, commercially available thermal imaging arrays are of limited use in these areas because they

must typically be operated at 77 K or below, and their spectral sensitivity ranges are rather narrow. Moreover, the need for large arrays creates a challenging difficulty in signal readout, which ideally requires on-chip multiplexing. A silicon-based infrared detector array has been developed for use in automated process control and manufacturing applications where moderate levels of temperature resolution are acceptable and broad ambient temperature operation is desirable. The infrared detector being used is a polysilicon-gold thermopile fabricated on a silicon substrate. This approach is based on convenient silicon IC technology and is compatible with on-chip MOS circuitry for signal readout and signal conditioning. These arrays can be operated over a wide ambient temperature range with a broadband spectral response (> 40 µm) and a wide dynamic range (more than 10^6:1).

During the first two years, the essential technology for producing the thermopile array (thin dielectric diaphragm, double silicon rim structure, and the use of polysilicon-gold couples as the infrared sensing materials) was developed, and the first 8-element prototype thermopile arrays [2,3] were realized together with 60-element diaphragm window arrays. These arrays confirmed that our approach provides high yield, decent packing density, and on-chip circuit compatibility as well as moderate sensitivity. Also, a single-point thermopile detector was designed and fabricated to understand this diaphragm structure in more detail.

During the past year, work has concentrated on the design and fabrication of a 32-element linear imager and its system performance evaluation. Figure 3.2.2.1 shows the linear array being fabricated.

Figure 3.2.2.1. A 16 × 2 thermopile linear array with on-chip multiplexers. The chip size is 11 mm × 5.5 mm with a 10 µm feature size.

The full prototype linear array consists of two linear sub-arrays of 16 detector elements and two 16-to-1 analog

Figure 3.2.2.2. A cross-sectional view of the integrated detector array (Cr-Au, SiO2, polysilicon, B-doped gate, and B-doped <100> Si layers). The thermopile detectors and PMOS devices share three thin-film layers: dielectric layer, polysilicon layer, and metal layer.

The two sub-arrays are staggered with respect to each other by 300 µm to achieve an effective vertical element separation (center-to-center) of 300 µm. The cross-sectional view of the on-chip array is shown in Figure 3.2.2.2. Each detector element consists of 40 polysilicon-gold thermocouples supported by a 1.3 µm thick SiO2/Si3N4/SiO2 thin-film diaphragm window, each window measuring 0.4 mm × 0.8 mm. One lead from each detector is connected to a common ground line and the other is connected to one input of an on-chip (16-to-1) analog multiplexer. Binary address signals for the multiplexer decoders will be supplied by off-chip circuits. This chip also contains a separate calibration thermopile, polysilicon heating resistors, diodes, and MOS transistors to allow direct measurement of both the cold-junction temperature and the thermoelectric power of the polysilicon-gold thermocouples. The overall dimensions of this array chip are 11 mm × 5.5 mm using 10 µm design rules. A total of 8 masks are used for the fabrication of this array.

Figure 3.2.2.3 shows the functional block diagram of the electronics to read out, condition, and process the infrared signal from the on-chip thermopile array. Figure 3.2.2.4 shows the imager camera which will be used to house the linear imager array and the electronics shown in Figure 3.2.2.3. When the imager array is applied in the scanning-mode thermal imaging system being considered, a noise equivalent temperature difference (NETD) of 0.9 K and a minimum resolvable temperature difference (MRTD) of 2-3 K at f_t = 0.2 cycle/mrad are expected. For non-scanned (staring) applications in process monitoring, typical NETD values at 1 Hz electrical bandwidths are expected to be less than 0.01 K.
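As a rough numerical illustration of the series-thermopile behaviour of one detector element (V = N·α·ΔT, with N = 40 couples as stated above), the short sketch below estimates the open-circuit output for a few junction temperature rises. The Seebeck coefficient and the temperature rises are assumed, illustrative values, not measured properties of the fabricated device.

```python
# Back-of-the-envelope sketch of a thermopile element's output using the
# series-connected thermocouple model V = N * alpha * dT.  N = 40 couples per
# element comes from the text; alpha and dT below are illustrative assumptions.

N_COUPLES = 40            # polysilicon-gold thermocouples per detector element
ALPHA_V_PER_K = 300e-6    # assumed net Seebeck coefficient per couple (V/K)

def thermopile_output(delta_t_junction_k):
    """Open-circuit output voltage for a given hot/cold junction difference."""
    return N_COUPLES * ALPHA_V_PER_K * delta_t_junction_k

if __name__ == "__main__":
    for dt in (0.001, 0.01, 0.1):   # K of junction heating from incident IR
        print(f"dT = {dt:g} K -> V = {thermopile_output(dt) * 1e6:.1f} uV")
```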

Figure 3.2.2.3. Functional block diagram of the electronics for the linear array.

Work is now underway to complete the fabrication of the linear arrays and incorporate them in the above-mentioned imaging camera. This camera is being interfaced with image processor and display units and is expected to be used for a variety of applications in robotics and integrated manufacturing.

References

[1] Chin, B.A., J.S. Goodling, and N.H. Madsen, "Infrared thermography shows promise for sensors in robotics welding," Robotics Today, pp. 85-87, Feb. 1983.

[2] Choi, I.H. and K.D. Wise, "A linear thermopile infrared detector array with on-chip multiplexing," Digest of the International Conference on Solid-State Sensors and Actuators, pp. 132-135, Philadelphia, June 1985.

[3] Choi, I.H. and K.D. Wise, "A silicon thermopile-based infrared sensing array for use in automated manufacturing," to be published in the IEEE Transactions on Electron Devices.

Concluding Comments on Integrated Sensor Development

During the past three years, Air Force support has allowed the development of two new sensors which should find important applications in automated manufacturing. Tactile imaging is generally acknowledged to be a badly needed sensing capability, and most structures under development are aimed at a relatively low-performance end of the applications spectrum.

Figure 3.2.2.4. A prototype camera used to house the linear array thermal imager. The camera utilizes a 50 mm, f/1 germanium lens.

The device developed under this program offers the highest resolution (pressure or spatial) and highest stability (over time or temperature) of any structure thus far reported. It achieves this performance using well-established technology which is capable of high precision. Only five masking steps are required in the fabrication of the silicon chip, none of them critical. The readout scheme achieves high performance while allowing off-chip electronics to be used. We feel this imager offers a practical solution to one class of problems in tactile sensing and hope it will see wide application. Characterization of the basic device has now been completed, and work using such devices on grippers here at Michigan will continue under other contracts.

The thermal imager offers a new tool for non-contact thermography and infrared spectroscopy. The device is responsive over a broad spectral bandwidth, relatively fast, moderately sensitive (especially in staring applications), has a broad dynamic range, and operates without cooling over a wide ambient temperature range. Like the tactile imager, this device development is now virtually completed. Work to apply the device in automated manufacturing is continuing under funding provided by the Semiconductor Research Corporation. The basic thin-diaphragm thermopile structure is also being examined for possible application in a monolithic infrared spectrometer and in a highly-sensitive gas flowmeter.

3.3. Vision for Robotics

Vision is of such importance to robotics and manufacturing that it is reported separately from other sensing research. Vision will be used extensively for all manner of inspection and as a key component in automated assembly, welding and sorting operations. While commercial vision systems have recently been introduced to the marketplace, they are relatively crude devices incapable of adequately handling many important problems. The use of grey level information is rudimentary at best. The acquisition and use of range or depth data exists only in research labs, and still requires more research. Little has been done with moving objects. The "bin of parts" problem remains inadequately solved, and the processing time for many algorithms remains unusably long. Several aspects of these problems are being addressed by CRIM, including

* The development of enhanced edge detection algorithms: this is at the heart of many vision algorithms
* The development of vision algorithms for tracking and identifying moving objects (also problems involving moving cameras or movement of both object and camera)
* The reconstruction of three dimensional surfaces from visual inputs: among the goals here is dimensional part inspection
* The development of new, more effective algorithms for the recognition of occluded parts
* The direct optical processing of images: the goal here is to reduce processing time dramatically
* The analysis of vision operations to determine the class of special computer architecture best suited for implementing the operation
* Techniques for surface characterization in range images
* Multiple view object recognition: the camera is moved to obtain informative views for object recognition.

Vision research in our laboratory increased significantly in the last year. We have about 15 students addressing various aspects of computer vision. In the following sections, we present research that was supported by AFOSR.

3.3.1. Dynamic Scene Analysis: Event Recognition and Correspondence

Investigator: Professor Ramesh Jain

Research Objective

While dynamic scene analysis has been receiving increased attention recently, most researchers focus on a very few frames [1]. For example, optical flow can, theoretically, be computed over most of the image given only two frames [2]. Of course, no one has managed to DO this, or even to come close to doing it. At most, work in dynamic vision worries about obtaining velocities and then about what information is available in the disparity or optical flow field, e.g.

surface orientation. We have taken a different approach. When one thinks about dynamic scenes, it is clear that for many cases the motion in the scene will be smooth (bodies possess inertia) with occasional changes in velocity or acceleration. We choose to focus on EVENTS. If it is possible to detect events, one can use them for a variety of purposes. For example: events are interesting and, therefore, are a powerful focus of attention cue. Also, events will frequently arise because of interactions between objects, giving us a handle on object recognition that is not easily obtained otherwise. And events in smooth motions can be used to characterize the long term movements of objects. One can think of events as a way of SEGMENTING an object's trajectory, rather like edges can be used to segment a single frame. Events only make sense in terms of extended frame sequences and fairly smooth motions. We have only begun to skim the surface of the usefulness of the event concept, and we are excited about the possibilities [3].

Status of Project

The first problem, clearly, is how to DETECT these events. The immediately obvious approach of Kalman (predictive) filter tracking can be immediately dismissed, because changes in the system dynamics, that which we wish to obtain, are one of the problems of Kalman filtering [4]. We solved this problem of event detection by modifying an on-line fitting algorithm developed by O'Rourke [5].

Consider the case where we have ballistic motion, and where the coordinates are independent. The position of a particle x_i at time t_i is given by the usual

    x_i = (1/2) a t_i^2 + v t_i + p

where a is acceleration, v is velocity, and p is initial position. However, x_i is never known exactly. For our first experiments we require the point to be present at time t_i and, further, that its position is bounded by a noise factor. So the true position x'_i is bounded by

    x_i - n_i ≤ x'_i ≤ x_i + n_i

where n_i is a positional constraint on the noise of the point's position x_i. Thus,

    x_i - n_i ≤ (1/2) a t_i^2 + v t_i + p ≤ x_i + n_i.

Now x_i, n_i, and t_i are all known; a, v, and p are unknown. So the pair of inequalities describes two parallel constraint planes with normal ((1/2) t_i^2, t_i, 1) and displacements from the origin x_i - n_i and x_i + n_i. These planes constrain the allowed values of a, v and p. We can show that three (or more) planes for different times intersect to form a closed polyhedron. Suppose we have formed such a polyhedron for t_{i-2}, t_{i-1}, t_i. The values a, v and p are constrained to lie within it. A new x_{i+1} for time t_{i+1} provides a new pair of parallel constraint planes.

These planes will either (1) intersect with the polyhedron to form a new polyhedron or (2) give a null intersection with the previous polyhedron. Case (2) is the event detection. Case (1) gives a successive refinement of the object's motion parameters. We further note here that the polyhedron for a particular trajectory constrains the possible positions for the point in later frames if there is no intervening event.

We used this event detection to solve the correspondence problem of several points moving about in the scene. The correspondence problem is the problem of matching points, frame to frame, when in a single frame all points appear identical. Our solution to the correspondence problem is to hypothesize motions for the particles using the several polyhedra for the several hypotheses. If a polyhedron successfully predicts a point's next location, then the polyhedron is refined using case (1) above. If a polyhedron does not successfully predict a next location (case (2) above), then an event in that motion trajectory is signalled, and a new polyhedron is initiated using all the available points to extend the motion. Every time an event is signalled, there is a sudden explosion in the number of hypothesized trajectories. By requiring a minimum length of smooth motion, the hypothesis list is pruned. When a correspondence is requested, the complete list of hypothesized trajectories is used to COVER the data points, and the partition with the minimum number of events is chosen as the correspondence. In the event of two or more correspondences based on the minimum number of events, all are output; a simple event count is not sufficient to distinguish among them.

The advantages of this method are several. For one thing, the trajectories are not smoothed over events; events are localized and do not affect motion parameter assessment. For another, since all the hypotheses are kept around, a later correspondence will not be adversely affected by the choice of a wrong correspondence earlier on. For yet another advantage, the pruning method is very gentle, yet many spurious polyhedra are thrown out as having insufficient support. For another, the longer the algorithm is allowed to run, the better the results will be, and indeed the time and space order approaches O(n), where n is the number of points.

Future Research

There are several areas in which we want to extend this work. One is to minimize the start-up computational burden (O(n^3)) using local grey-level information. Another is to loosen the restriction on motion dynamics by using qualitative descriptions of motion rather than numerical descriptions. But most importantly, we want to use this concept of events and event detection as a primitive, as a tool for general motion understanding.
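A minimal sketch of the event-detection test described above is given below in Python. Each observation contributes the pair of half-space constraints on (a, v, p), and an event is signalled when the accumulated constraint set has an empty intersection. The sketch tests feasibility with a linear program rather than by explicitly constructing the polyhedron; that substitution, and all numerical values, are illustrative choices for the sketch, not the project's implementation.

```python
# Sketch of constraint-based event detection for 1-D ballistic motion.
# Each observation (t_i, x_i, n_i) contributes the two half-spaces
#   0.5*a*t^2 + v*t + p <= x_i + n_i   and   -(0.5*a*t^2 + v*t + p) <= -(x_i - n_i).
# An event is signalled when the accumulated system becomes infeasible.
import numpy as np
from scipy.optimize import linprog

def detect_events(observations):
    """observations: list of (t, x, n).  Returns indices where events occur."""
    A, b, events = [], [], []
    for i, (t, x, n) in enumerate(observations):
        row = [0.5 * t * t, t, 1.0]
        A += [row, [-c for c in row]]
        b += [x + n, -(x - n)]
        res = linprog(c=[0.0, 0.0, 0.0], A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(None, None)] * 3, method="highs")
        if not res.success:                  # null intersection: event detected
            events.append(i)
            A, b = [row, [-c for c in row]], [x + n, -(x - n)]  # start new trajectory
    return events

if __name__ == "__main__":
    obs = [(t, 0.5 * 1.0 * t * t + 2.0 * t, 0.05) for t in (0.0, 0.1, 0.2, 0.3)]
    obs += [(0.4, 5.0, 0.05)]                # abrupt jump: should trigger an event
    print(detect_events(obs))
```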

References

[1] Sethi, I. K. and R. Jain, "Finding Trajectories of Points in a Monocular Image Sequence," RSD-TR-3-85, Robot Systems Division, Center for Research on Integrated Manufacturing, College of Engineering, The University of Michigan, Ann Arbor, MI 48109, April 1985.

[2] Horn, B. K. P. and B. Schunck, "Determining Optical Flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.

[3] Haynes, S. M. and R. Jain, "Low Level Motion Events: Trajectory Discontinuities," Proc. IEEE Conf. Artificial Intelligence Applications, pp. 251-256, 1984.

[4] Chang, C-B. and J. Tabaczynski, "Application of State Estimation to Target Tracking," IEEE Transactions on Automatic Control, vol. AC-29, no. 2, pp. 98-109, February 1984.

[5] O'Rourke, J., "An On-Line Algorithm for Fitting Straight Lines Between Data Ranges," Communications of the ACM, vol. 24, no. 9, pp. 574-578, September 1981.

3.3.2. Partially Occluded Part Recognition

Investigators: Professors T.N. Mudge and R.A. Volz

Earlier reports described the phases in a research program that has resulted in a set of algorithms to recognize partially occluded parts (POP's). This work has been documented in [1-5]. In addition, a forthcoming thesis by Jerry L. Turney supplies a detailed description of the latest version of the algorithms, an exhaustive survey of earlier work on the POP problem, and benchmarks our method against those in [6,7]. These were chosen as yardsticks to measure the effectiveness of our approach because they are representative of two very different approaches to the POP problem. The method of Koch and Kashyap [7] is a Hough transform based approach that determines the location of parts by histogramming the position and orientation of part corners, while that of Ayache and Faugeras [6] is a synthetic approach that attempts to reconstruct parts in the image under the guidance of a heuristic search.

The set of algorithms (described in Turney's thesis) has been implemented as a Pascal program that has been transmitted to AFOSR. It currently runs on an Apollo workstation, and, to facilitate its use, makes extensive use of menus. Although the program was designed to work with Apollos, its only system calls are to programs that implement the Core Graphics System. Since this is a de facto standard for graphics software, the POP program should be usable with a wide range of hardware. A quick reference operator's guide illustrating the menu system is appended to this report.

The research program to develop algorithms for the POP problem has, until recently, considered only parts that are "2-dimensional". By this we mean parts with at least one dimension that is significantly smaller than the other two. This allowed us to assume that a part would always appear in something approximating one of its stable positions even when it was in a "bin of parts" situation. The positions can be regarded as quasi-stable. For 3-dimensional parts, i.e., ones that are not restricted to having one dimension much smaller than the other two, the notion of quasi-stable positions no longer applies.

Two new lines of research are underway to deal with 3-dimensional parts. Initially, both are working with non-occluded parts. The first line of work is

concerned with the use of moment invariants to characterize the shape of edge boundaries. The following paragraph summarizes some of the basic notions.

Given an edge boundary f(x,y) (see Figure 3.3.2.1), the family of central moments of f(x,y) about C, the centroid, is given by

    M_mn = ∫∫ x^m y^n f(x,y) dx dy

where x and y are measured relative to the centroid. It can be shown that the central moments are information preserving. More importantly from our point of view, they have some interesting invariant properties.

(1) They are invariant to translation in the x-y plane.

(2) Central moments normalized by a power factor,

    μ_mn = M_mn / M_00^((m+n+2)/2),

are invariant to scaling.

(3) There exists [8] an infinite set of combinations of central moments which are invariant to rotation. The first three are

    C1 = (M_20 - M_02)^2 + 4 M_11^2
    C2 = (M_30 - 3 M_12)^2 + (3 M_21 - M_03)^2
    C3 = (M_30 + M_12)^2 + (M_21 + M_03)^2

Looking for shapes in 2-dimensional images of 3-space scenes requires exploring a 6-dimensional space: x, y, z and three angles. The invariant properties of central moments reduce the number of dimensions to search from 6 to 2. The x-y degrees of freedom are eliminated by translational invariance, the z degree of freedom is eliminated by the scaling invariance, and one of the angular degrees of freedom is eliminated by the rotational invariance. This greatly simplifies the recognition process.

Figure 3.3.2.1. (Sketch of an edge boundary f(x,y) and its centroid C = (x_c, y_c).)
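A minimal numerical sketch of these quantities follows: it computes the central moments of a 2-D image with NumPy and evaluates the three rotation invariants listed above. The test pattern and grid size are arbitrary assumptions made only for the example; in practice normalized moments would be used so the values are also scale invariant.

```python
# Central moments M_mn about the centroid and the first three rotation
# invariants C1..C3 for a 2-D image f(x, y).  The test pattern is arbitrary.
import numpy as np

def central_moment(f, m, n):
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    total = f.sum()
    xc, yc = (xs * f).sum() / total, (ys * f).sum() / total
    return (((xs - xc) ** m) * ((ys - yc) ** n) * f).sum()

def rotation_invariants(f):
    M = lambda m, n: central_moment(f, m, n)
    c1 = (M(2, 0) - M(0, 2)) ** 2 + 4 * M(1, 1) ** 2
    c2 = (M(3, 0) - 3 * M(1, 2)) ** 2 + (3 * M(2, 1) - M(0, 3)) ** 2
    c3 = (M(3, 0) + M(1, 2)) ** 2 + (M(2, 1) + M(0, 3)) ** 2
    return c1, c2, c3

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:44, 28:36] = 1.0          # a simple rectangular test region
    print(rotation_invariants(img))
```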

Experiments are underway to test the effectiveness of this approach.

The second line of work is developing an algorithm for determining the location and rotational orientation of an unoccluded three-dimensional object given a digitized gray-scale image. The algorithm consists of (1) extracting an edge map from the image; (2) locating feature points, such as points of sharp curvature, in the edge map; (3) using intrinsic properties of the feature points in the image, such as signs of curvature of the feature points, to order a model database of perspective views for the object according to likelihood of correspondence to the image perspective; (4) for each perspective view in the model database, matching properties of the image feature points and object feature points in order to generate vectors of potentially corresponding points; and (5) verifying the most likely vector correspondences by examining the convergence of a least-squares fit of a rotation matrix to the data in each correspondence. The model database of perspective views is generated off-line, and sets of perspective views containing the same feature points are merged into common perspective views prior to run-time.

Several variations and extensions of the algorithms are being explored. One immediate application is to further refine the 2-dimensional POP program to untilt parts when they appear in heaps. This will make possible very accurate location of the parts for robot hand grasping.
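Step (5) above hinges on fitting a rotation matrix to tentatively corresponding point sets in the least-squares sense. One common way to do this is the SVD-based solution sketched below; it is offered only as an illustration of the kind of fit involved, not as the project's actual implementation, and the sample points are made up.

```python
# Least-squares fit of a rotation R minimizing ||R @ P - Q|| over corresponding
# 3-D point sets P and Q (columns are points), using the SVD-based solution.
import numpy as np

def fit_rotation(P, Q):
    """P, Q: 3 x N arrays of corresponding points (assumed already centered)."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.normal(size=(3, 8))
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = R_true @ P + 0.01 * rng.normal(size=P.shape)    # noisy rotated copy
    R_est = fit_rotation(P, Q)
    residual = np.linalg.norm(R_est @ P - Q)             # small residual = good match
    print(np.round(R_est, 3), residual)
```

A small residual after such a fit is the sort of convergence check referred to in step (5); a poor correspondence leaves a large residual and is rejected.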

References

[1] Turney, J.L., T.N. Mudge and R.A. Volz, "Recognizing Partially Occluded Parts," Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-7, No. 4, pp. 410-421, July 1985.

[2] Turney, J.L., T.N. Mudge and R.A. Volz, "Recognizing Partially Hidden Objects," Proc. of the IEEE International Conference on Robotics and Automation, pp. 48-54, March 1985.

[3] Turney, J.L., T.N. Mudge and R.A. Volz, "Recognizing Partially Hidden Objects," Proc. of the Society of Photo-optical Instrumentation Engineers Cambridge Symposium on Intelligent Robots and Computer Vision, pp. 108-113, November 1984.

[4] Mudge, T.N., J.L. Turney and R.A. Volz, "Experiments in Occluded Parts Recognition," Proc. of the Society of Photo-optical Instrumentation Engineers Cambridge Symposium on Intelligent Robots and Computer Vision, SPIE Vol. 449 (part 2), pp. 719-725, November 1983.

[5] Turney, J.L., T.N. Mudge, R.A. Volz and M.D. Diamond, "Experiments in Occluded Parts Recognition Using the Generalized Hough Transform," Proc. of the Conference on Artificial Intelligence, Oakland University, Rochester, MI, April 1983.

[6] Ayache, N. and O.D. Faugeras, "A New Method for Recognition and Position of 2-D Objects," INRIA, pp. 1-17, 1983.

[7] Koch, M.W. and R.L. Kashyap, "A Vision System to Identify Occluded Industrial Parts," IEEE Conf. on Pattern Recognition & Image Processing, pp. 55-60, 1985.

[8] Hu, M.K., "Visual Pattern Recognition by Moment Invariants," IRE Trans. on Info. Theory, pp. 179-187, Feb. 1961.

3.3.3. Applications of Optical Processing to Robotic Vision

Investigator: Professor E. Leith

General Research Objectives

The goal of this research effort was to apply optical data processing techniques as a method of reducing computational overhead in robotic vision applications. The following system constraints were imposed as a basis from which a practical system might be implemented.

(1) Hybrid optical-computer architecture for real-time operation.
(2) Utilization of incoherent light for optical processing.
(3) Capability to perform sophisticated recognition and classification algorithms.
(4) Dimensionally stable optical architecture yielding reliable, reproducible output with minimum operator input.

The robotic vision system replaces special purpose electronic hardware and software algorithms with "optical hardware". Significant advantages in system throughput can be achieved due to the parallel processing capability of optical systems. General tasks, such as feature extraction, can be performed in the imaging process, prior to detection and computer digitization. The source illumination should be incoherent in order to:

(1) Utilize existing scene illumination, eliminating the need for expensive incoherent-to-coherent conversion to perform conventional coherent signal processing.
(2) Improve the signal-to-noise ratio over coherent source illumination for standard coherent optical processing.
(3) Allow for the possibility of using incoherent structured scene illumination as a means of input coding.

The optical filtering system should be able to perform arbitrary filtering algorithms in order to compete with digital methods. The optical processing system should be both dimensionally stable and reliable to adequately perform in the factory environment. The system integration of the hybrid optical data processing system is shown schematically in Figure 3.3.3.1.

Overview of the First and Second Year

Initially, three dimensional contouring techniques were investigated utilizing structured illumination. Our primary interest was investigating methods for projecting structured incoherent light. We envisioned implementing a conventional detection technique similar to that of Lippincot and Stark [1] to detect bumps and scratches in three dimensional structures.

Figure 3.3.3.1. Hybrid Robotic Vision System. (Block diagram: camera input is pre-processed and features are extracted in optical hardware, with electronics and a micro-processor performing feature identification and producing the result.)

The grating interferometer was chosen as a fringe projection system because of its ability to form high contrast fringes under incoherent illumination. Using a grating imaging technique, it was shown that arbitrary frequency fringe planes could be projected about a test plane. By altering the degree of spatial and temporal coherence of the source, the depth of localization (volume) of the fringes could be adjusted about the reference plane. This provided a means of accurately locating object points relative to a reference plane over a wide range of tolerance requirements. These incoherent, low noise fringes could then be detected using a conventional camera system and the information processed by standard techniques [1]. We, therefore, developed an extremely high resolution technique of three dimensional object point sampling. Our efforts were then re-directed to address optical methods to process large amounts of three dimensional object information in quasi-real-time.

In the second contract period our research objective was to develop novel incoherent techniques to reduce the computational overhead involved in processing optical information serially via the digital computer. The grating interferometer was again chosen as a base architecture which might provide a solution to

this optical data processing problem. It is well known that the grating interferometer can produce a one dimensional Fourier transform of the intensity of the source structure [2]. It is this property which led to some interesting spatial filtering configurations that fit within the framework of our research objectives. This interferometric system (Figure 3.3.3.2) was shown to provide a means of performing one dimensional spatial filtering on a tilted plane. This effort exhibited promising results in the following areas:

(1) Reduced bias buildup under incoherent illumination.
(2) Reduced coherence improves the signal-to-noise ratio. The system performance will not be degraded by optical quality or environmental factors.
(3) Readily implemented in hybrid optical-digital configurations.
(4) Dimensionally stable and reduced sensitivity to mechanical vibrations.
(5) The potential to function effectively under white light illumination.

The analysis of this spatial filtering system provided basic insight to extend these one dimensional results to two dimensions.

Status of the Research Efforts

Gorlitz and Lanzl, Lohmann, and Rhodes [3-6] first considered two-channel incoherent spatial filtering systems. Incoherent object illumination is split into two paths, separately filtered in each path, and recombined at the output plane. If H1 and H2 are pupil function masks placed in each of the two channels, then the incoherent system transfer function is the autocorrelation of (H1 + H2).

Figure 3.3.3.2. Interferometric optical data processing system (source s, gratings G1, G2, G3, and filter masks in the two channels).

The terms H1★H1* and H2★H2* represent the autocorrelation functions that result from conventional one-channel incoherent spatial filtering, while H1★H2* and H2★H1*, the cross correlations of the two pupil functions, result from the presence of two simultaneous channels. Since H1 and H2 can be any complex functions, the resulting cross correlation will generate any arbitrary system transfer function.

Stoner [7-8] has described a system for producing the two channel incoherent system. As shown in Figure 3.3.3.3, an object distribution s(x) (in its simplest form, a transparency) is overlain with a diffraction grating. Provided the spatial incoherence is not too great, the light forming the 0, +1, and -1 diffracted orders of the grating will be essentially separated at the focal plane of the lens L1. Here, transparencies H1 and H2 are placed in the +1 and -1 diffracted orders, and the 0 order term is blocked by a stop. The two lenses, L1 and L2, together form an image of the object-grating combination.

Figure 3.3.3.3. Basic method for two-channel incoherent transfer function synthesis (source s, grating G, lenses L1 and L2, pupil masks in the focal plane, output plane Po).

Defining h(x) as the point spread function of H(f), the incoherent distribution at the output, or image, plane is then

    I = s*{|h1|^2 + |h2|^2} + s*{h1 h2*} cos(4π f0 x)    (1)

The output consists of a baseband term and a term embodying the desired filter operation on a spatial carrier. The baseband term is of no interest. The image on the spatial carrier is the cross product produced by the two channels together, and has been operated on by a transfer function that is the cross correlation of the pupil functions in the two paths. By choosing the pupil function masks H1 and H2 appropriately, the incoherent transfer function H = H1★H2* can be tailored in any desired way. The only limitation is that the pupil functions H1 and H2 must be bandlimited. This is a severe limitation, because the lens spatial frequency bandwidth capacity is not used efficiently. Despite this drawback, incoherent spatial filtering systems constructed in this manner have the same broad range of transfer functions as the coherent system.

We consider modifications that can extend the versatility of this method by combining a two-channel spatial filtering system in tandem with a grating interferometer [9]. We note three limitations of Stoner's technique. First, if the lenses in the system have to resolve fine spatial carrier lines, severe limitations are imposed on the system performance. Second, interferometers typically are limited in their broad-source fringe forming capability. Third, ultimately, one would like to use a technique which is suitable not only for transparencies, but also for cathode-ray-tube images or real-world objects. In these applications the illumination typically is completely spatially incoherent and lacks the monochromaticity of laser light; such conditions would arise in robotic vision situations. These types of highly incoherent sources give rise to severe difficulties.

The first of the above problems is solved as follows: the required two-channel system, along with the fringe-forming capability, is generated by an interferometer that is downstream from all the lens elements. The interferometer can be constructed from diffractive or reflective elements. Interferometers consisting of reflective elements, such as the Mach-Zehnder type, have severe source constraints [10]. Grating interferometers, constructed of diffractive elements, can form unlimited numbers of fine, high contrast fringes under very broad-source, polychromatic illumination [11]. In the system we describe (Figure 3.3.3.4), the lens L1 images the object to the output plane Po. An interferometer consisting of three diffraction gratings in tandem is placed on the image side of the system. This interferometer has the capability for forming high contrast fringes in completely incoherent light, that is, light that is both spatially and temporally incoherent.

Consider first the use of spatially incoherent monochromatic light. The interferometer is adjusted so that the localization plane of fringe formation coincides with the image plane. The first grating G1 splits the light into two beams, the +1 and -1 orders of the grating. The +1 order becomes the -1 order of G2 and again the -1 order of G3. The two beams are brought together at the output plane Po. The image distribution is then, to within an unimportant constant,

    I = s{1 + cos(4π f0 x)}    (2)

Figure 3.3.3.4. Two-channel system incorporating grating interferometer (lens L1, gratings G1, G2, G3, masks H1 and H2, output plane Po).

The image consists of two parts; one is simply the image that would be formed by L1 alone, the second is the same image on a spatial carrier. The transparencies H1 and H2 previously described are placed in the two beams, preferably at, but not restricted to, the focal plane (coherent Fourier transform plane) of L1. We assume the focal plane of L1 lies between G2 and G3. The output is identical to that described by Eq. (1). The baseband image is the sum of the incoherent transfer function of each independent channel. The carrier-centered image is operated on by the cross correlation of the coherent transfer function of each channel.

To demonstrate the capability of the interferometric spatial filtering system, a high pass filtering operation was performed. H1 was a circular stop of radius r0 centered in channel 1. H2 was an annulus of inner radius r0 and outer radius 32r0 centered in channel 2. The two channels were separated by a distance of 64r0 and located at the focal plane of L1. The object was located a distance do from L1 and the output was recorded at a position di. The grating frequency was f0. The interferometer was adjusted for broad-source fringes without the frequency plane mask.

The frequency plane mask was then inserted and the filtered object distribution was recorded on a photographic plate. (The recording medium could be a video camera followed by electronic baseband filtering for quasi-real-time operation.) The recorded output was then illuminated with monochromatic, spatially incoherent light and the +1 order was imaged with two-fold magnification to yield the result shown in Figure 3.3.3.5. The signal-to-noise ratio (SNR) is seen to be quite high, despite the tendency of the high pass filtering operation to degrade SNR. The ring around the object edge is sharp and bright. The noise that is observed is essentially due to scattering on the object; the usual coherent noise appears to be absent. The vertical line on the lower part of the letter originates from a scratch on the object emulsion. We would expect that results of this quality would be difficult to achieve in a comparable experiment performed under coherent illumination unless the lenses were essentially free of scattering centers, which by no means was the case here. Measurements on this system indicate that the resolution is limited by the resolving power of the lenses themselves, rather than by the limitations particular to the optical processing system (imaging of the object on a spatial carrier, aberrations due to gratings, etc.).

Figure 3.3.3.5. Two-channel incoherent high-pass filtering via the grating interferometer.
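To make the transfer-function synthesis of Eq. (1) concrete, the NumPy sketch below forms an effective transfer function as the cross correlation of two pupil masks, H = H1★H2*, using a small disc and a non-overlapping annulus as illustrative masks. The grid size and mask geometry are simplified assumptions for the sketch only; the experimental masks and offset-channel geometry described above are more involved, and the code models only the mathematical relationship, not the optical bench.

```python
# Numerical illustration of two-channel incoherent transfer-function synthesis:
# the effective transfer function on the carrier is the cross correlation of the
# two pupil masks.  Masks and sampling here are illustrative assumptions.
import numpy as np

def pupil_masks(n=256, r0=1.0, extent=34.0):
    ax = np.linspace(-extent, extent, n)
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy)
    h1 = (r <= r0).astype(float)                        # small disc
    h2 = ((r > r0) & (r <= 32.0 * r0)).astype(float)    # surrounding annulus
    return h1, h2

def synthesized_otf(h1, h2):
    # Cross correlation via the correlation theorem (masks are real-valued here).
    H = np.fft.ifft2(np.fft.fft2(h1) * np.conj(np.fft.fft2(h2)))
    return np.fft.fftshift(H.real)

if __name__ == "__main__":
    h1, h2 = pupil_masks()
    otf = synthesized_otf(h1, h2)
    c = otf.shape[0] // 2
    # Zero-frequency response equals the mask overlap, ~0 here: high-pass behaviour.
    print(otf[c, c], otf.max())
```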

This architecture has several significant advantages over other previously mentioned incoherent techniques. First, the interferometer is behind all the lens elements. Thus, the lens is not required to resolve the fringe pattern and, therefore, the full resolving power of the lens can be utilized for imaging the object. Stated equivalently, the lens spatial-frequency passband can be centered on the object distribution, whereas otherwise the portion of the object distribution that goes onto the carrier will be located far to one side of the center. Not only would the lens system require at least three times the resolution needed for merely imaging the object, but also the image would be attenuated by passing through a portion of the lens transfer function that is relatively low, resulting in low contrast fringes. Finally, the image would be distorted by the asymmetry of the transfer function about the values ±f0.

Another advantage is that the grating interferometer is constructed from transmissive rather than reflective elements; also, both beams traverse the same elements in almost a common-path configuration. Consequently, vibration and other environmental factors have much less effect on fringe stability. Thus, the interferometer can readily be operated under environmental conditions that would be difficult for many other interferometers.

Finally, as was previously noted, the symmetrical grating interferometer is achromatic, and can be used with white light. Thus, the image distribution of Eq. (1) could be formed equally well in light that is both spatially and temporally incoherent. However, since the spatial filtering system is not achromatic, broad-spectrum operation presents limitations on the transfer function H. If the source spectrum is broadened, the resulting transfer function becomes blurred. This limitation is minimized if the transfer function H is slowly varying as a function of fx and fy. For example, simple high pass filtering with a gradual cutoff should work quite well; however, spectral broadening will prevent sharp transitions in the system transfer function. In general, spectral broadening results in severe limitations on the complexity of the filtering operation. Alternatively, it is possible to achromatize the filtering process, so that a given pupil function would yield the same transfer function over a broad band of wavelengths. Such a process is analogous to achromatizing the Fourier transformation process in a coherent system, and techniques for accomplishing this have been demonstrated [12-17]. Presently, we are developing such achromatized systems, with preliminary results indicating this technique to be as promising as achromatization for the coherent case.

References

[1] Lippincot, H.W. and H. Stark, "Optical-digital detection of dents and scratches in specular metal surfaces," Appl. Opt., 21, 2875, 1982.

[2] Leith, E.N. and B.J. Chang, "Image formation with an achromatic interferometer," Opt. Comm., 23, 217, 1977.

[3] Gorlitz, D. and F. Lanzl, "Methods of zero-order non-coherent filtering," Opt. Comm., 20, 68, 1977.

[4] Lohmann, A.W., "Incoherent optical data processing of complex data," Appl. Opt., 16, 261, 1977.

[5] Rhodes, W.T., "Bipolar pointspread function synthesis by phase switching," Appl. Opt., 16, 261, 1977.

[6] Lohmann, A.W. and W.T. Rhodes, "Two-pupil synthesis of optical transfer functions," Appl. Opt., 17, 1141, 1978.

[7] Stoner, W., "Edge enhancement with incoherent optics," Appl. Opt., 16, 1451, 1977.

[8] Stoner, W., "Incoherent optical processing via spatially offset pupil masks," Appl. Opt., 17, 2454, 1978.

[9] Angell, D., "Incoherent spatial filtering with grating interferometers," Appl. Opt., 24, 1985.

[10] Bennett, F.D., "Optimum source size for the Mach-Zehnder interferometer," Jour. Appl. Phys., 22, 184, 1951.

[11] Swanson, G.J., "Broad-source fringes in grating and conventional interferometers," J. Opt. Soc. Am. A, 12, 1147, 1984.

[12] Katyl, R.H., "Compensating Optical Systems. Part 1: Broad-band holographic reconstruction," Appl. Opt., 4, 1241, 1972.

[13] Morris, G.M. and N. George, "Matched filtering using band-limited illumination," Opt. Lett., 5, 446, 1980.

[14] Morris, G.M. and N. George, "Frequency-plane filtering with an achromatic optical transform," Opt. Lett., 5, 446, 1980.

[15] Morris, G.M. and N. George, "Space and wavelength dependence of a dispersion-compensated filter," Appl. Opt., 19, 3843, 1980.

[16] Morris, G.M., "Diffraction theory for an achromatic Fourier transformation," Appl. Opt., 20, 2017, 1981.

[17] Collins, G.D., "Achromatic Fourier transform holography," Appl. Opt., 20, 3109, 1981.

3.3.4. Object Recognition and World Modelling using Multiple Views

Investigators: Professor R. Jain, R. A. Volz and H. S. Kim

The Problem and Research Objectives

We are addressing the problem of multiple view object recognition and world modeling [1]. By "world modeling" we mean the volumetric description of the workspace together with the name, location and orientation of each object in it. The world model thus built will be useful in many applications, among them collision avoidance, path planning and gripping. To make the complete world model, we need to identify and determine the location and orientation of all the objects in the workspace. Since an object may be completely occluded by another object from a particular viewpoint, the multiple view approach is required. Another problem arises from the similarity relationships among the

objects. If there are similar objects, such that two or more objects look identical except for some minor differences on particular faces (we call these distinguishing features), a single view cannot distinguish them from a viewpoint that does not show the feature. We will accumulate and refine the knowledge about the world from a sequence of views. At each intermediate step the world model is only partially complete. The partially built world model is used to select the next viewpoint to help remove the ambiguities in the world model.

The assumptions we make are as follows:

* Our system is model-based, that is, we have the model for every object that might appear in a scene.
* The distinguishing features are designated in the model by some other input, possibly human. They can be categorized into inter-object features and intra-object features.
* We can position the camera (imaging device) as desired.

In addition to these, we assume that a single-view object recognition process (SVORP) is capable of recognizing objects when the features are visible. This assumption will be relaxed in the later stages of our research.

Approach

The overall system works as follows [2]:

1. Take an initial image from an arbitrary viewpoint.

2. Recognize (identify, locate and orient) the recognizable objects by invoking SVORP. For unrecognizable objects, get the possible object list together with the range of poses which could give rise to the image obtained.

3. Make "cones" from the blobs. They are made by back-projecting the silhouettes into the world, with the tips at the current viewpoint. Associate the cones with the possible objects list or recognized object. This is the initial world model.

4. Select the next viewpoint by examining the world model and model database. The selection is based on several considerations and is the major theme of our research.

5. Get the next image from the newly selected viewpoint. Do steps 2 and 3.

6. Intersect the current cones with the objects in the world model. The resulting objects become objects in the refined world model, the unions of the possible objects lists (except for already recognized objects) become the new possible objects, and the intersections of the location and orientation ranges for each possible object become the new approximate ranges. For a "recognized" object in the world model, compare it with the object model in the database. If they conform, replace the object in the world model with the object model properly posed and make the possible object list a singleton. If not confirmed, maintain the volume with the recognized name together with the other names.

7. Repeat steps 4 through 6 until every object is recognized.

The method of selecting a viewpoint in step 4 above is divided into two parts: first determine the camera distance from a selected object, and then determine the direction. We first select an object to focus on, based on the following criteria.

* Select an object in the world model with a high possibility of recognition (seeing the feature) with one more view, since this will confirm an object and reduce the complexity.
* Choose an object that will make us select a view direction which is relatively far from the viewpoints examined so far.
* An object near the center of a cluster of many unrecognized objects is preferable, since it will give the chance of recognizing more objects with the next view.
* An object of more interest can be selected if there is any immediate need to recognize it.

After the object to focus on is selected, the distance of the camera is determined by considering the following constraints.

* Far enough to see as many objects as possible (or the whole workspace).
* Far enough to cover as much of the object surface as possible. (In general, we can see more of the surface from a viewpoint far from the object than from a near viewpoint.)
* Close enough that the features can be seen clearly and recognized easily by the object recognition process.

While the distance depends on the size of the object and the world, the direction depends on the shape of the objects and the disposition of the objects in the world. We have studied the relationship of coverable area with distance for objects. The direction is selected so that it will show the features clearly. When the location and orientation are known approximately, the direction is selected to maximize the possibility of seeing the feature using that information. If this information is not adequate, then a direction which covers a so-far unexplored region of the object will be selected.

Current Status and Future Research

We are conducting the research in two phases:

1. Graphic simulation using synthetic objects and a synthetic camera.
2. Real world tests using a camera and real objects.

We have made a database of synthetic object models, instantiated the object models in the world to make a synthetic world configuration, displayed the world configuration in perspective using a camera model, computed the blobs (the outer boundaries of object silhouettes), made the cones from the blobs, and intersected the cones to make the world model, but the viewpoint selection algorithm is yet to be implemented.
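The bookkeeping side of the world-model refinement in step 6 can be sketched as follows: for each volume, the possible-object lists from the old model and the new view are merged, while the approximate pose ranges are intersected, and a volume counts as recognized once its list is a singleton. The data layout and names below are hypothetical illustrations, and the geometric cone intersection that produces the volumes themselves is abstracted away.

```python
# Sketch of the step-6 bookkeeping: union of possible-object lists (except for
# already recognized objects) and intersection of approximate pose ranges.
# Data structures and sample names are hypothetical.

def intersect_range(r1, r2):
    """Intersect two closed intervals (lo, hi); returns None if disjoint."""
    lo, hi = max(r1[0], r2[0]), min(r1[1], r2[1])
    return (lo, hi) if lo <= hi else None

def refine(entry, view_entry):
    """entry/view_entry: dict with 'possible' (set) and 'yaw_range' (lo, hi)."""
    if len(entry["possible"]) == 1:          # already recognized: keep it
        return dict(entry)
    return {
        "possible": entry["possible"] | view_entry["possible"],
        "yaw_range": intersect_range(entry["yaw_range"], view_entry["yaw_range"]),
    }

if __name__ == "__main__":
    old = {"possible": {"bracket", "housing"}, "yaw_range": (0.0, 180.0)}
    new_view = {"possible": {"housing"}, "yaw_range": (90.0, 270.0)}
    print(refine(old, new_view))
```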

References 1. Hong, T. H. and M.O. Shneier, "Describing a Robot's Workspace using a Sequence of Views from a Moving Camera," National Bureau of Standards, 1985. 2. Kim, H. S., R.C. Jain and R.A. Volz, "Object Recognition Using Multiple Views," Proceedings of 1985 IEEE Conference on Robotics and Automation, pp. 2833, March 1985. 3.3.5. Machine Vision Workbench This project is a new line of research that has grown out of our work in computer vision algorithms and architectures for executing those algorithms. The workbench is intended to help researchers in computer vision to rapidly prototype new ideas. The premise for the computer vision workbench is that a quantitative improvement in the speed with which ideas can be tested will lead to a qualitative improvement in the concepts that are developed. We expect that the ability to rapidly prototype computer vision concepts will lead to the discovery of new concepts that might otherwise have been overlooked if conventional research methods were used. Figure 3.3.5.1 below illustrates the system that we have planned. The main components are an Apollo domain which consists of a number of high performance color graphics workstations, some file servers, and an interface for various vision sensors, initially cameras, into the domain. The interface which is contained in an Apollo DSP-80A is built around a smart frame grabber. This frame grabber is complete with high performance image processing hardware that will allow the user of the workbench to perform low level image processing functions such as edge detection, edge thinning, image smoothing, image rotation, etc. very rapidly. The results of these functions can be displayed on the Apollo, and intermediate and high level processing can also be performed on the results. The system will thus allow the user to acquire an image, look at it, perform a variety of operations on it, and view the result. An important component in all this, is the availability of various software modules to perform standard functions on images. For low level processing a number have been purchased along with the frame grabber, and others have been written by our researchers. Our long range plans are to establish a library of reusable modules to further simplify the users job. A user of the system will then be able to focus his energies on the research aspects of his work and avoid having to reimplement standard algorithms for image acquisition and low level processing. The evolution of this workbench is expected to move in the direction of building and specifying a computer vision library, and also to consider integrating non-vision sensors into the domain. Obvious candidates would be tactile sensors and various strain gauge sensors found in robot arms. August 1, 1984-July 31, 1985 60 Final Report

Figure 3.3.5.1. (Block diagram of the machine vision workbench: an Apollo domain of workstations and file servers with a network connection, a DSP-based camera interface, and cameras.)

3.4. Architecture of Robot Based Manufacturing Cells

The ultimate goals of much of our research are to bring some level of intelligence to sophisticated sensor-based robot systems. All of the sensors, sensing algorithms and controls described above, and the CAD model linkages described below, are essential ingredients in achieving these goals. But in addition, there is a critical factor, systems integration, i.e., the architecture of robot based manufacturing cells, without which these goals cannot be achieved. The research in this area is directed toward analysis and design of the distributed computing systems which can support complex distributed control and sensing algorithms, device controls, and database interaction, as well as the hardware and software tools which allow the systems designed to be implemented.

3.4.1. Object Based Experimental Robot Cell

Investigators: Professors R.A. Volz and T.N. Mudge

The Problem and Past Work

The basic problem being addressed in this portion of our research is the hardware and software complexity expected in future intelligent sensor based robot systems. The approach being taken to manage the system complexities

arising is the use of object based systems, at both the hardware and software levels [1], [2], [3]. At the hardware level we have used the Intel iAPX 432 multiprocessor computer system. At the software level we have used the Ada programming language. A multiprocessor based robot system consisting of an ASEA RB6 robot, a GE TN-2500 camera and the iAPX 432 computer, as shown in Figure 3.4.1.1, has been implemented. A standard binary vision system was implemented using this equipment. The implementation was distributed, however, with thresholding and data compression being performed via an attached processor, and the major feature extraction and recognition algorithms implemented in Ada on the 432. The control of the robot system was also distributed, with the world to joint coordinate transformations and the communications protocol to the robot being handled on an attached processor.

Status of the Research Effort

The principal accomplishment during the past year was the addition of run-time interference checking to the system, which is used as part of the grip determination strategy. The interference testing is based upon keeping a world model of the workspace. As each object is recognized and its pose determined by the vision system, a transformation from the world coordinate system of the robot to the reference coordinate system of the object is determined, as illustrated in Figure 3.4.1.2.

Figure 3.4.1.1. (The experimental cell: the iAPX 432 with interface and attached processors, disk, a VAX link, the ASEA robot, and the GE TN-2500 camera.)

The grip selection algorithm produces a list of feasible grips ordered by desirability [4]. Beginning with the most desirable grip, geometric interference testing of the robot hand with each of the other objects in the workspace is performed. The same interference testing algorithm used in the original grip list selection to eliminate grips having interference between the hand and the object to be grasped is used here.

The basic system is now complete, including the robot control, the vision system, the link to the off-line CAD system, the use of CAD model data for training of the vision system, and grip determination. A video tape of the running system is available.

Due to its relatively slow performance and difficult development environment, the iAPX 432 system has been retired. The focus of the workcell activity is now based on the Apollo based rapid prototyping system. Object oriented software will continue to play an important role. Although becoming available more slowly than originally expected, both Ada and Lisp environments for the Apollo are now expected shortly. Our work to date will be incorporated in the new Apollo based workcell.

Figure 3.4.1.2. (Coordinate transformation from the robot world coordinate system to the reference coordinate system of a recognized object.)
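A minimal sketch of the grip-determination loop described above: grips are tried in order of desirability, and the first grip whose hand pose does not interfere with any other object in the world model is accepted. The collision test is abstracted behind a hypothetical predicate, and the toy one-dimensional "poses" exist only to make the example runnable; nothing here reflects the cell software's actual data structures.

```python
# Sketch: choose the most desirable grip that passes run-time interference
# checking against the other objects in the workspace world model.
# `interferes(hand_pose, obstacle)` is a hypothetical geometric predicate.

def select_grip(ranked_grips, obstacles, interferes):
    """ranked_grips: list of (desirability, hand_pose), best first."""
    for desirability, hand_pose in ranked_grips:
        if not any(interferes(hand_pose, obs) for obs in obstacles):
            return hand_pose
    return None                      # no collision-free grip: replan or reject

if __name__ == "__main__":
    # Toy 1-D stand-in: a "pose" is an interval the hand sweeps; obstacles too.
    overlap = lambda a, b: not (a[1] < b[0] or b[1] < a[0])
    grips = [(0.9, (4.0, 6.0)), (0.7, (8.0, 9.0))]
    obstacles = [(5.0, 7.0)]
    print(select_grip(grips, obstacles, overlap))   # -> (8.0, 9.0)
```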

References

[1] Volz, R.A., T.N. Mudge and D.A. Gal, "Using Ada as a Robot System Programming Language," IEEE Transactions on Systems, Man and Cybernetics, Vol. 14, No. 6, November/December 1984.

[2] Volz, R.A., T.N. Mudge and D.A. Gal, "Using Ada as a Robot System Programming Language," Proceedings of the 13th International Symposium on Industrial Robots and Robots 7, pp. 12-42 to 12-57, April 1983.

[3] Turney, J., T.N. Mudge and R.A. Volz, "Recognizing Partially Occluded Parts," Trans. on Pattern Analysis and Machine Intelligence, PAMI-7, pp. 410-421, July 1985.

[4] Wolter, J., R.A. Volz and A.C. Woo, "Automatic Determination of Gripping Positions," Trans. on Systems, Man and Cybernetics, Vol. SMC-15, No. 2, pp. 204-214, March/April 1985.

3.4.1.1. Status of Research on Integrated Multi-Robot Systems

Investigator: Kang G. Shin

The Problem and Past Efforts

The problem is to design and analyze a flexible, fast, and fault-tolerant system that consists of multiple robots, CNC and DNC machines, sensors, transport mechanisms, and computers for an integrated manufacturing environment. We define the term "process" to mean an industrial (but not computational) process, which could be decomposed into several subprocesses. Each subprocess may be accomplished by executing a module in a computerized controller. Each module can be decomposed into computational tasks. The actions taken (in both the software and hardware) to achieve each subprocess also may or may not be dependent. We have developed a module architecture by using the following classification (Table 3.4.1.1.1) of IMRS processes [1]. In Table 3.4.1.1.1 we have named each of the four possible process classes appropriately. The formal definitions of each process class conform to the different interactions between subprocesses and their actions. Examples of each class are:

A. Independent Processes: Two robots exist on the same plant floor, but the work for each robot is independent of the other's and is blind to the other's existence. Each robot may depend on common state variables (e.g. a conveyor belt). The values of these state variables are determined by many different tasks, and thus simultaneous changes must be handled reliably (e.g. by use of a proprietor or administrator [2]).

B. Loosely-Coupled Processes: Tool sharing is an example of this class. If robot A is using tool T, another robot B may be forced into either waiting for tool T, or into performing another action not involving tool T.

The work of each robot is independent, but the individual actions taken are not. Collision avoidance between two robots executing independent processes but sharing the same workspace is another example of a loosely-coupled process.

C. Tightly-Coupled Processes: One example of a tightly-coupled process is two robots which must grab a long steel beam off a conveyor belt. The action of one process must be tightly coupled to the action of the other process; otherwise the beam could slip or damage could occur to a robot.

D. Serialized Motion Processes: We have chosen the name serialized motion because the most practical process illustrating this interaction involves serializing the actions of different robots. If subprocess A must be executed before subprocess B can commence, then A and B form a serialized motion process. The use of one robot as a generalized fixture for another robot is an example of this.

E. Work-Coupled Processes: This class is not listed in Table 3.4.1.1.1 because it is not a basic process class. If two processes are work-coupled, then should one process fail, the other will perform error recovery and take over the responsibilities of the failed process. It is obvious that the process will also be one of the four aforementioned processes. Work coupling may be one-way or two-way, depending on the ability of the equipment to be used toward either process. The furnace pit operation described in [3] utilizes two-way work coupling.

    Subprocesses    Actions        Process Class
    Independent     Independent    Independent
    Independent     Dependent      Loosely-Coupled
    Dependent       Dependent      Tightly-Coupled
    Dependent       Independent    Serialized-Motion

    Table 3.4.1.1.1. The Four Basic Process Classes

Each of the subprocesses will be programmed with a module. The module architecture refers to (i) the structure of a module, and (ii) the logical structure and/or communication channels that connect the modules in an IMRS.

Current Accomplishments

Designing a set of primitives for a language is a difficult task. The primitives should be general enough to solve a broad class of problems on many architectures, yet at the same time provide efficient, reliable, structured, elegant solutions to them. CSP, DP, and Ada have vastly different semantics, and although each language can be used to solve virtually any concurrent application, there are

wide variations in their elegance and efficiency [4]. However, the usefulness of each primitive in these languages has not been proven in real-time distributed systems. Using ports [5] takes major strides towards integrating individual modules, but the primitives dictate how easy it is to perform the communication and synchronization between modules. We have first identified the communication needs for each IMRS process class, which has then led to the selection of the following primitives (see [6] for a detailed discussion).¹

send, receive, and reply are used for both blocking and nonblocking message passing (see [2] for a good discussion on these primitives). The semantics are straightforward, as are their implementations. If task A issues a send to task B via a port, then task A will remain blocked until it has received a reply from task B. Task B executes a receive on a port. If task B executes its receive before the send has occurred, it becomes blocked. Task A remains blocked until a reply is executed by task B; thus every send-receive sequence requires a reply to unblock tasks. The reply is nonblocking because task B knows that task A is already blocked at a send, thus when the reply is executed, task B does not need to block.

    Primitive    Semantics
    send         Blocking send.
    receive      Blocking receive.
    reply        Nonblocking reply.
    query        Used to asynchronously invoke statements in one task from another
                 task. Preemption may occur depending on the priorities given in
                 the order statement.
    response     A block of code at the end of a task that is asynchronously invoked
                 by queries from other tasks.
    order        Used to prioritize conditions in a task.
    waitfor      Multiple-task synchronization and communications.

    Table 3.4.1.1.2. Communication Primitives Needed for an IMRS

¹ However, we will not discuss the actual design of a robot programming language, which requires other developments such as a real-time distributed operating system, CAD/CAM interface, etc., and is expected to take several years to complete.
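To make the send/receive/reply protocol concrete, the small Python sketch below models a port with a blocking send, a blocking receive, and a nonblocking reply, using threads to stand in for tasks. It is only an illustration of the semantics in Table 3.4.1.1.2, not the proposed language design, and every name in it is invented for the example.

```python
# Sketch of a port with blocking send, blocking receive, and nonblocking reply.
# Threads stand in for tasks; queues provide the blocking behaviour.
import queue
import threading

class Port:
    def __init__(self):
        self._requests = queue.Queue()

    def send(self, message):                 # blocking send: waits for the reply
        reply_box = queue.Queue(maxsize=1)
        self._requests.put((message, reply_box))
        return reply_box.get()               # sending task stays blocked here

    def receive(self):                       # blocking receive
        return self._requests.get()

    @staticmethod
    def reply(reply_box, answer):            # nonblocking reply
        reply_box.put(answer)

def owner_task(port):
    message, reply_box = port.receive()      # blocks until a send arrives
    Port.reply(reply_box, f"ack:{message}")  # sender is known to be blocked

if __name__ == "__main__":
    port = Port()
    threading.Thread(target=owner_task, args=(port,), daemon=True).start()
    print(port.send("move arm to station 3"))   # -> ack:move arm to station 3
```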

Two-way naming (CSP) can be attained by using a port user restriction. One-way naming (DP, Ada) can be attained by using a port without user restrictions. Nonblocking semantics are attained via a bounded-buffer port filter. An advantage of these primitives is that the protocol is a two-way message transfer, so remote procedure calls are effectively simulated, and the work done by Birrell and Nelson in creating reliable communications is applicable [7].

The query, response, and order statements are used to allow one task to interrupt another task. When a task needs information from another task, it queries the other task through a port. This is similar to an exception being raised in Ada or PL/I, except that it happens across task boundaries. This cannot be simulated by using multiple tasks, because tasks cannot share common variables. The appropriate response handler at the other end of the port is then executed. Two differences between the query-response mechanism and Ada exceptions are: (i) Ada does not allow parameters to be passed, and (ii) after an exception handler has executed, control does not continue from the interrupted point. The query is thus similar to a remote procedure call, except that it preempts the current thread of control. The query causes the response to be raised in the task that owns the port Portname, provided the user is doing the query. Alternatively, but less usefully, the owner could execute the query and one of the users would be interrupted. (A parent could query its children to check their status.)

A technical problem with the query-response is that in a real-time system, a more urgent operation should not be interrupted by a query. Silberschatz [8] has proposed an order statement, which is remotely similar to what we need. His order statement is used in CELL to specify the priorities of threads of execution as they become unblocked. The order statement is essentially a directive to a user-programmable scheduler. The order statement contains a list of the different sections of a task arranged according to their priorities; a preemption requested by a query will occur depending on the order. The sections of a task that appear in the order statement are the response handlers, procedures, functions, and background code. This gives the programmer real-time control over the different sections of a task, which is needed in an IMRS and is likely to be needed in other process control systems.
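As a rough illustration of the query/response/order interaction (the report gives no implementation, and true asynchronous preemption would require operating-system support), the sketch below approximates the semantics cooperatively: a queried handler runs immediately only if the order list ranks it above the section currently executing, and is deferred otherwise. The Task class, method names, and example sections are our own stand-ins.

    # Illustrative sketch only: cooperative approximation of query/response/order.
    # A real IMRS would need OS-level preemption; here a queried handler runs
    # immediately only if the order list ranks it above the current section.
    class Task:
        def __init__(self, order):
            self.order = order            # section names, highest priority first
            self.current = "background"   # section now executing
            self.handlers = {}            # response handlers, keyed by name
            self.pending = []             # queries deferred for later execution

        def response(self, name, handler):
            self.handlers[name] = handler

        def _rank(self, section):
            return self.order.index(section)

        def query(self, name, *args):
            if self._rank(name) < self._rank(self.current):
                return self.handlers[name](*args)     # "preempts" current work
            self.pending.append((name, args))         # deferred until later
            return None

    # Example: a status query outranks the background motion code.
    t = Task(order=["report_status", "background"])
    t.response("report_status", lambda: "joint torques nominal")
    t.current = "background"
    print(t.query("report_status"))   # runs at once: background has lower priority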

The last primitive is the waitfor primitive, which is needed to allow more than two tasks to synchronize and communicate. Consider, for example, how to perform three-way synchronization and communication with the other primitives. One approach is to have one task issue two consecutive receives. The other two tasks would then issue sends to this task via a port. This simple solution unfortunately has flaws: (i) the asymmetry allows communication only between the sending tasks and the receiving task; even though three tasks are synchronized, the two sending tasks cannot directly communicate. (ii) The solution is not very safe, since accidental misuse could easily occur if the wrong task entered the three-way synchronization by performing a send. (iii) The source code in all three tasks does not make clear what is really intended. (iv) This method is inefficient as the number of tasks grows. The problem is that the send-receive is designed for a two-way rendezvous only.

The waitfor primitive is our proposed primitive for performing an n-way rendezvous. A call to waitfor includes a message, a function name, and a list of the tasks with which to synchronize.² The semantics are as follows. When a task executes a waitfor, it remains blocked until all the tasks named in its waitfor list have executed a waitfor. When a set of tasks unblocks because their waitfor lists become satisfied, the named function in each waitfor is executed. When the function is completed, execution of the task continues after the waitfor. The functions have read access to all the messages pooled by the tasks involved in the synchronization via the waitfor. The rationale behind having these functions is that each task will have to respond differently according to the messages. The function is written by the user and returns a single message by operating on the pooled messages. To be correctly used, if task A executes a waitfor, it should not be allowed either to unblock other tasks yet remain blocked, or to unblock itself while a task on one of the unblocking tasks' waitfor lists still remains blocked. Since it is too costly to ensure this at run time, the user is made responsible for avoiding deadlock and ensuring correct usage. Note that this is not a language primitive but a system call that provides an easy-to-use method of multitask communication and synchronization. Further, note that since many tasks are involved in a symmetrical rendezvous, ports are not applicable, so the waitfor does not use ports.

To implement the waitfor, a message will have to be sent to every processor that contains a task in its waitfor list. One message would originate and be relayed among the necessary processors. Except for an unavoidable framing window, the synchronization occurs simultaneously. Once again, it is intended that each task unblocking because of another task executing a waitfor is named in all the waitfors of the unblocking tasks; that is, each unblocking task has identical waitfor lists. To require this would need run-time testing, and thus the looser semantics are preferred.

²It may even be possible to define a predefined array or record of task names. Rather than giving a list of task names to waitfor, the record could be given. This could speed run-time efficiency and may help debugging.
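The waitfor described above is a distributed system call that relays a message among processors; the following sketch is only a local, single-machine analogue of its n-way rendezvous semantics, using a barrier so that every named task blocks until all have arrived and each task's user-supplied function then operates on the pooled messages. The names WaitforGroup, pool, and the three example tasks are ours, introduced purely for illustration.

    # Illustrative sketch only (the report's waitfor is a distributed system
    # call; this local approximation uses a barrier for the n-way rendezvous).
    import threading

    class WaitforGroup:
        def __init__(self, task_names):
            self.names = list(task_names)
            self.pool = {}                                  # messages pooled by task
            self.barrier = threading.Barrier(len(self.names))

        def waitfor(self, task_name, message, function):
            self.pool[task_name] = message
            self.barrier.wait()               # block until every named task arrives
            return function(dict(self.pool))  # each task's function sees all messages

    # Example: three tasks rendezvous; each reduces the pooled messages its own way.
    group = WaitforGroup(["vision", "robot_a", "robot_b"])

    def run(name, msg, fn):
        print(name, "->", group.waitfor(name, msg, fn))

    threads = [
        threading.Thread(target=run, args=("vision", "part at (3, 5)", lambda p: p)),
        threading.Thread(target=run, args=("robot_a", "gripper free", len)),
        threading.Thread(target=run, args=("robot_b", "gripper busy", sorted)),
    ]
    for t in threads: t.start()
    for t in threads: t.join()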

Future Research

On the basis of the above framework, we will look at various aspects of IMRS implementation including, but not limited to, the following topics.

* Allocation and scheduling of IMRS tasks to a network of heterogeneous processors subject to timing constraints, i.e., hard deadlines. We will also consider interactions and precedence constraints among IMRS tasks.
* Optimal sharing of the workspace and tools among multiple robot arms. This will entail careful coordination of the robots and fast, reliable detection and avoidance of obstacles.
* Design and analysis of rule-based message handlers. Success of an IMRS depends heavily on the design of efficient and intelligent message handlers.

References

[1] Shin, K. G., Epstein, M. E., and Volz, R. A., "A Module Architecture for an Integrated Multi-Robot System," Technical Report RSD-TR-10-84, Robot Systems Division, Center for Robotics and Integrated Manufacturing (CRIM), The University of Michigan, Ann Arbor, MI, July 1984. Also appeared in Proc. 18th Hawaii Int'l Conf. on System Sciences, Jan. 1985, pp. 120-129.
[2] Gentleman, W. M., "Message Passing Between Sequential Processes: the Reply Primitive and the Administrator Concept," Software-Practice and Experience, Vol. 11, 1981, pp. 435-466.
[3] Steusloff, H. U., "Advanced Real-Time Languages for Distributed Industrial Process Control," Computer, Feb. 1984, pp. 37-46.
[4] Welsh, J., and Lister, A., "A Comparative Study of Task Communication in Ada," Software-Practice and Experience, Vol. 11, 1981, pp. 257-290.
[5] Silberschatz, A., "Port Directed Communication," The Computer Journal, Vol. 24, No. 1, 1981, pp. 78-82.
[6] Shin, K. G., and Epstein, M. E., "Communication Primitives for a Distributed Multi-Robot System," Proc. of Int'l IEEE Conf. on Robotics and Automation, March 1985, pp. 910-917.
[7] Birrell, A., and Nelson, B., "Implementing Remote Procedure Calls," ACM Transactions on Computer Systems, Vol. 2, No. 1, Feb. 1984, pp. 38-59.
[8] Silberschatz, A., "Cell: A Distributed Computing Modularization Concept," IEEE Transactions on Software Engineering, Vol. SE-10, No. 2, Mar. 1984, pp. 178-185.

3.4.2. Automatic Determination of Gripping Positions

Investigators: Professor R.A. Volz, Professor A.C. Woo

Research Objectives

The goal of this research is to determine the best way of gripping an object using a two-fingered manipulator. This problem is part of our general objective of automatically generating task-level programs for robot systems used in manufacturing.

Approach

Selection of good gripping positions requires careful consideration of several basic issues:

* Definition of a good grip
* Specification of the geometry of the objects involved
* Methods of computation

Typical "good grip" conditions include the reachability [1], [2] of the grip position by the robot without interference [3] between the robot and the object to be grasped, and the stability [4]-[6], or immobility, of the grasped object under external forces and moments. Asada [4] has given an alternate definition of stability: stability exists when a small relative displacement between a gripped part and the gripper causes a restoring force that brings the part back to its original position. Both Asada [4] and Paul [1] have also suggested that a good heuristic for maintaining stability is to grip an object near its center of mass.

We take a slightly different approach in determining a good grip. To evaluate the "goodness" of a grip, we take the force exerted by the gripper as given, and compute the minimum coefficient of friction required for the object not to slip. This coefficient of friction allows comparison between different grips for a given set of forces. The smaller the coefficient of friction required, the better the grip.

To assist in selecting good gripping positions, we make several assumptions. We assume that the robot system and all objects to be manipulated are adequately described by computer-aided design and other databases. We also assume that all objects that we will work with can be approximated by polyhedral models. Finally, we assume that the gripper faces are parallel.

Status of Research

During the last year, run-time interference checking has been implemented, and the model of gripping has been improved considerably. The previous model made two approximations for simplicity. This past year's work has eliminated these approximations and extended the range of solutions. The first approximation was that external forces applied to the object were in the gripping plane and that the center of rotation of the object was at the centroid of its contact area with the gripper. This is a good approximation for grips that are not located too near the center of mass, but inadequate for grips that are near the center of mass. The center of rotation is now computed from a set of equations that resulted from a complete analysis of the friction effects. The second approximation was that the gripping pressure was constant over the surface of the contact area with the gripper. The new model represents this pressure by a linear distribution over each of the gripper faces.

Run-time Interference Checking

The implementation of an earlier algorithm [7] has been extended this past year to include run-time interference checking. This algorithm used the model of uniform pressure over the face of the gripper, and thus was not valid for forces and moments that are out of the plane parallel to and between the two gripper faces. Instead, other measures were used to evaluate resistance to twisting during motion, and twisting during closing of the gripper. There were three stages to the algorithm. The first two were implemented off-line on a VAX 11/780 and included proposing a set of grip positions for a part such that each potential grip had no interference between the gripper and the

part. The second step was to evaluate the grips according to the uniform-pressure model of gripping and the other twisting criteria mentioned above. Lists of good grips, in order of desirability, were output from the algorithm and transmitted to an experimental iAPX 432 system, where they were used in a run-time system for automatically selecting gripping positions. This system used a vision system to obtain and match boundary representations of the parts. Once a part was identified, a non-interference check was made for the grips proposed from the off-line system, in the order of desirability, until a suitable grip was found. The system then used this grip in grasping and moving the part. The system was fully implemented and made operational.

[Figure 3.4.2.1. Required coefficient of static friction versus distance of the grip from the center of mass, comparing the accurate method with the original method.]

Testing the Center of Slip Approximation

In our previous model, we made the approximation that an object that began to slip in a gripper would slip around the centroid of its contact area. This model produced optimistic values for the coefficient of friction and was therefore adjusted by some small amount, c, to produce more accurate values. The new model, described in a later section, removes the approximation and was used to test the accuracy of the old model. Figure 3.4.2.1 shows a plot of the coefficient of friction versus the distance of the gripper from the center of mass of the object. The old approximation is quite accurate except when the gripping position is very close to the center of mass of the object. As the gripping position becomes closer to the center of mass, the center of slip moves farther away and the coefficient of friction approaches a limiting, non-zero value.

[Figure 3.4.2.2. The linear pressure model over the gripper faces.]

Linear Pressure Model

Our previous model also approximated the gripper pressure as constant over the entire gripping surface. This is only true when the gripped object has no forces or moments acting on it that are not within the plane of the gripper faces. A more accurate linear model is now used. This model is shown in Figure 3.4.2.2. Gripping pressure is higher where the object is pushing into the gripper, and lower where it is twisting away. The pressure can be represented by

    P_I(x, y)   = a_{I0}  + a_x x + a_y y
    P_{II}(x, y) = a_{II0} - a_x x - a_y y

where P_I and P_{II} represent the pressure at a point (x, y) for gripper faces I and II respectively, a_x and a_y represent the slopes of the linear pressure distribution in the x and y directions, and a_{I0} and a_{II0} represent the pressure on each gripper face at the origin.

Computing the Center of Slip from Friction Effects

When an object begins to slip in the gripper, we assume that it rotates about some point (x, y). Using balances of forces and moments, the precise location of this point is computed. The forces and moments exerted by friction are equal and opposite to the externally applied forces and moments. This yields six force/moment equations: three for the force directions and three for the moment axes. One more equation results from the gripping force being equal to the integral of the pressure over the area of each of the gripper faces. This gives a total of seven equations. There are also seven unknowns: x, y, a_{I0}, a_{II0}, a_x, a_y, and μ, the coefficient of friction. Since the equations are highly nonlinear and cannot be solved explicitly, an implicit technique is used. The seven equations were used to create an error function by subtracting and squaring the right sides from the left sides of the equations. This error function is zero only where all the equations are satisfied. A conjugate-gradient method minimizes this function [8].
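The report does not list the seven balance equations explicitly, so the sketch below only illustrates the numerical approach just described: square the residuals of a set of equations into an error function and minimize it with a conjugate-gradient routine. The residual expressions here are stand-ins chosen for illustration, not the report's force, moment, and pressure equations.

    # Illustrative numerical sketch only: the residuals below are stand-ins,
    # not the report's seven force/moment/pressure equations. It shows the
    # approach: square the residuals into an error function and minimize it
    # by conjugate gradients (here via SciPy's nonlinear CG).
    import numpy as np
    from scipy.optimize import minimize

    def residuals(u):
        """Stand-in residuals r_i(u) = lhs_i - rhs_i of the balance equations.
        u = (x, y, a_I0, a_II0, a_x, a_y, mu)."""
        x, y, a_i0, a_ii0, a_x, a_y, mu = u
        return np.array([
            a_i0 + a_x * x + a_y * y - 1.0,   # placeholder pressure condition
            a_ii0 - a_x * x - a_y * y - 1.0,  # placeholder pressure condition
            mu * (a_i0 + a_ii0) - 0.4,        # placeholder friction balance
            x - 0.1 * mu,                     # placeholder moment balance
            y + 0.05 * mu,                    # placeholder moment balance
            a_x - 0.2,                        # placeholder slope condition
            a_y - 0.1,                        # placeholder slope condition
        ])

    def error(u):
        r = residuals(u)
        return float(r @ r)       # zero only where every equation is satisfied

    u0 = np.zeros(7)
    sol = minimize(error, u0, method="CG")    # conjugate-gradient minimization
    print("solution:", np.round(sol.x, 3), "residual error:", sol.fun)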

One further consideration is that there is nothing inherent in the above model of pressure that would prevent the pressure from becoming negative over a particular region of the contact area. In reality, this negative pressure could only be exerted if the object were glued to the gripper and could thus exert a pulling force on it. What really happens is that the object becomes slightly separated from the gripping surface and the pressure over that area is zero. The algorithm therefore includes a step of clipping the areas of contact where the pressure equations give a negative value.

Results

Our current implementation of the model is a tool that evaluates the goodness of a given grip for a given set of applied forces and moments. We are now using it by applying a representative set of forces and moments from all directions and examining the resulting ratings. But the best way to use the tool would be to use the trajectories from a path-planning algorithm and predict the forces and moments that the object will experience. Taking the worst of the resulting grip ratings would yield the goodness of the grip.

Future Research

The current implementation of the gripping algorithm uses a linear model of pressure. Future work will examine piecewise-linear models where the pressure has a different linear distribution across each of many facets of the polyhedral model of the gripped object. This should result in a more general algorithm.

References

[1] Paul, R.P., "Modeling, Trajectory Calculation, and Servoing of a Computer Controlled Arm," AIM 177, Artificial Intelligence Lab, Stanford University, November 1972.
[2] Taylor, R.H., "The Synthesis of Manipulator Control Programs from Task-Level Specifications," AIM-282, Artificial Intelligence Lab, Stanford University, July 1976.
[3] Laugier, C. and J. Pertin, Automatic Grasping: A Case Study in Accessibility Analysis. Barga, Italy: NATO - Advanced Studies Institute on Robotics & Artificial Intelligence, June 1983.
[4] Asada, H., Studies in Prehension and Handling by Robot Hands with Elastic Fingers (Ph.D. dissertation), University of Kyoto, 1979.
[5] Wolter, J.D., A.C. Woo and R.A. Volz, "Gripping Positions for 3-D Objects," Proceedings of the 1982 Meeting of the Industrial Applications Society, pp. 1309-1314, October 1982.

[6] Laugier, C., "A Program for Automatic Grasping of Objects with a Robot Arm," Proceedings of 11 Int. Symp. on Industrial Robots, pp. 287-294, October 1981. [7] Wolter, J.D., R.A. Volz and A.C. Woo, "Automatic Generation of Gripping Positions," 4th Jerusalem Conference of Information Technology, May 1984. [8] Shanno, D.F., Phua, K.H. "Minimization of Unconstrained Multivariate Functions" ACM Transactions on Mathematical Software, Vol 6., No. 4, December, 1980, pp. 618-622. 3.4.3. Graphical Programming of the Geometric and Logic Aspects of a Robotic Cell Investigators: Professor A.W. Naylor, Professor R.A. Volz The Problem and Past Efforts It is widely accepted that programming of robots must move from the current emphasis on "teaching" to offline programming. Among other things it is just too expensive to shut down a plant to program several hundred robots by the teach method. One answer has been the development of languages for programming robots. However, it is relatively difficult to use such languages in complex geometric situations, and this has led to wide interest in graphical programming of robots. Being able to look at a picture of a robot and its environment is obviously a great aid to understanding of the geometric situation. However, it is important to appreciate that a major part of a robotic cell's situation is logical in nature. Moreover, and this is a key point, the logical and geometric aspects blend together. For example, binary sensors have a location and a logical variable associated with them, and our view is that a graphical programming system should allow the programmer to see and interact with this blend of geometry and logic. Once in the world of graphical programming, it becomes possible, in fact, necessary, to use graphics to show and to alter various representations of the program and its structure. Thus, the programmer can "see" the geometric and logic structure of the robotic cell, and he or she can see the structure of the program being developed. Most importantly, the interplay between the two can be seen. As reported in the previous AFOSR annual report, a basic framework for such the system has been implemented and demonstrated. This framework included the abilities to * display and manipulate graphical representations of objects and robots, * view from different perspectives with different scales, and to * generate robot code corresponding to the graphical manipulation of the robot. The system displays a typical scene in under two seconds. While this response is not sufficient for real-time (machine to machine) applications, it is sufficient for the human oriented, off-line programming, and impressive considering the cost of the hardware involved. The preliminary concepts for inclusion of binary sensors August 1, 1984-July 31, 1985 74 Final Report

and actuators in the systems were developed, as were the concepts for logical programming, e.g., conditionals and looping. Status of the Research The principal efforts during this past year have focused on two areas: * Extension and development of the work to include sensors, actuators, logical conditions and control constructs into the programming system. * Redesign of the underlying system to make it possible to easily modify the system and add new functions to it. The first activity continued the work reported in last year's annual report and led to implementation of the principal ideas. It became evident during the implementation of the binary sensor and actuator support, that, while it was possible to included these functions, there would be great advantage to future additions if the structure of the underlying system were redesigned in light of our experience over the past two and one half years. This led to the activity in the second area indicated above. Binary Sensors and Actuators Sensors and actuators are part of all but the simplest robot systems. It is insufficient, therefore, for a graphical programming system to only address the matters of manipulation of the robot. It is necessary that programming methods Figure 3.4.3.1 Final Report 75 August 1, 1984-July 31, 1985

include capabilities for manipulating sensors and actuators. There are three aspects to accomplishing this: * Introducing conditionals and alternative actions into the graphical programming concept. * Integration of sensors and actuators into the graphical system. * Providing convenient user interaction. Each will be discussed below. Prior to inclusion of sensor and actuators, an output program was a sequence of move instructions, generated in accordance with motions of the graphical representation of the robot produced by the programmer. The only actuations allowed were the binary operations of the opening and the closing of the gripper. These were permitted to be interspersed among the move commands. The selection of the kind of operation was accomplished by selection from a menu: the motion of the graphical representation of the robot was determined by graphical input from a joystick. In order to include conditionals and alternative actions, the concept of a conditional block was introduced. Such a block is introduced into the program stream by making an appropriate selection from the menu system. Typical selections would be an IF..THEN or a DO WHILE. both of these selections require a Figure 3.4.3.2 August 1, 1984-July 31, 1985 76 Final Report

condition which must be tested. Typically, the conditionals involve sensor inputs. After selecting a conditional block entry, the user is automatically placed in a conditional definition mode. After entering the conditional (discussed below), the user proceeds to enter programming commands as before. When the end of the block is encountered, he/she simply selects an end-of-block menu entry. The beginning and ending block selections are shown in figures 3.4.3.1 and 3.4.3.2. Sensors and actuators have both a physical presence in the workspace and logical values associated with them. They are represented on the graphics screen as two dimensional icons, triangles for sensors and squares for actuators. They are manipulated as other objects in the graphical programming system, except that they have only the two dimensional presence. They may be labelled, and during an initialization phase of their creation, the programmer must bind them to one of a set of binary I/O signals to/from the robot control system. To make their presence clearly distinct from other objects in the workspace, they are colored differently from other objects in the system. Their state is indicated by filling in the interior of the icon for a TRUE(on) condition, and leaving the icon hollow for a FALSE(off) condition. Typically, the user would place the sensor and/or actuator icons in the vicinity of the location of the actual sensor or actuator. Figure 3.4.3.3 shows typical scene with several sensors and actuators. The user interaction with the system for entering conditionals and looping was partially described above. What remains is to describe the manner in which the conditional expressions themselves are defined. Upon selecting a menu entry which indicates the beginning of a conditional block, the user is automatically placed in a conditional definition mode. Boolean expressions are created by selecting AND/OR options with the buttons on the joystick and touching the sensor involved include it in the expression. Sensor values or complement values are selected by touching the sensor until it shows the desired state. Successive touches to the sensor cause a toggling between including its value directly and its complemented value. The part of the expression involving a particular sensor is not frozen until the programmer moves to another sensor. At the present time, conditionals involving an arbitrary number of sensors and an arbitrary canonical Boolean expression have been implemented. Binary actuator outputs are similarly implemented. A test sorting cell involving three sensors and three actuators has been built and several programs using the extended graphical programming system produced. As with the original system, the programming method is easily learned, and a novice can successfully produce programs after only an hour of training. Extended Graphic Programming System Design One of the principal characteristics of the graphics programming system is its continual growth and evolution. Its capabilities with respect to explicit programming features have been expanding at a steady pace. In the future it will be desired to incorporate higher level program generation aids which have been developed in other parts of the project, e.g. grip determination or vision system Final Report 77 August 1, 1984-July 31, 1985

Figure 3.4.3.3 interaction. And, it is desirable to be able to easily emit code for any of a number of robots, not just the PUMA. Our experience with developing the current graphical programming system has led to a number of organizational concepts, which, if fully developed and implemented, would make it possible to readily accommodate the evolution and growth which is taking place now and which will accelerate in the future. We have therefore begun a redesign of the structure of the underlying system. The new system will be very flexible and make it possible to very easily add new capabilities to the system or even significantly revise its structure again. The redesign is not complete at this time. We report here the fundamental ideas underlying it and the directions being taken in completing the design. The redesign is based upon a careful separation and modularization of both data representation (graphic and robot program) and functions of the programming system. We begin by recognizing that the graphics programming system must produce not one, but three interleaved programs, one for directing motion of the robot, one for specifying and maintaining a graphic representation of the environment, and one representing the exogenous inputs to the system. The first is sufficient for producing actual code for the robot, but the second two are necessary for graphic simulation and editing of a program. The key elements in the new system are: August 1, 1984-July 31, 1985 78 Final Report

* An intermediate (robot) program representation which incorporates information about the environment in which the robot must operate.
* A flexible support system which augments the menu system to make program generation and editing easy and natural.
* A modifiable menu system which allows the system to be easily extended and tailored to suit individual users.
* An evolvable programming and editing capability.

Each of these is discussed briefly in the following sections.

Environments and their States

To describe the environment portion of the program we need a definition of the environment. To define environments we introduce the concepts of atomic and composite objects. An atomic object is a five-tuple (Sld, Sur, Ln, Pt, O), where Sld is a, perhaps empty, set of solids; Sur is a, perhaps empty, set of surfaces; Ln is a, perhaps empty, set of lines; Pt is a, perhaps empty, set of points; and O is a binary relation on Sld ∪ Sur ∪ Ln ∪ Pt denoting ownership. A solid may "own" surfaces, lines, or points; a surface may "own" lines or points; and so on. For example, a solid block might own six surfaces. Each surface could own four lines, and each line could own two points. Each atomic object has a type, and there may be many different objects of the same type; for example, there may be many objects of type ball. The components, that is, the solids, lines, surfaces, and points, of an atomic object are all defined in terms of a common implied coordinate system. The atomic object and some of its components may have names, though none is required. Sets of solids, surfaces, lines, and points are also allowed to have names. This allows complex parts of an atomic object to be referred to by name. The type of an atomic object is, however, always named. If Sld, Sur, Ln, and Pt are all empty, then the atomic object is just a coordinate system.

A composite object is a collection of atomic and composite objects together with a tree of coordinate transformations. Each object except one (the root object) is related to exactly one parent object by a coordinate transformation. Constituent objects are allowed to be empty. For example, it is often convenient to have an empty root object. Composite objects are created by joining other objects or by decomposing objects into constituent parts. A composite object and its constituent parts may have names. In contrast to atomic objects, they need not have a type.

We would like to say that an environment is a collection of objects; however, this would not allow, for example, a part to enter and leave the cell. Consequently, we include all the objects that will ever be of interest. We will use the terms "active" and "inactive" to distinguish those currently in and not in the workspace.
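A minimal sketch of the atomic/composite object model just described is given below in Python; the class and field names (AtomicObject, CompositeObject, owns, children) are ours, not the report's, and the transform is shown only symbolically.

    # Minimal sketch of the atomic/composite object model described above.
    # Class and field names are illustrative, not taken from the report.
    from dataclasses import dataclass, field

    @dataclass
    class AtomicObject:
        type_name: str                      # every atomic object has a named type
        solids: set = field(default_factory=set)
        surfaces: set = field(default_factory=set)
        lines: set = field(default_factory=set)
        points: set = field(default_factory=set)
        owns: set = field(default_factory=set)  # ownership relation: (owner, owned)

    @dataclass
    class CompositeObject:
        name: str
        children: list = field(default_factory=list)  # (transform, object) pairs
        # A coordinate transformation relates each child to this parent object.

    # Example: a block whose six faces are owned by its single solid.
    block = AtomicObject(type_name="block",
                         solids={"b"},
                         surfaces={f"f{i}" for i in range(6)},
                         owns={("b", f"f{i}") for i in range(6)})
    cell = CompositeObject(name="cell_root")        # empty root object
    cell.children.append((("identity",), block))    # transform shown symbolically
    print(len(block.surfaces), "faces owned by solid 'b'")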

Environments will also contain variables, and these variables have types. In particular, some types have an associated location and/or orientation.³ An environment is a collection of objects and variables.

State of an Environment

The state of an environment is the set of active objects and variables, i.e., the location and orientation of each active object, and for each variable its value and, where appropriate, its location and/or orientation. The set of all possible states for an environment will be referred to as its state set. Note that the state as defined above does not include the motion of an object. Eventually this will be included, but currently we view the environment as changing from one static situation to another.

Simulated environments and their states are defined in exactly the same way, except that they represent values of what the actual environment is supposed to be. For our purposes environments and simulated environments are the same kind of thing. Of course, they are really different - one is made of, say, metal and the other of linked lists - but this difference is of no importance here.

Execution of a Program in an Environment

Programs execute in environments by sequentially changing the state of the environment. Thus the "data" given to a program are an environment plus an initial state for the environment. Often the initial state (in simulation mode) is simply that there are no active objects or variables, and the first part of the program initializes the environment. We will allow two or more programs - that is, tasks - to execute in a given environment. For example, there might be two robots in the robotic cell, each with its own program.

In the real environment there are usually exogenous events that cause the state of the environment to change. We view these changes as being the result of the execution of a program. Thus, a typical situation will involve an additional program executing in the environment, the "exogenous event program." This view is taken because of simulation. If the robot program is executed in the simulated environment, something must take the place of the real exogenous events. For example, if parts arrive at a loading place in the real environment, then they must be made to arrive at the loading place in the simulated environment. Typically the exogenous event program will be interleaved into the robot program. Of course, for actual execution, the environment and exogenous event program are ignored. They are only used during simulation and editing. We will also allow a program to be simultaneously executed in more than one environment. Typically this means the execution of the program in the real environment and in a simulated environment.

³When the distinction is unimportant, we will use the term "object" to refer to an atomic or composite object.
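The following Python fragment is only a schematic illustration of the execution model just described: a simulated environment whose state is changed step by step by an interleaved robot program and exogenous-event program. All names (SimulatedEnvironment, apply, the example steps) are ours, introduced for illustration.

    # Illustrative sketch only: a simulated environment whose state is changed
    # step by step by an interleaved robot program and exogenous-event program.
    class SimulatedEnvironment:
        def __init__(self):
            self.active = {}      # active objects -> (location, orientation)
            self.variables = {}   # typed variables and their values

        def apply(self, step):
            step(self)            # a program step maps one state to the next

    # Robot program: a sequence of state-changing steps.
    robot_program = [
        lambda env: env.active.update({"gripper": ((0, 0, 5), "down")}),
        lambda env: env.variables.update({"holding_part": True}),
    ]

    # Exogenous-event program: stands in for real-world events during simulation.
    exogenous_program = [
        lambda env: env.active.update({"part_1": ((3, 5, 0), "flat")}),
    ]

    env = SimulatedEnvironment()                      # initially nothing is active
    for step in exogenous_program + robot_program:    # interleaving chosen for the example
        env.apply(step)
    print(env.active, env.variables)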

Support Environment There are three principal components to the support system being developed: * graphics subsystem * window management subsystem * viewing management subsystem. The first, the graphics subsystem, will essentially be the current graphics subsystem, and hence warrants no further discussion. The preliminary concepts for the window and viewing management subsystem will be described briefly. Window Management System A window management system will be used as the interface between the user and the rest of the system. At any time it will be possible to have several windows on the screen or screens. The user will be able to either interact with the window management system itself or through a window with the rest of the system. Interaction with the window management system will allow windows to be moved around, to be changed in size, to be covered and uncovered by other windows, or to be created and destroyed. Each window will be associated with a view of the rest of the system. Some windows will give views of the environment in various states and others will give views of the menu system, and still others give views of the structure of the program being developed or executed. The association of a view with a window is not part of the window management system itself. It involves an interaction with the viewing system discussed in the next section. Viewing Viewing involves three things: a window to look through, a subject to look at on the other side of the window, and a place to put the window between the viewer and the subject, that is, a viewing point. Subjects will be defined with the menu system. The viewing point will be specified using the graphical menu subsystem. This subsystem allows the viewing point to be placed anywhere with respect to the subject, at any distance and aperture. There will be two basic ways of working with the viewing point: manually and automatically. In the manual case the user will use the graphic subsystem to view the program from various viewing points. In the automatic case, the viewing point is based upon a viewing function. For example, as the state of the environment changes the viewing point can be made to weave in and out and around the environment to present different aspects of the state. The Modifiable Menu System The principal facts in the new design which facilitate flexibility is a modifiable menu system. Two familiar complaints about menu systems are that they are often confusing and that moving back and forth between widely separated Final Report 81 August 1, 1984-July 31, 1985

parts of a menu system can be infuriatingly slow for the experienced user. We are attempting to address both problems in the design of our menu system. The complexity issue is being addressed by having a simple model for menu systems and allowing this model, or parts of it, to be displayed in a window. The slowness problem is addressed by having "short cuts" and by allowing the user to modify or tailor the menu system dynamically. Moreover, we will allow more than one menu to be active at a time.

We model a menu system as a directed graph, one in which we allow more than one arc going from one node to another. A node is a menu, and at any time some of the menus are active and the rest are inactive. "Active" means a selection can be made from the items on the menu. Suppose m1 and m2 are menus, and that there is an arc from m1 to m2. If m1 is active, the existence of the arc shows that a selection can be made from m1 which will, among other things, cause m2 to become active and m1 to become (usually) inactive. Typically the arc from m1 to m2 will be labelled with the corresponding selection in m1.

Arcs are also labelled with zero or more actions. Actions fall into two classes: those that modify the menu system itself and those that are commands to the system being controlled by the menu system. A selection may have both kinds attached to it. For example, suppose that menu m1 is a list of all possible object types and m2 is a list of all active objects. A selection from m1 has two associated actions: one is a command to the controlled system to create a new object, and the other changes the menu system itself by adding the new object to m2 and adding appropriate new arcs from m2. If an arc is not labelled with an action, the selection simply changes menus.

At the heart of the menu system is a kernel menu system that is concerned with modifying the menu system itself. It addresses the following fundamental operations:

* Delete a menu
* Add a new menu
* Delete an arc
* Add a new arc
* Edit the label on an arc.

When fully developed this will both give the user great flexibility in tailoring the menu system to his/her use, and make it very easy to expand the system.
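A small sketch of this directed-graph menu model follows, with menus as nodes and arcs carrying a selection label plus optional actions. It is only an illustration of the model described above; the MenuSystem class, its methods, and the example menu names are ours.

    # Small sketch of the directed-graph menu model described above (nodes are
    # menus; arcs carry a selection label and optional actions). Names are ours.
    class MenuSystem:
        def __init__(self):
            self.arcs = []          # (from_menu, to_menu, selection, actions)
            self.active = set()

        def add_arc(self, src, dst, selection, actions=()):
            self.arcs.append((src, dst, selection, list(actions)))

        def select(self, menu, selection):
            """Follow an arc out of an active menu: run its actions, switch menus."""
            if menu not in self.active:
                raise ValueError(menu + " is not active")
            for src, dst, label, actions in self.arcs:
                if src == menu and label == selection:
                    for act in actions:
                        act()                    # command or menu-system edit
                    self.active.discard(src)     # usually deactivated
                    self.active.add(dst)
                    return dst
            raise KeyError("no such selection")

    # Example: selecting an object type creates the object and moves the user
    # to the active-objects menu.
    ms = MenuSystem()
    ms.active.add("object_types")
    ms.add_arc("object_types", "active_objects", "ball",
               actions=[lambda: print("create object of type ball")])
    print("now in menu:", ms.select("object_types", "ball"))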

Programming and Editing Capability

In the simplest case programming involves the writing of three interleaved programs: one for the robot, one for the viewing point, and one for the simulation of exogenous inputs. We will refer to the result as the program. When we need to discuss one of the constituent programs, we will be careful to specify which is intended. One enters the programming mode by making appropriate selections in the menu system. The program menu subsystem is used to program all three constituent programs. Initially an empty environment and default viewing point are assumed. The actual programming operation will be very similar to that in the present programming system. Indeed, the new version of the system will be able to reuse many of the program modules of the current system. The new design will simply allow additions and changes to the system to be performed in a much more rational manner. Accordingly, the programming operation will not be discussed further here. The editing capability has not been designed yet.

Future Work

The previous section has outlined a major redesign and extension of the graphic programming system. The principal efforts during the coming year will be to complete the system design and begin implementation of the second-generation system. In addition, the design will be extended to include a number of specific automatic programming aids based upon analysis of geometric models of the robots and objects to be manipulated.

3.5. A Methodology for Solving Problems

Investigator: Professor K. B. Irani

Problem and Objectives

This work explores a methodology for solving problems using the search algorithm A*. A systematic approach for modelling a problem is first suggested in such a way that the heuristic used for A* is automatically abstracted. The heuristic for A* is then computed from the simplified problem model, which is derived by relaxing each of the predicate formulas describing the rules and the goal state of the problem. To improve the efficiency of the derived heuristic, various other versions of the basic problem model are suggested.

Research Results

The following results have been obtained.

(1) A mathematical structure to represent a general problem.
(2) Algorithms to compute the heuristic for the A* algorithm.
(3) Various versions of the problem model, derived by partitioning the set of elementary units of the problem and partitioning the set of attributes of the problem.
(4) For each version of the problem model, the complexity of deriving the heuristic and the tightness of the derived heuristic.

Our approach to solving problems has been illustrated by five well-known problems, namely, the 8-puzzle problem, the traveling salesman problem, the robot planning problem, the theorem proving problem, and the consistent labeling problem. The efficiency of our approach compares favorably with those of other heuristic problem solving approaches which were specifically developed for each of these problems.
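The report contains no code for this methodology; the sketch below is only a generic illustration of the underlying idea of deriving an admissible heuristic from a relaxed problem model, using the 8-puzzle as an example. Relaxing the rule that a tile may move only into the blank yields the familiar Manhattan-distance heuristic, which an ordinary A* search then uses. This is not the report's mathematical formulation.

    # Generic illustration only (not the report's formulation): an A* search
    # whose heuristic comes from a relaxed model of the 8-puzzle. Dropping the
    # rule that a tile may move only into the blank gives the Manhattan-distance
    # heuristic.
    import heapq

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def h(state):
        """Relaxed-model heuristic: sum of Manhattan distances of the tiles."""
        return sum(abs(i // 3 - state.index(t) // 3) +
                   abs(i % 3 - state.index(t) % 3)
                   for i, t in enumerate(GOAL) if t != 0)

    def neighbors(state):
        b = state.index(0)
        r, c = divmod(b, 3)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                s = list(state)
                s[b], s[nr * 3 + nc] = s[nr * 3 + nc], s[b]
                yield tuple(s)

    def a_star(start):
        frontier = [(h(start), 0, start)]
        best_g = {start: 0}
        while frontier:
            f, g, state = heapq.heappop(frontier)
            if state == GOAL:
                return g                  # number of moves in an optimal solution
            for nxt in neighbors(state):
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

    print(a_star((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # prints 2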


4. Publications 4.1. Journal Publications Control of Mechanical Manipulators (1) Gilbert, E.G. and I.J. Ha, "An Approach to Non linear Feedback Control with Applications to Robotics," IEEE Trans. on Systems, Man and Cybernetics, Vol. SMC-14, No. 6, pp. 879-884, November/December 1984. (2) Gilbert, E.G. and D.W. Johnson, "Distance Functions and Their Application to Robot Path Planning in the Presence of Obstacles," IEEE Journal of Robotics and Automation, Vol. RA-1, No. 1, pp. 21-30, March 1985. (3) Kim, B.K. and K.G. Shin, "Suboptimal Control of Industrial Manipulators with a Weighted Minimum Time-Fuel Criterion," IEEE Transactions on Automatic Control, Vol. AC-30, No. 1, pp. 1-10, January 1985. (4) Kim. B.K. and K.G. Shin, "Minimum-time Path Planning for Robot Arms with their Dynamics Included," IEEE Transactions on System, Man and Cybernetics, Vol. SMC-15, No. 2, pp. 213-223, March/April 1985. (5) McClamroch, N.H., "Displacement Control of Flexible Structures Using Electrohydraulic Servo-Actuators," ASME Journal of Dynamic Systems, Measurement and Control, Vol. 107, pp. 34-39, March 1985. (6) Shin, K.G. and S.B. Malin, "A Structures Framework for the Control of Industrial Manipulators," IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-15, No. 1, pp. 78-90, January/February 1985. Vision for Robots (1) Besl, P. and R. Jain, "Intrinsic and Extrinsic Surface Characteristics," IEEE CVPR, 1985. (2) Besl, P.J., E.J. Delp and R. Jain, "Automatic Visual Solder Joint Inspection," IEEE Journ. of Robotics and Automation," Vol. RA-1, No. 1, pp. 4256, March 1985. (3) Jain, R. and N. O'Brien, "Ego-motion Complex Logarithmic Mapping," SPIE, Vol. 521, Intelligent Robots and Computer Vision, pp. 16-23, November 1984. (4) Turney, J.L., T.N. Mudge and R.A. Volz, "Recognition of Occluded Parts," IEEE Trans. on Pattern Recognition, Machine Intelligence and Robotics, Vol. PAMI-7, No. 4, pp. 410-421, July 1985. (5) Wolter, J.D., A.D. Woo, and R.A. Volz, "Optimal Algorithms for Symmetry Detection in Two and Three Dimensions," The Visual Computer, Vol. 1, No. 1, pp. 37-48, July 1985. Architecture of Robot-Based Manufacturing Cells (1) Volz, R.A., T.N. Mudge and D.A. Gal, "Using Ada as a Programming Language for Robot-based Manufacturing Cells," IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-14, No. 6, November/December 1984. Final Report 856 August 1, 1984-July 31, 1985

(2) "CAD Robot Programming and Ada," (chapter in) Machine Intelligence NA TO-Advanced Studies Institute on Robotics & Artificial Intelligence, Sept. 1984. Improved Robot Programming (1) Wolter, J., A.C. Woo and R.A. Volz, "Automatic Determination of Gripping Positions," IEEE Trans. on Systems, Man and Cybernetics, Vol. SMC-15. No. 2, pp. 204-212, March/April 1985. 4.2. Under Review Control of Mechanical Manipulators (1) Chalhoub, N.G. and A.G. Ulsoy, "Dynamic Simulation of a Leadscrew Driven Flexible Robot Arm and Controller," submitted to ASME Journal of Dynamics, Systems, Measurement and Control. (2) Ha, I.J. and E.G. Gilbert, "Robust Tracking in Nonlinear Systems and Its Applications to Robotics," submitted to IEEE Trans. on Automatic Control. (3) Ha, I.J. and E.G. Gilbert, "A Complete Characterization of Decoupling Control Laws" for a General Class of Nonlinear Systems," submitted to IEEE Trans. on Automatic Control. (4) Iskanderani, A.I. and N.H. McClamroch, "Signal Estimation for Secondorder Vector Difference Equations," to appear IEEE Trans. on Automatic Control. (5) Lee, C.S.G. and M.J. Chung, "Adaptive Perturbation Control with Feedforward Compensation for Robot Manipulators," to appear Simulation. (6) Lee, C.S.G. and B.H. Lee, "Planning of Straight Line Manipulator Trajectory in Cartesian Space with Torque Constraint," submitted to IEEE Trans. of Automatic Control. (7) Lee, C.S.G., B.H. Lee and R. Nigam, "Development of Generalized D'Alembert Equations of Motion for Robot Manipulators," submitted to IEEE Trans. on Systems, Man and Cybernetics. (8) Lee, C.S.G. and M. Ziegler, "A Geometric Approach in Solving the Inverse Kinematics of PUMA Robots," to appear IEEE Trans. on Aerospace and Electronic Systems. (9) McClamroch, N.H., "Stabilization of Elastic Systems Using Sampled Data Feedback," to appear Trans. on Automatic Control. (10) Shin, K.G. and M.E. Epstein, "Intertask Communications in an Integrated Multi-robot System," submitted to IEEE Journal Robotics and Automation. (11) Shin, K.G. and M.E. Epstein, "A Unified Method for Evaluating Real-time computer Controllers and Its Application," to appear IEEE Trans. on Automatic Control. (12) Shin, K.G. and N.D. McKay submitted to Proceedings of the 1985 American Control Conference, June 1985. August 1, 1984-July 31, 1985 88 Final Report

Vision for Robots (1) Angell, D.K. and E.N. Leith, "Incoherent Spatial Filtering with Grating Interferometers," accepted for publication in the 15 September issue of Applied Optics. (2) Berman, S., P. Parikh and C.S.G. Lee, "Computer Recognition of Two Overlapping Parts Using a Single Camera," to appear Computer. Improved Robot Programming (1) Irani, K.B. and S.I. Yoo, "A Methodology for Solving Problems: Modelling and Generating Heuristics," submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence Conference. Integrated Solid-State Sensors for Closed-Loop Robot Control (1) Choi, I.H. and K.D. Wise, "A Silicon Thermopile-Based Infrared Sensing Array for Use in Automated Manufacturing," submitted to IEEE Trans. on Electron Devices. 4.3. In Preparation Control of Mechanical Manipulators (1) Chalhoub, N.G. and A.G. Ulsoy, "Modeling of a Leadscrew Driven Flexible Robot Arm". (2) Chalhoub, N.G. and A.G. Ulsoy, "Evaluation of Control Strategies for a Flexible Robot Arm". Vision for Robots (1) Angell, D.K. and E.N. Leith, "Generalize Incoherent Optical Processing with Grating Interferometers". To be submitted to Applied Optics. (2) Angell, D.K. and E.N. Leith, "Achromatized Interferometers for Incoherent Spatial Filtering." To be submitted to Applied Optics. 4.4. Conference Presentations Control of Mechanical Manipulators (1) Chalhoub, N.G. and A.G. Ulsoy, "Dynamic Simulation of a Flexible Robot Arm and Controller," Proceedings of the American Control Conference, San Diego, California, June 1984. (2) Gilbert, E.G. and D.W. Johnson, "The Application of Distance Functions to the Optimization of Robot Motion in the Present of Obstacles," in Proc. of the 23rd Conference on Decision and Control, Las Vegas, Nevada, December 1984. (3) Ha, I.J. and E.G. Gilbert, "A Complete Characterization of Decoupling Control Laws for a General Class of Nonlinear Systems," The John Hopkins University, 1985. Will appear in the conference proceedings. (4) Johnson, D.W. and E.G. Gilbert, "Minimum Time Robot Path Planning in the Presence of Obstacles," Submitted in abstract form to a special session Final Report 87 August 1, 1984-July 31, 1985

of the 24th Conference on Decision and Control. (5) Lee, C.S.G. and B.H. Lee, "Planning of Straight Line Manipulator Trajectory in Cartesian Space with Torque Constraints," Proc. of 23rd IEEE Conference on Decision and Control, Las Vegas, pp. 1-7, December 12-13, 1984. (6) McClamroch, N.H., "Compliance at the End Effector of an Electrohydraulically Controlled Robot" Conference on Decision and Control, Ft. Lauderdale, FL., 1985. (7) McClamroch, N.H. and H.P. Huang, "Dynamics of a Closed Chain Manipulator," Proc. of American Control conference, Boston, MA, June 1985. Vision for Robots (1) Besl, P., E. Delp and R. Jain, "Automatic Visual Inspection of Solder Joints," IEEE International Conference on Robotics, St. Louis, MO. pp. 467-473, March 25-28, 1985. (2) Haynes, S. and R. Jain "Low Level Motion Events: Trajectory Discontinuities," First Conference on Artificial Intelligence Applications, December 1984. (3) Jain, R., H.S. Kim and R.A. Volz, "Object Recognition Using Multiple Views," IEEE Conference on Robotics, St. Louis, MO, March 25-28, 1985. (4) Jain, R., "Detecting Moving Edges," First Conference in Artificial Intelligence Applications, December 1984. (5) Jain, R. and N. O'Brien, "Ego-Motion Complex Log Mapping," SPIE's Intelligent Robots and Computer Vision Conference, November 1984. (6) Mudge, T.N., J.L. Turney and R.A. Volz, "Recognizing Partially Hidden Objects," IEEE International Conference on Robotics, St. Louis, MO, March 25-28, 1985. Architecture of Robot-Based Manufacturing Cells (1) Antonelli, C.J., R.A. Volz and T.N. Mudge, "Hierarchical Decomposition and Simulation of Manufacturing Cells," Proceedings of the 1984 Winter Simulation Conference, pp. 415-423, November 1984. (2) Mudge, T.N., J.H. Mayer and R.A. Volz, "Some Problems in Distributing Real-Time Ada Programs Across Heterogeneous Processors," IEEE Workshop on Real-Time Operating Systems, Wakefield, Mass., November 1984. (3) Shin, K.G. and M.E. Epstein, "Communication Primitives for a Distributed Multi-robot System," presented to the IEEE 1985 International Conference on Robotics and Automation, St. Louis, Missouri, March 25-28, 1985. (4) Shin, K.G., M.E. Epstein and R.A. Volz, "A Module Architecture for an Integrated Multi-Robot System," 18th Annual Hawaii International Conference on System Sciences, pp. 120-129, January 1985. (5) Turney, J.L., T.N. Mudge and R.A. Volz, "Recognizing Partially Hidden Objects," Proceedings of the Society of Photo-optical Instrumentation August 1, 1984-July 31, 1985 88 Final Report

Engineering Cambridge Symposium on Intelligent Robots and Computer Vision, pp. 108-113, November 1984. (6) Volz, R.A., T.N. Mudge and A.W. Naylor, "Experience in the Use of Ada for Manufacturing Cell Control," SIGAda Conference, San Jose, California, February 1085. (7) Volz, R.A. and T.N. Mudge, "Robots are (Nothing More Than) Abstract Date Types," Proc. of 1st International Conference on Robotics Research: The Next 5 Years and Beyond, Lehigh University, Bethlehem, PA, August 14-16, 1984. (8) Volz, R.A., T.N. Mudge, J.H. Mayer and A.W. Naylor, "Some Problems in Distributing Real-Time Ada Programs Across Machines," 1985 International Ada Conference, Paris, France, May 1985. Improved Robot Programming (1) Irani, K.B. and D.G. Shin, "A Many-Sorted Resolution Based on an Extension of a" First-Order Language," Proc. of Ninth International Joint Conference on Artificial Intelligence, August 1985. Integrated Solid-State Sensors for Closed-Loop Robot Control (1) Chun, K.G. and K.D. Wise, "A Capacitive Silicon Tactile Imaging Array," The Third International Conference on Solid-State Sensors and Actuators (Transducer 1985), June 11-14, 1985, Philadelphia, Pennsylvania. 4.5. Technical Reports (Topics not covered in technical paper/conference listings) (1) Ha, I.J. and E.G. Gilbert, "Robust Tracking in Nonlinear Systems and its Applications to Robotics" RSD-TR-11-84, October 1984. (2) Iskanderani, A.I. and N.H. McClamroch, "Signal Estimation for Second Order Vector Difference Equations," RSD-TR-15-84, October 1984. (3) Shin, K.G. and N. McKay, "Selection of Minimum Time Geometric Paths for Robot Manipulation," RSD-TR-17-84, November 1984. (4) Besl, P. and R. Jain, "An Overview of Three-Dimensional Object Recognition," RSD-TR-19-84, December 1984. (5) Besl, P., and R. Jain, "Surface Characterization for Three-Dimensional Object Recognition in Depth Maps," RSD-TR-20-84, December 1984. (6) Lee, C.S.G and D. Huang, "A Geometric Approach to Deriving Position/Force Trajectory in Fine Motion," RSD-TR-1-85, February 1985. (7) Sethi, I.K. and R. Jain, "Finding Trajectories of Points in a Monocular Image Sequence," RSD-TR-3-85, April 1985. (8) Shin, K.G. and M. Epstein, "Intertask Communications in An Integrated Multiple Robot System," RSD-TR-4-85, May 1985. (9) Shin, K.G. and N. McKay, "Incorporation of Payload Uncertainties into Robot Trajectory Planning," RSD-TR-6-85, May 1985. Final Report 89 August 1, 1984-July 31, 1985

(10) Shah, M., A. Sood, and R. Jain, "Pulse and Staircase Models for Detecting Edges at Multiple Resolutions," RSD-TR-7-85, June 1985. (11) Ha, I.J., "Nonlinear Decoupling Theory with Applications to Robotics," (a Dissertation), RSD-TR-8-85, May 1985. (12) Knoll, T.F. and R. Jain, "Recognizing Partially Visible Objects Using Feature Indexed Hypotheses," RSD-TR-10-85, July 1985. (13) Lee, B.H., "An Approach to Motion Planning and Motion Control of Two Robots in a Common Workspace," (a Dissertation), RSD-TR-11-85, September 1985. (14) Haynes, S.M. and R. Jain, "Event Detection and Correspondence," RSDTR-12-85, September 1985. August 1, 1984-July 31, 1985 90 Final Report

5. Personnel 5.1. Faculty There are many ways in which a faculty member or student may be supported by a contract of this type. The names listed on the form DD1473 include only those faculty who received direct salary support from the contract. However, in addition, some faculty received support indirectly through support of their graduate or post doctoral students. This section lists faculty who have either directly received support for a portion of their salaries or received support for one or more of their graduate students. Professors Daniel E. Atkins -- Electrical Engineering and Computer Science Elmer G. Gilbert -- Aerospace Engineering Robert M. Howe -- Aerospace Engineering Keki B. Irani -- Electrical Engineering and Computer Science Emmett N. Leith -- Electrical Engineering and Computer Science Arch W. Naylor -- Electrical Engineering and Computer Science N. Harris McClamroch -- Aerospace Engineering Richard A. Volz -- Electrical Engineering and Computer Science Kensall D. Wise -- Electrical Engineering and Computer Science Associate Professors James Barber -- Mechanical Engineering and Applied Mechanics Ramesh Jain -- Electrical Engineering and Computer Science Dev S. Kochhar -- Industrial and Operations Engineering Trevor N. Mudge -- Electrical Engineering and Computer Science Kang G. Shin -- Electrical Engineering and Computer Science Anthony C. Woo -- Industrial and Operations Engineering Assistant Professors C. S. George Lee -- Electrical Engineering and Computer Science Y. C. Lee -- Electrical Engineering and Computer Science A. Galip Ulsoy -- Mechanical Engineering and Applied Mechanics T. Weymouth -- Electrical Engineering and Computer Science Final Report 91 August 1, 1984-July 31, 1985

5.2. Students The number of students who have received support during the third year of the contract significantly exceeds the number of "standard" student appointments indicated in the proposal. There are two primary reasons for this. First, and most importantly, we were able to leverage the student funding to increase the student involvement. In a number of cases we were able to obtain partial fellowship support for the student from internal funds, thus reducing the cost per student and increasing the number of students we could involve in the project. Similarly some students had partial outside fellowship support and only needed supplementary support to be able to participate in the project. The involvement of students with fellowship support in the project is deemed an acceptable activity since the work being performed is generally part of the student's dissertation work. Secondly, through natural attrition, graduation, and staff changeover, some replacement of the students supported occurred during the year. The students listed below, then, include all of the students who have received support during the contract year. T. S. Abdel-Rahman EECS1 Ph.D. expected 1985-86 Dan Angell EECS Ph.D. expected 1985-86 Paul Besl EECS Ph.D. expected 1985-86 Paul Bixel EECS M.S. expected 1985 Nabil Chalhoub ME&AM2 Ph.D. expected 1985 Jie Cheng CICE3 Ph.D. expected 1986 Il Hyun Choi EECS Ph.D. expected 1985-86 Kukjin Chun EECS Ph.D. expected 1985-86 Rajiv Desai ME&AM Ph.D expected 1986-87 Robert Dolezal EECS Ph.D. 1987-88 Paul Eichel EECS Ph.D. expected 1984-85 Paul Gottschalk CICE Ph.D. expected 1987-88 In-Joong Ha CICE Ph.D. expected 1984-85 Susan Haynes EECS Ph.D. expected 1986-87 Edwin Hou CICE Ph.D. expected 1987-88 D. Huang EECS Ph.D. expected 1987 1Electrical Engineering and Computer Science 2Mechanical Engineering and Applied Mechanics 3Computer Information and Control Engineering August 1, 1984-July 31, 1985 92 Final Report

H.P. Huang            ECE        Ph.D. expected 1985-86
Dan Johnson           AE(4)      Ph.D. expected 1985-86
Suntaeg Jun           EECS       Ph.D. expected 1987-88
Richard Jungclas      CICE       Ph.D. expected 1986-87
H-S Kim               CICE       Ph.D. expected 1986-87
Bum H. Lee            CICE       Ph.D. expected 1985
G-H Lee               CICE       Ph.D. expected 1986
Kurt Lloyd                       M.S. expected 1985
Neil D. McKay         ECE        Ph.D. expected 1985
O. Oyenkunle          EECS       Ph.D. expected 1988-89
Bernard Morgowicz     AE         Ph.D. expected 1985-86
Z. Qian               CICE       Ph.D. expected 1986
Ronitt Rubinfeld      EECS       B.S. expected 1985
Brian Schipper        EECS       M.S. expected 1986
Shao Lejun            CICE       Ph.D. expected 198?
S. I. Yoo             CICE       Ph.D. expected 1985-86

(4) Aerospace Engineering

5.3. Degrees Awarded

Paul Eichel           Ph.D. in EECS
In-Joong Ha           Ph.D. in CICE
Jehkwan Lah           Ph.D. in EECS
Bum Lee               Ph.D. in CICE
David Beard           Ph.D. in EECS
Neil McKay            Ph.D. in EECS
Suk I. Yoo            Ph.D. in CICE
Dong Shin             Ph.D. in CICE
Paul Bixel            M.S. in CICE
Kurt Lloyd            M.S.E. in CICE
Paul Bhugra           M.S.E. in CICE
G-H Lee               M.S.E. in CICE
Ronitt Rubinfeld      B.S.E. in EECS

5.4. Graduate Student Placement

1982 - 1983
Gary Swanson, Ph.D. -- Lincoln Labs
Henry Chu, M.S. -- U of M Ph.D. Program
Paul Eichel, M.S. -- U of M Ph.D. Program
Stacy Leon, M.S. -- U of M Ph.D. Program
Mike Steigerwald, M.S. -- U of M Ph.D. Program
Jan Wolter, M.S. -- U of M Ph.D. Program
Brian Dent, M.S. -- ADAPCO, Melville, New York
Myrung Chung, Ph.D. -- Korean Institute of Technology
Dean Askounis, M.S. -- FMC Robots, Troy, Michigan

1983 - 1984
Thomas Wielenga, Ph.D. -- M.D.I., Ann Arbor, Michigan
Joseph Pincu, M.S. -- U of M Ph.D. Program
Daniel Barash, M.S. -- Harris Corporation, Washington, D.C.
Paul Firehammer, M.S. -- unknown
Nirwan Ansari, M.S. -- U of M Ph.D. Program
David Gal, M.S. -- R.O.L.M. in California
Mark Epstein, M.S. -- IBM, Boca Raton, Florida
Neil Nathason, M.S. -- IBM, Toledo, Ohio
Chi-Chan Tsang, M.S. -- Computerized Office Systems, Ann Arbor
Glen Healy, B.S. -- Stanford Graduate Program
Elesh Shah, M.S. -- unknown

1984 - 1985
D. Beard, Ph.D. -- University of North Carolina
B. Lee, Ph.D. -- Purdue University
N. McKay, Ph.D. -- General Motors Research Laboratory
J. Ha, Ph.D. -- General Motors Research Laboratory
P. Eichel, Ph.D. -- Sandia Laboratory
J. Lah, Ph.D. -- Intel Corporation
S. Yoo, Ph.D. -- Seoul University
D. Shin, Ph.D. -- unknown
P. Bixel, M.S. -- General Electric

K. Lloyd, M.S. -- Bell Laboratories
R. Rubinfeld, B.S. -- Stanford

5.5. Permanent Staff

Research Engineers
John Caldwell
Jerry Turney

Systems Programmers
Chris Born
Ron Theriault

Technician
Rob Giles

Administrative Assistant
Jerry L. Jackson

Secretaries
Dolores Bolsenga
Frances Bourdas
Virginia Folsom
Marianne Moylan
Mary Ann Pruder


6. Coupling Activities

An important aspect of our program is the development of strong coupling activities both within and outside the program. Four categories of interaction are being pursued:

* interactions among people participating in the project
* interactions with other university scientific groups
* interactions with industry
* interactions with the Air Force.

The following sections describe the coupling activities which have taken place during the past year.

6.1. Intra-Project Interactions

The interactions begun over the past two years have continued. Frequent seminars with both internal and external speakers provided a means of keeping everyone generally aware of work both within our project and at other institutions around the country. Indeed, at times the frequency of seminars was so high that it was difficult for any one person to attend all those of interest. The joint seminar series sponsored by CRIM and the Industrial Technology Institute enabled us to bring truly outstanding speakers from both industry and academia to Michigan.

Regular technical discussion meetings among members of several of the research groups occur on a weekly basis. Membership in more than one group is common, particularly among the faculty concerned. Also, most of the faculty serve on several doctoral committees on topics outside of their own immediate research activities. This has proven to be a useful form of information diffusion.

6.2. University Interactions

Interactions with people from other universities have taken place primarily through seminars, both those given at the University of Michigan and those given by Michigan people at other universities.

Outside Seminar Speakers

Among those presenting seminars at Michigan were:

* Roy A. Maxion, Carnegie-Mellon University, "Automated Diagnosis of Complex Systems"
* Alex Pentland, Stanford University, "Representation and Perception of Natural Shapes"
* Subbiah Ramalingam, Director, Productivity Center, Department of Mechanical Engineering, University of Minnesota,

"Materials Processing for Automated Manufacturing" * Joel D. Goldhar, Illinois Institute of Technology "The Impacts of Computers on Traditional Manufacturing Thinking" * Mark H. Raibert, Carnegie-Mellon University "Machines That Walk-Balance in Legged Robots" * Warren Seering, M.I.T. "Design of an Assembly Robot" * T. Lozano-Perez, M.I.T. Artificial Intelligence Laboratory "Collision-Free Task Planning for Simple Robot Manipulators" * Nam Suh, National Science Foundation/M.I.T. "Design of Intelligent Machines Through the Axiomatic" * Prof. Dr.-Ing. Ulrich Rembold, University of Karlsruhe, Karlsruhe, Federal Republic of Germany "Advanced Assembly Robots" * Michael W. Walker, Clemson University "Dynamics of Manipulators" *. Paul Rankey, Trent University, England "Integrated to ComputerIntegrated and Robotized Assembly Systems" * Robert Kaplan, Harvard Business School and Carnegie-Mellon University "Capital Budgeting for the Factory of the Future" * J.K. Aggarwal, Electrical and Computer Engineering Department, University of Texas "Computer Vision: Recent Development and Future Directions" A copy of the seminar schedule for the Fall term is attached in Appendix A. Seminars at other Universities given by Michigan faculty * K.G. Shin - Tokyo Institute of Technology, Seoul National University and Koren Advanced Institute of Science and Technology, visited Rensselaer Polytechnic Institute in July 1985. August 1, 1984-July 31, 1085 98 Final Report

* R. A. Volz - presented seminars at Purdue University and the University of Karlsruhe, Karlsruhe, West Germany, entitled "The Use of Ada as a Systems Implementation Language for Distributed Manufacturing Systems"
* R. Jain - "Analyzing Dynamic Scenes" and "Range Image Understanding," both presented at the University of South Carolina; "Range Image Understanding" also presented at North Carolina State.
* A. G. Ulsoy - presented seminars at Northwestern Polytechnical University, Xian, China; Nanjing Institute of Technology, Nanjing, China; and Jiaotong University, Shanghai, China, in July-August 1985 on "Dynamics and Control of Manufacturing Systems".

6.3. Interactions with Industry

Interactions with industry have taken place in three forms: seminars presented to industry, seminars presented at Michigan, and private technical discussions.

Seminars and Briefings for Industry

Two Industrial Affiliates reviews were held during the year. Each was a one-day technical conference presenting recent research results to the affiliate members. Appendix B contains the current Affiliates membership list.

Major technical briefings were held for the following firms: IBM, General Motors, Bell Laboratories, Ford, Apollo Computer, Volkswagen, Intel, Verdix, Autoflex, Rapistan, 3M, Asea, General Dynamics, Intellimac, Volvo, Verbatim, Vickers, Inc., United Technologies, Martin Marietta, and Synthetic Vision Systems.

Seminars given by Michigan faculty for industry:

* K.G. Shin - "Integrated Multi-Robot Systems," Industrial Technology Institute, April 1985.
* N. H. McClamroch - Lawrence Livermore Laboratory, April 1985, and GM Research Lab, July 1985.
* R.A. Volz - presented a seminar at IBM entitled "Design for Assembly," a seminar at General Dynamics entitled "Distributed Languages," and a seminar entitled "Software Systems for Manufacturing" at General Motors.
* R. Jain - taught a course through ESC on Computer Vision and Image Processing.
* E. Gilbert - presented a seminar entitled "Distance Functions and Their Application to Robot Path Planning in the Presence of Obstacles" at

GM Research Laboratories.

Seminars at Michigan by Industrial People

* Gerald L. Elson, GM Technical Center, "GM's Utilization of Machine Intelligence"
* Walter Weisel, Robotic Industries Assoc., "The Future of Robotics in American Industrial Technology"
* Robert P. Blanc, National Bureau of Standards, "Advances in Standardization for Computer Communication"
* James S. Albus, National Bureau of Standards, "An Overview of the Advanced Manufacturing Research Facility at NBS - Programs and Plans"
* W. Edwin Dorroh, Martin Marietta, Denver, "Applications of AI to Vision and Robot Workcell Planning"
* Dr. John W. Boyse, "Computer Modeling of Solids"
* Nam Suh, National Science Foundation/M.I.T., "Design of Intelligent Machines Through the Axiomatic"
* Michael K. Brown, AT&T Bell Laboratories, "An Instrumented Controlled Impedance Gripper for Robotic Applications"
* Robert M. Goor, General Motors Research Laboratories, "A New Approach to Robot Control"
* Dr. Howard Moraff, Program Director, NSF, "Presentation on Programs in Design Theory and Automation and Manufacturing"
* B. F. Buxton, GEC Research Laboratories, East Lane, Wembley, UK, "Learning Functionals: A Structured Path from Maximum Likelihood to Boltzmann Machines"

Private Technical Discussions with Industry

* Discussions were held with Raycon Corporation, a division of Excello, on interferometric methods applied to robotic vision. This is a continuing relationship involving E. Leith and D. Angell.
* Professor K. Irani met with General Motors Research, GTE Laboratories Incorporated, and BNR, Ottawa.
* Professor R. Jain meets regularly with ERIM, Autoflex, GM Research Labs, and Synthetic Vision Systems.
* Professor K. Shin met with representatives from General Motors Research Laboratories.
* Professor R. Volz met with representatives from Bell Laboratories, Apollo Computer, and General Electric.
* Professors R. Volz, R. Jain, and T. Mudge reviewed machine vision applications at a number of General Motors locations.
* Professors R. Volz and T. Mudge meet regularly with representatives from General Dynamics.
* Professor R. Volz meets regularly with representatives from IBM.
* Professor Naylor has met with representatives from Autoflex and General Motors.

6.4. DoD Interactions

The following interactions with DoD have taken place during the past year.

* Professor K. Shin - U.S. Army BMDATC, U.S. Office of Naval Research.
* Professor R. Volz attended (1) an AF planning meeting at Dayton, Ohio in June, (2) Autonomous Land Vehicle demonstrations in Denver, Colorado in May 1985, and (3) meetings to organize a consortium for Space Automation and Robotics Research for NASA.
* Professor R. Jain attended DARPA meetings on (1) the Autonomous Land Vehicle - Key West, December 1984, (2) Animate Systems - Santa Fe, June 1985, and (3) the ALV demonstration - Denver, May 1985; he also attended the workshop for Space Missions for Automation and Robotics Technologies (SMART), August 1985.
* Thomas Knoll attended the NATO ASI "Relational Database Machine Architecture" in September 1985.

* N. McClamroch attended the NASA SMART workshop on Robotics and Automation in Space in September 1985.
* T.N. Mudge participated in the NASA forum on Space Station software in May 1985.
* A formal review of the project was held in February of 1985.

7. Distribution List

University and Research Institutes

Professor Robert Cannon
Professor Thomas Binford
Computer Science Dept.
Stanford University
Stanford, CA 94305

Dr. David Whitney
Draper Laboratories
555 Technology Square
Massachusetts Institute of Technology
Cambridge, MA 02139

Professor Robert McGhee
Dept. of Electrical Engineering
Ohio State University
2015 Neil Avenue
Columbus, OH 43210

Dr. David Nitzen
Director of Robotics Research
Stanford Research Institute
333 Ravenswood Avenue
Menlo Park, CA 94025

Professor Raj Reddy
Computer Science Dept. & Robotics
Carnegie Mellon University
Schenley Park
Pittsburgh, PA 15213

Dr. Robert Bolles
Stanford Research Institute
333 Ravenswood Avenue
Menlo Park, CA 94025

Dr. William Brown
ERIM
P.O. Box 8618
Ann Arbor, MI 48107

Professor Michael Brady
Cambridge
England

Professor Richard Paul
Dept. of Computer Information and Sciences
University of Pennsylvania
N60 Town Bldg.
Philadelphia, PA 19104

Dr. Azriel Rosenfeld
Research - Room 4115C
Computer & Space Sciences Bldg.
University of Maryland
College Park, MD 20742

Dr. Geoffrey Boothroyd
Mechanical Engineering
University of Massachusetts
Amherst, MA 01003

Professor Roger Nagel
Lehigh University
Bethlehem, PA 18015

Professor Georges Saridis
ECSE
Rensselaer Polytechnic Institute
Troy, NY 12181

Dr. Jerome Smith
Industrial Technology Institute
University of Michigan
230 Stearns
Ann Arbor, MI 48109

Professor Delbert Tesar
Dept. of Mechanical Engineering
University of Texas
Austin, Texas

Government

Dr. Vincent Russo
Wright Patterson Air Force Base
AFWAL/MLTC
Wright Patterson, OH 45433

Dr. James Albus
Room A127 Bldg 220
National Bureau of Standards
Washington D.C. 20234

Mr. Michael Hitchcock
Computer Integrated Manufacturing Branch
Department of Air Force
Air Force Wright Aeronautical Lab
Wright Patterson Air Force Base, Ohio 45433

Dr. George Brosseau
Room 1121
NSF
Washington D.C. 20550

Mr. Norman Caplan
Deputy Division Director
Division of Electrical, Computer & Systems Engineering
NSF
1800 G. Street N.W.
Washington D.C. 20550

Mr. William M. Spurgeon
University of Michigan at Dearborn
Dearborn, Michigan

Dr. Nam Suh
NSF
1800 G. Street N.W.
Washington, DC 20550

Mr. Bernie Chern
Division of Electrical, Computer & System Engineering
Room 1101
NSF
Washington D.C. 20550

Mr. G. Ronald Green
Electronics Division
U.S. Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709

Captain Robert Delyser
DFEE
U.S. Air Force Academy
Colorado 80840

Mr. Ted J. Brandewie
Project Manager
Computer Integrated Manufacturing Branch
Department of Air Force
Air Force Wright Aeronautical Lab
Wright Patterson Air Force Base, Ohio 45433

Dr. James Gault
U.S. Army Research Development Standardization Group
United Kingdom
P.O. Box 65
FPO New York 09510

Mr. Daniel Herman
NASA Headquarters
Code S
Washington DC, 20546

Dr. Paul B. Schneck
Head
Information Sciences Division
Dept. of Navy
Office of Naval Research
800 N. Quincy
Arlington, VA 22217

Major Joe Hager
USAF, AFSC
Air Force Office of Scientific Research
Building 410, Bolling AFB, D.C. 20332

Industry

Mr. Dwight Carlson
Perceptron
23920 Freeway Park Drive
Farmington Hills, MI 48024

Mr. Bertil Thorvaldsson
ASEA
Industrial Robot Division
16250 W. Glendale
New Berlin, WI 53151

Mr. Clifford T. Anglewicz
Volvo of America Corporation
Volvo Automated Systems Division
40712 Brentwood Drive
Sterling Heights, MI 48078

Mr. R. Gary Diaz
General Dynamics
Land Systems Division
P.O. Box 1901
Warren, MI 48090

Dr. Gale Cutler
Whirlpool
Monte Road
Benton Harbor, MI 49022

Mr. William Gruver
General Electric
Industrial Automation - Europe
Praunheimer Landstrasse 50
6000 Frankfurt/Main 90
West Germany, 0611-76071

Dr. John Klein
IBM
1798 NW 40th Street
Boca Raton, FL 33432

Mr. Bruce Haupt
IBM
951 N.W. 51st.
Boca Raton, FL 33432

Dr. Steve Holland
Dr. Brian Schunck
Mr. Frank DiPietro
Mr. Gerald Elson
Mr. Gabe Tiberio
Mr. Richard Beecher
General Motors
General Motors Research Laboratories
Warren, MI 48090

Dr. Michael Wesley
IBM
T.J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598

Mr. Pete Matheson
Mr. Emil Sarpa
Mr. Anthony Anderson
Intel Corporation
26500 Northwestern Highway
Southfield, MI 48075

Mr. William Hollenback
IBM
Neighborhood Rd.
Kingston, NY 12401

Mr. James Lowrie
Advanced Automation Technology Section
Martin Marietta Aerospace
P.O. Box 179
Denver, CO 80209

Mr. J.L. Escover
Mr. Robert Casey
Lockheed Missiles and Space Company, Inc.
P.O. Box 1700
Austin, TX 78760

Mr. Donald E. Waters
Program Manager
Technical Strategy Center
Honeywell, Inc.
1700 W. Highway 36
St. Paul, MN 55113

APPENDIX A

Seminar Schedule for the Fall Term

APPENDIX B

ROBOTICS INDUSTRIAL AFFILIATES MEMBERS
December 3, 1985

ASEA Robotics, Inc.
Industrial Robot Division
16250 W. Glendale
New Berlin, Wisconsin 53151
Company Contact: Bertil Thorvaldsson
Telephone: (414) 785-3482

Digital Equipment Corp.
77 Reed Road
Hudson, MA 01749
Company Contact: John W. McCredie
Telephone: (617) 568-4000

Data Cube, Inc.
4 Dearborn Road
Peabody, MA 01960
Company Contact: David Erickson
Telephone: (617) 535-6644

General Dynamics
Land Systems Division
P.O. Box 1901
Warren, Michigan 48090
Company Contact: R. Gary Diaz, Division Manager
Telephone: (313) 362-8096

1902 Northwood
Troy, MI 48084

Whirlpool
The Elisha Gray II Research and Engineering Center
Monte Road
Benton Harbor, Michigan 49022
Company Contact: Dr. Gale Cutler
Telephone: (616) 926-5215