
Affecting Fundamental Transformation in Future Construction Work Through Replication of the Master-Apprentice Learning Model in Human-Robot Worker Teams

dc.contributor.author: Liang, Ci-Jyun
dc.date.accessioned: 2021-09-24T19:05:25Z
dc.date.available: 2021-09-24T19:05:25Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/2027.42/169666
dc.description.abstract: Construction robots are increasingly deployed on construction sites to assist human workers in various tasks and to improve safety, efficiency, and productivity. Owing to the recent and ongoing growth in robot capabilities and functionality, humans and robots can now work side by side and share workspaces. However, because of inherent safety and trust-related concerns, human-robot collaboration is subject to strict safety standards that require robot motions and forces to be sensitive to proximate human workers. In addition, construction robots must perform their tasks in unstructured and cluttered environments. The tasks are quasi-repetitive, and robots need to handle unexpected circumstances arising from loose tolerances and discrepancies between as-designed and as-built work. It is therefore impractical to pre-program construction robots or to apply optimization methods to determine robot motion trajectories for typical construction work. This research first proposes a new taxonomy for human-robot collaboration on construction sites comprising five levels: Pre-Programming, Adaptive Manipulation, Imitation Learning, Improvisatory Control, and Full Autonomy, and identifies the existing gaps in knowledge transfer between humans and assisting robots. To address the identified gaps, this research focuses on three key studies: enabling construction robots to estimate their pose ubiquitously within the workspace (Pose Estimation), robots learning to perform construction tasks from human workers (Learning from Demonstration), and robots synchronizing their work plans with human collaborators in real time (Digital Twin). First, this dissertation investigates the use of cameras as a novel sensor system for estimating the pose of large-scale robotic manipulators relative to the job site. A deep convolutional network for human pose estimation was adapted and fused with sensor-based poses to provide real-time, uninterrupted 6-DOF pose estimates of the manipulator's components. The network was trained on image datasets collected from a robotic excavator in the laboratory and from conventional excavators on construction sites. The proposed system yielded uninterrupted, centimeter-level pose estimation for articulated construction robots. Second, this dissertation investigates Robot Learning from Demonstration (LfD) methods to teach robots how to perform quasi-repetitive construction tasks, such as ceiling tile installation. LfD methods can be used to teach robots specific tasks through human demonstration, such that the robots can then perform the same tasks under different conditions. A visual LfD method and a trajectory LfD method are developed that incorporate a context translation model, reinforcement learning, and a generalized-cylinders-with-orientation approach to generate the control policy the robot uses to perform subsequent tasks. Evaluation results in the Gazebo robotics simulator confirm the promise and applicability of LfD in teaching robot apprentices to perform quasi-repetitive tasks on construction sites. Third, this dissertation explores a safe working environment for human workers and robots. Robot simulations in online Digital Twins can extend designed construction models, such as Building Information Models (BIM), into the construction phase for real-time monitoring of robot motion planning and control. A bi-directional communication system was developed to bridge robot simulations and physical robots in construction and digital fabrication. Empirical studies showed high accuracy in pose synchronization between physical and virtual robots, demonstrating the potential for ensuring safety during proximate human-robot co-work.
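The fusion of vision-based and sensor-based pose estimates described in the abstract can be illustrated with a minimal sketch. The following Python example blends a camera-derived 6-DOF pose with an encoder/IMU-derived pose using a simple confidence-weighted filter; the function name, pose representation, and weights are hypothetical illustrations, not the dissertation's actual implementation, which adapts a deep convolutional pose estimation network.

```python
import numpy as np

# Hypothetical sketch: confidence-weighted fusion of a camera-based pose
# estimate with a sensor-based (e.g., encoder/IMU) pose for one manipulator
# component. Poses are simplified to (x, y, z, roll, pitch, yaw); a real
# system would fuse full 6-DOF transforms (e.g., using quaternions).

def fuse_poses(camera_pose, sensor_pose, camera_weight=0.7):
    """Weighted blend of two 6-DOF pose estimates (length-6 arrays).

    camera_weight reflects relative confidence in the vision estimate;
    when the camera view is occluded, the caller can lower it (or pass 0)
    so the sensor-based pose keeps the estimate uninterrupted.
    """
    camera_pose = np.asarray(camera_pose, dtype=float)
    sensor_pose = np.asarray(sensor_pose, dtype=float)
    return camera_weight * camera_pose + (1.0 - camera_weight) * sensor_pose

if __name__ == "__main__":
    # Illustrative values only (meters / radians).
    cam = [2.01, 0.48, 1.52, 0.10, 0.02, 0.00]
    enc = [1.98, 0.50, 1.50, 0.12, 0.00, 0.01]
    print(fuse_poses(cam, enc))
```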
dc.language.iso: en_US
dc.subject: Human-Robot Collaboration
dc.subject: Robot Pose Estimation
dc.subject: Robot Learning from Demonstration
dc.subject: Robot Digital Twin
dc.subject: Construction Safety
dc.subject: Construction Robotics
dc.title: Affecting Fundamental Transformation in Future Construction Work Through Replication of the Master-Apprentice Learning Model in Human-Robot Worker Teams
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Civil Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Kamat, Vineet Rajendra
dc.contributor.committeemember: Menassa, Carol C
dc.contributor.committeemember: Yang, Xi (Jessie)
dc.contributor.committeemember: Lee, SangHyun
dc.contributor.committeemember: Mcgee, Jonathan Wesley
dc.subject.hlbsecondlevel: Civil and Environmental Engineering
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/169666/1/cjliang_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/2711
dc.identifier.orcid: 0000-0002-0213-8471
dc.identifier.name-orcid: Liang, Ci-Jyun; 0000-0002-0213-8471
dc.working.doi: 10.7302/2711
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


