Show simple item record

Demonstration of the Dyna Reinforcement Learning Framework for Reactive Close Proximity Operations

dc.contributor.author: Majumdar, Ritwik
dc.contributor.author: Sternberg, David
dc.contributor.author: Albee, Keenan
dc.contributor.author: Jia-Richards, Oliver
dc.date.accessioned: 2025-01-06T14:54:42Z
dc.date.available: 2025-01-06T14:54:42Z
dc.date.issued: 2025-01
dc.identifier.citation: AIAA SciTech Forum, AIAA 2025-1002, Orlando, FL, USA, 2025
dc.identifier.uri: https://hdl.handle.net/2027.42/195994
dc.description.abstract: Lessons from the International Space Station (ISS) emphasize the necessity of exterior inspection for anomaly detection and maintenance, but current methods rely on costly and limited human extravehicular activities and robotic arms. Deployable free-flying small spacecraft offer a flexible, autonomous solution, capable of comprehensive exterior inspections without human involvement. However, the safety of these spacecraft during close proximity operations remains a concern, particularly given uncertain variability in thruster performance. This paper presents SmallSat Steward, a reactive and integrated architecture for online model learning and trajectory planning based on the Dyna reinforcement learning architecture. By combining model-based planning and direct reinforcement learning, Dyna offers a potentially flexible and computationally efficient solution capable of adapting to changes in thruster performance and other system uncertainties. Preliminary results in both simulation and hardware environments demonstrate the potential of this architecture to successfully regulate position under single and double thruster failures. In simulation, the Dyna-based controller outperformed a PD-LQR controller in ~70% of all cases. On hardware, Dyna was able to eliminate the steady state error caused by thruster failures.
dc.description.sponsorship: NASA University SmallSat Technology Partnerships (80NSSC23M0237)
dc.language.iso: en_US
dc.publisher: American Institute of Aeronautics and Astronautics
dc.title: Demonstration of the Dyna Reinforcement Learning Framework for Reactive Close Proximity Operations
dc.type: Conference Paper
dc.subject.hlbsecondlevel: Aerospace Engineering
dc.subject.hlbtoplevel: Engineering
dc.contributor.affiliationum: Aerospace Engineering, Department of
dc.contributor.affiliationother: Jet Propulsion Laboratory, California Institute of Technology
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/195994/1/10.2514:6.2025-1002.pdf
dc.identifier.doi: 10.2514/6.2025-1002
dc.identifier.doi: https://dx.doi.org/10.7302/24930
dc.identifier.source: AIAA SciTech Forum
dc.description.filedescription: Description of 10.2514:6.2025-1002.pdf : Main article
dc.description.depositor: SELF
dc.working.doi: 10.7302/24930
dc.owningcollname: Aerospace Engineering, Department of
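The abstract above describes the Dyna framework's core idea of interleaving direct reinforcement learning on real experience with planning updates drawn from a learned model. As an illustration only, the following is a minimal sketch of classic tabular Dyna-Q applied to a toy 1-D chain in which one action intermittently fails, loosely echoing a degraded thruster. Everything here (the chain environment, the step function, the 30% failure rate, and the learning parameters) is an assumption chosen for illustration; it is not the paper's SmallSat Steward controller, spacecraft dynamics, or experimental setup.

```python
import random
from collections import defaultdict

# Minimal tabular Dyna-Q sketch: direct RL from real experience plus
# n planning updates replayed from a learned one-step model.
# The 1-D chain below is a toy stand-in, not the paper's spacecraft dynamics.

N_STATES, GOAL = 10, 9        # chain states 0..9, goal at the right end
ACTIONS = (-1, +1)            # "move left" / "move right"

def step(state, action):
    """Toy dynamics: the +1 action fails 30% of the time (a degraded 'thruster')."""
    if action == +1 and random.random() < 0.3:
        action = 0
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else -0.01
    return next_state, reward, next_state == GOAL

Q = defaultdict(float)        # action-value table, Q[(state, action)]
model = {}                    # learned model: model[(state, action)] = (next_state, reward)
alpha, gamma, eps, n_planning = 0.1, 0.95, 0.2, 20

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(300):
    state = 0
    for _ in range(100):                          # cap episode length
        action = random.choice(ACTIONS) if random.random() < eps else greedy(state)
        next_state, reward, done = step(state, action)

        # (a) Direct RL: one-step Q-learning update from the real transition.
        target = reward if done else reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])

        # (b) Model learning: remember the most recent outcome of (state, action).
        model[(state, action)] = (next_state, reward)

        # (c) Planning: n simulated updates using transitions drawn from the model.
        for _ in range(n_planning):
            (s, a), (s2, r) = random.choice(list(model.items()))
            t = r if s2 == GOAL else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (t - Q[(s, a)])

        state = next_state
        if done:
            break

print("Greedy action per state:", [greedy(s) for s in range(N_STATES)])
```

The appeal of this structure, as the abstract notes, is sample efficiency: each real transition both updates the value function directly and refreshes the learned model, and the additional planning sweeps let the policy adapt quickly once the model reflects a change such as an underperforming thruster.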


