DSpace Repository

Policy-gradient learning for motor control


dc.contributor.author Field, Timothy P
dc.date.accessioned 2011-03-28T20:37:00Z
dc.date.accessioned 2022-10-25T07:31:02Z
dc.date.available 2011-03-28T20:37:00Z
dc.date.available 2022-10-25T07:31:02Z
dc.date.copyright 2005
dc.date.issued 2005
dc.identifier.uri https://ir.wgtn.ac.nz/handle/123456789/23563
dc.description.abstract Until recently it was widely considered that value function-based reinforcement learning methods were the only feasible way of solving general stochastic optimal control problems. Unfortunately, these approaches are inapplicable to real-world problems such as motor control tasks, which are continuous, high-dimensional and partially observable. While policy-gradient reinforcement learning methods offer a suitable approach to such tasks, they suffer from typical parametric learning issues such as model selection and catastrophic forgetting. This thesis investigates the application of policy-gradient learning to a range of simulated motor learning tasks and introduces the use of local factored policies to enable incremental learning in tasks of unknown complexity. en_NZ
dc.format pdf en_NZ
dc.language en_NZ
dc.language.iso en_NZ
dc.publisher Te Herenga Waka—Victoria University of Wellington en_NZ
dc.title Policy-gradient learning for motor control en_NZ
dc.type Text en_NZ
vuwschema.type.vuw Awarded Research Masters Thesis en_NZ
thesis.degree.discipline Computer Science en_NZ
thesis.degree.grantor Te Herenga Waka—Victoria University of Wellington en_NZ
thesis.degree.level Masters en_NZ
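
The thesis body is not reproduced in this record, but the policy-gradient approach the abstract refers to can be illustrated with a minimal REINFORCE sketch on a toy one-parameter continuous-action task. The task, parameter values, and function name below are illustrative assumptions, not taken from the thesis:

```python
import random

def reinforce_1d(target=2.0, sigma=0.5, alpha=0.02, episodes=3000, seed=0):
    """REINFORCE on a toy continuous-action task (illustrative, not from the thesis).

    State s ~ Uniform(-1, 1); action a ~ N(theta * s, sigma^2);
    reward r = -(a - target * s)^2, so the optimal parameter is theta = target.
    """
    rng = random.Random(seed)
    theta = 0.0       # single policy parameter (mean gain)
    baseline = 0.0    # running-average reward baseline to reduce variance
    for _ in range(episodes):
        s = rng.uniform(-1.0, 1.0)
        mu = theta * s
        a = rng.gauss(mu, sigma)            # Gaussian exploration around the mean
        r = -(a - target * s) ** 2
        # Score function: d/dtheta log N(a; theta*s, sigma^2) = (a - mu) * s / sigma^2
        grad_log = (a - mu) * s / sigma ** 2
        theta += alpha * (r - baseline) * grad_log
        baseline += 0.05 * (r - baseline)   # slow running average of reward
    return theta
```

The two ingredients shown here, stochastic exploration through a parameterised Gaussian policy and the score-function (likelihood-ratio) gradient estimator, are what let policy-gradient methods handle continuous action spaces where value-function methods struggle, which is the contrast the abstract draws.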

