Policy-gradient learning for motor control
dc.contributor.author | Field, Timothy P | |
dc.date.accessioned | 2011-03-28T20:37:00Z | |
dc.date.accessioned | 2022-10-25T07:31:02Z | |
dc.date.available | 2011-03-28T20:37:00Z | |
dc.date.available | 2022-10-25T07:31:02Z | |
dc.date.copyright | 2005 | |
dc.date.issued | 2005 | |
dc.description.abstract | Until recently it was widely considered that value-function-based reinforcement learning methods were the only feasible way of solving general stochastic optimal control problems. Unfortunately, these approaches are inapplicable to real-world problems with continuous, high-dimensional and partially-observable properties, such as motor control tasks. While policy-gradient reinforcement learning methods suggest a suitable approach to such tasks, they suffer from typical parametric learning issues such as model selection and catastrophic forgetting. This thesis investigates the application of policy-gradient learning to a range of simulated motor learning tasks and introduces the use of local factored policies to enable incremental learning in tasks of unknown complexity. | en_NZ |
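The abstract refers to policy-gradient reinforcement learning. As context for the record, here is a minimal sketch of the classic REINFORCE policy-gradient update on a hypothetical two-armed bandit (the reward values, learning rate, and step count below are illustrative assumptions, not taken from the thesis):

```python
import math
import random

random.seed(0)

# Hypothetical two-armed bandit: arm 1 pays 1.0, arm 0 pays 0.2.
REWARDS = [0.2, 1.0]

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce(steps=2000, lr=0.1):
    prefs = [0.0, 0.0]   # policy parameters (action preferences)
    baseline = 0.0       # running mean reward, used to reduce variance
    for t in range(steps):
        probs = softmax(prefs)
        a = random.choices([0, 1], weights=probs)[0]
        r = REWARDS[a]
        baseline += (r - baseline) / (t + 1)
        # REINFORCE update: theta_i += lr * (r - baseline) * d/d theta_i log pi(a)
        # For a softmax policy, grad log pi(a) w.r.t. pref i is 1{i == a} - pi(i).
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * (r - baseline) * grad
    return softmax(prefs)

probs = reinforce()
```

After training, the policy concentrates probability on the higher-paying arm; the gradient ascends expected reward directly in policy-parameter space, which is what makes these methods applicable to continuous, partially-observable control where value functions are hard to represent.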
dc.format | en_NZ | |
dc.identifier.uri | https://ir.wgtn.ac.nz/handle/123456789/23563 | |
dc.language | en_NZ | |
dc.language.iso | en_NZ | |
dc.publisher | Te Herenga Waka—Victoria University of Wellington | en_NZ |
dc.subject | Machine learning | |
dc.subject | Algorithms | |
dc.subject | Computer algorithms | |
dc.subject | Motor learning | |
dc.subject | Reinforcement learning | |
dc.subject | Stochastic control theory | |
dc.title | Policy-gradient learning for motor control | en_NZ |
dc.type | Text | en_NZ |
thesis.degree.discipline | Computer Science | en_NZ |
thesis.degree.grantor | Te Herenga Waka—Victoria University of Wellington | en_NZ |
thesis.degree.level | Masters | en_NZ |
thesis.degree.name | Master of Science | en_NZ |
vuwschema.type.vuw | Awarded Research Masters Thesis | en_NZ |