Domain: Machine Learning-Robotics
Domain - extra:
Year: 2014
Starting: September 2014
Status: Open
Subject: Deep Reinforcement Learning
Thesis advisor: SEBAG Michèle
Co-advisors: Marc Schoenauer, INRIA
Laboratory: LRI A&O
Collaborations:
Abstract: Reinforcement learning achievements critically depend on the representation of the state space. High-dimensional state spaces (e.g. described through the robot's many sensors or camera pixels) hinder the characterization of the value functions. Previous approaches rely on function approximation (e.g. to deal with continuous state spaces), feature selection (to cope with high state dimensionality), or the use of models to guide the sampling of the search space.
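As an illustration of the first option, function approximation, the following minimal sketch (an assumption for illustration, not part of the proposal) estimates the value of a fixed policy by semi-gradient TD(0) with a linear approximator V(s) ~ w . phi(s); the feature map phi and the environment interface (reset, step, sample_action) are hypothetical.

    import numpy as np

    def td0_linear(env, phi, n_features, episodes=100, alpha=0.01, gamma=0.99):
        """Evaluate a fixed policy with TD(0) and a linear value approximator."""
        w = np.zeros(n_features)                     # weights of V(s) ~ w . phi(s)
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                a = env.sample_action()              # fixed (here: random) policy
                s_next, r, done = env.step(a)
                target = r + (0.0 if done else gamma * np.dot(w, phi(s_next)))
                td_error = target - np.dot(w, phi(s))
                w += alpha * td_error * phi(s)       # semi-gradient TD(0) update
                s = s_next
        return w

With pixel-level or many-sensor states, phi would typically have to be learned rather than hand-crafted, which is where deep representations come in.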
Basically, RL involves three interdependent problems: (i) modelling the transitions of the environment (a.k.a. the forward model for a robot, which can be thought of as a simulator estimating the next state from the current state and the selected action); (ii) modelling the reward (a.k.a. learning the value functions, estimating how much cumulative reward the robot will get from a given state when following an improving policy); (iii) exploring the action space to support a better modelling of transitions and values.
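The sketch below (again an illustrative assumption, not the method proposed in the thesis) ties the three problems together on a small discrete MDP: a count-based estimate of the forward model, a tabular action-value function learned by Q-learning, and epsilon-greedy exploration of the action space; the environment interface is hypothetical.

    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.95, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        Q = np.zeros((n_states, n_actions))                  # value model
        counts = np.zeros((n_states, n_actions, n_states))   # forward-model counts
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # exploration: with probability eps, try a random action
                if rng.random() < eps:
                    a = int(rng.integers(n_actions))
                else:
                    a = int(np.argmax(Q[s]))
                s_next, r, done = env.step(a)
                counts[s, a, s_next] += 1                    # refine the forward model
                target = r + (0.0 if done else gamma * Q[s_next].max())
                Q[s, a] += alpha * (target - Q[s, a])        # refine the value model
                s = s_next
        # empirical transition probabilities; denominator clipped to 1 to avoid
        # dividing by zero for unvisited (s, a) pairs
        P_hat = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)
        return Q, P_hat

The interdependence shows up directly: better exploration improves both estimates, while better value and transition estimates make exploration more informative.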
Context:
Objectives:
Work program:
Extra information:
Prerequisite:
Details:
Expected funding: Institutional funding
Status of funding: Expected
Candidates:
User: michele-martine.sebag
Created: Thursday, 12 June 2014, 22:52:49 CEST
Last modified: Thursday, 12 June 2014, 22:52:49 CEST
Comments:
Attachments: none


The original document is available at https://edips.lri.fr/tiki-view_tracker_item.php?itemId=4003