NIPS 2007 Workshop on Robotics Challenges for Machine Learning
Dates: 7-8 December, 2007
Organizers:
Jan Peters (Max Planck Institute for Biological Cybernetics & USC),
Marc Toussaint (Technical University of Berlin)
WWW: http://www.robot-learning.de
email: nips07@robot-learning.de
Abstract Submission Deadline: October 21, 2007
Acceptance Notification: October 26, 2007
================ CALL FOR POSTERS ================
Abstract:
Creating autonomous robots that can assist humans in situations of
daily life is a great challenge for machine learning. While this aim
has been a long-standing vision of robotics, artificial intelligence,
and the cognitive sciences, we have yet to achieve the first step of
creating robots that can accomplish a multitude of different tasks,
triggered by environmental context or higher level
instruction. Despite the wide range of machine learning problems
encountered in robotics, the main bottleneck towards this goal has
been a lack of interaction between the core robotics and the machine
learning communities. To date, many roboticists still dismiss machine
learning approaches as generally inapplicable or as inferior to
classical, hand-crafted solutions. Similarly, machine learning
researchers do not yet acknowledge that robotics can play the same role
for machine learning that, for instance, physics has played for
mathematics: that of a major application as well as a driving force for
new ideas, algorithms, and approaches.
Some fundamental problems we encounter in robotics that equally
inspire current research directions in Machine Learning are:
– learning and handling models (e.g., of robots, tasks, or
environments)
– learning deep hierarchies or levels of representations (e.g., from
sensor & motor representations to task abstractions)
– regression in very high-dimensional spaces for model and policy
learning
– finding low-dimensional embeddings of movement as an implicit
generative model
– methods for probabilistic inference of task parameters from vision,
e.g., 3D geometry of manipulated objects
– the integration of multi-modal information (e.g., proprioceptive,
tactile, vision) for state estimation and causal inference
– probabilistic inference in non-linear, non-Gaussian stochastic
systems (e.g., for planning as well as optimal or adaptive control), as
sketched below
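To make the last item concrete, here is a minimal bootstrap particle
filter on a toy one-dimensional non-linear system with non-Gaussian
(Laplace) process noise; the dynamics, observation model, and all
constants are assumptions chosen purely for illustration (in Python):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500                                  # number of particles

    def dynamics(x):
        # toy non-linear dynamics (illustrative assumption)
        return 0.5 * x + 25.0 * x / (1.0 + x**2)

    def observe(x):
        # toy non-linear observation model (illustrative assumption)
        return x**2 / 20.0

    particles = rng.normal(0.0, 1.0, N)      # initial belief over the state
    x_true = 0.0
    for step in range(50):
        # simulate the true system with non-Gaussian process noise
        x_true = dynamics(x_true) + rng.laplace(0.0, 1.0)
        y = observe(x_true) + rng.normal(0.0, 1.0)
        # predict: propagate each particle through the stochastic dynamics
        particles = dynamics(particles) + rng.laplace(0.0, 1.0, N)
        # update: weight particles by the Gaussian observation likelihood
        w = np.exp(-0.5 * (y - observe(particles))**2) + 1e-12
        w /= w.sum()
        # resample to concentrate particles on likely states
        particles = rng.choice(particles, size=N, p=w)

    print("state estimate:", particles.mean(), "true state:", x_true)

Where Kalman-style Gaussian assumptions break down, such sampling-based
inference remains applicable; systematic resampling and better proposal
distributions are the usual refinements.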
Robotics challenges can thus inspire and motivate new Machine Learning
research, as well as serve as an interesting field of application for
standard ML techniques.
Conversely, with the current rise of real, physical humanoid robots in
robotics research labs around the globe, the need for machine learning
in robotics has grown significantly. Only if machine learning succeeds
at making robots fully adaptive are we likely to be able to take real
robots out of the research labs and into real, human-inhabited
environments. To do so, future robots will need to make proper use of
perceptual stimuli such as vision and proprioceptive & tactile
feedback, and to translate these into motor commands.
To close this complex loop, machine learning will be needed at various
stages, ranging from sensory-based action determination through
high-level plan generation to motor control at the torque level. Among
the important problems hidden in these steps are ones that can be
understood from both the robotics and the machine learning points of
view, including perception-action coupling, imitation learning,
movement decomposition, probabilistic planning, motor primitive
learning, reinforcement learning, model learning, and motor control.
Format:
The goal of this one-day workshop is to bring together people who are
interested in robotics as a source of and inspiration for new Machine
Learning challenges, or who work on Machine Learning methods as a new
approach to robotics challenges. In the robotics context, the questions
we intend to tackle include:
Reinforcement Learning, Imitation, and Active Learning:
* What methods from reinforcement learning scale into the domain of
robotics?
* How can we improve policies acquired through imitation by trial and
error? (A minimal sketch follows this list.)
* Can we turn many simple learned demonstrations into proper policies?
* Does knowledge of the teacher's cost function help the student?
* Can statistical methods help generate actions that actively
influence our perception? E.g., can these be used to plan visuo-motor
sequences that minimize our uncertainty about the scene?
* How can image understanding methods be extended to provide
probabilistic scene descriptions suitable for motor planning?
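As one hedged illustration of the trial-and-error question above, the
sketch below refines a linear-Gaussian policy, assumed to have been
initialized from demonstrations, with REINFORCE-style policy-gradient
updates on a toy one-dimensional reaching task; the task, reward, and
all constants are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(1)
    goal, sigma, alpha = 1.0, 0.3, 0.02      # task and learning constants
    theta = np.array([0.0, 0.2])             # pretend this came from imitation

    def rollout(theta):
        # one episode: Gaussian action a ~ N(theta . phi(s), sigma^2)
        s = rng.normal()
        phi = np.array([1.0, s])             # affine features of the state
        a = theta @ phi + sigma * rng.normal()
        reward = -(s + a - goal) ** 2        # quadratic reaching cost
        score = (a - theta @ phi) * phi / sigma**2   # grad of log-likelihood
        return reward, score

    for it in range(500):
        batch = [rollout(theta) for _ in range(20)]
        rewards = np.array([r for r, _ in batch])
        scores = np.stack([sc for _, sc in batch])
        baseline = rewards.mean()            # variance-reduction baseline
        grad = ((rewards - baseline)[:, None] * scores).mean(axis=0)
        theta = theta + alpha * grad         # gradient ascent on reward

    print("learned policy parameters:", theta.round(2))  # should approach [1, -1]

The demonstration-based initialization is what keeps such trial-and-error
search feasible on a real robot, where random exploration from scratch
would be prohibitively expensive.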
Motor Representations and Control:
* Can we decompose human demonstrations into elemental movements,
e.g., motor primitives, and learn these efficiently? (A minimal
motor-primitive sketch follows this list.)
* Is it possible to build libraries of basic movements from
demonstration? How can we create higher-level structured
representations and abstractions based on elemental movements?
* Can structured (e.g., hierarchical) temporal stochastic models
be used to plan the sequencing and superposition of movement
primitives?
* Is probabilistic inference the road towards composing complex
action sequences from simple demonstrations? Are superpositions of
motor primitives and the coupling in timing between these learnable?
* How can we generate compliant controls for executing complex
movement plans that include both superpositions and hierarchies of
elemental movements? Can we find learned versions of prioritized
hierarchical control?
* Can we learn to control redundant robots in task space in the
presence of under-actuation and complex constraints? Can we learn force
or hybrid control in task space?
* Is real-time model learning the way to cope with executing tasks on
robots with unmodeled nonlinearities and with manipulating uncertain
objects in unpredictable environmental interactions? What new
regression techniques can help such real-time model learning improve
the execution of these tasks?
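As one possible handle on the first question above, the sketch below
fits a dynamic-movement-primitive-style forcing term to a single
synthetic demonstration and replays the learned primitive toward a new
goal; the gains, basis functions, and the demonstration itself are
illustrative assumptions, not a prescribed method:

    import numpy as np

    dt, T = 0.01, 1.0
    t = np.linspace(0.0, T, int(T / dt) + 1)
    demo = 0.5 * (1.0 - np.cos(np.pi * t))   # hypothetical demonstration 0 -> 1

    K, D, alpha_s = 100.0, 20.0, 4.0         # spring, damper, canonical decay
    x0, g = demo[0], demo[-1]

    # canonical phase s(t) = exp(-alpha_s t) with Gaussian bases in s
    s = np.exp(-alpha_s * t)
    centers = np.exp(-alpha_s * np.linspace(0.0, T, 20))
    widths = 20.0 / centers**2               # heuristic basis widths
    psi = np.exp(-widths * (s[:, None] - centers)**2)
    features = psi * s[:, None] / psi.sum(axis=1, keepdims=True)

    # forcing term that would reproduce the demonstrated accelerations
    xd = np.gradient(demo, dt)
    xdd = np.gradient(xd, dt)
    f_target = xdd - K * (g - demo) + D * xd

    # least-squares weight fit (locally weighted regression is the
    # classical choice)
    w, *_ = np.linalg.lstsq(features, f_target, rcond=None)

    # replay toward a new goal: the movement shape is preserved
    g_new = 1.5
    scale = (g_new - x0) / (g - x0)          # simple goal-scaling heuristic
    x, v = x0, 0.0
    for k in range(len(t)):
        f = scale * (features[k] @ w)
        a = K * (g_new - x) - D * v + f      # point attractor plus forcing term
        v += a * dt
        x += v * dt
    print("replayed final position:", round(x, 3), "new goal:", g_new)

A library of such primitives, indexed by goals and timing parameters,
is one candidate representation for the sequencing and superposition
questions above.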
Learning structured models and representations:
* What kind of probabilistic models provide a compact and suitable
description of real-world environments composed of manipulable
objects?
* How can abstractions or compact representations be learned from
sensori-motor data?
* How can we extract features of the sensori-motor data that are
relevant for motor control or decision making? E.g., can we extract
visual features of objects directly related to their manipulability or
“affordance”? (A minimal sketch follows this list.)
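As a minimal illustration of the affordance question, the sketch below
trains a plain logistic regression on synthetic object features to
predict a hypothetical "graspable" label; the features, labels, and
data are fabricated assumptions, serving only to show the kind of
feature relevance one hopes to extract:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    # hypothetical per-object features: [width, height, curvature]
    X = rng.uniform(0.0, 1.0, (n, 3))
    # synthetic ground truth: narrow, curved objects are graspable
    y = ((X[:, 0] < 0.5) & (X[:, 2] > 0.4)).astype(float)

    # plain logistic regression trained by batch gradient descent
    w, b = np.zeros(3), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= 1.0 * (X.T @ (p - y)) / n           # gradient of the log-loss
        b -= 1.0 * np.mean(p - y)

    print("feature weights (width, height, curvature):", w.round(2))

Large-magnitude weights flag the features most relevant to
manipulability, which is the kind of grounding one would want before
passing such features on to motor planning.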
Posters:
We welcome any posters posing problems for machine learning, as well as
posters presenting machine learning algorithms with applications in
robotics.
The deadline for abstract submissions is October 21, 2007, and
notification of acceptance will be sent by October 26, 2007.