Robots that operate in natural human environments must be capable of handling uncertain dynamics and underspecified goals. Current solutions for robot motion planning are split between graph-search methods, such as RRT and PRM, which offer solutions to high-dimensional problems, and reinforcement learning methods, which relieve the need to specify explicit goals and action dynamics. This paper addresses...
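The abstract above contrasts planners such as RRT with learning-based methods. For illustration, a minimal sketch of a 2-D RRT in an obstacle-free workspace follows; the function names, parameters (step size, goal-sampling bias, goal tolerance), and the `is_free` collision-check callback are illustrative assumptions, not the paper's method.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, bounds=(0.0, 10.0)):
    """Minimal 2-D RRT sketch. `is_free(p)` is an assumed collision test.

    Grows a tree from `start` by steering toward random samples (with a
    10% bias toward `goal`) and returns the node path once the tree
    reaches within `goal_tol` of the goal, or None on failure.
    """
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # Goal-biased sampling: occasionally aim straight at the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # Steer from the nearest node toward the sample by at most `step`.
        scale = min(step, d) / d
        new = (near[0] + (sample[0] - near[0]) * scale,
               near[1] + (sample[1] - near[1]) * scale)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the start to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

The goal-biased sampling is one common design choice; a pure uniform sampler also works but converges more slowly.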
Using data collected from human teleoperation, our goal is to learn a control policy that maps perception to actuation. Such policies are potentially multi-valued with regard to perception, with a single input mapping to multiple outputs depending on the user's objective at a particular time. We propose a multi-valued function regressor to learn a larger class of robot control policies from human demonstration...
We report on the development of a new simulation environment for use in Multi-Robot Learning, Swarm Robotics, Robot Teaming, Human Factors and Operator Training. The simulator provides a realistic environment for examining methods for localization and navigation, sensor analysis, object identification and tracking, as well as strategy development, interface refinement and operator training (based...
In this paper we propose a process which is able to generate abstract service robot mission representations, utilized during execution for autonomous, probabilistic decision making, by observing human demonstrations. The observation process is based on the same perceptive components as used by the robot during execution, recording dialogue between humans, human motion, and object poses. This...
Teleoperated rescue robots designed to explore disaster scenes and find victims face serious limitations due to the cluttered nature of the environments as well as the rescue operators becoming stressed and disoriented in these scenes. An alternative to using teleoperated control is to develop fully autonomous controllers for rescue robots. However, these robots are also not capable of traversing...
Demonstration learning is a powerful and practical technique to develop robot behaviors. Even so, development remains a challenge and possible demonstration limitations can degrade policy performance. This work presents an approach for policy improvement and adaptation through a tactile interface located on the body of a robot. We introduce the Tactile Policy Correction (TPC) algorithm, which employs...
In this paper we evaluate two learning methods applied to the ball-in-a-cup game. The first approach is based on imitation learning. The captured trajectory is encoded with Dynamic Movement Primitives (DMPs). The DMP approach allows simple adaptation of the demonstrated trajectory to the robot dynamics. In the second approach, we use reinforcement learning, which allows learning without any previous...
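The DMP encoding mentioned above fits a learned forcing term onto a stable spring-damper system so the demonstrated shape is reproduced while the attractor guarantees convergence to the goal. A minimal one-dimensional sketch follows; the gains, basis-function count, and function names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def learn_dmp(y_demo, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Fit forcing-term weights so a 1-D DMP reproduces y_demo."""
    T = len(y_demo)
    y0, g = y_demo[0], y_demo[-1]
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    x = np.exp(-alpha_x * dt * np.arange(T))           # canonical phase
    # Forcing term the demonstration implies, given the spring-damper part.
    f_target = ydd - alpha * (beta * (g - y_demo) - yd)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centres
    h = n_basis / c                                        # basis widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)
    s = x * (g - y0)                                   # phase-scaled input
    # One weighted least-squares fit per Gaussian basis function.
    w = np.array([(s * psi[:, i] @ f_target) / (s * psi[:, i] @ s + 1e-10)
                  for i in range(n_basis)])
    return w, c, h, y0, g

def rollout(w, c, h, y0, g, dt, T, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate the DMP forward; the forcing term vanishes as x -> 0."""
    y, yd, x = y0, 0.0, 1.0
    out = []
    for _ in range(T):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        ydd = alpha * (beta * (g - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt
        out.append(y)
    return np.array(out)
```

Because the forcing term is scaled by the goal offset, the same weights adapt the trajectory to a new start or goal, which is the adaptation property the abstract refers to.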
Analytic modeling, imitation, and experience-based learning are three approaches that enable robots to acquire models of their morphology and skills. In this paper, we combine these three approaches to efficiently gather training data to learn a model of reachability for a typical mobile manipulation task: approaching a worksurface in order to grasp an object. The core of the approach is experience-based...
Programming a humanoid robot to perform an action that takes the robot's complex dynamics into account is a challenging problem. Traditional approaches typically require highly accurate prior knowledge of the robot's dynamics and environment in order to devise complex control algorithms for generating a stable dynamic motion. Training using human motion capture is an intuitive and flexible approach...
Various strategies exist for teaching and controlling humanoid robots. Some rely on direct joint or tip control, while others use a more intuitive approach such as mimicking human motion in a given task. This kind of robot control, in which the robot is treated as a tool operated by a human demonstrator, is called visuo-motion control. In this paper, we present an improved approach to overcome a...
This paper focuses on developing a team of mobile robots capable of learning via human interaction. A modified Q-learning algorithm incorporating a teacher is proposed. The paper first concentrates on simplifying the Q-learning algorithm to be implemented on a small and simple team of robots with limited memory and computational power. Second, it concentrates on the incorporation of...
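One common way to incorporate a teacher into tabular Q-learning is to let the teacher occasionally choose the action while the standard update rule is kept unchanged. The sketch below illustrates that idea only; the interfaces (`env.reset`, `env.step`, `env.actions`, the `teacher` callable) and the advice probability are assumptions for illustration, not the paper's algorithm.

```python
import random

def teacher_guided_q_learning(env, teacher, episodes=500, alpha=0.1,
                              gamma=0.9, epsilon=0.2, advice_prob=0.3):
    """Tabular Q-learning where a teacher sometimes supplies the action.

    Assumed interfaces: env.reset() -> state, env.step(a) -> (state,
    reward, done), env.actions (iterable), teacher(state) -> action.
    """
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < advice_prob:
                a = teacher(s)                      # follow the teacher
            elif random.random() < epsilon:
                a = random.choice(list(env.actions))  # explore
            else:
                a = max(env.actions, key=lambda a: q(s, a))  # exploit
            s2, r, done = env.step(a)
            # Standard Q-learning update; the teacher only biases
            # which transitions are experienced.
            best_next = max(q(s2, a2) for a2 in env.actions)
            Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
            s = s2
    return Q
```

Keeping the update rule intact (and small, since only a dictionary of visited state-action pairs is stored) matches the abstract's emphasis on robots with limited memory and computation.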
Both a self-learning architecture (embedded structure) and explicit/implicit teaching from other agents (an environmental design issue) are necessary not only for learning a single behavior but, more critically, for lifetime behavior learning. This paper presents a method for a robot to understand unfamiliar behavior shown by others through the collaboration between behavior acquisition and recognition of observed...
In most reported work on robot learning by demonstration (LbD), the demonstration is typically limited to simple gestures or grasp actions. In this paper, motion-trajectory-oriented LbD is studied, in which free-form 3-D motion trajectories are extracted to characterize human demonstrations. We propose to build effective descriptions of the motion trajectories to be learned by a robot instead of...