In this work, a synergy-based reinforcement learning algorithm has been developed to confer autonomous grasping capabilities on anthropomorphic hands. With a high number of degrees of freedom, classical machine learning techniques require a number of iterations that increases with the size of the problem, so convergence of the solution is not ensured. The use of postural synergies determines dimensionality...
Reinforcement learning of motor skills is an important challenge in order to endow robots with the ability to learn a wide range of skills and solve complex tasks. However, comparing reinforcement learning against human programming is not straightforward. In this paper, we create a motor learning framework consisting of state-of-the-art components in motor skill learning and compare it to a manually...
Members of a team are able to coordinate their actions by anticipating the intentions of others. Achieving such implicit coordination between humans and robots requires humans to be able to quickly and robustly predict the robot's intentions, i.e. the robot should demonstrate a behavior that is legible. Whereas previous work has sought to explicitly optimize the legibility of behavior, we investigate...
This paper presents an application of the multi-agent reinforcement learning approach to the efficient control of a mobile robot. The approach is based on a multi-agent system applied to multi-wheel control. The robot platform is decomposed into driving-module agents that are trained independently. The proposed approach incorporates multiple Q-learning agents, which permits them to effectively...
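The abstract above describes multiple independently trained Q-learning agents, one per driving module. As a rough sketch of the core mechanism only (the state/action counts, learning rate, and helper names below are illustrative assumptions, not taken from the paper), a single tabular Q-learning agent maintains a value table and applies the standard one-step update:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 16, 4          # toy sizes, not from the paper
alpha, gamma, epsilon = 0.1, 0.95, 0.2

Q = np.zeros((n_states, n_actions))  # action-value table

def choose_action(state):
    # epsilon-greedy exploration over the current Q estimates
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # one-step Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```

In the multi-agent setting each driving module would own its own `Q` table and run this loop independently on its local state and reward.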
Direct transfer of human motion trajectories to humanoid robots does not result in dynamically stable robot movements due to the differences in human and humanoid robot kinematics and dynamics. We developed a system that converts human movements captured by a low-cost RGB-D camera into dynamically stable humanoid movements. The transfer of human movements occurs in real-time. As need arises, the developed...
To control the motion of a humanoid robot along a desired trajectory in contact with a rigid object, we need to take into account forces that arise from contact with the surface of the object. In this paper we propose a new method that enables the robot to adapt its motion to different surfaces. The initial trajectories are encoded by dynamic movement primitives, which can be learned from visual feedback...
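Dynamic movement primitives, as used in the abstract above, encode a trajectory as a damped spring system pulled toward a goal and shaped by a forcing term; adaptation (e.g. to a new goal or a contact surface) modifies the goal or adds coupling terms while the attractor guarantees convergence. A minimal one-dimensional sketch of the transformation system (the gains, time constants, and the zero default forcing term are illustrative assumptions; the paper's actual formulation may differ):

```python
import numpy as np

def dmp_rollout(y0, goal, f=lambda s: 0.0, tau=1.0, dt=0.001,
                alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
    """Euler-integrate one discrete DMP transformation system:
       tau*zdot = alpha_z*(beta_z*(goal - y) - z) + f(s)
       tau*ydot = z,   tau*sdot = -alpha_s*s  (canonical phase)."""
    y, z, s = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(1.0 / dt)):
        zdot = (alpha_z * (beta_z * (goal - y) - z) + f(s)) / tau
        ydot = z / tau
        s += dt * (-alpha_s * s) / tau
        y += dt * ydot
        z += dt * zdot
        traj.append(y)
    return np.array(traj)
```

With the default zero forcing term the system converges smoothly to the goal; a learned `f(s)` would reproduce the demonstrated shape along the way, and force-feedback coupling terms can be injected at each step without destroying that convergence.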
Artificial potential field (APF) methods are well established for reactive robot navigation. This paper first describes a fast and robust fuzzy APF on an ActivMedia AmigoBot platform. Obstacle-related information is fuzzified directly during sensory fusion, which results in a shorter runtime. The membership functions of obstacle range and direction have also been merged into one function for a smaller block...
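For comparison with the fuzzy variant above, the classical (crisp) APF sums an attractive force toward the goal and repulsive forces from nearby obstacles. A minimal sketch (the gains `k_att`, `k_rep` and the influence distance `d0` are illustrative assumptions; the paper's fuzzy controller replaces these fixed formulas with fuzzified range/direction inputs):

```python
import numpy as np

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classical attractive + repulsive potential-field force."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    # attractive term: pulls linearly toward the goal
    force = k_att * (goal - robot)
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # repulsive term, active only within influence distance d0,
            # pushing away from the obstacle along diff/d
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force
```

A reactive controller would evaluate this at every cycle and steer along the resulting force vector.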
This paper presents an architecture for learning and reproducing movements with a robot in interaction with a human teacher. We focus on the movement representation and propose three enhancements to increase generalization capabilities: Firstly, we introduce a flexible task-level movement representation that is based on neuropsychological findings. Movement is represented in task-oriented frames of...
This paper presents a methodology for learning arbitrary discrete motions from a set of demonstrations. We model a motion as a nonlinear autonomous (i.e. time-invariant) dynamical system, and define the sufficient conditions to make such a system globally asymptotically stable at the target. The convergence of all trajectories is ensured starting from any point in the operational space. We propose...
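A standard way to make such a time-invariant system provably convergent is to constrain its dynamics so that a Lyapunov function decreases everywhere; the simplest instance is a linear system with a negative-definite matrix, for which every trajectory reaches the target from any starting point. A toy sketch of that special case (the matrix, target, and Euler integrator below are illustrative assumptions, not the paper's learned nonlinear model):

```python
import numpy as np

def simulate(f, x0, dt=0.01, steps=2000):
    """Euler-integrate an autonomous dynamical system xdot = f(x)."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

target = np.array([1.0, -0.5])
A = -np.eye(2)  # negative definite => globally asymptotically stable at target

f = lambda x: A @ (x - target)
x_final = simulate(f, x0=[5.0, 3.0])
```

The learned models in work of this kind are nonlinear, but they are fitted under constraints that preserve exactly this global-convergence property.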
We present an approach allowing a robot to acquire new motor skills by learning the couplings across motor control variables. The demonstrated skill is first encoded in a compact form through a modified version of Dynamic Movement Primitives (DMP) which encapsulates correlation information. Expectation-Maximization based Reinforcement Learning is then used to modulate the mixture of dynamical systems...
Reinforcement learning in the high-dimensional, continuous spaces typical of robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations...
This paper deals with the time-optimization of wheeled-robot trajectories subject to speed and other constraints. A cubic Hermite spline curve, together with a speed-profile computation method, is used to determine the trajectory. This method is summarized and extended to allow optimization under the described constraints. It ensures fulfilment of the required initial motion parameters. The parameters...
Models proposed within the literature of motor control have polarised around two classes of controllers which differ in terms of controlled variables: the Force-Control Models (FCMs), based on dynamic control, and the Equilibrium-Point Models (EPMs), based on kinematic control. This paper proposes a bioinspired model which aims to exploit the strengths of the two classes of models. The model is tested...
A neuro-fuzzy learning algorithm is applied to design a Takagi-Sugeno type Fuzzy Logic Controller (T-S FLC) for a biped robot walking problem. The control design considers an output function imposed on the feedback, and several T-S FLC models are determined, each by ANFIS; together, these represent the piecewise control inputs that perform a walking cycle. Two simulations of the closed-loop system for...
In this paper we evaluate two learning methods applied to the ball-in-a-cup game. The first approach is based on imitation learning: the captured trajectory was encoded with dynamic movement primitives (DMPs), which allow simple adaptation of the demonstrated trajectory to the robot dynamics. In the second approach, we use reinforcement learning, which allows learning without any previous...
In this paper, we propose a method for adding new items to an existing knowledge base, taking into account the information already collected via a human-machine interface. The system can be used for acquiring data, monitoring the fabrication cells, and in the knowledge acquisition process. The knowledge base is an essential part, used in the monitoring and control of the robot in the fabrication cell....
The paper presents a neural model for the kinematic analysis of a six-DOF parallel robot. The modelling consists of two stages. The first stage is choosing a three-layer perceptron-type neural network and training it so that it learns a set of training data well enough. The second stage is testing the trained model during the generalization phase. Both tasks were carried...
In this paper we consider the problem of ensuring that a multi-agent robot control system is both safe and effective in the presence of learning components. Safety, i.e., proving that a potentially dangerous configuration is never reached in the control system, usually competes with effectiveness, i.e., ensuring that tasks are performed at an acceptable level of quality. In particular, we focus on...
Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the...
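Policy-search methods of the kind the abstract above refers to often sidestep value-function approximation in continuous spaces by perturbing the policy parameters directly and re-weighting the perturbations by episodic reward. A toy sketch of such reward-weighted parameter averaging on a synthetic reward (the reward function, sample counts, and exploration noise are all invented for illustration and are not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
optimum = np.array([2.0, -1.0])   # hidden optimum of the toy reward

def episodic_reward(theta):
    # synthetic episodic reward peaking at `optimum` (illustration only)
    return np.exp(-np.sum((theta - optimum) ** 2))

theta = np.zeros(2)   # policy parameters
sigma = 0.5           # exploration noise on the parameters
for _ in range(100):
    # sample perturbed parameter vectors (one "rollout" each)
    eps = sigma * rng.standard_normal((50, 2))
    r = np.array([episodic_reward(theta + e) for e in eps])
    # EM-style update: average the perturbations, weighted by reward
    theta = theta + (r / r.sum()) @ eps
```

Because the update is a weighted average rather than a gradient step, it needs no learning rate and stays well behaved even when individual rollouts are noisy.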
In this paper a bio-inspired control architecture for a robotic hand is presented. It relies on the same mechanisms for learning inverse internal models that have been studied in humans. The controller is capable of developing an internal representation of the hand interacting with the environment and updating it by means of the interaction forces that arise during contact. The learning paradigm exploits LWPR networks,...