In this paper, we train a robot to learn an obstacle-avoidance task online. The robot has at its disposal only the visual input from a linear camera, in an arena whose walls are covered with random black and white stripes. The robot is controlled by a recurrent spiking neural network (integrate-and-fire). The learning rule is spike-timing-dependent plasticity (STDP) combined with its counterpart, the so-called...
In this paper, we study the difference between two ways of setting synaptic weights in a "temporal" neural network. Used as the controller of a simulated mobile robot, the neural network is either evolved through an evolutionary algorithm or trained with a Hebbian reinforcement learning rule. We compare both approaches and argue that, ultimately, only the learning paradigm is able to...
Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that crosses over to Brownian motion with a reduced diffusion coefficient at long times, after the anomalous regime. Slowed-down Brownian motion can therefore be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are...
This paper addresses the question of the functional role of the dual application of positive and negative Hebbian time-dependent plasticity rules, in the particular framework of reinforcement learning tasks. Our simulations take place in a recurrent network of spiking neurons with inhomogeneous synaptic weights. A spike-timing-dependent plasticity (STDP) rule is combined with its "opposite", the "anti-STDP"...
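Two of the abstracts above rely on the pair-based STDP rule and its sign-flipped "anti-STDP" counterpart. As a rough illustration of that mechanism (the amplitudes, time constant, and weight bounds below are generic textbook assumptions, not values taken from these papers), the weight update can be sketched as:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, anti=False):
    """Pair-based STDP sketch (illustrative parameters, not from the papers).

    dt = t_post - t_pre in ms: a presynaptic spike preceding the
    postsynaptic spike (dt > 0) potentiates the synapse; the reverse
    order (dt <= 0) depresses it. anti=True flips the sign of the
    update, giving the 'anti-STDP' rule discussed above.
    """
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)   # pre before post: potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)  # post before pre: depression
    if anti:
        dw = -dw
    # Keep the weight in a bounded range, as is common in simulations.
    return min(max(w + dw, 0.0), 1.0)
```

For example, with a synapse at w = 0.5, a causal pairing (dt = +10 ms) nudges the weight up, while the same pairing under anti-STDP nudges it down.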