We present a live demonstration of hardware that can learn visual features online and in real time during the presentation of objects. Input spikes come from a bio-inspired silicon retina, or Dynamic Vision Sensor (DVS), and are processed in a Spiking Convolutional Neural Network (SCNN) equipped with a Spike-Timing-Dependent Plasticity (STDP) learning rule implemented on an FPGA.
We present a highly hardware-friendly STDP (Spike-Timing-Dependent Plasticity) learning rule for training spiking convolutional cores in unsupervised mode and fully connected classifiers in supervised mode. Examples are given for a two-layer spiking neural system that learns features in real time from visual scenes captured with spiking DVS (Dynamic Vision Sensor) cameras.
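To make the idea concrete, here is a minimal, simplified sketch of an STDP-style weight update, assuming an exponential time window and a fixed depression term. This is not the authors' exact hardware rule; the learning rates, time constant, and function names are illustrative assumptions.

```python
import numpy as np

A_PLUS, A_MINUS = 0.05, 0.03   # illustrative potentiation/depression rates
TAU = 10.0                     # ms, illustrative STDP time window

def stdp_update(weights, pre_spike_times, t_post):
    """Simplified STDP: when the postsynaptic neuron fires at t_post,
    synapses whose presynaptic spike arrived shortly before are
    potentiated; all others (including silent ones, marked np.nan)
    receive a fixed depression."""
    dt = t_post - pre_spike_times          # > 0 means pre fired before post
    causal = (dt > 0) & (dt < TAU)         # nan comparisons are False
    dw = np.where(causal, A_PLUS * np.exp(-dt / TAU), -A_MINUS)
    return np.clip(weights + dw, 0.0, 1.0) # keep weights in a bounded range
```

A hardware-friendly variant of such a rule typically replaces the exponential with a step or piecewise-linear kernel so that no multipliers are needed; the sketch above keeps the exponential only for readability.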
In this live demonstration we exploit a serial link for fast asynchronous communication in massively parallel processing platforms connected to a DVS, for real-time implementation of bio-inspired vision processing with spiking neural networks.
We present a new passive, low-power localization method for quadcopter UAVs (unmanned aerial vehicles) using dynamic vision sensors. The method detects the rotation speed of the propellers, which is normally much higher than the speed of other moving objects in the background. Dynamic vision sensors are fast and power-efficient. We present the algorithm along with the results...
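The core idea can be sketched as follows: fast-rotating propellers produce far higher local event rates than slower background motion, so thresholding per-pixel event counts over a short time window highlights the quadcopter. The function name, resolution, window length, and threshold below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def locate_propellers(events, shape=(128, 128), rate_thresh=20):
    """events: iterable of (t_us, x, y, polarity) tuples collected over
    one short time window. Returns the (row, col) centroid of pixels with
    propeller-like activity, or None if no pixel exceeds the threshold."""
    counts = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        counts[y, x] += 1                 # accumulate events per pixel
    mask = counts >= rate_thresh          # pixels with high event rates
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(ys.mean()), int(xs.mean())  # centroid of active region
```

In practice one would also need to reject other high-rate sources (e.g. flicker), for instance by checking that the activity is periodic at plausible propeller frequencies; the sketch omits this.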
Fig. 1(a) shows the demo setup. Two DVS boards send events out through parallel buses to a merger board, which merges the event flows into a single AER bus and sends it to a custom-made convolutional board, where a 2D grid array of convolution modules is implemented within a Spartan-6 FPGA, as represented in Fig. 1(b) and (c). A USBAERmini2 board is used to timestamp the events coming out...
We present an overview of a new vision paradigm in which sensors and processors use visual information that is not represented as sequences of frames. Event-driven vision is inherently frame-free, as in biological systems. We use an event-driven sensor chip (called a Dynamic Vision Sensor, or DVS) together with event-driven convolution module arrays implemented on high-end FPGAs. Experimental results...
The recently developed Dynamic Vision Sensors (DVS) sense dynamic visual information asynchronously and encode it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores...
This paper presents a new DVS sensor with an order-of-magnitude improvement in contrast sensitivity over previously reported DVSs. This sensor has been applied to a bio-inspired event-based binocular system that performs 3D event-driven reconstruction of a scene. Events from two DVS sensors are matched using the precise timing information of their occurrence. To improve matching reliability, satisfaction of...
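The timing-based matching step can be illustrated with a simplified sketch, assumed here for exposition rather than taken from the paper: events from the left and right sensors are paired as stereo candidates when their timestamps fall within a small coincidence window and they agree in polarity and epipolar row. The window size and tuple layout are assumptions.

```python
def match_events(left, right, window_us=100):
    """left/right: lists of (t_us, x, y, polarity) sorted by t_us.
    Returns a list of (left_event, right_event) candidate matches whose
    timestamps differ by at most window_us and which share polarity
    and row (assuming rectified, row-aligned sensors)."""
    matches, j = [], 0
    for le in left:
        # advance past right events too old to ever match again
        while j < len(right) and right[j][0] < le[0] - window_us:
            j += 1
        k = j
        while k < len(right) and right[k][0] <= le[0] + window_us:
            re = right[k]
            if re[3] == le[3] and re[2] == le[2]:  # same polarity & row
                matches.append((le, re))
            k += 1
    return matches
```

Ambiguities (one left event matching several right events) remain after this step; the abstract's mention of additional satisfaction constraints points to exactly this kind of disambiguation.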