Optogenetics can be used to restore light responses in patients affected by retinal degenerative diseases. However, the light sensitivity of the molecule introduced by gene therapy is very limited in terms of the wavelengths and irradiance needed to activate a useful neural response, so an external device is required to stimulate it correctly. Moreover, the visual signal needs to be encoded so as to...
Optogenetic therapy holds the promise of restoring visual function in patients affected by retinal degenerative diseases. However, the light sensitivity of the molecule mediating light responses is much lower than that of healthy retinal cells, so no photo-stimulation is expected under natural environmental conditions. In this work, we present a platform set up to stimulate optogenetically-engineered...
This live demonstration shows ultra-low bandwidth video streaming based on a scene-driven event-encoding imaging sensor. The approach exploits the inherent focal-plane redundancy suppression / video compression achieved by an array of autonomous, auto-sampling pixels. The data readout from the camera is optimized for transmission bandwidth using variable bit-length pixel address encoding and spatio-temporal...
Conventional image sensors acquire the visual information time-quantized at a predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame was acquired. If future artificial vision systems are to succeed in demanding applications such as autonomous robot navigation, high-speed motor control and visual...
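The contrast drawn above between frame-based readout and change-driven acquisition can be sketched in a few lines. This is a hedged, illustrative model only: each pixel emits an address event (x, y, timestamp, polarity) when its log-intensity changes by more than a contrast threshold, with the threshold value and function names being assumptions rather than any particular sensor's specification:

```python
# Toy model of event-driven acquisition: instead of transmitting every
# pixel of every frame, emit an event only where log-intensity changes
# by more than a threshold. Threshold and interface are illustrative.
import math

def frames_to_events(frames, threshold=0.15):
    """Compare successive frames pixel-wise; emit (x, y, t, polarity) events."""
    events = []
    # Per-pixel reference log-intensity, initialized from the first frame
    ref = [[math.log(p + 1e-6) for p in row] for row in frames[0]]
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, p in enumerate(row):
                logp = math.log(p + 1e-6)
                delta = logp - ref[y][x]
                if abs(delta) >= threshold:
                    polarity = 1 if delta > 0 else -1
                    events.append((x, y, t, polarity))
                    ref[y][x] = logp   # update this pixel's reference
    return events
```

A static scene produces no events at all, which is the redundancy suppression the abstracts refer to; only pixels that actually change cost bandwidth.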
Imaging systems comprise the processes involved in forming a (usually 2-D) image of an object or scene in the real world and, in the majority of cases, provide a direct point-to-point correspondence between points on objects in the 3-D world and points on the image plane. Practically all smart cameras and machine vision systems include a sensor that converts electromagnetic radiation into electrical...
Representing a new paradigm for the processing of sensor signals, one of the greatest success stories of neuromorphic systems to date has been the emulation of sensory signal acquisition and transduction, most notably in vision. This chapter reviews some of the recent developments in bioinspired artificial vision. The abstraction of two major types of retinal ganglion cells and corresponding retina...
This paper presents a real-time spiking neural network adaptation of the HMAX object recognition model on an event-driven platform. Visual input is provided by a spiking silicon retina, while the SpiNNaker system is used as a computational hardware platform for implementation. We show the implementation of a simple Leaky Integrate-and-Fire (LIF) neuron model on SpiNNaker to create an event driven...
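The Leaky Integrate-and-Fire (LIF) neuron named in the abstract above is a standard model, and its dynamics can be sketched compactly. The parameter values below (membrane time constant, threshold, reset) are illustrative assumptions, not the ones used in the SpiNNaker implementation described in the paper:

```python
# Minimal LIF neuron sketch: the membrane potential leaks toward rest,
# integrates input current, and emits a spike on threshold crossing.
# All parameter values are illustrative defaults.

def lif_run(input_current, dt=1.0, tau=20.0, v_rest=0.0,
            v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the input drive
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # reset after the spike
    return spikes
```

With a constant suprathreshold input the neuron fires regularly, and with zero input it stays silent; event-driven platforms such as SpiNNaker exploit this sparseness by only communicating the spikes.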
This live demonstration shows real-time visual object recognition based on a spiking neural network adaptation of the HMAX model running on a purely event-based computational hardware platform. Visual input to the system is provided by an ATIS spiking silicon retina sensor. A SpiNNaker board processes the event-encoded visual information from the scene. Using a Leaky Integrate-and-Fire (LIF) neuron...
This live demonstration shows a biology-inspired highly efficient approach to high-speed video data acquisition based on the ATIS image sensor technology. Gray-level image information is acquired pixel-individually and event-driven at a temporal resolution equivalent to thousands of frames-per-second while the data rate is kept at a fraction of the rate encountered with conventional high-speed cameras...
This paper presents a frame-free time-domain imaging approach designed to alleviate the non-ideality of finite exposure measurement time (intrinsic to all integrating imagers), limiting the temporal resolution of the ATIS asynchronous time-based image sensor concept. The method uses the time-domain correlated double sampling (TCDS) and change detection circuitry already present in the data-driven...
This paper presents a hardware implementation for high-speed, event-based data processing. A full-custom Address-Event (AER) processing system (GAEP) features a 10 ns-resolution 33M/5.125M events·s⁻¹ peak/sustained event rate sensor data interface for precision time-stamping of asynchronous sensor data and implements hardware-accelerated event pre-processing including rate-dependent IRQ generation...
A QVGA array of autonomous, event-based temporal contrast sensitive pixels is at the basis of an asynchronous, time-based CMOS dynamic vision and image sensor (ATIS). In this sensor, exposure measurements are initiated and carried out locally by individual pixels upon detection of temporal contrast in their field-of-view. The change detection pixel circuits respond with low latency to temporal contrast...