We present an array of Mihalas-Niebur neurons with dynamically reconfigurable synapses, implemented in 0.5 μm CMOS technology and optimized for low power, low mismatch, and high density. The neural array has two modes of operation: in the first, each cell in the array operates as an independent leaky integrate-and-fire neuron; in the second, two cells work together to model the Mihalas-Niebur neuron dynamics...
Visual saliency models are difficult to implement in hardware for real time applications due to their computational complexity. The conventional digital implementation is not optimal because of the requirement of a large number of convolution operations for filtering on several feature channels across multiple image pyramids [1], [2]. Here, we propose an alternative approach to implement a neuromorphic...
Neuromorphic systems mimic the functionality and communication protocol of biological neurons with efforts to design the most size and power efficient computing systems. We demonstrate the capability of an in-house neural array of 4080 Mihalas-Niebur neurons designed in 0.5 μm CMOS technology to perform various event-based image processing tasks, including warping and filtering.
Autonomous aerial vehicles performing visual tasks must operate under low-power, small-size, and light-weight constraints. We demonstrate an event-based dewarping task using a neuromorphic camera and processor, efficiently performing this visual task in real time on a mobile quadcopter.
We demonstrate dynamic visual saliency computation within a virtual environment using the Oculus Rift with a custom eye tracker. The visual display is representative of the real-time view of a mobile robot with two mounted first-person view cameras for stereoscopic vision.
There is increasing interest for aerial vehicles to perform image processing tasks (e.g. object recognition and detection) in real-time. Such systems should have minimal data throughput, low computational complexity, and low power consumption. Traditional frame-based digital cameras are not ideal for meeting such specifications. More recent cameras, inspired by biology, drastically reduce data throughput...
In this paper we present a highly scalable, dynamically reconfigurable, energy efficient silicon neuron model for large scale neural networks. This model is a simplification of the generalized linear integrate-and-fire neuron model. The presented model is capable of reproducing 9 of the 20 prominent biologically relevant neuron behaviors. The circuits are designed for a 0.5 μm process and occupy an...
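The neuron model described in the abstract above can be sketched in simulation. The following is a minimal Euler integration of a Mihalas-Niebur-style neuron, i.e. a leaky membrane paired with an adaptive spike threshold; all parameter values here are illustrative placeholders, not those of the fabricated 0.5 μm circuit.

```python
import numpy as np

def simulate_mn(I, dt=1e-4, G=50e-9, C=1e-9, E_L=-0.07,
                a=0.0, b=10.0, theta_inf=-0.05,
                V_r=-0.07, theta_r=-0.06):
    """Euler integration of a simplified Mihalas-Niebur neuron:
    membrane potential V plus an adaptive threshold theta.
    With a = 0 the threshold is static and the model reduces to a
    leaky integrate-and-fire neuron; a > 0 couples the threshold to
    the membrane potential, producing adaptive firing behaviors.
    Returns the list of spike times (seconds)."""
    V, theta = E_L, theta_inf
    spikes = []
    for k, i_in in enumerate(I):
        dV = (i_in - G * (V - E_L)) / C          # leaky integration
        dtheta = a * (V - E_L) - b * (theta - theta_inf)
        V += dt * dV
        theta += dt * dtheta
        if V >= theta:                            # threshold crossing
            spikes.append(k * dt)
            V = V_r                               # reset membrane
            theta = max(theta_r, theta)           # reset threshold
    return spikes
```

Driving the model with a constant suprathreshold current yields tonic spiking; varying `a` and `b` selects among the qualitatively different firing behaviors the model family can reproduce.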
Visual saliency is an important aspect of the human visual system, as it allows us to process an overwhelming amount of visual information in real-time. Rather than processing the entire visual field in parallel, we focus our attention only on interesting regions for higher levels of processing. There are many approaches to modeling visual saliency given visual information; however, few...
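A common feature-based building block in the saliency models these abstracts describe is center-surround contrast. The sketch below illustrates the idea on a single intensity channel, using a box filter as a stand-in for the Gaussian pyramid levels of a full model; the kernel sizes are arbitrary choices for illustration.

```python
import numpy as np

def box_blur(img, k):
    """Average over a (2k+1) x (2k+1) window, with edge padding
    (a simple stand-in for Gaussian smoothing at one pyramid scale)."""
    pad = np.pad(img, k, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy : k + dy + img.shape[0],
                       k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def center_surround_saliency(img, center_k=1, surround_k=8):
    """Center-surround contrast on an intensity channel: the absolute
    difference between a fine-scale and a coarse-scale smoothing,
    normalized to [0, 1]."""
    center = box_blur(img, center_k)
    surround = box_blur(img, surround_k)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-12)
```

An isolated bright spot on a dark background produces a strong response at the spot and little elsewhere, which is the pop-out behavior a saliency map is meant to capture.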
Image dewarping is vital for systems performing image processing tasks in real-time. We introduce a real-time emulation of a low-power, event-based system for image dewarping implemented on an FPGA. The system utilizes stochastic computation in conjunction with a neuromorphic system called the Integrate-and-Fire Array Transceiver (IFAT) to reduce hardware area and computational complexity...
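Event-based dewarping of the kind described above can be understood as routing each address-event through a precomputed lookup table. The sketch below uses a single-coefficient radial distortion model with hypothetical camera parameters; it illustrates the table-driven routing idea, not the FPGA/IFAT implementation itself.

```python
import numpy as np

def build_dewarp_lut(width, height, fx, fy, cx, cy, k1):
    """Precompute a table mapping each distorted pixel address to an
    undistorted address under a one-coefficient radial model.
    fx, fy, cx, cy, k1 are placeholder intrinsics; a real system
    would obtain them from camera calibration."""
    lut = np.zeros((height, width, 2), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            xn, yn = (x - cx) / fx, (y - cy) / fy   # normalized coords
            r2 = xn * xn + yn * yn
            xu, yu = xn * (1 + k1 * r2), yn * (1 + k1 * r2)
            lut[y, x] = (int(round(xu * fx + cx)),
                         int(round(yu * fy + cy)))
    return lut

def dewarp_events(events, lut, width, height):
    """Route each (x, y, t) address-event through the lookup table,
    dropping events that map outside the output frame."""
    out = []
    for x, y, t in events:
        nx, ny = lut[y, x]
        if 0 <= nx < width and 0 <= ny < height:
            out.append((int(nx), int(ny), t))
    return out
```

Because the mapping is fixed per pixel address, it can be stored once and applied per event at constant cost, which is what makes the approach attractive for low-power, real-time hardware.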
When computing visual saliency on natural scenes, many current models do not consider temporal information that may exist within the visual stimuli. Most models are designed for predicting salient regions of static images only. However, the world is dynamic and constantly changing. Furthermore, motion is a naturally occurring phenomenon that plays an essential role in both human and computer visual...
Deep Neural Networks (DNNs) have proven very effective for classification and generative tasks, and are widely adopted in a variety of fields including vision, robotics, speech processing, and more. Specifically, Deep Belief Networks (DBNs), which are graphical models constructed of multiple layers of nodes connected as Markov random fields, have been successfully applied to such tasks. However,...
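The building block of the DBNs mentioned above is the restricted Boltzmann machine, whose inference step is block Gibbs sampling between a visible and a hidden layer. The following is a minimal sketch of one such sampling step for binary units; weight shapes and biases here are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_h, b_v):
    """One block-Gibbs step in a binary restricted Boltzmann machine
    (the layer-pair building block of a DBN): sample hidden units
    given the visible layer, then resample the visible layer."""
    p_h = sigmoid(v @ W + b_h)                       # P(h=1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)  # sample hiddens
    p_v = sigmoid(h @ W.T + b_v)                     # P(v=1 | h)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h
```

Stacking several such layer pairs, each trained greedily on the samples of the layer below, is what yields the deep generative structure of a DBN.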
The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention for static scenes. However, to model the human visual system, it is important to also consider temporal change that may exist within the scene when computing visual saliency...