This paper explores the long short-term memory (LSTM) recurrent neural network for human action recognition from micro-Doppler signatures. The recurrent neural network model is evaluated using the Johns Hopkins MultiModal Action (JHUMMA) dataset. In testing we use only the active acoustic micro-Doppler signatures. We compare classification performed using hidden Markov model (HMM) systems trained...
We present our work on a neuromorphic self-driving robot that employs retinomorphic visual sensing and spike-based processing. The robot senses the world through a spike-based visual system - the Asynchronous Time-based Image Sensor (ATIS) - and processes the sensory data stream using IBM's TrueNorth Neurosynaptic System. A convolutional neural network (CNN) running on the TrueNorth determines the...
Audio-visual beamforming involves both an acoustic sensor and an omni-camera to form a composite 3D audio-visual representation of the environment. Information from the respective modalities is combined in the process of acoustic localization taking into account high level cognitive features of the signals, namely the presence of specific sounds - speech and tones - which have characteristic signatures...
The IBM TrueNorth (TN) Neurosynaptic System is a chip multiprocessor [1] with a tightly coupled processor/memory architecture that results in energy-efficient neurocomputing, and it is a significant milestone in over 30 years of neuromorphic engineering. It comprises 4096 cores, each with 65K of local memory (6T SRAM) - synapses - and 256 arithmetic logic units - neurons - that operate on a unary...
In this paper we discuss a brain-inspired system architecture for real-time processing of big-velocity big data originating in the large-format tiled imaging arrays used in wide-area motion imagery for ubiquitous surveillance. High performance and high throughput are achieved through approximate computing and fixed-point arithmetic in a variable-precision (6-bit to 18-bit) architecture. The architecture...
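The variable-precision fixed-point arithmetic mentioned in the abstract above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, word lengths, and saturating-rounding scheme below are illustrative assumptions showing how a real value is quantized to a signed fixed-point code of a chosen bit width:

```python
import numpy as np

def to_fixed_point(x, total_bits=12, frac_bits=8):
    """Quantize real values to signed fixed-point with the given word
    length, saturating at the representable range (illustrative sketch).
    Precision is variable: e.g. total_bits=6 for coarse, 18 for fine."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))        # most negative integer code
    hi = 2 ** (total_bits - 1) - 1       # most positive integer code
    codes = np.clip(np.round(np.asarray(x) * scale), lo, hi)
    return codes / scale                 # real-valued approximation

# 6-bit words with 3 fractional bits: resolution 1/8, range [-4, 3.875]
approx = to_fixed_point([0.1, -1.5, 3.14159], total_bits=6, frac_bits=3)
```

Widening `total_bits` toward 18 shrinks the quantization error at the cost of storage and arithmetic energy, which is the trade-off approximate computing exploits.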
We present a bio-inspired hardware/software architecture to perform Markov Chain Monte Carlo sampling on probabilistic graphical models using energy-aware hardware. We have developed algorithms and programming data flows for two recently developed multiprocessor architectures, the SpiNNaker and Parallella. We employ a neurally inspired sampling algorithm that abstracts the functionality of neurons...
We present a combined hardware/software architecture to perform Markov Chain Monte Carlo sampling on probabilistic graphical models in a brain-inspired, energy-aware manner. By combining a massively parallel neuromorphic hardware architecture (SpiNNaker) with algorithms we have developed for SpiNNaker's event-based framework, we achieve large speedups when performing inference as compared...
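The two abstracts above both concern MCMC sampling on probabilistic graphical models. As a self-contained illustration of the general technique (not the neurally inspired algorithm or SpiNNaker data flows described in the papers), here is a Gibbs sampler on a hypothetical two-node binary Markov random field with pairwise potential exp(J·[x1 = x2]):

```python
import math
import random

def gibbs_agreement(J=1.0, n_samples=20000, burn_in=1000, seed=0):
    """Gibbs sampling on a two-node binary MRF with pairwise potential
    exp(J * [x1 == x2]); returns the empirical P(x1 == x2)."""
    rng = random.Random(seed)
    x = [0, 0]
    agree = 0
    for t in range(burn_in + n_samples):
        for i in (0, 1):
            # conditional probability that x_i matches its neighbour
            p_match = math.exp(J) / (math.exp(J) + 1.0)
            x[i] = x[1 - i] if rng.random() < p_match else 1 - x[1 - i]
        if t >= burn_in:
            agree += (x[0] == x[1])
    return agree / n_samples

empirical = gibbs_agreement()
exact = math.exp(1.0) / (math.exp(1.0) + 1.0)  # closed form for this tiny model
```

Each conditional update here is a local computation depending only on a node's neighbours, which is what makes this family of algorithms a natural fit for event-driven, massively parallel hardware such as SpiNNaker.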
Organisms use the process of selective attention to optimally allocate their computational resources to interesting subsets of a visual scene - ensuring that they can parse the scene in real time. Many models of attention assume that basic image features (e.g. intensity, color, and orientation) behave as attractors for attention. Gestalt psychologists, however, argue that humans perceive the whole before...
In this paper we provide an overview of audiovisual saliency map models. In the simplest model, the location of the auditory source is modeled as a Gaussian, and different methods are used to combine the auditory and visual information. We then provide experimental results with applications of simple audio-visual integration models for cognitive scene analysis. We validate the simple audio-visual saliency...
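The "simplest model" described in the abstract above - an auditory source modeled as a Gaussian, combined with a visual saliency map - can be sketched as follows. This is an assumption-laden illustration, not the paper's model; the weighted-sum combination rule, the isotropic Gaussian, and all parameter names are hypothetical:

```python
import numpy as np

def audiovisual_saliency(visual, source_xy, sigma=10.0, w_audio=0.5):
    """Combine a visual saliency map with an auditory saliency map modeled
    as an isotropic Gaussian centred on the estimated source location."""
    h, w = visual.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = source_xy
    audio = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    combined = (1 - w_audio) * visual + w_audio * audio  # weighted sum
    return combined / combined.max()                     # normalize to [0, 1]

vis = np.random.default_rng(0).random((64, 64))          # stand-in visual map
sal = audiovisual_saliency(vis, source_xy=(40, 20))
```

Raising `w_audio` biases attention toward the localized sound source; other combination rules (e.g. pointwise product or max) are equally simple to substitute.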