Deep convolutional neural networks (CNNs) have shown great accuracy on object recognition and classification tasks. Deep CNNs are computation-intensive algorithms, so many customized RRAM crossbar-based accelerators have been proposed to meet their computing demands, but area cost and power consumption remain major challenges for RRAM crossbar-based accelerators. In this work, we...
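The core operation an RRAM crossbar accelerates is an analog matrix-vector multiply: input voltages drive the rows, each cell's conductance acts as a weight, and Kirchhoff's current law sums the products on each column. A minimal NumPy sketch of that ideal model (all values here are illustrative assumptions, not taken from the abstract above):

```python
import numpy as np

rng = np.random.default_rng(0)

# G[i, j]: conductance (siemens) of the RRAM cell at row i, column j.
# Each column of G stores one weight vector of the layer being accelerated.
G = rng.uniform(1e-6, 1e-4, (4, 3))

# V[i]: read voltage applied to row i (volts), encoding one input activation.
V = np.array([0.2, 0.1, 0.05, 0.15])

# Ideal crossbar output: per-column currents I_j = sum_i V_i * G[i, j],
# i.e. a full matrix-vector multiply in one analog step.
I = V @ G
```

Real designs add ADCs, cell nonidealities, and weight-splitting for signed values, which is where the area and power costs discussed above come from.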
Deep convolutional neural networks (DCNNs) have been successfully used in many computer vision tasks. Prior work on DCNN acceleration usually applies a fixed computation pattern across diverse DCNN models, leading to an imbalance between power efficiency and performance. We solve this problem by designing a DCNN acceleration architecture called deep neural architecture (DNA), with reconfigurable computation...
An energy-efficient hybrid neural network (NN) processor is implemented in a 65 nm technology. It has two 16×16 reconfigurable heterogeneous processing element (PE) arrays. To accelerate a hybrid NN, the PE array is designed to support on-demand partitioning and reconfiguration for processing different NNs in parallel. To improve energy efficiency, each PE supports bit-width-adaptive computing to meet...
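The idea behind bit-width-adaptive computing is that different layers or networks tolerate different precisions, so the datapath can shrink its operand width to save energy when accuracy permits. A hedged NumPy sketch of the underlying precision/error trade-off, using plain uniform symmetric quantization (illustrative only; the chip's actual datapath is not described in the abstract):

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to a signed `bits`-wide integer grid."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable magnitudes for 8 bits
    scale = np.max(np.abs(x)) / levels    # map the largest value to the top level
    q = np.round(x / scale)               # integer codes the narrow datapath would use
    return q * scale                      # dequantized values, for error measurement

rng = np.random.default_rng(0)
w = rng.normal(0, 1, 1000)                # stand-in for one layer's weights

# Narrower bit widths cut energy but grow the quantization error.
errors = {b: float(np.abs(quantize(w, b) - w).mean()) for b in (4, 8, 12)}
```

A bit-width-adaptive PE effectively picks the smallest `bits` per layer whose error is still acceptable for the target accuracy.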
Artificial neural networks (ANNs) are widely used in machine learning and artificial intelligence. However, ANNs require long running times and incur high power consumption when running on a GPU or CPU, which may hinder their application in embedded systems. This paper proposes a hardware accelerator design guideline for ANNs of arbitrary scale and depth. We take full consideration of the hardware...
Approximate computing has emerged as a promising technique for high energy efficiency. Multi-layer perceptron (MLP) models can be used to approximate many modern applications with little quality loss. However, the variety of MLP topologies limits the hardware's performance across all cases. In this paper, a scheduling framework is proposed to guide the mapping of MLPs onto limited hardware resources with high performance...
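The premise shared by the two MLP-accelerator abstracts here is that a small MLP can stand in for an exact application kernel at much lower cost. A minimal sketch of that idea, training a tiny 1-16-1 tanh MLP with plain full-batch gradient descent to approximate a smooth target function (all sizes, rates, and the target function are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Stand-in for an exact application kernel the MLP will approximate.
    return np.sin(np.pi * x)

# Tiny MLP: 1 input -> 16 tanh hidden units -> 1 output.
W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1.0, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

X = rng.uniform(-1, 1, (256, 1))
Y = target(X)
mse0 = float(((forward(X)[0] - Y) ** 2).mean())  # error before training

lr = 0.05
for _ in range(2000):
    pred, h = forward(X)
    err = pred - Y                        # (256, 1) residuals
    gW2 = h.T @ err / len(X)              # backprop: output layer gradients
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[0] - Y) ** 2).mean())   # error after training
```

The scheduling problem these papers address then becomes: given many such MLPs with different layer shapes, map their matrix multiplies onto a fixed set of hardware resources efficiently.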
As energy has become a major concern in digital system design, one promising solution is combining the core processor with a multi-purpose accelerator targeting high-performance applications. Many modern applications can be approximated by multi-layer perceptron (MLP) models with little quality loss. However, many current MLP accelerators have several drawbacks, such as the imbalance of...