We design a bandwidth regulation module by adapting and extending the algorithm of the MemGuard Linux kernel module for hardware implementation. Our extensions differentiate among NoC sources with rate-constrained and best-effort traffic provisions, support a violation-free guaranteed operating mode for rate-constrained flows, and add dynamic adaptivity through EWMA prediction. Our strategies enhance...
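The EWMA prediction mentioned above can be illustrated with a minimal sketch. The smoothing factor `alpha` and the bandwidth-usage samples below are illustrative assumptions, not values from the paper; the idea is simply that each regulation interval's budget can be adapted from an exponentially weighted average of past usage.

```python
def ewma_predict(samples, alpha=0.5):
    """Return an EWMA estimate of the next interval's bandwidth usage.

    samples: observed per-interval usage values (illustrative units).
    alpha:   smoothing factor; higher values weight recent samples more.
    """
    estimate = samples[0]
    for s in samples[1:]:
        # Standard EWMA update: blend the new sample with the old estimate.
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate
```

A steady workload yields a steady prediction (`ewma_predict([100, 100, 100])` is `100`), while a sudden burst is only partially absorbed (`ewma_predict([0, 100])` is `50.0` with `alpha=0.5`), which is what makes EWMA suitable for smooth, dynamic budget adaptation.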
The explosion of network bandwidth poses great challenges to data-plane flow processing. Due to its variable and poor worst-case performance, a naive hash table is incapable of wire-speed processing. State-of-the-art schemes rely on multiple hash functions for enhanced load balancing to improve the worst-case performance. These schemes exploit the memory hierarchy and allocate compact on-chip data structures...
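The multiple-hash-function load balancing that these schemes rely on can be sketched in software as "power of d choices" hashing: each key hashes to d candidate buckets and is placed in the least-loaded one, which sharpens the worst-case bucket occupancy compared with a single hash function. The hash construction and table sizes below are illustrative, not the on-chip structures from the paper.

```python
import hashlib

def candidates(key, d, nbuckets):
    # Derive d independent bucket choices by salting the key with the
    # hash-function index (illustrative construction).
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % nbuckets
            for i in range(d)]

def insert(table, key, d=2):
    # Place the key in the least-loaded of its d candidate buckets.
    bs = candidates(key, d, len(table))
    target = min(bs, key=lambda b: len(table[b]))
    table[target].append(key)

table = [[] for _ in range(8)]
for k in range(32):
    insert(table, k)
max_load = max(len(b) for b in table)
```

With a single hash function the fullest bucket grows roughly logarithmically with the key count; with two choices it stays close to the average load, which is why the worst-case lookup cost improves.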
Oblivious RAM can hide a client's access pattern from an untrusted storage server. However, current ORAM schemes incur a large communication overhead and/or client storage overhead, especially as the server storage size grows. We have proposed a matrix-based ORAM, M-ORAM, that makes the communication overhead independent of the server size. This requires selecting a height of the matrix; we present...
In today's information-technology era, cloud computing has emerged as a promising and rapidly developing technology. In a cloud computing environment, resources are provisioned on the basis of demand, as and when required. A large number of cloud users can request many cloud services at the same time, so there must be an efficient way in which all the resources are...
The recent advent of stacked memory devices has led to a resurgence of research associated with the fundamental memory hierarchy and associated memory pipeline. The bandwidth advantages provided by stacked logic and DRAM devices have inspired research associated with eliminating the bandwidth bottlenecks associated with many applications in high performance computing. Further, recent efforts have focused...
To meet the requirements of the next generation of high-performance networking switches and routers, system integration based on Three-dimensional (3D) System-in-Package (SiP) technology is being studied and developed. In this paper, we report the development of a 3D SiP using organic interposer technology. A 3D SiP is designed and manufactured with a large-size organic interposer with fine-pitch...
Resistive RAM (RRAM) technology is emerging as one of the possible candidates for replacing state-of-the-art NAND Flash in Solid State Drive (SSD) applications. However, the RRAM architectures developed so far show a granularity mismatch between their page size and the typical host application payloads, forcing the use of multi-plane approaches to mimic NAND Flash and thus affecting the figures...
Emerging 3D stacked memory systems provide significantly more bandwidth than current DDR modules. However, general-purpose processors do not take full advantage of the resources offered by these memory modules. Taking advantage of the increased bandwidth requires the use of specialized processing units. In this paper, we evaluate the benefits of placing hardware accelerators at the bottom layer of...
We propose an approach called buffered compares, a less-invasive processing-in-memory solution that can be used with existing processor memory interfaces such as DDR3/4 with minimal changes. The approach is based on the observation that multi-bank architecture, a key feature of modern main memory DRAM devices, can be used to provide huge internal bandwidth without any major modification. We place...
One of the main challenges for embedded systems is the transfer of data between memory and processor. In this context, Hybrid Memory Cubes (HMCs) can provide substantial energy and bandwidth improvements compared to traditional memory organizations, while also allowing the execution of simple atomic instructions in the memory. However, the complex memory hierarchy still remains a bottleneck, especially...
The Hybrid Memory Cube (HMC) is a promising solution to overcome the memory wall by stacking DRAM chips on top of a logic die and connecting them with dense and fast Through Silicon Vias (TSVs). However, the 3D stacking technique brings another problem: high temperatures and temperature variations between the DRAM dies. The thermal problem may lead to chip failure of 3D stacked DRAMs since the temperature...
Three-dimensional DRAM stacking has emerged as a vehicle for scaling system densities and performance improvement. The two design choices for interfacing to processors are: i) a separate core die connected to the DRAM stack via a silicon interposer (2.5D), and ii) DRAM die stacked on top of the core die (3D). These alternatives have different performance, power, and reliability behaviors. Specifically,...
The Northrop Grumman Hemispherical Resonating Gyro and Scalable Space Inertial Reference Unit (SSIRU) system is inherently high-bandwidth. Measurements obtained at a nominal 2 kHz sample rate can provide usable bandwidth up to 600 Hz after demodulation. The standard Scalable SIRU is limited to a 100 Hz output rate by CPU duty cycle constraints, which limits bandwidth to 46 Hz. Modification to the existing...
Internet of Things (IoT) devices are becoming increasingly popular in every aspect of life. From health-care monitors and activity/sleep trackers to industrial/home automation, IoT devices and systems-on-chip (SoCs) have huge research potential. This paper presents a new memory interface intellectual property (IP) developed for interfacing an IoT SoC with storage class memory or non-volatile memory (NVM). So...
This paper deals with our Ultra Wide Band (UWB) receiver, which is designed for precise indoor localization of firefighters and members of rescue teams. The purpose of this indoor positioning system is to search for UWB transmitters in a given area and to determine their position. This article describes not only the hardware design of the UWB receiver but also the FPGA firmware development. The...
Internet and mobile applications have been the driving force for semiconductor innovation over the past 10 years. In this paper, we will focus on the system design challenges for today's and tomorrow's consumer gadgets, from productivity laptop computers to wearable glasses. We will start with everyone's favorite apps, such as finding the fastest route to a baseball game with Google Maps, taking family...
In cloud computing, resource allocation should be elastic, in the sense that it can be modified accurately and quickly based on demand. Virtual machines allocate resources for users' needs. Sometimes the workload of a service increases rapidly; existing approaches solve aggressive resource provisioning tasks using SPRNT, but some challenges still occur in VM allocation...
We present a new hash function, Argon2, which is oriented toward protecting low-entropy secrets without secret keys. It requires a certain (but tunable) amount of memory, imposes prohibitive time-memory and computation-memory trade-offs on memory-saving users, and is exceptionally fast on a regular PC. Overall, it can provide ASIC- and botnet-resistance by filling the memory at 0.6 cycles per byte in the...
Three-dimensional stacked memory is considered to be one of the innovative elements for the next-generation computing system, for it provides high bandwidth and energy efficiency. In particular, the packet routing ability of Hybrid Memory Cubes (HMCs) enables new interconnects for the memories, giving flexibility to its topological design space. Since memory-processor communication is latency-sensitive,...
Prefetching significantly reduces the memory latencies of a wide range of applications and thus increases system performance. However, as a speculative technique, prefetching may also noticeably increase the number of memory accesses, which in turn may negatively impact main memory bandwidth consumption, performance, and power. Main memory bandwidth is a critical resource, especially...
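The tension this abstract describes, latency savings versus extra memory traffic, can be seen in a minimal sketch of a next-line prefetcher on a toy access trace. The trace and the flat cache model below are illustrative assumptions, not the paper's methodology.

```python
def simulate(trace, prefetch=True):
    """Count demand misses and speculative fetches for a toy cache.

    Returns (demand_misses, prefetched_lines): demand misses stall the
    processor; prefetched lines are extra memory-bandwidth consumption.
    """
    cache, demand, prefetched = set(), 0, 0
    for addr in trace:
        if addr not in cache:
            demand += 1            # demand miss: processor waits for memory
            cache.add(addr)
        if prefetch and addr + 1 not in cache:
            prefetched += 1        # speculative fetch of the next line
            cache.add(addr + 1)
    return demand, prefetched
```

On a sequential trace such as `list(range(8))`, disabling prefetch gives 8 demand misses and no extra traffic, while enabling it collapses demand misses to 1 at the cost of 8 speculative fetches: latency drops, but total memory accesses grow, which is exactly the bandwidth pressure the abstract highlights.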