In this work we analyze the complex trade-off between data transfer, computation time, and power consumption when a multi-stage data-intensive algorithm (in this case video stabilization) is split between a low power mobile device and high power cloud server. We evaluate design choices in terms of which intermediate representations should be transferred to the server and back to the mobile device,...
NAND flash is seeing increasing adoption in the data center because of its orders-of-magnitude lower latency and higher bandwidth compared to hard disks. However, flash performance is often degraded by (i) an inefficient storage I/O stack that hides flash characteristics beneath the Flash Translation Layer (FTL), and (ii) long-latency network protocols for distributed storage. In this paper, we propose a minimalistic...
Conventional servers have achieved high performance by employing fast CPUs to run compute-intensive workloads, while making operating systems manage relatively slow I/O devices through memory accesses and interrupts. However, as the emerging workloads are becoming heavily data-intensive and the emerging devices (e.g., NVM storage, high-bandwidth NICs, and GPUs) come to enable low-latency and high-bandwidth...
Virtual machine (VM) technologies have made much progress in improving the efficiency of virtualizing CPU and memory. However, achieving high performance for I/O virtualization remains a challenge, especially for high-speed networking devices such as 10 Gigabit Ethernet (10GbE) NICs; commonly used software-based I/O virtualization approaches usually suffer significant performance degradation compared...
The deployment of 10 Gigabit Ethernet (10 GbE) connections to servers has been hampered by the "fast-network-slow-host" phenomenon. Recently, integrated network interfaces (INICs) have been proposed to tackle this performance mismatch. While prior work using simulation methodologies showed significant advantages over PCI-based discrete NICs (DNICs), it is still unclear how INICs perform...
To bring the benefits of CMT to larger workloads, these systems had to scale beyond a single socket. Because CMT requires massive memory bandwidth to achieve adequate throughput performance, the challenge was to develop a coherency link and fabric that would allow performance to scale along with thread count in a multinode (that is, multisocket) system. In this article, CoHub's coherency scheme, ASIC...