The IEEE Rebooting Computing Initiative, proposed in 2012, has launched a 15-year technology roadmap to address escalating computing-performance pressures: stalled device-physics advances coupled with big data demands, novel machine-learning problems, and complex software paradigms. Potential solutions range from new transistor technology to quantum computing.
Year-over-year exponential computer performance scaling has ended. Complicating matters is the coming disruption of the industry's underlying "technology escalator": Moore's law. Fundamentally rethinking and restarting the performance-scaling trend requires bold new ways to compute and a commitment from all stakeholders.
For more than two decades, the TOP500 list has enjoyed incredible success as a metric for supercomputing performance and as a source of data for identifying technological trends. The project's editors reflect on its usefulness and limitations for guiding large-scale scientific computing into the exascale era.
Irregular applications present unpredictable memory-access patterns, data-dependent control flow, and fine-grained data transfers. Only a holistic view spanning all layers of the hardware and software stack can provide effective solutions to address these challenges.
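To make the challenge concrete, the sketch below shows a canonical irregular kernel: sparse matrix-vector multiplication in compressed sparse row (CSR) form. This is an illustrative example rather than code from the article, and the function and parameter names are our own. The indirect load through col_idx is exactly the kind of unpredictable, data-dependent, fine-grained access the abstract describes.

```c
#include <stdio.h>
#include <stddef.h>

/* Sparse matrix-vector multiply (y = A*x) in CSR form: a canonical
 * irregular kernel. The indirect read x[col_idx[j]] produces
 * data-dependent, fine-grained accesses that caches and hardware
 * prefetchers handle poorly. */
static void spmv_csr(size_t nrows,
                     const size_t *row_ptr,  /* nrows+1 offsets into col_idx/vals */
                     const size_t *col_idx,  /* column index of each nonzero */
                     const double *vals,     /* nonzero values */
                     const double *x,        /* dense input vector */
                     double *y)              /* dense output vector */
{
    for (size_t i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; j++)
            sum += vals[j] * x[col_idx[j]];  /* unpredictable, cache-unfriendly load */
        y[i] = sum;
    }
}

int main(void)
{
    /* Hypothetical 3x3 matrix with 4 nonzeros: [[2 0 0] [0 0 3] [1 0 4]] */
    size_t row_ptr[] = {0, 1, 2, 4};
    size_t col_idx[] = {0, 2, 0, 2};
    double vals[]    = {2.0, 3.0, 1.0, 4.0};
    double x[]       = {1.0, 1.0, 1.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    for (int i = 0; i < 3; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}
```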
The rapid and disruptive changes anticipated in hardware design over this next decade necessitate a more agile development process, such as the hardware-software co-design processes developed for rapid product development in the embedded space. This article will describe the structure of the co-design process as applied to supercomputing systems, introduce the role of architectural simulation and...
The confluence of emerging technologies and new data-centric workloads offers a unique opportunity to rethink traditional system architectures and memory hierarchies in future designs.
The end of dramatic exponential growth in single-processor performance marks the end of the dominance of the single microprocessor in computing. The era of sequential computing must give way to an era in which parallelism is at the forefront. Although important scientific and engineering challenges lie ahead, this is an opportune time for innovation in programming systems and computing architectures.
A new architecture with a six-dimensional mesh/torus topology achieves highly scalable and fault-tolerant interconnection networks for large-scale supercomputers that can exceed 10 petaflops.
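As an illustration of what a six-dimensional mesh/torus topology implies for node addressing, the sketch below computes a node's twelve wrap-around neighbors (one step in each direction along each of six axes). It is a minimal sketch with made-up dimension sizes and coordinates, not the routing logic of the interconnect the article describes.

```c
#include <stdio.h>

#define NDIMS 6  /* six-dimensional mesh/torus */

/* Wrap-around neighbor of coordinate c along one dimension of size `size`,
 * stepping by +1 or -1. The double modulo handles negative wrap in C. */
static int torus_neighbor(int c, int step, int size)
{
    return ((c + step) % size + size) % size;
}

int main(void)
{
    int dims[NDIMS]  = {4, 4, 4, 2, 3, 2};   /* hypothetical per-axis sizes */
    int coord[NDIMS] = {3, 0, 2, 1, 2, 0};   /* one node's 6-D coordinates */

    /* Each node has 12 torus neighbors: +/-1 along each of the 6 axes. */
    for (int d = 0; d < NDIMS; d++) {
        printf("dim %d: -1 -> %d, +1 -> %d\n", d,
               torus_neighbor(coord[d], -1, dims[d]),
               torus_neighbor(coord[d], +1, dims[d]));
    }
    return 0;
}
```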
Several high-performance computers now use field-programmable gate arrays as reconfigurable coprocessors. The authors describe the two major contemporary high-performance reconfigurable computing (HPRC) architectures and explore the pros and cons of each using representative applications from remote sensing, molecular dynamics, bioinformatics, and cryptanalysis.