Cloud computing enables end users to execute high-performance computing applications by renting the required computing power. This pay-for-use approach enables small enterprises and startups to run HPC-related businesses with a significant saving in capital investment and a short time to market. When deploying an application in the cloud, the users may a) fail to understand the interactions of the...
A novel testbed approach for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. The nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without...
Software-defined networking (SDN) is a disruptive networking paradigm that originated on campus networks but was soon recognized as having potential applicability in several other areas, including the Internet of Things (IoT) and the Industrial IoT. Industrial IoT applications have requirements remarkably distinct from those of campus networks, particularly concerning timeliness...
The introduction of International Organization for Standardization (ISO) standard 26262 “Road vehicles — Functional safety” in 2011 provided a state-of-the-art methodology for achieving functional safety in automotive electrical and/or electronic (E/E) systems. The standard defines the probabilistic metric for random hardware failures (PMHF) as the average probability of a violation of a safety goal...
State-of-the-art GPU chips are designed to deliver extreme throughput for graphics as well as for data-parallel general-purpose computing workloads (GPGPU computing). Unlike graphics computing, GPGPU computing requires highly reliable operation. The performance-oriented design of GPUs requires jointly evaluating the vulnerability of GPU workloads to soft errors together with the performance of GPU chips....
Early design space evaluation of computer systems is usually performed using performance models (e.g., detailed simulators, RTL-based models, etc.). However, it is very challenging (often impossible) to run many emerging applications on detailed performance models owing to their complex software-stacks and long run times. To overcome such challenges in benchmarking these complex applications, we propose...
A combined (soft-hard) method for decoding block turbo-product codes is proposed in the paper. The method combines the advantages of using soft input data with the speed of a hard-decoding procedure. The main peculiarity of the method is its rule-based decoding stage. The proposed approach simplifies the calculation procedure and achieves better correction capability than a hard-decision decoder. Mathematical...
To manage and maintain large-scale cellular networks, operators need to know which sectors underperform at any given time. For this purpose, they use the so-called hot spot score, which is the result of a combination of multiple network measurements and reflects the instantaneous overall performance of individual sectors. While operators have a good understanding of the current performance of a network...
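The abstract describes the hot spot score as a combination of multiple network measurements reflecting a sector's instantaneous performance. A minimal sketch of such a combined score is shown below; the KPI names, normalization, and weights are purely hypothetical illustrations, not the operators' actual formula.

```python
def hot_spot_score(kpis, weights):
    """Combine several normalized sector KPIs (0 = healthy, 1 = worst
    observed) into a single weighted score. All names and weights here
    are illustrative assumptions, not an operator's real scoring rule."""
    return sum(weights[k] * kpis[k] for k in weights)

# Hypothetical measurements for one sector and hypothetical weights.
sector = {"drop_rate": 0.2, "congestion": 0.7, "throughput_deficit": 0.4}
w = {"drop_rate": 0.5, "congestion": 0.3, "throughput_deficit": 0.2}
print(hot_spot_score(sector, w))  # ≈ 0.39
```

A weighted sum is only one plausible combination rule; real scores may use nonlinear aggregation or thresholds per KPI.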
In order to provide the cloud computing research community with a full-system-level datacenter server emulator with programmable hardware and software, and to stimulate more innovative research work, this poster and demo presents a scientific research platform, Titian2, designed and implemented at ICT of CAS. Titian2 offers on-line profiling and measurement, and the scalability of connecting with...
With the increasing adoption of embedded systems in critical automotive applications, the verification of hardware design reliability is becoming a strictly regulated process in which the ISO 26262 standard plays a key role. Today, crucial verification activities such as failure analysis and FMEA still rely heavily on reliability engineers' expertise, as automatic methods supporting them are still...
Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed...
Cloud computing provides support for hosting clients' applications. The cloud is a distributed platform that provides hardware, software, and network resources both to execute consumers' applications and to store and manage users' data. The cloud is also used to execute scientific workflow applications, which are generally more complex than other applications. Since the cloud is a distributed...
Fault attacks have become a serious threat to system security and need to be evaluated at the design stage. Existing methods usually ignore the intrinsic uncertainty of the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability to fault attacks. A holistic model of fault injection is incorporated to capture the probabilistic nature...
A desirable feature of a development tool for SoC design is that, given the important applications in the domain to be targeted by the SoC, a powerful hardware-software partitioning engine is available to determine which function(s) shall be mapped to hardware. However, to provide high-quality partitioning, this engine must be able to consider a rich design space of possible alternate hardware and...
Approximate computing has applications in areas such as image processing, neural computation, distributed systems, and real-time systems, where the results may be acceptable in the presence of controlled levels of error. The promise of approximate computing is in its ability to render just enough performance to meet quality constraints. However, going from this theoretical promise to a practical implementation...
A spectrum of solutions is available for distributing content over the Internet today. One of these solutions is the content distribution network (CDN). CDNs need to make decisions, such as server selection and routing, to improve the performance of content distribution. However, performance may be limited by various factors, such as packet loss in the network, a small receive...
Cache hierarchies have long been utilized to minimize the latency of main memory accesses by caching frequently used data closer to the processor. Significant research has been done to identify the most crucial metrics of cache performance. Though the majority of research focuses on measuring cache hit rates and data movement as the major cache performance metrics, cache utilization can be equally...
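The abstract contrasts cache hit rate with cache utilization as performance metrics. A toy model can make the distinction concrete; the sketch below uses a tiny fully associative LRU cache and defines utilization as the fraction of evicted lines that were re-referenced after being filled. Both the model and this particular utilization definition are illustrative assumptions, not the paper's methodology.

```python
from collections import OrderedDict

def simulate(trace, capacity):
    """Tiny fully-associative LRU cache model (illustrative only).
    Returns (hit_rate, utilization), where utilization is the share of
    evicted lines that saw at least one hit between fill and eviction."""
    cache = OrderedDict()          # addr -> number of hits since fill
    hits = evicted = reused = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache[addr] += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                _, h = cache.popitem(last=False)   # evict LRU line
                evicted += 1
                reused += h > 0
            cache[addr] = 0
    hit_rate = hits / len(trace)
    utilization = reused / evicted if evicted else 1.0
    return hit_rate, utilization

hr, util = simulate([1, 2, 1, 3, 4], capacity=2)
print(hr, util)  # 0.2 0.5 — modest hit rate, but half the evicted lines were reused
```

The point of the toy trace is that the two metrics can diverge: a cache can evict many never-reused lines (poor utilization) even while serving some hits.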
Energy-aware resource-management strategies for web-server clusters use threshold-based on/off strategies. The basic idea is to turn off currently unused nodes and to turn them back on when the load increases. The biggest challenge for these strategies is to provide enough compute power in an unexpected peak-load situation. We investigate this challenge using trace-driven simulation for different...
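The threshold-based on/off idea above can be sketched as a small control function. Everything here is a hypothetical illustration: the threshold values, per-node capacity, and the spare-node safety margin are assumed parameters, not the strategies studied in the paper.

```python
import math

def plan_active_nodes(load, active, capacity=100, up=0.8, down=0.3, spare=1):
    """Threshold on/off policy sketch (all parameters are illustrative).
    Turn a node on when cluster utilization exceeds `up`, turn one off
    when it falls below `down`, and always keep `spare` extra nodes to
    absorb unexpected peak load."""
    util = load / (active * capacity)
    needed = math.ceil(load / capacity) + spare   # demand plus safety margin
    if util > up:
        return max(active + 1, needed)            # scale up
    if util < down and active > needed:
        return active - 1                         # turn off one unused node
    return active                                 # keep current configuration

print(plan_active_nodes(250, active=3))  # 4  (utilization 0.83 > 0.8)
print(plan_active_nodes(50, active=3))   # 2  (utilization 0.17 < 0.3)
```

Keeping `spare` nodes powered is one simple hedge against the peak-load problem the abstract highlights; trace-driven simulation would be needed to tune such thresholds.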
Betweenness centrality is a popular metric in social science, and it has recently been adopted in computer science as well. Betweenness identifies the node, or nodes, best suited to perform critical network functions such as firewalling and intrusion detection. However, computing centrality is resource-demanding, and we cannot take for granted that it can be computed in real time at every change...
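To make the cost the abstract mentions concrete: the standard exact method for unweighted graphs is Brandes' algorithm, which runs one BFS plus a dependency back-propagation per source node (O(VE) overall). The sketch below is a minimal implementation for undirected, unweighted graphs; the adjacency-dict representation is an assumption for illustration.

```python
from collections import deque, defaultdict

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs.
    adj: dict mapping node -> list of neighbours."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # Back-propagate dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve pair counts

# Path graph 0-1-2-3: only the inner nodes carry transit shortest paths.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(betweenness(adj))  # {0: 0.0, 1: 2.0, 2: 2.0, 3: 0.0}
```

The per-source BFS is exactly why recomputation "at every change" is expensive on large graphs, which motivates the incremental approaches such abstracts typically pursue.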
The performance of list successive-cancellation decoding (LSCD) of polar codes with large list sizes has exceeded that of turbo codes and low-density parity-check codes. However, a large list size results in high computational complexity, which limits the applicability of LSCD in high-throughput and power-sensitive applications. In this work, a low-complexity design for LSCD with large list size based...