In modern cloud computing, large-scale data is often represented as graphs, and graph processing is attracting increasing attention as cloud computing services are applied to it. With growing interest in processing massive graphs (e.g., social networks, web graphs, transport networks, and bioinformatics data), many state-of-the-art open-source single-node graph computing systems have been...
Memory management significantly affects the overall performance of modern multi-core smartphone systems. Android, one of the most popular smartphone operating systems, adopts a global buddy system with a FCFS (first come, first served) policy to serve memory allocation and release requests, managing external fragmentation and maintaining memory allocation efficiency. However,...
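The buddy-system allocation mentioned in this abstract can be sketched in a few lines. Everything below (the order range, the free-list layout, all names) is illustrative, not Android's or Linux's actual allocator: blocks of 2^k pages are split on allocation and coalesced with their "buddy" on free.

```python
# Minimal buddy-allocator sketch, assuming one free list per order
# (order k holds blocks of 2**k pages). Illustrative names only.

MAX_ORDER = 4  # orders 0..4 -> block sizes 1..16 pages

class BuddyAllocator:
    def __init__(self):
        # one free set per order; start with a single maximal block at address 0
        self.free_lists = {k: set() for k in range(MAX_ORDER + 1)}
        self.free_lists[MAX_ORDER].add(0)

    def alloc(self, order):
        # find the smallest free block that fits, splitting as needed
        for k in range(order, MAX_ORDER + 1):
            if self.free_lists[k]:
                addr = min(self.free_lists[k])  # lowest address first
                self.free_lists[k].remove(addr)
                while k > order:                # split: free the upper half
                    k -= 1
                    self.free_lists[k].add(addr + (1 << k))
                return addr
        raise MemoryError("no free block large enough")

    def free(self, addr, order):
        # coalesce with the buddy while the buddy is also free
        while order < MAX_ORDER:
            buddy = addr ^ (1 << order)
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            addr = min(addr, buddy)
            order += 1
        self.free_lists[order].add(addr)

ba = BuddyAllocator()
a = ba.alloc(1)   # a 2-page block
b = ba.alloc(1)   # its buddy
ba.free(a, 1)
ba.free(b, 1)     # buddies coalesce all the way back to one 16-page block
```

External fragmentation is exactly what the coalescing step in `free` fights: after both 2-page blocks are released, the allocator is back to a single contiguous 16-page block rather than scattered fragments.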
Single-ISA heterogeneous multi-core processors, which integrate cores sharing the same instruction set architecture (ISA) but offering different performance and power characteristics, have advantages over cost-equivalent homogeneous designs. When these cores share the off-chip main memory, requests from different cores interfere with one another, leading to low system performance, unfairness, and even starvation...
Modern multi-core processors pose new cache management challenges (more cache conflicts and misses) due to the subtle interactions of simultaneously executing processes sharing on-chip resources. To address this issue, thread-group scheduling, which clusters threads that share cache heavily into one group for scheduling, has been proposed and has attracted considerable academic and industrial attention...
Improving the power efficiency of the processor and the memory has recently received a lot of attention. However, most existing solutions target the processor or the memory in isolation and do not combine well to improve both simultaneously. This paper presents a solution that improves processor and memory power efficiency at the same time through group scheduling (GS), which manages CPU frequency and memory rank...
Memory is responsible for a large and increasing fraction of the energy consumed by computers. To address this challenge, memory manufacturers have developed memory devices with multiple power states. To manage these power states more effectively in the operating system, in this paper we propose a rank-sensitive buddy system (RS-Buddy) that clusters pages together to prolong the idle time...
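The page-clustering idea behind a rank-sensitive allocator can be illustrated with a toy model. This is not the paper's RS-Buddy code; the pool layout, constants, and function names are all assumptions used only to show why steering allocations onto one rank lengthens idle time on the others:

```python
# Toy rank-aware page allocation: keep a free pool per memory rank and
# satisfy requests from the currently active rank first, so the remaining
# ranks stay idle long enough to enter a low-power state.

NUM_RANKS = 4
PAGES_PER_RANK = 8

free_pools = {r: list(range(r * PAGES_PER_RANK, (r + 1) * PAGES_PER_RANK))
              for r in range(NUM_RANKS)}

def alloc_page(active_rank=0):
    # prefer the active rank; spill to the next rank only when it is empty
    for r in list(range(active_rank, NUM_RANKS)) + list(range(active_rank)):
        if free_pools[r]:
            return r, free_pools[r].pop()
    raise MemoryError("out of pages")

# the first PAGES_PER_RANK allocations all land on rank 0,
# leaving ranks 1..3 untouched and eligible for a low-power state
pages = [alloc_page() for _ in range(PAGES_PER_RANK)]
ranks_touched = {r for r, _ in pages}
```

A rank-oblivious allocator would scatter these pages across all four ranks, forcing every rank to stay in its active power state.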
Main memory is expected to grow significantly in both speed and capacity because it is a major shared resource among the cores of a multi-core system, and this growth will lead to increasing power consumption. It is therefore critical to address the memory subsystem's power consumption without seriously degrading its performance. In this paper, we first propose memory affinity, which retains the active and low-power...
Performance optimization and energy efficiency are major challenges in multi-core system design. Among state-of-the-art approaches, cache-affinity-aware scheduling and techniques based on dynamic voltage and frequency scaling (DVFS) are widely applied to improve performance and reduce energy consumption, respectively. In modern operating systems, schedulers exploit high cache affinity by allocating...
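The DVFS side of this trade-off can be sketched with a simple frequency-selection rule. The P-state list, the linear slowdown model, and all names below are assumptions for illustration, not the paper's policy: pick the lowest frequency whose predicted slowdown stays within a bound.

```python
# Minimal DVFS-style sketch: choose the lowest CPU frequency whose
# predicted slowdown stays within an allowed bound, trading energy
# for a small, controlled performance loss.

FREQS_GHZ = [1.0, 1.4, 1.8, 2.2]  # assumed available P-states

def pick_frequency(cpu_bound_fraction, max_slowdown=0.05):
    """cpu_bound_fraction: share of runtime that scales with frequency
    (a memory-bound task has a low value). Linear slowdown model assumed."""
    f_max = FREQS_GHZ[-1]
    for f in FREQS_GHZ:  # try the lowest frequency first
        slowdown = cpu_bound_fraction * (f_max / f - 1.0)
        if slowdown <= max_slowdown:
            return f
    return f_max

print(pick_frequency(0.1))  # memory-bound: a lower frequency suffices
print(pick_frequency(0.9))  # CPU-bound: needs the maximum frequency
```

The intuition matches the abstract's pairing of the two techniques: memory-bound phases tolerate lower CPU frequency, while cache-affine, CPU-bound phases should keep both their cache state and their clock speed.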
On a Chip Multi-Processor (CMP) architecture, cache sharing impacts threads non-uniformly: some threads may be slowed down significantly while others are not, which can cause severe performance problems such as reduced throughput and cache thrashing. This paper proposes a new model for predicting inter-thread cache contention, FOM (Frequency of Miss), and schedules threads based on the results of...
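Contention-aware co-scheduling driven by miss statistics can be sketched as follows. This does not reproduce the FOM model itself; the metric, the benchmark names, and the greedy pairing rule are assumptions used only to illustrate the idea of balancing cache pressure across shared caches:

```python
# Hedged sketch of miss-frequency-based co-scheduling: pair the most
# cache-hungry thread with the least cache-hungry one on each shared
# cache, so no single cache is overwhelmed by two heavy missers.

def pair_by_miss_frequency(miss_freq):
    """miss_freq: {thread: misses per kilo-instruction (assumed metric)}.
    Returns co-scheduled pairs, highest-miss thread with lowest-miss."""
    ordered = sorted(miss_freq, key=miss_freq.get)  # ascending by misses
    pairs = []
    while len(ordered) > 1:
        pairs.append((ordered.pop(), ordered.pop(0)))  # highest + lowest
    return pairs

threads = {"mcf": 90.0, "lbm": 70.0, "gcc": 10.0, "sjeng": 5.0}
print(pair_by_miss_frequency(threads))
# each cache-hungry thread is paired with a cache-light partner
```

Co-scheduling two heavy missers on the same shared cache is exactly the thrashing scenario the abstract describes; the greedy high/low pairing avoids it.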
On a CMP (Chip Multi-Processor) architecture, cache sharing impacts threads non-uniformly: some threads may be slowed down significantly while others are not, which can cause severe performance problems such as reduced throughput and cache thrashing. This paper proposes an architectural support predicting method (ASPM) to predict inter-thread cache contention, and schedules threads based on...
Out-of-order execution is a fundamental technique for achieving instruction-level parallelism in processor designs, and verifying out-of-order processors is a major challenge in processor design. This paper presents a formal method to model and check the correctness of an out-of-order design at the instruction level. The method is based on model checking, a widely used formal verification technique...
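The core mechanism of explicit-state model checking, which this abstract builds on, is exhaustive exploration of a system's reachable states with an invariant checked in each one. The toy transition system below is an assumption for illustration only (real tools such as NuSMV work symbolically on much larger models):

```python
# Toy illustration of explicit-state model checking: exhaustively
# explore the reachable states of a tiny machine and check an
# invariant in every state. Not the paper's method; illustrative only.

from collections import deque

def reachable_states(initial, transitions):
    """Breadth-first exploration of an explicit-state transition system."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# state: (register_value, pending_increments) for a machine that can
# hold up to two in-flight INC instructions and retires them later
def transitions(state):
    reg, pending = state
    succ = []
    if pending < 2:
        succ.append((reg, pending + 1))      # issue an INC
    if pending > 0:
        succ.append((reg + 1, pending - 1))  # retire an INC
    return [s for s in succ if s[0] <= 3]    # bound the model

states = reachable_states((0, 0), transitions)
# invariant checked in every reachable state, not just on sample runs
assert all(0 <= reg <= 3 and 0 <= pending <= 2 for reg, pending in states)
```

The point of the technique is that the invariant is verified over *every* reachable state, which is what distinguishes model checking from simulation-based testing of a pipeline.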
Verification is one of the most complex and expensive tasks in the current application-specific instruction-set processor (ASIP) design process. Many existing approaches employ a multi-level strategy to design and verify ASIPs efficiently, aiming to discover flaws earlier. This paper presents a verification approach based on HDPN (Hardware Design based-on Petri Net) and NuSMV. The validation of static...
As pipelining techniques develop and find application in processors, the verification of pipeline designs is becoming important in both academia and industry. This paper presents a method to model and check the correctness of pipelines with bypass configurations at the instruction level. The method is based on model checking, a formal verification technique, and not only suits the complete...
Validation is one of the most complex and expensive tasks in the current Application-Specific Instruction-Set Processor (ASIP) design process. Many existing approaches employ a multi-level strategy to design and verify ASIPs efficiently. This paper presents a novel extended timed Petri net model, HDPN (Hardware Design based-on Petri Net), to model systems at multiple levels, and introduces...