Simulation is a critical tool for evaluating processor and program performance and behavior in newly proposed computer architectures. When modeling target machines with hundreds or thousands of cores, parallel simulation approaches are an increasingly popular method to reduce the long simulation times inherent in single-threaded simulation. Unfortunately, synchronization forces a tradeoff between...
As modern systems integrate an increasing number of components for better performance and functionality, early full-system simulation tools have become essential for validating complex concurrent system interactions. In the past decades, many useful timing-accurate system simulation tools have been developed; however, we find that even for the most efficient techniques, more than 90% of...
A common practice for reducing synchronization overheads in parallel simulation of a large-scale cluster is to relax synchronization with lengthened synchronous steps. However, as a side effect, simulation accuracy degrades considerably. This paper proposes a novel mechanism that keeps the running speeds of different nodes consistent by synchronizing logical clocks with the wall clock periodically...
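The mechanism described above pairs each node's logical clock with the wall clock at a fixed period, throttling nodes that run ahead. The following sketch illustrates the general idea; the class name, `speedup` ratio, and sleep-based throttling are illustrative assumptions, not the paper's actual algorithm.

```python
import time

class PacedNode:
    """Keeps one simulated node's logical clock advancing at a fixed
    ratio of wall-clock time by throttling at each sync period.

    speedup is the target ratio of logical seconds per wall second;
    sync_period is the logical interval between wall-clock checks.
    """

    def __init__(self, speedup=1.0, sync_period=0.01):
        self.logical = 0.0                  # logical clock, in seconds
        self.speedup = speedup
        self.sync_period = sync_period
        self.start_wall = time.monotonic()
        self.next_sync = self.sync_period

    def advance(self, dt):
        """Advance logical time by dt; at each sync boundary, compare
        logical progress against the wall clock and sleep if ahead."""
        self.logical += dt
        if self.logical >= self.next_sync:
            target_wall = self.logical / self.speedup
            elapsed = time.monotonic() - self.start_wall
            if elapsed < target_wall:       # running ahead of wall clock
                time.sleep(target_wall - elapsed)
            self.next_sync += self.sync_period
```

With every node paced against the same wall clock, their relative speeds stay consistent without pairwise synchronization at each step.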
The high level of heterogeneity of modern embedded systems forces designers to use different formalisms, thus making reuse and integration very difficult tasks. Reducing such heterogeneity to a homogeneous implementation is a key step toward both simulation and validation of the system. This paper proposes two novel flows to obtain homogeneous C++ and SystemC-AMS implementations starting...
Empirical evidence shows that massive data sets rarely (if ever) have a stationary underlying distribution. To obtain meaningful classification models, partitioning data into different concepts is required as an inherent part of learning. However, existing state-of-the-art approaches to concept drift detection work only sequentially (i.e., in a non-parallel fashion), which is a serious scalability limitation...
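The scalability limitation above can be relaxed by scoring independent window pairs of the stream concurrently. The sketch below is a minimal illustration under assumed names: the mean-difference statistic stands in for any real drift test, and the windowing scheme is not taken from the paper.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def drift_score(window_a, window_b):
    """Absolute difference in means between two adjacent windows;
    a stand-in for any real drift statistic."""
    return abs(mean(window_a) - mean(window_b))

def detect_drift_parallel(stream, window=50, threshold=0.5, workers=4):
    """Score every adjacent window pair concurrently and return the
    start indices where the score exceeds the threshold."""
    pairs = [(i, stream[i:i + window], stream[i + window:i + 2 * window])
             for i in range(0, len(stream) - 2 * window + 1, window)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(lambda p: (p[0], drift_score(p[1], p[2])), pairs)
        return [i for i, s in scores if s > threshold]
```

Because each window pair is scored independently, the work partitions cleanly across threads or machines.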
This paper introduces PartitionSim, a parallel simulator for future thousand-core processors. The purpose of PartitionSim is to improve the simulation performance of many-core architectures with only a small sacrifice in accuracy. To achieve this goal, we propose a novel technique: timing partition. Timing partition is based on the observation that, in a target system, interacting components communicate...
This paper addresses workload partition strategies in simulating many-core architectures. The key observation behind this paper is that, compared to multicore, many-core architectures feature more non-uniform memory access and unpredictable network traffic; these features degrade the simulation speed and accuracy of parallel discrete event simulators (PDES) under static workload partition schemes. Based on...
The problem addressed in this paper is the limitation imposed by network elements, especially Ethernet elements, on the real-time performance of time-critical systems. Most current network elements are concerned only with data integrity, connection, and throughput, with no mechanism for enforcing temporal semantics. Existing safety-critical applications and other applications in industry require varying...
Virtual platform simulation is an essential technique for early-stage system-level design space exploration and embedded software development. In order to explore the hardware behavior and verify the embedded software, simulation speed and accuracy are the two most critical factors. However, given the increasing complexity of the Multi-Processor System-on-Chip (MPSoC) designs, even the state-of-the-art...
Current trends signal an imminent crisis in the simulation of future CMPs (Chip Multiprocessors). Future micro-architectures will offer more and more thread contexts to execute parallel programs, but the execution speed of each thread will not improve at the same pace. CMPs with tens or even hundreds of cores are envisioned. Simulating these future CMPs efficiently without compromising accuracy is a...
Time synchronization is the basis of real-time interconnection in distributed battle simulation systems and is an important issue for battle simulation. The time management of HLA and approaches to time synchronization in distributed battle simulation systems are introduced; then, grounded in an analysis of time synchronization for real-time interconnection, the approach of RTXTimer-based time synchronization...
Nowadays, the development of embedded system hardware and related system software is mostly carried out using virtual platform environments. The high level of modeling detail (hardware elements are partially modeled in a cycle-accurate fashion) is required for many core design tasks. At the same time, the high computational complexity of virtual platforms caused by the detailed level of simulation...
The Reliable and Self-Aware Clock (R&SAClock) is a new software clock aimed at providing resilient time information. It uses and exploits the information collected from any chosen clock synchronization mechanism to provide both the current time and the synchronization uncertainty, intended as a conservative and self-adaptive estimation of the distance from an external global time. This paper describes...
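A clock that reports both the current time and a conservative uncertainty bound, as R&SAClock does, can be sketched as follows. The class name, the linear drift model, and the `drift_rate`/`base_error` parameters are illustrative assumptions, not the R&SAClock design itself.

```python
import time

class UncertainClock:
    """Reports the current time together with a conservative uncertainty
    bound that grows with the local oscillator's assumed drift since the
    last synchronization event."""

    def __init__(self, drift_rate=50e-6, base_error=0.001):
        self.drift_rate = drift_rate    # assumed drift, seconds per second
        self.base_error = base_error    # residual error right after sync
        self.last_sync = time.monotonic()

    def synchronized(self):
        """Call when the underlying sync mechanism corrects the clock."""
        self.last_sync = time.monotonic()

    def now(self):
        """Return (current_time, uncertainty): the bound is the post-sync
        residual plus accumulated drift, so it is self-adaptive."""
        elapsed = time.monotonic() - self.last_sync
        return time.time(), self.base_error + self.drift_rate * elapsed
```

An application can then reject a timestamp whose uncertainty exceeds its tolerance, instead of trusting a bare clock value.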
Parallel simulation is a technique to accelerate microarchitecture simulation of CMPs by exploiting the inherent parallelism of CMPs. In this paper, we explore the simulation paradigm of simulating each core of a target CMP in one thread and then spreading the threads across the hardware thread contexts of a host CMP. We start with cycle-by-cycle simulation and then relax the synchronization condition...
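The one-thread-per-core paradigm with relaxed synchronization can be sketched with a barrier: each simulated core advances a quantum of cycles, then waits for the others. This is a minimal illustration under assumed names; `core_step` is a stub, and `quantum=1` corresponds to strict cycle-by-cycle lockstep.

```python
import threading

def simulate(num_cores, total_cycles, quantum=1):
    """Run one thread per simulated core; each thread simulates `quantum`
    cycles, then waits at a barrier. Larger quanta relax synchronization,
    trading timing accuracy for simulation speed."""
    barrier = threading.Barrier(num_cores)
    cycles_done = [0] * num_cores

    def core_step(core_id):
        # Stand-in for simulating one cycle of one target core.
        cycles_done[core_id] += 1

    def run_core(core_id):
        cycle = 0
        while cycle < total_cycles:
            for _ in range(min(quantum, total_cycles - cycle)):
                core_step(core_id)
                cycle += 1
            barrier.wait()      # all cores align at each quantum boundary

    threads = [threading.Thread(target=run_core, args=(i,))
               for i in range(num_cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return cycles_done
```

The barrier count per thread is `total_cycles / quantum`, which is where the speedup from relaxing the synchronization condition comes from.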
The relationship between structure and function is of central importance in neuroscience. Computational modeling techniques can play a crucial role in exploring this relationship. Neuroscientists have revealed an interesting patterning in the connectivity of visual cortical areas, where the receptive field sizes for feed-forward, lateral and feedback connections are monotonically increasing and these...
System-level software modeling and simulation have become important techniques for early design space exploration of real-time embedded systems. However, timing accuracy remains poorly addressed by current methods, which either produce unrealistic results or incur large simulation overheads. In this paper, we propose a mixed timing modeling and simulation approach to decouple conventionally interdependent...
If all clocks within a distributed system share the same notion of time, the application domain can gain several advantages, among them real-time behavior, accurate time stamping, and event detection. However, with the widespread application of clock synchronization, another topic has to be taken into consideration: fault tolerance. The well-known clock synchronization...
The slow speed of conventional execution-driven architecture simulators is a serious impediment to obtaining desirable research productivity. This paper proposes and evaluates a fast manycore processor simulation framework called two-phase trace-driven simulation (TPTS), which splits detailed timing simulation into a trace generation phase and a trace simulation phase. Much of the simulation overhead...
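The two-phase split described above can be illustrated with a toy sketch: phase 1 emits a functional trace of memory events, and phase 2 replays it against a timing model. The cache parameters and event format below are illustrative assumptions, not the TPTS framework's actual design.

```python
def generate_trace(program):
    """Phase 1 (trace generation): functional simulation emits a trace of
    (op, addr) events. Here `program` is already such a list, standing in
    for instruction-level emulation."""
    return list(program)

def replay_trace(trace, hit_latency=1, miss_latency=100, cache_size=4):
    """Phase 2 (trace simulation): replay the trace against a tiny
    fully-associative LRU cache and accumulate total cycles."""
    cache, cycles = [], 0
    for op, addr in trace:
        if addr in cache:
            cache.remove(addr)
            cycles += hit_latency
        else:
            cycles += miss_latency
            if len(cache) == cache_size:
                cache.pop(0)            # evict the least recently used line
        cache.append(addr)              # most recently used at the tail
    return cycles
```

Because the expensive functional phase runs once, the same trace can be replayed under many timing-model configurations, which is where the framework's speedup comes from.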
The construction industry has been facing a paradigm shift to (i) increase productivity, efficiency, infrastructure value, quality and sustainability, and (ii) reduce lifecycle costs, lead times and duplication, via effective collaboration and communication of stakeholders in construction projects. Digital construction is a political initiative to address low productivity in the sector. This seeks...
For several decades, the output from semiconductor manufacturers has been high volume products with process optimisation being continued throughout the lifetime of the product to ensure a satisfactory yield. However, product lifetimes are continually shrinking to keep pace with market demands. Furthermore, there is an increase in 'foundry' business where product volumes are low; consequently...