Graph processing has been widely used to capture complex data dependencies and uncover relationship insights. Due to the ever-growing graph scale and algorithm complexity, distributed graph processing has become more and more popular. In this paper, we investigate how to balance performance and cost for large scale graph processing on configurable virtual machines (VMs). We analyze the system architecture...
Design space exploration refers to the evaluation of implementation alternatives for many engineering and design problems. A popular exploration approach is to run a large number of simulations of the actual system with varying sets of configuration parameters to search for the optimal ones. Due to the potentially huge resource requirements, cloud-based simulation execution strategies should be considered...
Graph analytics has become essential to uncover relationship insights in complex systems. As graphs grow in scale, several graph-parallel frameworks, including Pregel, GraphLab, and PowerGraph, have been developed based on commodity computers and/or Cloud instances. According to recent research and empirical performance evaluation, system optimization on PowerGraph allows it to outperform others significantly...
With more parallel and distributed applications moving to Cloud and data centers, it is challenging to provide predictable and controllable resources to multiple tenants, and thus guarantee application performance. In this paper, we propose an integrated QoS-aware resource provisioning platform based on virtualization technology for computing, storage and network resources. Coarse-grained CPU mapping...
Data mining is a difficult task that relies on an exploratory and analytic process of examining large quantities of data in order to discover meaningful patterns for valuable insights. The increasing heterogeneity and complexity of data requires expert knowledge on how to combine multiple data mining techniques to process and analyze the data in an effective and efficient way. This paper presents...
Graph processing has become popular for various big data analytic applications. Google's Pregel framework enables vertex-centric graph processing in distributed environment based on Bulk Synchronous Parallel (BSP) model. However, the BSP model is inefficient for many complex graph algorithms requiring graph traversals, as only a small number of vertices really update states in each super step. In...
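The BSP-style, vertex-centric execution the abstract describes can be illustrated with a minimal sketch. This is a generic illustration of a superstep loop (here, propagating the maximum value through a graph), not Pregel's actual API; the function name and graph representation are assumptions for the example. Note how, in later supersteps, only vertices whose state actually changed send messages, which is exactly the behavior the abstract points to.

```python
def bsp_max_value(graph, values):
    """Propagate the maximum value to every vertex of a connected graph
    using a BSP superstep loop.
    graph: {vertex: [out-neighbors]}; values: {vertex: number}."""
    # Superstep 0: every vertex sends its value to its neighbors.
    inbox = {v: [] for v in graph}
    for v in graph:
        for u in graph[v]:
            inbox[u].append(values[v])
    # Later supersteps: a vertex is active only if it received messages,
    # and it sends again only when its state actually changed.
    while any(inbox.values()):
        next_inbox = {v: [] for v in graph}
        for v, msgs in inbox.items():
            if msgs and max(msgs) > values[v]:
                values[v] = max(msgs)
                for u in graph[v]:
                    next_inbox[u].append(values[v])
        inbox = next_inbox
    return values
```

As the supersteps progress, fewer and fewer vertices update state, yet every superstep still incurs a global synchronization barrier, which is the inefficiency for traversal-style algorithms that the abstract highlights.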
With more applications moving to cloud, scalable storage systems, composed of a cluster of storage servers and gateways, are deployed as the back-end infrastructure to accommodate high-volume data. In such an environment, it is a challenge to provide predictable and controllable storage performance for multitenanted users with multiple applications, due to performance violation from misbehaving applications...
Agent-based modeling is one of the promising modeling tools that can be used in the study of population dynamics. Two of the main obstacles hindering the use of agent-based simulation in practice are its scalability when the analysis requires large-scale models as in policy research, and its ease of use, especially for users with no programming experience. While there has been significant work on...
Due to diverse network latencies, participants in a Distributed Virtual Environment (DVE) may observe different inconsistency levels of the simulated virtual world, which can seriously affect fair competition among them. In this paper, we investigate how to disseminate Dead Reckoning (DR)-based updates with the objectives of achieving fairness among participants and reducing inconsistency as much...
Maintaining a consistent presentation of the virtual world among participants is a fundamental problem in the Distributed Virtual Environment (DVE). The problem is exacerbated due to the limited network bandwidth and error-prone transmission. This paper investigates Dead Reckoning (DR) update scheduling to improve consistency in the DVE against message loss. Using the metric of Time-Space Inconsistency...
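The Dead Reckoning mechanism behind these two abstracts can be sketched in a few lines: receivers extrapolate an entity's last known state, and the sender issues a new update only when the receivers' prediction would drift past an error threshold. This is a minimal first-order sketch under assumed names; the papers' actual scheduling policies are more elaborate.

```python
import math

def dr_extrapolate(pos, vel, dt):
    """First-order dead reckoning: predict where an entity will be
    dt seconds after its last state update, given position and velocity."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, predicted_pos, threshold):
    """Sender-side check: a new DR update is sent only once the
    receivers' extrapolated position drifts beyond the threshold."""
    return math.dist(true_pos, predicted_pos) > threshold
```

For example, an entity last reported at (0, 0) with velocity (2, 1) is extrapolated to (1.0, 0.5) after 0.5 s; if its true position is (1.0, 0.6) and the threshold is 0.5, no update is sent, saving bandwidth at the cost of bounded inconsistency.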
A large scale HLA-based simulation (federation) is composed of a large number of simulation components (federates), which may be developed by different participants and executed at different locations. Byzantine failures, caused by malicious attacks and software/hardware bugs, might happen to federates and propagate in the federation execution. In this paper, a three-phase (i.e., failure detection,...
A large scale HLA-based simulation (federation) is composed of a large number of simulation components (federates), which may be developed by different participants and executed at different locations. These federates are subject to failures due to various reasons. What is worse, the risk of federation failure increases with the number of federates in the federation. In this paper, a fault tolerance...
Simulation is a low cost and safe alternative to solve complex problems in various areas. To promote reuse and interoperability of simulation applications and link geographically dispersed simulation components, distributed simulation was introduced. The High Level Architecture (HLA) is the IEEE standard for distributed simulation. The actual implementation of the HLA standard is provided by a Run...
Interactive multi-user Internet games require frequent state updates between players to accommodate the great demand for reality and interactivity. The large latency and limited bandwidth on the Internet greatly affect the game's scalability. The High Level Architecture (HLA) is the IEEE standard for distributed simulation with its Data Distribution Management (DDM) service group assuming the functionalities...
Parallel and distributed simulation facilitates the construction of a simulation application (i.e., federation in HLA terminology) with a number of simulation components (federates). Recently, an approach based on active replication technique has been proposed to improve the performance of simulations by exploring software diversity. To guarantee the correctness of the approach, all replicas of the...
The High Level Architecture (HLA), which is the IEEE standard for distributed simulation, defines six service groups. The Time Management (TM) service group ensures a Time-Stamp-Ordered (TSO) message delivery sequence and correct time advancement of each simulation component (federate) in an HLA-based distributed simulation application (federation). To control time advancement of a federation, a distributed...
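The conservative time advancement the TM services enforce can be sketched as follows. A federate asking to advance to time T is granted at most the Lower Bound on Time Stamp (LBTS) of the other federates: the earliest time stamp any other federate could still send, given its current logical time plus its lookahead. This is a simplified illustration that ignores transient messages; the function and parameter names are assumptions, not the HLA API.

```python
def lbts(requesting, federate_times, lookaheads):
    """Lower Bound on Time Stamp for one federate: the earliest time
    stamp any *other* federate could still produce, computed as the
    minimum of (logical time + lookahead) over all other federates."""
    return min(federate_times[f] + lookaheads[f]
               for f in federate_times if f != requesting)

def grant(requesting, requested_time, federate_times, lookaheads):
    """Grant the smaller of the requested time and the current LBTS,
    so every TSO message up to the granted time has already been seen."""
    return min(requested_time,
               lbts(requesting, federate_times, lookaheads))
```

For instance, if federate A requests time 10 while B sits at time 4 with lookahead 2 and C at time 7 with lookahead 3, A can only be granted time 6, since B might still send a message stamped 6.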
The High Level Architecture provides a general framework for distributed simulation, promoting reusability and interoperability of simulation components (federates). Large scale distributed simulation, in which federates run on many heterogeneous computing machines, may benefit from migrating federates among these machines for load-balancing and fault-tolerance. However, the HLA framework does not...
Modeling and simulation permeate all areas of business, science and engineering. To promote the interoperability and reusability of simulation applications and link geographically dispersed simulation components, distributed simulation was introduced. While the high level architecture (HLA) is the IEEE standard for distributed simulation, a run time infrastructure (RTI) provides the actual implementation...
Simulation is a low cost and safe alternative to solve complex problems in various areas. To promote reuse and interoperability of simulation applications and link geographically dispersed simulation components, distributed simulation was introduced. The high level architecture (HLA) is the IEEE standard for distributed simulation. To optimize communication efficiency between simulation components,...