The objective of this paper is to present a generic and extensible access framework architecture for WebLab integration. In this framework, each WebLab becomes accessible by means of a preinstalled plug-in. This modular approach makes it possible to add, remove, or modify a plug-in, and its corresponding WebLab, without recompiling the framework.
Traditional parallel programming styles suffer from problems that hinder the development of parallel applications. The message-passing style can be too complex for many programmers. While shared-memory parallel programming is relatively easy, it requires programmers to guarantee that programs are free of data races by using mutually exclusive locks. Data race conditions are generally difficult to...
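The hazard this abstract refers to can be illustrated with a minimal sketch (our own toy code, not from the paper): concurrent unsynchronized increments of a shared counter form a data race, and a mutually exclusive lock is what makes the result deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n: int) -> None:
    """Increment the shared counter n times under a mutex."""
    global counter
    for _ in range(n):
        with lock:       # without this lock, updates may be lost (a data race)
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock held for every increment: 40000
```

The burden the abstract describes is exactly this: the programmer must remember to take the lock around *every* access, and the compiler will not complain if they forget.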
There have been a few proposals aiming at bridging the gap between institutional grid infrastructures (e.g., Globus-based), popular cycle-sharing applications (e.g., SETI@home), and massively used decentralized P2P file-sharing applications. Nonetheless, no such infrastructure has ever succeeded in allowing home users, at a large scale, to run popular desktop applications faster by using spare cycles...
Massively multiplayer online games have become increasingly popular. However, their operation is costly, as game servers must be maintained. To reduce these costs, we aim to provide a communication engine for developing massively multiplayer online games based on a peer-to-peer system. In this paper we analyze the requirements of such a system and present an overview of our current work.
Enhancing the quality of weather and climate forecasts is a central scientific research objective worldwide. However, simulations of the atmosphere usually demand high processing power and large storage resources. In this context, we present the GBRAMS project, which applies grid computing to speed up the generation of a regional model climatology for Brazil. A grid infrastructure was built to perform...
A grid is a computational environment in which applications can use multiple distributed computational resources in a safe, coordinated, efficient, and transparent way. Data Integration Middleware Systems (DIMS) are inherently distributed systems that can exploit grid environments to obtain better performance and a rational use of available resources. This work describes a Distributed Query Execution...
We present a new distributed performance-analysis service for the ASKALON integrated Grid environment that computes runtime overheads of dynamic workflows in real time based on event-correlation techniques. We illustrate a formal method for expressing precise overhead-correlation rules, including several performance contracts as quality-of-service parameters based on fuzzy logic, to be enforced in dynamic...
In this paper we focus on distributed visualization using the Visualization Toolkit (VTK) in grid environments. We propose a distributed architecture, based on data parallelism, that allows the distribution of visualization tasks over a grid environment. We chose the Globus Toolkit as middleware to provide access and location transparency. We also add facilities for dynamic allocation of resources...
The workflow model for composing Grid applications is based on an imperative model of computation prone to programming errors, an issue the Grid community has yet to address. In this paper, we propose a new unconventional model for programming Grid applications based on two programming phases: (1) formal functional specification, written by the application scientist not interested in any...
While a grid represents a computing infrastructure for cross-domain sharing of computational resources, the cyberinfrastructure proposed by the US NSF Blue-Ribbon advisory panel is expected to revolutionize science and engineering by including more computer-integrated resources, e.g., telescopes and observatories. As part of the China national cyberinfrastructure for education and research,...
Tight coordination of resource allocation among end points in Grid networks often requires a data mover service to transfer a voluminous dataset from one site to another within a specified time interval. At its most flexible, a transfer can start at any time after its arrival and use any, even time-varying, bandwidth value, as long as it completes before its deadline. Given a set of such...
In this paper we examine the issues of optimizing disk usage and of scheduling large-scale scientific workflows onto distributed resources, where the workflows are data-intensive, requiring large amounts of data storage, and the resources have limited storage capacity. Our approach is two-fold: we minimize the amount of space a workflow requires during execution by removing data files at runtime...
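The runtime-cleanup idea above can be sketched in a few lines (names and structure are ours, not the paper's): track how many pending tasks still need each data file, and release a file as soon as its last consumer finishes, so the workflow's peak storage footprint stays low.

```python
class WorkflowStorage:
    """Toy model of runtime file cleanup in a data-intensive workflow."""

    def __init__(self, consumers):
        # consumers: file name -> set of task names that still need it
        self.pending = {f: set(t) for f, t in consumers.items()}
        self.deleted = []  # stand-in for files actually removed from disk

    def task_done(self, task):
        """Mark a task finished; delete any file it was the last consumer of."""
        for f, tasks in list(self.pending.items()):
            tasks.discard(task)
            if not tasks:                  # no remaining consumer
                self.deleted.append(f)     # would issue a real delete here
                del self.pending[f]

store = WorkflowStorage({"raw.dat": {"t1", "t2"}, "cal.dat": {"t2"}})
store.task_done("t1")   # raw.dat still needed by t2, nothing deleted yet
store.task_done("t2")   # both files now have no consumers
print(sorted(store.deleted))  # ['cal.dat', 'raw.dat']
```

The same reference-counting view also supports the scheduling side: a site's projected free space at any point is its capacity minus the files whose consumer sets are still non-empty there.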
The design and engineering of complex materials and products often requires intricate interactions between domain experts in science, materials, and engineering, as well as the utilization of diverse software systems for discovery and optimization. Left to themselves, design engineers would most likely be at a loss as to how to engage the entire entourage of multidisciplinary processes as well as the...
The number of processors embedded in high performance computing platforms is continuously increasing to accommodate users' desire to solve larger and more complex problems. However, as the number of components increases, so does the probability of failure. Thus, both scalability and fault tolerance of software are important issues in this field. To ensure the reliability of software, especially under the...
We study two problems directly resulting from organizational decentralization of the grid. The first is fair scheduling in systems in which the grid scheduler has complete control of processors' schedules. The second is fair and feasible scheduling in the decentralized case, in which the grid scheduler can only suggest a schedule, which a processor's owner may later modify...
Scientific computing is being increasingly deployed over volunteer-based distributed computing environments consisting of idle resources on donated user machines. A fundamental challenge in these environments is the dissemination of data to the computation nodes, with the successful completion of jobs being driven by the efficiency of collective data download across compute nodes, and not only the...
Applications executed in grid computing environments are becoming more and more complex and usually consist of multiple interdependent tasks. The coordinated execution of such tightly or loosely coupled tasks often requires simultaneous access to different grid resources. This leads to the problem of resource co-allocation. Efficient and robust scheduling algorithms have to be developed that can cope...
This paper shows how loosely coupled compute resources, managed by Condor, can be leveraged together with IBM OmniFind to implement a scalable environment for text analysis based on the Unstructured Information Management Architecture (UIMA). Text analysis can be used to extract valuable knowledge from unstructured text data such as entities and their relationships. When applied to large amounts of...
Data replication is an effective technique for moving and caching data close to users. Through replication, data-access performance can be improved dramatically. One of the challenges in data replication is selecting the candidate sites where replicas should be placed. We use a multi-objective model to address the replica placement problem. The multi-objective model considers the objectives of p-median and...
We develop a consistent mutable replication extension for NFSv4 tuned to meet the rigorous demands of large-scale data sharing in global collaborations. The system uses a hierarchical replication control protocol that dynamically elects a primary server at various granularities. Experimental evaluation indicates a substantial performance advantage over a single server system. With the introduction...