Research on task-scheduling algorithms is one of the key techniques in grid computing. This paper first describes a DAG task-scheduling model used in a grid computing environment, then discusses the generational scheduling (GS) and communication-inclusion generational scheduling (CIGS) algorithms. Finally, an improved CIGS algorithm is proposed for use in grid computing environments, and it has been proved...
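To make the DAG scheduling setting concrete, here is a minimal greedy list-scheduling sketch in Python. All names are hypothetical, and communication costs are ignored, so this is closer to plain generational scheduling than to CIGS:

```python
from collections import deque

def schedule_dag(tasks, deps, cost, n_machines):
    """Greedy list scheduling of a DAG onto identical machines.

    tasks: list of task ids; deps: dict task -> set of predecessor tasks;
    cost: dict task -> execution time (communication cost ignored here).
    Returns dict task -> (machine, start, finish).
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = deque(t for t in tasks if indeg[t] == 0)
    free_at = [0.0] * n_machines   # time at which each machine becomes free
    done = {}                      # task -> (machine, start, finish)
    succ = {t: set() for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succ[p].add(t)
    while ready:
        t = ready.popleft()
        # a task may start once all of its predecessors have finished
        est = max((done[p][2] for p in deps.get(t, ())), default=0.0)
        m = min(range(n_machines), key=lambda i: max(free_at[i], est))
        start = max(free_at[m], est)
        finish = start + cost[t]
        free_at[m] = finish
        done[t] = (m, start, finish)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return done
```

For a diamond DAG (A before B and C, both before D) with unit costs on two machines, B and C run in parallel and the makespan is 3.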
To tackle the complex and massive numbers of jobs sent by end users, powerful and dedicated computational resources are required. Grid computing provides such an environment, in which applications may run for quite a long time. Therefore, an efficient scheduling policy is indispensable. In our previous work, a mechanism named Swift Gap, followed by a developed completion-time rule, was introduced. This paper deeply...
With today's increasing use of technology, the physical resources that provide the infrastructure have begun to prove inadequate, which gave rise to the concept of virtualization. Just as virtualization is preferred on physical machines, it also ensures that the network structures providing the infrastructure are better managed and used more efficiently. There is no need for migrations in virtual networks with...
The operation and maintenance of large distributed systems that are subject to strict QoS requirements has led to the need to design and develop advanced monitoring tools that facilitate the administration of the critical services required by user communities. RASSMon is a portable, reliable, secure software platform able to collect monitoring data from multiple sources in heterogeneous environments...
Driven by the evolution of the Large Hadron Collider (LHC) at CERN, the infrastructure of the Worldwide LHC Computing Grid undergoes rapid changes. The need to store and analyze experimental data at an ever-growing rate has led to major architecture and software improvements for the optimization of data flow and access. This article presents, as a case study, the technological solutions that were...
The deformable part model (DPM) is a typical machine-learning-based detection technique. It achieves great detection accuracy, but its compute-intensive tasks severely restrict its utilization in many real-world applications. In order to reach a high frame rate for practical use, accelerators and grid computing infrastructure are needed. This paper proposes a grid scheduling scheme which...
The drastic increase in commodity computer and network performance over the last generation has resulted in faster hardware and more sophisticated software. However, the supercomputers of the current generation are still incapable of solving the current problems in the fields of science, engineering, and business. These problems arise because a single machine cannot facilitate the availability of various...
The advancement in the fields of science, engineering, social networking, and e-commerce, along with the tremendous growth in pervasive technologies, has generated a tsunami of data in digital form. Storing and processing this type of data is a major challenge for researchers. Different distributed computing and processing systems have been developed to overcome these real-world data computational...
New grid and cloud solutions for distributed data mining and data processing are needed for the execution of data-intensive workflows. In contrast to standard workflows, in which data are exchanged between jobs in the form of files and the jobs finish when they have processed their input data, data-intensive workflows receive data organized in blocks that are streamed to their inputs, analyze the data...
This paper examines the important construction paradigms and the technological and energy efficiency of distributed computing. The most relevant parameters, such as performance, speedup, energy consumption, PUE, and ERE, are discussed. Case studies dedicated to fog computing, VM migration, and the utilization of waste heat in clouds are analyzed.
This track started in 2009 with opening remarks from the Chair, who observed that the evolution of cloud computing depends on research efforts from infrastructure providers creating next-generation hardware that is service friendly; service developers that embed business-service intelligence in the computing infrastructure to create distributed business-workflow execution services; and service providers...
Cellular networks pose significant challenges for designers of high-performance data-intensive systems due to their complexity. Being able to study features and designs before building the actual system is an advantage that a simulation model can offer. This paper presents a program generator of Petri net models of hexagonal communication grids of arbitrary size for the verification of telecommunication systems...
Big RDF (Resource Description Framework) graphs, which populate the emerging Semantic Web, are the core data structure of so-called Big Web Data, the "natural" transposition of Big Data onto the Web. Managing big RDF graphs is gaining momentum, essentially because this task represents the "baseline operation" of Web big data analytics. Here, it is required...
There has been increasing interest in processing large-scale real-world graphs, and recently many graph systems have been proposed. Vertex-centric GAS (Gather-Apply-Scatter) and edge-centric GAS are two widely adopted graph computation models, and existing graph analytics systems commonly follow only one computation model, which is not the best choice for real-world graph processing. In fact,...
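To make the vertex-centric GAS model concrete, here is a sketch of one Gather-Apply-Scatter superstep of PageRank in Python. This is an illustrative example, not code from any of the systems referenced above, and it assumes every vertex has at least one outgoing edge:

```python
def gas_superstep(edges, rank, d=0.85):
    """One vertex-centric Gather-Apply-Scatter step of PageRank.

    edges: dict src -> list of dst vertices; rank: dict vertex -> value.
    Assumes every vertex has outgoing edges (no dangling nodes).
    """
    out_deg = {v: len(edges.get(v, [])) for v in rank}
    # Gather: each vertex accumulates contributions from its in-neighbors
    gathered = {v: 0.0 for v in rank}
    for src, dsts in edges.items():
        for dst in dsts:
            gathered[dst] += rank[src] / out_deg[src]
    # Apply: combine the gathered sum with the damping factor
    new_rank = {v: (1 - d) / len(rank) + d * gathered[v] for v in rank}
    # Scatter: the updated values become visible to neighbors next superstep
    return new_rank
```

On a 3-node cycle with uniform initial ranks, one superstep leaves the ranks unchanged, since the uniform vector is the fixed point for that graph.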
Because of the rapidly decreasing cost of sequencing, more research and clinical institutes are generating Next Generation Sequencing data at an increasing and impressive scale. University Medical Centers in the Netherlands are each sequencing thousands of patients a year as part of their routine diagnostics. On the research front, the GoNL project and the BIOS project coordinated by the BBMRI-NL consortium...
Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based, or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance of laboratory-based approaches reveals that performance is impeded...
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches, whose bandwidth can be saturated during transfers from storage to processing nodes for even moderate-sized studies...
An auto-controlled ant colony optimization (ACO) algorithm controls the behavior of the ant colony algorithm automatically, based on a priori heuristic models. Lazy ants are essentially mutated versions of active ants that remain alive until fitter lazy ants are generated in successive generations. This work presents an improved auto-controlled ACO algorithm using the lazy-ant concept. Grid Scheduling...
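The lazy-ant retention idea described above can be sketched on a toy optimization problem as follows. This is a hypothetical illustration of the retention rule only, not the paper's pheromone-based grid-scheduling ACO, and all names are invented:

```python
import random

def lazy_ant_search(fitness, mutate, seed_solution,
                    generations=100, n_ants=10, rng=None):
    """Toy sketch of the lazy-ant idea: a mutated copy of the best
    solution (the 'lazy ant') is kept alive across generations and
    replaced only when a fitter mutant of it appears.
    fitness: lower is better; mutate(sol, rng) -> new solution.
    """
    rng = rng or random.Random(0)
    lazy = mutate(seed_solution, rng)        # initial lazy ant
    best = seed_solution
    for _ in range(generations):
        # active ants: fresh mutants of the current best solution
        ants = [mutate(best, rng) for _ in range(n_ants)]
        gen_best = min(ants, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
        # the lazy ant survives until a fitter lazy ant is generated
        candidate = mutate(lazy, rng)
        if fitness(candidate) < fitness(lazy):
            lazy = candidate
    return min(best, lazy, key=fitness)
```

Because both the best solution and the lazy ant are replaced only by strictly fitter candidates, the returned fitness never exceeds the seed's.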
Specialists know that servers are not of equal speed, as speed depends on several factors such as processor speed, bandwidth, congestion, etc. This means that if we use multiple servers for the purpose of downloading some files, a problem arises: the swift server waits for the lazy one to complete its task. In this work, a new strategy for transferring files is adopted, called “Minimizing replica...
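Partitioning a file among servers in proportion to their measured speeds is one common way to keep a fast server from waiting on a slow one. A minimal sketch of that idea (hypothetical, and not the truncated strategy named above):

```python
def split_by_speed(file_size, speeds):
    """Divide a file among servers proportionally to their speeds,
    so all servers finish their portions at roughly the same time.
    Returns a byte range (start, end) per server, end exclusive.
    """
    total = sum(speeds)
    ranges, start = [], 0
    for i, s in enumerate(speeds):
        if i == len(speeds) - 1:
            size = file_size - start   # last server takes the remainder
        else:
            size = round(file_size * s / total)
        ranges.append((start, start + size))
        start += size
    return ranges
```

For a 100-byte file and two servers with a 3:1 speed ratio, the fast server gets bytes 0-74 and the slow one 75-99, so each spends the same transfer time.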
Data replication in data grids increases the availability of data and reduces the total execution time of grid jobs. The replica replacement algorithm plays a vital role when storage space is limited: it is this algorithm that decides which replica should be replaced by the new one. The binomial prediction, Least Frequently Used (LFU), and Least Recently Used (LRU) replica replacement algorithms are...
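As an illustration of the replacement policies named above, here is a minimal LRU replica store in Python. This is a generic LRU sketch with hypothetical names, not the binomial-prediction algorithm:

```python
from collections import OrderedDict

class LRUReplicaStore:
    """Minimal sketch of LRU replica replacement on a size-limited store:
    when space runs out, the least recently used replica is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # replica id -> data, in recency order

    def access(self, replica_id, data=None):
        if replica_id in self.store:
            self.store.move_to_end(replica_id)   # mark as recently used
            return self.store[replica_id]
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict the LRU replica
        self.store[replica_id] = data
        return data
```

With capacity 2, accessing replicas a, b, a, c evicts b, since a was touched more recently. An LFU variant would instead track access counts and evict the least frequently used replica.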