The multicore revolution promises potentially hundreds of cores in desktop computers. The ever-increasing number of cores per chip will be accompanied by a pervasive data deluge whose size will probably grow even faster than CPU core count over the next few years. This underscores the importance of parallel data analysis and data mining applications with good multicore, cluster, and grid performance...
This paper describes the application hosting services (AHS) that form part of the Japanese e-science project following the NAREGI project. AHS provides users with application catalog information for applications deployed across multiple grid environments, from heavy-duty production grids to lightweight laboratory-level grids. In addition, community members can share applications...
Research data collections are tremendously important and thus need good curation. However, data collections differ significantly from publication repositories, so we need to ensure that these differences are taken into account when managing research data. We believe that a good way of approaching this problem is to articulate the needs of research data stakeholders - particularly users and...
Multidimensional scaling constructs a configuration of points in a target low-dimensional space such that the interpoint distances approximate the corresponding known dissimilarity values as closely as possible. The SMACOF algorithm is an elegant gradient-descent approach to solving the multidimensional scaling problem. We design a parallel SMACOF program using parallel matrix multiplication to run on a multicore...
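The core of SMACOF can be stated compactly: it minimizes the stress (the squared gap between embedding distances and the given dissimilarities) via the Guttman transform, a matrix-multiplication-heavy update, which is what makes the approach amenable to parallelization. The following is a minimal serial sketch with unit weights; function names are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def stress(X, delta):
    """Raw stress: sum over i<j of (d_ij(X) - delta_ij)^2, where
    d_ij(X) is the Euclidean distance between embedded points."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.sum((D[iu] - delta[iu]) ** 2)

def guttman_transform(X, delta):
    """One SMACOF iteration with unit weights: X_new = (1/n) B(X) X,
    where b_ij = -delta_ij / d_ij(X) for i != j (0 if d_ij = 0) and
    the diagonal makes the rows sum to zero. Stress never increases."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        B = -np.where(D > 0, delta / D, 0.0)
    B[np.diag_indices(n)] = 0.0
    B[np.diag_indices(n)] = -B.sum(axis=1)
    return B @ X / n
```

The `B @ X` product is the dominant cost per iteration, which is why the paper's parallel version is built around parallel matrix multiplication.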
This paper presents a new data manager that we developed for the DIET middleware: DAGDA. We present the new data management features introduced by DAGDA and show the improvements DIET users can obtain by using them. Our experimental results show the validity of the DAGDA data management approach and show how data management choices can improve the performance of distributed applications.
This paper presents an adaptive XML parser that is based on table-driven XML (TDX) parsing technology. This technique can be used to develop extensible high-performance Web services for large, complex systems that typically require extensible schemas. The parser integrates scanning, parsing, and validation into a single pass without backtracking by utilizing compact tabular representations of schemas...
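The idea of table-driven validation can be illustrated with a toy example: a schema's content model is compiled offline into a transition table, so run-time validation is a single pass of table lookups with no backtracking. This sketch abstracts away namespaces, attributes, and the actual TDX table format; the schema and element names are purely illustrative:

```python
# Toy content model "header, item+, footer" compiled into a DFA table.
# States: 0 = start, 1 = saw header, 2 = saw item(s), 3 = saw footer.
TABLE = {
    (0, "header"): 1,
    (1, "item"): 2,
    (2, "item"): 2,
    (2, "footer"): 3,
}
ACCEPT = {3}

def validate(tokens):
    """Single-pass validation: each token costs one table lookup;
    a missing (state, token) entry rejects immediately."""
    state = 0
    for tok in tokens:
        state = TABLE.get((state, tok))
        if state is None:
            return False
    return state in ACCEPT
```

Because the table encodes the schema ahead of time, scanning, parsing, and validation can proceed together in one left-to-right pass, which is the property the abstract highlights.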
Job scheduling for parallel systems has been widely explored in recent years, especially in centers with high-performance computing facilities. In the recent past we presented the eNANOS execution environment, which is based on a coordinated architecture, from CPU allocation up to grid scheduling, providing good low-level support for efficient high-level scheduling. In this...
Computer architecture is now at an important juncture as single-core CPU power is expected to be nearly constant. The microprocessor industry is rapidly moving towards chip multi-processors (CMPs), commonly referred to as multi-core processors. The transition of CPUs from single to multi-core implementations requires a corresponding shift in the programming paradigm for grid and e-science libraries...
Scientific communities are increasingly exposing information and tools as online services in an effort to abstract complex scientific processes and large data sets. Clients are then able to access services without knowledge of their internal workings, thereby simplifying the process of replicating scientific research. Taking a service-oriented approach to science (SOS) facilitates reuse, extension,...
In this paper we present novel web services offered by the StrainInfo.net bioportal. This portal integrates information in the domain of microbiology and offers a uniform web interface to a multitude of data providers. By providing web services, the integration results of StrainInfo.net become available for automated processing. Several classes of web services are implemented and some interesting...
Domain scientists synthesize different data and computing resources to solve their scientific problems. Making use of distributed execution within scientific workflows is a growing and promising way to achieve better execution performance and efficiency. This paper presents a high-level distributed execution framework, which is designed based on the distributed execution requirements identified within...
To effectively support real-time monitoring and performance analysis of scientific workflow execution, varying levels of event data must be captured and made available to interested parties. This paper discusses the creation of an ontology-aware workflow monitoring system for use in the Trident system which utilizes a distributed publish/subscribe event model. The implementation of the publish/subscribe...
The cyberenvironment has emerged as a next-generation cyberinfrastructure to support 21st-century scientific research and education. In this paper, we introduce an ongoing cyberenvironment project named e-AIRS, an abbreviation of 'e-science aerospace integrated research system'. e-AIRS is a cyberinfrastructure-based portal system that integrates a set of tools and services to support the aerodynamics...
Simulation, and thus scientific computing, is the third pillar alongside theory and experiment in today's science and engineering. The term e-science evolved as a new research field that focuses on collaboration in key areas of science, using next-generation infrastructures to extend the powers of scientific computing. This paper contributes to the field of e-science as a study of how scientists actually...
Grid resource management tools have evolved from manual discovery and job submission to sophisticated brokering solutions. User requirements have shaped certain properties that resource managers have learned to support. This development is still continuing, yet users already find it difficult to distinguish between brokers and to migrate their applications when they move to a different grid. Moreover, new...
The UK National Grid Service (NGS) is responsible for standardised access to data and compute resources across UK academia, regardless of research area. The NGS has been in production for four years and is currently in its second iteration, with planning for NGS III at an advanced stage. This paper examines the organisational structure of the NGS, which is distributed across four...
Today, two grid concepts essentially dominate: service grids and desktop grids. Service grids offer an infrastructure for grid users and thus require notable management effort to keep the service running. Desktop grids, on the other hand, aim to utilize the free CPU cycles of cheap desktop PCs and are easy to set up, but their availability to users is limited compared to service grids. The aim of the EDGeS...