We review the growing power and capability of commodity computing and communication technologies largely driven by commercial distributed information systems. These systems are built from CORBA, Microsoft’s COM, JavaBeans, and rapidly advancing Web approaches. One can abstract these to a three-tier model with largely independent clients connected to a distributed network of servers. The latter host...
Since the definition of the High Performance Fortran (HPF) standard, we have been maintaining a suite of application kernel codes with the aim of using them to evaluate the available compilers. This paper presents the results and conclusions from this study, for sixteen codes, on compilers from IBM, DEC, and the Portland Group Inc. (PGI), and on three machines: a DEC Alphafarm, an IBM SP-2, and a...
InfoMall is a programme led by the Northeast Parallel Architectures Center (NPAC) featuring a partnership of over twenty-five organisations and a plan for accelerating development of the High Performance Computing and Communications (HPCC) software and systems industry. HPCC (or HPCN as it is known in Europe) is a critical technology which will have unprecedented impact on industry, education, society,...
We describe our InfoMall technology transfer program — a partnership of over sixty commercial, academic and federal organisations working together on HPCN technology development as well as HPCN-enabled business activities. We discuss a selection of some of the project activities being undertaken by InfoMall members and focus on an “Electronic InfoMall” activity in collaboration with the US Air Force...
The surge in the popularity of the World Wide Web (WWW) has coincided with a shrinking market for specialised high performance computers. This paper discusses how, by making use of technology developed for the broad base of the computing pyramid, much of the past decade's work in distributed computing can be realised in the context of the larger WWW market. Not only do these new technologies offer...
There is a class of problems in computational science and engineering which require formulation in full matrix form and which are generally solved as dense matrices, either because they are dense or because the sparsity cannot be easily exploited. Problems such as those posed by computational electromagnetics, computational chemistry and some quantum physics applications frequently fall into this...
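As a minimal illustration of the kind of computation the abstract above describes (the matrix below is a hypothetical stand-in, not taken from the paper): a boundary-integral-style discretisation couples every unknown to every other, so the resulting system is dense and is typically solved by direct factorisation, whose O(n^3) cost is what drives demand for high performance machines.

```python
import numpy as np

# Hypothetical dense system: a kernel matrix in which every unknown
# interacts with every other, as in a method-of-moments discretisation.
n = 200
rng = np.random.default_rng(0)
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
A += n * np.eye(n)            # make the system well conditioned for the demo
b = rng.normal(size=n)

# Direct dense solve (LU factorisation under the hood), O(n^3) work.
x = np.linalg.solve(A, b)
residual = np.linalg.norm(A @ x - b)
```

On a parallel machine the same factorisation is distributed across processors (e.g. block-cyclic layouts), but the dense, all-to-all structure of the matrix is unchanged.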
We discuss the High Performance Fortran data parallel programming language as an aid to software engineering and as a tool for exploiting High Performance Computing systems for computational fluid dynamics applications. We discuss the use of intrinsic functions, data distribution directives and explicitly parallel constructs to optimize performance by minimizing communications requirements in a portable...
Multidimensional Scaling (MDS) is a dimension reduction method for information visualization, which is set up as a non-linear optimization problem. It is applicable to many data-intensive scientific problems, including studies of DNA sequences, but tends to get trapped in local minima. Deterministic Annealing (DA) has been applied to many optimization problems to avoid local minima. We apply the DA approach...
FutureGrid provides novel computing capabilities that enable reproducible experiments while simultaneously supporting dynamic provisioning. This paper describes the FutureGrid experiment management framework for creating and executing large-scale scientific experiments for researchers around the globe. The experiments are performed by the various users of FutureGrid, ranging from administrators...
We use two large simulations, the chemical reaction dynamics of H + H2 and the collision of two galaxies, to show that current parallel machines are capable of supercomputer-level calculations. We contrast the different architectural tradeoffs for these problems and draw some implications for future production parallel supercomputers.