Cloud computing is Internet-based computing used to deliver services. The major components needed to establish a cloud are distributed systems, service-oriented computing, Web 2.0, virtualization, and utility computing. The integration of these components is required to make data available anytime and anywhere. Migration of virtual machines is a key issue in managing a heterogeneous cloud for load balancing. The...
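As a rough illustration of the load-balancing role of VM migration (a generic greedy heuristic, not this paper's method; the host names, VM loads, and the 0.8 utilization threshold below are made up), one can migrate the smallest VM from the most loaded host to the least loaded one:

    # Hedged sketch: greedy VM-migration heuristic for load balancing.
    # Host names, VM loads, and the 0.8 threshold are illustrative only.
    def pick_migration(hosts):
        """hosts: dict mapping host name -> list of VM CPU loads."""
        load = {h: sum(vms) for h, vms in hosts.items()}
        src = max(load, key=load.get)           # most loaded host
        dst = min(load, key=load.get)           # least loaded host
        if load[src] <= 0.8 or not hosts[src]:  # nothing worth rebalancing
            return None
        vm = min(hosts[src])                    # smallest VM: cheapest to move
        if load[dst] + vm < load[src] - vm:     # move only if it narrows the gap
            return (vm, src, dst)
        return None

    print(pick_migration({"h1": [0.5, 0.4, 0.3], "h2": [0.2]}))
    # -> (0.3, 'h1', 'h2')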
Time-varying loads introduce errors into the estimated model parameters of service-level predictors in computer networks. A load-adjusted modification of a traditional, unadjusted service-level predictor, based on source separation, is contributed. It mitigates these errors and improves service-quality predictions for video-on-demand by $\approx 0.6$–2 dB.
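Since the truncated abstract does not define the dB figure, one common reading (an assumption here, not a statement of the paper's metric) is a log-ratio of prediction errors between the unadjusted and load-adjusted predictors:

\[ \Delta_{\mathrm{dB}} = 10 \log_{10} \frac{\mathrm{MSE}_{\text{unadjusted}}}{\mathrm{MSE}_{\text{adjusted}}} \approx 0.6\text{--}2~\mathrm{dB}. \]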
The expansion of computer-system memory capacity has not kept pace with the growth in memory requirements of large-memory applications. In addition, big-memory systems have been too expensive for many researchers and students. Therefore, approaches that utilize remote memory have been considered a cost-effective way to run large-memory applications in cluster environments where...
Stream processing is a compute paradigm that promises safe and efficient parallelism. Realizing it requires optimizing multiple parameters, such as kernel placement and communication. Most techniques for optimizing streaming systems use queueing network models or network flow models, which often require estimates of the execution rate of each compute kernel. This is known as the non-blocking...
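For context on why such models need per-kernel execution rates, a first-order estimate (a generic M/M/1 sketch, not the paper's model; the kernel names and rates are invented) treats each kernel as a queue and the slowest kernel as the throughput bound:

    # Hedged sketch: first-order queueing estimate for a linear streaming
    # pipeline. Service rates (items/s) are illustrative, not measured.
    service_rate = {"decode": 900.0, "filter": 400.0, "encode": 650.0}

    bottleneck = min(service_rate, key=service_rate.get)
    max_throughput = service_rate[bottleneck]   # pipeline is rate-limited here

    arrival_rate = 350.0                        # offered load, items/s
    for kernel, mu in service_rate.items():
        rho = arrival_rate / mu                 # utilization, must stay < 1
        queue_len = rho / (1 - rho)             # M/M/1 mean queue length
        print(f"{kernel}: utilization={rho:.2f}, mean queue={queue_len:.2f}")

    print(f"bottleneck: {bottleneck} at {max_throughput} items/s")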
While many cloud providers today offer powerful computing infrastructure as a service, and enterprises are already making routine use of it, the adoption of cloud computing for engineering and scientific applications is lagging behind. Despite the many benefits cloud resources provide, reasons for this slow adoption are many: complex access to clouds, inflexible software licensing, time-consuming...
Accelerators such as graphics processing units (GPUs) provide an inexpensive way of improving the performance of cluster systems. In such an arrangement, the individual nodes of the cluster are directly connected to one or more accelerator devices via PCI Express. This results in a static mapping of accelerators onto compute nodes, where each accelerator can only be accessed from exactly one compute...
Transparent computing separates computation and storage onto different machines through a storage virtualization mechanism. In existing implementations, the user operating system must be modified to achieve storage virtualization. This paper presents a virtual machine-based network storage system for transparent computing that uses a virtualized device model in the service operating system to redirect...
This paper presents the OpenCL Remote framework, which extends the native OpenCL platform model to network scale and exploits native OpenCL support for heterogeneous computing. OpenCL Remote boosts performance by distributing computation over the network to many compute devices in parallel.
Cloud Computing provides an optimal infrastructure to utilise and share both computational and data resources whilst allowing a pay-per-use model, useful to cost-effectively manage hardware investment or to maximise its utilisation. Cloud Computing also offers transitory access to scalable amounts of computational resources, something that is particularly important due to the time and financial constraints...
The idea behind cloud computing is to deliver Infrastructure-, Platform-, and Software-as-a-Service (IaaS, PaaS, and SaaS) over the network under an easy pay-per-use business model. In this paper, we present our work, Virtual Cluster as a Service (ViteraaS), which provides on-demand high-performance computing in a private cloud for research projects as well as for e-Learning and teaching purposes. Moreover, ViteraaS...
Managing the large volumes of data produced by emerging scientific and engineering simulations running on leadership-class resources has become a critical challenge. The data has to be extracted off the computing nodes and transported to consumer nodes so that it can be processed, analyzed, visualized, archived, etc. Several recent research efforts have addressed data-related challenges at different...
An automatic virtual metrology framework (AVMF) for the TFT-LCD industry is designed and implemented in this paper. The AVMF provides capabilities for creating VM models, deploying and refreshing VM models, remotely monitoring and managing VM systems, centrally storing model data and conjectured results, and providing various user-friendly graphical user interfaces. Pluggable interfaces and functional modules...
As data sizes continue to increase, the concept of active storage is well suited to many data-analysis kernels. Nevertheless, while this concept has been investigated and deployed in a number of forms, enabling it from the parallel I/O software stack has been largely unexplored. In this paper, we propose and evaluate an active storage system that allows data analysis, mining, and statistical operations...
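To illustrate the general active-storage idea (a generic sketch, not the proposed system's interface; the function names are hypothetical), the reduction runs where the data lives so only a scalar crosses the I/O path:

    # Hedged sketch of active storage: compute near the data, ship back only
    # the small result. All names here are illustrative.
    def storage_side_sum(path):
        """Runs on the storage node: streams the file, returns one number."""
        total = 0.0
        with open(path) as f:
            for line in f:
                total += float(line)
        return total  # a few bytes cross the network instead of the whole file

    def client_side_sum(read_remote_file, path):
        """Traditional approach: pull every byte to the compute node first."""
        data = read_remote_file(path)  # full data transfer
        return sum(float(line) for line in data.splitlines())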
Graphics processing units (GPUs) have evolved into extremely powerful and flexible processors, making them an attractive platform for high-performance computing due to their extremely high floating-point performance, huge memory bandwidth, and comparatively low cost. This paper proposes a new platform named PConG for pervasive computing. We describe the design and implementation...
Fair-share scheduling attempts to grant access to a resource based on the amount of "share" that a task possesses. It is widely used in places such as Internet routing and, recently, in the Linux kernel. Software performance engineering is concerned with creating responsive applications and often uses modeling to predict the behaviour of a system before it is built. This work extends the...
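To make the "share" notion concrete (a minimal stride-style sketch of fair-share scheduling in general, not the specific extension this work develops; task names and shares are made up), each task advances a virtual-time pass inversely proportional to its share:

    # Hedged sketch: stride-style fair-share scheduling.
    import heapq

    def fair_share_run(shares, slices):
        """shares: dict task -> share weight; returns the slice order."""
        STRIDE1 = 10000.0
        heap = [(0.0, task) for task in shares]   # (pass value, task)
        heapq.heapify(heap)
        order = []
        for _ in range(slices):
            pass_val, task = heapq.heappop(heap)  # smallest pass runs next
            order.append(task)
            # larger shares advance more slowly, so they run more often
            heapq.heappush(heap, (pass_val + STRIDE1 / shares[task], task))
        return order

    print(fair_share_run({"A": 3, "B": 1}, 8))
    # A receives ~3x the slices of B, matching its share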
Virtualization technology has attracted much attention in recent years. This paper describes the vision and mission of ChinaV, which is the national fundamental research program for virtualization technology in China. Furthermore, related topics about single host virtualization, multiple VM management schemes and desktop virtualization will be introduced. We first describe a remote memory virtualization...
This paper presents research work on designing the architecture of a translation server for an Internet-based machine translation system using grid infrastructure. The translation server can access a LAN (or WAN) whose spare computing resources can be employed to accomplish the massive translation workload generated by overwhelming user requests. According to the characteristics...
The design of embedded control systems should be addressed in terms of both the controller definition and its implementation. While the design of the controller is based on control theory, the implementation is designed under the principle that control loops can be modeled and implemented as periodic activities. These periodic activities can be organised according to different implementation criteria....
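As a concrete rendering of the periodic-activity principle (a minimal sketch; the 20 ms period and the empty controller body are placeholders, not this paper's design), a control loop is released once per period from fixed release times:

    # Hedged sketch: a control loop implemented as a periodic activity.
    import time

    def periodic_control_loop(period_s=0.02, iterations=5):
        next_release = time.monotonic()
        for _ in range(iterations):
            # sample sensors, compute the control law, write actuators
            u = 0.0                          # placeholder control action
            next_release += period_s         # fixed releases avoid drift
            delay = next_release - time.monotonic()
            if delay > 0:
                time.sleep(delay)            # wait for the next period
            # a negative delay would indicate a period overrun

    periodic_control_loop()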