Cloud storage technology is an important research direction in the field of cloud computing. Due to privacy-leakage and security concerns, it is difficult for organizations holding core data (such as innovative enterprises and the military) to make extensive use of public cloud storage services. In this paper, a secure private cloud storage system, VI-PCS, based on a virtual isolation mechanism is put forward. The system...
In this paper, a throughput-aware transient fault detection method is presented with respect to the features of server processors. The proposed method combines the advantages of reconfigurable redundant-execution-based fault detection and speculative fault detection. The reconfigurable redundant-execution-based fault detection method, by means of a configuration manager module, couples two free...
Dynamic software updating (DSU) is a technique for updating running software systems without stopping them. Most existing approaches require programmer participation to guarantee the correctness of a dynamic update. However, manually preparing dynamic updates is error-prone and time-consuming. Therefore, other approaches prefer to aggressively perform updates without programmer intervention,...
The problem of solving complex and demanding production-engineering tasks is discussed. The complexity of the methods used to solve such tasks requires significant computational resources. It is proposed to combine widespread personal computers into a single computer network on the basis of Grid technology. However, a non-trivial aspect is the choice of an effective structure...
A new comprehensive optimization model based on queueing theory is proposed for resource allocation in cloud computing. Response-time reliability is considered in the proposed model, which our case study shows to be essential. Using the optimization model, the optimal cloud resource allocation strategy is obtained by minimizing energy consumption subject to constraints on the average response...
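The excerpt does not give the paper's actual model, but the general pattern it describes — minimizing energy subject to a response-time constraint — can be illustrated with a minimal sketch. The M/M/1 approximation, the even load split, and all parameter names below are illustrative assumptions, not the authors' formulation.

```python
def avg_response_time(arrival_rate, service_rate, servers):
    """Mean response time when load is split evenly across identical
    servers, each modeled as an M/M/1 queue (a simplification; the
    paper's queueing model may differ)."""
    per_server = arrival_rate / servers
    if per_server >= service_rate:
        return float('inf')  # queue is unstable at this load
    return 1.0 / (service_rate - per_server)

def min_energy_allocation(arrival_rate, service_rate, power_per_server,
                          max_response_time, max_servers=100):
    """Smallest number of servers (hence lowest energy) that still meets
    the response-time constraint; returns (servers, total_power)."""
    for n in range(1, max_servers + 1):
        if avg_response_time(arrival_rate, service_rate, n) <= max_response_time:
            return n, n * power_per_server
    return None  # constraint cannot be met within max_servers
```

For example, with 90 requests/s, a 10 requests/s service rate per server, and a 0.5 s response-time bound, the sketch settles on 12 servers rather than the 9 that raw throughput alone would suggest — which is the point the abstract makes about response time being an essential constraint.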
In this paper, we propose a new approach for cross-layer electromigration (EM) induced reliability modeling and optimization at the physics, system, and datacenter levels. We consider a recently proposed physics-based EM reliability model to predict the EM reliability of full-chip power grid networks for long-term failures. We show how the new physics-based dynamic EM model at the physics...
Today's IT services are moving to the cloud computing environment in order to process client requests and provide services effectively. In this setting, reliability is a major factor in reducing the load on storage and network resources. Traditional methods rely on fault tolerance through VM replication and on storing checkpoint images in a neighboring server. Replication increases cost for a large system, and checkpoint...
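The trade-off the abstract alludes to — checkpointing too often wastes time, too rarely loses work — has a well-known first-order answer in Young's approximation for the optimal checkpoint interval. This is a standard result, not the paper's own scheme (which the excerpt does not specify); the parameter names are illustrative.

```python
import math

def young_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's first-order approximation of the optimal interval between
    checkpoints: sqrt(2 * C * MTBF), where C is the time to write one
    checkpoint and MTBF is the mean time between failures (same units)."""
    return math.sqrt(2 * checkpoint_cost * mtbf)
```

With a 30 s checkpoint cost and a one-day MTBF, the sketch suggests checkpointing roughly every 38 minutes; halving the checkpoint cost shortens the interval only by a factor of sqrt(2), which is why reducing checkpoint image size pays off less than intuition suggests.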
Through an analysis of the storage-system requirements of supercomputers, this paper designs a near-line storage system called NLSS based on the combination of HDFS (Hadoop Distributed File System) and ZFS (Zettabyte File System). NLSS uses fat storage nodes (large storage servers) to build near-line storage clusters based on HDFS, and uses the ZFS file system to further enhance HDFS. NLSS effectively reduces...
In P2P-based content distribution, an overlay network is constructed by peers that reside in the user domain rather than the service provider domain, and each peer shares data with the others. Because P2P networking is unmanaged, it tends, despite its advantages, toward inefficiency caused by ignorance of the underlying network and toward unfairness caused by the absence of incentives for contributing...
This paper presents a novel approach to a comprehensive analysis of various simulation-based tools for testing and measuring the performance, scalability, robustness, and complexity of cloud datacenters. A cloud computing infrastructure contains different datacenter resources such as virtual machines, CPU, RAM, SANs, and LAN and WAN topologies. The server machines need to be analyzed for their extent of...
Nowadays, cloud storage offers users a convenient way to store their files, but it also introduces threats to those files. At the same time, the cloud storage provider worries about being extorted by a malicious user. In this work, we study the problem of protecting a cloud storage provider from extortion by a malicious user while enabling a user to retrieve his files from the cloud storage...
This paper describes a method to address the complexity of the distributed environment in cloud technology and the single-point-of-failure problem. We take virtual machine failures and host failures in OpenStack into consideration. Fast restoration of service is achieved through the OpenStack component called Ceilometer and a new component named Senlin. The function...
The advent of Big Data has brought many challenges and opportunities in distributed systems, which have only amplified with the rate of growth of data. There is a need to rethink the software stack for supporting data-intensive computing and big data analytics. Over the past decade, data analytics applications have turned to finer-grained tasks that are shorter in duration and far greater in number...
With the increasing application of big data computing on large-scale cloud platforms, virtual machines (VMs) are utilized to provide flexibility and availability for big data information systems. Energy-efficient VM management and distribution on cloud platforms has become an important research subject. Mapping VMs "correctly" onto PMs (physical machines) requires knowing the capacity of...
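VM-to-PM mapping of the kind this abstract describes is commonly framed as a bin-packing problem. The excerpt does not state the paper's algorithm, so the sketch below shows only the classic first-fit-decreasing heuristic on a single resource dimension (CPU demand); real placers also weigh memory, network, and energy.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Place VM resource demands onto physical machines, largest first,
    opening a new PM only when no existing one has room. Returns a list
    of PMs, each a list of the demands placed on it. Single-dimension
    illustration only -- not the paper's method."""
    pms = []
    for demand in sorted(vm_demands, reverse=True):
        for pm in pms:
            if sum(pm) + demand <= pm_capacity:
                pm.append(demand)  # fits on an already-open PM
                break
        else:
            pms.append([demand])  # no PM fits: power on a new one
    return pms
```

Fewer open PMs means more machines can be powered down, which is the energy lever the abstract refers to: packing demands [4, 8, 1, 4, 2, 1] onto capacity-10 machines needs only two PMs instead of the three a naive in-order first-fit would open.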
Nowadays, with the rapid development of distributed computing, message distribution has become more and more important to distributed systems. Kafka was developed to collect and distribute massive volumes of messages for distributed systems. It is a high-throughput distributed messaging system that maintains stable performance even when processing millions of messages per second. Messages are persistent...
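The core abstraction behind the throughput Kafka achieves is a partitioned, offset-addressed append-only log. The toy model below is not the Kafka API — class and method names are invented for illustration — but it captures the two ideas the abstract leans on: a message's key hashes to a partition, and consumers address messages by (partition, offset).

```python
class PartitionedLog:
    """Toy model of a Kafka-style partitioned log (illustrative only).
    Messages with the same key land in the same partition, preserving
    per-key ordering; offsets within a partition grow monotonically."""

    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Key hash picks the partition, so one key's messages stay ordered.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def consume(self, partition, offset):
        # Reads are by position, not by removal: the log is retained,
        # so many consumers can replay it independently.
        return self.partitions[partition][offset]
```

Because each partition is an independent sequential log, partitions can be spread across brokers and appended to in parallel — the design choice behind the stable throughput the abstract mentions.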
Machine reassignment is a challenging problem for constraint programming (CP) and mixed integer linear programming (MILP) approaches, especially given the size of data centres. The multi-objective version of the Machine Reassignment Problem is even more challenging, and it seems unlikely that CP or MILP can obtain good results in this context. As a result, the first approaches to address this problem...
The Intelligent Generation Control (IGC) system, based on an intraday rolling schedule, performs dynamic rolling adjustment of the day-ahead generation schedule, taking full advantage of the day-ahead schedule, the grid's real-time situation, grid constraints, and various optimization goals. The system enables friendly, intelligent interaction among power plants, dispatchers, and optimization systems...
Migrating to the cloud is becoming a necessity for the majority of businesses. Cloud tenants require certain levels of performance in aspects such as high availability, service rate, and deployment options. Cloud providers, on the other hand, are in constant pursuit of a system that satisfies client demands for resources, maximizes availability, minimizes power consumption and, in turn, minimizes the...
Over the past ten years, cloud computing has gradually matured and become a research hotspot. OpenStack, developed by Rackspace and NASA, is currently one of the most popular open-source cloud platforms. It provides toolkits to deploy a cloud environment and implement cloud infrastructure in the manner of Amazon EC2. In this paper, we describe the OpenStack architecture in detail and introduce its...