Linked data mining has become one of the key problems in HPC graph mining in recent years. However, existing RDF database engines are not scalable and are less reliable in heterogeneous clouds. In this paper we describe the design and implementation of Acacia-RDF, a scalable distributed RDF graph database engine developed in the X10 programming language to address this issue. Acacia-RDF partitions...
Many popular e-commerce applications run on geo-distributed data centers requiring high availability. Fault-tolerant distributed data centers are designed by provisioning spare compute capacity to support the load of a failed data center, apart from ensuring data durability. The main challenge during the planning phase is how to provision spare capacity such that the total cost of ownership (TCO) is...
This paper proposes a distributed processing communication scheme for a real-time network application that provides interactive services for multiple users. In the proposed scheme, the application is processed on a data processing function in the distributed servers. The distributed servers are selected to maximize the number of in-service users as the first priority and to minimize the delay time...
The data explosion in the emerging big data era imposes a heavy burden on the network infrastructure. This vision has urged the evolution of computer networks. By softwarizing traditional dedicated hardware-based functions into virtualized network functions (VNFs) that can run on standard commodity servers, network function virtualization (NFV) technology promises increased networking efficiency, flexibility...
Service assurance for the telecom cloud is a challenging task and is continuously being addressed by academics and industry. One promising approach is to utilize machine learning to predict service quality in order to take early mitigation actions. In previous work we have shown how to predict service-level metrics, such as frame rate for a video application on the client side, from operational data...
Today we witness the fast-growing scale of data generated, stored and processed in the digital world [1], which offers great opportunities to answer questions that people have not been able to solve, or even ask, in the past due to the limitations of technology. However, in provisioning the infrastructure for data-intensive services, people meet various challenges, ranging from user-experienced...
In Software Defined Networking, the role of the centralized controller is paramount. Some SDN controllers adopt a distributed clustering mechanism to maintain normal operation of the whole network. Unfortunately, data loss and lack of consistency are unavoidable in this approach, because many controllers depend on volatile memory to store most of their important data. In case of restarting or...
Servers in data centers consume large amounts of energy, which increases the operational cost for cloud service providers, who spend a major portion of their revenue on energy bills due to inefficient workload assignment and wasted resources. In order to minimize the operational cost of data centers, it is essential to optimize the scheduling of jobs. In this paper, we address the problem of inefficient...
Cloud computing enables numerous ways to offer Web-based computing services that meet diverse needs. However, cloud data security and privacy protection have also become critical issues restraining cloud applications. One of the major security concerns is that cloud operators have a chance to access sensitive data, which dramatically increases users' anxiety...
Database developers all know the ACID acronym. It says that database transactions should be: Atomic, Consistent, Isolated, and Durable. These qualities seem indispensable, and yet they are incompatible with availability and performance in very large systems. For example, suppose you run an online book store and you proudly display how many of each book you have in your inventory. Every time someone...
We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify...
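The challenge-response idea in the PDP abstract above can be sketched as follows. This is a deliberately simplified illustration, not the paper's scheme: real PDP uses homomorphic tags so the client keeps only constant metadata, whereas this toy version lets the client recompute the expected digest itself. The nonce prevents the server from caching old answers; all function names here are invented for the example.

```python
import hashlib
import os
import random

BLOCK_SIZE = 4  # bytes per block, tiny just for the example

def split_blocks(data: bytes):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def prove(blocks, indices, nonce: bytes) -> str:
    """Digest over the challenged blocks, bound to a fresh nonce."""
    h = hashlib.sha256(nonce)
    for i in indices:
        h.update(blocks[i])
    return h.hexdigest()

data = b"the quick brown fox jumps over the lazy dog"
client_blocks = split_blocks(data)  # what the client believes was stored
server_blocks = split_blocks(data)  # what the server claims to hold

# Client challenges a small random sample of blocks instead of all of them,
# which is what keeps the I/O cost of each proof low.
nonce = os.urandom(16)
indices = random.sample(range(len(client_blocks)), k=3)

proof = prove(server_blocks, indices, nonce)
assert proof == prove(client_blocks, indices, nonce)  # possession verified
```

Because only a random sample is checked per challenge, a server that silently dropped even a small fraction of blocks is caught with high probability after a few challenges.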
Providing security for data stored on the cloud is one of the important challenges in cloud computing. Encrypted data stored on the cloud may be viewed or modified by the cloud service provider. To overcome this problem many techniques have been developed, but they cannot accurately guarantee the security of the stored data. These modifications of the data by the service provider...
Building on security-policy research in distributed data and information storage, and considering data access patterns, this paper designs a system with a new storage model and query mechanism for distributed platforms and big data. The system can provide data-sharing integrity checks while offering a hot-backup solution, to improve the safety and maintainability of the cluster, and take a viable...
The use of digital applications is on the rise nowadays, and much of the resulting data is processed by a tool called MapReduce. MapReduce has a fixed structure which cannot be modified. While processing such data, skew can occur in both the map and reduce phases. Map skew is easy to mitigate, but reduce-phase skew may take time to resolve. So a methodology is being created to reduce the Reducer...
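One common family of reduce-skew mitigations (offered here as an illustration of the problem the abstract names, not as the paper's own method) is key salting: a hot key is split into several sub-keys so its values spread across reducers, and the partial results are merged afterwards. All names below are hypothetical.

```python
import random
from collections import defaultdict

def map_phase(records, hot_keys, n_splits=4):
    """Emit (key, value) pairs, salting known hot keys into sub-keys."""
    for key, value in records:
        if key in hot_keys:
            key = f"{key}#{random.randrange(n_splits)}"  # e.g. popular#2
        yield key, value

def reduce_phase(pairs):
    """Sum values per (possibly salted) key, as a reducer would."""
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return groups

def merge_salted(groups):
    """Second pass: fold sub-key totals back into the original key."""
    merged = defaultdict(int)
    for key, total in groups.items():
        merged[key.split("#")[0]] += total
    return dict(merged)

# One key dominates the input, which would overload a single reducer.
records = [("popular", 1)] * 100 + [("rare", 1)] * 3
partials = reduce_phase(map_phase(records, hot_keys={"popular"}))
totals = merge_salted(partials)
assert totals == {"popular": 100, "rare": 3}
```

The trade-off is a second aggregation pass in exchange for balanced reducer load, which pays off only for genuinely skewed key distributions.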
A cloud storage system can be thought of as a very large-scale storage system composed of independent storage servers. The service cloud storage provides is that users can store their data remotely over the network, and other authenticated users can access the data easily. The Hadoop Distributed File System is used to store large files reliably and to retrieve those files at very high bandwidth...
Traditional database information and real-time IoT data are now converging into unified paradigms, in which accessing diverse data endpoints can be challenging due to interoperability problems that emerge when instant data are monitored and collected in typical database scenarios. In this paper, a new model is proposed to link persistent data with instantaneous information using publish-subscribe networks...
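The general pattern of linking persistent records to instantaneous updates via publish-subscribe can be sketched as below. This is a minimal illustration of the pattern, not the paper's model; the broker, topic scheme, and `write` helper are all invented for the example.

```python
from collections import defaultdict

class Broker:
    """In-process pub-sub broker: topics map to subscriber callbacks."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

database = {}  # stands in for the persistent store
broker = Broker()

def write(key, value):
    """Update the persistent side, then notify the instantaneous side."""
    database[key] = value
    broker.publish(f"updates/{key}", value)

# A subscriber sees new sensor readings without polling the database.
seen = []
broker.subscribe("updates/sensor1", seen.append)
write("sensor1", 21.5)
assert seen == [21.5] and database["sensor1"] == 21.5
```

Subscribers receive changes push-style at write time, while late joiners can still read the current state from the persistent store, which is the interoperability gap the abstract points at.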
Data security is a major hurdle to the broad adoption of cloud services. In this paper, we introduce a secure data-logging framework to keep track of the genuine usage of end users' information in the Cloud. In particular, a data-centric approach is proposed to enable an enclosed logging process bundled with the user data and policies. Line-by-line file-authentication security is developed...
Fountain-code based cloud storage frameworks provide a reliable online storage solution by placing unlabeled content blocks onto multiple storage nodes. The Luby Transform (LT) code is one of the popular fountain codes for storage systems due to its efficient recovery. However, to guarantee a high decoding success rate for fountain-code based storage, retrieval of extra...
In several NoSQL database systems, among which is HBase, only one index is available for the tables, which is both the row key and the clustered index. Using other indexes does not come out of the box. As a result, row key design is the most important decision when designing tables, because an inappropriate design can have detrimental consequences for performance and cost. Particular row key...
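Two well-known row-key design techniques for HBase-like lexicographically sorted stores can be sketched as follows. These are standard patterns offered as illustration, not necessarily the designs the paper studies, and the key layouts and bucket count are assumptions for the example: salting spreads sequential writes across regions, and timestamp reversal makes the newest rows sort first.

```python
import hashlib

MAX_TS = 10**13  # assumed upper bound on epoch-millisecond timestamps

def salted_key(user_id: str, n_buckets: int = 8) -> str:
    """Prefix the key with a deterministic salt bucket so sequential
    ids scatter across regions instead of hotspotting one of them."""
    salt = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % n_buckets
    return f"{salt}|{user_id}"

def reversed_ts_key(user_id: str, ts_millis: int) -> str:
    """Store (MAX_TS - timestamp), zero-padded, so a lexicographic scan
    from the key prefix returns the newest rows first."""
    return f"{user_id}|{MAX_TS - ts_millis:013d}"

# Sequential user ids land in more than one salt bucket:
buckets = {salted_key(f"user{i:04d}").split("|")[0] for i in range(100)}
assert len(buckets) > 1

# A later event sorts lexicographically *before* an earlier one:
assert reversed_ts_key("u1", 2_000) < reversed_ts_key("u1", 1_000)
```

The cost of salting is that point reads must know (or try) the salt, and range scans over the natural key order must fan out across buckets, which is exactly the kind of performance/cost trade-off the abstract alludes to.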