Sometimes parents don't have the resources or time to attend to their young ones, who may have certain predispositions. This document demonstrates the construction of a web service/module, defines the algorithm, describes the procedure used to construct it, and analyzes the results of the procedures performed. The market for this system is working-class nuclear families or single parents that...
Computing machine learning models in the cloud remains a central problem in big data analytics. In this work, we introduce a cloud analytic system exploiting a parallel array DBMS based on a classical shared-nothing architecture. Our approach combines in-DBMS data summarization with mathematical processing in an external program. We study how to summarize a data set in parallel assuming a large number...
With the rapid development of Web applications, the demand for dynamically adjusting computing resources based on load variation is increasing. However, most traditional Web systems have limited ability to respond to load changes. To solve this problem, software self-adaptation technology has been applied to the resource management of Web systems. Many researchers have tried to...
Hardware failures in cloud data centers may cause substantial losses to cloud providers and cloud users. Therefore, the ability to accurately predict when failures occur is of paramount importance. In this paper, we present FailureSim, a simulator based on CloudSim that supports failure prediction. FailureSim obtains performance-related information from the cloud and classifies the status of the hardware...
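The abstract does not include FailureSim's actual classification logic. As a purely illustrative sketch of the general idea of classifying hardware status from performance metrics, a minimal threshold-based classifier might look like the following (all metric names and thresholds are assumptions, not taken from the paper):

```python
# Hypothetical sketch, not FailureSim's code: classify one host sample's
# status from performance metrics using simple, assumed thresholds.
def classify_host(cpu_util, mem_util, io_errors):
    """Return 'failing', 'degraded', or 'healthy' for one host sample."""
    if io_errors > 10 or cpu_util > 0.95:
        return "failing"    # clear anomaly: I/O faults or saturated CPU
    if mem_util > 0.90 or cpu_util > 0.85:
        return "degraded"   # elevated load, worth watching
    return "healthy"

print(classify_host(0.40, 0.50, 0))   # prints "healthy"
print(classify_host(0.99, 0.50, 0))   # prints "failing"
```

In practice such a classifier would be trained on labeled monitoring traces rather than hand-set thresholds, but the input/output shape (metrics in, status label out) is the same.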
A primary challenge for the cyberinfrastructure research community is the need to define the Platforms for Science beyond 2020. We analyze major current trends and propose that, in order to deliver the Platform for Science in 2020, the dominant research challenge is to manage the convergence of the capabilities of traditional HPC systems with the richness of Apache Big Data systems. In this vision paper, we...
The REliable CApacity Provisioning and enhanced remediation for distributed cloud applications (RECAP) project aims to advance cloud and edge computing technology, to develop mechanisms for reliable capacity provisioning, and to make application placement, infrastructure management, and capacity provisioning autonomous, predictable and optimized. This paper presents the RECAP vision for an integrated...
Finding the best model to reveal potential relationships in a given set of data is not an easy job and often requires many iterations of trial and error for model selection, feature selection, and parameter tuning. This problem is greatly complicated in the big data era, where I/O bottlenecks significantly slow down the search for the best model. In this article, we examine the case...
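The trial-and-error loop over candidate models that this abstract describes can be sketched in a few lines. The example below is illustrative only (synthetic data, polynomial degree as the "model" being selected): fit each candidate on a training split, score it on a validation split, and keep the one with the lowest error.

```python
import numpy as np

# Synthetic data with a known quadratic ground truth plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x**2 - x + 0.1 * rng.standard_normal(50)

# Simple train/validation split (every other point).
x_train, y_train = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

def validation_error(degree):
    """Fit a polynomial of the given degree and return validation MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_val)
    return float(np.mean((pred - y_val) ** 2))

# The trial-and-error loop: try each candidate model, keep the best.
errors = {d: validation_error(d) for d in range(1, 6)}
best_degree = min(errors, key=errors.get)
print(best_degree, errors[best_degree])
```

On big data, each `validation_error` call implies a full pass over the data set, which is exactly where the I/O bottleneck the abstract mentions multiplies across iterations.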
A major challenge in Cloud computing is resource provisioning for computational tasks. Not surprisingly, previous work has established a number of solutions to provide Cloud resources in an efficient manner. However, in order to realize a holistic resource provisioning model, a prediction of the future resource consumption of upcoming computational tasks is necessary. Nevertheless, the topic of prediction...
Against the background of cyber-physical systems and Industry 4.0, intelligent manufacturing has become an orientation and produced a revolutionary change. Compared with traditional manufacturing environments, intelligent manufacturing is characterized by high correlation, deep integration, dynamic integration, and huge volumes of data. Accordingly, it still faces various challenges....
Nowadays MapReduce and its open-source implementation, Apache Hadoop, are the most widespread solutions for handling massive datasets on clusters of commodity hardware. At the expense of somewhat reduced performance in comparison to HPC technologies, the MapReduce framework provides fault tolerance and automatic parallelization without any effort from developers. Since in many cases Hadoop is adopted...
Cloud computing is one of the most admired paradigms of the current era, facilitating users with on-demand and pay-as-you-use services. It has tremendous applications in almost every sphere, such as education, gaming, social networking, transportation, medicine, business, the stock market, pattern matching, etc. The stock market is an industry where lots of data is generated and benefits are...
The Hadoop framework has recently been adapted by the video analytics community for intensive, distributed video processing and storage. However, the challenge is to estimate the amount of resources required in such an environment to fulfil a user's requirements and constraints. Therefore, it is important to understand how to model the performance of a Hadoop-based implementation...
The process of scientific discovery is traditionally assumed to be entirely executed by humans. This article highlights how increasing data volumes and human cognitive limits are challenging this traditional assumption. Relevant examples are found in observational astronomy and geoscience, disciplines that are undergoing transformation due to growing networks of space-based and ground-based sensors...
The machine learning (ML) approach to modeling and predicting real-world dynamic system behaviours has received widespread research interest. While ML's capability in approximating any nonlinear or complex system is promising, it is often a black-box approach, which lacks the physical meaning of the actual system structure and its parameters, as well as their impacts on the system. This paper establishes...
As data become big and complex, it is more challenging for data scientists to extract useful information in a timely fashion. Although many tools and packages are available to them, it is crucial to have a highly productive and scalable big data analytics platform for carrying out their daily work. The objective of our work is to build such a productive data analytics cloud platform by...
Growing amounts of data will be one consequence of Industry 4.0. This paper deals with mining frequent patterns and important factors in data. Classification is one of the most common tasks in data analytics. We used the letter recognition data set from the UCI repository for our experiment. The data set contains more than 20000 instances of 26 classes. In our case, it represents a multi-class...
In this paper, we propose a novel pruning model of deep learning for large-scale distributed data processing, simulating a potential application in the geographical neighborhood of the Internet of Things. We formulate a general model of pruning learning, and we investigate the procedure of pruning learning to satisfy hard and soft constraints. The hard constraint is a class of non-flexible setting...
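The abstract's own hard/soft-constraint formulation is not reproduced here, but pruning in general can be illustrated with the common magnitude-based variant: zero out the smallest-magnitude weights until a target sparsity is reached. The function name and data below are illustrative assumptions.

```python
import numpy as np

# Illustrative magnitude-based pruning (a standard technique, not the
# paper's specific model): zero out at least `sparsity` fraction of the
# smallest-magnitude weights.
def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([0.5, -0.05, 0.3, 0.01, -0.8, 0.02])
pw = prune_by_magnitude(w, 0.5)
print(pw)  # the three smallest-magnitude weights are zeroed
```

A hard sparsity constraint would enforce the target fraction exactly at every step, whereas a soft constraint would instead penalize non-zero weights in the training objective.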
Over the past decade, advances in laser technology have brought about an increase in the maximum achievable laser intensity of six orders of magnitude. At the same time, the pulse duration has been considerably shortened. The interaction of such ultrashort and intense laser pulses with solid targets and dense plasmas is a rapidly developing area of physics. Hence, there is a growing interest in characterizing as accurately as...
Elastic architectures and the "pay-as-you-go" resource pricing model offered by many cloud infrastructure providers may seem the right choice for companies dealing with data-centric applications characterized by highly variable workloads. In such a context, in-memory transactional data grids have been demonstrated to be particularly well suited to exploiting the advantages provided by elastic computing...
Given the elasticity, dynamicity and on-demand nature of the cloud, cloud-based applications require dynamic models for Quality of Service (QoS), especially when the sensitivity of QoS tends to fluctuate at runtime. These models can be autonomically used by the cloud-based application to correctly self-adapt its QoS provision. We present a novel dynamic and self-adaptive sensitivity-aware QoS modeling...