The feasibility of large-scale decentralized networks for local computations, as an alternative to big data systems that are often privacy-intrusive, expensive, and serve exclusively corporate interests, is usually called into question by network dynamics such as node departures, failures, and rejoins. This is especially the case when decentralized computations performed in a network, such as the estimation...
Scientific workflows have become a popular computational model in a variety of application domains, such as astronomy, material science, physics, and biology. As scientific applications move to the cloud to take advantage of the elasticity of resources and service level agreements, there have been a number of recent research efforts on cloud-based workflow systems that support various types of...
We present a method for developing executable algorithms for quantitative cyber-risk assessment. Exploiting techniques from security risk modeling and actuarial approaches, the method pragmatically combines the use of available empirical data and expert judgments. The inputs to the algorithms are indicators providing information about the target of analysis, such as suspicious events observed in the network...
Traffic anomalies can create network congestion, so their prompt and accurate detection allows network operators to make decisions that guarantee network performance and prevent services from experiencing perturbations. In this paper, we focus on origin-destination (OD) traffic anomalies; to detect them efficiently, we study two different anomaly detection methods based on data analytics and combine...
Model-based simulation and monitoring are becoming part of advanced learning environments. In this paper, we propose a model-based simulation and monitoring framework for management of learning assessment and we describe its architecture and main functionalities. The proposed framework allows user-friendly learning simulation with a strong support for collaboration and social interactions. Moreover,...
It has been shown that up to 64 percent of personal computers in office buildings are left running after hours. Enabling power management options such as sleep mode is a straightforward method to reduce the energy consumption of computers. However, choosing the right timeout can be challenging. A sleep timeout that is too short leads to discomfort, whereas a timeout that is too long results...
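The timeout tradeoff in the abstract above can be illustrated with a toy cost model. All numbers and the cost model itself (power draws, idle-period trace, a fixed "annoyance" penalty for waking a sleeping machine) are illustrative assumptions, not taken from the paper:

```python
def expected_cost(timeout_min, idle_periods_min, active_power_w=60.0,
                  sleep_power_w=2.0, annoyance_cost_wh=30.0):
    """Average cost (in watt-hours) per idle period for a given sleep timeout.

    An idle period shorter than the timeout never triggers sleep, so it pays
    full active power the whole time. A longer period pays active power until
    the timeout, sleep power afterwards, plus a fixed penalty standing in for
    the discomfort of having to wake the machine up.
    """
    total = 0.0
    for idle in idle_periods_min:
        if idle <= timeout_min:
            total += active_power_w * idle / 60.0           # machine never slept
        else:
            total += active_power_w * timeout_min / 60.0    # awake until timeout
            total += sleep_power_w * (idle - timeout_min) / 60.0
            total += annoyance_cost_wh                      # user must wake it
    return total / len(idle_periods_min)

# Hypothetical idle-period trace in minutes: many short breaks, a few long ones.
trace = [5, 10, 15, 240, 480, 8, 3, 600]
for timeout in (5, 30, 120):
    print(timeout, round(expected_cost(timeout, trace), 1))
```

Sweeping the timeout over a recorded idle-period trace like this is one simple way to see why neither a very short nor a very long timeout minimizes the combined energy-plus-discomfort cost.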
Sensor data is extremely important for monitoring machines at the shop-floor level and their surrounding environmental conditions for condition-based monitoring, machine diagnosis, and process adaptation to new requirements. Within this scope, self-diagnostic and self-organizing capabilities are core functionalities of any Industrial Wireless Sensor Network (IWSN). In the present work, a simulated...
In this paper, we present Intercom, a simulator framework that provides separate components to address the interdependent aspects of IoT systems, such as sensing, physical interaction, wireless communication, and computation. We initially evaluate a scalable sensing and communication model, which simulates wireless signal strength measurements with an average error of 6.1 dBm.
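A common way to simulate wireless signal strength measurements is the log-distance path loss model with log-normal shadowing; the abstract does not say which model Intercom uses, so the sketch below is a generic illustration with assumed parameter values:

```python
import math
import random

def rssi_dbm(distance_m, tx_power_dbm=0.0, path_loss_exp=2.7,
             ref_loss_db=40.0, shadowing_sigma_db=4.0, rng=random):
    """Simulated RSSI via the standard log-distance path loss model.

    RSSI = Ptx - PL(d0) - 10*n*log10(d/d0) + X_sigma, with reference distance
    d0 = 1 m, path loss exponent n, and Gaussian shadowing X_sigma. All
    default parameter values here are illustrative, not taken from the paper.
    """
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    path_loss = ref_loss_db + 10.0 * path_loss_exp * math.log10(distance_m)
    shadowing = rng.gauss(0.0, shadowing_sigma_db)
    return tx_power_dbm - path_loss - shadowing
```

Comparing such a model's output against real measurements is how an average error figure like the 6.1 dBm quoted above would typically be obtained.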
Service Level Agreement (SLA) is gaining more and more interest, since the dynamic nature of cloud computing can adversely influence the guarantee of Quality of Service (QoS). Proving an SLA violation is a complex operation for the cloud consumer. This task becomes even more difficult for consumers as they use services from multiple providers, each with its own monitoring...
Nowadays, with the increasing burst of newly generated data every day, as well as the vastly expanding needs of the corresponding data analyses, grand challenges have been brought to big data computing platforms. Computing resources in a single cluster are often unable to fulfill the required computing capability. Requests for distributed computing resources are rising dramatically. In addition, with...
With the rapid development of cloud computing, more and more application providers are deploying their applications in the cloud in order to be free from the burden of system administration. Meanwhile, information spreads explosively, which can unexpectedly make some applications popular within a short period of time. Thus these applications in the cloud may encounter sudden traffic increases or...
During the last decade, the integration of smart devices into humans' lives has witnessed an exponential increase. This has led to a new paradigm, namely Cyber-Physical-Social Systems (CPSS), which consist of cyber components (computer systems), physical components (controlled objects), and social components (humans and their interactions). Although social components play a main role in CPSS, their...
Cloud Computing, as a distributed computing paradigm, consists in provisioning infrastructure, software, and platform resources as services. This paradigm is being increasingly used for the deployment and execution of service-based applications. To efficiently manage them according to the autonomic computing paradigm, service-based applications can be associated with autonomic manager (AM) components that monitor,...
The traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists in processing data close to where it was generated. To this end, the Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex...
Industrie 4.0 introduces the concept of digitalized production, allowing agile and flexible integration of new business models while maintaining manufacturing costs and efficiency at a reasonable level. In addition, cloud computing is one of the IT trends used nowadays to offer services on demand from a virtual environment in enterprise and office areas. The use of cloud computing in an...
This paper describes the extension of the work started in the first Cloud Computing session at WETICE 2009. A new computing paradigm using distributed intelligent managed elements (DIME) and the DIME network architecture, introduced at WETICE 2010, is used to demonstrate a globally interoperable public and private cloud network deploying cloud-agnostic workloads. The workloads are cognitive and capable of...
Detecting anomalous behaviors of cloud platforms is one of the critical tasks for cloud providers. Every anomalous behavior potentially causes incidents, especially undetected and/or unknown issues, which severely harm their SLA (Service Level Agreement). Existing solutions generally monitor cloud platforms at different layers and then detect anomalies based on rules or learning algorithms over the monitoring...
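Rule-based detection over monitoring metrics, as mentioned in the abstract above, can be illustrated with a sliding-window z-score check. This is a generic toy detector, not the paper's method; the window size and threshold are assumed values:

```python
from collections import deque

class ZScoreDetector:
    """Toy rule-based anomaly detector over a sliding window of one metric.

    Flags a sample when it deviates from the window mean by more than
    `threshold` standard deviations. Purely illustrative of rule-based
    detection on monitoring data.
    """
    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 2:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > self.threshold * std
        self.window.append(value)
        return anomalous
```

In practice each monitored layer (hardware, hypervisor, service) would feed its own metrics through detectors like this, with learned rather than fixed thresholds.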
In this paper, we introduce a runtime monitoring method for Actor-based programs and present a Scala module that realizes the proposed method. The primary characteristic of our method is that it supports asynchronous message passing based on the Actor model. Moreover, the module does not require specialized languages for describing the application properties to be monitored. Once a developer incorporates...
In any organization, the process of acquiring a product or service is extremely important, and if not done properly it can cause considerable damage. Software acquisition is no different and can likewise cause considerable problems, such as delays in existing contracts or even dependence on the company contracted to code the product. Thus, the objective of this work is to provide a software...
In this paper, we develop a human-social-intelligence-inspired population-based optimization algorithm called the Higher Order Cognitive Optimization (HOCO) algorithm. Each individual in HOCO possesses human-like characteristics such as decision-making ability, self/social awareness, self/social belief, shared information processing, and self-regulation. These characteristics are modeled as...