Runtime verification can be used to find bugs early, during software development, by monitoring test executions against formal specifications (specs). The quality of runtime verification depends on the quality of the specs. While previous research has produced many specs for the Java API, manually or through automatic mining, there has been no large-scale study of their bug-finding effectiveness....
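As a concrete illustration of monitoring a test execution against a spec, the sketch below hand-codes a monitor for the classic HasNext property of Java iterators (`next()` may only be called after a `hasNext()` that returned true). Tools such as JavaMOP generate monitors like this from formal specs automatically; the class and event names here are illustrative assumptions, not any particular tool's API:

```python
# Minimal sketch of a runtime monitor for the "HasNext" spec:
# next() is only safe after a hasNext() call that returned True.

class HasNextMonitor:
    def __init__(self):
        self.safe = False        # True once hasNext() has returned True
        self.violations = []

    def on_event(self, event, result=None):
        if event == "hasNext":
            self.safe = bool(result)
        elif event == "next":
            if not self.safe:
                self.violations.append(
                    "next() without a preceding hasNext() == True")
            self.safe = False    # must re-check before the next next()

def check_trace(trace):
    """Replay a recorded test execution (a list of events) through the monitor."""
    m = HasNextMonitor()
    for ev in trace:
        m.on_event(*ev)
    return m.violations

# A buggy trace calls next() twice after a single hasNext().
bad = [("hasNext", True), ("next",), ("next",)]
good = [("hasNext", True), ("next",), ("hasNext", True), ("next",)]
print(check_trace(bad))      # one violation recorded
print(check_trace(good))     # no violations
```

The quality issue the abstract raises shows up directly here: a sloppy spec (say, one that never resets `safe`) would silently miss the second `next()` call.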
We present a novel fault detection method for application in component-based robotic systems. In contrast to existing work, our method specifically addresses faults in the robot's software system using a data-driven methodology that exploits the system's inter-process communication. This enables the approach to be applied without expert knowledge or the availability of complex software...
The maturity of hardware virtualization has motivated Communication Service Providers (CSPs) to apply this paradigm to network services. Virtual Network Functions (VNFs) result from this trend and raise new dependability challenges related to network softwarisation that are still not thoroughly explored. This paper describes a new approach to detect Service Level Agreements (SLAs) violations and preliminary...
Allocating resources to virtualized network functions and services to meet service level agreements is a challenging task for NFV management and orchestration systems. This becomes even more challenging when agile development methodologies, like DevOps, are applied. In such scenarios, management and orchestration systems are continuously facing new versions of functions and services which makes it...
Today's science is increasingly driven by collecting and evaluating growing amounts of data. Utilizing Scientific Workflows is one suitable method for organizing processing pipelines for this purpose. In this work, we show that performance improvements on the execution of existing workflows can be achieved if the conditions for starting selected tasks with certain data access characteristics...
Handling dynamically-evolving environments, where unpredictable scenarios, incomplete information and pressure for quick decisions are commonplace, can create great complexity for teams during treatment. The variables considered for undertaking recommended procedures may yield a great number of decision alternatives. Additionally, expectations regarding the response to treatment may not match those actually...
BigDAWG is a polystore database system designed to work with heterogeneous data that may be stored in disparate database and storage engines. A central component of the BigDAWG polystore system is the ability to submit queries that may be executed in different data engines. This paper presents a monitoring framework for the BigDAWG federated database system which maintains performance information on...
We use supervised machine learning algorithms (i.e., Decision Trees, Random Forest, and K-nearest Neighbors) to predict performance characteristics such as runtime and IO traffic of batch jobs on high-end clusters, using only user job scripts as input. We show that decision trees outperform the other algorithms and accurately predict the runtime of 73% of jobs within an error tolerance of 10 minutes, which...
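The pipeline this abstract describes — extract numeric features from a user's job script, then predict runtime from similar past jobs — can be sketched minimally. The example below uses a 1-nearest-neighbor lookup in pure Python (the abstract reports decision trees performing best; the Slurm directives parsed here and the feature set are illustrative assumptions, not the paper's features):

```python
import re

def extract_features(script):
    """Parse a couple of numeric features out of a Slurm batch script.
    The chosen features (node count, requested minutes) are illustrative."""
    nodes = int(re.search(r"#SBATCH\s+--nodes=(\d+)", script).group(1))
    t = re.search(r"#SBATCH\s+--time=(\d+):(\d+):(\d+)", script)
    requested_minutes = int(t.group(1)) * 60 + int(t.group(2))
    return (nodes, requested_minutes)

def predict_runtime(script, history):
    """1-nearest-neighbor prediction: return the observed runtime of the
    most similar past job. history is a list of (features, runtime_minutes)."""
    q = extract_features(script)
    def sq_dist(f):
        return sum((a - b) ** 2 for a, b in zip(q, f))
    return min(history, key=lambda h: sq_dist(h[0]))[1]

# Usage: past jobs as (features, observed runtime in minutes).
history = [((4, 120), 95), ((1, 30), 12), ((16, 240), 230)]
script = "#!/bin/bash\n#SBATCH --nodes=4\n#SBATCH --time=02:00:00\nsrun ./app\n"
print(predict_runtime(script, history))   # matches the (4, 120) job -> 95
```

A decision tree, as the paper favors, would replace the nearest-neighbor lookup with learned threshold splits on the same features.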
Companies are increasingly incorporating commercial Business Process Management Systems (BPMSs) as mechanisms to automate their daily procedures. These BPMSs manage the information related to the instances that flow through the model (business data) and recover the information concerning process performance. Process Performance Indicators (PPIs) tend to be used...
Engineering and computer science have come up with a variety of techniques to increase confidence in systems: increase reliability, facilitate certification, improve reuse and maintainability, and improve interoperability and portability. Among them are various techniques based on formal models to enhance testing, validation and verification. In this paper, we concentrate on formal verification...
Automated file analysis is important in malware research for identifying malicious files in large collections of samples. This paper describes an automatic system that can classify a file as infected based on the dynamic behavior of the file observed inside a controlled, monitored environment. Based on features revealed at runtime, we train a Support Vector Machine classifier that can be further used...
The increasing need for adaptive systems has led to the creation of many frameworks aiming to support their development. Nonetheless, the implementation of requirements related to adaptation, just as of any other kind of requirement, comes at a cost. This being so, it is necessary to consider requirements' priorities when creating such systems. In this work we analyze the relationship between requirements...
After a software system is compromised, it can be difficult to understand what vulnerabilities attackers exploited. Any information residing on that machine cannot be trusted, as attackers may have tampered with it to cover their tracks. Moreover, even after an exploit is known, it can be difficult to determine whether it has been used to compromise a given machine. Aviation has long used black boxes...
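The black-box idea — recording evidence that later tampering cannot silently alter — can be illustrated with a hash-chained audit log. This is a generic sketch of tamper evidence, not the paper's design; a real flight-recorder-style system would ship records off the monitored machine, since an attacker with full control of an in-memory log could otherwise recompute the whole chain:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_entry(log, event):
    """Append an event to a hash-chained log: each record stores the hash
    of its predecessor, so editing any earlier record breaks the chain."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = prev + json.dumps(event, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log):
    """Recompute every link; returns False if any record was altered."""
    prev = GENESIS
    for rec in log:
        payload = prev + json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

# Usage: record events, then detect after-the-fact tampering.
log = []
append_entry(log, {"syscall": "open", "path": "/etc/passwd"})
append_entry(log, {"syscall": "write", "fd": 3})
print(verify_chain(log))              # True: chain intact
log[0]["event"]["path"] = "/tmp/x"    # attacker rewrites history
print(verify_chain(log))              # False: tampering detected
```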
ETCS is a European signalling, control and automatic train protection system. Even with the most advanced quality assurance techniques, the correctness of ETCS is hard to ensure within the development phases. In this paper, we use runtime verification to provide on-going protection during the operational phase. To define a suitable monitoring specification language, we propose a graphic formalism...
Programmable Logic Controllers (PLCs) are widely used in industry to run automated systems. A PLC must have some means of receiving and interpreting signals from other devices such as sensors and switches. There are two main types of PLC input: discrete and analog. An Arduino was built as a bridge or interface, adding to the Siemens PLC CPU1215C the capability to receive another type of input. In this study,...
Cloud computing is becoming more and more popular for its on-demand services and pay-as-you-go model. Elasticity is the key feature of cloud computing technology, which can reduce and add resources flexibly to meet customers' needs. Considering the importance of elasticity in cloud computing platforms, the objective of this paper is to study the elasticity evaluation of cloud computing platforms. In this...
The emergence of the Industrial Internet results in an increasing number of complicated temporal interdependencies between automation systems and the processes to be controlled. There is a need for verification methods that scale better than formal verification methods and which are more exact than testing. Simulation-based runtime verification is proposed as such a method, and an application of Metric...
Cloud-based systems get changed more frequently than traditional systems. These frequent changes involve sporadic operations such as installation and upgrade. Sporadic operations may fail due to the uncertainty of cloud platforms. Each sporadic operation manipulates a number of cloud resources. The accessibility of the manipulated resources makes it possible to build an accurate process model of the correct...
Huge pages are widely supported by modern architectures and operating systems. Huge pages map large fixed virtual memory regions, on the order of 2 MB to 1 GB on the Intel x86-64 architecture. The page size is key to striking the balance between trade-off pairs. For example, initially, the use of huge pages aimed to mitigate address translation overhead for memory-intensive workloads with large memory...
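The address-translation side of this trade-off is easy to quantify: TLB reach grows linearly with page size. A back-of-the-envelope calculation (the TLB entry count below is a hypothetical figure, not from the paper):

```python
# TLB reach = entries x page size: the amount of memory addressable
# without a TLB miss. Larger pages stretch a fixed-size TLB much further.

def tlb_reach_bytes(entries, page_size):
    return entries * page_size

KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3
entries = 1536  # hypothetical unified L2 TLB entry count

for name, size in [("4KB", 4 * KB), ("2MB", 2 * MB), ("1GB", 1 * GB)]:
    print(f"{name} pages: reach = {tlb_reach_bytes(entries, size) / GB:.2f} GiB")
```

With these illustrative numbers, moving from 4 KB to 2 MB pages multiplies TLB reach by 512 (6 MiB versus 3 GiB), which is exactly the motivation for huge pages in memory-intensive workloads; the counterweights (internal fragmentation, slower page-fault and migration granularity) are what make page size a balance rather than a free win.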
An inevitable trade-off between read performance and space savings always shows up when applying offline deduplication to primary storage. We propose Mudder, a multi-tiered and dynamic SLA-driven deduplication framework to address this challenge. Based on specific Dedup-SLA configurations, Mudder conducts a multi-tiered deduplication process combining Global File-level Deduplication (GFD), Local Chunk-level...
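Chunk-level deduplication, one of the tiers named above, can be sketched generically: split files into chunks, store each distinct chunk once, and keep per-file recipes for reconstruction. This is a toy fixed-size-chunk illustration, not Mudder's GFD/LCD implementation (real systems use KB-scale, often content-defined, chunks); the restore path shows the read-side cost that an SLA-driven design trades against space savings:

```python
import hashlib

def dedup(files, chunk_size=4):
    """Toy chunk-level deduplication over a {name: bytes} mapping.
    Returns (recipes, store): per-file chunk-hash lists, and each
    distinct chunk stored exactly once."""
    store = {}    # chunk hash -> chunk bytes (stored once)
    recipes = {}  # file name  -> ordered list of chunk hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            store.setdefault(h, chunk)
            hashes.append(h)
        recipes[name] = hashes
    return recipes, store

def restore(name, recipes, store):
    """Reassemble a file from its recipe -- every read pays an extra
    indirection per chunk, which is the read-performance cost of dedup."""
    return b"".join(store[h] for h in recipes[name])

# Usage: the shared "ABCD" chunk is stored once across both files.
files = {"a": b"ABCDABCD", "b": b"ABCDXYZ!"}
recipes, store = dedup(files)
print(len(store))                           # 2 distinct chunks for 16 bytes
print(restore("a", recipes, store))         # b'ABCDABCD'
```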