It is essential for a constructivist teacher to monitor the apprenticeship of each student in order to facilitate the definition of the next steps in the development of a discipline. Since monitoring apprenticeship is a very complex and time-consuming task, a theoretical framework that supports the teacher's observations becomes necessary. The use of concept maps to record a student's understanding...
For runtime verification (RV) techniques, the main factor limiting their adoption is the overhead introduced by monitors. An important indicator is the amount of code added by monitor instrumentation. The application of RV is hindered by the size-explosion problem of monitor construction: the number of states of the resulting monitor is doubly exponential in the size of the input...
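In the simplest case, the kind of monitor whose size this abstract discusses is a finite automaton checked against a stream of program events. The property, event names, and transition table below are illustrative assumptions, not taken from the paper:

```python
# Minimal runtime-verification sketch: a monitor for the (assumed) property
# "no 'use' event may occur after 'close'", expressed as a 3-state automaton.
TRANSITIONS = {
    ("idle", "open"): "active",
    ("active", "use"): "active",
    ("active", "close"): "closed",
}

def monitor(events):
    """Feed an event trace through the automaton; any missing
    transition means the property has been violated."""
    state = "idle"
    for event in events:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return "violation"
        state = nxt
    return "ok"
```

Even for such a tiny property the monitor grows quickly once parameters (e.g. per-file-handle state) are tracked, which is the blow-up the abstract refers to.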
Concurrency is a requirement for much modern software, but the implementation of multithreaded algorithms comes at the risk of errors such as data races. Programmers can prevent data races by documenting and obeying a locking discipline, which indicates which locks must be held in order to access which data. This paper introduces a formal semantics for locking specifications that gives a guarantee of...
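A locking discipline of the kind described can be sketched in Python; the `Counter` class, its field names, and the "guarded by" comment convention are hypothetical, invented here purely for illustration:

```python
import threading

class Counter:
    """Shared counter whose (assumed) locking discipline requires that
    self._lock be held whenever self._value is read or written."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0  # guarded by self._lock

    def increment(self):
        with self._lock:  # discipline obeyed: lock held for every access
            self._value += 1

    def get(self):
        with self._lock:
            return self._value

def worker(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=worker, args=(counter, 10000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.get())  # 40000 — no lost updates, since the discipline is obeyed
```

A formal semantics for such specifications, as the paper proposes, makes the informal "guarded by" comment machine-checkable.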
Event segmentation is an important step in monitoring and management applications: it categorizes different events into different segments. This is especially important when the applications to be monitored and managed are large-scale, comprehensive, and data-intensive in nature. The process of segmentation is based on data clustering, one of the key data mining methods in use today. There...
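Clustering-based segmentation of the sort mentioned can be sketched with a tiny one-dimensional k-means; the latency values and the choice of k are assumptions made purely for illustration:

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: split scalar event features (e.g. latencies)
    into k segments. Deterministic initialization: centers are spread
    evenly across the observed value range."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        # Recompute each center as the mean of its cluster (keep old
        # center if a cluster ended up empty).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of event latencies (ms), assumed data:
latencies = [1.0, 1.2, 0.9, 50.0, 52.0, 49.5]
centers, segments = kmeans_1d(latencies, k=2)
```

Real event data would of course be multi-dimensional and far larger; this only illustrates the clustering step the abstract refers to.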
Engineering and computer science have developed a variety of techniques to increase confidence in systems, increase reliability, facilitate certification, improve reuse and maintainability, and improve interoperability and portability. Among them are various techniques based on formal models that enhance testing, validation, and verification. In this paper, we concentrate on formal verification...
We introduce a hardware-based methodology for performing workload execution forensics in microprocessors. More specifically, we discuss the on-chip instrumentation required for capturing the operational profile of the Translation Lookaside Buffer (TLB), as well as an off-line machine learning approach which uses this information to identify the executed processes and reconstruct the workload. Unlike...
Security tagging schemes are promising mechanisms for providing security features in computer systems. Tags carry information about the tagged data throughout the system to be used in access control and other security mechanisms. This paper discusses several different uses of security tags related to different security policies, highlighting appropriate uses of the tags. The evaluation of...
Computer-controlled machinery can be viewed as a cyber-physical system. Such systems are vulnerable to cyber attacks of ever-growing sophistication, with potentially severe consequences. To address this threat, advanced security and status monitoring measures must be developed and deployed within the framework of industrial control systems. A system diagnostic approach based on modeling and assessment...
Production systems are typically long-living, interdisciplinary systems that undergo continuous evolution. However, especially in the production automation industry, formalized documentation of evolutionary changes is often neither created nor adapted to the application. Accordingly, no knowledge artefacts exist that can be automatically processed in order to support the evolution process...
This paper proposes a new monitoring method based on a semantic model and verifies it on both web sites and microblogs. The method uses a series of hand-crafted news event templates for a given domain, which are processed by a computer program. The program resolves the templates, then searches for and crawls web news events automatically; in the same way, the program also analyses the news events and...
Network communication protocol reverse engineering is important for malicious software analysis. Security analysts need to rewrite messages sent and received by malicious software according to the protocol in order to control the malware's malicious behaviors. To enable such rewriting, we need detailed information about the messages sent by the malware program on the target host in the network dialog. However,...
Run-time verification techniques based on monitors have become the basic means of detecting software failures in dynamic and open environments. One challenging problem is how the monitor can provide sufficient warning before actual failures occur, so that the system has enough time to act before a failure causes serious harm. To this end, this paper proposes the main idea of how to generate monitors...
Considering the status reporting activity in a Software Configuration Management (SCM) process, this paper presents a new version of an ontology that aims to represent the knowledge domain related to the essential phases of issue identification, issue evaluation, and change execution. Concepts, relations, properties, and axioms are presented for this ontology so that competence questions can be answered...
Robustness is a key issue in any runtime system that aims to speed up the execution of a program. However, robustness considerations are commonly overlooked when new software-based thread-level speculation (STLS) systems are proposed. This paper highlights the relevance of the problem, showing different situations in which the use of incorrect data can irreversibly alter the speculative execution of...
Behavior-based intrusion detection technologies are increasingly popular. Traditionally, behavior patterns are expressed as specific signatures defined in the system call domain. This approach has various drawbacks and is vulnerable to possible obfuscations.
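System-call signatures of the kind criticized here are often n-grams of call names. The sketch below, with an invented signature set, shows why inserting a harmless extra call (a simple obfuscation) defeats exact n-gram matching:

```python
def ngrams(trace, n=3):
    """All length-n windows of a system-call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

# Hypothetical malicious signature: a 3-gram of system calls
# (invented for illustration, not from any real detector).
SIGNATURES = {("open", "mmap", "exec")}

def matches_signature(trace, n=3):
    """Flag a trace if any of its n-grams is a known signature."""
    return bool(ngrams(trace, n) & SIGNATURES)

benign = ["open", "read", "close"]
malicious = ["fork", "open", "mmap", "exec", "exit"]
# Inserting a no-op call breaks the exact 3-gram match:
obfuscated = ["fork", "open", "getpid", "mmap", "exec", "exit"]
```

The obfuscated trace performs the same malicious work yet matches no signature, which is the vulnerability the abstract points out.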
Software log file analysis helps immensely in software testing and troubleshooting. The first step in automated log file analysis is extracting log data. This requires decoding the log file syntax and interpreting data semantics. The expected output of this phase is an organization of the extracted data for further processing. Log data extractors can be developed using popular programming languages...
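A log data extractor of the kind described might look like the following regex-based sketch; the log line format and field names are assumptions made for illustration:

```python
import re

# Hypothetical log line format, assumed for illustration:
#   2024-03-01 12:00:01 ERROR db: connection refused
LOG_LINE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) (?P<component>\w+): (?P<message>.*)"
)

def extract(lines):
    """Decode log syntax into named fields, skipping lines that do not
    match; the result is organized for further processing."""
    records = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            records.append(m.groupdict())
    return records

sample = [
    "2024-03-01 12:00:01 ERROR db: connection refused",
    "2024-03-01 12:00:02 INFO web: request served",
    "not a log line",  # lines that break the syntax are skipped, not fatal
]
records = extract(sample)
```

The list of dictionaries is the "organization of the extracted data" the abstract mentions: each record can then be filtered, aggregated, or loaded into an analysis tool.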
Based on the theory of cellular automata, this paper simulates and studies the dynamic evolution of network software, especially in the context of Web service composition. First, the paper discusses the need for trusted software engineering based on cellular automata that simulate the dynamic evolution of network software. Then, a series of algorithms and rules is proposed, such as how to replace...
Billions of devices are expected to be online by 2020. These will not only provide information by monitoring the real world, but also create complex collaborations in order to provide sophisticated value-added services. Slowly, we are witnessing the emergence of Cooperating Objects in the Internet of Things, which will rapidly change the way we design, develop, and realize cyber-physical dependent applications...
The purpose of this paper is to present the status of the R&D efforts of our Laboratory concerning the development and the improvement of hardware and software means, appropriately designed to ensure Continuity of Medical Care among Primary Health-care Agencies, Hospitals and Home Care, according to existing or emerging National, European and International regulations and standards. Our R&D...
Timed failure propagation graph (TFPG) is a directed graph model that represents temporal progression of failure effects in physical systems. In this paper, a distributed diagnosis approach for complex systems is introduced based on the TFPG model settings. In this approach, the system is partitioned into a set of local subsystems each represented by a subgraph of the global system TFPG model. Information...