Developing reliable software at low cost is an objective for any software developer. Software errors are removed during the various review and test phases of the development lifecycle, and the software is modified to eliminate them. Changes are also implemented during the maintenance phases. These software changes and fixes may introduce new errors and can cause defect propagation in different dependent...
Two issues are addressed in this paper: 1) the formal definitions of the concepts relevant to program faults, and 2) the comparison and classification of program fault-tolerant abilities. We first analyze the subtle differences among the basic concepts of faults, errors, and failures, and give their formal definitions using the state-based theory of program behavior; we then propose...
Many classification techniques can automatically summarize text into topics and accordingly identify topic terms from online reviews. Among these techniques, Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) are two of the most often employed approaches. LDA is a probabilistic generative model that projects a document into the topic space using the Dirichlet distribution, and each...
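The latent-space projection the snippet describes can be illustrated with LSA, the simpler of the two approaches: a truncated SVD of a term-document matrix maps each document to a low-dimensional topic vector. This is a minimal sketch using plain NumPy; the tiny count matrix and its term labels are made-up illustrative data, not taken from the paper.

```python
import numpy as np

# Tiny term-document count matrix: rows = terms, columns = documents.
# Docs 0-1 discuss hardware, docs 2-3 discuss shipping (illustrative data).
A = np.array([
    [2, 1, 0, 0],   # "battery"
    [1, 2, 0, 0],   # "screen"
    [0, 0, 2, 1],   # "shipping"
    [0, 0, 1, 2],   # "delivery"
], dtype=float)

# LSA: truncated SVD projects each document into a k-dimensional topic space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cos(a, b):
    """Cosine similarity between two topic vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_topics[0], doc_topics[1]))   # same topic: near 1
print(cos(doc_topics[0], doc_topics[2]))   # different topics: near 0
```

In the reduced space, documents about the same topic cluster together even when they share no exact terms, which is what makes topic terms recoverable from reviews.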
To improve software reliability, software defect prediction is utilized to assist developers in finding potential bugs and allocating their testing efforts. Traditional defect prediction studies mainly focus on designing hand-crafted features, which are input into machine learning classifiers to identify defective code. However, these hand-crafted features often fail to capture the semantic and structural...
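The traditional pipeline the snippet contrasts against can be sketched concretely: hand-crafted metrics per module (lines of code, cyclomatic complexity, change history) are fed to a classifier that flags defect-prone code. Below is a minimal NumPy-only sketch using logistic regression; the feature values and labels are fabricated for illustration and the metric choice is an assumption, not the paper's feature set.

```python
import numpy as np

# Hand-crafted features per module (illustrative values):
# [lines of code, cyclomatic complexity, number of past changes]
X = np.array([
    [120,  4,  2],
    [950, 35, 18],
    [200,  6,  3],
    [800, 28, 15],
    [150,  5,  1],
    [700, 30, 12],
], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defective in past releases

# Standardize features, then fit logistic regression by gradient descent.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# Score a new, hypothetical module with the trained model.
new = (np.array([880.0, 33.0, 16.0]) - mu) / sd
score = 1 / (1 + np.exp(-(new @ w + b)))
print(f"defect probability: {score:.2f}")
```

The snippet's critique applies exactly here: such metrics describe size and churn but carry no information about what the code means or how it is structured, which is what semantic-feature approaches aim to recover.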
Software watermarking is a general tool used to combat software piracy by embedding identifying information into a program. Most existing schemes of software watermarking have the weak point that their security depends only on the stealth of the watermark structure. Besides, the watermarking is independent of the program at the semantic level and can be destroyed via fairly straightforward semantics-preserving code...
For runtime verification techniques, the main factor limiting their usage is how to reduce the influence of monitors. An important indicator is the amount of code added by monitor instrumentation. The application of RV is hindered by the size-explosion problem of monitor construction: namely, the number of states of the resulting monitor is doubly exponential in the size of the input...
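The monitors the snippet refers to are finite automata woven into the program to observe its events. A minimal sketch, with an invented safety property ("a file must not be read after being closed") and a hand-written transition table; real RV tools synthesize such automata from formal specifications, which is where the state explosion arises.

```python
# A monitor is a finite automaton driven by events from the
# instrumented program. States: "closed", "open", "violation".
BAD = "violation"

# Hypothetical property: no read after close.
transitions = {
    ("closed", "open"):  "open",
    ("open",   "read"):  "open",
    ("open",   "open"):  "open",
    ("open",   "close"): "closed",
    ("closed", "read"):  BAD,
}

def monitor(trace, state="closed"):
    """Return True if the event trace satisfies the property."""
    for event in trace:
        state = transitions.get((state, event), state)
        if state == BAD:
            return False
    return True

print(monitor(["open", "read", "close"]))   # True
print(monitor(["open", "close", "read"]))   # False
```

Even this toy monitor adds a dictionary lookup per program event; for properties over rich alphabets the synthesized automaton, and hence the instrumented code size, grows much faster, which is the size-explosion problem the snippet names.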
Past AI systems were domain-specific, intended to fulfill a particular task. These days, there is a need for versatile machines that are capable of performing multiple tasks, are adaptive in nature, and have decision-making capabilities for any situation. This is where Cognitive Computing comes into the picture. Cognitive computing is a promising area of research; it is depicting...
New customers often require custom features of a successfully marketed product. As the number of variants grows, new challenges arise in maintenance and evolution activities. Software product line (SPL) architecture is a timely answer to these challenges. SPL adoption, however, is a large one-time investment that affects both technical and organizational issues. From the program code point of...
Network Function Virtualization (NFV) architectures are emerging to increase network flexibility. However, this renewed scenario poses new challenges, because virtualized networks need to be carefully verified before being deployed in production environments in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Nowadays,...
Automatic and semi-automatic approaches supporting program comprehension are sought by researchers and practitioners to facilitate software engineering tasks such as development, maintenance, and extension. Topic modeling is a promising way to automatically discover features and structure from textual software assets. However, there are gaps between knowing and doing when applying topic...
Software testing is a subarea of software engineering that is also a knowledge-intensive and collaborative activity. Our previous study revealed that knowledge in the repositories was outdated, internal documents were unstructured and in varied formats, access facilities were limited, and targeted delivery methods were lacking, such that software testers from software companies are highly affected by...
Meta-modeling has been a topic of interest in the modeling community for many years, yielding a substantial number of papers describing its theoretical concepts. Many of them, such as the deep meta-modeling approach, aim to solve the problem that traditional UML-based domain-specific meta-modeling does not comply with the strict meta-modeling principle. In this paper, we show the practical...
Metamodelling and model transformation play important roles in model-driven engineering as they can be used to define domain-specific modelling languages. During the modelling phase, modellers encode domain knowledge into models which may include both structural and behavioral aspects of a system. The contribution of this paper is a new web-based metamodelling and model transformation tool called...
Existing registries organize functionally similar services into groups without considering past service usage from the consumers' perspective, a.k.a. pragmatics. Pragmatics can help registries calculate service similarity more effectively and improve organization schemes. However, pragmatics are not available beforehand, and their accumulation over time creates time and space efficiency...
As more and more companies become aware of the benefits of collecting and analyzing data, hiring employees with data analytics expertise is a key issue faced by HR practitioners. Although previous research empirically highlighted the differences in knowledge and skill requirements between big data (BD) and business intelligence (BI) in English-speaking countries, little similar research has been conducted...
Question answering (Q&A) communities have gained momentum recently as an effective means of knowledge sharing over the crowds, where many users are experts in the real-world and can make quality contributions in certain domains or technologies. Although the massive user-generated Q&A data present a valuable source of human knowledge, a related challenging issue is how to find those expert...
This paper presents a modular framework for traffic-regulation-based decision-making in automated vehicles. It builds on a semantic traffic scene representation formulated as an ontology and includes knowledge about traffic regulations. The semantic representation supports traffic situation classification by reasoning, providing improved situational awareness for the automated vehicle. Decision-making...
Improving the accuracy of analysis results is one of the hard challenges for static analysis. In particular, static analyzers generally analyze all paths of a program, including infeasible paths, which undoubtedly decreases analysis accuracy. To mitigate this issue, we design and implement a static analyzer, called ABAZER-SE, which is based on meta-compilation and the GCC abstract syntax tree...
The Web has become a necessary resource of daily use; the benefits it offers as a source of knowledge and collaboration have given rise to new initiatives such as linked data, whose purpose is to link the data scattered across the Web through semantic relationships. The purpose of this article is to show the improvement, consumption, and visualization of linked data on the Web, in such a...
Making efficient use of the time dedicated to reading academic texts is central to the research in progress. A prominent contribution of this research is an algorithm that generates questionnaires from those texts. This article presents and discusses the main strategy underlying the development of that algorithm. The strategy...