Development of software change prediction models based on the change history of software is valuable for early identification of change-prone classes. Identifying these change-prone classes is vital for the efficient use of limited resources in an organization. This paper validates Artificial Immune System (AIS) algorithms for the development of change prediction models using six open source...
Predicting the changes in the next release of software during the early phases of development is gaining wide importance. Such a prediction helps in allocating resources appropriately and thus reduces the costs associated with software maintenance. However, predicting changes using the historical data (data of past releases) of the software is not always possible due to the unavailability of...
In large-scale software projects, build code has a high level of complexity, churn rate, and defect proneness. While it is desirable to have automated tools to help developers in localizing faults in build code, it is challenging to build such tools due to the dynamic nature of build code. Existing automatic fault localization methods focus on traditional code and none of them has such support for...
Software maintenance tasks such as feature location and traceability link recovery are search-oriented. Most of the recently proposed approaches for automation of search-oriented tasks are based on a traditional text retrieval (TR) model in which documents are unstructured representations of text and queries consist only of keywords. Because source code has structure, approaches based on a structured...
Software evolves, and developers therefore frequently make changes to systems that are logged in version control systems. These changes are often poorly documented: commit logs are frequently empty or contain only minimal information. It is thus often a challenge to understand why certain changes were made, especially if they were introduced many months or even years ago. Understanding these changes is important...
Fault localization is a critical procedure in the software development process. Previous studies based their research on the precondition that test results are conveniently acquired and 100% correct, which is rarely the case in the real world. In this article, we propose the concept of a gamma-reliable test suite to characterize the potential unreliability of test results. By modeling this unreliability using...
Spectrum-based fault localization techniques leverage coverage information to identify the faulty elements of the program via passed and failed runs. However, the effectiveness of these techniques can be affected adversely by coincidental correctness, which occurs when faulty elements are executed, but the program produces the correct output. This paper proposes a clustering-based strategy to improve...
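The abstract above describes spectrum-based fault localization, which ranks program elements by how strongly their coverage correlates with failing runs. As a minimal sketch (not the paper's own clustering strategy), the widely used Ochiai metric can be computed from per-element coverage counts; the element names and coverage numbers below are hypothetical:

```python
import math

def ochiai(ef, ep, total_failed):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep)),
    where ef/ep count the failed/passed runs that cover the element."""
    if ef == 0:
        return 0.0
    return ef / math.sqrt(total_failed * (ef + ep))

# Hypothetical coverage spectra: element -> (covered in failed runs, covered in passed runs)
spectra = {"stmt_1": (2, 8), "stmt_2": (2, 1), "stmt_3": (0, 9)}
total_failed = 2

# Rank elements from most to least suspicious
ranking = sorted(spectra, key=lambda s: ochiai(*spectra[s], total_failed), reverse=True)
```

Coincidental correctness, as the abstract notes, inflates `ep` for faulty elements and so lowers their rank; the paper's clustering strategy aims to mitigate exactly that effect.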
Refactorings are behavior-preserving source code transformations. While tool support exists for (semi-)automatically identifying refactoring solutions, whether or not to apply a recommended refactoring is usually up to the software developers, who have to assess the impact the transformation will have on their system. Evaluating the pros (e.g., the bad smell removed) and cons (e.g., side effects of the...
Recent advances in computing technologies are raising expectations of high accuracy and reliability from sophisticated arithmetic programs. Multi-Precision Arithmetic (MPA) plays a vital role in the majority of scientific applications, where accuracy requirements are stringent and even a small error may mislead downstream experimental results. Conventional testing strategies rely on test...
Machine learning techniques have been earnestly explored by many software engineering researchers. At the present state of the art, there is no conclusive evidence on which machine learning techniques are most accurate and efficient for software defect prediction, but some recent studies suggest that combining multiple machine learners, that is, ensemble learning, may be a more accurate alternative...
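The ensemble idea mentioned above, combining multiple learners for defect prediction, can be sketched in its simplest form as majority voting over per-learner predictions. The base learners and module labels below are hypothetical placeholders, not the study's actual models:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-learner defect predictions (lists of 0/1 labels,
    one list per base learner) by simple majority voting."""
    ensemble = []
    for votes in zip(*predictions):
        ensemble.append(Counter(votes).most_common(1)[0][0])
    return ensemble

# Hypothetical predictions from three base learners over four modules
preds = [
    [1, 0, 1, 0],  # e.g. a decision tree
    [1, 1, 0, 0],  # e.g. naive Bayes
    [0, 0, 1, 0],  # e.g. logistic regression
]
combined = majority_vote(preds)
```

Real ensemble methods (bagging, boosting, stacking) weight or retrain over the base learners rather than voting uniformly, but the voting sketch captures why an ensemble can outperform any single learner: uncorrelated errors tend to be outvoted.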
With the substantial growth of the IT sector in the 21st century, the need for system security has become inevitable. While developments in the IT sector have innumerable advantages, attacks on websites and computer systems are also increasing correspondingly. One such attack is the zero-day malware attack, which poses a great challenge for security testers. Malware pen testers can use bypass...
A technique was developed by which seafloor images gathered by a hovering AUV can be mosaicked based on the vehicle position at each image. In the first phase of development, software was produced to interpret the AUV logs and output parametric files for each image. The images and their parameters are then assembled into a mosaic using GIS software. The improvements to the overall chain of mosaic...
Dynamic taint analysis (DTA) analyzes execution paths that an attacker may use to exploit a system. It is a method for analyzing executable files by tracing information flow without source code. DTA marks certain inputs to a program as tainted and then propagates taint to values computed from tainted inputs. Due to the increased popularity of dynamic taint analysis, there have been a few...
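The mark-and-propagate rule described above can be illustrated with a toy forward propagation over a three-address instruction trace. The variable names and trace are hypothetical; real DTA engines operate on binary instructions and track taint at byte or bit granularity:

```python
def propagate_taint(instructions, tainted_inputs):
    """Forward taint propagation over a toy instruction trace of
    (destination, source operands) pairs: the destination becomes
    tainted iff any source operand is currently tainted."""
    tainted = set(tainted_inputs)
    for dst, srcs in instructions:
        if any(s in tainted for s in srcs):
            tainted.add(dst)
        else:
            tainted.discard(dst)  # overwriting with untainted data clears taint
    return tainted

# Hypothetical trace: a takes user input, b derives from a, a is then overwritten
trace = [
    ("a", ["user_input"]),  # a = user_input      -> a tainted
    ("b", ["a", "c"]),      # b = a + c           -> b tainted via a
    ("a", ["k"]),           # a = constant k      -> a's taint cleared
]
result = propagate_taint(trace, {"user_input"})
```

The last instruction shows why taint must be cleared on overwrite: without it, the analysis over-taints and reports spurious attacker-controlled paths.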
Effort estimation is an important part of software project management. Based on the applied strategy, estimation models can be classified into algorithmic and non-algorithmic groups. In this study, we present a model for expert effort estimation developed using data mining techniques: a multilayer perceptron (MLP) artificial neural network. The data set used in the study contains 785 records collected...
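An MLP effort estimator of the kind mentioned above maps project features through one or more hidden layers to a single effort value. Below is a minimal forward-pass sketch with a single sigmoid hidden layer and a linear output; the weights, features, and network size are hypothetical toy values, not the study's trained model:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: sigmoid hidden units, linear output,
    mapping project features (e.g. size, team experience) to an effort value."""
    hidden = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Hypothetical toy weights for a 2-feature, 2-hidden-unit network
W1 = [[0.5, -0.3], [0.8, 0.1]]   # input -> hidden weights
b1 = [0.0, -0.2]                 # hidden biases
W2 = [1.5, 2.0]                  # hidden -> output weights
b2 = 10.0                        # output bias (baseline effort)

effort = mlp_forward([1.0, 0.5], W1, b1, W2, b2)
```

In practice the weights are fitted by backpropagation on historical project records (785 in the study), and the features are normalized before training.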
One of the most important signs of systemic disease that presents on the retina is vascular abnormality, such as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency, and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive...
In this study, we compared the effectiveness of two approaches to effort estimation for organizations utilizing SCRUM. We compared SCRUM's native effort estimation method, Story Points with planning poker, against effort estimation models based on COSMIC Function Points (CFP) for a selection of projects. We utilized different regression models and an ANN methodology to develop an estimation model from the backlog...
Close-range photogrammetry is a surveying technique based on modeling the image-forming mechanism of photography and extracting spatial information through computation on photographs. This research applied the fundamentals of close-range photogrammetry to model leaves, measure leaf length and width from the geometric model, and compare the results with conventional measurement...
The exploding volume of network traffic and expanding Quality of Service (QoS) requirements from emerging multimedia and interactive applications in the last decade demand improved internet traffic engineering techniques. In particular, traffic classification and packet marking became essential components for end-to-end QoS assurance of different traffic classes. In this paper we present WekaTIE,...
Approaches to detecting fault-prone modules have been studied for a long time. As one of these approaches, we proposed a technique based on text filtering. We assume that bugs relate to the words and context contained in a software module. Our technique treats a module as text information. Based on a dictionary learned by classifying modules that induce bugs, the bug-inducing...
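The dictionary-based text filtering described above can be sketched as a per-token log-odds score learned from bug-inducing versus clean modules, in the spirit of a naive Bayes spam filter. The token counts and the add-one smoothing below are hypothetical illustration, not the paper's exact dictionary construction:

```python
import math

def token_log_odds(bug_counts, clean_counts):
    """Per-token log-odds learned from token-frequency dictionaries of
    bug-inducing vs clean modules, with add-one smoothing."""
    vocab = set(bug_counts) | set(clean_counts)
    n_bug = sum(bug_counts.values()) + len(vocab)
    n_clean = sum(clean_counts.values()) + len(vocab)
    return {t: math.log((bug_counts.get(t, 0) + 1) / n_bug)
              - math.log((clean_counts.get(t, 0) + 1) / n_clean)
            for t in vocab}

def classify(module_tokens, log_odds):
    """Flag a module as fault-prone when its summed log-odds is positive."""
    return sum(log_odds.get(t, 0.0) for t in module_tokens) > 0

# Hypothetical training dictionaries (token -> count in each module class)
bug_counts = {"unlock": 3, "free": 4, "retry": 2}
clean_counts = {"log": 5, "retry": 2, "init": 4}
lo = token_log_odds(bug_counts, clean_counts)
```

Treating a module purely as text sidesteps parsing and metrics extraction, which is the appeal of the filtering approach the abstract describes.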
Land-use maps provide important data and basic information for accomplishing the optimal allocation of resources and for ensuring sustainable development. However, the land-use data interpreted from various sources of remotely sensed, low-resolution imagery are highly variable. Therefore, an analysis of these deviations in land-use data is imperative for improving the accuracy of the land-use maps...