Software effort estimation (SEE) is a crucial step in software development. Missing effort data commonly arises in real-world data collection. To address this missing-data problem, existing SEE methods employ deletion, ignoring, or imputation strategies, of which the imputation strategy has been found most helpful for improving estimation performance. Current imputation methods...
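As a minimal illustration of the imputation strategy this abstract refers to, the sketch below fills missing effort values with the mean of the observed ones. The function name and data are invented for illustration; the imputation methods studied in such papers are typically more sophisticated (e.g. kNN- or regression-based).

```python
# Minimal sketch of mean imputation for missing effort values in an SEE
# dataset. Data and names are invented; real imputation methods are more
# sophisticated, but the mechanics are the same: replace missing entries
# with a value estimated from the observed ones.

def impute_mean(efforts):
    """Replace None entries with the mean of the observed efforts."""
    observed = [e for e in efforts if e is not None]
    mean = sum(observed) / len(observed)
    return [mean if e is None else e for e in efforts]

# Example: person-month efforts with two missing records.
efforts = [12.0, None, 8.0, 20.0, None]
print(impute_mean(efforts))
```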
Developing software extensions for Web Content Management Systems (WCMSs) like Joomla, WordPress, or Drupal can be a difficult and time-consuming process. In this demo we present JooMDD, an environment for model-driven development of software extensions for the WCMS Joomla. JooMDD allows the rapid development of standardised software extensions while requiring reduced technological knowledge of Joomla. This...
In this short position paper I consider the contributions that software engineering as a discipline can make to the development and implementation of government policy. It is intended to support the growing body of knowledge on scientific advice in government and to encourage software engineers to engage with policy and the policy community.
Many guidelines for safety-critical industries, such as aeronautics, medical devices, and railway communications, specify that traceability must be used to demonstrate that a rigorous development process has been followed and to provide evidence that the developed system is safe for use. However, creating accurate and complete traceability is costly and remains a practical challenge. The significant...
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights that are derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modelling. Therefore, a lack...
Software estimation is an important part of every software engineer's skill set. At Stevens Institute of Technology, we have taught estimation as part of our Software Engineering Master's Program since 2001. Over the past few years, we have evolved our teaching style to be more experiential and engaging. This case study describes an evolving software engineering pedagogical method using LEGOs, which...
Because fault-prone and non-fault-prone software modules are easily confused, and because of the limitations of traditional methods such as LDA and PCA, the performance of software defect prediction models is difficult to improve. In this paper, we present GMCRF, a method based on a dimensionality reduction technique and conditional random fields (CRFs) for software defect prediction. In our proposed method, firstly, we...
Large-scale data handling and analysis, whether at the industrial or the research level, has always faced problems. These problems grow with the increase in machine-dedicated software packages. Large-scale data processing and analysis is error-prone and time-consuming while moving data from data generation to data analysis. In this paper, first, methods of data generation, methods to move data from...
This paper describes a comparison of the proposed model with simulation software and measurement data from a PV power plant in Cambodia. The proposed model is based on the behavior of a PV module at a particular site. The parameters affecting PV power output, namely solar irradiance and module temperature, were used as inputs to this model. Weight function techniques were used to improve the accuracy of the single diode...
The prosperity of online rating systems makes them an important place for malicious vendors to mislead the public's online decisions, while security-related studies lag behind. In this work, we adopt a quantile regression model to investigate influential factors in online user choices and reveal the “self-exciting” property of online markets. Inspired by these findings, we propose a novel iterative...
In the big data era, it is vital to allocate the vast amount of data to various users efficiently. However, the data agents (data owners, collectors and users) are selfish and seek to maximize their own utilities instead of the overall system efficiency. In this paper, the data trading problem of a data market with multiple data owners, collectors and users is formulated and an iterative auction mechanism...
The development process of modern automated manufacturing plants requires a concurrent engineering process that integrates different engineering disciplines. Envisioning a concurrent development process, we propose an engineering process based on an AutomationML metamodel. The proposed metamodel contains standardized mechatronic component models, which act as the base engineering data for the disciplines...
Research and development in the field of automated vehicles has increased, along with related work on their software. Software testing of automated vehicles is key to launching safe and reliable vehicles. Several issues in the software testing of automated vehicles have been raised, including the extremely large space of test inputs, the high cost of test executions in a physical environment, test oracles...
Exhaustive testing of highly configurable software developed in continuous integration is rarely feasible in practice, due to the exponentially sized configuration space on the one hand and strict time constraints on the other. This entails using selective testing techniques to determine the most failure-inducing test cases while conforming to a highly constrained time budget. These challenges have been...
Stochastic simulations are developed and employed across many fields, to advise governmental policy decisions and direct future research. Faulty simulation software can have serious consequences, but its correctness is difficult to determine due to complexity and random behaviour. Stochastic simulations may output a different result each time they are run, whereas most testing techniques are designed...
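One common workaround for the nondeterminism this abstract describes, sketched below with an invented toy simulation, is to assert on an aggregate statistic over many seeded runs rather than on any single output:

```python
import random

# Sketch: testing a stochastic simulation by checking a statistic over many
# runs instead of a single (nondeterministic) output. The "simulation" here
# is a stand-in: it estimates the mean of a fair six-sided die roll.

def simulate(seed):
    """One seeded run: the mean of 1000 die rolls."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(1000)) / 1000

# A single run varies from run to run, but the average over many seeded
# runs should lie close to the true mean of 3.5.
estimate = sum(simulate(s) for s in range(100)) / 100
assert abs(estimate - 3.5) < 0.05
```

More principled variants of this idea use formal statistical hypothesis tests or metamorphic relations between runs, but the structure is the same: the oracle constrains a distribution, not a value.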
Evolving software systems involves data schema changes, and because of those schema changes, data has to be converted. Converting data between two different schemas while the system continues operating is a challenge when that system is expected to be available at all times. Data conversion in event-sourced systems introduces new challenges because of the relative novelty of the event sourcing architectural...
Non-volatile memory (NVM) is emerging as a fast byte-addressable alternative for storing persistent data. Ensuring atomic durability in NVM requires logging. Existing techniques have proposed software logging either by using streaming stores for an undo log, or, by relying on the combination of clflush and mfence for a redo log. These techniques are suboptimal because they waste precious execution...
Code review in practice is often performed change-based, i.e. using the code changes belonging to a task to determine which code to review. Previous studies found that two variations of this process are used in industry: pre-commit review (review-then-commit) and post-commit review (commit-then-review). The choice of one of these variants has implications not only for practitioners deciding...
Performance regressions, such as a higher CPU utilization than in the previous version of an application, are caused by software application updates that negatively affect the performance of an application. Although a plethora of mining software repository research has been done to detect such regressions, research tools are generally not readily available to practitioners. Application Performance...
Software cost and effort estimation is a necessary step in the software development lifecycle to track progress, manage resources, and negotiate. Though many accepted cost models exist, local calibration results in more accurate estimates. Locally calibrating Unified Code Count (UCC)’s dataset based on COCOMO (Constructive Cost Model)® II helped UCC’s development team learn which factors affected...
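For context on what local calibration adjusts, the sketch below shows the shape of the COCOMO II post-architecture effort equation with its published nominal constants (A = 2.94, B = 0.91). The example inputs are invented, and the actual UCC calibration results from the paper are not reproduced here; calibration refits A and B (and the multipliers) to an organization's own historical data.

```python
# Sketch of the COCOMO II post-architecture effort equation:
#   PM = A * Size^E * product(effort multipliers)
#   E  = B + 0.01 * sum(scale factors)
# A = 2.94 and B = 0.91 are the published nominal constants; local
# calibration refits them to an organization's own project data.

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Estimated effort in person-months for `ksloc` thousand source lines."""
    E = B + 0.01 * sum(scale_factors)
    em = 1.0
    for m in effort_multipliers:
        em *= m
    return A * ksloc ** E * em

# Example: a 10 KSLOC project with all-nominal scale factor ratings
# (PREC, FLEX, RESL, TEAM, PMAT) and all effort multipliers at 1.0.
print(round(cocomo_ii_effort(10, [3.72, 3.04, 4.24, 3.29, 4.68], []), 1))
```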