Users' acceptance of a Web application depends strongly on that application's usability. One of the main means of evaluating and improving the usability of a system is carrying out inspections. In this paper, we propose automating this process through an Assistant for Usability Inspection Process ("APIU"). We also evaluate the assistant by carrying out an experimental study. The...
Preparing test data that adequately exercises a given piece of code is expensive and effort-intensive. This paper presents a tool, AutoGen, that reduces this cost and effort by automatically generating test data for C code. AutoGen takes the C code and a criterion such as statement coverage, decision coverage, or Modified Condition/Decision Coverage (MC/DC) and generates non-redundant test data...
Formal behavioral models are used in model-driven software development to analyze and reason about system behavior. While scenario-based models highlighting interprocess communication are closer to distributed system requirements, state-based models highlighting intra-process behavior are suitable for code generation. In this paper we present 'Footprinter', a tool which exploits the relative...
We describe a benchmark of publicly available multi-threaded programs with documented bugs in them. This project was initiated a few years ago with the goal of helping research groups in the fields of concurrent testing and debugging to develop tools and algorithms that improve the quality of concurrent programs. We present a survey of the benchmark's usage, concluding that the benchmark had an impact...
Assessing a testing strategy and comparing different testing strategies is a crucial part of current research on software testing. Often, manual error seeding is used to generate faulty programs. As a consequence, the results obtained from examining these programs are often not reproducible and likely to be biased. In this paper, a flexible approach to the benchmarking of...