Named Data Networking (NDN) represents one of the major Information Centric Networking (ICN) candidates for future Internet architectures. It treats data as the central element and leverages in-network caching. Access control is a fundamental security feature in this architecture: it limits data access to authorized entities only. However, it can no longer be tied to a content location or to a particular...
One of the main objectives of software engineers is to provide software-based solutions to social problems and to increase the availability of social welfare. In that sense, software quality is directly related to addressing users' needs and their level of satisfaction. To reflect user requirements in the software process, the correct design of the database model provides a critical...
Network attack path analysis is an important method for analyzing the security status of a computer network; it can automatically analyze the correlation between network vulnerabilities and the potential threats resulting from them, and it plays a guiding role in establishing network security policy. This paper chooses NVD and Bugtraq as vulnerability data sources, and extracts key properties...
The selection of oligonucleotide probes for microarrays is still a very difficult task. With the rapid growth of environmental databases (metagenomics programs coupled with next-generation sequencing), the computational capacity requirements of probe design algorithms have increased enormously. The use of parallel and distributed architectures can considerably reduce the complexity and the computational...
We tackle the problem of answering maximum probabilistic top-k tuple set queries. We use a sliding-window model on uncertain data streams and present an efficient algorithm for processing sliding-window queries on uncertain streams. In each sliding window, the algorithm selects the k tuples with the highest probabilities from candidate sets containing different numbers of the highest-scoring tuples. Then, the...
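The windowed selection step described in this abstract might be sketched as follows. This is a minimal illustration under assumptions: tuples are represented as (score, probability) pairs, the candidate set is taken as the 2k highest-scoring tuples in the window, and the function names are invented here, not taken from the paper.

```python
import heapq
from collections import deque

def topk_in_window(window, k):
    """Among the highest-scoring tuples in the window, pick the k with the
    highest existence probabilities (one plausible reading of the
    abstract's selection step)."""
    # Keep the 2k highest-scoring tuples as candidates (assumed cutoff).
    candidates = heapq.nlargest(2 * k, window, key=lambda t: t[0])
    # From those candidates, return the k most probable tuples.
    return heapq.nlargest(k, candidates, key=lambda t: t[1])

def sliding_topk(stream, window_size, k):
    """Yield a top-k answer for each full sliding window over the stream."""
    window = deque(maxlen=window_size)
    for item in stream:          # item = (score, probability)
        window.append(item)
        if len(window) == window.maxlen:
            yield topk_in_window(window, k)
```

For example, on the stream `[(10, 0.9), (8, 0.5), (7, 0.95), (5, 0.2), (9, 0.1)]` with a window of 4 and k = 2, the first window's answer is the two most probable of its four tuples.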
A considerable research effort has already been put into the identification (and consequent filtering) of local segments of “unusual” composition (Compositionally Biased or Low Complexity Regions; CBRs or LCRs) in protein sequences. This interest stems mainly from the fact that CBRs are known to create artifacts (i.e. biologically irrelevant hits) in sequence database search methods...
Numerous studies have identified measures that relate to the fault-proneness of software components. An issue practitioners face in implementing these measures is that the measures tend to provide predictions at a very high level, for instance the per-module level, so it is difficult to provide specific recommendations based on those predictions. We examine a more specific measure, called software...
Data protection is a challenge when outsourcing medical analyses, especially when dealing with patient-related data. While transfer channels can be secured using encryption mechanisms, protecting the data during analysis is difficult, as it usually involves processing steps on the plain data. A common use case in bioinformatics is a scientist searching for a biological sequence of amino...
In the processing of set-oriented ECA rules for detecting changes to views, condition evaluation is the key step. This paper introduces a method of condition evaluation based on the R&H algorithm. In this way, we can filter out irrelevant data operations and evaluate conditions successfully.
This paper deals with the characterization of security-related vulnerabilities based on public data reported in the Open Source Vulnerability Database. We focus on the analysis of vulnerability life cycle events corresponding to the vulnerability discovery, the vulnerability disclosure, the patch release, and the exploit availability. We study the distribution of the time between these events considering...
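The life-cycle intervals this abstract studies (discovery, disclosure, patch release, exploit availability) can be computed straightforwardly once each vulnerability is reduced to four event dates. The sketch below uses hypothetical records and invented names; it only illustrates the inter-event-time computation, not the paper's statistical analysis.

```python
from datetime import date

# Hypothetical vulnerability records:
# (discovery, disclosure, patch release, exploit availability)
vulns = [
    (date(2020, 1, 1), date(2020, 1, 15), date(2020, 2, 1), date(2020, 1, 20)),
    (date(2020, 3, 1), date(2020, 3, 5), date(2020, 3, 30), date(2020, 4, 2)),
]

# Indices into a record, for readability.
DISCOVERY, DISCLOSURE, PATCH, EXPLOIT = range(4)

def interval_days(records, i, j):
    """Days elapsed between life-cycle event i and event j, per record.
    Negative values mean event j preceded event i (e.g. a zero-day
    exploit available before the patch)."""
    return [(r[j] - r[i]).days for r in records]

disclosure_to_patch = interval_days(vulns, DISCLOSURE, PATCH)
patch_to_exploit = interval_days(vulns, PATCH, EXPLOIT)
```

The resulting samples (here, 17 and 25 days from disclosure to patch) are what one would then fit distributions to.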
Testing of data-centric, health-care integrated systems involves numerous non-traditional testing challenges, particularly in the areas of input validation, functional testing, regression testing, and load testing. For these and other types of testing, the test-data suites typically need to be relatively large and exhibit characteristics similar to real data. Generating test data for integrated...
Tic-tac-toe and Fanorona are games that can be played at an extremely high level by computer engines. Endgame databases are powerful tools for creating these engines, as they contain valuable information about how to play the game. Unfortunately, it is quite impossible for human players to learn game strategies from endgame databases, which consist only of raw sequences of...
With the rapid development of the Internet and the progress of network facilities, cloud database services have become very popular with web users, allowing them to retrieve, update, and delete records in the cloud database. However, users of such services do not always trust the cloud storage provider to protect their privacy. Therefore, neither the user's database information nor the access pattern...
The complexity of today's networks and distributed systems makes network monitoring difficult. The amount of data produced by distributed security tools can be overwhelming, so identifying the riskiest alert through a manual process is very difficult and limited, given the huge number of network alerts with many attributes, such as asset, priority, reliability, risk, and type. The common...
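One common way to rank alerts by the attributes this abstract lists is a multiplicative risk score in the style of OSSIM (risk = asset × priority × reliability / 25). That formula is a widely used convention, not necessarily this paper's method, and the alert records below are invented for illustration.

```python
def alert_risk(asset, priority, reliability):
    """OSSIM-style risk score: asset and priority on 0-5 scales,
    reliability on a 0-10 scale, giving a 0-10 risk value."""
    return (asset * priority * reliability) / 25.0

# Hypothetical alerts with the attributes named in the abstract.
alerts = [
    {"id": 1, "asset": 4, "priority": 3, "reliability": 8},
    {"id": 2, "asset": 5, "priority": 5, "reliability": 6},
]

# Rank alerts from riskiest to least risky.
ranked = sorted(
    alerts,
    key=lambda a: alert_risk(a["asset"], a["priority"], a["reliability"]),
    reverse=True,
)
```

Under this scoring, alert 2 (risk 6.0) outranks alert 1 (risk 3.84), which is the kind of automatic prioritization a manual process struggles to reproduce at scale.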
Micro-zone analysis involves the use of data fusion and data mining techniques in order to understand the relative impact of many different variables. Data fusion requires the ability to combine or “fuse” data from multiple data sources. Data mining involves the application of sophisticated algorithms, such as neural networks and decision trees, to describe micro-zone behavior and predict future values...
Most keystroke dynamics studies have been evaluated using a specific kind of dataset in which users type an imposed login and password. Moreover, these studies are optimistic, since most of them use different acquisition protocols, private datasets, controlled environments, etc. To enhance the accuracy of keystroke-dynamics performance, the main contribution of this paper is twofold. First,...
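The raw material of keystroke dynamics is the timing of key presses and releases; the classic features are dwell times (how long each key is held) and flight times (release-to-next-press latency). The sketch below shows that extraction step under assumed conventions: events as (key, press, release) triples in milliseconds, sorted by press time. It is a generic illustration, not this paper's protocol.

```python
def timing_features(events):
    """events: list of (key, press_ms, release_ms), sorted by press time.
    Returns (dwell, flight):
      dwell[i]  = hold duration of key i,
      flight[i] = latency from release of key i to press of key i+1
                  (negative if the next key is pressed before release)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```

For example, the sequence `[("a", 0, 100), ("b", 250, 400), ("c", 500, 560)]` yields dwell times `[100, 150, 60]` and flight times `[150, 100]`; feature vectors like these are what the compared studies feed to their classifiers.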
This paper presents a query-by-singing/humming method that enables fast melody comparison. The basic idea is to measure the distances between note sequences in the frequency domain instead of the time domain. Thanks to the fast Fourier transform, we can convert note sequences of different lengths into equal-dimension vectors via zero padding. The equal dimensionality allows us to compare the vectors...
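The zero-padding idea from this abstract can be sketched in a few lines: pad each note sequence to a common length n, take its FFT, and compare the resulting fixed-dimension vectors. Using the FFT magnitude and Euclidean distance is an assumption made here for simplicity; the paper's exact signature and distance measure may differ.

```python
import numpy as np

def fft_signature(notes, n):
    """Zero-pad a note sequence to length n and return its FFT magnitude,
    giving every sequence the same dimensionality regardless of length."""
    padded = np.zeros(n)
    padded[:len(notes)] = notes
    return np.abs(np.fft.fft(padded))

def melody_distance(query, candidate, n=64):
    """Euclidean distance between the frequency-domain signatures."""
    return float(np.linalg.norm(fft_signature(query, n) - fft_signature(candidate, n)))
```

Because both sequences are padded to the same length before the transform, a short hummed query and a longer database melody become directly comparable vectors.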
This paper presents an overview of the new developments carried out to offer a reliable and efficient support for different computing infrastructures in the Kepler workflow orchestration system. The aim of the work is to help scientists to transparently use these infrastructures regardless of the underlying middleware. We introduce new complex workflow scenarios developed in the context of the EU...
This paper introduces a method for the efficient comparison and retrieval of near duplicates of a query video from a video database. The method generates video signatures from histograms of orientations of the optical flow of feature points, computed from uniformly sampled video frames and concatenated over time to produce time series, which are then aligned and matched. Major incline matching, a data-reduction...
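The signature-building step in this abstract (orientation histograms of optical flow, concatenated over time) can be illustrated without a full optical-flow pipeline by assuming the flow at the feature points of each frame is already available as (dx, dy) displacement vectors. The bin count, normalization, and function names below are assumptions for the sketch, not the paper's parameters.

```python
import numpy as np

def orientation_histogram(flow, bins=8):
    """Normalized histogram of flow-vector orientations.
    flow: (N, 2) array of (dx, dy) displacements at feature points."""
    angles = np.arctan2(flow[:, 1], flow[:, 0])          # in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)                     # avoid divide-by-zero

def video_signature(per_frame_flows, bins=8):
    """Concatenate per-frame orientation histograms into one time series,
    the signature that would then be aligned and matched across videos."""
    return np.concatenate([orientation_histogram(f, bins) for f in per_frame_flows])
```

A video sampled at F frames thus yields a signature of length F × bins, and near-duplicate retrieval reduces to comparing such time series.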
Software projects are constantly increasing in complexity, which arises from the growing sophistication of software applications and their implemented features. However, most projects are developed by small organizations. Since these companies have a reduced dimension, the number of individuals on each software development team will also be significantly...