Sweeps is a well-known localization algorithm for wireless sensor networks (WSNs) based on consistent connection constraints; it uses the concept of finite localization to relax the constraint conditions from trilateration to bilateration. In this paper, we analyze finite localization in depth across various localization environments and uniquely localize finitely localized nodes without flip ambiguities using iterative...
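The core of bilateration is that two anchors and two range measurements determine a node's position only up to a mirror pair — the flip ambiguity the abstract mentions. A minimal sketch of computing that candidate pair (not the Sweeps algorithm itself; anchor coordinates and ranges are illustrative):

```python
import math

def bilaterate(a, b, r1, r2):
    """Return the two candidate positions of a node at distance r1 from
    anchor a and r2 from anchor b -- the 'flip ambiguity' mirror pair."""
    d = math.dist(a, b)
    # distance from a, along the anchor axis, to the foot of the chord
    x = (d * d + r1 * r1 - r2 * r2) / (2 * d)
    h = math.sqrt(max(r1 * r1 - x * x, 0.0))  # half-chord height
    ex, ey = (b[0] - a[0]) / d, (b[1] - a[1]) / d  # unit vector a -> b
    px, py = a[0] + x * ex, a[1] + x * ey          # foot of perpendicular
    # the two mirror-image candidates on either side of the anchor line
    return (px - h * ey, py + h * ex), (px + h * ey, py - h * ex)

p1, p2 = bilaterate((0, 0), (4, 0), 2.5, 2.5)
# candidates (2.0, 1.5) and (2.0, -1.5): trilateration would need a
# third anchor to pick one; finite localization keeps both for later
```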
Fuzzy cognitive maps (FCM) are often represented and implemented using matrix-vector multiplication (MxV). Since the multiplication operation is critical to the performance of FCM computations, it is important to ensure it is implemented efficiently. Because the connection matrix representing the FCM is typically static and often contains only a few nonzero elements, it is viable...
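The sparsity observation above can be sketched concretely: instead of a dense MxV step, each concept only sums over its nonzero incoming weights, stored as an adjacency list. A minimal FCM inference step under that assumption (sigmoid activation is a common FCM choice; the weights below are made up):

```python
import math

def fcm_step(state, rows):
    """One FCM inference step with a sparse connection matrix:
    rows[i] lists the (j, w_ij) nonzero entries feeding concept i.
    Sigmoid squashing, as commonly used for FCM activation."""
    new_state = []
    for nbrs in rows:
        s = sum(state[j] * w for j, w in nbrs)
        new_state.append(1.0 / (1.0 + math.exp(-s)))
    return new_state

# 3 concepts, only 3 nonzero weights instead of a dense 3x3 matrix
rows = [[(1, 0.5)], [(0, -0.4), (2, 0.8)], [(1, 1.0)]]
state = fcm_step([1.0, 0.5, 0.2], rows)
```

The work per step is proportional to the number of nonzero weights rather than to the square of the number of concepts, which is the payoff the abstract alludes to.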
This work introduces a hard clustering algorithm based on the Particle Swarm Optimization (PSO) metaheuristic that partitions objects using their relational descriptions, given by a single dissimilarity matrix. PSO is a population-based metaheuristic well known for its simplicity and good performance, and it has already been adapted as a clustering algorithm for vector data. The proposed...
In this study, we propose a hybrid knowledge-based framework for author name disambiguation. The developed approach helps incrementally identify authors of documents in data acquired from various sources. The nature of the problem calls for an orchestrated use of several methods; thus, the framework is composed of two levels. The first level contains a rule-based disambiguation algorithm. The second...
Multitask learning methods learn multiple related tasks together and improve results compared with schemes that treat each task independently. To incorporate the information shared across tasks, various regularizers have been integrated into existing techniques. In this paper, we explore convex formulations of multitask learning with sparsity...
Eye-gaze patterns, or scanpaths, of subjects viewing art while answering questions about it have been used to decode those tasks with classifiers and machine learning techniques. Some of these techniques require the artwork to be divided into several areas or regions of interest. In this paper, two ways of clustering the static visual stimuli, k-means and the density...
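Using k-means to carve a stimulus into areas of interest amounts to clustering the 2-D fixation coordinates and treating each cluster centre as an AOI. A minimal, deterministic sketch (farthest-point seeding and the fixation coordinates are illustrative, not from the paper):

```python
import math

def kmeans_aoi(points, k, iters=10):
    """Tiny k-means that groups 2-D fixation coordinates into k areas
    of interest; farthest-point seeding keeps the run deterministic."""
    centers = [points[0]]
    while len(centers) < k:  # seed each new center far from existing ones
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each fixation to its nearest center
            groups[min(range(k), key=lambda i: math.dist(p, centers[i]))].append(p)
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   for g in groups]
    return centers

# fixations concentrated on two regions of a painting
fixations = [(10, 10), (11, 9), (9, 11), (80, 80), (82, 78), (79, 81)]
aois = sorted(kmeans_aoi(fixations, 2))
```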
With the rapid growth of uncertain, large-scale datasets, Fuzzy Possibilistic C-means clustering (FPCM) and Granular Computing (GrC) were introduced together to solve the feature-selection and outlier-detection problems. Utilizing the advantages of FPCM and GrC, an Advanced Fuzzy Possibilistic C-means clustering based on Granular Computing (GrFPCM) was proposed to select features...
One important aim in the tire industry when finalizing a tire design is modeling the noise characteristics as perceived by the car's passengers. In previous work, the problem was studied using heuristic algorithms that minimize the noise by searching for a sequence under constraints imposed by the tire industry. We present a new technique to compute the noise. We also propose...
In this paper, we present a decision process to auto-adapt and improve human-machine interaction, simplifying the integration of algorithms and functionalities. The decision process is part of an innovative approach that integrates contextual information to orchestrate the behaviours of an interactive system (i.e. the perception and actuation features involved during interaction). Classical approaches focus...
In this paper, we design a progressive learning algorithm for multi-label classification that learns new labels while retaining the knowledge of previous labels. New output neurons corresponding to new labels are added, and the neural network's connections and parameters are automatically restructured as if the label had been present from the beginning. This work is the first of its kind in multi-label...
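The key structural operation is growing the output layer without disturbing what the network already knows. A minimal sketch of that idea (the small-random initialization is a hypothetical choice for illustration, not the paper's exact restructuring rule):

```python
import random

def add_label_neuron(W, b, seed=0):
    """Append one output neuron for a newly observed label.
    Existing weight rows and biases are left untouched, so predictions
    for the old labels are unchanged; the new row gets a small random
    init (a hypothetical scheme, not the paper's exact rule)."""
    rng = random.Random(seed)
    n_inputs = len(W[0])
    W.append([rng.gauss(0.0, 0.01) for _ in range(n_inputs)])
    b.append(0.0)
    return W, b

W = [[0.2, -0.1], [0.4, 0.3]]  # 2 existing labels, 2 hidden inputs
b = [0.0, 0.1]
W, b = add_label_neuron(W, b)  # now 3 output neurons
```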
DBpia is the largest digital-bibliography service provider in Korea, offering several convenience functions for researchers. DBpia users (i.e., researchers) can search for papers via several routes, such as publications, publishers, authors, and keywords. Although researchers can exploit these search functions, they may still be left with a large number of search results as candidate papers to read....
When traditional sample-selection methods are used to compress large data sets, the computational complexity is very high and the process is time consuming. To avoid these shortcomings, we propose a new method that selects samples based on non-stable cut points. Using the basic property of a convex function, that its extreme values occur at the endpoints of intervals, the method measures...
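The convexity property being exploited is worth making concrete: for a convex function, the maximum over a closed interval is always attained at an endpoint, so only two evaluations are needed instead of a scan over the interior. A minimal sketch (the function and interval are illustrative):

```python
def max_on_interval(f, a, b):
    """For a convex f, the maximum over [a, b] is attained at an
    endpoint, so two evaluations suffice instead of scanning."""
    return max(f(a), f(b))

# f(x) = x^2 is convex; its max on [-2, 3] sits at the endpoint x = 3
peak = max_on_interval(lambda x: x * x, -2.0, 3.0)  # = 9.0
```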
Uncertain-data clustering is an essential task in data-mining research. Many traditional clustering methods have been extended with new similarity measures to tackle this issue. Unlike certain-data clustering, uncertain-data clustering focuses more on evaluating the distributional similarity between uncertain data objects. In this paper, based on the KL-divergence and the JS-divergence,...
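The two divergences named above can be sketched for discrete distributions; the JS-divergence, built from KL against the mixture, is symmetric and bounded, which is why it is attractive as a clustering similarity (the example distributions are made up):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: KL of each side against the mixture.
    Symmetric in p and q, and bounded above by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p, q = [0.5, 0.5], [0.9, 0.1]
# js(p, q) == js(q, p), while kl(p, q) != kl(q, p) in general
```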
The success of deep learning proves that deep models can achieve much better performance than shallow models in representation learning. However, deep neural networks with a stacked auto-encoder structure suffer from low learning efficiency, since the commonly used training algorithms are variants of iterative, time-consuming gradient descent, especially when the network structure...
This paper proposes a new, fully automated detection algorithm for ultrasound follicle images. The proposed algorithm uses multiple concentric layers (MCL) technology, based on the presence of concentric layers surrounding a focal area in the follicle region. The algorithm comprises three processes: image preprocessing, detection of focal areas, and multiple concentric...
Sohrabi and Barforoush proposed the BVBUC (Bitwise Vertical Bottom-Up Colossal) algorithm for mining colossal patterns based on a bottom-up scheme. However, it spends considerable time checking subsets and supersets, because it generates many candidates, and it consumes substantial memory to store them. In this paper, we propose a new method for mining colossal patterns. First, the CP (Colossal Pattern)-tree...
Mining high-utility itemsets in transactional databases has been an emerging topic in recent years, since it can reveal more information for decision making and has been widely used in many real-life applications. Traditional high-utility itemset mining (HUIM) considers only the utility values of the itemsets, without timestamps or periodic constraints. In this paper, we present...
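The utility measure underlying HUIM can be sketched directly: the utility of an itemset is summed over every transaction that contains all its items, weighting each purchase quantity by the item's unit profit. A minimal illustration (the profit table and transactions are made up):

```python
def utility(itemset, transactions, profit):
    """Standard HUIM utility of an itemset: over every transaction that
    contains all its items, sum quantity * unit profit."""
    total = 0
    for t in transactions:  # t maps item -> purchase quantity
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total

profit = {"a": 5, "b": 2, "c": 1}
transactions = [{"a": 1, "b": 2}, {"a": 2, "c": 3}, {"b": 4}]
# utility({"a", "b"}, ...) counts only the first transaction: 1*5 + 2*2 = 9
```

Note that, unlike frequency, utility is neither monotone nor anti-monotone, which is what makes HUIM harder than classical frequent-itemset mining.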
Meta-heuristics have long been applied to the Travelling Salesman Problem (TSP), but information is still lacking on determining the best-performing parameters. This paper examines the impact of the Simulated Annealing (SA) and Discrete Artificial Bee Colony (DABC) parameters on the TSP. One special consideration of this paper is how the Neighborhood Structure (NS) interacts...
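To ground the parameters under study, here is a minimal SA for the TSP with a 2-opt (segment-reversal) neighborhood structure; `t0` and `cooling` are exactly the kind of parameters whose tuning the paper investigates (the 4-city instance is illustrative, not from the paper):

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_tsp(dist, t0=10.0, cooling=0.995, iters=2000, seed=1):
    """Simulated annealing for the TSP with a 2-opt neighborhood.
    t0 (initial temperature) and cooling (geometric decay) are the
    SA parameters whose settings drive performance."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, t = tour_length(tour, dist), t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
        best = min(best, tour_length(tour, dist))
        t *= cooling
    return best

# 4 cities at the corners of a unit square: the optimal tour length is 4
s2 = 2 ** 0.5
dist = [[0, 1, s2, 1], [1, 0, 1, s2], [s2, 1, 0, 1], [1, s2, 1, 0]]
```

Swapping the 2-opt move for another neighborhood structure is a one-line change here, which is the kind of NS/parameter interaction the paper measures.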
In this paper, a novel autonomous data-driven clustering approach, called AD_clustering, is presented for live data streams processing. This newly proposed algorithm is a fully unsupervised approach and entirely based on the data samples and their ensemble properties, in the sense that there is no need for user-predefined or problem-specific assumptions and parameters, which is a problem most of the...
Particle swarm optimization (PSO) is a stochastic population-based algorithm designed for real-parameter optimization problems. PSO is a simple and powerful algorithm; however, its performance degrades on non-separable and ill-conditioned problems. In this article, we discuss the relation between the Hessian matrix of a function and the covariance matrix of the search...
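For reference, the baseline algorithm whose search distribution the article analyzes looks like this in its standard global-best form (the inertia and acceleration coefficients below are common textbook settings, not values from the article):

```python
import random

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard (global-best) PSO for a real-parameter
    objective f; w, c1, c2 are the usual inertia and acceleration
    coefficients."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)  # separable, well-conditioned
best = pso(sphere, dim=2)
```

On a separable, well-conditioned function like the sphere this converges readily; the degradation the abstract describes appears once the objective is rotated or badly scaled.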