This paper presents a parallel algorithm for computing the one-dimensional unstable manifold of a hyperbolic fixed point of a discrete dynamical system. It is pointed out that parallel computation can be realized by subdividing the unstable manifold into mutually independent subsections. In each subsection, the one-dimensional unstable manifold is grown by forward iteration. Curvature constraint and distance...
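As a rough illustration of growing a one-dimensional unstable manifold by forward iteration, the sketch below seeds mutually independent subsections along the unstable eigenvector of a hyperbolic fixed point of the Hénon map. The map, its parameters, and the subsection split are illustrative assumptions, not taken from the paper; the paper's curvature and distance constraints are omitted.

```python
import math

A, B = 1.4, 0.3  # classic Henon parameters (assumed for illustration)

def henon(p):
    x, y = p
    return (1.0 - A * x * x + y, B * x)

# Hyperbolic fixed point: x solves x = 1 - A x^2 + B x.
x_fp = (-(1.0 - B) + math.sqrt((1.0 - B) ** 2 + 4.0 * A)) / (2.0 * A)
fixed_point = (x_fp, B * x_fp)

# Unstable eigen-direction of the Jacobian [[-2Ax, 1], [B, 0]] at the fixed point.
lam = -A * x_fp - math.sqrt((A * x_fp) ** 2 + B)   # eigenvalue with |lam| > 1
v = (1.0, lam + 2.0 * A * x_fp)                    # matching eigenvector (1, v2)

def grow_subsection(t0, t1, n_seed=200, n_iter=5):
    """Grow one subsection of the 1-D unstable manifold: seed points on
    [t0, t1] along the unstable eigenvector, then push each seed forward
    under the map.  Subsections share no state, so in a real implementation
    they can be grown in parallel."""
    pts = []
    for i in range(n_seed):
        t = t0 + (t1 - t0) * i / (n_seed - 1)
        p = (fixed_point[0] + t * v[0], fixed_point[1] + t * v[1])
        for _ in range(n_iter):
            p = henon(p)
        pts.append(p)
    return pts

# Two mutually independent subsections of a seed segment near the fixed point.
delta = 1e-4
section1 = grow_subsection(delta, 2.0 * delta)
section2 = grow_subsection(2.0 * delta, 3.0 * delta)
```

The split of the seed segment here is arbitrary; a careful implementation also refines the point spacing as the manifold stretches, which is where the paper's curvature and distance constraints come in.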
This paper deals with an advanced methodology for cancer multi-classification using an Extreme Learning Machine (ELM) for microarray gene expression cancer diagnosis, which is used for addressing multicategory classification problems in the cancer diagnosis area. ELM avoids problems such as local minima, improper learning rate, and overfitting commonly faced by iterative learning methods...
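A minimal numpy sketch of the ELM idea the abstract refers to: hidden-layer weights are drawn at random and the output weights are solved analytically in one step, so there is no learning rate to tune and no iterative descent that can stall in a local minimum. The synthetic data, layer sizes, and tanh activation are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=40):
    """Extreme Learning Machine: random fixed hidden layer, output
    weights obtained by a single least-squares solve."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # one-shot output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# Tiny synthetic 3-class problem (hypothetical data, for illustration only):
# each class is a Gaussian blob shifted along a different axis.
X = rng.normal(size=(150, 4)) + np.repeat(np.eye(3, 4) * 3.0, 50, axis=0)
y = np.repeat(np.arange(3), 50)
Y = np.eye(3)[y]                                  # one-hot targets

model = elm_train(X, Y)
acc = (elm_predict(X, model) == y).mean()
```

The pseudoinverse solve replaces the entire training loop of a backpropagation network, which is the source of ELM's speed.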
This paper presents the results achieved by fault classifier ensembles based on a model-free supervised learning approach for diagnosing faults on oil rig motor pumps. The main goal is to compare two feature-based ensemble construction methods and present a third variation of one of them. The use of ensembles instead of single-classifier systems has been widely applied in classification problems...
Ensemble pruning is concerned with the reduction of the size of an ensemble prior to its combination. Its purpose is to reduce the space and time complexity of the ensemble and/or to increase the ensemble's accuracy. This paper focuses on instance-based approaches to ensemble pruning, where a different subset of the ensemble may be used for each different unclassified instance. We propose modeling...
A new method is presented which combines a deterministic analytical method and a probabilistic measure to classify rock types on the basis of their hyperspectral curve shape. This method is a supervised learning algorithm using Gaussian Processes (GPs) and the Observation Angle Dependent (OAD) covariance function. The OAD covariance function makes use of the properties of the Spectral Angle Mapper...
Most classification studies use all of the available object data. It is desirable to classify objects effectively using only subsets of the data. A rough-set-based reduct is a minimal subset of features that has almost the same discerning power as the entire feature set. Here, we propose multiple reducts, which are followed by the k-nearest neighbor with confidence to classify documents with higher...
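The pipeline the abstract describes, multiple reducts followed by k-nearest neighbor with confidence, can be sketched roughly as below. The hand-picked feature subsets stand in for actual rough-set reducts, and the confidence threshold and toy documents are assumed for illustration.

```python
from collections import Counter

def knn_with_confidence(q, data, features, k=3):
    """k-NN restricted to one feature subset (a stand-in for a rough-set
    reduct); returns (label, confidence), confidence being the vote share."""
    ranked = sorted(data, key=lambda row: sum((row[0][f] - q[f]) ** 2
                                              for f in features))
    votes = Counter(label for _, label in ranked[:k])
    label, n = votes.most_common(1)[0]
    return label, n / k

def classify_with_reducts(q, data, reducts, threshold=0.7):
    """Try each reduct in turn and accept the first prediction whose
    confidence clears the threshold; otherwise fall back to the most
    confident prediction seen across all reducts."""
    best = (None, 0.0)
    for feats in reducts:
        label, conf = knn_with_confidence(q, data, feats)
        if conf >= threshold:
            return label, conf
        if conf > best[1]:
            best = (label, conf)
    return best

# Toy "documents" as 2-feature vectors; feature 0 is the discriminative one.
docs = [((0.0, 5.0), "sports"), ((0.1, 5.1), "sports"), ((0.2, 4.9), "sports"),
        ((1.0, 5.0), "politics"), ((1.1, 5.2), "politics"), ((0.9, 4.8), "politics")]
```

In the actual method the feature subsets would be computed by rough-set reduction rather than chosen by hand.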
Requirement analysis is the preliminary step in the software development process. The requirements stated by the clients are analyzed, and an abstraction of them is created, which is termed the requirements model. Unified Modeling Language (UML) models are helpful for understanding problems, communicating with application experts, and preparing documentation. The static design view of the system can be...
The paper deals with a method for accurate semisymbolic time-domain analysis of highly idealized linear lumped circuits. Pulse and step responses can be computed by means of partial fraction decomposition. The procedure relies on an accurate computation of the poles of the transfer function. A well-known problem of the QR and QZ algorithms is their poor accuracy in the case of multiple roots. Moreover,...
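For simple poles, the partial-fraction route the abstract mentions can be sketched as: find the poles of the transfer function numerically, compute the residue at each, and sum the resulting exponentials. The toy transfer function below has well-separated simple poles, so it deliberately avoids the multiple-root accuracy problem the paper addresses.

```python
import numpy as np

# Toy transfer function H(s) = 1 / ((s + 1)(s + 2)), a stand-in for the
# idealized lumped circuits in the paper (simple poles only here; the
# hard case the paper targets is multiple or clustered poles).
den = np.poly([-1.0, -2.0])          # expand (s + 1)(s + 2) to s^2 + 3s + 2
poles = np.roots(den)                # recover the poles numerically

def residue(p, all_poles):
    """Residue of 1/prod(s - p_i) at the simple pole p."""
    others = [q for q in all_poles if not np.isclose(q, p)]
    return 1.0 / np.prod([p - q for q in others])

def impulse_response(t):
    """Partial-fraction form of the time response: h(t) = sum_i r_i e^{p_i t}."""
    return sum(residue(p, poles) * np.exp(p * t) for p in poles).real
```

For H(s) above the decomposition is 1/(s+1) - 1/(s+2), so h(t) = e^{-t} - e^{-2t}; the code recovers exactly that.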
Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. We give an algorithm for this problem that has running time and data requirements polynomial in the dimension and the inverse of the desired accuracy, with provably minimal assumptions on the Gaussians. As a simple consequence of our learning algorithm, we give the first...
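For orientation, the estimation task itself can be illustrated with textbook EM on a 1-D two-component mixture. Note this is not the paper's algorithm: the paper's contribution is a method with provable polynomial running-time and accuracy guarantees, which plain EM lacks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 1-D Gaussians (hypothetical data for illustration).
data = np.concatenate([rng.normal(-4.0, 1.0, 300), rng.normal(4.0, 1.0, 300)])

def em_gmm(x, n_iter=50):
    """Standard EM for a 2-component 1-D Gaussian mixture."""
    mu = np.array([-1.0, 1.0])
    sigma = np.array([1.0, 1.0])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalization).
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        n = resp.sum(axis=0)
        w = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sigma

w, mu, sigma = em_gmm(data)
```

With well-separated components EM converges quickly; the hard regime the paper handles is overlapping components, where EM can need many samples or converge to poor estimates.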
Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the responses to every query in Q, even when the number of queries is much larger than the number of rows in...
We present a new algorithm for learning a convex set in n-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in n times a function of k and ϵ where k is the dimension of the normal subspace (the span of normal vectors to supporting hyperplanes of the convex set) and the output is a hypothesis that correctly...
Instance-based learning algorithms are typically sensitive to the dissimilarity function they employ. The problem is frequently related to the Nearest Neighbor rules of these algorithms. This paper introduces a new dissimilarity measure, called the Heterogeneous Centered Difference Measure, which is tested over many well-known databases. The results are compared with those of other distance functions.
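The paper's Heterogeneous Centered Difference Measure is not reproduced here, but the kind of heterogeneous dissimilarity function such measures are compared against can be illustrated with the classic HEOM (Heterogeneous Euclidean-Overlap Metric), which mixes overlap distance for categorical attributes with range-normalized difference for numeric ones.

```python
def heom(a, b, ranges):
    """Heterogeneous Euclidean-Overlap Metric: overlap (0/1) distance for
    categorical attributes, range-normalized absolute difference for
    numeric ones.  `ranges[i]` is the numeric range of attribute i, or
    None if the attribute is categorical."""
    total = 0.0
    for x, y, r in zip(a, b, ranges):
        if r is None:                       # categorical attribute
            d = 0.0 if x == y else 1.0
        else:                               # numeric attribute
            d = abs(x - y) / r
        total += d * d
    return total ** 0.5

def nearest_neighbor(q, data, ranges):
    """Return the (features, label) row nearest to query q under HEOM."""
    return min(data, key=lambda row: heom(q, row[0], ranges))
```

Because every attribute's contribution is scaled into [0, 1], no single numeric attribute with a large range can dominate the nearest-neighbor decision, which is the typical failure mode the abstract alludes to.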
In this paper, a frequency domain feature extraction algorithm for palm-print recognition is proposed, which efficiently exploits the local spatial variations in a palm-print image. The entire image is segmented into several narrow-width spatial bands and a palm-print recognition scheme is developed based on extracting dominant spectral features from each of these bands using two-dimensional discrete...
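A generic sketch of band-wise spectral feature extraction of this kind, assuming a plain orthonormal 2-D DCT per band and a keep-the-largest-coefficients selection rule (the paper's exact transform and selection strategy are not reproduced):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]                       # frequency index (rows)
    M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    M[0] *= 1.0 / np.sqrt(2.0)
    return M * np.sqrt(2.0 / n)

def band_features(img, n_bands=4, n_coeffs=8):
    """Split the image into narrow horizontal bands and keep the
    largest-magnitude 2-D DCT coefficients of each band as features."""
    h = img.shape[0] // n_bands
    feats = []
    for i in range(n_bands):
        band = img[i * h:(i + 1) * h]
        D1 = dct_matrix(band.shape[0])
        D2 = dct_matrix(band.shape[1])
        spec = D1 @ band @ D2.T                     # separable 2-D DCT-II
        flat = np.sort(np.abs(spec).ravel())[::-1]  # magnitudes, descending
        feats.extend(flat[:n_coeffs])
    return np.array(feats)
```

Working band by band, rather than on the whole image at once, is what lets the features capture the local spatial variations the abstract emphasizes.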
Aiming at the ever-present problem of imbalanced data in text classification, the authors study several forms of imbalance, such as text number, class size, subclass, and class fold. Some useful conclusions are drawn from a series of related experiments: first, when two classes contain almost the same number of texts, the difference in word count becomes the major factor affecting the accuracy...
Collaborative filtering exploits user preferences, generally ratings, to provide users with recommendations. However, the ratings may not be completely trustworthy: the rating scale is usually coarse, and the rating values may be influenced by many factors. This paper is a first attempt at studying the expression of preferences in the form of preference relations, where users are asked to compare...
The increasing number of protein sequences from genome projects requires theoretical methods to predict transmembrane helical segments (TMHs). In this paper, a method based on the discrete wavelet transform (DWT) has been developed to predict the number and location of TMHs in membrane proteins. The PDB entry 1F88 is chosen as an example to describe the prediction process with this method. One group of test data...
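The transform at the core of such methods can be illustrated with a single Haar DWT level: the approximation coefficients smooth the signal so that a plateau (such as a hydrophobic stretch) stands out, while the detail coefficients capture local variation. The toy signal below is illustrative; the paper's wavelet choice and TMH decision rule are not reproduced.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5
         for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5
         for i in range(0, len(signal) - 1, 2)]
    return a, d

# A toy property profile with one "hydrophobic" plateau in the middle.
profile = [0.0] * 8 + [3.0] * 8 + [0.0] * 8
approx, detail = haar_dwt(profile)
```

In the TMH setting the input would be a per-residue property profile (e.g. hydrophobicity) of the protein sequence, and the segment boundaries would be read off from where the low-frequency content rises and falls.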
This work studies the use of Particle Swarm Optimization (PSO) as a classification technique. Beyond assessing classification accuracy, it investigates the following questions: does PSO present limitations for high-dimensional application domains? Is it less efficient for multi-class problems? To answer these questions, an experimental setup was realized that uses three high-dimensional data sets....
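One common way to use PSO as a classifier, sketched below on a toy 1-D two-class problem, is to let each particle encode one centroid per class and use training accuracy as the fitness. The encoding, the parameters, and the data are assumptions, not necessarily the paper's setup; one particle is seeded deterministically so the sketch is reproducible.

```python
import random

random.seed(0)

# Toy 1-D, two-class data (the paper's sets are high-dimensional).
data = [(x / 10.0, 0) for x in range(10)] + \
       [(2.0 + x / 10.0, 1) for x in range(10)]

def accuracy(centroids):
    """Fraction of points assigned to the centroid of their own class."""
    hits = 0
    for x, y in data:
        pred = min(range(len(centroids)), key=lambda c: abs(x - centroids[c]))
        hits += (pred == y)
    return hits / len(data)

def pso_classify(n_particles=10, n_iter=40, w=0.7, c1=1.5, c2=1.5):
    """PSO as a classifier: each particle is a pair of class centroids;
    fitness is training accuracy.  Standard velocity/position updates
    with personal-best and global-best attraction."""
    pos = [[0.0, 1.0]] + [[random.uniform(0, 3), random.uniform(0, 3)]
                          for _ in range(n_particles - 1)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=accuracy)[:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if accuracy(pos[i]) > accuracy(pbest[i]):
                pbest[i] = pos[i][:]
                if accuracy(pbest[i]) > accuracy(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_classify()
```

Because the global best is only ever replaced by a strictly better position, the final accuracy can never fall below that of the seeded starting particle.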
Several adaptation approaches, such as policy-based and reinforcement learning, have been devised to ensure end-to-end quality-of-service (QoS) for enterprise distributed systems in dynamic operating environments. Not all approaches are applicable for distributed real-time and embedded (DRE) systems, however, which have stringent accuracy, timeliness, and development complexity requirements. Supervised...
An application of the Parallel Radial Basis Function (PRBF) network model to the prediction of chaotic time series is presented in this paper. The PRBF net consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes of each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the PRBF is a weighted...
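The PRBF structure can be sketched as follows: each subnet embeds the series with its own dimension, fits an RBF expansion by least squares, and the subnet outputs are combined by a weighted sum. The logistic-map series, the subnet count, and the inverse-training-error combination weights below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(series, m):
    """Phase-space reconstruction: delay vectors of dimension m, each
    predicting the next value of the series."""
    X = np.array([series[t - m:t] for t in range(m, len(series))])
    y = np.array(series[m:])
    return X, y

def train_rbf(X, y, n_centers=20, width=0.5):
    """One RBF subnet: Gaussian features at data-point centers, output
    weights by least squares."""
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers, axis=2) ** 2 / width ** 2)
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return centers, width, w

def predict_rbf(X, model):
    centers, width, w = model
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers, axis=2) ** 2 / width ** 2)
    return Phi @ w

# A chaotic series from the logistic map, as a stand-in benchmark.
s = [0.4]
for _ in range(400):
    s.append(3.9 * s[-1] * (1.0 - s[-1]))

# Two parallel RBF subnets with different embedding dimensions.
errs, preds = [], []
for m in (2, 3):
    X, y = embed(s, m)
    X, y = X[-300:], y[-300:]            # align targets across subnets
    model = train_rbf(X, y)
    p = predict_rbf(X, model)
    errs.append(float(np.mean((p - y) ** 2)) + 1e-12)
    preds.append(p)

# Weighted combination: weight each subnet by its inverse training error.
wts = 1.0 / np.array(errs)
wts /= wts.sum()
combined = wts[0] * preds[0] + wts[1] * preds[1]
```

Since the subnets share no parameters, they can be trained independently, which is what makes the parallel arrangement natural.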
Classification is a widely used mechanism for facilitating Web service discovery. Existing methods for automatic Web service classification only consider the case where the category set is small. When the category set is large, conventional classification methods usually require a large sample collection, which is hardly available in real-world settings. This paper presents a novel method to conduct...