Network science is often used to understand underlying phenomena that are reflected through data. In real-world applications, this understanding supports decision makers attempting to solve complex problems. Practitioners designing such systems must overcome difficulties due to the practical limitations of the data and the fidelity of a network abstraction. This paper explores the design of a network...
Conventional approaches for photovoltaic maximum power point tracking (PV MPPT) design based on rule-of-thumb assumptions might not result in the optimal performance in the field. To improve the field performance of practical MPPT designs, this paper proposes a comprehensive approach to MPPT design driven by experimentally measured field data. The data on the dynamic behavior of PV panel I-V characteristics...
We present Mixture of Support Vector Data Descriptions (mSVDD) for one-class classification or novelty detection. A mixture of optimal hyperspheres is automatically discovered to describe the data. The model consists of two parts: a log-likelihood term that controls the fit of the data to the model (empirical risk) and a regularization quantizer that controls the generalization ability of the model (general risk). Expectation Maximization...
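As a rough geometric intuition only: a single data-describing hypersphere flags points outside it as novel. The actual mSVDD learns a *mixture* of optimal hyperspheres via Expectation Maximization; the data and the crude center/radius estimate below are illustrative assumptions, not the paper's method.

```python
import math

# Crude one-hypersphere data description: center = mean of the training
# points, radius = distance to the farthest training point. A point
# falling outside the sphere is flagged as novel. (mSVDD instead learns
# a mixture of optimized hyperspheres via EM; this is only the intuition.)
TRAIN = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

center = tuple(sum(coord) / len(TRAIN) for coord in zip(*TRAIN))
radius = max(math.dist(p, center) for p in TRAIN)

def is_novel(point, slack=1e-9):
    """Flag points that fall outside the describing hypersphere."""
    return math.dist(point, center) > radius + slack
```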
Even though wine-drinkers generally agree that wines may be ranked by quality, wine-tasting is famously subjective. There have been many attempts to construct a more methodical approach to the assessment of wines. We propose a method of assessing wine quality using a decision tree, and test it against the wine-quality dataset from the UC Irvine Machine Learning Repository. Results are 60% in agreement...
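A minimal sketch of the decision-tree idea on wine-like data, assuming a tiny hand-made stand-in for the dataset (the feature values and labels below are invented, not drawn from the UC Irvine repository): a one-level tree (decision stump) picks the single feature/threshold split that best separates "good" from "poor" wines.

```python
# One-level decision tree (stump) sketch for wine-quality classification.
# Rows are (alcohol, volatile_acidity) -> 1 = good, 0 = poor; the values
# are an illustrative stand-in, NOT the actual UCI wine-quality data.
DATA = [
    ((12.8, 0.30), 1), ((13.1, 0.28), 1), ((12.5, 0.35), 1),
    ((9.4, 0.70), 0), ((9.8, 0.65), 0), ((10.1, 0.60), 0),
]

def fit_stump(data):
    """Pick the (feature, threshold) split minimizing errors when
    predicting 'good' for values strictly above the threshold."""
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for row, _ in data:
            t = row[f]
            errors = sum(int(x[f] > t) != y for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

feature, threshold = fit_stump(DATA)

def predict(x):
    return int(x[feature] > threshold)
```

A full decision tree applies this split search recursively to each resulting partition; scikit-learn's `DecisionTreeClassifier` is the usual off-the-shelf choice for the real dataset.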
Practical models of lithographic processes are usually empirically calibrated, making their accuracy dependent on the total number of samples used to build the models, and more specifically on the selection of a representative set of samples for calibration. An inadequate number of samples can adversely impact model accuracy, but a broadly comprehensive set will excessively increase measurement cost...
Understanding and modeling the spread of influence is an important topic in social network analysis and therefore attracts many researchers to this area. It has several practical applications, such as viral marketing. In this paper, we propose a new method (the Linear Threshold Behavioral Model) for modeling the spread of influence in social networks. Experiments were conducted on three real-world datasets...
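For context, the classical linear threshold (LT) propagation the model builds on can be sketched in a few lines; the graph, edge weights, and thresholds below are invented, and the paper's behavioral extension is not reproduced here.

```python
# Minimal linear threshold (LT) influence-propagation sketch.
# Graph, weights, and thresholds are illustrative assumptions.
def spread(edges, thresholds, seeds):
    """edges: {node: [(in_neighbor, weight), ...]}. A node activates
    once the summed weight of its active in-neighbors reaches its
    threshold; iteration repeats until no node changes state."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, in_edges in edges.items():
            if node in active:
                continue
            influence = sum(w for nbr, w in in_edges if nbr in active)
            if influence >= thresholds[node]:
                active.add(node)
                changed = True
    return active

edges = {
    "a": [],
    "b": [("a", 0.6)],
    "c": [("a", 0.3), ("b", 0.3)],
    "d": [("c", 0.2)],
}
thresholds = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}
# Seeding {"a"} activates b (0.6 >= 0.5), then c (0.3 + 0.3 >= 0.5);
# d never reaches its threshold and stays inactive.
```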
The quantification of harmonic emission requires an accurate representation of the network harmonic impedances. A number of techniques have been proposed to make the assessment of the network harmonic impedances practical and less invasive. Using actual event data, captured at a solar farm, this paper aims to utilize existing assessment techniques to develop an online network harmonic impedances assessment...
In this paper, we develop the max-margin similarity-preserving factor analysis (MMSPFA) model. MMSPFA utilizes the latent variable support vector machine (LVSVM) as the classification criterion in the latent space to learn a discriminative subspace with a max-margin constraint. It jointly learns the factor analysis (FA) model, the similarity-preserving (SP) term, and the max-margin classifier in a unified Bayesian...
Feature selection and learning through selected features are the two steps generally taken in classification applications. Commonly, each of these tasks is dealt with separately. In this paper, we introduce a method that optimally combines feature selection and learning through feature-based models. Our proposed method implicitly removes redundant and irrelevant features as it searches through...
Lending loans to borrowers is considered one of the main profit sources for banks and financial institutions. Thus, careful assessment and evaluation should be performed when deciding whether to grant credit to potential borrowers. With the rapid growth of the credit industry and the massive volume of financial data, developing effective credit scoring models is crucial. The literature in this area is very dense...
The increasing frequency of Earth-to-Low-Earth-Orbit satellite links has led to the development of new propagation models to accurately characterize the propagation channel. Space-time models have been developed for tropospheric attenuation and scintillation, but their validation is difficult in the absence of measured data. The verification of scintillation models can be performed worldwide against...
Emerging new data types bring tremendous challenges to data mining. There is an enormous amount of high-dimensional class-imbalanced data in different fields. In such cases, traditional classification methods are not appropriate because they tend to favor the accuracy of the majority class. Meanwhile, the curse of dimensionality makes the situation more complicated. Finding a complicated classifier...
We present in this paper the analysis results on the prominent educational characteristics differentiating people from two regions of the world: advanced economies versus East Asia and the Pacific countries. The automatic multivariate analysis of classification trends is demonstrated through the visual data mining tool KNIME. We found from the empirical studies that from the years 1950...
Associative Classification is a recent and rewarding approach that combines association rule mining and classification. This technique has attracted many researchers as it derives an accurate classifier with effective rules. Associative classifiers are useful for applications where maximum predictive accuracy is desired. Increasing access to huge datasets and corresponding demands to analyze these data...
In machine learning, an ensemble model combines two or more models to obtain better prediction accuracy and robustness than any individual model alone. Before building the ensemble, the training dataset is first fitted to the different candidate models, after which the models best suited to the data are selected. In this work we explored six machine learning parameters for...
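The combination step can be as simple as majority voting over the base models' predictions; the three toy classifiers below are invented for illustration and are not the models explored in the paper.

```python
from collections import Counter

# Majority-vote ensemble sketch: combine several base classifiers by
# taking the most common predicted label. The base models and their
# decision rules are illustrative assumptions.
def model_a(x): return int(x[0] > 0.5)
def model_b(x): return int(x[1] > 0.5)
def model_c(x): return int(x[0] + x[1] > 1.0)

def ensemble_predict(x, models=(model_a, model_b, model_c)):
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]
```

Weighted voting or stacking (training a meta-model on the base predictions) are the usual next steps when plain majority voting is not enough.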
Change in software is essential to incorporate defect corrections and the continuous evolution of requirements and technology. Thus, developing quality models to predict the change-proneness attribute of software is important for effectively utilizing and planning finite resources during the maintenance and testing phases. In the current scenario, a variety of techniques such as statistical...
Rainfall prediction is an important part of weather prediction. Compared to conventional methods of predicting rainfall rate, approaches applying historical records and data mining technology show an obvious advantage in computing cost. Much excellent work has been done attempting to build prediction models with data mining methods; however, most of it only tests prediction accuracy on a data set...
The location of a mobile user is used to deliver context sensitive information like advertisements and deals. Predicting the future possible locations of a mobile user can help target specific services. Nokia provided researchers with data collected from around 200 mobile users over a period of about 2 years for the purpose of research. Previous efforts have attempted either to predict the location...
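A common baseline for this kind of task is a first-order Markov predictor over observed location transitions; the visit trail below is invented, and neither the Nokia dataset nor the paper's actual method is reproduced here.

```python
from collections import Counter, defaultdict

# First-order Markov sketch of next-location prediction: count observed
# transitions and predict the most frequent successor. The location
# trail is an illustrative assumption, not the Nokia data.
trail = ["home", "work", "cafe", "work", "cafe", "work", "home"]

transitions = defaultdict(Counter)
for here, there in zip(trail, trail[1:]):
    transitions[here][there] += 1

def predict_next(location):
    """Most frequently observed successor of `location`."""
    return transitions[location].most_common(1)[0][0]
```

Higher-order variants condition on the last k locations, and time-of-day features usually help on real mobility traces.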
This paper proposes a novel learning framework for the classification of messages into spam and legitimate mail. We introduce a classification method based on feature space segmentation. The Naive Bayes (NB) model is a statistical filtering process that uses previously gathered knowledge. Instead of using a single classifier, we propose the use of local and global classifiers based on a hierarchical Bayesian framework...
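The single-classifier NB baseline the framework builds on can be sketched directly; the training messages below are invented, and this is the plain multinomial Naive Bayes filter, not the paper's local/global hierarchical variant.

```python
import math
from collections import Counter

# Multinomial Naive Bayes spam-filter sketch with add-one (Laplace)
# smoothing. Training messages are illustrative assumptions.
TRAIN = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in TRAIN:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label, counts in word_counts.items():
        # log prior + smoothed log likelihood of each word
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(counts.values())
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```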
In this case study we investigate software reliability models and their applicability to process improvement at an IT help desk. We propose a model selection framework and demonstrate its success using real help desk incident data from a portfolio of 156 desktop software applications. Incidents are predicted at five intervals and measured against actual numbers of submitted incidents. We analyze incident...