Database-centric applications (DCAs) usually contain a large number of tables, attributes, and constraints describing the underlying data model. Understanding how database tables and attributes are used in the source code along with the constraints related to these usages is an important component of DCA maintenance. However, documenting database-related operations and their constraints in the source...
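As a toy illustration of the kind of usage analysis the abstract describes, the hypothetical snippet below scans source text for table names referenced in embedded SQL strings; the regex and the sample source are invented for illustration and are not the paper's tool.

```python
import re

# Hypothetical example source containing embedded SQL (not from the paper).
source = '''
cur.execute("SELECT name, email FROM customers WHERE id = ?", (cid,))
cur.execute("UPDATE orders SET status = 'shipped' WHERE id = ?", (oid,))
'''

# Collect identifiers that follow common table-referencing SQL keywords.
tables = set(re.findall(r"\b(?:FROM|UPDATE|INTO|JOIN)\s+(\w+)", source, re.I))
print(sorted(tables))  # ['customers', 'orders']
```

A real analysis would of course need a proper SQL parser and data-flow tracking rather than a regex, but this shows the basic idea of mapping source locations to the tables they touch.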
Feature selection is the process of selecting a subset of the original features. It can improve efficiency and accuracy by removing redundant and irrelevant features. Feature selection is commonly used in machine learning and has been widely applied in many fields. We propose a new, integrative hybrid feature selection method. It first uses Affinity Propagation and SVM sensitivity analysis...
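To make the redundancy-removal idea concrete, here is a minimal sketch of one generic approach, a greedy correlation filter, in plain NumPy. This is only an illustration of redundant-feature removal in general; it is not the hybrid Affinity Propagation / SVM method the abstract proposes, and the data and threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.random.default_rng(0).normal(size=(100, 5))
# Append near-duplicate copies of the first two columns (redundant features).
X = np.hstack([X, X[:, :2] + 0.01 * rng.normal(size=(100, 2))])

# Greedy redundancy filter: drop any feature whose absolute correlation
# with an already-kept feature exceeds the threshold.
corr = np.abs(np.corrcoef(X, rowvar=False))
kept = []
for j in range(X.shape[1]):
    if all(corr[j, k] < 0.95 for k in kept):
        kept.append(j)
print(kept)  # the two near-duplicate columns (indices 5 and 6) are dropped
```

The filter keeps the five original columns and discards the two copies, shrinking the feature set without losing information.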
In a standard support vector machine (SVM), the training process has O(n³) time and O(n²) space complexity, where n is the size of the training dataset. Training is therefore computationally infeasible for very large datasets. A natural remedy is to reduce the size of the training dataset: an SVM classifier depends only on the support vectors (SVs), which lie close to the separation boundary. Therefore,...
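The idea of keeping only boundary-near points can be sketched with a simple cross-class nearest-neighbor heuristic; this generic heuristic, the synthetic data, and the cutoff of 50 points per class are all assumptions for illustration, not the specific reduction method of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two Gaussian classes in 2-D, 500 points each.
X0 = rng.normal(loc=-2.0, size=(500, 2))
X1 = rng.normal(loc=+2.0, size=(500, 2))

def cross_class_dist(A, B):
    # Distance from each point in A to its nearest point in B: points with a
    # small value sit near the class boundary and are likely support vectors.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1)

k = 50  # keep the 50 most boundary-near points per class
keep0 = np.argsort(cross_class_dist(X0, X1))[:k]
keep1 = np.argsort(cross_class_dist(X1, X0))[:k]
X_small = np.vstack([X0[keep0], X1[keep1]])
print(X_small.shape)  # (100, 2): 10x fewer points to feed the O(n^3) solver
```

Because training cost grows cubically in n, even this crude 10x reduction cuts the solver's work by roughly three orders of magnitude, at the risk of discarding a true support vector.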
Soft subspace clustering algorithms have received wide interest recently because of their scalability and flexibility in handling high-dimensional sparse data. A disadvantage of existing algorithms is that their clustering results are strongly affected by the quality of the initial centroids, which are chosen at random. In this paper, we propose a heuristically weighted K-means algorithm and a corresponding...
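For readers unfamiliar with soft subspace clustering, the sketch below shows a generic feature-weighted K-means in plain NumPy, where per-dimension weights are updated from within-cluster dispersion (a W-k-means-style update). The update rule, parameters, and data are assumptions for illustration; this is not the heuristic initialization method the abstract proposes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Three clusters that separate only in the first 2 of 5 dimensions;
# the remaining 3 dimensions are noise.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(size=(60, 2)) for c in centers])
X = np.hstack([X, rng.normal(scale=2.0, size=(180, 3))])

def weighted_kmeans(X, k, beta=2.0, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
    w = np.full(X.shape[1], 1.0 / X.shape[1])         # soft per-dimension weights
    for _ in range(iters):
        # Assign points by the weighted distance  sum_j w_j^beta * (x_j - c_j)^2.
        d = (((X[:, None, :] - C[None, :, :]) ** 2) * w ** beta).sum(axis=-1)
        lab = d.argmin(axis=1)
        C = np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                      for j in range(k)])
        # Re-weight: dimensions with small within-cluster dispersion gain weight.
        disp = sum(((X[lab == j] - C[j]) ** 2).sum(axis=0) for j in range(k))
        inv = (1.0 / (disp + 1e-12)) ** (1.0 / (beta - 1.0))
        w = inv / inv.sum()
    return lab, C, w

lab, C, w = weighted_kmeans(X, k=3)
print("feature weights:", w.round(3))
```

Because assignments depend on the random initial centroids chosen in the first line of the loop, different seeds can yield different partitions, which is exactly the sensitivity the abstract's heuristic initialization aims to reduce.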