semantic information. Each keyword creates an n-dimensional instance of the object in the feature space. Feature inference is treated as the intersection of hyperplanes generated from the keywords in the feature space. These hyperplanes extend the meaning of keywords. With a previous basic algorithm, we explored the basis for
When using Information Retrieval (IR) systems, users often present search queries made of ad-hoc keywords. It is then up to the information retrieval system (IRS) to derive a precise representation of the user's information need and the context (preferences) surrounding it. To address this problem, we investigate
order, without any content-based grouping. This paper presents an experimental derivation of a search result clustering methodology that groups the links returned by the search result page for a particular keyword, based on the contents of the HTML documents the links point to, and labels the resulting groups
having a relationship when the common citation rate between them is higher than the threshold. A modified TF-IDF calculates the weight of each keyword in the topic groups. The keyword-weight vector represents the main features of each group, while the category of a newly arriving document is determined by a novel similarity
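The fragment above mentions a modified TF-IDF for keyword weighting; the modification itself is not specified here, but the baseline TF-IDF weight it builds on can be sketched as follows (the toy documents are illustrative):

```python
import math
from collections import Counter

def tf_idf_weights(docs):
    """Compute the TF-IDF weight of each keyword in each document.

    docs: list of token lists. Returns one {term: weight} dict per document.
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["keyword", "search", "keyword"], ["search", "engine"]]
w = tf_idf_weights(docs)
# "keyword" occurs only in the first document, so it gets a positive weight
# there; "search" occurs in every document, so its IDF (and weight) is zero.
```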
generate and calculate the associated relations and their strengths between documents within a domain. Each document is represented by a bag of words and their weights. We first build a domain knowledge background based on association rules at the keyword level, and then we apply those association rules to generate and
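As a rough sketch of mining keyword-level association rules of the kind described above (the thresholds and toy documents are illustrative; the fragment does not show the paper's exact rule-generation procedure):

```python
from itertools import combinations
from collections import Counter

def keyword_association_rules(docs, min_support=0.5, min_conf=0.6):
    """Mine simple pairwise association rules (a -> b) between keywords.

    docs: list of keyword sets. min_support / min_conf are illustrative
    defaults, not values from the paper.
    """
    n = len(docs)
    item_count = Counter()
    pair_count = Counter()
    for doc in docs:
        for kw in doc:
            item_count[kw] += 1
        for a, b in combinations(sorted(doc), 2):
            pair_count[(a, b)] += 1
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n
        if support < min_support:
            continue
        # Emit the rule in both directions if confidence is high enough.
        for x, y in ((a, b), (b, a)):
            conf = c / item_count[x]
            if conf >= min_conf:
                rules.append((x, y, support, conf))
    return rules

docs = [{"search", "index"}, {"search", "index", "rank"}, {"search", "rank"}]
rules = keyword_association_rules(docs)
# e.g. "index" -> "search" holds with confidence 1.0: every document
# containing "index" also contains "search".
```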
With the exponential rise in the amount of information on the World Wide Web, there is a need for a much more efficient algorithm for Web search. Traditional keyword matching as well as standard statistical techniques are insufficient, as the Web pages they recommend are not highly relevant to the query. With the
extends the keyword-based VSM by assigning keywords different weights depending on their position in the page. Integrating the principles of PageRank, the link analysis also takes into account the relevance of the anchor text and the page's website to the theme.
One of the most critical operations performed in a P2P system is the lookup of a resource. The main issues to be addressed by lookup algorithms are: (1) support for flexible search criteria (e.g., wildcard or multi-keyword searches), (2) effectiveness - i.e., ability to identify all the resources that match the search
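The multi-keyword search criterion in point (1) can be sketched with a plain inverted index; in a real P2P system this index would be distributed (e.g. over a DHT), but the matching semantics can be shown centrally (resource ids and keywords below are illustrative):

```python
def build_index(resources):
    """Build an inverted index mapping each keyword to the resource ids
    that carry it.

    resources: {resource_id: set of keywords}.
    """
    index = {}
    for rid, keywords in resources.items():
        for kw in keywords:
            index.setdefault(kw, set()).add(rid)
    return index

def multi_keyword_lookup(index, keywords):
    """Return the ids of resources matching ALL the given keywords."""
    sets = [index.get(kw, set()) for kw in keywords]
    return set.intersection(*sets) if sets else set()

idx = build_index({"r1": {"music", "rock"}, "r2": {"music", "jazz"}})
# Conjunctive lookup intersects the posting sets of all query keywords.
```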
The World Wide Web is growing at a rate of about a million pages per day, making it tougher for search engines to extract relevant information for their users. Earlier search engines used simple indexing techniques to search for keywords in websites and gave more weight to pages with a higher frequency of keyword
were used as case studies. The textual contents of the marking schemes were transcribed into electronic documents using the same file format as the students' answers. The documents were pre-processed to remove stopwords, and each keyword was stemmed to address morphological variations. N-gram terms (N = 2, 3) were then
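The preprocessing steps just described can be sketched roughly as below; the stopword list and the naive suffix-stripping stemmer are illustrative stand-ins (a real pipeline would use a full list and e.g. the Porter stemmer), not what the paper used:

```python
STOPWORDS = {"the", "a", "an", "of", "and", "is", "to", "in"}  # tiny illustrative subset

def naive_stem(word):
    """Stand-in for a real stemmer: strip a few common suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text, n_values=(2, 3)):
    """Lowercase, drop stopwords, stem, then emit N-gram terms (N = 2, 3)."""
    tokens = [naive_stem(t) for t in text.lower().split() if t not in STOPWORDS]
    ngrams = [
        " ".join(tokens[i : i + n])
        for n in n_values
        for i in range(len(tokens) - n + 1)
    ]
    return tokens, ngrams

tokens, ngrams = preprocess("the marking schemes of the students")
```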
articles. Then, we design a three-layered graph-based recommendation model that integrates fine-grained co-authorship as well as author–paper, paper–citation, and paper–keyword relations. Our model effectively generates query-oriented recommendations using a simple random walk algorithm. Extensive experiments
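A simple random walk over such a heterogeneous graph can be sketched as a random walk with restart from the query node; the tiny author/paper/keyword graph and the restart probability below are illustrative, not the paper's model:

```python
def random_walk_scores(graph, seed, restart=0.15, iters=50):
    """Power-iteration approximation of a random walk with restart.

    graph: {node: [neighbors]}; seed: the query node.
    Returns {node: stationary score}; higher means more relevant to the seed.
    """
    nodes = list(graph)
    scores = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iters):
        # Each step, the walker teleports back to the seed with prob `restart`,
        # otherwise follows a uniformly random outgoing edge.
        nxt = {n: (restart if n == seed else 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = (1.0 - restart) * scores[n] / len(out)
            for m in out:
                nxt[m] += share
        scores = nxt
    return scores

# Tiny illustrative graph mixing author, paper, and keyword nodes.
g = {
    "author:A": ["paper:P1"],
    "paper:P1": ["author:A", "paper:P2", "kw:search"],
    "paper:P2": ["paper:P1"],
    "kw:search": ["paper:P1"],
}
s = random_walk_scores(g, "author:A")
# P1, directly linked to the seed author, scores higher than P2,
# which is only reachable through P1.
```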
Making smart grids and microgrids an overwhelming and successful reality implies addressing many challenges from different domains, and collaboration is a keyword to seize the high ground. SEAS Shared Intelligence (SEAS SI) is a platform for algorithm sharing and execution developed under the
(MWE) and they do not scale very well. This paper proposes a clustering and classification algorithm for semantic similarity using sample web pages. A further improvement is to analyze short texts for classification, labeling each short text according to its keyword and producing the result for the end user. This type
Efficient organization and analysis of academic information has many advantages. Most scholarly retrieval systems that have appeared in recent years can perform keyword-based paper search. However, performing large-scale expert and paper retrieval is an intractable problem. Here we present a platform that can not only reduce the
fragmented time during screen-based reading. The central idea is to utilize semantic analysis programs to extract an extensive set of information that describes keyword spotting, and this auxiliary knowledge can be used for deep reading. We discuss the strengths of our semantic analysis programs, namely text extraction, name
High Performance Computing (HPC) is nowadays a keyword that denotes efficient and cost-effective computing systems or frameworks. Applications with huge computations depend on the hardware support and on the parallel algorithms designed for and executed on this hardware. Research in the domain of parallel
prefer to eat, the approximate budget for each person, and a few other defined parameters as search keywords. While searching, the system fetches the entries from the database according to the user-defined parameters, converts each item's metadata to fuzzy parameters, and passes the list to a fuzzy controller. Then the controller
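Converting an item's metadata into fuzzy parameters, as described above, typically means evaluating membership functions; the triangular sets for a per-person budget below are purely illustrative (the fragment does not specify the actual fuzzy sets or units):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def budget_memberships(budget):
    """Map a crisp per-person budget onto illustrative fuzzy sets."""
    return {
        "cheap": triangular(budget, 0, 10, 25),
        "moderate": triangular(budget, 15, 30, 45),
        "expensive": triangular(budget, 35, 60, 120),
    }

m = budget_memberships(20)
# A budget of 20 is partly "cheap" and partly "moderate" (both to degree 1/3),
# and not at all "expensive"; the controller would then combine such degrees
# across all parameters.
```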
In traditional collaborative filtering recommendation, matrix sparsity and the cold-start problem restrict the accuracy of the system. In this paper, we develop a way to enhance recommendation effectiveness by merging neighborhood relationships and users' keywords from social network information into collaborative filtering. We
, keyword extraction, and similarity search in the broad fields of text mining, information retrieval, and statistical language modeling. In this work, a dataset of 200 abstracts falling under four topics is collected from journals in two different domains for tagging journal abstracts. The document models are built using LDA (Latent
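A minimal sketch of building LDA document models, here with scikit-learn rather than whatever toolkit the authors used; the mini-corpus and the topic count stand in for the paper's 200 abstracts and four topics:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative mini-corpus standing in for the 200 journal abstracts.
abstracts = [
    "keyword extraction from scientific text",
    "topic models for information retrieval",
    "fuzzy ranking of search results",
    "statistical language modeling of documents",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

# Two topics for the toy corpus; the paper uses four.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # one topic distribution per abstract
# Each abstract can then be tagged with its highest-probability topic.
```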