is ensured by deploying a privacy-preserving n-keyword search scheme. We have also investigated and implemented a key exchange mechanism to ensure access control, thus providing a holistic solution encompassing authorization, access control, and data privacy.
includes direct and indirect similarities. Given a direct similarity matrix that represents a patent citation network, the method calculates indirect similarity matrices and then obtains a compound similarity matrix. Keyword analysis from text mining is employed to obtain a similarity score for each pair of patents. In addition, two
is obtained by keyword analysis for different time periods and research field maps. The integrated method is applied to etching technology, a materials-processing technology particularly important for the semiconductor industry. On the basis of the results obtained in the practice of etching technology, six different
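The compound similarity construction described above can be illustrated with a minimal sketch. Here the indirect similarity of order k is assumed to be the k-th matrix power of the direct citation-similarity matrix (patents linked through k citation steps), combined with a geometric decay weight; the function name, `alpha`, and this exact weighting are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def compound_similarity(direct, alpha=0.5, order=2):
    """Combine a direct similarity matrix with indirect similarities.

    direct : square direct-similarity matrix of the citation network.
    Indirect similarity of order k is modelled as the k-th matrix
    power (k-step citation paths); the compound matrix is their
    decay-weighted sum. alpha and the matrix-power formulation are
    illustrative, not taken from the paper.
    """
    d = np.asarray(direct, dtype=float)
    compound = d.copy()
    power = d
    for k in range(2, order + 1):
        power = power @ d                        # k-step indirect links
        compound += (alpha ** (k - 1)) * power   # geometric decay weight
    return compound
```

For a two-patent network with a single mutual citation link, the two-step paths lead back to each patent itself, so the compound matrix gains a discounted self-similarity on the diagonal.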
Automatic image annotation is crucial for keyword-based image retrieval. There is a trend toward machine learning techniques, which learn statistical models from annotated images and apply them to generate annotations for unseen images. In this paper we propose MAGMA, a new image auto-annotation
attributes must be shared so that every node has a more accurate estimate of the global classifier. When expanding the knowledge of the local classifiers, network traffic should be kept to a minimum to reduce costs. We propose a probabilistic model for a keyword selection method which makes a more thorough analysis
proposed a collective collaborative tagging (CCT) service architecture in which both service providers and individual users can merge folksonomy data (in the form of keyword tags) stored in different sources to build a larger, unified repository. We have also examined a range of algorithms that can be applied to different
classification/clustering as features. This approach can also be applied in keyword recommendation systems for advertising, serving different kinds of advertisers, because of its extensibility and versatility.
The rapid evolution of the web environment and progress in technology have enabled us to easily access and manage enormous numbers of images in various areas. Current internet image search engines rely purely on the text-based information around the images. Keywords supplied by users cannot specify the content of images exactly
In this research, we used a proxy server to search for information related to the user's browsed Web pages. From the records of the proxy server we constructed a profile of the user's browsing habits. At the end of the user's search subsystem, we will use a content-based concept to extract keywords to obtain
Using disaggregated data from a Chinese search engine we jointly model ad rank and performance for hospitality related keyword searches. As a result of our modeling framework we can better determine the optimal keyword bidding strategy for an advertiser given the search engine's control over ad rank. Our approach
techniques initially use a word bounding-box ratio feature for matching words in the database of compressed document images. For all matching test-words, the word-spotting strategy in the first model is to decompress and OCR the first two characters, and then match them with the keyword characters. If the matching is successful, then
As personalization technologies are widely used, preference extraction is becoming important. In this work, we propose a preference extraction method on the basis of applications that are installed on a user's smart device. In this method, keywords are extracted from descriptions of the installed applications on an
This paper proposes a Research Paper Similarity system that measures the similarity of an input paper with other papers based on the summarized version of each paper. Currently, this system takes into account two different types of summarization for papers, based on the different types of keywords, i.e., Normal
The default page-sorting algorithm in Nutch, an open-source search engine, is the TF/IDF algorithm, but it struggles to meet the demands of music page sorting. This paper presents a new page-sorting algorithm based on the BM25 model for music users. According to the word count and keyword frequency in music web pages, the pages
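The BM25 model named above scores a page by combining a smoothed inverse document frequency with a length-normalized term frequency. A minimal sketch of standard Okapi BM25 follows, using the common default parameters k1=1.5 and b=0.75; the snippet does not give the paper's music-specific tuning, so this is the textbook formulation, not the proposed algorithm:

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document against a query with standard Okapi BM25.

    corpus : list of documents (each a list of terms), used to derive
    IDF and the average document length. k1 and b are the usual
    defaults, not values from the paper.
    """
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)   # smoothed IDF
        tf = doc_terms.count(term)                        # term frequency
        norm = tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
        score += idf * norm
    return score
```

Unlike raw TF/IDF, the k1 term saturates repeated keyword occurrences and b penalizes long pages, which is why BM25 is a common drop-in replacement for TF/IDF ranking.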
these sites. Current techniques simply filter on the basis of URL blocking and keyword matching, or rely on a large database of pre-classified web addresses. The problem is how to intelligently filter the negative content, rather than filtering entire websites by their URLs or applying simple keyword matching
on the Web is mainly supported by text and keyword-based solutions which offer very limited semantic expressiveness to service developers and consumers. This paper presents a method using probabilistic machine-learning techniques to extract latent factors from semantically enriched service descriptions. The latent
Web service discovery is a vital problem in service computing given the increasing number of services. Existing service discovery approaches merely focus on WSDL-based keyword search, semantic matching based on domain knowledge or ontologies, or QoS-based recommendations. The keyword search omits the underlying
others. Traditional keyword search retrieves all the text data that contain the specified keywords. That is useful as far as it goes, but people still have to read through all of that literature to find out whether it actually contains any information relevant to their search. While text mining is aware of real text
for patent queries, because the underlying search systems are built on traditional keyword-based models, which inevitably lead to too many unrelated items in the search results. Consequently, these systems cost patent experts a great deal of time to iteratively refine search results manually. In this paper, we propose a
keywords. These Web pages are ranked by a newly introduced equation. Evidently, not all matched Web pages are selected by this keyword selection procedure. Hence, unmatched Web pages are checked and labelled as ‘Secondary’ Web pages. These Web pages are ranked by another new equation. The ranking procedure of