Short texts carry little information, which often makes traditional keyword extraction perform worse than expected. In this paper, we propose a graph-based ranking algorithm that exploits Wikipedia as an external knowledge base for short-text keyword extraction. To overcome the shortcoming of poor
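The abstract above is truncated, but graph-based keyword ranking of the kind it describes typically resembles TextRank: tokens become graph nodes, co-occurrence within a window adds edges, and iterative score propagation ranks the candidates. A minimal sketch of that generic scheme, not the authors' Wikipedia-augmented method (the function name, window size, and damping factor are illustrative assumptions):

```python
from collections import defaultdict

def rank_keywords(tokens, window=2, damping=0.85, iters=50):
    """Rank tokens with a simple TextRank-style graph algorithm.

    Nodes are distinct tokens; an undirected edge links two tokens
    that co-occur within `window` positions of each other.
    """
    neighbors = defaultdict(set)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[i] != tokens[j]:
                neighbors[tokens[i]].add(tokens[j])
                neighbors[tokens[j]].add(tokens[i])
    # Iterative score propagation, as in PageRank/TextRank.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(
                score[v] / len(neighbors[v]) for v in neighbors[w])
            for w in neighbors
        }
    return sorted(score, key=score.get, reverse=True)
```

Words with many well-connected neighbors accumulate the highest scores; an external resource such as Wikipedia would typically be used to adjust edge weights or prune candidates before ranking.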
With the development of the Internet, more and more online information has become precious wealth that we can access. High-quality information is often stored in dedicated digital libraries. However, the keyword-matching query systems of most digital libraries cannot satisfy users. This paper presents
Granular computing is an emerging technology. This paper summarizes its applications to the Web. A highly frequent co-occurring ordered set of keywords is called a keyword set; it represents some concept in the given document set. These concepts form a simplicial complex of concepts, which is regarded as a knowledge base
. First, the related textual information associated with Web images is identified as candidate annotations for the images. Second, word co-occurrence is utilized to eliminate irrelevant keywords and improve annotation accuracy. Then, keyword-based association analysis is exploited to further discover
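The co-occurrence filtering step this snippet mentions can be sketched generically: keep a candidate keyword only if it co-occurs with the other candidates often enough in some reference text. A toy illustration under assumed details — the threshold, counting scheme, and function name are not from the paper:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_filter(candidates, corpus_docs, min_avg_cooc=1.0):
    """Drop candidate keywords that rarely co-occur with the others.

    `corpus_docs` is a list of token lists standing in for reference text;
    a candidate is kept when its average pairwise co-occurrence count with
    the remaining candidates reaches `min_avg_cooc`.
    """
    pair_counts = Counter()
    for doc in corpus_docs:
        present = set(doc) & set(candidates)
        for a, b in combinations(sorted(present), 2):
            pair_counts[(a, b)] += 1
    kept = []
    for w in candidates:
        others = [c for c in candidates if c != w]
        avg = sum(pair_counts[tuple(sorted((w, o)))]
                  for o in others) / max(len(others), 1)
        if avg >= min_avg_cooc:
            kept.append(w)
    return kept
```

A keyword unrelated to the rest of the candidate set accumulates near-zero pair counts and is filtered out, which is the intuition behind using co-occurrence to remove irrelevant annotations.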
The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the Web needs to be extracted and organized. There is a need for new querying techniques that are simple and yet more expressive than those provided by standard keyword
This paper describes our dialog system for Kyoto tourist information assistance. The dialog part of the system helps the user formulate an appropriate query, and the information analysis part assists the user in selecting among the retrieved information. Nowadays we can get most information through the Internet. However, it is troublesome to pick out the expected information from the huge result sets returned by conventional...
In the age of the Internet, with the explosive growth of online information, people want to find the information they need in the cyberworld quickly and accurately. Information retrieval based on keywords, or on simple logical combinations of keywords, has been unable to meet people's need for information
This paper addresses the problem of entity linking for Chinese microblogs. Entity linking aims to find the corresponding entity in a knowledge base for a keyword in a Chinese microblog post. To deal with this problem, we propose an approach based on the information retrieval model. First, word segmentation, knowledge base
In this paper, we present a system for automatic MCQ (Multiple Choice Question) generation for any given input text, along with a set of distractors. The system is trained on a Wikipedia-based dataset consisting of URLs of Wikipedia articles. The important words (keywords), which consist of both bigrams and unigrams
evolving into a smart device providing Internet-based services. This evolution lets Smart TV users obtain plenty of multimedia content from broadcasting stations or content providers on the Internet. However, it is quite difficult to find multimedia content using ambiguous user search keywords. In this paper, we have designed the
inherited the probability from multiple parents. We used N-grams based on Wikipedia words to extract keywords from web pages, and introduced a Bayes classifier to estimate page class probabilities. Experimental results showed that the proposed method has very good scalability, robustness, and reliability across different web pages.
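The Bayes classifier step named in this snippet is, in its standard form, naive Bayes over the extracted keywords. A minimal sketch with add-one (Laplace) smoothing — the class labels, training data, and function names below are illustrative, not from the paper:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (keyword_list, label) pairs. Returns model counts."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        for w in words:
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify_nb(model, words):
    """Return the label maximizing log P(label) + sum of log P(word|label),
    using add-one smoothing so unseen keywords get nonzero probability."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in class_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Scalability in this setting comes from the model being a pair of count tables: training is a single pass over the pages, and classification is linear in the number of extracted keywords.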
The Web page recommendation model traces users' Web-surfing trails, extracts useful information including keywords, Web page URLs, and users' evaluations of Web pages, and automatically generates an FCA (formal concept analysis) knowledge base and an enterprise ontology knowledge base with WordNet. While users are