fields and provides researchers with the application form best matched to their current research field. We have developed a recommendation system for the Grant-in-Aid program for researchers using JSPS (Japan Society for the Promotion of Science) keywords. The system can determine rules associating the
In the real world, sensitive information often has to be stored in a server's databases in a group setting. Although personal information need not be stored on a server, secret information shared by group members is likely to be stored there. Such shared sensitive information requires stronger security and privacy protection. To the best of our knowledge, no previous work deals with...
The language model (LM) is one of the key components of keyword spotting (KWS). The rapid development of the World Wide Web (WWW) makes it an extremely large and valuable data source for LM training, but using raw transcripts from the WWW is not optimal due to the content mismatch between the web corpus
based spam topic detection strategy through keyword extraction. In particular, a spam topic is detected using a topic model of multiple features together with clue keywords, which integrates the corresponding features of News, BBS, and Blog. We obtain a minimum cost of 0.282 on the TDT4 evaluation corpus and a satisfaction of
integrate information from multiple interrelated pages to answer keyword queries meaningfully. Next-generation web search engines require link-awareness, or more generally, the capability of integrating correlative information items that are linked through hyperlinks. In this paper, we study the problems of identifying the
Keyword auctions are widely used by search engines to sell advertisements on result pages. To date, the most widely used auction mechanism is wGSP (weighted generalized second price). This paper presents an optimal pricing strategy under wGSP for advertisers bidding on keywords for their websites. In brief, the
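As background on the wGSP mechanism the snippet names: ads are ranked by weight × bid, and each winner pays the minimum bid that would keep its slot, i.e. price_k = weight_{k+1} · bid_{k+1} / weight_k. A minimal sketch of that pricing rule (the function name and tuple layout are illustrative, not taken from the paper):

```python
def wgsp_allocate(bids):
    """bids: list of (advertiser, weight, bid) tuples.
    Rank by weight * bid; each winner pays per click the minimum
    bid needed to keep its slot: w_{k+1} * b_{k+1} / w_k.
    Returns [(advertiser, price_per_click), ...] in slot order."""
    ranked = sorted(bids, key=lambda a: a[1] * a[2], reverse=True)
    prices = []
    for k, (name, w, b) in enumerate(ranked):
        if k + 1 < len(ranked):
            _, w_next, b_next = ranked[k + 1]
            prices.append((name, w_next * b_next / w))
        else:
            prices.append((name, 0.0))  # last slot: reserve price (0 here)
    return prices
```

With two advertisers A (weight 1.0, bid 2.0) and B (weight 0.5, bid 3.0), A wins the top slot (score 2.0 vs. 1.5) and pays 0.5 · 3.0 / 1.0 = 1.5 per click.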
system called "WebAngels filter", which uses textual and structural content-based analysis. These analyses are based on a violent-keyword dictionary. We focus our attention on keyword dictionary preparation, and we demonstrate that a semi-automatically built keyword dictionary can be used to improve the filtering efficiency of
Keyword extraction is an important application in the area of information technology. Automatic keyword extraction can help people grasp what an article is primarily about without reading the long passage carefully. This paper introduces a keyword extraction algorithm using PageRank on synonyms. Firstly
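The snippet describes PageRank-based keyword extraction. A minimal TextRank-style sketch over a word co-occurrence graph (the sliding window, damping factor, and iteration count are assumptions; the paper's synonym handling is not reproduced here):

```python
def pagerank_keywords(words, window=2, d=0.85, iters=50):
    """Score words by running PageRank on an undirected co-occurrence
    graph: two words are linked if they appear within `window` tokens
    of each other. Returns words sorted by descending score."""
    neighbors = {}
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            u, v = w, words[j]
            if u == v:
                continue
            neighbors.setdefault(u, set()).add(v)
            neighbors.setdefault(v, set()).add(u)
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        # standard PageRank update with damping factor d
        score = {
            w: (1 - d) + d * sum(score[v] / len(neighbors[v]) for v in nbrs)
            for w, nbrs in neighbors.items()
        }
    return sorted(score, key=score.get, reverse=True)
```

In this undirected setting the stationary score grows with a word's connectivity, so highly co-occurring words surface first.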
The popularity of blogs (as part of online social networking services) has grown dramatically in the last decade. Guided by ethnographic research on these online communities, we have designed a graphical interface for users to explore and navigate large-scale blog networks. In our design, we use the keyword
This paper proposes a conception accelerator that supports users in forming conceptions for data analysis in decision making. The paper applies the proposed system to an interactive information visualization system, the keyword map. The keyword map has been studied as a support for data analysis in decision making. It
Relevance feedback techniques have been studied in the field of document retrieval, aiming to generate queries appropriate to users' information needs. Conventional relevance feedback techniques operate on document space, while the resulting queries should be represented in keyword space. In this paper
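The conventional relevance feedback the snippet refers to is classically realized by the Rocchio update in keyword (term-vector) space: q' = α·q + β·mean(relevant docs) − γ·mean(non-relevant docs). A minimal sketch (the parameter defaults are textbook values, not the paper's):

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update. All vectors are
    dicts mapping keyword -> weight. Negative weights are dropped
    from the resulting query."""
    def scale(vec, s):
        return {k: s * v for k, v in vec.items()}

    def mean(vecs):
        acc = {}
        for vec in vecs:
            for k, v in vec.items():
                acc[k] = acc.get(k, 0.0) + v
        return {k: v / len(vecs) for k, v in acc.items()} if vecs else {}

    out = scale(query, alpha)
    for k, v in scale(mean(relevant), beta).items():        # pull toward relevant
        out[k] = out.get(k, 0.0) + v
    for k, v in scale(mean(nonrelevant), gamma).items():    # push away from non-relevant
        out[k] = out.get(k, 0.0) - v
    return {k: v for k, v in out.items() if v > 0}
```

For example, feeding back one relevant document containing "dog" adds "dog" to a query that originally held only "cat".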
Query-recommendation systems based on input queries have become widespread. These services are helpful when users cannot come up with relevant queries on their own. However, conventional systems do not take into consideration the relevance among recommended queries. This paper proposes a method of obtaining related queries and clustering them using the history of query frequencies in query logs. We define...
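Clustering queries by their frequency histories, as the snippet proposes, can be sketched with cosine similarity over each query's frequency time series (the greedy single-pass clustering and the threshold value are illustrative assumptions, not the paper's method):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length numeric sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_queries(histories, threshold=0.9):
    """Greedy clustering sketch: `histories` maps query -> frequency
    time series; a query joins the first cluster whose seed series is
    cosine-similar above `threshold`, else it starts a new cluster."""
    clusters = []  # list of (seed_series, [queries])
    for query, series in histories.items():
        for seed, members in clusters:
            if cosine(seed, series) >= threshold:
                members.append(query)
                break
        else:
            clusters.append((series, [query]))
    return [members for _, members in clusters]
```

Queries whose popularity rises and falls together (e.g. two spellings of the same trending topic) end up in one cluster, regardless of absolute volume.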
Net and Web technologies effectively and efficiently accelerate secure e-commerce transactions by reducing what is known as the Total Cost of Ownership (TCO) that a business normally incurs for commercial activities. Businesses choose keyword advertising that best describes their main Web pages. The pages are
This paper proposes a mutual detection mechanism between spam blogs and keywords for filtering spam blogs from updated blog data. Spam blogs are problematic in extracting useful marketing information from the blogosphere; they often appear to be rich sources of information based on individual opinion and social
This paper describes an edu-mining technique for finding keywords to improve pupils' message-production skills. It automatically finds keywords in the blog items pupils create. It then adaptively suggests some of the keywords to pupils when they create new blog items so that they can revise their items using the
Peer-to-peer approaches offer a promising alternative for Web content search. However, how to search and retrieve data based on content queries is still an open problem for peer-to-peer systems. In this paper we propose History-based Multi-keyword Search (HMS) in unstructured peer-to-peer systems, which only
keywords from the Web pages. The system first identifies the section of the Web page that contains the multimedia file to be extracted and then extracts it by using clustering techniques and other tools of statistical origin. Experimental results on real-world image sharing Web sites are presented and discussed in this paper
Web pages for search engines. First we describe a scheme based on semantic keywords combined with sentence overlapping, and then present an implemented prototype, with experimental results suggesting that the prototype works well under a proper setting.
Content-based phishing detection extracts keywords from a target Web page, uses these keywords to retrieve the corresponding legitimate site, and detects phishing when the domain of the target page does not match that of the retrieved site. It often misidentifies a legitimate target site as a phishing site, however
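The domain-comparison step described above can be sketched as follows. Keyword extraction and the search step are stubbed out (the retrieved URLs are passed in), and all names are illustrative:

```python
from urllib.parse import urlparse

def looks_like_phishing(target_url, retrieved_urls):
    """Flag the target as phishing when its domain matches none of
    the sites retrieved by searching the extracted keywords.
    `retrieved_urls` stands in for the search-engine results."""
    target_domain = urlparse(target_url).netloc.lower()
    retrieved_domains = {urlparse(u).netloc.lower() for u in retrieved_urls}
    return target_domain not in retrieved_domains
```

The misidentification problem the snippet raises is visible here: a legitimate site whose domain happens not to appear among the top retrieved results is wrongly flagged.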
The traditional layout of news websites, the combination of classified hierarchical browsing, headline recommendation and keyword-based search, has been used for many years. The keyword-based search is considered to be the most powerful tool for news browsing and retrieval. Unfortunately, the keyword-based query