fields and provides researchers with the application form best matched to their current research field. We have developed a recommendation system for the Grant-in-Aid program by using JSPS (Japan Society for the Promotion of Science) keywords. The system can determine rules of association between the
In the real world, sensitive information often has to be stored in a server's databases in a group setting. Although personal information does not need to be stored on a server, secret information shared by group members is likely to be stored there. This shared sensitive information requires stronger security and privacy protection. To the best of our knowledge, no prior work deals with...
The traditional layout of news websites, combining classified hierarchical browsing, headline recommendation, and keyword-based search, has been used for many years. Keyword-based search is considered the most powerful tool for news browsing and retrieval. Unfortunately, the keyword-based query
needed to search and find relevant information. For tabular structures embedded in HTML documents, typical keyword- or link-analysis-based search fails. The next phase envisioned for the WWW is automatic ad-hoc interaction between intelligent agents, web services, databases and semantic-web-enabled applications. A large
Traditional information gathering systems are mostly keyword-based; they lack semantic comprehension and analysis capability and cannot guarantee the comprehensiveness and accuracy of information gathering. This paper proposes a Chinese patent information gathering model based on domain ontology, which can visualize
interchangeable module, which uses 2-Way SMS to allow the exchange of messages, traffic queries and results, between a mobile device and the system. 2) The data retrieval module, which fetches the Really Simple Syndication (RSS) document from the NECTEC real-time traffic report website for traffic information retrieval. 3) The keyword
to keyword searching. Thus far, the identification of the facets has been either a manual procedure or reliant on a priori knowledge of the facets that can potentially appear in the underlying collection. In this paper, we present an unsupervised technique for automatic extraction of facets useful for browsing text databases
In the past few years, there has been an exponential increase in the amount of information available on the World Wide Web. This plethora of information can be extremely beneficial for users. However, the amount of human intervention currently required to access and use it is inconvenient. Information extraction (IE) systems try to solve this problem by making the task as automatic as possible. Most of...
of web resources. Thus, finding information related to a specific topic or keyword among the vast number of available web resources opens promising opportunities for information discovery. In this paper we propose a system that finds links related to a specific keyword and then performs in-site searching to get the
commercial web search engines, a large fraction of returned images is not related to the query keyword. We present an SVM-based active learning approach to selecting relevant images from noisy image search results. The resulting database is more diverse and contains more sample images than other well-established facial
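The general idea behind SVM-based active learning on noisy search results can be sketched as pool-based uncertainty sampling: train an SVM on a small labeled seed set, query the unlabeled samples closest to the decision boundary, and retrain. This is a minimal sketch on synthetic features, not the paper's exact method; the cluster data, seed sizes, and round counts are all invented for illustration.

```python
# Pool-based active learning with an SVM (uncertainty sampling).
# All data here is synthetic; this only illustrates the loop structure.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "image features": two noisy clusters (0 = irrelevant, 1 = relevant).
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

# Small seed set with both classes represented.
labeled = list(rng.choice(200, 5, replace=False)) + \
          list(rng.choice(np.arange(200, 400), 5, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):  # five active-learning rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # Uncertainty sampling: smallest |decision_function| = closest to margin.
    margins = np.abs(clf.decision_function(X[pool]))
    query = [pool[i] for i in np.argsort(margins)[:10]]
    labeled += query                          # oracle "labels" the queries
    pool = [i for i in pool if i not in query]

final = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
print(f"labeled examples: {len(labeled)}, accuracy: {final.score(X, y):.2f}")
```

Querying near-margin samples concentrates labeling effort where the classifier is least certain, which is what makes this cheaper than labeling the whole noisy result set.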
A topic correlation judgment algorithm based on weights and a threshold is proposed to address the problem that Web pages closely related to a given topic may be neglected when they do not contain all of the keywords supplied by the user during topic retrieval on the Internet. The algorithm retrieves
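The weight-and-threshold idea can be sketched as follows: each keyword carries a weight, a page's topic score is the sum of weights of the keywords it contains, and the page is accepted once the score passes a threshold, so a page missing some low-weight keywords is not discarded outright. The keywords, weights, and threshold below are invented for illustration; this is not the paper's algorithm.

```python
# Weighted-keyword topic scoring with an acceptance threshold.
# Keyword weights and the threshold are hypothetical example values.
def topic_score(page_text: str, keyword_weights: dict) -> float:
    """Sum the weights of keywords that appear in the page."""
    text = page_text.lower()
    return sum(w for kw, w in keyword_weights.items() if kw.lower() in text)

def is_relevant(page_text: str, keyword_weights: dict, threshold: float) -> bool:
    # A page missing some low-weight keywords can still pass the threshold.
    return topic_score(page_text, keyword_weights) >= threshold

weights = {"traffic": 0.5, "congestion": 0.3, "road": 0.2}
page = "Live road traffic updates for the city centre."
print(is_relevant(page, weights, threshold=0.6))  # traffic + road = 0.7 >= 0.6
```

Unlike a strict AND over all keywords, this keeps pages whose matched keywords are collectively heavy enough, which is exactly the neglect problem the abstract describes.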
number of citations. The variation of the number of citations over time is useful for determining the recency of a database, and it is related to the timeliness dimension. Regarding relevancy, the keywords of papers are useful to indicate the main context of application of these databases.
similar product images on shopping websites, ranking product tags by text aggregation, and searching textual items consisting of semantically meaningful tags to make recommendations. In addition, users can choose automatically suggested keywords to reflect their intentions. Subjective evaluation has demonstrated the
to create a new entry in a database from the Neurology Laboratory measurements. The advantage of the web application is the ability to view a patient's saved records, filter entries by keywords, show a preview of a curve, and download electrophysiological data in three different formats (edf, dat, xml). The output of the proposed
the main objective of this paper is to evaluate the effectiveness of GoogleMap as one of the online GIS map providers. System testing is used to assess three different study areas, namely Shah Alam, Kuala Lumpur and Taiping. To facilitate this assessment, a query list consisting of 78 keywords is gathered. Precision and
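Precision, the metric the abstract names, is the fraction of returned results that are actually relevant to the query. A minimal sketch follows; the result identifiers and ground-truth set are invented, and the 78-keyword query list itself is not reproduced here.

```python
# Precision of a single query's results against a relevance judgment.
# The result sets below are hypothetical examples, not data from the study.
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved results that are relevant; 0.0 if nothing retrieved."""
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

retrieved = {"loc1", "loc2", "loc3", "loc4"}   # results returned for one keyword
relevant = {"loc1", "loc3", "loc5"}            # ground-truth relevant locations
print(f"precision = {precision(retrieved, relevant):.2f}")  # 2/4 = 0.50
```

In an evaluation like the one described, this per-query precision would be computed for each of the 78 keywords and then averaged per study area.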
into the server. Each piece of file data or Web data is viewed as a memex event that can be described in 4W1H form. The memex event ontology is used to transform the various types of data into the standard 4W1H form. Users can view their life log chronologically and search it by keywords. Moreover, the life logs can be
Traditional automatic classifiers often produce misclassifications. Folksonomy, a new manual classification scheme based on the tagging efforts of users with freely chosen keywords, can effectively resolve this problem. Even though the scalability of folksonomy is much higher than that of other manual classification schemes, the
distinctive keywords used in Web pages or URLs in order to detect new phishing sites that are not yet listed in blacklists. However, these kinds of heuristics can be easily circumvented by phishers once their mechanism is revealed. In order to overcome this weakness, visual similarity-based detection techniques have been
As an ever-increasing amount of information on the Web today is available through search interfaces, users have to key in a set of keywords in order to access the pages of certain Web sites, which are often referred to as the hidden Web or the deep Web. Since there are no static links to the hidden Web pages, search