The success of the search engine may be our Newtonian paradigm for the Web. It enables us to do so much information discovery that it is difficult to imagine what we cannot do with it.
This paper presents a keyword extraction technique that can be used for tracking topics over time. In our work, keywords are a set of significant words in an article that give readers a high-level description of its contents. Identifying keywords from a large amount of on-line news data is very useful in that it can
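The snippet above names keyword extraction from news articles without showing the method. As a generic illustration (not the paper's own technique), a minimal sketch scores a word highly when it is frequent in the article but rare in a background corpus, in the style of TF-IDF:

```python
import math
from collections import Counter

def extract_keywords(article, corpus, top_k=5):
    """Rank an article's words by a TF-IDF-style score against a background
    corpus. `article` is a list of tokens; `corpus` is a list of token lists.
    Illustrative sketch only, not the paper's exact method.
    """
    tf = Counter(article)
    n_docs = len(corpus)
    # Document frequency: in how many corpus articles each word appears.
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    scores = {
        w: (count / len(article)) * math.log((1 + n_docs) / (1 + df[w]))
        for w, count in tf.items()
    }
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

Tracking topics over time would then amount to running this per time slice and comparing the ranked keyword sets between slices.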
People perform keyword search on search engines in order to find information from the Web. Keywords given to search engines can be regarded as the resources for detecting people's information needs. It is often reported that many people perform search intensively after worldwide disasters or accidents. This paper
This paper presents an attempt to show the efficiency of some search engines in dealing with Arabic keywords. This can be achieved by comparing the number of retrieved pages, retrieving time, and stability (in both the number of retrieved pages and the order for each retrieved page) for each one of the selected 20
With the rapid rise in the number of weblogs, or blogs, on the World Wide Web (WWW), there is a growing need to be able to quickly search for discussion on specific topics. While keyword searches using tools such as Google or Technorati can yield useful results, we run into the problem of having to enter
Search engines are one of the most powerful tools in the Web world today for data retrieval and exploration. Most search engines identify the keyword in the sentence, phrase, or list of words given by the user and start mining the Web for occurrences of the keyword in Web pages. Quite often searching for the key
With the rapid development of the network, traditional music charts no longer track the popularity of musical recordings well. Music search charts do this much better. However, search charts raise many new questions that need to be addressed. In this paper, we choose the special keywords on
-processing of Web search results has been extensively studied to help users effectively obtain useful information. This paper has three parts. The first part is a review of how a keyword is expanded through truncation or wildcards (a little-known feature, but one of the most powerful) by using
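Truncation and wildcard expansion, as mentioned in the snippet above, can be sketched as matching a pattern against an index vocabulary. This uses Python's standard `fnmatch` shell-style wildcards (`*` for any run of characters, `?` for one character) purely as an illustration; real search engines implement this against their term dictionaries:

```python
import fnmatch

def expand_keyword(pattern, vocabulary):
    """Expand a truncated or wildcard query term against an index vocabulary.

    `pattern` uses shell-style wildcards; `vocabulary` is any iterable of
    indexed terms. Hypothetical helper for illustration.
    """
    return sorted(w for w in vocabulary if fnmatch.fnmatch(w, pattern))
```

For example, `expand_keyword("comput*", vocabulary)` expands the truncated stem `comput` into all indexed terms that begin with it.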
This paper proposes a novel method to generate labels for grouping and organizing the search results returned by auxiliary search engines. It applies statistical techniques to measure co-occurrence counts of keywords to form their label matrix, and then agglomerates them into higher-level
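The co-occurrence measurement and agglomeration described above can be sketched as follows. This is a simplified illustration under assumed inputs (result snippets as plain strings, a fixed candidate keyword list), not the paper's exact algorithm:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_matrix(snippets, keywords):
    """Count how often each keyword pair appears in the same result snippet."""
    counts = Counter()
    for text in snippets:
        present = [k for k in keywords if k in text]
        for a, b in combinations(sorted(present), 2):
            counts[(a, b)] += 1
    return counts

def agglomerate(keywords, counts, threshold=2):
    """Greedily merge keywords whose co-occurrence count meets the threshold
    into shared label groups (a simplification of agglomerative clustering)."""
    groups = {k: {k} for k in keywords}
    for (a, b), n in counts.most_common():
        if n < threshold:
            break
        merged = groups[a] | groups[b]
        for k in merged:
            groups[k] = merged
    return {frozenset(g) for g in groups.values()}
```

Keywords that repeatedly co-occur in the same snippets end up in one group, which can then serve as a higher-level label for that cluster of results.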
The field of Information Retrieval plays an important role in searching on the Internet. Most information retrieval systems are limited to query processing based on keywords. In an information retrieval system, matching the query against a set of text records is the core of the system. Retrieval of the
The amount of information on the Web is growing at an exponential rate. Information overload has brought a heavy burden for modern life. Keyword-based search engines no longer fill the needs of many people. This paper introduces an approach towards intelligent information retrieval by providing clustered Web pages and
Keyword-based search engines often return an unexpected number of results. Zero hits are naturally undesirable, while too many hits are likely to be overwhelming and of low precision. We present an approach for predicting the number of hits for a given set of query terms. Using word frequencies derived from a large
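The hit-count prediction described above can be illustrated with a very simple model: under a term-independence assumption, the expected number of documents matching an AND query is the collection size times the product of each term's document-frequency ratio. This is an assumed baseline estimator for illustration, not necessarily the paper's exact one:

```python
def predict_hits(query_terms, doc_freq, n_docs):
    """Estimate the result count of an AND query under term independence:
    hits ~= N * prod(df(t) / N) over all query terms.

    `doc_freq` maps each term to the number of documents containing it;
    unseen terms yield a prediction of zero hits.
    """
    est = float(n_docs)
    for t in query_terms:
        est *= doc_freq.get(t, 0) / n_docs
    return round(est)
```

For instance, with 1,000,000 documents where "python" appears in 50,000 and "tutorial" in 20,000, the model predicts 1,000,000 × 0.05 × 0.02 = 1,000 hits. Such an estimate lets an interface warn about zero-hit or overwhelming queries before they are issued.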
Current search engines have two problems: losing useful information and including useless information. These two problems arise from the keyword-matching retrieval model, which almost all search engines adopt. We introduce the concept of the category attribute of a word. According to the category attribute
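The snippet above introduces a category attribute for words but is cut off before explaining its use. One plausible reading, sketched here with entirely hypothetical names and data shapes, is that postings are tagged with a category so that an ambiguous keyword only matches occurrences in the intended category:

```python
def category_filtered_search(query, query_category, index):
    """Match a query word only against postings tagged with the same
    category attribute. `index` maps word -> list of (page, category)
    postings. All names and structures here are illustrative assumptions.
    """
    return [page for page, cat in index.get(query, [])
            if cat == query_category]
```

Filtering by category would address both stated problems at once: pages in other categories are excluded (less useless information), and the intended sense is no longer diluted by unrelated matches (less lost information).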
The task of researching information on a particular topic using the Web is mainly accomplished by using keyword-based search engines. Although this approach provides a good starting point, it remains a tedious task to collect additional information that puts this topic in greater context. In this paper we present
, for example) don't even enable keyword searches on their sites. The Web's increasingly dynamic nature complicates searching. New pages created on the fly using personalization information, and even static content, with dynamically inserted sidebars, navigation bars, advertising and commentary, can present a rapidly
This paper aims to identify and explain the limitations and problems of retrieving Arabic texts in general search engines. We performed many experiments on Arabic documents from the Lebanese official journal. In our approach, we used three "keyword matching" Arabic search engines: Google, Yahoo
performance. Apart from estimating the best path to follow, our system also expands its initial keywords by using genetic algorithm during the crawling process. To crawl Vietnamese web pages, we apply a hybrid word segmentation approach which consists of combining automata and part of speech tagging techniques for the Vietnamese
plus noun phrase learning for extraction of activity concepts in Chinese. We also propose an algorithm of relevance measurement for extracting relation instances by binary keywords based on co-occurrence statistics. Finally, we build a practical system of ontology learning through learning relation instances of the
relational database of web pages. Much research has therefore focused on keyword search over these relational databases; compared with that research, our algorithms are mainly based on bags, using greedy algorithms and supporting phrase recognition by utilizing multiple dictionaries. We make a comparison