A Web search engine must update its index periodically to incorporate changes to the Web. Query processing is a major cost factor in operating large web search engines. To address this cost, we propose a framework for optimizing query processing by combining inverted index compression and feature-based caching. We perform a comparison and evaluation of several inverted index compression algorithms. We then...
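One classic family of inverted index compression schemes the abstract alludes to is gap encoding of sorted posting lists followed by variable-byte coding. The sketch below is illustrative (the function names and the choice of variable-byte coding are assumptions, not taken from the paper):

```python
# Compress a posting list (sorted docIDs) by storing the gaps between
# consecutive IDs, then variable-byte encoding each gap.

def vbyte_encode(numbers):
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)   # low 7 bits, continuation implied
            n >>= 7
        out.append(n | 0x80)       # high bit marks the final byte
    return bytes(out)

def vbyte_decode(data):
    numbers, n, shift = [], 0, 0
    for b in data:
        if b & 0x80:
            numbers.append(n | ((b & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= b << shift
            shift += 7
    return numbers

def compress_postings(doc_ids):
    # first ID stored as-is, then gaps between consecutive sorted IDs
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return vbyte_encode(gaps)

def decompress_postings(data):
    ids, total = [], 0
    for g in vbyte_decode(data):
        total += g
        ids.append(total)
    return ids
```

Gap encoding helps because consecutive docIDs in a long posting list tend to be close together, so most gaps fit in a single byte.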
Large web search engines have to answer thousands of queries per second with interactive response times under tight latency constraints. Query processing is a major cost factor in operating large web search engines. To keep up with this immense workload, a number of techniques such as caching, index compression, and index and query pruning are used to improve scalability. We focus on two techniques,...
Result caching is an efficient technique for reducing the query processing load, and hence it is commonly used in search engines. In this paper, we study query result caching and propose a cache management policy that achieves higher hit ratios than traditional heuristic methods. Our cache management policy comprises an eviction policy and an admission policy, and it divides the memory allocated...
We study the caching of query result pages in Web search engines. Popular search engines receive millions of queries per day, and efficient policies for caching query results may enable them to lower their response time. In this paper, we propose an architecture that combines result caching with an admission policy to improve the efficiency of search engines. In our system, we divide the cache...
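The combination of an eviction policy and an admission policy described in these abstracts can be sketched minimally as an LRU cache that only admits a query result once the query has been seen a minimum number of times. This is a simplified illustration under assumed parameters (`min_freq`, class and method names), not the papers' exact policies:

```python
from collections import Counter, OrderedDict

class AdmissionLRUCache:
    """Sketch: LRU eviction plus a frequency-based admission policy.

    A result is only admitted to the cache after its query has been
    submitted at least `min_freq` times, filtering out one-off queries
    that would pollute the cache without ever producing a hit.
    """

    def __init__(self, capacity, min_freq=2):
        self.capacity = capacity
        self.min_freq = min_freq
        self.freq = Counter()        # per-query frequency (admission signal)
        self.cache = OrderedDict()   # query -> result, in LRU order

    def get(self, query):
        if query in self.cache:
            self.cache.move_to_end(query)   # mark as most recently used
            return self.cache[query]
        return None                          # cache miss

    def put(self, query, result):
        self.freq[query] += 1
        if self.freq[query] < self.min_freq:
            return                           # admission policy: reject for now
        self.cache[query] = result
        self.cache.move_to_end(query)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
```

Dividing the allocated memory into segments (e.g. a static part for historically popular queries and a dynamic LRU part), as the abstracts describe, would layer on top of this same admit-then-evict structure.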
Image segmentation based on the random walk model in graph theory can be transformed into solving a large-scale sparse system of linear equations. The final solution of the system and the convergence rate of the iteration depend on the selection of the initial value, so selecting the initial value randomly is a significant disadvantage when segmenting large-scale images. In this paper, we propose a...
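The linear-algebra core of this abstract can be shown on a toy system: an iterative solver (here Jacobi, an assumed stand-in for whatever solver the paper uses) applied to a small diagonally dominant, Laplacian-like matrix, where the iteration count until convergence depends on how close the initial value is to the solution:

```python
def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration on a small dense stand-in for a sparse system.

    Returns the approximate solution and the number of iterations
    taken until successive iterates differ by less than `tol`.
    """
    n = len(b)
    x = list(x0)
    for it in range(1, max_iter + 1):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

# Tiny diagonally dominant, Laplacian-like system (illustrative values).
A = [[4.0, -1.0, -1.0],
     [-1.0,  4.0, -1.0],
     [-1.0, -1.0,  4.0]]
b = [1.0, 0.0, 1.0]          # exact solution: x = [0.4, 0.2, 0.4]

# A start far from the solution versus a start near it:
x_far, it_far = jacobi(A, b, [0.0, 0.0, 0.0])
x_near, it_near = jacobi(A, b, [0.5, 0.3, 0.5])
```

Both starts converge to the same solution, but the nearby start needs fewer iterations, which is the motivation for choosing the initial value carefully rather than randomly on large images.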
Recommender systems improve access to relevant products and information by making suggestions based on page-ranking technology. Existing learning-to-rank approaches, however, do not consider pages in the deep Web, which contain valuable information. In this paper, we present a novel product recommendation algorithm based on the content of Web pages, including the product information and customer...