Search engines on the Web have popularized the keyword-based search paradigm, whereas searching in databases requires users to know the database schema and a query language. Keyword search techniques from the Web cannot be applied directly to databases because data on the Internet and data in databases take different forms
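As a concrete illustration of the gap this abstract describes, the following minimal sketch maps free keywords onto a generated SQL query, so the user needs no schema knowledge. It is not the paper's technique; the `papers` table and its `title`/`abstract` columns are hypothetical.

```python
import sqlite3

def keyword_query(conn, keywords):
    # AND together one LIKE test per keyword over the searchable columns,
    # so every keyword must appear somewhere in the row.
    clause = " AND ".join("(title LIKE ? OR abstract LIKE ?)" for _ in keywords)
    params = []
    for kw in keywords:
        params += [f"%{kw}%"] * 2
    return conn.execute(f"SELECT title FROM papers WHERE {clause}", params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, abstract TEXT)")
conn.execute("INSERT INTO papers VALUES (?, ?)",
             ("Keyword search over databases", "Users need no schema knowledge."))
print(keyword_query(conn, ["keyword", "schema"]))  # [('Keyword search over databases',)]
```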
This paper presents an attempt to show how efficiently some search engines handle Arabic keywords. This is done by comparing the number of retrieved pages, the retrieval time, and the stability (in both the number of retrieved pages and the order of each retrieved page) for each of the selected 20
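A measurement harness of the kind such a comparison implies might look like the sketch below. The `search()` call is a placeholder returning fixed fake values, not any engine's real API, and reading "stability" as the spread of hit counts across repeated runs is an assumption about how the snippet's criterion could be operationalized.

```python
import time, statistics

def search(engine, keyword):
    # Placeholder: a real harness would call the engine's query API here
    # and return (reported_hit_count, ordered_result_ids). Fixed fake
    # values keep the sketch runnable.
    return 1200, ["p1", "p2", "p3"]

def measure(engine, keyword, runs=3):
    counts, times = [], []
    for _ in range(runs):
        start = time.perf_counter()
        hits, _order = search(engine, keyword)
        times.append(time.perf_counter() - start)
        counts.append(hits)
    # Count stability = spread of hit counts over repeated identical queries;
    # order stability would compare _order between runs the same way.
    return {"mean_time": statistics.mean(times),
            "count_stddev": statistics.pstdev(counts)}

print(measure("engine-A", "مكتبة"))  # an Arabic keyword, "library"
```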
It's important to eliminate noisy data for information extraction on the deep web. In this paper, we propose a new approach called ENDW (Eliminating Noisy Data in Web pages), based on query keywords and DOM tools, to eliminate noisy data. Query keywords submitted to back-end databases always appear in deep web pages. The
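ENDW itself is only named in this snippet, so the following is an illustrative stand-in for the core idea: parse the page into text blocks and keep only those containing query keywords, on the stated premise that submitted keywords always appear in the result records while noise blocks (ads, navigation) rarely contain them.

```python
from html.parser import HTMLParser

class BlockCollector(HTMLParser):
    """Collect every non-empty text block in the page."""
    def __init__(self):
        super().__init__()
        self.blocks = []
    def handle_data(self, data):
        text = data.strip()
        if text:
            self.blocks.append(text)

def eliminate_noise(html, query_keywords):
    parser = BlockCollector()
    parser.feed(html)
    kws = [k.lower() for k in query_keywords]
    # Keep only blocks mentioning at least one query keyword; the rest is
    # treated as likely noise.
    return [b for b in parser.blocks
            if any(k in b.lower() for k in kws)]

page = "<div>Ad: buy now</div><p>Deep web record: salinity data 2008</p>"
print(eliminate_noise(page, ["salinity"]))  # ['Deep web record: salinity data 2008']
```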
The traditional layout of news websites, a combination of classified hierarchical browsing, headline recommendation, and keyword-based search, has been used for many years. Keyword-based search is considered the most powerful tool for news browsing and retrieval. Unfortunately, the keyword-based query
Post-processing of Web search results has been extensively studied to help users effectively obtain useful information. This paper has three parts. The first part reviews how keywords are expanded through truncation or wildcards (a little-known feature, but one of the most powerful) by using
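The truncation/wildcard expansion this review describes can be demonstrated with the standard library: the pattern is expanded against the index vocabulary before retrieval. The vocabulary here is a made-up example.

```python
import fnmatch

vocabulary = ["compute", "computer", "computing", "computation", "commuter"]

def expand(pattern):
    # fnmatch treats '*' as any run of characters and '?' as exactly one,
    # matching the truncation/wildcard syntax of many search engines.
    return fnmatch.filter(vocabulary, pattern)

print(expand("comput*"))   # ['compute', 'computer', 'computing', 'computation']
print(expand("comput??"))  # ['computer']
```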
The Internet is important in everyone's life, for searching keywords, colleges, social networks, and online shopping. When users search the Internet for a keyword, they run into a problem: they search for a keyword with one meaning in mind but get results for a different meaning of that keyword. Because
based on keyword indexing, there are many records in their result lists that are irrelevant to the user's information needs. It is shown that to retrieve more relevant and precise results, the following two points should be addressed: first, the query (whether it is generated by a human or an intelligent agent
the device to strengthen the defense. To enhance the security of the back-end application servers, we use keyword filtering and re-treatment to rule out blacklisted input, and adjust the system settings so that it can effectively block assaults or reduce the likelihood of successful attacks. In addition, we also
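A minimal sketch of the keyword-filtering step described here: request parameters are checked against a blacklist of attack signatures before reaching the back-end application server. The three patterns are illustrative only, not a production rule set, and the snippet's "re-treatment" step is not modeled.

```python
import re

BLACKLIST = [
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection fragment
    re.compile(r"(?i)<script\b"),           # reflected XSS payload
    re.compile(r"\.\./"),                   # path traversal
]

def is_blocked(value: str) -> bool:
    # Reject the request if any blacklist signature appears in the input.
    return any(p.search(value) for p in BLACKLIST)

print(is_blocked("id=1 UNION SELECT password FROM users"))  # True
print(is_blocked("q=ocean salinity"))                       # False
```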
needed to search and find relevant information. For tabular structures embedded in HTML documents, typical keyword- or link-analysis-based search fails. The next phase envisioned for the WWW is automatic ad hoc interaction between intelligent agents, web services, databases, and semantic-web-enabled applications. A large
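One reason plain keyword search fails on embedded tables is that a cell value is meaningless without its column header. A small stand-in indexer (not from the text) pairs each cell with its header so "18.5" becomes a searchable (temp, 18.5) record.

```python
from html.parser import HTMLParser

class TableIndexer(HTMLParser):
    """Turn <th>/<td> cells into (header, value) records."""
    def __init__(self):
        super().__init__()
        self.headers, self.records = [], []
        self._cell, self._col = None, 0
    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._cell = tag
    def handle_endtag(self, tag):
        if tag == "td":
            self._col += 1          # advance to the next column
        elif tag == "tr":
            self._col = 0           # new row starts at column 0
        if tag in ("td", "th"):
            self._cell = None
    def handle_data(self, data):
        text = data.strip()
        if not text or self._cell is None:
            return
        if self._cell == "th":
            self.headers.append(text)
        else:
            header = self.headers[self._col] if self._col < len(self.headers) else ""
            self.records.append((header, text))

html = ("<table><tr><th>region</th><th>temp</th></tr>"
        "<tr><td>Gulf of Mexico</td><td>18.5</td></tr></table>")
idx = TableIndexer()
idx.feed(html)
print(idx.records)  # [('region', 'Gulf of Mexico'), ('temp', '18.5')]
```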
The World Wide Web has become a huge repository of data of interest for a variety of application domains. However, the same features that have made the Web so useful and popular also impose important restrictions on the way the data it contains can be manipulated. Particularly, in the traditional Web scenario, there is an inherent difficulty in gaining access to data that is implicitly present in...
Traditional information-gathering systems are mostly keyword-based; they lack semantic comprehension and analysis ability and cannot guarantee the comprehensiveness and accuracy of information gathering. This paper proposes a Chinese patent information-gathering model based on domain ontology, which can visualize
machines interacting with other machines to yield results that are user-oriented and precise. A new Integrated Case and Relation Based Page Rank Algorithm has been proposed to rank the results of a search system based on a user's topic or query. This paper proposes an optimized semantic search of keywords represented by
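The integrated case- and relation-based variant is only named, not specified, in this snippet; for reference, the standard PageRank power iteration such variants build on looks like the sketch below, with a made-up toy link graph.

```python
def pagerank(links, damping=0.85, iters=50):
    # links: node -> list of nodes it links to.
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleportation term shared equally by all nodes.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            share = rank[n] / len(outs) if outs else 0.0
            for m in outs:
                new[m] += damping * share  # distribute rank along out-links
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))
```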
Domain-specific search focuses on one area of knowledge. Applying broad-based ranking algorithms to vertical search domains is not desirable, because a broad-based ranking model builds on data from many domains across the web. Vertical search engines instead use a focused crawler that indexes only web pages relevant to a predefined topic. With a Ranking Adaptation Model, one can adapt an...
Presentations are crucial components of knowledge sharing in organizations, facilitating organizational knowledge acquisition and invention. Current web applications that enable users to collaboratively create presentation-like web content and place it on a web page for knowledge sharing insufficiently support the types of materials used in presentations and the functions...
In recent years, the application of ontology has diversified with the development of semantic Web technology. The main application of ontology is information retrieval; with ontology, we expect to offer more accurate information to users. Although most applications of ontology are in information retrieval, they lack interaction with...
While they do well for keyword search strings such as "ocean'08 conference information", they are quite inadequate for searching against structured data such as "time-series ocean surface temperature or salinity levels in the Gulf of Mexico". Traditional search engines deploy various complex algorithms, take into account the
generate and calculate the associated relations and their strengths between documents within a domain. Each document is represented by a bag of words and their weights. We first build domain background knowledge based on association rules at the keyword level, and then we apply those association rules to generate and
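A hedged sketch of the two-step pipeline described: mine keyword-level association rules (co-occurrence support) from the collection, then score the relation strength between two weighted bags of words through those rules. The support threshold and the strength formula below are illustrative assumptions, not the paper's own equations.

```python
from itertools import combinations

docs = {
    "d1": {"ontology": 0.9, "retrieval": 0.6},
    "d2": {"retrieval": 0.8, "ranking": 0.5},
    "d3": {"ontology": 0.7, "ranking": 0.4},
}

def association_rules(docs, min_support=0.3):
    # Support of a keyword pair = fraction of documents containing both.
    n = len(docs)
    vocab = {k for d in docs.values() for k in d}
    rules = {}
    for a, b in combinations(vocab, 2):
        support = sum(1 for d in docs.values() if a in d and b in d) / n
        if support >= min_support:
            rules[(a, b)] = support
    return rules

def relation_strength(d1, d2, rules):
    # Sum rule support weighted by the keywords' weights in each document,
    # counting both directions of each rule.
    score = 0.0
    for (a, b), s in rules.items():
        if a in d1 and b in d2:
            score += s * d1[a] * d2[b]
        if b in d1 and a in d2:
            score += s * d1[b] * d2[a]
    return score

rules = association_rules(docs)
print(relation_strength(docs["d1"], docs["d2"], rules))
```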
keywords. These Web pages are ranked by a newly introduced equation. Evidently, not all matched Web pages are selected by this keyword selection procedure; hence, the unmatched Web pages are checked and labeled 'Secondary' Web pages, and these are ranked by another new equation. The ranking procedure of
since performing a keyword search using search engines like Google, Yahoo, etc. presents them with a list of publication sites, where the user needs to click through a series of links to reach the journal web site and manually go through the details of the journals, like Impact Factor, SNIP, etc. Suppose a publication web