With a virtually unlimited number of information sources, search engines cannot find or index a large part of this information because it is located behind HTML forms. That part of the Web is usually known as the hidden Web or deep Web, and because traditional crawlers lack a suitable technique for getting past HTML forms, many hidden Web crawlers try to overcome the problem of retrieving the data behind such forms...
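"Getting past" an HTML form essentially means filling in its fields and submitting it programmatically, the way a browser would. A minimal sketch using only the Python standard library, assuming a hypothetical search form at example.com with a single text field named `q` (the URL and field name are illustrative, not from any of the systems above):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_form_query(action_url, field_values):
    """Encode form fields as a POST request, the way a browser
    submits a form (application/x-www-form-urlencoded)."""
    data = urlencode(field_values).encode("ascii")
    return Request(action_url, data=data, method="POST")

# Hypothetical search form: one text field named "q".
req = build_form_query("http://example.com/search", {"q": "deep web"})
print(req.get_method())          # POST
print(req.data.decode("ascii"))  # q=deep+web
```

A real hidden-Web crawler would then pass such a request to `urllib.request.urlopen` and parse the result page; the hard part, as the abstracts here note, is choosing which values to fill in.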
Current search engines such as Google and Yahoo! are prevalent for searching the Web. Search on dynamic client-side Web pages is, however, either nonexistent or far from perfect, and not addressed by existing work, for example on the Deep Web. This is a real impediment, since AJAX and Rich Internet Applications are already very common on the Web. AJAX applications are composed of states which can be seen...
This paper presents the architecture of a traffic advisory system based on the Deep Web. It automatically generates meaningful queries through a Deep Web crawler mechanism, according to the query conditions filled in by the user. It then automatically finds and downloads the relevant pages. Based on this theoretical framework, the system displays the information more intuitively and accessibly for users.
Crawling the deep Web is the process of collecting data from search interfaces by issuing queries. With the wide availability of programmable interfaces exposed as Web services, deep Web crawling has found a large variety of applications. One of the major challenges in crawling the deep Web is the selection of queries so that most of the data can be retrieved at a low cost. We propose a general method in this...
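The query-selection problem this abstract describes is commonly cast as a weighted set cover: each candidate query retrieves some set of documents at some cost, and the crawler wants maximal coverage within a budget. A minimal greedy sketch of that idea (not the paper's actual method; the candidate queries, document sets, and costs below are made up for illustration):

```python
def select_queries(candidates, budget):
    """Greedy query selection: repeatedly pick the candidate query
    that retrieves the most not-yet-seen documents per unit cost,
    until the budget is exhausted or no query adds new documents.
    `candidates` maps each query string to (set_of_doc_ids, cost)."""
    seen, chosen, spent = set(), [], 0
    remaining = dict(candidates)
    while remaining:
        # Best new-documents-per-cost ratio among unused queries.
        best = max(remaining,
                   key=lambda q: len(remaining[q][0] - seen) / remaining[q][1])
        docs, cost = remaining.pop(best)
        new = docs - seen
        if not new or spent + cost > budget:
            break
        chosen.append(best)
        seen |= new
        spent += cost
    return chosen, seen

# Toy example: three keyword queries against an imaginary backend.
toy = {
    "database": ({1, 2, 3, 4}, 1),
    "crawler":  ({3, 4, 5}, 1),
    "web":      ({5, 6}, 1),
}
queries, covered = select_queries(toy, budget=2)
# queries == ["database", "web"]; covered == {1, 2, 3, 4, 5, 6}
```

The greedy ratio rule is a standard approximation for set cover; in a real crawler the document sets are unknown in advance and must be estimated, which is precisely what makes the selection problem hard.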
Conventional search engines generally cannot find information from the Deep Web because they use hyperlink-based crawling techniques to visit Web pages. Recently, many research efforts have attempted to crawl the Deep Web. One of the obstacles to crawling the Deep Web is the requirement of huge computing resources, which most search engine companies can hardly meet. We, therefore, propose...
As an ever-increasing amount of information on the Web today is available only through search interfaces, users have to key in a set of keywords in order to access the pages of certain Web sites, which are often referred to as the hidden Web or the deep Web. Since there are no static links to hidden Web pages, search engines cannot discover and index such pages. However, according to recent studies,...
A lot of high-quality, rich data is hidden in backend databases and is mostly accessible only through query interfaces; search engines cannot index these pages, which are collectively called the Deep Web. SDWS, a semantic search engine for the Deep Web, is presented. We study and apply semantic Web technology to each stage of Deep Web information integration, with expertise in Deep Web discovery,...