classic statistical method for sentence alignment, we propose an improved approach to align the initial bilingual resources, in which two factors, bilingual keyword pairs and matching patterns, are introduced. Experimental results show that our sentence aligner, supported by the new approach, achieves a performance enhancement by
The World Wide Web has become a huge repository of data of interest for a variety of application domains. However, the same features that have made the Web so useful and popular also impose important restrictions on the way the data it contains can be manipulated. Particularly, in the traditional Web scenario, there is an inherent difficulty in gaining access to data that is implicitly present in...
they do well for keyword search strings such as "ocean'08 conference information", they are quite inadequate for searching against structured data such as "time-series ocean surface temperature or salinity levels in the Gulf of Mexico". Traditional search engines deploy various complex algorithms, take into account the
In the Internet age, with the explosive growth of online information, people want to find the information they need in the cyberworld quickly and accurately. The information retrieval method based on keywords, or on simple logical combinations of keywords, has been unable to meet people's need to obtain information
option, say, limiting search to a few links. To reduce the time spent by users, a web link extraction tool has been designed and implemented in Java, which analyzes ways of extracting web link information through a standard interface. A test scenario is presented with various keywords such as Higher Education
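The fragment above describes a Java tool only at a high level. As a minimal sketch of the underlying idea (the class name and sample HTML are illustrative assumptions, not the tool's actual interface), anchor links can be extracted with a standard HTML parser:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href value of every anchor tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Only <a> tags with a non-empty href contribute a link.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="https://example.org/a">A</a> <a href="/b">B</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['https://example.org/a', '/b']
```

A real tool would additionally resolve relative URLs against the page's base URL and deduplicate the results before presenting them to the user.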
Most search engines search for keywords to answer users' queries. The search engines usually search web pages for the required information; however, they use advanced algorithms to filter out irrelevant pages from the results. These search engines can answer topic-wise queries efficiently and
data-rich by keywords in the index path; generate extraction rules and obtain a wrapper accordingly. The wrapper can automatically extract data from Websites in the same domain. It relies on the continuity, the structural similarity, and the location relations of the useful information in Web pages, rather than on the HTML
the tokens or between the checksums indicates a same-origin violation. To reduce the scheme's performance overhead, this matching is performed only when a request originating from a page with no submission form contains suspicious keywords. We analyze the protection potential, security, and performance overhead of our scheme.
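The scheme in this fragment is described only at a high level. The following is a hypothetical Python sketch of the matching step under stated assumptions: the keyword list, function names, and token format are illustrative, not the authors' design, and a checksum here is simply a SHA-256 digest of the token:

```python
import hashlib

# Illustrative keyword list; the actual scheme's heuristics are not specified.
SUSPICIOUS_KEYWORDS = {"password", "transfer", "delete"}

def checksum(token: str) -> str:
    """Digest of a token, so raw values need not be compared directly."""
    return hashlib.sha256(token.encode()).hexdigest()

def violates_same_origin(session_token, request_token,
                         page_has_form, request_params):
    """Flag a possible same-origin violation.

    The comparison runs only for requests coming from a page with no
    submission form that also carry suspicious keywords, which keeps
    the common case cheap, as the fragment describes.
    """
    suspicious = any(p in SUSPICIOUS_KEYWORDS for p in request_params)
    if page_has_form or not suspicious:
        return False  # check skipped: no overhead in the common case
    # Mismatched checksums indicate the tokens differ.
    return checksum(session_token) != checksum(request_token)

print(violates_same_origin("tok-abc", "tok-abc", False, ["transfer"]))  # False
print(violates_same_origin("tok-abc", "tok-xyz", False, ["transfer"]))  # True
```

The design choice sketched here is the one the abstract emphasizes: the expensive comparison is gated behind a cheap heuristic, so most requests never pay for it.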