fields and provides researchers with the application form best matched to their current research field. We have developed a recommendation system for the Grant-in-Aid program by using JSPS (Japan Society for the Promotion of Science) keywords. The system can determine some association rules between the
This paper studies a keyword choice and analysis approach for SEO to address issues such as low efficiency, poor reliability, and unstable optimization in manual SEO processing. A keyword expansion method is proposed that mines a search engine's related-search keywords to meet users' requirements
This paper presents a keyword extraction technique that can be used for tracking topics over time. In our work, keywords are a set of significant words in an article that give readers a high-level description of its contents. Identifying keywords from a large amount of on-line news data is very useful in that it can
keywords from the Web pages. The system first identifies the section of the Web page that contains the multimedia file to be extracted and then extracts it using clustering techniques and other statistical tools. Experimental results on real-world image-sharing Web sites are presented and discussed in this paper
It is important to eliminate noisy data for information extraction on the deep web. In this paper, we propose a new approach called ENDW (Eliminating Noisy Data in Web pages), based on query keywords and DOM tools, to eliminate noisy data. Query keywords submitted to backend databases always appear in deep web pages. The
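The keyword heuristic at the core of this snippet can be sketched in a simplified form. ENDW itself works over the DOM tree; the sketch below is a minimal illustration rather than the authors' implementation, applying the same idea to pre-extracted text blocks and keeping only those that echo the query:

```python
def eliminate_noise(blocks: list[str], query_keywords: list[str]) -> list[str]:
    """Keep only text blocks that contain at least one query keyword.

    Rationale from the snippet: query keywords submitted to the backend
    database always appear in the result page, while ads, navigation bars,
    and other noisy regions rarely echo the query.
    """
    kws = [k.lower() for k in query_keywords]
    return [b for b in blocks if any(k in b.lower() for k in kws)]

blocks = [
    "Home | About | Contact",                          # navigation noise
    "Results for deep web query keywords appear here",  # data region
    "Advertisement: buy now",                           # ad noise
]
print(eliminate_noise(blocks, ["deep web", "query"]))
```

On real pages the blocks would be text nodes gathered by walking the DOM, but the filtering rule is the same.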
Content-based phishing detection extracts keywords from a target Web page, uses these keywords to retrieve the corresponding legitimate site, and detects phishing when the domain of the target page does not match that of the retrieved site. It often misidentifies a legitimate target site as a phishing site, however
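The final domain-comparison step of this detection scheme can be sketched as follows. The keyword extraction and retrieval of the legitimate site are omitted, and both the URLs and the bare-hostname comparison are illustrative assumptions, not the paper's implementation:

```python
from urllib.parse import urlparse

def is_phishing(target_url: str, retrieved_url: str) -> bool:
    """Flag the target page as phishing when its domain differs from the
    domain of the legitimate site retrieved via keyword search."""
    target_domain = urlparse(target_url).netloc.lower()
    retrieved_domain = urlparse(retrieved_url).netloc.lower()
    return target_domain != retrieved_domain

# A page hosted on a look-alike domain is flagged:
print(is_phishing("http://paypa1-login.example.net/signin",
                  "https://www.paypal.com/"))   # True
```

A production check would compare registered domains (e.g. collapsing `www.paypal.com` to `paypal.com`) rather than raw hostnames, which is also where the false positives mentioned in the snippet creep in.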
needed to search and find relevant information. For tabular structures embedded in HTML documents, typical keyword or link-analysis based search fails. The next phase envisioned for the WWW is automatic ad-hoc interaction between intelligent agents, web services, databases and semantic web enabled applications. A large
ordinary users to use. In this paper, we propose a novel keyword-based user interface system, EasyUI, for achieving web-scale data integration that is easy for ordinary users to use. Dealing with heterogeneity at web scale presents many new challenges. We propose new methods to address these challenges, i.e., indexing schemata
The motivation behind sub-topic or topic-specific keyword discovery through Web pages is to help a user who lacks knowledge and experience about a topic to find important concepts without much effort. Intuitively, a Web user would start searching the Web by querying search engines, visiting some pages
information is especially important. Keyword search, the de-facto standard for searching Electronic Health Records (EHR), is a simple and therefore popular technique; however, it is not ideal and often returns either too many irrelevant or too few relevant search results. Clinicians, usually very short on time, simply cannot afford
agent that targets a particular topic and visits and gathers only relevant web pages. In this dissertation I worked on the design and operation of a web crawler that can be used to detect copyright infringement. We take one seed URL as input and search with a keyword; the search result is based on the keyword, and the crawler will fetch
The Web represents one of the largest repositories of information ever compiled by mankind, and as such, search techniques are essential to navigating its depths and returning pertinent information. Typically, the search techniques employed in search engines such as Google entail the use of keywords, in which Web pages
While the problem to find needed information on the Web is being solved by the major search engines, access to the information in large text documents (e-books, conference proceedings, product manuals, etc) is still very rudimentary. Thus, keyword-search is often the only way to find the needle in the haystack. There
Mobile web browsing means accessing the content of web pages using a mobile device. It is common for Internet search engines to use keyword searching, in which a rank is assigned to each page based on several features. But it is an arduous task for a user to type a keyword on such a small mobile screen
desirable. In this paper, some existing achievements are investigated first. Then our current technique for web information extraction is discussed in detail. In our approach, rules and patterns are extracted from sample pages through a training process with human involvement. We use both keywords and regular expressions to
Applying automatic summarization to a search engine can make it easier for users to grasp the content of a Web page. In this paper, the results of a search engine are analyzed. On the basis of query keyword expansion, we propose a new summarization approach which calculates sentence weight utilizing the information of
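The sentence-weighting idea can be illustrated with a minimal sketch. The snippet does not specify the expansion step in full, so the expanded keyword set is supplied by hand here, and the weight is simply the fraction of expanded keywords a sentence contains, which is an assumption, not the paper's exact formula:

```python
def sentence_weight(sentence: str, keywords: set[str]) -> float:
    """Weight a sentence by the fraction of expanded query keywords it contains."""
    words = set(sentence.lower().split())
    return len(words & keywords) / len(keywords)

def summarize(sentences: list[str], keywords: set[str], top_n: int = 2) -> list[str]:
    """Return the top_n highest-weighted sentences, kept in original order."""
    ranked = sorted(sentences, key=lambda s: sentence_weight(s, keywords),
                    reverse=True)
    chosen = set(ranked[:top_n])
    return [s for s in sentences if s in chosen]

# Hypothetical query "search engine" expanded with related terms:
expanded = {"search", "engine", "ranking", "query", "expansion"}
sentences = [
    "Search engine ranking depends on keywords",
    "The weather was nice",
    "Query expansion improves search results",
]
print(summarize(sentences, expanded))
```

The low-weight filler sentence is dropped, leaving a two-sentence summary focused on the query.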
posts were collected from a selected hacker forum using a customized web crawler. Posts were analyzed using a part-of-speech tagger, which helped determine a list of keywords used to query the data. Next, a sentiment analysis tool scored these keywords, which were then analyzed to determine the effectiveness of this
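The keyword-extraction and scoring pipeline can be sketched in miniature. A toy noun list stands in for a real part-of-speech tagger, and the sentiment lexicon is likewise illustrative; both are assumptions for the sake of the example:

```python
# Toy stand-ins: a real pipeline would use a POS tagger to find nouns
# and a trained sentiment model to score them.
NOUNS = {"exploit", "password", "malware", "tutorial"}
SENTIMENT = {"exploit": -0.8, "malware": -0.9, "password": -0.2, "tutorial": 0.3}

def extract_keywords(posts: list[str]) -> list[str]:
    """Collect noun keywords from forum posts, first occurrence order."""
    seen: set[str] = set()
    keywords: list[str] = []
    for post in posts:
        for token in post.lower().split():
            if token in NOUNS and token not in seen:
                seen.add(token)
                keywords.append(token)
    return keywords

def score_keywords(keywords: list[str]) -> dict[str, float]:
    """Attach a sentiment score to each keyword (0.0 when unknown)."""
    return {k: SENTIMENT.get(k, 0.0) for k in keywords}

posts = [
    "new exploit for router firmware",
    "selling malware source",
    "free tutorial on password cracking",
]
print(score_keywords(extract_keywords(posts)))
```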
option, say, limiting search to a few links. To reduce the time spent by users, a web link extraction tool has been designed and implemented in Java that analyzes ways of extracting web link information using a standard interface. The test scenario has been presented with various keywords such as Higher Education
synthesis and learning about individual users. Among these, the most common use is finding relevant information. We simply specify a set of keywords or a query as a request or reference, and we get a list of pages ranked by similarity to the query. Currently, web search faces the problem that many times the outcome is not