In search engines, keyword extraction is an important technique. In this paper, addressing the defects of the traditional keyword extraction algorithm, we propose an improved weight computation strategy. The experimental results show that the improved method performs significantly better than the
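The snippet above does not specify the weighting scheme. As a hypothetical illustration only, the classic TF-IDF weight (a common baseline that such improved strategies typically extend) can be sketched as:

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Compute plain TF-IDF keyword weights for tokenized documents.

    This is the standard baseline, not the paper's improved strategy.
    """
    n_docs = len(docs)
    # Document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy corpus: "search" appears everywhere, so its IDF (and weight) is 0
docs = [["keyword", "extraction", "search"],
        ["search", "engine", "ranking"],
        ["keyword", "weight", "search"]]
w = tfidf_weights(docs)
```

Terms that occur in every document receive zero weight, which is one of the defects (insensitivity to position, length, and domain) that improved weighting strategies aim to fix.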
This paper investigates the semantic search performance of search engines. Initially, three keyword-based search engines (Google, Yahoo and MSN) and a semantic search engine (Hakia) were selected. Then, ten queries on various topics, and four phrases having different syntax but similar meanings, were determined
This paper presents an attempt to show the efficiency of some search engines in dealing with Arabic keywords. This can be achieved by comparing the number of retrieved pages, retrieving time, and stability (in both the number of retrieved pages and the order for each retrieved page) for each one of the selected 20
Conventional keyword-based Web image search engines now let us obtain plenty of acceptable images of a target object just by submitting its name. However, because the search results rarely include its uncommon images, we often get only its common images and cannot easily gain exhaustive knowledge
The Web represents one of the largest repositories of information ever compiled by mankind and, as such, search techniques are essential to navigating its depths and returning pertinent information. Typically the search techniques employed in search engines such as Google entail the use of keywords in which Web pages
, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like “pneumothorax” yielded the first ten most relevant results of 705,550 total
search the Web effectively. In this paper, we present a QS module, denoted CQS, which assists children in finding appropriate query keywords to capture their information needs by (i) analyzing content written for/by children, (ii) examining phrases and other metadata extracted from reputable (children's) websites, and (iii
identifying Tweets that describe cases with acute and more critical symptoms from those referring to milder cases. We found that, using merely very small n-gram keyword lexica, the automatic identification of critical cases reaches an accuracy of 92%.
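The actual lexica are not given in the snippet above. A minimal sketch of lexicon-based classification, with a tiny hypothetical n-gram keyword lexicon for acute symptoms, might look like:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Hypothetical lexicon of unigrams/bigrams signalling acute symptoms;
# the paper's real lexica are not reproduced here.
CRITICAL_LEXICON = {"cannot breathe", "chest pain", "emergency", "icu"}

def is_critical(tweet):
    """Flag a tweet as critical if any lexicon n-gram occurs in it."""
    tokens = tweet.lower().split()
    grams = ngrams(tokens, 1) | ngrams(tokens, 2)
    return bool(grams & CRITICAL_LEXICON)
```

Matching sets of extracted n-grams against a small keyword lexicon like this is cheap and transparent, which is presumably why even "very small" lexica can perform well on this task.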
(timestamp). Given the huge web as a temporal data collection, in this paper, we introduce a framework based on our current work. The main task is to find the association between two topics in different time slots (durations). Given a keyword as the main topic, we expect to find three kinds of topics which are relevant to the
the correct answer among their top 10 search results. The Internet's redundancy of information and the recent proliferation of user-generated content help search engines to remain almost entirely keyword-oriented and still robustly handle equivalent versions of queries. In this paper we propose a family of metrics to
In order to solve the problem that data can only be collected from a single source at some fixed time after mining the keywords at a rather superficial level, and to make full use of the information returned by search engines to construct the social relationship network based on the semantic links of the searched
According to a report online [34], more than 200 million unique users search for jobs online every month. This incredibly large and fast-growing demand has enticed software giants such as Google and Facebook to enter this space, which was previously dominated by companies such as LinkedIn, Indeed, Dice and CareerBuilder. Recently, Google released their "AI-powered Jobs Search Engine", "Google For Jobs"...
All-about diaries—software platforms that record, in a browsable and machine-processable format, the everyday activities of people, communities, and objects—offer a wealth of application opportunities, but their full-fledged implementation will require overcoming several challenges.
A text big-data analytics study, «Third Wave», is described. A morphological matrix of several keyword phrases was collected from the Internet's open textual resources using APIs. The results are analyzed from the point of view that the global Internet audience forms a «people-to-IT» system
Most web search engines use only the search keywords for searching. Due to the ambiguity of semantics and usages of the search keywords, the results are noisy and many of them do not match the user's search goals. This paper presents the design of an intelligent Search Bot, which operates as an agent for a user by
option, say, limiting search to a few links. To reduce the time spent by users, a web link extraction tool has been designed and implemented in Java that analyzes ways of extracting web link information using a standard interface. The test scenario has been presented with various keywords such as Higher Education
Intrastate Stability was prepared. The main components for the ranking are considered: Social Impact, capture of the Sixth Kondratiev wave, the Askari-Rehman Economic Islamicity Index, and growth in the number of keywords. k-means cluster analysis was implemented to improve the results.
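The snippet above names k-means but gives no details. A self-contained toy sketch of the algorithm on hypothetical 2-D score vectors (not the study's actual ranking components or data) might look like:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means for points given as tuples of floats.

    Deterministic spread-out initialization is used instead of random
    seeding so the toy example is reproducible.
    """
    centroids = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance)
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its cluster
        centroids = [
            tuple(sum(dim) / len(dim) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of hypothetical 2-D score vectors
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
          (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
centroids, clusters = kmeans(points, k=2)
```

On separated data like this, the two centroids settle near the centers of the low-score and high-score groups.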
, we review 79 alarm handling studies collected through a systematic literature search and classify the proposed approaches into seven categories. The literature search was performed by combining keyword-based database search and snowballing. Our review is intended to provide an overview of various alarm handling
. GeoContext includes methods for filtering a social media stream by keywords and location coordinates in order to provide more specific topics. GeoContext includes a geolocation module, called GeoContext Locator, for predicting the locations of tweets that are not associated with explicit coordinates, in order to model topics in
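GeoContext's actual API is not shown in the snippet above. A hypothetical sketch of the described filtering step, over tweet-like records with a keyword list and a coordinate bounding box, could look like:

```python
def filter_stream(tweets, keywords, bbox):
    """Yield tweets that mention any keyword and fall inside bbox.

    bbox = (min_lat, min_lon, max_lat, max_lon). Tweets without explicit
    coordinates are skipped here; per the snippet, a locator module
    would predict locations for those separately.
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    for t in tweets:
        coords = t.get("coordinates")
        if coords is None:
            continue
        lat, lon = coords
        text = t.get("text", "").lower()
        if (any(k in text for k in keywords)
                and min_lat <= lat <= max_lat
                and min_lon <= lon <= max_lon):
            yield t

# Hypothetical stream: one geotagged match, one untagged, one off-topic
tweets = [
    {"text": "Flooding downtown", "coordinates": (40.71, -74.00)},
    {"text": "Flooding again", "coordinates": None},
    {"text": "Nice weather", "coordinates": (40.71, -74.00)},
]
nyc_bbox = (40.4, -74.3, 41.0, -73.6)
hits = list(filter_stream(tweets, ["flood"], nyc_bbox))
```

Combining a textual keyword filter with a spatial filter like this narrows a raw stream to records relevant to both a topic and a region before topic modeling.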
becomes really difficult because of name abbreviations, interdisciplinary work, and especially tautonyms among Chinese scholars. Scholar classification can be achieved from their publications, the journals they publish in, and the keywords in their publications, using big-data techniques.