creative ideas to customers. To solve this problem, this paper presents algorithms to achieve the customers' goals. The project is divided into three parts. The first part enriches and analyses the input keywords using the semantic web. The second part generates raw ideas and relevant ideas using an inference
Characterizing high-definition images from the web is a very difficult task. In this paper we propose a unique web image re-ranking framework that learns, both offline and online, the visual and semantic meanings of images with respect to numerous query keywords. These visual and semantic meanings of images are extended to visual
passed to the filtering engine for filtering and cleaning the data sources using suitable keywords prior to storing them in the large data repository. After that, the large data set will be processed and analyzed using algorithms or mathematical calculations to determine the expected dengue cases. Then, the processed information will
, using a pay-as-you-go billing strategy, a Service Level Agreement (SLA)-based dynamic model is designed, in which customer satisfaction serves as the primary factor. Finally, simulations are conducted to show the effectiveness of the proposed model. Keywords-Massive data, cloud computing, service level
Users often fail to find the right keywords to precisely describe their queries in the information-seeking process. Techniques such as user intention prediction and personalized recommendation are designed to help users formulate their queries. In this work, we aim to help users identify their
Construal Median Scrap, Inquiry Localization, and Data Pick-up and Launch. The Construal Median Scrap module is used to select a variety of responses for the receiver. Inquiry Localization is used to extract keywords from the source in question and is widely used in the investigation. Data Pick-up and Launch is used to choose the correct
-based Internet of Things scenarios, a huge potential exists in combining graph-processing functionality with temporal and geospatial information and keywords extracted from high-throughput Twitter streams. Using SAP HANA as the running example, we want to demonstrate what moving a set of individual engines and infra
, extract keywords and the type of the question. The second step is to retrieve relevant pages through web search engines. The last and most important step is answer extraction: all extracted candidate answers are evaluated, and the final answer is the one with the highest score. In addition to the system implementation, we also
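The final answer-extraction step described in this fragment can be sketched as follows. The scoring rule used here (counting how often the question's keywords occur in each candidate's supporting snippet) is an illustrative assumption, not the paper's actual evaluation function.

```python
# Sketch: score each candidate answer by the frequency of the question's
# keywords in its supporting snippet; the highest-scoring candidate wins.
from collections import Counter

def score_candidate(snippet, keywords):
    # Count keyword occurrences in the snippet (case-insensitive).
    words = Counter(snippet.lower().split())
    return sum(words[k] for k in keywords)

def best_answer(candidates, keywords):
    # candidates: list of (answer_text, supporting_snippet) pairs.
    return max(candidates, key=lambda c: score_candidate(c[1], keywords))[0]

keywords = ["capital", "france"]
candidates = [
    ("Lyon", "lyon is a large city in france"),
    ("Paris", "paris is the capital of france and the capital region"),
]
print(best_answer(candidates, keywords))  # Paris
```

A real system would also weight candidates by question type and retrieval rank, which this toy scorer omits.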
keywords, the number of communities, the average clustering coefficient, and the average similarity of web pages. These five impact factors capture both statistical and content information about an event. Empirical experiments on real datasets, including Google Zeitgeist and Google Trends, show that the number of web pages and the
Information retrieval has become a very complex process for search engines on the Web. This is due, on the one hand, to the staggering growth in the number of web sites and, on the other hand, to the fact that the keyword (term) based search algorithms currently in use are not well suited to exploiting this huge quantity of information. These
Cloud Service Provider (CSP) services under standard naming conventions. Our proposed work also covers developing a generic search engine for searching cloud services. The keywords of the query are ordered based on a cloud ontology, and the ranking is based on the service attributes.
novels in online novel services. Comments and bookmarks are contributed by readers, and it is possible to use these data as a resource for social ranking and recommendation. In this paper, we focus on an online novel service and analyze the frequency of keywords, the number of authors, and the links from readers to novels
. 2) Creating rules for combining individuals and atoms of concepts. To overcome these, we propose an approach that extends SWRL by adding new keywords as OWL ontology classes and properties and post-translating them using rewrite meta-rules. These internal and external enrichments of the concepts lead to hybrid
In this paper we present several content-based recommendation methods for a QA system that rely extensively on the structure of a domain-specific taxonomy. Our goal is to add semantics to a typical content-based RS in order to improve the quality of the recommendations by mapping relevant keywords from the
letting students take the lecture, most do not include advanced search methods or implement topic detection or extraction. To aid e-Learning users, we have been developing search facilities for VOD lectures. For this purpose, this study proposes a method of extracting topics by creating a graph of keywords and their
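The keyword-graph idea in this fragment can be sketched as follows. Linking keywords that co-occur in the same lecture segment and taking the highest-degree keyword as the topic is an illustrative assumption; the study's actual graph construction and topic-selection criteria are not given in this snippet.

```python
# Sketch: build a co-occurrence graph over lecture keywords and pick
# the most connected keyword as the segment's central topic.
from collections import defaultdict
from itertools import combinations

def keyword_graph(segments):
    # segments: list of keyword lists, one per lecture segment.
    graph = defaultdict(set)
    for seg in segments:
        for a, b in combinations(set(seg), 2):
            graph[a].add(b)  # undirected co-occurrence edge
            graph[b].add(a)
    return graph

def central_topic(segments):
    # Topic = keyword with the highest degree in the graph.
    graph = keyword_graph(segments)
    return max(graph, key=lambda k: len(graph[k]))

segments = [
    ["sorting", "quicksort", "pivot"],
    ["sorting", "mergesort", "recursion"],
    ["quicksort", "recursion"],
]
print(central_topic(segments))  # sorting
```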
This paper proposes a method that discovers trend rules from complex sequential data. The rules represent relationships among evaluation objects, keywords, and changes in numerical values related to the evaluation objects. The data is composed of numerical sequential data and text sequential data. The method extracts
Service (QoS). The aim of this paper is to utilize resources to improve throughput using an aging technique, so that low-priority jobs are not starved in the grid environment. Various resource allocation strategies exist that provide guidance for grid systems to make resource allocation decisions. Keywords-Grid
service, HTML pages are parsed, stop words are removed, and stemming of keywords is carried out as pre-processing steps; the result is stored in the form of an inverted index. We have evaluated the performance of the proposed design specification of the crawler with indexer and found that the number of pages retrieved is
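The pre-processing pipeline this fragment describes (parse HTML, drop stop words, stem, build an inverted index) can be sketched as follows. The regex-based tag stripping, the tiny stop-word list, and the suffix-stripping "stemmer" are illustrative stand-ins for the crawler's real components.

```python
# Sketch: HTML parsing, stop-word removal, stemming, inverted index.
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "for"}

def stem(word):
    # Toy stemmer: strip a few common English suffixes.
    for suffix in ("ing", "er", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def strip_tags(html):
    # Crude tag removal; a production crawler would use a real HTML parser.
    return re.sub(r"<[^>]+>", " ", html)

def build_inverted_index(pages):
    # pages: {doc_id: html} -> {stemmed_term: sorted list of doc_ids}
    index = defaultdict(set)
    for doc_id, html in pages.items():
        for token in re.findall(r"[a-z]+", strip_tags(html).lower()):
            if token not in STOP_WORDS:
                index[stem(token)].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

pages = {
    1: "<html><body>Crawling and indexing web pages</body></html>",
    2: "<p>The crawler retrieves pages for the indexer</p>",
}
index = build_inverted_index(pages)
print(index["crawl"])  # [1, 2] -- "crawling" and "crawler" share a stem
```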
the document tags is considered as the cluster name. Thus, in short, web search results fetched from the prevailing web search engines are grouped under phrases that contain one or more search keywords. This paper aims at organizing web search results into clusters, facilitating quick browsing for the user
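The grouping this fragment describes (results clustered under shared phrases that contain a search keyword, with the phrase as the cluster name) can be sketched as follows. Restricting phrases to two-word windows and requiring a cluster to cover at least two results are illustrative simplifications, not the paper's method.

```python
# Sketch: cluster search-result titles under shared keyword-bearing phrases.
from collections import defaultdict

def phrase_clusters(titles, keyword):
    clusters = defaultdict(list)
    for title in titles:
        words = title.lower().split()
        for i in range(len(words) - 1):
            # Consider each two-word phrase that includes the keyword.
            if keyword in (words[i], words[i + 1]):
                clusters[f"{words[i]} {words[i + 1]}"].append(title)
    # Keep only phrases shared by at least two results.
    return {p: t for p, t in clusters.items() if len(t) > 1}

titles = [
    "Python tutorial for beginners",
    "Advanced Python tutorial",
    "Python reference manual",
]
clusters = phrase_clusters(titles, "python")
print(clusters)  # {'python tutorial': [first two titles]}
```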
spatial information grid more simply and quickly. Implementation and testing experiments show that situation-driven application generation can create situation applications from keywords or phrases input by users, based on situation processing.