The need for parallel task execution has been steadily growing in recent years since manufacturers mainly improve processor performance by scaling the number of installed cores instead of the frequency of processors. To make use of this potential, an essential technique to increase the parallelism of a program is to parallelize loops. However, a main restriction of available tools for automatic loop...
Online searching of books has gained astounding popularity worldwide and has attracted a variety of researchers globally. Searching for books (e.g. on Amazon.com, aNobii, LibraryThing) with the help of social metadata (e.g. tags, reviews) and professional metadata (e.g. ISBN, title, publisher) is gradually becoming a hot topic within information retrieval. In this paper,...
, service discovery and service selection operations for building SBSs based on keyword search. KS3 assists system engineers without detailed knowledge of SOA techniques in searching for component services to build SBSs by typing a few keywords that represent the tasks of the SBSs with quality constraints and optimisation
In many cases keywords from a restricted set of possible keywords have to be assigned to texts. A common way to find the best keywords is to rank terms occurring in the text according to their tf.idf value. This requires a corpus of texts from which document frequencies can be derived. In this paper we show that we
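The tf.idf ranking this abstract refers to can be sketched briefly. The following is a minimal illustration, not the paper's implementation: the toy corpus, whitespace tokenizer, and add-one smoothing are all assumptions made for the example.

```python
import math
from collections import Counter

def tfidf_rank(text, corpus, top_n=5):
    """Rank the terms of `text` by tf.idf, estimating document
    frequencies from `corpus` (a list of document strings)."""
    tf = Counter(text.lower().split())
    n_docs = len(corpus)
    corpus_tokens = [set(doc.lower().split()) for doc in corpus]

    def idf(term):
        # Add-one smoothing so terms unseen in the corpus get a
        # finite (and maximal) idf instead of a division by zero.
        df = sum(1 for doc in corpus_tokens if term in doc)
        return math.log((1 + n_docs) / (1 + df)) + 1

    scores = {term: count * idf(term) for term, count in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "the bird flew over the house",
]
# Rare terms outrank common ones such as "the".
print(tfidf_rank("the quantum cat paradox", corpus, top_n=2))
# → ['quantum', 'paradox']
```

This is why a reference corpus is needed: without document frequencies, every term with the same in-text count would score identically.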
This paper proposes a systematic full-text search on documents using combined keyword and structural similarity of the documents under consideration. The approach operates in two steps. The first step uses a set of designated keywords to retrieve potentially relevant documents by means of an open-source tool. The second step
In order to improve the reusability of software testing automation procedures, and at the same time support program testing for Linux, a keyword-driven distributed test automation framework (LKDT) is proposed in this paper, which is based on an analysis of recent automated testing frameworks and combined with the
reuse repository for software under development. In this paper we have discussed various search techniques for efficient retrieval of components from a reuse repository. The paper highlights the concept of the keyword-based search technique in a lucid manner by exemplifying the working procedure of the technique via its
Factory Acceptance Testing should involve the customer's experts and knowledge in defining, reading and validating tests, while keeping labor costs at a moderate level. This involvement requires a testing approach that hides implementation details and emphasizes domain terminology. Keyword-driven testing is seen as a viable
Real-time keywords potentially have positive effects when they are provided in cross-cultural communication. Previous work investigated real-time keywords generated by a speaker while talking and found that they contribute to building mutual understanding and knowledge. However, the use of keywords was not
in the emergent ocean of information. The upcoming demand for data storage in petabytes and exabytes has also put pressure on organizing the file structure in such a way that the results of searching for a keyword keep pace with the growing volume of stored data. As a result, there is an
Scientific documents are unstructured data consisting of natural language, and they are hard for scientists to read and manage. Keywords are very helpful for scientists to search for related documents and learn about their contents promptly. In this paper we investigate a kind of data preprocessing technique used in SVM
Due to the huge number of research articles in the biomedical domain, it becomes more and more important to develop methods to find relevant articles of our specific research interests. Keyword extraction is a useful method to find important topics from documents and summarize their major information. Unfortunately
repository and the associated search techniques, viz. keyword-based search and signature matching, in a reasoned manner by exemplifying the working methodology of these techniques via their automated implementation. To this end a new tool named ARE (Automated Repository Exploration) ver. 1.0.0 has been developed. Readers
needs. In this paper, we present the design, architecture and implementation of an open-source keyword-based paradigm for the search of software resources in Grid infrastructures, called Minersoft. A key goal of Minersoft is to annotate automatically all the software resources with keyword-rich metadata. Using advanced
classic statistical method for sentence alignment, we propose an improved approach to align the initial bilingual resources, in which two factors, bilingual keyword pairs and matching patterns, are introduced. Experimental results show that our sentence aligner supported by the new approach achieves performance enhancement by
With the development of the Internet, more and more online information has become precious wealth that we can access. High-quality information is often stored in dedicated digital libraries. However, the query systems of most digital libraries, based on keyword matching, cannot satisfy users. This paper presents
be easily extracted, building respective data banks. Keywords are important terms, sometimes called index terms, that contain some kind of valuable information about the document. Automatic keyword extraction is the task of identifying a small set of words that can be designated as keywords for that document, and
We propose a discovery approach to find web service composition flows sorted by similarity. The approach extracts information from BPEL files. When creating a new web service composition, the discovery result can be reused directly or serve as a reference. We introduce lexical semantics into keyword matching. By analysis
precision ratio and novelty ratio than that of web search engines. Based on case studies, we found that there are four main types of query suggestion within digital library environments, namely spelling suggestion, hot keyword suggestion, personalized suggestion and semantic suggestion. These approaches are, however, hardly to
interest areas coinciding with the related book categories. This paper suggests that bloggers' interests can be identified by extracting keywords from blog entry titles and using book classification schemes. Because there were instances in which the keywords alone did not provide adequate information, the Naver (Korean