In order to support misspelled keywords, many techniques have been proposed, including edit distance, wild-cards, and n-grams. The n-gram index has the advantages of being language-neutral and error-tolerant. However, it suffers from a large index size and lower performance. In this paper, we propose a novel technique to search fuzzy
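The error-tolerant behavior of an n-gram index can be illustrated with a minimal sketch (illustrative code only, not the technique proposed in the paper; the function names are hypothetical): each word is decomposed into overlapping character n-grams, so a misspelled query still shares most of its grams with the correct term.

```python
def ngrams(word, n=2):
    """Split a word into overlapping character n-grams, padded at the edges."""
    padded = f"#{word.lower()}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def ngram_similarity(a, b, n=2):
    """Dice coefficient over n-gram sets: tolerant of single-character errors."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def fuzzy_lookup(query, index_terms, threshold=0.4):
    """Return indexed terms whose n-gram overlap with the query exceeds threshold."""
    return sorted(
        (t for t in index_terms if ngram_similarity(query, t) >= threshold),
        key=lambda t: -ngram_similarity(query, t),
    )
```

For example, `fuzzy_lookup("recieve", ["receive", "banana"])` still retrieves `"receive"` even though the query transposes two letters, which a plain inverted index would miss.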
Automatic Document Summarization (ADS) systems are suitable for the task of outlining useful data. An ADS system takes a text document as input and outputs a semantically relevant summary of its information. This information can be further separated and outlined as keywords, or keyphrases. This paper proposes a novel
By borrowing ideas from a cryptographic algorithm with a low degree of key authentication, a novel steganographic method based on keyword shifting is presented. The core idea of the method is to shift the sensitive keywords in the text. The conditions that guarantee the reversibility of the method are analyzed and established, the
in the emergent ocean of information. The upcoming demand for data storage in petabytes and exabytes has also put pressure on organizing the file structure so that keyword-search retrieval keeps pace with the growing volume of stored data. As a result, there is an
problem, and the first one is always neglected. In this paper, we propose a novel XML keyword query algorithm. In the first phase of the algorithm, users can select the suitable context meanings of the keyword-matching nodes to match their query intentions, and the eligible keyword-matching nodes will be found in the second
Search engines have become the main way for people to find the information they expect, and most of them are based on keyword search. However, keyword search computes the similarity of the letters of the keywords rather than their semantic meaning, so the search results often include information irrelevant to the user
Results diversification for keyword search on XML documents has attracted considerable attention from the research community in recent years. Although existing methods diversify search results from different perspectives, the results are still far from satisfactory. This paper proposes a new way to
In this paper, a new model (MAK-Chord) is presented which extends Chord. It generates a fingerprint for each resource that includes all the attribute-keyword information and takes the difference in query frequency into account. It defines two different mappings between resources and nodes and effectively supports
The main purpose of analyzing social network data is to observe the behaviors and trends that people follow: how people interact with each other, what they usually share, and what their interests are on social networks, so that analysts can identify new trends for the provision of those things which are of great
data, i.e. location, and also conditions on the text attached to locations. Instead of asking for the restaurant nearest to the current location, these new forms of queries ask for the nearest restaurant that also carries certain attached keywords, such as famous for brandy, or other menu items as keywords attached with
In order to overcome the shortcoming of incomprehensive summarization, this paper proposes a new lexical-chain-based keyword extraction and automatic summarization algorithm for Chinese texts, based on unknown-word recognition using the co-occurrence of neighboring words, together with an algorithm for constructing
which shows the concept and content of the original text. In this article, a new approach is presented with the aim of extracting keywords, with attention to compound words, and extracting key sentences in Persian documents so as to classify them efficiently. Studies performed on several Persian documents, and comparisons
suggest ways to build and update, automatically, an ontology related to the keywords that users enter into the search engine, so that ontology generation is not limited to a specific domain. The input keyword and its related keywords become OWL, and the relations among the created OWL are expressed by
possible sub-graphs satisfying the query). Most existing techniques for evaluating keyword queries over graphs run on a centralized computer. We propose a new approach, SOverlapping, which evaluates keyword queries over graphs on the MapReduce framework by using probabilistic theory to partition the graphs. The new approach has shown
Complex network theory is widely used in the field of keyword extraction. By analyzing the shortcomings of keyword extraction algorithms based on traditional complex networks, this paper proposes a new method for extracting Chinese keywords based on a semantically weighted network. On the basis of the K-nearest-neighbor
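A minimal sketch of one common flavor of network-based keyword extraction (co-occurrence edges within a sliding window, then weighted-degree ranking) can make the idea concrete; this is illustrative code with hypothetical function names, not the paper's semantically weighted, K-nearest-neighbor construction.

```python
from collections import defaultdict

def build_cooccurrence_network(tokens, window=3):
    """Edge weight = number of times two words co-occur inside a sliding window
    spanning `window` consecutive tokens."""
    weights = defaultdict(int)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                edge = tuple(sorted((tokens[i], tokens[j])))
                weights[edge] += 1
    return weights

def rank_keywords(tokens, window=3, top_k=5):
    """Score each word by its weighted degree in the network and keep the top-k."""
    weights = build_cooccurrence_network(tokens, window)
    degree = defaultdict(int)
    for (a, b), w in weights.items():
        degree[a] += w
        degree[b] += w
    return [w for w, _ in sorted(degree.items(), key=lambda kv: -kv[1])[:top_k]]
```

Words that recur and co-occur with many neighbors accumulate a high weighted degree and surface as keyword candidates; richer methods replace the raw counts with semantic edge weights, which is the direction this abstract describes.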
propose new approaches to compute structural statistics for keyword queries, perform extensive performance studies using two large real datasets and a large synthetic dataset, and confirm the effectiveness and efficiency of our approach.
Matching search technology based on query keywords has been widely used in traditional search. It still amounts to pure keyword matching and cannot deliver satisfactory search results. The essential reason is that traditional Web search lacks a semantic understanding of users' search behavior. In this study, we
minimization and pattern recognition of segment slopes for multiplier-less error compensation. A novel, near-zero-average-error quadratic compensation scheme is also presented. A test chip was fabricated in a commercial 130 nm technology, including fifteen base-two logarithm approximations, each of which was evaluated for its standalone
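The general idea behind hardware base-two logarithm approximation with quadratic error compensation can be sketched with Mitchell's classic method (a textbook baseline, not the chip's scheme; the correction constant 0.3444 is a standard curve-fit, and the function name is hypothetical): for x = 2^k(1+f), log2(x) ≈ k + f, and a quadratic term in f shrinks the residual error.

```python
import math

def mitchell_log2(x, compensate=True):
    """Approximate log2(x) for x > 0 via Mitchell's method:
    x = 2**k * (1 + f)  =>  log2(x) ~= k + f, optionally with a
    quadratic correction that cuts the peak error from ~0.086 to below ~0.01."""
    if x <= 0:
        raise ValueError("x must be positive")
    k = math.floor(math.log2(x))   # in hardware: position of the leading one bit
    f = x / (2 ** k) - 1           # mantissa fraction in [0, 1)
    approx = k + f
    if compensate:
        approx += 0.3444 * f * (1 - f)   # quadratic error-compensation term
    return approx
```

The correction term 0.3444·f·(1−f) is attractive in hardware because it is symmetric, vanishes at f = 0 and f = 1 (where the base approximation is exact), and can be built from shifts and adds.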
In this paper, we propose a novel expansion-term selection model in which the Google similarity distance is adopted to estimate the relevance between the query and candidate expansion terms. In previous methods, expansion terms are usually selected by counting term co-occurrences in the documents. However, term co
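The Google similarity distance (normalized Google distance, NGD) mentioned above is computed from search-engine page-hit counts; a minimal sketch of the standard formula follows (illustrative code with hypothetical parameter names; the hit counts would come from a real search API).

```python
import math

def ngd(fx, fy, fxy, N):
    """Normalized Google distance from page-hit counts.
    fx, fy: hits for each term alone; fxy: hits for both together;
    N: total number of indexed pages. 0 means the terms always co-occur;
    larger values mean the terms are less related."""
    if fxy == 0:
        return float("inf")
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))
```

Unlike in-collection co-occurrence counts, the hit counts reflect usage across the whole indexed Web, which is why such a distance can judge the relevance of candidate expansion terms that rarely co-occur with the query inside the local document set.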
The inability of present search engines to map the retrieved result set using the semantics of the query keywords has been discussed. The present study suggests a framework to improve the mapping of the Concept and Context of the query keywords and thereby remove noise from the query. This ensures more relevant and