framework is based on the popular keyword concept and is seen as especially applicable in online testing, which requires continuous message passing between the generator and the SUT. The architecture is based on plug-ins, similar to the Eclipse integrated development environment, enabling easy extension. We also discuss
Existing search engines generally lack personalization: they display the same search results to different users despite differences in their interests and purposes. To address this problem, this paper introduces a new method of using keyword query series to express the
Search engines are among the most powerful tools on the Web today for data retrieval and exploration. Most search engines identify the keywords in the sentence, phrase, or list of words given by the user and start mining the Web for occurrences of those keywords in Web pages. Quite often searching for the key
Our goal is to use the vast repositories of available open source code to generate specific functions or classes that meet a user's specifications. The key words here are specifications and generate. We let users specify what they are looking for as precisely as possible using keywords, class or method signatures
of web resources. Thus, finding information related to a specific topic or keyword among the vast available web resources opens promising opportunities for information discovery. In this paper we propose a system that finds the links related to a specific keyword and then performs in-site searching to get the
databases are termed Web Databases (WDB). Web databases are frequently employed in the retail industry to search for products online. They can be private to a single retailer or used publicly by a number of retailers. Whenever the user queries these databases using keywords, most of the time the user is diverted
of the relevant news to the readers of certain news items by using keywords generated from both the news and its comments. The challenge lies in how to select keywords that are related to the drifted topics according to the users' preferences. In our work, we have utilized the number of votes received by a reader as an
Abstract: In e-learning platforms, social interaction between students and teachers, as well as among students, is important for effective and efficient learning. Therefore, e-learning platforms need technological support for online presence and communication. In this paper we present the Presence in Learning Spaces (PILS) infrastructure, which provides advanced concepts for mutual presence information...
BBS provides users with a space for free communication and plentiful information resources. However, manually extracting useful information from constantly updated, huge, and unstructured data is very difficult for users. This paper applies Prolog to BBS data mining and builds a housing-information mining system based on Prolog, which extracts structured house-leasing information from the large number...
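The paper's system is written in Prolog and its rules are not reproduced in the snippet. As a rough, language-neutral illustration of the underlying idea (turning free-text leasing posts into structured records), here is a sketch using Python regular expressions; the post formats and the pattern are made up for the example:

```python
import re

# Made-up BBS posts; real posts would be far messier.
posts = [
    "For lease: 2-bedroom flat near campus, 800 yuan/month, call 555-0101",
    "Renting out 1-bedroom apartment, 600 yuan/month, contact 555-0102",
]

# Capture bedroom count, monthly rent, and a contact number.
pattern = re.compile(r"(\d+)-bedroom.*?(\d+) yuan/month.*?(\d{3}-\d{4})")

records = []
for post in posts:
    m = pattern.search(post)
    if m:
        bedrooms, rent, phone = m.groups()
        records.append({"bedrooms": int(bedrooms), "rent": int(rent), "phone": phone})

print(records[0])  # {'bedrooms': 2, 'rent': 800, 'phone': '555-0101'}
```

A rule-based system (whether Prolog clauses or regular expressions) works only as long as posts follow recognizable patterns, which is why the paper emphasizes the unstructured nature of BBS data.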
option, say, limiting the search to a few links. To reduce the time spent by users, a web link extraction tool has been designed and implemented in Java that analyzes ways of extracting web link information using a standard interface. A test scenario is presented with various keywords such as Higher Education
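The snippet does not show the Java tool or its standard interface. A minimal sketch of the core task it describes, extracting link information from a page, can be written with the standard library's HTML parser; the sample page content is invented for the example:

```python
from html.parser import HTMLParser

# Collect href values from anchor tags as the parser walks the page.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="http://example.edu/he">Higher Education</a> <p>plain text</p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['http://example.edu/he']
```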
Web mining is a cutting-edge technology that includes gathering and classifying information over the web. This paper puts forth the concepts of document pre-processing, achieved by extracting keywords from documents fetched from the web, processing them, and generating a term-document
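The snippet cuts off at "term-document", presumably a term-document matrix. A minimal sketch of that pre-processing pipeline, assuming simple whitespace tokenization and a tiny invented corpus (the paper's actual extraction method is not shown):

```python
from collections import Counter

# Hypothetical mini-corpus standing in for documents fetched from the web.
docs = [
    "web mining gathers information over the web",
    "keyword extraction supports web document classification",
]

def extract_keywords(text, stopwords={"the", "over"}):
    # Pre-processing: lowercase, split on whitespace, drop stopwords.
    return [w for w in text.lower().split() if w not in stopwords]

# Vocabulary = all keywords seen across the corpus.
vocab = sorted({w for d in docs for w in extract_keywords(d)})

# Term-document matrix: rows are terms, columns are documents,
# entries are raw term frequencies.
counts = [Counter(extract_keywords(d)) for d in docs]
matrix = [[c[t] for c in counts] for t in vocab]

print(matrix[vocab.index("web")])  # [2, 1]
```

Real systems would add stemming and TF-IDF weighting on top of the raw counts shown here.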
to the purchase page of the product or service being promoted. The paper proposes a method that suggests keywords for a web page based on the frequent terms in the page while including the lexical relationships (synonyms) of these words. An experiment is executed to validate the method, while the method's result
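The idea of combining frequent terms with their synonyms can be sketched as follows; the synonym map is hand-made for the example, since the snippet does not name the lexical resource the paper uses:

```python
from collections import Counter

# Tiny hand-made synonym map standing in for a real lexical resource.
SYNONYMS = {"buy": {"purchase"}, "cheap": {"inexpensive"}}

def suggest_keywords(page_text, top_n=3):
    # Rank terms by raw frequency, then expand each with its synonyms.
    freq = Counter(page_text.lower().split())
    suggestions = []
    for term, _ in freq.most_common(top_n):
        suggestions.append(term)
        suggestions.extend(SYNONYMS.get(term, ()))
    return suggestions

print(suggest_keywords("buy cheap shoes buy shoes online cheap shoes"))
# ['shoes', 'buy', 'purchase', 'cheap', 'inexpensive']
```

Including synonyms lets the method suggest keywords that never appear on the page itself, which matters for matching the varied queries buyers actually type.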
The purpose of this research is to propose an algorithm for translating a class diagram into a relational database. First, the class diagram is converted into Java source code. The algorithm analyses it using the keywords "class", "attribute", and "relation", and then generates the relational database. It shows that
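The snippet only names the keywords the algorithm scans for. A minimal sketch of the scanning step, turning a Java class (as generated from a diagram) into a CREATE TABLE statement, might look like this; the type mapping and regexes are assumptions, not the paper's algorithm:

```python
import re

# Toy Java source of the kind generated from a class diagram.
java_src = """
class Customer {
    int id;
    String name;
}
"""

def to_create_table(src):
    # Map a handful of Java types to SQL types; a full translation
    # would also handle relations between classes.
    type_map = {"int": "INTEGER", "String": "VARCHAR(255)"}
    cls = re.search(r"class\s+(\w+)", src).group(1)
    cols = [
        f"{name} {type_map[jtype]}"
        for jtype, name in re.findall(r"(int|String)\s+(\w+);", src)
    ]
    return f"CREATE TABLE {cls} ({', '.join(cols)});"

print(to_create_table(java_src))
# CREATE TABLE Customer (id INTEGER, name VARCHAR(255));
```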