The volume of adult content on the World Wide Web is increasing rapidly. This makes automatic detection of adult content an increasingly challenging task when blocking access to unsuitable websites. Most pornographic webpage–filtering systems are based on n-gram, naïve Bayes, K-nearest neighbor, and keyword-matching mechanisms, which do not provide perfect extraction of useful data from unstructured...
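As a hedged illustration of the kind of classifier these filtering systems build on, the sketch below implements a tiny multinomial naïve Bayes text classifier with Laplace smoothing. The training samples and labels are invented placeholders, not data from any real filtering system, and this is not the method of the paper summarized above.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy corpus: (document text, label) pairs. Real systems would
# train on thousands of crawled pages, not four short strings.
TRAIN = [
    ("adult explicit content adult", "block"),
    ("explicit adult video", "block"),
    ("cooking recipe pasta", "allow"),
    ("science news article", "allow"),
]

def train(samples):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Return the label maximizing log prior + smoothed log likelihood."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for w in text.split():
            # Add-one (Laplace) smoothing handles words unseen in training.
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

cc, wc, vocab = train(TRAIN)
print(classify("explicit adult site", cc, wc, vocab))  # → block
print(classify("pasta recipe", cc, wc, vocab))         # → allow
```

The bag-of-words independence assumption is exactly the limitation the abstract alludes to: such models score individual tokens and cannot exploit the structure of unstructured page content.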
The semantic web is evolving, helping users find, share, understand, and combine information in a form that can also be processed by automated tools. With the traditional web, finding the necessary information is a time-consuming and tedious task. Using the semantic web, we obtain information more efficiently and quickly while avoiding unnecessary data. The semantic web helps to make intelligent...
Recent technology for capturing and interpreting the human voice has spawned social robots that convey information and provide recommendations. This technology helps people obtain information about a particular topic by giving an oral query to a humanoid robot. However, most search engines are based on keyword-matching mechanisms, and the existing full-text query search engines are inadequate...
Data analytics is widely used in many industries and organizations to make better business decisions. By applying analytics to structured and unstructured data, enterprises bring great change to their planning and decision making. Sentiment analysis (or opinion mining) plays a significant role in our daily decision-making process. These decisions may range from purchasing a product...
One of the most widely accepted tools for representing information in knowledge management applications is the ontology. Many theories describe ontology development methods, but only as loose instructions. This paper contributes a new method for developing ontologies using a structured model of the software requirements engineering process. We demonstrate that the requirements engineering process can provide...
This paper presents a concise overview of the role of knowledge in image retrieval and of recent advances in image retrieval techniques. We also propose a new image retrieval technique for content-based image retrieval systems which incorporates image mining, an amalgam knowledge management technique, and an advanced concept-based image retrieval technique. The proposed system extracts knowledge by mining large...
Large amounts of data are created and stored in electronic media, and agriculture is no exception. Large volumes of unprocessed text are available on various government and other websites. Despite their volume and availability, these data are underutilized. They should be converted to an effective form so as to facilitate better information dissemination. Ontology is an efficient medium to carry out this...
Data retrieval is a key process for acquiring information as required, and the need for proper information has increased. The most basic tool providing this service is the browser: it traverses data according to the user's query and returns search results for all related information. Hence, finding the required information becomes a time-consuming process. In this paper, the focus is on content...
Mining useful knowledge from data readily available in today's information systems has been a common challenge in recent years, as more and more events are being recorded and there is a need to improve and support many organisational processes in competitive and rapidly changing environments. The work in this paper shows, using a case study of a Learning Process, how data from various process domains...
The domain of the traditional web is gradually evolving with the adoption of newer techniques, including the semantic web. Integration of web content using ontologies in a language-independent manner is a required feature in this process. For better utilization of resources, it is necessary that the ontology, which works as a central knowledge repository, be language independent as well...
Information Extraction (IE), one of the important tasks in text analysis and Natural Language Processing (NLP), involves extracting meaningful pieces of knowledge from unstructured information sources, as unstructured data is computationally opaque. The intent of IE is to produce a knowledge base, i.e. to organize the information so that it is useful to people and to arrange it in a semantic...
Traditional Information Retrieval (IR) methods were initially used for searching and ranking web pages on the Web. These methods were progressively modified to exploit the peculiarities of the Web, including the use of its hyperlinked structure for relevance ranking. These Web IR techniques, however, are also being applied to searching and ranking other forms of text collections which...
Discrimination discovery from data consists of designing data mining methods for the actual discovery of discriminatory situations and practices hidden in large amounts of historical decision records. Approaches based on classification rule mining consider items at a flat concept level, with no exploitation of background knowledge on the hierarchical and inter-relational structure of domains. On...
There is currently a burst of Big Data (BD) processed and stored in huge raw data repositories, commonly called Data Lakes (DL). These BD require new techniques of data integration and schema alignment in order to make the data usable by their consumers and to discover the relationships linking their content. This can be provided by metadata services which discover and describe that content. However,...
Pattern-based methods of IS-A relation extraction rely heavily on so-called Hearst patterns. These are ways of expressing instance enumerations of a class in natural language. While these lexico-syntactic patterns prove quite useful, they may not capture all taxonomical relations expressed in text. Therefore, in this paper we describe a novel method of IS-A relation extraction from patterns, which...
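For readers unfamiliar with Hearst patterns, the sketch below extracts IS-A pairs using one classic pattern, "NP such as NP, NP and NP". This is a minimal illustration of the baseline technique the abstract refers to, not the paper's novel method; the single-word noun-phrase regex is a deliberately crude placeholder.

```python
import re

# Crude noun-phrase placeholder: a single alphabetic token. Real extractors
# use chunkers or parsers to delimit noun phrases.
NP = r"[A-Za-z-]+"
PATTERN = re.compile(rf"({NP}) such as ((?:{NP},? )*(?:and |or )?{NP})")

def extract_isa(sentence):
    """Return (hyponym, hypernym) pairs matched by the 'such as' pattern."""
    pairs = []
    for m in PATTERN.finditer(sentence):
        hypernym = m.group(1)
        # Split the enumeration into individual hyponyms.
        for h in re.split(r",\s*|\s+and\s+|\s+or\s+", m.group(2)):
            h = h.strip()
            if h:
                pairs.append((h, hypernym))
    return pairs

print(extract_isa("We study fruits such as apples, pears and oranges."))
# → [('apples', 'fruits'), ('pears', 'fruits'), ('oranges', 'fruits')]
```

Other Hearst patterns ("X and other Y", "Y including X", "Y, especially X") follow the same shape: a lexical anchor plus an enumeration, each yielding hyponym–hypernym pairs.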
Big data is a broad term with numerous dimensions, most notably: big data characteristics, techniques, software systems, application domains, computing platforms, and big data milieu (industry, government, and academia). In this paper we briefly introduce fundamental big data characteristics and then present seven case studies of big data techniques, systems, applications, and platforms, as seen from...
The Treatise on Invertebrate Paleontology is the most reliable information source for invertebrate paleontology research. Based on this Treatise, an Invertebrate Paleontology Knowledgebase (IPKB) has been built as a digital library to provide these data through a web interface. However, the search functions provided by the old IPKB system are based only on textual information, while some more important...
More and more electric power data are generated with the development of the electric power field. Hyponymy relationship extraction is an important task in building semantic knowledge bases, information retrieval, and other semantic applications. This paper uses sentences matching the "is a" pattern as research subjects and proposes a novel hyponymy relation extraction approach which combines lexical...
Academic orientation plays an increasingly important role in determining students' futures, which makes it necessary to automate it and make it accessible, both in terms of immediate availability throughout the world and in offering the several languages used by the majority of students or persons in need of guidance. These problems can...
To address the problem of unauthorized exposure of sensitive information on classified networks, caused by negligence or even malice during website construction in secret units, this paper proposes a method for discovering and extracting sensitive information in web pages based on Web information. Through a hypertext transfer protocol connection module, this method first realizes the function of acquiring...