Building digital library applications often involves a search for an applicable and adequate data or document model, as well as for software tools which meet the requirements. Especially in digital archives, there are several data and document models to be considered. Unfortunately, there is no one-size-fits-all document model or system. For each application, the requirements and properties of a project...
Identifying the parameters of a model such that it best fits an observed set of data points is fundamental to the majority of problems in computer vision. This task is particularly demanding when portions of the data have been corrupted by gross outliers, i.e. measurements that are not explained by the assumed distributions. In this paper we present a novel method that uses the Least Quantile of Squares...
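The Least Quantile of Squares criterion mentioned above minimises a chosen quantile of the squared residuals rather than their sum, which makes it robust to gross outliers. The following is a minimal illustrative sketch of that idea for a 2D line fit, using random two-point candidate fits (a sampling heuristic in the spirit of LMedS/RANSAC, not the paper's exact algorithm; the function name and parameters are hypothetical):

```python
import random

def lqs_line_fit(points, q=0.5, trials=500, seed=0):
    """Robust 2D line fit (y = a*x + b) that scores each candidate line
    by the q-th quantile of its squared residuals, so outliers beyond
    that quantile have no influence on the chosen fit."""
    rng = random.Random(seed)
    n = len(points)
    k = max(0, min(n - 1, int(q * n)))  # index of the q-th quantile residual
    best = None
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical candidate, skip for simplicity
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - (a * x + b)) ** 2 for x, y in points)
        score = res[k]  # q-th quantile of squared residuals
        if best is None or score < best[0]:
            best = (score, a, b)
    return best[1], best[2]  # slope, intercept
```

With q=0.5 this is least *median* of squares: as long as a majority of points lie near the true line, the median squared residual of that line is tiny, so even wildly wrong outliers cannot pull the fit.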
Ranking objects is an essential problem in recommendation systems. Since comparing two objects is the simplest type of query for measuring the relevance of objects, the problem of aggregating pairwise comparisons to obtain a global ranking has been widely studied. To learn a ranking model, a training set of queries together with their correct labels is supplied, and a machine learning...
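As a point of reference for the aggregation problem described above, the simplest baseline is to rank objects by how many pairwise comparisons they win (a Copeland/Borda-style count). This sketch is only that baseline, not the learned ranking model the abstract refers to:

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Aggregate pairwise comparisons, given as (winner, loser) pairs,
    into a global ranking by descending win count."""
    wins = defaultdict(int)
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        items.update((winner, loser))
    return sorted(items, key=lambda item: wins[item], reverse=True)
```

Win counting ignores *who* each object beat; that is exactly the weakness that probabilistic models such as Bradley-Terry, and learned ranking models generally, are designed to address.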
Many primary care clinics have transitioned from paper-based record keeping to computer-based Electronic Medical Record (EMR) systems. This transition provides opportunities for computer-based data analytics in support of practice improvement and more evidence-based clinical research. Unfortunately, the data in primary care EMRs is often not readily accessible to researchers, who often have to overcome...
Web Services Description Language (WSDL) is an XML-based specification for describing the functionalities provided by web services [1], and it is widely adopted in the IT industry. However, in many application scenarios the message types defined in WSDL for interoperation are insufficient for describing complex data (such as geodata) with conventional data types. For example, geodata (such as a digital...
Archaeological maps based on the location of sites are strongly biased by the degree of archaeological recognition and say little about the real pattern of past human activities, especially in areas poorly covered by surveys. Continuous maps and spatial models, independent of the degree of archaeological recognition of the area, can be used as a tool for explaining the patterns of past human activity...
A method of constructing testing impacts for digital devices is proposed. The method is based on representing the device as a set of data exchange interface models. This approach allows building models of input impacts as a collection of standard operations inherent to the corresponding interface.
XACML is a powerful and flexible access control (AC) policy language. It is an OASIS standard that is now widely used in a variety of applications, particularly those that require inter-operability between AC systems. The language definition includes a precise grammar, syntax, and semantics, and it is both expressive and verbose. This combination of expressive power and verbosity can lead to difficulty...
Successful integration and interoperability between applications and software products depend on the intelligent use and management of metadata. This can be accomplished through the CWM standard, which provides a mechanism for exchanging metadata in the data warehousing and business intelligence domain. It is known that the development and maintenance of information systems...
Timed data processing is one of the most important tasks in the development of current database systems. The conventional database approach offers a paradigm for processing currently valid data. However, it is also necessary to store and manage historical values. Moreover, a temporal system should provide a structure for processing future valid data. The basic structure for processing temporal data was...
This paper discusses a virtual lab, which includes the simulation modeling of virtual objects and the construction of their statistical models in the LabVIEW environment. The proposed lab simulates the functioning of a virtual object, formally represented as a postulated stochastic model, using a non-degenerate multivariate normal distribution. Based on the generated statistical data, obtained...
The concept of a data element is presented and illustrated in this paper; perspiration inquiry is taken as an example to show how to construct data elements for Traditional Chinese Medicine diagnosis and treatment information, and a corresponding data element dictionary is put forward.
This work presents a conceptual framework for learning an ontological structure of domain knowledge, which combines Jaccard similarity coefficient with the Infinite Relational Model (IRM) by (Kemp et al. 2006) and its extended model, i.e. the normal-Infinite Relational Model (n-IRM) by (Herlau et al. 2012). The proposed approach is applied to a dataset where legal concepts related to the Japanese...
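The Jaccard similarity coefficient that the framework above combines with the IRM is a standard set-overlap measure; a minimal sketch (the empty-set convention of 1.0 is an assumption, as conventions vary):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient between two sets:
    |A intersect B| / |A union B|, taken as 1.0 when both sets are empty."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

For learning ontological structure, such a coefficient would typically score how much two concepts share (e.g. overlapping attribute or relation sets) before the relational model groups them.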
In this paper, we present a method to attach affinity scores to the implicit labels of individual points in a clustering. The affinity scores capture the confidence level of the cluster that claims to "own" the point. We demonstrate that these scores accurately capture the quality of the label assigned to the point. We also show further applications of these scores to estimate global measures...
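One simple way to realise a per-point confidence of the kind described above, for a centroid-based clustering, is a margin between the nearest and second-nearest centroid. This is an illustrative stand-in, not the paper's actual affinity-score construction:

```python
import math

def affinity_scores(points, centroids):
    """Margin-based confidence per point: 1 - d_nearest / d_second_nearest,
    so a point deep inside its cluster scores near 1 and a point on the
    boundary between two clusters scores near 0. Requires >= 2 centroids."""
    scores = []
    for p in points:
        d = sorted(math.dist(p, c) for c in centroids)
        scores.append(1.0 - d[0] / d[1] if d[1] > 0 else 1.0)
    return scores
```

Aggregating such scores (e.g. their mean) gives exactly the kind of global clustering-quality estimate the abstract alludes to.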
An approach to the modeling and implementation of the CRIS software system for the University of Novi Sad (CRIS UNS) is described in this paper. The CRIS UNS system is implemented with the intention of fulfilling all specific requirements prescribed by the rule books of the University of Novi Sad, the Provincial Secretariat for Science and Technological Development of the Autonomous Province of Vojvodina, and the Ministry of...
Various systems for searching scientific-research results were analyzed. Based on the analysis of these systems, conclusions related to the development of a web application for searching dissertations were reached. The application for searching dissertations was specified and implemented using Web 2.0 technologies to create a user-friendly interface. This application is described in this paper...
This paper uses data from 16 of China's listed commercial banks between 2007 and 2010 as samples to empirically test the relationship between commercial banks' earnings management and cash dividend policy, measuring earnings management by the abnormal loan loss provision and loss-avoidance measures. The study...
POI updates have a direct influence on how up to date the data is, thereby affecting the data value of POIs. Aimed at the problem of rapid and accurate POI updating, an update approach for POIs based on Weibo check-in data is put forward in this paper. Firstly, to address the quality issues of check-in data, a pre-processing approach with spatial registration is proposed. Then, a POI data...
One of the key challenges for users of social media is judging the topical expertise of other users in order to select trustworthy information sources about specific topics and to judge the credibility of content produced by others. In this paper, we explore the usefulness of different types of user-related data for making sense of the topical expertise of Twitter users. Types of user-related data include...
MapReduce, as a popular tool for distributed and scalable processing of voluminous data, has been used in many areas. However, it is not efficient when handling skewed data, since it only considers the key and adopts a uniform hash method to distribute the workload to the reducers, ignoring the key's distribution. This can lead to load imbalance, increase processing time, and generate the "straggler"...
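The load-imbalance mechanism described above is easy to demonstrate: the default hash partitioning sends every record with the same key to one reducer, so a single hot key saturates that reducer. A small simulation sketch (not MapReduce itself, just its partitioning rule):

```python
from collections import Counter

def reducer_loads(keys, n_reducers):
    """Simulate MapReduce's default partitioning: each record is routed
    to reducer hash(key) % n_reducers, so all records sharing a key land
    on the same reducer regardless of how frequent that key is."""
    loads = Counter()
    for k in keys:
        loads[hash(k) % n_reducers] += 1
    return loads

# Skewed input: one hot key accounts for 90% of the records,
# so whichever reducer receives it becomes the straggler.
skewed = ["hot"] * 900 + [f"k{i}" for i in range(100)]
```

Running `reducer_loads(skewed, 4)` always shows one reducer with at least 900 of the 1000 records; skew-aware partitioners work precisely by splitting or specially routing such hot keys.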