The exponential growth of complex, heterogeneous, dynamic, and unbounded data, generated by a variety of fields including health, genomics, physics, climatology, and social networks, poses significant challenges for data processing and for achieving the desired speed and performance. Existing processor-based, software-only algorithms are incapable of analyzing and processing this enormous amount of data efficiently and effectively...
Problem reports at NASA are similar to bug reports: they capture defects found during testing and post-launch operational anomalies, and they document the investigation and corrective action for each issue. These artifacts are a rich source of lessons learned for NASA, but they are expensive to analyze since problem reports consist primarily of natural-language text. We apply topic modeling to a corpus of NASA...
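Topic-modeling pipelines over report corpora typically begin by reducing each document to a bag-of-words term-count vector. A minimal stdlib sketch of that preprocessing step is below; the report snippets and stopword list are illustrative assumptions, not data from the paper.

```python
import re
from collections import Counter

# Hypothetical problem-report snippets (illustrative only).
reports = [
    "valve stuck open during thermal vacuum test",
    "telemetry dropout after launch vehicle separation",
    "valve leak detected during pre-launch test",
]

# A tiny illustrative stopword list; real pipelines use larger ones.
STOPWORDS = {"the", "a", "after", "during"}

def bag_of_words(text):
    """Lowercase, tokenize, and count terms, dropping stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

counts = [bag_of_words(r) for r in reports]
# Terms shared across reports hint at recurring problem themes.
vocab = sorted(set().union(*counts))
```

The resulting count vectors are what a topic model (e.g. LDA) would then factor into topics.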
The usability of comparable technology in higher education is very important for supporting knowledge management in industry-assisted technology. An Information Technology (IT) platform serving administrators, instructors, and students is necessary to deliver powerful learning content. This process requires an educational lifeloop management approach, which provides features such as...
In this paper, a universal, coarse-grained reconfigurable architecture for hardware acceleration of decision trees (DTs), artificial neural networks (ANNs), and support vector machines (SVMs) is proposed. Using the proposed architecture, two versions of DTs (functional DT and axis-parallel DT), two versions of SVMs (with polynomial and radial kernels), and two versions of ANNs (multilayer perceptron and...
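The axis-parallel DT mentioned above tests a single feature against a threshold at each internal node, which is exactly the property that makes such trees easy to map onto simple comparator hardware. A minimal software sketch of axis-parallel inference follows; the tree structure and thresholds are illustrative, not taken from the paper.

```python
class Node:
    """One node of an axis-parallel decision tree (illustrative sketch)."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # index of the feature tested at this node
        self.threshold = threshold  # axis-parallel split: x[feature] <= threshold
        self.left = left            # subtree taken when the test is true
        self.right = right          # subtree taken when the test is false
        self.label = label          # class label at a leaf

def classify(node, x):
    """Walk from root to leaf, one threshold comparison per level."""
    while node.label is None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

# Example tree: x[0] <= 2.5 ? (x[1] <= 1.0 ? "A" : "B") : "C"
tree = Node(0, 2.5,
            Node(1, 1.0, Node(label="A"), Node(label="B")),
            Node(label="C"))
```

In hardware, each level's comparison can become a fixed comparator, so latency is bounded by tree depth rather than dataset size.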
It is predicted that fifty billion sensor-based devices will be connected to the Internet by 2020 with the rapid development of the Internet of Things (IoT). Mining the data streams produced by this tremendous number of sensor-based devices has become an urgent task. Dynamic time warping (DTW) is a popular similarity measure and a foundation of stream data mining. In the last decade, DTW has been well accelerated with...
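For reference, the textbook DTW recurrence that such accelerators target can be sketched in a few lines: each cell of a dynamic-programming matrix extends the cheapest of three admissible alignment moves.

```python
def dtw(a, b):
    """DTW distance between numeric sequences a and b, with absolute-difference
    local cost -- the classic O(n*m) dynamic-programming formulation."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike Euclidean distance, DTW tolerates local stretching: `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 because the repeated 2 aligns at no cost.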
Data processing and combinatorial search are widely used techniques in the field of information and communication. Examples of practical applications are sorting, finding frequent items, matrix/set covering, graph/map coloring, data mining, priority management, and many others. Often, information/data processing that involves the tasks listed above has to be done in embedded systems, where high...
Change-impact analysis, namely "identifying the potential consequences of a change", is an important and well-studied problem in software evolution. Any change may potentially affect an application's behaviour, performance, and energy-consumption profile. Our previous work demonstrated that changes to the system-call profile of an application correlated with changes to the application's energy-consumption...
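A system-call profile of the kind referred to above is, at its simplest, a map from syscall name to invocation count. The sketch below compares two such profiles to surface what changed between runs; the profile data and the suggestion of collecting it with a tool like `strace -c` are illustrative assumptions, not the paper's tooling.

```python
def profile_delta(before, after):
    """Return syscalls whose invocation counts changed between two runs.

    `before` and `after` map syscall name -> count (e.g. as one might
    collect with `strace -c`; illustrative, not the paper's method).
    """
    delta = {}
    for name in set(before) | set(after):
        diff = after.get(name, 0) - before.get(name, 0)
        if diff != 0:
            delta[name] = diff
    return delta

# Hypothetical profiles from two versions of an application.
old = {"read": 120, "write": 40, "futex": 10}
new = {"read": 120, "write": 95, "futex": 10, "poll": 7}
```

A nonzero delta flags the syscalls (here `write` and `poll`) whose changed usage might correlate with an energy-consumption change.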
Preventing system failure in the cloud has become more important as a result of the prevalence of cloud use for mission-critical applications. As recent studies show, one of the major causes of system failure in clouds is misconfiguration. Hence, it is essential first to detect misconfiguration before it causes an outage or degradation of service. Although the cloud provides flexible and auto-configurable...
Many algorithms in informatics require a set of objects with similar properties to be grouped (clustered) on the basis of some predefined criteria. The proposed technique involves hierarchical merging in which software, responsible for solving the entire problem, is enhanced with highly parallel networks in hardware accelerators. Additional improvements are achieved with the aid of support methods...
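The hierarchical merging described above can be illustrated with the classic single-linkage formulation: repeatedly merge the two closest clusters until the target count is reached. In the accelerated setting, the many pairwise distance evaluations are the part that parallel hardware networks would offload; this sketch runs everything in software and uses 1-D points for brevity.

```python
def single_linkage(points, k):
    """Cluster 1-D points into k groups by repeated nearest-pair merging."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest members
                # of the two clusters. This inner scan is the highly
                # parallelizable part.
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return [sorted(c) for c in clusters]
```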
The paper presents a system for machinery diagnostics developed by the KOMAG Institute of Mining Technology. This system consists of a software part, installed on a PC, and a hardware part, integrated with the diagnosed machine. The idea of the system is early detection of damage to gear components and identification of the damage location, together with prediction of its development, which allows...
While mining software repositories is a field that has grown greatly over the last ten years, large-scale integrated circuit (LSI) design repository mining has yet to reach the momentum of its software counterpart. We believe it represents untapped potential, especially for defect prediction. In an LSI (referred to simply as hardware from here on), verification has a high cost compared to design. After studying existing...
To achieve software quality, it is critical to quickly understand the current test status, its changes over time, and its relation to source-code changes. However, even when this information is available in test logs and code repositories, it is seldom put to good use in supporting decision processes in software development. The amount of information is often large and time-consuming to extract...
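Extracting test status from logs can start very simply: tally verdicts and collect the failing test names so the current state is visible at a glance. The log format below (`<test name>: PASS|FAIL`) is an assumption for illustration, not a format from the paper.

```python
from collections import Counter

def summarize(log_lines):
    """Count PASS/FAIL verdicts and list the failing test names.

    Assumes one verdict per line in the hypothetical form
    "<test name>: PASS" or "<test name>: FAIL".
    """
    verdicts = Counter()
    failures = []
    for line in log_lines:
        name, _, verdict = line.rpartition(": ")
        verdicts[verdict] += 1
        if verdict == "FAIL":
            failures.append(name)
    return verdicts, failures

# Hypothetical log excerpt.
log = ["test_login: PASS", "test_checkout: FAIL", "test_search: PASS"]
```

Running the same summary over successive revisions gives the "changes over time" view the abstract argues is seldom exploited.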
A main trend among countries around the world is to accelerate their development by exploiting the advantages of Information and Communication Technology (ICT), which shapes all social sectors of contemporary society. This research work examines possibilities to distribute an integrated ICT network across all sectors of Mongolia, to develop and implement advanced future technology, and to...
Decision makers must know if their cyber assets are ready to execute critical missions and business processes. Network operators need to know who relies on a failed network asset (e.g. IP address, network service, application) and what critical operations are impacted. This requires a mapping between network assets and the critical operations that depend on them, currently a manual and tedious...
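Once such a mapping exists, the operator's question "who is impacted when asset X fails?" becomes a reverse lookup. A minimal sketch is below; the operation names and asset identifiers are invented for illustration.

```python
# Hypothetical mapping: critical operation -> network assets it depends on.
deps = {
    "payroll":   {"10.0.0.5", "db-service"},
    "email":     {"10.0.0.7", "smtp-service"},
    "reporting": {"10.0.0.5", "web-service"},
}

def impacted_operations(failed_asset, dependencies=deps):
    """Return the critical operations that rely on the failed asset."""
    return sorted(op for op, assets in dependencies.items()
                  if failed_asset in assets)
```

For example, the failure of asset `10.0.0.5` impacts both `payroll` and `reporting` under this illustrative mapping. Building and maintaining `deps` automatically, rather than by hand, is exactly the gap the abstract describes.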
As SoC and general purpose controllers become more complex and more tightly integrated, gaining access to the runtime data required for software verification and debug has become more challenging. This means software engineers are now more reliant than ever upon software instrumentation and the on-chip mechanisms provided to extract this vital information. In this paper the author proposes using on-chip...
This paper describes the technical aspects of the transition to a software product line approach in the automotive domain. One major challenge is the current existence of two different emerging standards for this domain, AUTOSAR and EAST-ADL2. These potential standards should be borne in mind during the software product line introduction because they may someday become mandatory. In addition, the...
Software evolution is the term used to describe the process of developing and updating software systems. Software repositories such as versioning systems and bug tracking systems are used to manage the evolution of software projects. The mining of this information is used to support predictions and improve design and reuse. Integrated circuit development can also benefit from these techniques. Nowadays,...
The following topics are dealt with: computational linguistics; data mining; data warehousing; bioinformatics; distributed computing; information security; ad hoc networks; information management; wireless sensor networks; and digital image processing.
Java is one of the most popular programming languages because of its platform independence, which relies on the Java Virtual Machine (JVM). Compared to executing a class file on a hardware JVM, executing it on a software JVM is much slower. By studying the component architecture and the working principle of a software JVM based on the class file, we explain the implementation...
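The core of a software JVM's execution engine is an interpreter loop: Java bytecode targets a stack machine, so the interpreter dispatches on each opcode and manipulates an operand stack. The toy sketch below illustrates that principle only; its opcodes are loosely modeled on JVM mnemonics, and it omits frames, the constant pool, and everything else a real JVM has.

```python
def interpret(code):
    """Execute a list of (opcode, operand) pairs on an operand stack."""
    stack = []
    for op, arg in code:
        if op == "iconst":          # push an int constant
            stack.append(arg)
        elif op == "iadd":          # pop two ints, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "imul":          # pop two ints, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "ireturn":       # return the top of the stack
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Bytecode-style program computing (2 + 3) * 4.
program = [("iconst", 2), ("iconst", 3), ("iadd", None),
           ("iconst", 4), ("imul", None), ("ireturn", None)]
```

The per-opcode dispatch overhead of loops like this is the main reason a software JVM trails a hardware JVM, and why real JVMs add just-in-time compilation.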
This paper investigates some theoretical aspects of so-called educational data mining, its characteristics, and its performance. It is well known that various course management systems (CMSs) are widely available; they can store a vast amount of information about learners and learning objects. It is also well known that estimating the conformity of learning objects to a learner's personal profile...