This paper describes in detail the workings of a prototype parallel processor for high-performance computing of large-scale identification (classification) tasks. The use of such processors will provide the required computing power with a guaranteed quality of decision making. Program settings and architecture cascading, as well as knowledge of the dependence of performance of...
This paper describes an Interactive Programming Assistance tool (iPAT) designed to assist students in solving introductory programming problems and to help instructors conduct programming lab sessions effectively. In a large computer lab setting with over 30 students, communication between the students and the lab instructors can be very limited. To address this problem, iPAT was developed...
Summary form only given. Since the first mobile computer, power efficiency has been a key measure of success. As the need for performance ever increases, the energy cost of performance has become a metric that matters well beyond just the battery life of mobile devices. Energy efficiency is now the driver in most consumer products, determines the compute density of a server, and has become the primary limit in the delivery of high performance...
Parallel computing operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently to save time (wall-clock time) by taking advantage of non-local resources and overcoming memory constraints. The main aim is to form a cluster-oriented parallel computing architecture for MPI-based applications which demonstrates the performance gains and losses...
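The divide-and-combine principle described above can be sketched in a few lines. This is a minimal illustration using Python's standard `multiprocessing` module on a single machine, not the MPI cluster architecture the abstract studies; the workload (a sum of squares) is a hypothetical stand-in for a large problem.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves one sub-problem independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the large problem into smaller, independent chunks...
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # ...then combine the concurrently computed partial results.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

In an MPI setting the chunking and the final reduction would correspond to a scatter followed by a reduce across cluster nodes.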
Access control over large-scale distributed systems like Cloud computing is one of the most debated topics of computer security. Despite the common use and popularity of the Cloud computing paradigm, significant risks and challenges are inherent to this new concept, especially when we talk about storing sensitive data over an insecure network. In this paper we look at the problem of protecting...
Recent developments in computational sciences, involving both hardware and software, allow reflection on the way that computers of the future will be assembled and software for them written. In this contribution we combine recent results concerning possible designs of future processors, ways they will be combined to build scalable (super)computers, and generalized matrix multiplication. As a result...
Today's prevalent solutions for modern embedded systems and general computing employ many processing units connected by an on-chip network, leaving behind complex super-scalar architectures. In this paper, we couple the concept of distributed computing with parallel applications and present a workload-aware distributed run-time framework for malleable applications on many-core platforms. The presented...
With rapid advancement in information technology, business applications and their data storage are distributed in nature. Due to this distributed nature of transaction databases, distributed association rule mining plays an important role in discovering interesting association and/or correlation relationships among large sets of data items. Current research on distributed association rule mining has focused...
The ability to extract frequent pairs from a set of baskets (or frequent word co-occurrences from a set of documents) is one of the fundamental building blocks of data mining. When the number of items in a given basket is relatively small, the problem is trivial. Even when dealing with millions of baskets it is still trivial, provided that the number of unique items in the basket set is small. The...
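The "trivial" small-basket case the abstract mentions can be made concrete with a direct pair-counting sketch. This is a generic illustration of frequent-pair extraction, not the algorithm of the paper; the basket data and the `min_support` threshold are invented for the example.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(baskets, min_support):
    # Count every unordered item pair across all baskets.
    counts = Counter()
    for basket in baskets:
        # sorted(set(...)) canonicalizes pairs and ignores duplicates.
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    # Keep only pairs that meet the support threshold.
    return {pair: c for pair, c in counts.items() if c >= min_support}

baskets = [["bread", "milk"], ["bread", "milk", "eggs"], ["milk", "eggs"]]
print(frequent_pairs(baskets, min_support=2))
```

The quadratic blow-up in pairs per basket is exactly why this direct approach stops being trivial once baskets, or the item universe, grow large.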
We discuss in this paper the emerging need for an operation support system to support fast, reconfigurable, time-shared testbeds. We articulate the needs for building an operation support system for such testbeds in order to provide better utilization of testbed resources, enable testers to closely examine and analyze tests, streamline the process of test setup and execution, as well as enhance the...
SoCs designed for embedded systems are now widely used in embedded multimedia devices. Processors in these devices may need capabilities to support some compound computations, the most important of which is the multiply-and-accumulate (MAC) operation. Thus DSP processors or special ALUs are designed to accelerate these computations. Besides the computation issue, how to improve the reliability, stability...
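To show why MAC is worth hardware acceleration, here is a minimal sketch of the operation and of a typical workload built from it. This is illustrative only; the function names are invented and the hardware designs the abstract refers to are not reproduced here.

```python
def mac(acc, a, b):
    # Multiply-and-accumulate: the compound step acc + a * b,
    # which DSP hardware fuses into a single operation.
    return acc + a * b

def dot_product(xs, ys):
    # A dot product (the core of filters, DCTs, and other
    # multimedia kernels) is just a chain of MAC operations.
    acc = 0
    for a, b in zip(xs, ys):
        acc = mac(acc, a, b)
    return acc

print(dot_product([1, 2, 3], [4, 5, 6]))  # → 32
```

In software each MAC costs a multiply plus an add; a dedicated MAC unit performs both in one cycle, which is the acceleration the abstract alludes to.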
Cytogenetic biodosimetry is the definitive test for assessing exposure to ionizing radiation. It involves manual assessment of the frequency of dicentric chromosomes (DCs) on a microscope slide, which potentially contains hundreds of metaphase cells. We developed an algorithm that can automatically and accurately locate centromeres in DAPI-stained metaphase chromosomes and that will detect DCs. In...
We present a multicore-enabled smart storage for clusters in general and MapReduce clusters in particular. The goal of this research is to improve performance of data-intensive parallel applications on clusters by offloading data processing to multicore processors in storage nodes. Compared with traditional storage devices, next-generation disks will have computing capability to reduce computational...
With the popularity and development of heterogeneous computing, proper communication performance measurement tools are needed to explore new communication patterns in heterogeneous computing systems and to optimize program performance. This paper proposes a hardware-based communication performance measurement tool, named HCPM, which has little impact on the original program and can collect...
We consider a notion of computer capacity as a novel approach to evaluation of computer performance. Computer capacity is based on the number of different tasks that can be executed in a given time. This characteristic does not depend on any particular task and is determined only by the computer architecture. It can be easily computed at the design stage and used for optimizing architectural decisions.
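As a sketch of the idea (the notation here is assumed for illustration, not taken from the abstract): if $N(T)$ denotes the number of distinct instruction sequences the machine can execute in time $T$, then a capacity of this kind can be expressed as the growth rate of that count,

```latex
C \;=\; \lim_{T \to \infty} \frac{\log N(T)}{T},
```

so that $C$ depends only on the instruction set and their execution times, i.e., on the architecture, which is why it can be evaluated at the design stage without reference to any particular workload.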
Data organization for matrices and arrays in memory was extensively studied from the early '70s until the mid '90s, the golden age of vector computers. But this old SIMD model seems more topical than ever, as shown by the use of GPUs in high-performance computers and by the architecture of the NEC SX-9. Such memory organization should therefore be considered again in order to efficiently access data...
The continued increase in microprocessor clock frequency that has come from advancements in fabrication technology and reductions in feature size creates challenges in maintaining both manufacturing yield rates and the long-term reliability of devices. Methods based on defect detection and reduction may not offer a scalable solution due to the cost of eliminating contaminants in the manufacturing process...
We propose the application of a method based on the notion of computer capacity for evaluating the performance of computer systems. Computer capacity is based on the number of different tasks that can be executed in a given time. This characteristic is determined only by the computer architecture, and it can be easily computed at the design stage and used for optimizing architectural decisions.
Cyber-Physical Systems (CPSs) consist of, as well as interact with, cyber and physical elements. This creates multiple vectors for CPS-internal (i.e., within the CPS) as well as CPS-external (i.e., between the CPS itself and its environment) Cyber-Physical Attacks. We argue that an effective Cyber-Physical Defense can only be elaborated if possible attacks on CPSs can be identified and assessed in a systematic...
Multiplication is one of the most studied implementations in computing. Some architectures implement it as a single operation, while some others implement it as a combination of other operations. A typical implementation is the repeated addition method, where the operands are repeatedly added to get the result. Here, we try to modify this implementation by using the barrel shifter. The barrel shifter...
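The shift-based alternative to repeated addition can be sketched as follows. This is the generic shift-and-add scheme that a barrel shifter accelerates, written out in software for illustration; it is not the specific hardware design of the paper.

```python
def shift_add_multiply(a, b):
    # Instead of adding `a` to itself `b` times (repeated addition),
    # add a shifted copy of `a` for each set bit of `b`.
    # In hardware, a barrel shifter performs each shift in one step.
    result = 0
    while b:
        if b & 1:           # current multiplier bit is set
            result += a
        a <<= 1             # shift the multiplicand left by one
        b >>= 1             # move to the next multiplier bit
    return result

print(shift_add_multiply(13, 11))  # → 143
```

Repeated addition needs on the order of `b` additions, whereas this loop runs once per bit of `b`, which is the speedup that motivates using the shifter.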