We present a hierarchical test and repair flow for shared BISR (Built-In Self-Repair) in asynchronous multi-processors. The flow partitions the memories local to a processor into groups and treats each group as a whole when performing the repair. The flow runs automatically with little intervention except at the initial stage. It can be used effectively for practical industrial test and repair. Its test...
The submitted paper describes the construction of modern digital protection-automation equipment using Ethernet technology as the main medium of information exchange, allowing a power-system communication architecture to be built that uses all aspects of the IEC 61850 standard.
Mapping complex mathematical expressions to DSP blocks by relying on synthesis from pipelined code is inefficient and results in significantly reduced throughput. We have developed a tool to demonstrate the benefit of considering the structure and pipeline arrangement of the DSP block when mapping functions. Implementations where the structure of the DSP block is considered during pipelining achieve...
The computational demands on spacecraft are rapidly increasing. Current on-board computing components and architectures cannot keep up with the growing requirements. Only a small selection of space-qualified processors and FPGAs is available, and current architectures adhere to an inflexible cold-redundant structure. The objective of the ongoing project OBC-NG (On-board Computer - Next Generation)...
Massive amounts of genomics data are being produced nowadays by Next Generation Sequencing machines. The suffix array is currently the best choice for indexing genomics data because of its efficiency and large number of applications. In this paper, we address the problem of constructing the suffix array on a computer cluster in the cloud. We present a solution that automates the establishment of a computer...
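To make the indexing structure in the abstract above concrete, here is a minimal sketch (not the paper's distributed method) of a suffix array built with Python's built-in sort over a short genomic string; genomics-scale construction instead uses linear-time algorithms distributed across a cluster:

```python
def suffix_array(text):
    """Return the suffix array: starting indices of all suffixes of
    `text`, listed in lexicographically sorted order."""
    return sorted(range(len(text)), key=lambda i: text[i:])

# Suffixes of "GATTACA" sorted: A, ACA, ATTACA, CA, GATTACA, TACA, TTACA
sa = suffix_array("GATTACA")  # -> [6, 4, 1, 5, 0, 3, 2]
```

Because the suffixes appear in sorted order, any pattern occurring in the text corresponds to a contiguous range of the array, which is what makes binary-search lookups over genomic data efficient.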
Dynamic power management (DPM) is critical to maximizing the performance of systems ranging from multicore processors to datacenters. However, one formidable challenge with DPM schemes is verifying that they remain correct as the number of computational resources scales up. In this paper, we develop a DPM scheme that is scalably verifiable with fully automated formal tools. The key...
Soft errors, caused by radiation, have become a major challenge in today's computer systems and networking equipment, making it imperative that systems be designed to be resilient to errors. Error injection is a powerful approach to evaluate system resilience, and current practice is to inject errors in architectural registers of processors, program variables of applications, or storage elements in...
SECloud is an automated platform that addresses the resource-intensive and labor-intensive nature of high-quality software analysis. SECloud parallelizes symbolic execution in the computing cloud to cope with path explosion. To our knowledge, SECloud is the first binary-analysis platform that scales to large clusters of machines and can automatically test real-world software (e.g., Squirrel, Aeon, Socat,...
This paper presents an analysis of the automation hardware market and the selection and implementation of an optimal device for remote supervision of workshops. The idea of creating a remotely controllable workshop is not new, but the interaction scenarios between machines (CNC machines, industrial robots) and humans have become more and more complex. Nowadays, industrial automation is mainly organized...
Convolution computing plays an important role in scientific computing. However, the traditional Message Passing Interface (MPI) model has disadvantages such as heavy message passing and load imbalance. To address these problems, this paper proposes a new parallel convolution scheme based on the MPI model, which effectively balances the load and greatly reduces message passing....
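As an illustration of the load-balancing idea in the abstract above (a sketch, not the paper's algorithm), a 1-D convolution can be split into near-equal output chunks, one per worker; in an actual MPI program each chunk plus its small input "halo" would be scattered to a rank, which is where the message-passing cost arises. Plain Python stands in for the MPI ranks here:

```python
def conv1d_chunk(signal, kernel, start, stop):
    """Compute output elements [start, stop) of a 'valid' 1-D convolution."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(start, stop)]

def parallel_conv1d(signal, kernel, workers):
    """Split the output range into near-equal chunks, one per worker,
    so no worker gets more than one extra element of work."""
    n_out = len(signal) - len(kernel) + 1
    base, extra = divmod(n_out, workers)
    out, start = [], 0
    for w in range(workers):
        stop = start + base + (1 if w < extra else 0)  # balanced chunk
        out.extend(conv1d_chunk(signal, kernel, start, stop))
        start = stop
    return out
```

With `divmod`, chunk sizes differ by at most one element, which is the essence of static load balancing for a uniform-cost kernel.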
We consider the problem of designing tools for knowledge bases in the Semantic Web environment. The dynamics of software-tool development and design architecture are considered in the context of Semantic Web software technology. A short description is also given of the software platform for operating knowledge bases as projects of virtual reality in the Semantic Web.
This paper describes Akara 2010, the distributed shogi system that has defeated a professional shogi player in a public game for the first time in history. The system employs a novel design to build a high-performance computer shogi player for standard tournament conditions. The design enhances the performance of the entire system by means of distributed computing. To utilize a large number of computers,...
Medical technology verges on incorporating directly into our anatomy processors with the computational power of IBM's famous Watson computer and Internet-like communications. As the size of computers spirals downward, their wholesale use (as well as RFID-type technology) will extend lifetimes, enhance our intellect, and assist in controlling technology outside the body via digital I/O and thought...
Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. For the problem of using network supercomputing to perform a large collection of independent tasks, prior work introduced a decentralized approach and provided randomized synchronous algorithms that perform all tasks correctly with high...
In this paper we present a vision of a sequence of three embedded-systems design courses taught to computer engineering technology students. The focus of these courses is to prepare students who are strong in both digital hardware design and embedded software design. The courses have been evolving from traditional delivery of the content to a highly interactive, hands-on, industry-like experience...
In distributed real-time systems, when resources cannot meet workload demand, some jobs have to be removed from further execution. The decision as to which job to remove directly influences the system's computation efficiency, i.e., the ratio between the computation contributed to successful completions of real-time jobs and the total computation contributed to the execution of jobs that may or may not be completed...
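The efficiency metric described above can be stated as a one-line ratio; the sketch below uses a hypothetical job record (cycles used, completed flag) to make it concrete:

```python
def computation_efficiency(jobs):
    """Ratio of computation spent on jobs that completed successfully
    to total computation spent, including jobs removed mid-execution.
    `jobs` is a list of (cycles_used, completed) tuples (hypothetical
    record format, for illustration only)."""
    total = sum(cycles for cycles, _ in jobs)
    useful = sum(cycles for cycles, done in jobs if done)
    return useful / total if total else 0.0

# Two completed jobs (250 cycles) out of 300 total cycles spent.
computation_efficiency([(100, True), (50, False), (150, True)])
```

Removing a job early rather than late keeps its wasted cycles out of the denominator's "useful-free" share, which is why the removal decision directly moves this ratio.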
Today's computers, as taught in elementary computer architecture courses, are based essentially on the principles suggested by von Neumann in a 1946 report. Since then, only minor improvements have been carried out, although the impressive technical developments seem to hide this fact. In the meantime, a whole software industry has also grown up, and masses of society are using computers, sometimes...
Data interoperability in scientific research is a major challenge. Heterogeneous data file formats, data structure formats, and data storage schemas are prevalent in the scientific research community. This poses a great challenge for collaboration between researchers due to data interoperability issues. In our prior work, we have addressed these challenges by proposing a web-enabled approach for generating...