The following topics were dealt with: defect and fault tolerance; dependability analysis and evaluation; reliability; error detection and correction; testing.
Summary form only given. This paper will focus on the key trends driving these changes to test and the emerging methods that are enabling test to be a "value add" operation. Industry examples will be shown where test has provided unique insight to IC process performance and defect behavior using these methods.
As technology enters the nanometer regime, manufacturing processes become increasingly unreliable, drastically impacting yield. A possible future solution to alleviate this problem is the use of fault-tolerant architectures to tolerate manufacturing defects. In this paper, we use the classical triple modular redundancy (TMR) fault tolerant architecture as a case study. Firstly...
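The TMR scheme mentioned above can be sketched in a few lines: three replicas of a module compute the same function, and a bit-wise majority voter masks any single faulty replica. The function names and the injected defect below are illustrative assumptions, not the paper's implementation.

```python
def tmr(module_a, module_b, module_c, x):
    """Run three replicas on the same input and vote bit-wise on the outputs."""
    a, b, c = module_a(x), module_b(x), module_c(x)
    # Bit-wise majority: an output bit is 1 iff at least two replicas agree on 1,
    # so a single defective replica cannot corrupt the result.
    return (a & b) | (a & c) | (b & c)

# Hypothetical example: replica B has a stuck-at-1 defect on bit 2,
# but the voter masks it and the TMR output stays correct.
healthy = lambda x: x * 2
faulty = lambda x: (x * 2) | 0b100
print(tmr(healthy, faulty, healthy, 5))  # → 10
```

Note that TMR only masks a single faulty replica per vote; two coincident faults defeat the majority, which is why defect rates drive how finely the redundancy must be applied.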
Designing a nanoscale memory system with defect rates as high as 10% poses a significant challenge. Redundancies at various levels have been employed to tolerate such high defect rates. Multiple crossbar modules that share the same address space can be used to build a simple and robust memory architecture that overcomes defects in the crossbar. In this paper, we present a module grouping scheme for...
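The redundancy idea described above can be illustrated with a minimal sketch: several modules map the same address space, and an access falls through to the first module that is defect-free at that location. The defect-map representation and the fall-through policy here are assumptions for illustration, not the paper's grouping scheme.

```python
class RedundantMemory:
    """Toy model: N modules share one address space; a cell is usable if at
    least one module is defect-free at that address."""

    def __init__(self, defect_maps):
        # defect_maps[m] is the set of defective addresses in module m
        self.defect_maps = defect_maps
        self.stores = [dict() for _ in defect_maps]

    def write(self, addr, value):
        for m, defects in enumerate(self.defect_maps):
            if addr not in defects:
                self.stores[m][addr] = value
                return True
        return False  # address is defective in every module

    def read(self, addr):
        for m, defects in enumerate(self.defect_maps):
            if addr not in defects and addr in self.stores[m]:
                return self.stores[m][addr]
        return None

# Address 0 is defective in module 0 but healthy in module 1,
# so the write lands in module 1 and the read recovers it.
mem = RedundantMemory([{0}, set()])
mem.write(0, 7)
print(mem.read(0))  # → 7
```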
Critical applications increasingly rely on electronic components to provide the services they are designed for. The obsolescence of such electronic components has already been recognized as a critical issue that must be properly addressed. Among the different possibilities, FPGA-based emulation of obsolete digital components seems particularly interesting. This paper proposes an automatic...
Process, voltage, and temperature (PVT) variations are difficult to manage in multi-core SoCs, as each core may have different voltage and reliability requirements. Indeed, common implementations of variation-tolerant techniques (e.g. dynamic voltage and frequency scaling) are ineffective in multi-core SoCs because of the large overheads they incur. In this work, we propose a simple low-power safety-mode...
Integrated circuits (ICs) targeting tomorrow's streaming applications are a fast-growing market. Applications such as beamforming require massive computing capability on a single chip as well as the flexibility to adapt to new algorithms. A reconfigurable IC with many processing tiles based on a Network-on-Chip architecture is considered ideal for such applications, as it balances efficiency...
This paper presents a network-based fault model for dependability assessment of distributed applications built over networked embedded systems. This fault model represents global failures in terms of wrong behavior of packet-based asynchronous data transmissions. Packets are subject to different faults, i.e., drop, cut, bit errors, and duplication; these events can model either HW/SW failures of the...
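The four packet-level fault types named above (drop, cut, bit errors, duplication) lend themselves to a simple fault-injection sketch. The probabilities, seeding, and truncation point below are illustrative assumptions, not the paper's fault model parameters.

```python
import random

def inject_faults(packets, p_drop=0.1, p_cut=0.1, p_bit=0.1, p_dup=0.1, rng=None):
    """Apply at most one fault per packet, chosen by disjoint probability bands."""
    rng = rng or random.Random(0)  # seeded for reproducible campaigns
    out = []
    for pkt in packets:
        r = rng.random()
        if r < p_drop:
            continue  # drop: the packet never arrives
        elif r < p_drop + p_cut:
            out.append(pkt[: len(pkt) // 2])  # cut: payload truncated
        elif r < p_drop + p_cut + p_bit:
            i = rng.randrange(len(pkt))  # bit error: flip one bit
            out.append(pkt[:i] + bytes([pkt[i] ^ 0x01]) + pkt[i + 1:])
        elif r < p_drop + p_cut + p_bit + p_dup:
            out.extend([pkt, pkt])  # duplication: packet delivered twice
        else:
            out.append(pkt)  # delivered intact
    return out
```

Driving an application under such an injector, and comparing its observed behavior against the fault-free run, is one common way to assess dependability at the distributed-application level.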
Aggressive technology down-scaling increases the vulnerability of microprocessors to runtime errors, in particular radiation-induced soft errors. In this paper, we present a technique based on Boolean satisfiability (SAT) to obtain reliability parameters of microprocessors in the presence of soft errors. We use a metric called microprocessor vulnerability factor (MVF) which captures the soft error...
Nanoelectronic systems are extremely likely to demonstrate high defect and fault rates. As a result, defect and/or fault tolerance may be necessary at several levels throughout the system. Methods for improving defect tolerance, in order to prevent faults, at the component level for QCA have been studied. However, methods and results considering fault tolerance in QCA have received less attention...
This talk will summarize our design for reliability initiatives that anticipate the paradigm shift to error-aware and error-tolerant design of integrated circuits, both of which are required to address the problem of increasing hardware failures in future technology nodes. These concerns are only exacerbated as we look forward to emerging technology alternatives. Using graphene as an example, I will...
This paper addresses a new threat to the security of integrated circuits (ICs). The migration of IC fabrication to untrusted foundries has made ICs vulnerable to malicious alterations that could, under specific conditions, result in functional changes and/or catastrophic failure of the system in which they are embedded. Such malicious alterations and inclusions are referred to as Hardware Trojans...
VLSI circuits in nanometer technology experience significant aging effects, which manifest as performance degradation over operation time. Although this degradation can be compensated by over-design, doing so incurs a considerable power overhead, which is undesirable in tightly power-constrained designs. Dynamic voltage scaling (DVS) is a more power-efficient approach. However, its coarse granularity...