In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our approach addresses three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints...
The article aims to offer new approaches to routing in networks with the DTN architecture, allowing the best routes to be determined depending on various conditions. To achieve this goal, it is necessary to analyze the existing work in this field and then identify its advantages and disadvantages. An attempt to realize advantages with a minimum...
Cooperative communication approaches are being increasingly used to improve the reliability of Wireless Sensor Networks (WSNs) communication, allowing better spatial and temporal diversity. For the success of these techniques, a proper selection of the relay nodes is a crucial task. This paper proposes a new technique, named Smart, for the selection of cooperating WSN nodes according to criteria considered...
A novel approach to a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without...
State-of-the-art LTE Turbo-Code decoder architectures support throughputs of several Gbps by employing parallelism on different architectural levels. However, very high flexibility with respect to code block sizes and code rates must also be retained. Sophisticated techniques are used to maintain communications performance at high code rates, which are the critical case. In this paper, we propose new techniques...
In this paper, we study the performance of two cross-layer optimized dynamic routing techniques for radio interference mitigation across multiple coexisting wireless body area networks (BANs), based on real-life measurements. At the network layer, the best route is selected according to channel state information from the physical layer, associated with low duty cycle TDMA at the MAC layer. The routing...
In this paper, we design a game theoretical framework for improving the Quality of Service (QoS) in cooperative RAN caching. Considering the cooperation under both single cell transmission and joint transmission, the QoS metric is uniformly quantified as the total content delivery time. Although the formulated cooperative content placement problem is proved NP-hard, noticing the local cooperative...
Flexible substrates have been widely used for the fabrication of flexible or wearable electronics. The large deformation of flexible material in wearable electronics can reduce the reliability or cause premature failure of the electronics interconnects. Rapid adoption of flexible electronics for high reliability applications requires the development of methods for in-situ non-destructive measurement...
Multiple routing metrics have been proposed for maximizing throughput or guaranteeing reliable delivery of the data. Some of the most commonly used metrics neglect the bursty behavior of the channel. A newer metric improves on this by taking burstiness into account: it builds a burst estimate for all links and allocates the maximum number of required slots for the worst case. Even though this is a solution...
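The burst-estimation idea can be illustrated with a minimal sketch (not the paper's actual metric; the function names and the one-slot-per-retry policy are assumptions for illustration): given a binary delivery trace for a link, the longest loss burst determines the worst-case number of transmission slots to reserve.

```python
def longest_loss_burst(trace):
    """Length of the longest run of losses in a link trace (1 = delivered, 0 = lost)."""
    longest = run = 0
    for delivered in trace:
        run = 0 if delivered else run + 1
        longest = max(longest, run)
    return longest

def worst_case_slots(trace):
    """Slots to reserve per packet: the first attempt plus retries covering the worst burst."""
    return 1 + longest_loss_burst(trace)

# Example link trace: a three-packet loss burst followed by an isolated loss,
# so worst-case allocation is 1 attempt + 3 retry slots.
trace = [1, 1, 0, 0, 0, 1, 0, 1, 1]
```

Reserving for the worst observed burst avoids deadline misses during bursty fades, at the cost of over-allocation on links whose losses are usually isolated, which is the trade-off the abstract alludes to.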
The Hadoop Distributed File System (HDFS) of Apache Hadoop provides a highly reliable static replication technique for computation, which has led a number of applications to adopt Apache Hadoop. However, because the access rate of every file is different, maintaining the same replication scheme for every file degrades performance. Considering this drawback, this...
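The drawback can be made concrete with a hedged sketch of one dynamic alternative (not the scheme proposed in the paper): scaling a file's replication factor with its observed access rate, starting from the HDFS default of three replicas. The threshold and the logarithmic scaling rule below are illustrative assumptions, not tuned values.

```python
import math

def replication_factor(access_rate, base=3, cap=10, hot_threshold=100.0):
    """Replicas for a file given its access rate (e.g. reads per hour).

    Cold files keep the HDFS default of `base` replicas; hot files gain one
    replica per doubling of the rate beyond `hot_threshold`, up to `cap`.
    All parameters here are illustrative assumptions.
    """
    if access_rate <= hot_threshold:
        return base
    extra = math.ceil(math.log2(access_rate / hot_threshold))
    return min(base + extra, cap)
```

A policy of this shape spends extra storage only on the popular files that dominate read traffic, instead of replicating everything uniformly.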
State-of-the-art GPU chips are designed to deliver extreme throughput for graphics as well as for data-parallel general purpose computing workloads (GPGPU computing). Unlike graphics computing, GPGPU computing requires highly reliable operation. The performance-oriented design of GPUs makes it necessary to evaluate the vulnerability of GPU workloads to soft errors jointly with the performance of GPU chips....
Technology evolution has raised serious reliability considerations, as transistor dimensions shrink and modern microprocessors become denser and more vulnerable to faults. Reliability studies have proposed a plethora of methodologies for assessing system vulnerability which, however, rely heavily on traditional reliability metrics that solely express failure rate over time. Although Failures In Time...
Aircraft are generally designed and produced to be maintainable. Recently, the U.S. Air Force, due to increasing aircraft unit costs, began to investigate early conceptual designs for attritable (unmanned) aircraft. Attritability is a system characteristic that trades reliability and maintainability for low cost in a system meant to be reused at least a few times. This characteristic is affected by...
Semantic web services represent the potential of the web, and they have a significant impact on the discovery process. Due to the high proliferation of web services, selecting the best web services from functionally equivalent service providers has become a real challenge, given the large number of services published in registries. When services are functionally equivalent, it is difficult...
Suppose Alice and Bob share a secret key, of which Eve is initially oblivious. Clearly, Alice and Bob can use this key to ensure that any particular plain-message sent is both authentic and secure. This paper investigates how many plain-messages can be sent per bit of secret key, while still ensuring both secrecy and authentication. In particular the secrecy tolerance relates to the min-entropy of...
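Min-entropy, which the abstract relates to the secrecy tolerance, has a standard definition that can be computed directly: H_min(X) = -log2(max_x Pr[X = x]), the number of bits of unpredictability in the adversary's best single guess. The small sketch below is illustrative only and is not taken from the paper.

```python
import math

def min_entropy(dist):
    """H_min(X) = -log2(max_x Pr[X = x]), in bits.

    A uniformly random k-bit key has min-entropy exactly k; any bias
    toward particular key values lowers it.
    """
    return -math.log2(max(dist))

# A uniform 7-bit key (128 equally likely values) carries 7 bits of
# min-entropy, while a distribution whose most likely key has probability
# 0.5 is worth only 1 bit, regardless of how many other values exist.
uniform_7bit = [1 / 128] * 128
biased = [0.5, 0.25, 0.25]
```

This is why min-entropy, rather than Shannon entropy, is the natural budget when asking how many messages a key can authenticate: it bounds Eve's best guessing probability directly.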
This paper presents an effective design space exploration strategy for the development of dependable systems using selective hardening techniques based on software. Instead of design space exploration approaches based on brute-force or time-consuming fault injection experiments, this strategy is grounded in an early estimation of the register file criticality in microprocessor-based systems. This...
Single-phase (1-phase) transformers have been used in some countries to supply rural distribution regions, which typically have relatively low load densities (farms, villages, etc.). Leading countries include the US, Canada, Mexico, New Zealand, and Australia. However, the use of 1-phase transformers is very limited (almost negligible) in European countries. Turkey is among those countries in...
In this paper, we evaluate the error criticality of radiation-induced errors on modern High-Performance Computing~(HPC) accelerators (Intel Xeon Phi and NVIDIA K40) through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis...
Reliability to soft errors is an increasingly important issue as technology continues to shrink. In this paper, we show that applications exhibit different reliability characteristics on big, high-performance cores versus small, power-efficient cores, and that there is significant opportunity to improve system reliability through reliability-aware scheduling on heterogeneous multicore processors....
Modern DRAM-based systems suffer from significant energy and latency penalties due to conservative DRAM refresh standards. Volatile DRAM cells can retain information across a wide distribution of times ranging from milliseconds to many minutes, but each cell is currently refreshed every 64ms to account for the extreme tail end of the retention time distribution, leading to a high refresh overhead...
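The refresh overhead the abstract describes can be estimated with rough arithmetic. The sketch below uses ballpark DDR4-style figures (8192 refresh commands per 64 ms retention window, each stalling the rank for roughly tRFC = 350 ns); these defaults are illustrative assumptions, not measurements from the paper.

```python
def refresh_overhead(t_refw_ms=64.0, t_rfc_ns=350.0, refreshes_per_window=8192):
    """Fraction of time a DRAM rank is blocked by refresh.

    The device issues `refreshes_per_window` refresh commands per retention
    window of `t_refw_ms`; each command stalls the rank for `t_rfc_ns`.
    Default values are ballpark DDR4 figures, used here only to size the effect.
    """
    return refreshes_per_window * t_rfc_ns / (t_refw_ms * 1e6)  # ms -> ns
```

With the defaults this gives roughly 4.5% of rank time lost to refresh; if most cells could safely use a 256 ms window, `refresh_overhead(t_refw_ms=256.0)` drops to about 1.1%, which is the kind of saving that motivates retention-aware refresh schemes.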