The surf zone presents unique challenges and opportunities for observational oceanography. Physical and biogeochemical signals change quickly in and around breaking surface waves due to high rates of momentum and mass transfer. Autonomous instruments can be challenging to deploy in this energetic zone. We are developing the Smartfin, a surfboard fin capable of measuring geolocated ocean chemistry...
A common approach for developing robotic systems leverages separate simulation and control software packages. Although this approach requires minimal coordination and orchestration between the packages, the separation of simulation and control applications presents the designer with unnecessary challenges during development. This paper describes the Autonomous Robot Control and Simulation (ARCS) software, a...
When oceangoing ships become disabled at sea, the process of establishing an emergency towing connection using conventional methods can be extremely dangerous, especially in heavy weather, low light, or low visibility conditions. The responding towing vessel must position itself beneath, or near to, the flare of the disabled ship's bow while a towline is hauled vertically to the fo'c'sle deck. This...
Traditionally, unmanned undersea vehicles (UUVs) have been large, complex, and expensive. Development of UUVs has been driven by defense and commercial requirements, which has allowed for extensive research and development budgets and long development cycles. This paper briefly discusses these historic drivers and how this situation has changed. It focuses on a new compact, affordable UUV. In 2015, several...
There are various leak test methods used to verify the integrity of seals for underwater hardware; these methods have a wide range of sensitivity and cost. Helium leak testing with the use of a high vacuum mass spectrometer leak detector provides the greatest sensitivity; however, in more complex sealing systems, the utility of the results is entirely dependent upon the test configuration and method...
Solar power plays a significant and sometimes primary role in the energy budgets of many terrestrial sensor systems due to its reliability, power density, and simplicity. More importantly, photoelectric generation can be accurately predicted for a given terrestrial location. This knowledge of expected harvested power is critical for both provisioning in design and scheduling of activities in operations...
In recent years, Autonomous Underwater Vehicle (AUV) technology has increased in complexity and expanded its role in the subsea environment. More and more, AUVs are performing critical and key activities. Therefore, the reliability of AUV systems is a topic of growing relevance in the underwater field. When developing a complex system such as an AUV, some issues might appear during the execution of hardware...
As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network...
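The idea of removing redundant weights can be illustrated with the common magnitude-based variant of pruning: weights whose absolute value falls below a threshold are zeroed out, leaving a sparse model. This is a minimal NumPy sketch, not the paper's implementation; the function name and fixed sparsity target are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `sparsity` fraction of them become zero."""
    k = int(weights.size * sparsity)           # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold         # keep only large-magnitude weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned = magnitude_prune(w, 0.9)
print(np.mean(pruned == 0))                    # ~0.9 of the weights are zeroed
```

In practice the resulting sparsity pattern is irregular, which is exactly why, as the abstract observes, pruned models do not automatically run faster on real hardware: sparse indexing overhead can outweigh the reduction in arithmetic.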
Convolutional neural networks (CNNs) are revolutionizing machine learning, but they present significant computational challenges. Recently, many FPGA-based accelerators have been proposed to improve the performance and efficiency of CNNs. Current approaches construct a single processor that computes the CNN layers one at a time; the processor is optimized to maximize the throughput at which the collection...
CPU-FPGA heterogeneous platforms offer a promising solution for high-performance and energy-efficient computing systems by providing specialized accelerators with post-silicon reconfigurability. To unleash the power of FPGA, however, the programmability gap has to be filled so that applications specified in high-level programming languages can be efficiently mapped and scheduled on FPGA. The above...
The trend of unsustainable power consumption and large memory bandwidth demands in massively parallel multicore systems, with the advent of the big data era, has brought about alternate computation paradigms utilizing heterogeneity, specialization, processor-in-memory, and approximation. Approximate Computing is being touted as a viable solution for high-performance computation by relaxing...
The increasing demand for extracting value out of ever-growing data poses an ongoing challenge to system designers, a task only made trickier by the end of Dennard scaling. As the performance density of traditional CPU-centric architectures stagnates, advancing compute capabilities necessitates novel architectural approaches. Near-memory processing (NMP) architectures are reemerging as promising candidates...
Non-Volatile Memories (NVMs) can significantly improve the performance of data-intensive applications. A popular form of NVM is battery-backed DRAM, which is available and in use today with DRAM's latency and without the endurance problems of emerging NVM technologies. Modern servers can be provisioned with up to 4 TB of DRAM, and provisioning battery backup to write out such large memories is hard...
Caches are traditionally organized as a rigid hierarchy, with multiple levels of progressively larger and slower memories. Hierarchy allows a simple, fixed design to benefit a wide range of applications, since working sets settle at the smallest (i.e., fastest and most energy-efficient) level they fit in. However, rigid hierarchies also add overheads, because each level adds latency and energy even...
Most systems that support speculative parallelization, like hardware transactional memory (HTM), do not support nested parallelism. This sacrifices substantial parallelism and precludes composing parallel algorithms. And the few HTMs that do support nested parallelism focus on parallelizing at the coarsest (shallowest) levels, incurring large overheads that squander most of their potential. We present...
PHP is the dominant server-side scripting language used to implement dynamic web content. Just-in-time compilation, as implemented in Facebook's state-of-the-art HipHopVM, helps mitigate the poor performance of PHP, but substantial overheads remain, especially for realistic, large-scale PHP applications. This paper analyzes such applications and shows that there is little opportunity for conventional...
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called BUCKWILD! that uses both asynchronous execution and low-precision...
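Two ingredients the abstract names, asynchronous execution and low-precision arithmetic, can be sketched in part: the fragment below shows low-precision SGD with unbiased stochastic rounding of gradients to 8-bit integers, which is one standard way to keep quantized SGD convergent. The lock-free asynchronous execution across workers is deliberately omitted; function names and the quantization scale are illustrative assumptions, not the BUCKWILD! implementation.

```python
import numpy as np

def quantize_stochastic(x, scale, rng):
    """Unbiased stochastic rounding of x/scale to int8:
    round up with probability equal to the fractional part."""
    scaled = x / scale
    floor = np.floor(scaled)
    frac = scaled - floor
    q = floor + (rng.random(x.shape) < frac)
    return np.clip(q, -127, 127).astype(np.int8)   # stay inside int8 range

def lowprec_sgd_step(w, grad, lr, scale, rng):
    """One SGD step applied with an 8-bit quantized gradient."""
    q = quantize_stochastic(grad, scale, rng)
    return w - lr * scale * q.astype(np.float64)

# Toy objective f(w) = 0.5 * (w - 3)^2, gradient w - 3
rng = np.random.default_rng(0)
w = np.array([0.0])
for _ in range(200):
    w = lowprec_sgd_step(w, w - 3.0, lr=0.1, scale=0.05, rng=rng)
print(w)   # close to the minimizer at 3.0, up to quantization noise
```

Because stochastic rounding is unbiased, the quantized gradient equals the true gradient in expectation, which is the property that lets convergence analyses of this kind go through.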
Heterogeneous memory management combined with server virtualization in datacenters is expected to increase the software and OS management complexity. State-of-the-art solutions rely exclusively on the hypervisor (VMM) for expensive page hotness tracking and migrations, limiting the benefits from heterogeneity. To address this, we design HeteroOS, a novel application-transparent OS-level solution for...
Artificial Neural Networks are widely used computing systems applied to a wide variety of tasks and problems, classification problems being a common application. However, a significant amount of this research focuses on one- and two-dimensional information, such as vectorized data and images. There is limited research on three-dimensional media such as video clips. This...
Keypoint matching between images is an important technique for computer vision applications such as image retrieval. Although binary feature descriptors such as BRIEF enable fast measurement of distance, exhaustive search is still time-consuming. Hashing methods such as Locality Sensitive Hashing (LSH), while being effective to accelerate searching, result in large memory consumption and thus are...
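The classic bit-sampling form of LSH for binary descriptors can be sketched as follows: each hash table keys a descriptor by a random subset of its bits, so descriptors at small Hamming distance collide in at least one table with high probability, and only colliding candidates are ranked by exact distance. This is a minimal illustration under assumed parameters (16-bit keys, 8 tables), not the method the abstract goes on to propose.

```python
import numpy as np
from collections import defaultdict

class BitSamplingLSH:
    """Bit-sampling LSH index for binary descriptors (e.g. 256-bit BRIEF,
    stored as arrays of 0/1)."""

    def __init__(self, n_bits, key_bits=16, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        # each table hashes by a fixed random subset of bit positions
        self.subsets = [rng.choice(n_bits, key_bits, replace=False)
                        for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.db = []

    def _keys(self, desc):
        return [tuple(desc[s]) for s in self.subsets]

    def add(self, desc):
        idx = len(self.db)
        self.db.append(desc)
        for table, key in zip(self.tables, self._keys(desc)):
            table[key].append(idx)

    def query(self, desc):
        """Collect candidates colliding in any table, then rank them
        by exact Hamming distance to the query."""
        cand = set()
        for table, key in zip(self.tables, self._keys(desc)):
            cand.update(table.get(key, ()))
        return sorted(cand, key=lambda i: int(np.sum(self.db[i] != desc)))
```

The memory cost the abstract alludes to is visible here: every stored descriptor is indexed once per table, so memory grows linearly with the number of tables used to boost recall.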