MAESTRO is a project of the European Ambient Assisted Living Programme that aims to enhance the quality of life and, wherever possible, the autonomy of seniors. The project objective is to develop a web-based ICT platform that will enable any user, whether a customer, producer, or prescribing party of self-monitoring devices, to evaluate their relevance, effectiveness and performance...
Cloud users have little visibility into the performance characteristics and utilization of the physical machines underpinning the virtualized cloud resources they use. This uncertainty forces users and researchers to reverse engineer the inner workings of cloud systems in order to understand and optimize the conditions under which their applications operate. At Massachusetts Open Cloud (MOC), as a public cloud...
Benchmarking and profiling virtual network functions (VNFs) generates input knowledge for resource management decisions taken by management and orchestration systems. Such VNFs are usually not executed in isolation but are often deployed as part of a service function chain (SFC) that connects single functions into complex structures. To manage such chains, isolated performance profiles of single functions...
Service Assurance (SA) is a significant part of Network Function Virtualization (NFV) that enables automated and efficient service delivery from end to end (E2E). In NFV, SA should be integrated into the design and development loop from the beginning. However, it has seen slower progress than other NFV management and orchestration (MANO) components. Most present NFV-SA solutions are partial and do not provide...
Network Function Virtualization is an emerging paradigm that allows complex network services to be created, at the software level, by composing simpler ones. However, this paradigm shift exposes network services to faults and bottlenecks in the complex software virtualization infrastructure they rely on. Thus, NFV services require effective anomaly detection systems to detect the occurrence of network...
In spite of their growing maturity, current web monitoring tools are unable to observe all operating conditions. For example, clients in different geographical locations might get very diverse latencies to the server; the network between client and server might be slow; or third-party servers with external page resources might underperform. Ultimately, only the clients can determine whether a site...
This work deals with modern trends in the design and development of monitoring and data acquisition systems for potentially explosive gases in underground sites, as well as the implementation of such a system, with dedicated software, in mine openings made by Hidroconstructia Company in Buzau County.
Camera-enabled sensors deployed for visual monitoring will cover a region of the target field, providing information for many innovative applications based on wireless sensing. In practice, some areas of the monitored field may have more relevance than others, according to the characteristics of the applications, which may indicate that such areas need better coverage to avoid blind spots and achieve...
Research objects were designed in data-intensive science under the premises of interoperability and machine-readability to describe scientific processes and findings, including all the resources that were used in the research endeavour. In this poster we present our work with Earth Science communities, which have embraced the research object model for long-term preservation and reuse of knowledge, to...
Quality of service and quality of experience are of increasing interest for successful communication network applications, particularly for decentralized networks, such as peer-to-peer (p2p) networks. These networks need sophisticated monitoring mechanisms to be able to adapt the system's parameters to maintain a certain level of quality of service. In the literature, several well-studied approaches, classified...
As enterprises continue to move their workloads from traditional server-room environments to private cloud-based systems, there is an increasing desire and ability for companies like IBM to centrally monitor the systems on behalf of their customers to proactively help to mitigate any potential failure scenarios. In this paper, we investigate failures caused by software aging affecting an enterprise-class...
Our research work aims to develop a monitoring and control system for potentially explosive environments using microcontrollers. The paper presents a micro-system designed and built within the Metrology Laboratory of S.C. SIP S.A. It describes the structure of the user interface software built and used with the acquisition and transfer modules, as well as the management of the signals received in a SQL SERVER 2008...
Inferring fine-grained link metrics from aggregated path measurements, known as network tomography, is essential for various network operations, such as network monitoring, load balancing, and failure diagnosis. Given a set of links of interest and the changing topologies of a dynamic network, we study the problem of calculating the metrics of these links from end-to-end cycle-free path measurements...
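The core idea of network tomography described above can be sketched in a few lines: model each end-to-end path measurement as the sum of the metrics of the links it traverses, then solve the resulting linear system. The routing matrix and delay values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical routing matrix R: R[i, j] = 1 if end-to-end path i
# traverses link j. Additive metrics (e.g. delay) satisfy y = R @ x,
# where x holds the per-link metrics and y the path measurements.
R = np.array([
    [1, 1, 0, 0],   # path 0 uses links 0 and 1
    [0, 1, 1, 0],   # path 1 uses links 1 and 2
    [1, 0, 1, 1],   # path 2 uses links 0, 2 and 3
    [0, 0, 1, 1],   # path 3 uses links 2 and 3
], dtype=float)
true_link_delays = np.array([2.0, 5.0, 1.0, 3.0])
path_delays = R @ true_link_delays   # observed end-to-end measurements

# When R has full column rank the link metrics are identifiable and
# can be recovered with a least-squares solve.
est, *_ = np.linalg.lstsq(R, path_delays, rcond=None)
print(np.round(est, 6))
```

With fewer independent paths than links the system is underdetermined, which is why path selection over changing topologies, as studied in the paper, matters.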
Industrial cyber-physical systems (ICPSs) are expected to provide effective solutions for improving the operation of many existing industrial manufacturing systems. Wireless sensor networks in the industrial field are classified as low-power and lossy networks due to energy-constrained devices, the dynamic environment, and high packet loss rates. Energy efficiency and delivery reliability need to be...
A low-complexity algorithm is presented that clusters sensor nodes based on similarity in the sensed signals. This feature makes it an enabler for distributed detection of events that are impossible to identify using information available to a single node. The algorithm does not require system training prior to deployment nor does it assume statistical knowledge of the signal. Experimental results...
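The idea of clustering nodes by similarity in their sensed signals, without prior training or statistical assumptions, can be illustrated with a small sketch. This is not the paper's algorithm; the greedy single-linkage grouping, the correlation threshold, and the synthetic signals are all assumptions for illustration.

```python
import numpy as np

# Synthetic data: two nodes observe a shared event, two see only noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
event = np.sin(2 * np.pi * 5 * t)            # shared event signal
signals = np.vstack([
    event + 0.1 * rng.standard_normal(200),  # node 0: sees the event
    event + 0.1 * rng.standard_normal(200),  # node 1: sees the event
    0.1 * rng.standard_normal(200),          # node 2: noise only
    0.1 * rng.standard_normal(200),          # node 3: noise only
])

corr = np.corrcoef(signals)                  # pairwise signal similarity

def cluster(corr, threshold=0.8):
    """Greedy single-linkage grouping: a node joins an existing cluster
    if it correlates above the threshold with any member."""
    clusters = []
    for node in range(corr.shape[0]):
        for c in clusters:
            if any(corr[node, m] > threshold for m in c):
                c.append(node)
                break
        else:
            clusters.append([node])
    return clusters

print(cluster(corr))
```

Nodes 0 and 1 end up grouped together, matching the intuition that jointly clustered nodes reveal an event no single node could confirm alone.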
Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as the base technology for large-scale systems opens many challenges in the area of run-time resource management. This paper addresses the problem of selecting the most appropriate performance metrics to trigger auto-scaling actions. Specifically, we investigate...
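To make the auto-scaling problem concrete, here is a minimal sketch of a threshold-based scaler driven by one chosen metric. The metric (average CPU utilization), the thresholds, and the replica bounds are assumptions for illustration, not the paper's policy.

```python
def scale_decision(cpu_samples, replicas, high=0.8, low=0.3,
                   min_replicas=1, max_replicas=10):
    """Return the new replica count given recent CPU utilization samples
    (each in [0, 1]): scale out above `high`, scale in below `low`."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return min(replicas + 1, max_replicas)
    if avg < low:
        return max(replicas - 1, min_replicas)
    return replicas

print(scale_decision([0.9, 0.85, 0.95], replicas=3))  # scale out -> 4
print(scale_decision([0.1, 0.2], replicas=3))         # scale in  -> 2
```

Which metric feeds such a rule matters: a metric that lags actual load causes oscillation or late scaling, which is exactly the selection problem the paper studies.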
System monitoring is an established tool to measure the utilization and health of HPC systems. Usually, system monitoring infrastructures make no connection to job information and do not utilize hardware performance monitoring (HPM) data. To increase the efficient use of HPC systems, automatic and continuous performance monitoring of jobs is an essential component. It can help to identify pathological...
Resource usage data, collected using tools such as TACC_Stats, capture the resource utilization by nodes within a high performance computing system. We present methods to analyze the resource usage data to understand the system performance and identify performance anomalies. The core idea is to model the data as a three-way tensor corresponding to the compute nodes, usage metrics, and time. Using...
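The three-way tensor view described above can be sketched as follows. This is an illustrative simplification, not the paper's method: it arranges usage data as a (node, metric, time) tensor and scores each node by its deviation from the fleet-wide baseline; the data is synthetic.

```python
import numpy as np

# Synthetic resource usage tensor: 8 nodes, 3 metrics, 100 time steps.
rng = np.random.default_rng(1)
n_nodes, n_metrics, n_steps = 8, 3, 100
tensor = rng.normal(loc=50.0, scale=2.0, size=(n_nodes, n_metrics, n_steps))
tensor[5] += 20.0                     # inject one anomalous node

# Baseline: mean usage across nodes for every (metric, time) pair.
fleet_mean = tensor.mean(axis=0)

# Per-node anomaly score: mean absolute deviation from the baseline.
scores = np.abs(tensor - fleet_mean).mean(axis=(1, 2))
anomalous = int(np.argmax(scores))
print(anomalous)
```

Real tensor-based methods go further (e.g. decomposing the tensor into factors over nodes, metrics, and time), but the data layout is the same.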
Scaling clusters is no longer the only struggle in moving towards exascale in HPC. While scaling components such as the network and file systems is a widely accepted need, monitoring, on the other hand, is often left behind in the procurement of these large systems. Monitoring is often an afterthought, expected to be incorporated into existing infrastructure. While that often works for...
Because data collection in HPC systems happens on the nodes and is easily related to the job running on the node, tools presenting the data and subsequent analyses to the user generally present them at the job level. Our position is that this is the wrong level of abstraction and thus limits the value of the analyses, often dissuading users from using any of the offered tools. In this paper we present...