Recursive Principal Component Analysis is explored as a method to identify and classify fault sources in a 12 MW dual-fuel steam power plant. The algorithm is assessed off-line using relevant plant-wide operating data. A simple contribution matrix based on normalized data is proposed to diagnose plant faults. Results indicate it is possible to detect, classify and possibly even...
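The contribution idea described above can be sketched with ordinary (non-recursive) PCA: fit principal components on normalized normal-operation data, then attribute each new sample's squared prediction error (SPE) to individual variables. This is only an illustrative sketch under assumed data, not the paper's recursive algorithm; all variable names and the injected fault are hypothetical.

```python
import numpy as np

def pca_spe_contributions(X_train, X_test, n_components=1):
    """Fit PCA on normalized training data; return per-variable SPE contributions."""
    # Normalize with training mean/std (zero mean, unit variance)
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Xn = (X_train - mu) / sigma
    # Principal loadings from the SVD of the normalized training data
    _, _, Vt = np.linalg.svd(Xn, full_matrices=False)
    P = Vt[:n_components].T                     # m x k loading matrix
    Zt = (X_test - mu) / sigma
    residual = Zt - Zt @ P @ P.T                # portion unexplained by the model
    return residual ** 2                        # each variable's contribution to SPE

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
# Two correlated variables plus two independent ones (hypothetical plant data)
normal = np.hstack([t + 0.1 * rng.normal(size=(200, 1)),
                    t + 0.1 * rng.normal(size=(200, 1)),
                    rng.normal(size=(200, 2))])
faulty = normal[:10].copy()
faulty[:, 2] += 5.0                             # inject a step fault on variable 2
contrib = pca_spe_contributions(normal, faulty)
print(int(contrib.mean(axis=0).argmax()))       # → 2 (the faulted variable dominates)
```

Ranking the columns of the contribution matrix in this way is what lets a monitoring scheme point at likely fault sources rather than merely raising an alarm.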
As more electronic sensors for control units are added to next-generation vehicles and linked to communication infrastructure, an unprecedented amount of operational data has become available in near real time. The ability to extract patterns from such massive real-time operational data to detect, isolate, predict, and mitigate faults is key to enhancing the vehicle ownership experience. In...
Smart home services have become an emerging and profitable new business for IT and telecom corporations. The integration of heterogeneous gadgets and devices confronts smart home system builders with unprecedented maintenance challenges. In this paper, we propose a symptom-problem correlation model to locate system faults in smart home services. First, we investigate the tree-like structure...
The large-scale, dynamic cloud computing environment raises great challenges for fault diagnosis in Web applications. First, fluctuating workloads cause traditional application models to change over time. Moreover, modeling the behavior of complex applications often requires domain knowledge that is difficult to obtain. Finally, managing large-scale applications manually is impractical for operators...
With cloud computing, a cycle of fault diagnosis and recovery becomes the norm. A large amount of monitoring data and log events is available, but it is hard to determine which events or metrics are critical for fault diagnosis. Other approaches model faults as deviations from normal behavior, and are thus less applicable in the cloud, where changes in the environment may affect what is considered...
In a large-scale complex chemical process, hundreds of variables are measured. Since statistical process monitoring techniques such as PCA typically involve dimensionality reduction, all measured variables are often provided as input without pre-selection of variables. In our previous work [1], we demonstrated that reduced models based on only a small number of important variables, called key variables,...
In this work, we address problem determination in virtualized clouds. We show that high dynamism, resource sharing, frequent reconfiguration, high propensity to faults and automated management introduce significant new challenges for fault diagnosis in clouds. To address these, we propose CloudPD, a fault management framework for clouds. CloudPD leverages (i) a canonical representation of the operating...
A multivariate process monitoring and fault identification model using decision tree (DT) learning techniques is proposed. We use one DT classifier for process monitoring and another p DT classifiers (where p is the number of variables) for fault identification. A method based on Mahalanobis distance contours is proposed for selecting model training samples, reducing the number of training samples required. Numerical...
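The sample-selection idea above relies on the Mahalanobis distance, which accounts for variable correlations when measuring how far a sample lies from the data centre. A minimal sketch of selecting training samples near a chosen distance contour follows; the data and the distance band are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mahalanobis_distances(X, mu=None, cov=None):
    """Squared Mahalanobis distance of each row of X from the data centre."""
    if mu is None:
        mu = X.mean(axis=0)
    if cov is None:
        cov = np.cov(X, rowvar=False)
    diff = X - mu
    inv_cov = np.linalg.inv(cov)
    # d2_i = (x_i - mu)^T * inv_cov * (x_i - mu)
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

def select_contour_samples(X, d_lo, d_hi):
    """Keep only samples whose squared distance falls in [d_lo, d_hi],
    i.e. near a chosen Mahalanobis contour band."""
    d2 = mahalanobis_distances(X)
    mask = (d2 >= d_lo) & (d2 <= d_hi)
    return X[mask], d2

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
subset, d2 = select_contour_samples(X, 1.0, 9.0)
print(subset.shape[0] < X.shape[0])   # the band keeps a reduced training set
```

Samples very close to the centre carry little boundary information for a classifier, so restricting training to a contour band can shrink the training set without discarding the informative region.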
This work presents a relatively new method, empirical mode decomposition (EMD), for analyzing power quality disturbances. Across a comprehensive and widening range of approaches and engineering activities, there is increasing concern for power system disturbance monitoring techniques. The need for better accuracy and computation speed is constantly demanding new, efficient processing...
Log event correlation is an effective means of detecting system faults and security breaches encountered in information technology environments. Centralized, database-driven log event correlation is common, but suffers from flaws such as high network bandwidth utilization, significant requirements for system resources, and difficulty in detecting certain suspicious behaviors. Distributed event correlation...
The direct-reading ferrograph is an instrument used to determine wear debris concentration and size distribution in lubricating oil, characterized by its quick, simple operation and low price. It can be used on oil-lubricated machines as a preventive tool for failure judgment (forecasting). In order to explore new oil monitoring technology and give accurate monitoring and fault...
In this paper, a new method named cross-correlation approximate entropy is proposed, based on correlation analysis and approximate entropy theory. It can detect anomalies in the running state quantitatively, without any prior knowledge. The method takes a fixed-length section of the equipment's running-state signal as a window. By sliding the window through the state signal, the paper...
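The sliding-window scheme above can be illustrated with standard approximate entropy (ApEn), the quantity the proposed cross-correlation variant builds on; this sketch is not the paper's method, and the signals and parameters (m = 2, r = 0.2·std) are common defaults assumed for illustration.

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """Standard approximate entropy ApEn(m, r) of a 1-D signal."""
    u = np.asarray(u, dtype=float)
    r = r * u.std()                            # tolerance relative to signal scale

    def phi(m):
        n = len(u) - m + 1
        x = np.array([u[i:i + m] for i in range(n)])       # all length-m templates
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)              # fraction of similar templates
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

def sliding_apen(signal, window, step):
    """ApEn over a sliding window: a simple quantitative anomaly indicator."""
    return [approx_entropy(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, step)]

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
regular = np.sin(t)                            # regular signal: low ApEn
noisy = rng.normal(size=400)                   # irregular signal: high ApEn
print(approx_entropy(regular) < approx_entropy(noisy))   # → True
```

A rise in the windowed entropy profile relative to a healthy baseline flags a change in running state without requiring a model of the equipment.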
For engine module performance deterioration and fault diagnostics, the modules' operational contributions to changes in engine parameters are analyzed to find the impact of module deterioration on parameter shifts relative to EPR (Engine Pressure Ratio) based baselines, optimally modeled by the Quasi-Newton method. The module performance assessment strategies for the engine type in this study are finally achieved...
The normal operation of enterprise software systems can be modeled by stable correlations between various system metrics; errors are detected when some of these correlations fail to hold. The typical approach to diagnosis (i.e., pinpointing the faulty component) based on the correlation models is to use the Jaccard coefficient or some variant thereof, without reference to system structure, dependency...
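The "typical approach" mentioned above can be sketched in a few lines: score each component by the Jaccard similarity between the set of correlations it participates in and the set of correlations observed to be violated, and blame the highest-scoring component. The component and metric names here are hypothetical, purely for illustration.

```python
def jaccard(a, b):
    """Jaccard coefficient |a ∩ b| / |a ∪ b| of two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Metric-pair correlations each component participates in (hypothetical names)
component_correlations = {
    "web": {("req_rate", "cpu_web"), ("req_rate", "net_out")},
    "app": {("cpu_web", "cpu_app"), ("req_rate", "cpu_app")},
    "db":  {("cpu_app", "db_io"), ("db_io", "disk_busy")},
}
# Correlations observed to be violated during the incident
broken = {("cpu_app", "db_io"), ("db_io", "disk_busy"), ("req_rate", "cpu_app")}

# Blame the component whose correlation set best matches the broken set
scores = {c: jaccard(corrs, broken) for c, corrs in component_correlations.items()}
culprit = max(scores, key=scores.get)
print(culprit)   # → db
```

Because the score looks only at set overlap, it ignores system structure and dependencies, which is exactly the limitation the abstract goes on to address.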
In this work, a new method for PQ analysis is proposed based on neural processing, aiming to decouple the information about different disturbances within the power system signal. The system was implemented and tested using a simulated database, demonstrating the good performance of the proposed technique for signal separation. It was shown that the proposed method could also be used as a preprocessing...
On-line monitoring of penicillin cultivation processes is crucial to the safe production of high-quality products. Multi-way principal component analysis (MPCA), a multivariate projection method, has been widely used to monitor batch and fed-batch processes. However, when MPCA is used for on-line batch monitoring, the future behavior of each new batch must be inferred up to the end of the batch operation...
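The on-line difficulty mentioned above comes from how MPCA arranges the data: three-way batch data (batches × variables × time) is unfolded so that each complete batch becomes one long row, so a running batch only fills part of its row. A minimal sketch, with hypothetical dimensions and one common unfolding convention (each variable's trajectory laid out contiguously):

```python
import numpy as np

# Hypothetical batch data: I batches, J variables, K time points
I, J, K = 30, 5, 100
rng = np.random.default_rng(0)
batches = rng.normal(size=(I, J, K))

# Batch-wise unfolding used by MPCA: each batch becomes one row of length J*K
X = batches.reshape(I, J * K)
print(X.shape)   # → (30, 500)

# On-line problem: at time k < K a new batch provides only J*k of the J*K
# columns, so the future portion must be estimated (e.g., zero deviation, or
# current deviation assumed to persist) before projecting onto the MPCA model.
k = 40
partial = batches[0, :, :k].reshape(-1)    # what is actually measured so far
print(partial.shape)   # → (200,)
```

This is why on-line MPCA schemes must infer each new batch's future behavior up to the end of the run before the standard projection can be applied.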
The statistical process control (SPC) chart is effective in detecting process shifts. One important assumption of the traditional SPC charts is that the plotted observations are independent of each other. Otherwise, so-called "false alarms" increase, and these improper signals lead to wrong interpretations. However, this independence assumption is often not...
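The false-alarm effect described above is easy to demonstrate with a Shewhart individuals chart, whose 3-sigma limits are estimated from the average moving range under the independence assumption. A sketch with simulated data (the AR(1) coefficient 0.8 is an assumed value for illustration):

```python
import numpy as np

def individuals_chart_alarms(x):
    """Count points outside 3-sigma limits of a Shewhart individuals chart,
    with sigma estimated from the average moving range (assumes independence)."""
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128          # d2 constant for subgroups of size 2
    center = x.mean()
    return int(np.sum(np.abs(x - center) > 3 * sigma_hat))

rng = np.random.default_rng(0)
n = 2000
iid = rng.normal(size=n)                   # independent, in-control observations
ar1 = np.zeros(n)                          # autocorrelated AR(1) process, also in control
for i in range(1, n):
    ar1[i] = 0.8 * ar1[i - 1] + rng.normal()

print(individuals_chart_alarms(iid), individuals_chart_alarms(ar1))
# the autocorrelated series triggers far more (false) alarms
```

With positive autocorrelation the moving range underestimates the process standard deviation, so the control limits are too tight and in-control points are repeatedly flagged.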
This paper presents a fault diagnosis approach that combines Gaussian mixture models with variable reconstruction. Traditional multivariate process monitoring techniques usually rest on the fundamental assumption that the operating data follow a unimodal Gaussian distribution, but this often becomes invalid under the different operating conditions encountered in practice. The Gaussian mixture...
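Why the unimodal assumption fails can be shown with a small numeric sketch: when data come from two operating conditions, a single Gaussian inflates its covariance across both modes and misses faults lying between them, while scoring against the nearest mode (the mixture idea) catches them. This is an illustration of the motivation, not the paper's algorithm; all modes and the fault location are assumed.

```python
import numpy as np

def t2(x, mu, cov):
    """Squared Mahalanobis distance of each row of x from (mu, cov)."""
    diff = x - mu
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

rng = np.random.default_rng(0)
mode1 = rng.normal(loc=0.0, size=(300, 2))             # operating condition 1
mode2 = rng.normal(loc=6.0, size=(300, 2))             # operating condition 2
data = np.vstack([mode1, mode2])
fault = rng.normal(loc=3.0, scale=0.5, size=(50, 2))   # abnormal samples between modes

# Unimodal model: one Gaussian fitted across both conditions
d_uni = t2(fault, data.mean(axis=0), np.cov(data, rowvar=False))
# Mixture-style model: score each sample against its nearest operating mode
d_mix = np.minimum(t2(fault, mode1.mean(axis=0), np.cov(mode1, rowvar=False)),
                   t2(fault, mode2.mean(axis=0), np.cov(mode2, rowvar=False)))

thresh = 9.21   # chi-square(2) 0.99 quantile
print((d_uni > thresh).mean(), (d_mix > thresh).mean())
# the unimodal model misses faults that the mode-aware model flags
```

The inflated single-Gaussian covariance stretches along the line joining the two modes, so anything on that line looks "normal" to the unimodal monitor.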
As the size of a centrally managed IP network increases, the cost of monitoring network devices and the number of reported events increase super-linearly. This in turn degrades the performance of the event correlation engine that is responsible for suppressing dependent events and escalating root cause events to a network administrator. To solve this scalability problem, we propose a distributed framework...
Analyzed here is a fault localization approach based on directed graphs, from the viewpoint of business software. The fault propagation model solves the problem of obtaining the dependency relationships between faults and symptoms semi-automatically. The main idea is: obtain the deployment graph of the managed business from the topology of the network and software environment; generate the adjacency matrix of the...