False alarms are frustrating test and diagnostic complications. Reporting more faults than actually exist within a unit under test (UUT) increases costs, causes unnecessary and longer downtime, and erodes end-user confidence. In this paper we deviate from the classical definition of a false alarm and replace it with a pragmatic one that measures the maintenance actions taken as a result of invalid failure indications. This redefinition helps formulate various false-alarm metrics and explore the proximate causes of false alarms arising from cannot-duplicate (CND) and retest-OK (RTOK) events. It demonstrates why the fraction of false alarms (FFA) specified for systems (usually <10%) is at odds with empirical results found at maintenance sites, where returns caused by false alarms tend to exceed 70%. This large discrepancy between expectations and results is partly due to differing definitions of terms and the difficulty of developing proper and verifiable metrics. Using Bayes' formula as well as other metrics, however, this paper develops formulas for the occurrence of false alarms (OFA) and the cost of false alarms (CFA) for various CND and RTOK events. The paper includes typical examples that illustrate a match between the formulas and empirical results. Finally, the paper offers practical strategies for mitigating the occurrences and costs resulting from false-alarm indications.
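The Bayesian reasoning mentioned above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual OFA/CFA model: the function name and the numeric values (fault prevalence, test sensitivity, false-positive rate) are illustrative assumptions. It shows how, when real faults are rare, even a small false-positive rate makes most failure indications false alarms, which is consistent with the gap between <10% specifications and >70% field observations described above.

```python
# Hypothetical sketch (not the paper's model): Bayes' theorem applied to
# failure indications on a unit under test (UUT).
def p_false_alarm(p_fault, p_detect, p_false_indicate):
    """Return P(no fault | failure indicated) via Bayes' theorem.

    p_fault          -- prior probability a real fault is present
    p_detect         -- P(indication | fault), i.e. test sensitivity
    p_false_indicate -- P(indication | no fault), the false-positive rate
    """
    # Total probability of seeing a failure indication at all.
    p_indicate = p_detect * p_fault + p_false_indicate * (1.0 - p_fault)
    # Fraction of those indications that come from fault-free units.
    return p_false_indicate * (1.0 - p_fault) / p_indicate

# Illustrative numbers: 1% fault prevalence, 95% detection,
# 3% false-positive rate. Most indications are then false alarms.
print(round(p_false_alarm(0.01, 0.95, 0.03), 3))  # prints 0.758
```

With these assumed numbers, roughly three out of four failure indications are false alarms even though the per-test false-positive rate is only 3%, illustrating how a low specified FFA can coexist with a high observed false-alarm return rate.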