The growing importance, large scale, and high server density of high-performance computing data centers make them prone to strategic attacks, misconfigurations, and failures of both the cooling and the computing infrastructure. Such unexpected events lead to thermal anomalies (hotspots, fugues, and coldspots) that significantly increase the total cost of operation of data centers. A model-based thermal anomaly detection mechanism is proposed, which compares the expected thermal map of the data center (obtained from heat-generation and heat-extraction models) against the observed thermal map (obtained from thermal cameras). In addition, a Thermal Anomaly-aware Resource Allocation (TARA) scheme is designed to create time-varying thermal fingerprints of the data center so as to maximize the accuracy and minimize the latency of this model-based detection. TARA significantly improves the performance of model-based anomaly detection compared to state-of-the-art resource allocation schemes.
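
To make the comparison of expected and observed thermal maps concrete, the sketch below flags grid cells whose observed temperature deviates from the model prediction by more than a fixed margin. This is only an illustrative assumption, not the detection algorithm from the paper: the function name `detect_anomalies`, the 2 °C threshold, and the grid representation of the thermal maps are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation): flag thermal anomalies
# by comparing an expected thermal map against an observed one on the same grid.
import numpy as np

def detect_anomalies(expected, observed, threshold=2.0):
    """Label each cell: 0 = normal, +1 = hotspot, -1 = coldspot/fugue.

    expected, observed: 2-D arrays of temperatures (deg C) on the same grid,
    e.g. from a heat-generation/extraction model and a thermal camera.
    threshold: residual (deg C) beyond which a cell is flagged (assumed value).
    """
    residual = observed - expected          # positive -> hotter than predicted
    labels = np.zeros_like(residual, dtype=int)
    labels[residual > threshold] = 1        # observed much hotter than expected
    labels[residual < -threshold] = -1      # observed much cooler than expected
    return labels

# Example: a 3x3 thermal map with one anomalously hot and one anomalously cold cell.
expected = np.full((3, 3), 25.0)
observed = expected.copy()
observed[0, 0] += 5.0   # flagged as hotspot
observed[2, 2] -= 5.0   # flagged as coldspot/fugue
print(detect_anomalies(expected, observed))
```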