We develop two probabilistic methods for analyzing the maximum size reached by a data structure during a sequence of insertions and deletions, in structures such as priority queues, dictionaries, linear lists, and symbol tables, and in sweepline structures for geometry and VLSI applications. The notion of the “maximum” is basic to questions of resource preallocation. We apply our methods to combinatorial models of file histories, to probabilistic models, and to a non-Markovian process (algorithm) for processing sweepline information efficiently, called “hashing with lazy deletion” (HwLD). We derive expressions for the expected maximum data structure size that are asymptotically exact, that is, correct up to lower-order terms; in several cases of interest, the expected maximum size is asymptotically equal to the maximum expected size. At a high level, our first method isolates the primary contribution to the maximum and bounds the lesser effects; our second relates the continuous-time probabilistic model to its discrete analog, the maximum slot occupancy in hashing.
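To make the central example concrete, the following Python class sketches hashing with lazy deletion under simple assumptions of our own (a fixed bucket count, items stored as key/expiration pairs); it is an illustration, not the paper's implementation. Each item carries an expiration time, and an expired item is physically removed only when its hash bucket is next accessed, so the table's size at any moment can exceed the number of live items.

```python
class HwLD:
    """Hashing with lazy deletion (sketch): items carry expiration
    times; an expired item is purged only when its bucket is next
    accessed, not at the moment it expires."""

    def __init__(self, num_buckets):
        self.buckets = [[] for _ in range(num_buckets)]
        self.size = 0  # items currently occupying the table (live + expired)

    def _purge(self, b, now):
        # Lazily remove expired items from bucket b.
        bucket = self.buckets[b]
        kept = [(k, exp) for (k, exp) in bucket if exp > now]
        self.size -= len(bucket) - len(kept)
        self.buckets[b] = kept

    def insert(self, key, expire_time, now):
        b = hash(key) % len(self.buckets)
        self._purge(b, now)  # deletion work piggybacks on the access
        self.buckets[b].append((key, expire_time))
        self.size += 1

    def lookup(self, key, now):
        b = hash(key) % len(self.buckets)
        self._purge(b, now)
        return any(k == key for (k, _) in self.buckets[b])
```

The quantity the paper studies is the maximum that `size` attains over the whole sequence of operations: because deletions are deferred, this maximum is governed by both the live-item process and the residue of expired items awaiting purging.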