Performance evaluation of computer networks through traditional packet-level simulation is becoming increasingly difficult as networks grow in size along different dimensions. Owing to its higher level of abstraction, fluid simulation is a promising approach for evaluating large-scale network models. In this paper we evaluate and compare the computational effort required by fluid- and packet-level simulation. To measure this effort we introduce the “simulation event rate”, a measure that is both analytically tractable and representative of the actual execution cost. We identify the fundamental factors that contribute to the simulation event rate in fluid- and packet-level simulations and analytically characterize the event rate for specific network models. Among these factors, we identify the “ripple effect” as a significant contributor to the computational effort of fluid simulation. We also show that the parameter space of a given network model can be divided into regions in which one simulation technique is more efficient than the other. In particular, we consider a realistic large-scale network and demonstrate how the computational effort depends on the simulation parameters. Finally, we show that flow aggregation can effectively reduce the impact of the ripple effect, and that this effect is less pronounced under the WFQ scheduling policy.