PARSEC is a popular benchmark suite designed to facilitate the study of chip multiprocessors (CMPs). It comprises 13 parallel applications, each of which ships with an input set intended for native execution as well as three reduced-size simulation input sets. Each benchmark also demarcates a Region of Interest (ROI) that delimits the parallel code in the application. The PARSEC developers state that users should model only the ROI when using the simulation inputs; in all other cases the native input set should be used to obtain results representative of full-program execution. In this paper we analyze the runtime scalability of PARSEC on real multiprocessor systems. For each benchmark we measured the runtime scalability of both the ROI and the full execution on every input set. We found that for six of the benchmarks the scalability of the ROI matches that of the full program regardless of the input set used. For the remaining seven benchmarks, the scalability of the ROI diverges significantly from that of the full program on at least some of the input sets. Three of these benchmarks scale much worse over the full program than over the ROI, even when run with the native input set. Finally, for most of the benchmarks the runtime scalability of the simulation inputs differs significantly from that of the native input set, for both the ROI and the full program.
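To make the ROI/full-execution distinction concrete, the sketch below shows how a PARSEC-style benchmark demarcates its ROI using the __parsec_roi_begin() and __parsec_roi_end() hooks from PARSEC's hooks library, guarded by the ENABLE_PARSEC_HOOKS build flag as in the PARSEC sources. The surrounding workload (sum_worker, NTHREADS, and so on) is a hypothetical stand-in for illustration, not code from any actual PARSEC benchmark; the point is only that the serial setup and wrap-up fall outside the ROI, while the parallel phase falls inside it.

    /* Hypothetical mini-benchmark illustrating PARSEC-style ROI demarcation.
     * The hook names and the ENABLE_PARSEC_HOOKS guard follow PARSEC's
     * conventions; the workload itself is an illustrative stand-in. */
    #include <pthread.h>
    #include <stdio.h>

    #ifdef ENABLE_PARSEC_HOOKS
    #include <hooks.h>          /* __parsec_roi_begin(), __parsec_roi_end() */
    #endif

    #define NTHREADS 4
    #define N (1 << 20)

    static double data[N];
    static double partial[NTHREADS];

    /* Each worker sums one contiguous slice of the array. */
    static void *sum_worker(void *arg) {
        long id = (long)arg;
        long chunk = N / NTHREADS;
        double s = 0.0;
        for (long i = id * chunk; i < (id + 1) * chunk; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];

        /* Serial setup: runs before the ROI, so studies that model only
         * the ROI never observe this phase. */
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

    #ifdef ENABLE_PARSEC_HOOKS
        __parsec_roi_begin();   /* start of the parallel Region of Interest */
    #endif
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, sum_worker, (void *)t);
        for (long t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
    #ifdef ENABLE_PARSEC_HOOKS
        __parsec_roi_end();     /* end of the ROI; serial wrap-up follows */
    #endif

        /* Serial wrap-up: also outside the ROI. */
        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++)
            total += partial[t];
        printf("sum = %f\n", total);
        return 0;
    }

Built with -DENABLE_PARSEC_HOOKS and linked against the hooks library, only the code between the two hook calls is treated as the ROI; without the flag it compiles as an ordinary pthreads program. The serial phases outside the hooks are exactly what is missed when scalability is measured over the ROI alone, which is the source of the ROI/full-program divergence we report.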