An important sensing operation is to detect the presence of specific signals with unknown transmission parameters. This task, referred to as “link acquisition,” is typically a sequential search over the transmitted signal space. Recently, the use of sparsity in similar estimation or detection problems has received considerable attention. These works typically focus on the benefits of compressed sensing, but not generally on the cost incurred by sparse recovery. Our goal is to examine the tradeoff in complexity and performance when using sparse recovery with compressed or uncompressed samples. To do so, we propose a compressive sparsity-aware (CSA) acquisition scheme, in which a compressive multichannel sampling (CMS) front-end is followed by a sparsity-regularized likelihood ratio test (SR-LRT) module. The CSA scheme borrows insights from the models studied in sub-Nyquist sampling and finite rate of innovation (FRI) signals. We further optimize the CMS front-end by maximizing the average Kullback–Leibler distance among the hypotheses of the SR-LRT. We compare the CSA scheme vis-à-vis other popular alternatives in terms of performance and complexity. Simulations suggest that the CSA scheme can scale down the implementation cost with greater flexibility than the alternatives; however, both the CSA scheme and the alternatives have overall complexities that scale linearly with the size of the search space. Furthermore, it is shown that compressive measurements used in the SR-LRT incur a performance loss when noise dominates, while providing better performance, in spite of the compression, when noise is mild.
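The front-end optimization criterion mentioned above can be illustrated with a small sketch. The abstract does not specify the paper's actual optimization procedure, so the following is only a hedged stand-in: under additive white Gaussian noise with equal covariance across hypotheses, the KL distance between two hypotheses `y = Phi @ s_i + n` and `y = Phi @ s_j + n` reduces to `||Phi (s_i - s_j)||^2 / (2 sigma^2)`, and a crude surrogate for the CMS design is to pick, among candidate measurement matrices, the one maximizing the average of this quantity over hypothesis pairs. All names (`avg_kl`, the signal set, the random-search loop) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_kl(Phi, signals, noise_var=1.0):
    """Average pairwise KL distance between Gaussian hypotheses
    y = Phi @ s_i + n, n ~ N(0, noise_var * I).
    With equal covariances, KL(i, j) = ||Phi (s_i - s_j)||^2 / (2 * noise_var)."""
    total, count = 0.0, 0
    for i in range(len(signals)):
        for j in range(len(signals)):
            if i != j:
                d = Phi @ (signals[i] - signals[j])
                total += float(d @ d) / (2.0 * noise_var)
                count += 1
    return total / count

# Hypothetical setup: N-sample candidate signals, M compressive measurements.
N, M = 16, 4
signals = [rng.standard_normal(N) for _ in range(5)]

# Crude stand-in for the CMS optimization: keep the best of a few
# random projections under the average-KL criterion.
best = max((rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(50)),
           key=lambda Phi: avg_kl(Phi, signals))
print(avg_kl(best, signals) > 0)
```

This random-search loop is only a placeholder for whatever structured optimization the paper performs; its purpose here is to make the objective (average KL distance across the SR-LRT hypotheses) concrete.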