The ZA-NLMS (for zero-attractor) is arguably the seminal sparsity-aware gradient adaptive algorithm. Since it constrains the ℓ1-norm of the filter weights, the underlying problem becomes convex and hence admits a unique solution (in the expected sense). Despite these appealing properties, the convergence of the algorithm and, more importantly, its best-performing sparsity tradeoff have yet to be effectively studied. This paper presents a comprehensive analytical study of the convergence of ZA-NLMS, which yields the optimal (constant) sparsity tradeoff. From a practitioner's point of view, the value of this decisive hyperparameter turns out to be related to the 3/2-power of the adaptive filter length. Both this outcome, which is difficult to justify intuitively, and the convergence model itself have been exhaustively validated through numerical experiments.
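For reference, a minimal sketch of the ZA-NLMS recursion as it commonly appears in the sparsity-aware adaptive filtering literature; the notation below (step size $\mu$, zero-attraction strength $\rho$, regularization constant $\delta$) is assumed here rather than taken from this paper:
$$
e(n) = d(n) - \mathbf{w}^{T}(n)\,\mathbf{x}(n), \qquad
\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\,\mathbf{x}(n) + \delta} - \rho\,\operatorname{sgn}\!\big(\mathbf{w}(n)\big).
$$
The zero-attraction term $-\rho\,\operatorname{sgn}(\mathbf{w}(n))$ arises as the subgradient of the ℓ1 penalty on the weights, and $\rho$ plays the role of the sparsity tradeoff whose optimal constant value is the subject of this study.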