Convex optimization has become an essential tool in many different disciplines. In this paper, we consider the primal-dual algorithm for minimizing augmented models with linear constraints, where the objective is the sum of two proper closed convex functions: one is the square of a norm, and the other is a gauge function that is partly smooth relative to an active manifold. Examples of such models appear throughout the signal processing, optimization, statistics, and machine learning literature. We present a unified framework for understanding the local convergence behaviour of the primal-dual algorithm applied to these augmented models. Our analysis explains the local linear convergence of the algorithm observed numerically in the literature.
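To make the setting concrete, the following is a minimal numerical sketch, not the paper's exact model or analysis: a Chambolle–Pock-style primal-dual iteration for min_x (1/2)||Kx - b||^2 + lam*||x||_1, i.e. a squared norm composed with a linear map plus a gauge (the l1 norm, which is partly smooth relative to the manifold of vectors with fixed support). The matrix `K`, vector `b`, and parameter `lam` are illustrative choices. The sketch also records the support of the iterates, since finite identification of the active manifold is the mechanism behind the local linear convergence the abstract refers to.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: componentwise shrinkage
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual(K, b, lam, n_iter=2000):
    """Chambolle-Pock iteration for min_x 0.5*||Kx-b||^2 + lam*||x||_1."""
    m, n = K.shape
    L = np.linalg.norm(K, 2)          # operator norm of K
    tau = sigma = 0.9 / L             # step sizes with tau*sigma*L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    supports = []
    for _ in range(n_iter):
        # dual step: prox of sigma*f* with f = 0.5*||. - b||^2,
        # so f*(y) = 0.5*||y||^2 + <b, y> and the prox is affine
        y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*lam*||.||_1 (soft thresholding)
        x_new = soft_threshold(x - tau * (K.T @ y), tau * lam)
        x_bar = 2.0 * x_new - x       # extrapolation
        x = x_new
        supports.append(tuple(np.flatnonzero(x)))
    return x, supports

# Illustrative sparse recovery instance (hypothetical data)
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
b = K @ x_true
x, supports = primal_dual(K, b, lam=0.5)

# After finitely many iterations the support (active manifold) typically
# stabilizes; once it does, the local analysis predicts linear convergence.
print("final support:", supports[-1])
print("support stable over last 200 iters:", len(set(supports[-200:])) == 1)
```

Once the iterates land on the correct manifold (here, a fixed support pattern), the algorithm behaves locally like a linear iteration restricted to that manifold, which is the phenomenon the unified framework is built to explain.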