The projection-type neural networks $\tau \frac{dx}{dt}=-x+P_{\Omega }(x-\Lambda (t)\partial ^{0}E(x))$ are generic and useful models for solving constrained optimization problems $\min\{E(x) \mid x \in \Omega\}$. Most existing convergence/stability analyses are derived under the assumptions that $E$ is uniformly or strictly convex and $\Omega$ is box-shaped. In this talk we present a generalized convergence/stability theory for these networks. In the general setting where $E$ is only convex and $\Omega$ is an arbitrary closed bounded convex set, we establish the global convergence/asymptotic stability of the networks in a specified sense. The presented theory sharpens and generalizes the existing results and, consequently, extends the applicability of these neural networks to a broader class of optimization problems.
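To illustrate the dynamics, the following is a minimal sketch (not from the talk) of a forward-Euler simulation of the network $\tau \frac{dx}{dt}=-x+P_{\Omega}(x-\Lambda(t)\partial^{0}E(x))$, with the illustrative choices of a smooth convex objective $E(x)=\frac{1}{2}\|x-c\|^2$ (so $\partial^{0}E$ reduces to the ordinary gradient), a box set $\Omega=[0,1]^n$ (where $P_\Omega$ is coordinatewise clipping), and a constant $\Lambda$; the names `tau`, `lam`, and `c` are hypothetical parameters chosen for the example.

```python
import numpy as np

def project_box(y, lo=0.0, hi=1.0):
    """Projection P_Omega onto the box [lo, hi]^n, a closed bounded convex set."""
    return np.clip(y, lo, hi)

def grad_E(x, c):
    """Gradient of the convex objective E(x) = 0.5 * ||x - c||^2."""
    return x - c

def simulate(x0, c, tau=1.0, lam=0.5, dt=0.01, steps=5000):
    """Forward-Euler integration of tau * dx/dt = -x + P_Omega(x - lam * grad_E(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        dx = (-x + project_box(x - lam * grad_E(x, c))) / tau
        x = x + dt * dx
    return x

# Unconstrained minimizer c lies partly outside the box, so the trajectory
# should approach the constrained minimizer P_Omega(c).
c = np.array([1.5, -0.3, 0.4])
x_star = simulate(np.zeros(3), c)
```

Any equilibrium $x^{*}$ of the dynamics satisfies $x^{*}=P_{\Omega}(x^{*}-\Lambda \nabla E(x^{*}))$, which is exactly the variational characterization of a minimizer of $E$ over $\Omega$; for this quadratic example that point is the coordinatewise clipping of $c$ onto $[0,1]^n$.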