Factor analysis, in its original formulation, deals with the linear statistical model
$$ Y = HX + \varepsilon \tag{1} $$
where $H$ is a deterministic matrix, and $X$ and $\varepsilon$ are independent random vectors, the first with dimension smaller than that of $Y$, the second with independent components. What makes this model attractive in applied research is the data-reduction mechanism built into it: a large number of observed variables $Y$ are explained in terms of a small number of unobserved (latent) variables $X$, perturbed by the independent noise $\varepsilon$.

Under normality assumptions, which are the rule in the standard theory, all the laws of the model are specified by covariance matrices. More precisely, assume that $X$ and $\varepsilon$ are zero-mean independent normal vectors with ℂov(X) = P and ℂov(ε) = D, where $D$ is diagonal. It follows from (1) that
$$ \operatorname{\mathbb{C}ov}(Y) = HPH^{T} + D. $$
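The covariance identity above can be checked numerically. The sketch below, using NumPy, builds a small hypothetical factor model (the dimensions 5 and 2 and all matrix values are illustrative assumptions, not from the text), computes the implied covariance $HPH^{T} + D$, and compares it with the empirical covariance of simulated draws from model (1):

```python
import numpy as np

# Hypothetical small factor model: 5 observed variables, 2 latent factors.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 2))              # deterministic loading matrix H
A = rng.normal(size=(2, 2))
P = A @ A.T + np.eye(2)                  # Cov(X), positive definite
D = np.diag(rng.uniform(0.5, 1.5, 5))    # Cov(eps), diagonal

Sigma = H @ P @ H.T + D                  # implied Cov(Y) = H P H^T + D

# Monte Carlo check: simulate Y = H X + eps and compare covariances.
n = 200_000
X = rng.multivariate_normal(np.zeros(2), P, size=n)
eps = rng.multivariate_normal(np.zeros(5), D, size=n)
Y = X @ H.T + eps
Sigma_hat = np.cov(Y, rowvar=False)

# The discrepancy should shrink toward zero as n grows.
print(np.max(np.abs(Sigma_hat - Sigma)))
```

The independence of $X$ and $\varepsilon$ is what kills the cross term in the derivation: $\operatorname{\mathbb{C}ov}(Y) = H\operatorname{\mathbb{C}ov}(X)H^{T} + \operatorname{\mathbb{C}ov}(\varepsilon)$ holds exactly because $\operatorname{\mathbb{C}ov}(X,\varepsilon)=0$.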