A direct optimal procedure is developed for the control of linear, time-varying, spatially discrete mechanical systems. An assumed-time-modes method, in which the dependent variables of the dynamics problem are expanded in terms of admissible basis functions in time, is extended to include a similar representation of the control inputs. This approach, together with direct use of Hamilton's law of varying action, allows explicit a priori integration in time of the energy-related quantities, leading to algebraic equations of motion; the conventional first-order differential state equations are thus obviated. The expansion coefficients for the dependent variables and those for the input forces constitute the states and controls, respectively. A typical performance measure of linear quadratic regulator theory is considered for the optimal control problem and is transformed into an algebraic objective function by means of the assumed-time-modes expansions. The resulting algebraic optimality problem yields closed-form solutions for the optimal control gains directly. Some computational aspects of the proposed methodology are pointed out, and numerical solutions for single- and multi-degree-of-freedom problems are included.
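The structure of the resulting algebraic optimality problem can be illustrated with a minimal numerical sketch. Assume (hypothetically, for illustration only) that after the assumed-time-modes expansion the equations of motion reduce to an algebraic constraint A a + B b = f, where a collects the state expansion coefficients and b the control coefficients, and that the transformed performance measure is the quadratic form J = aᵀQa + bᵀRb. Eliminating a and setting the gradient of J to zero gives the closed-form coefficient solution (MᵀQM + R) b = MᵀQg with M = A⁻¹B, g = A⁻¹f. The matrices below are random placeholders, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3                                   # state / control coefficient counts (arbitrary)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # algebraic dynamics operator (well conditioned)
B = rng.standard_normal((n, m))                   # control influence matrix
f = rng.standard_normal(n)                        # forcing/initial-condition terms

Qh = rng.standard_normal((n, n)); Q = Qh.T @ Qh + np.eye(n)  # SPD state weight
Rh = rng.standard_normal((m, m)); R = Rh.T @ Rh + np.eye(m)  # SPD control weight

M = np.linalg.solve(A, B)                     # M = A^{-1} B
g = np.linalg.solve(A, f)                     # g = A^{-1} f

# Closed-form optimal control coefficients: (M^T Q M + R) b = M^T Q g
b = np.linalg.solve(M.T @ Q @ M + R, M.T @ Q @ g)
a = g - M @ b                                 # corresponding state coefficients

# Stationarity check: gradient of J = a^T Q a + b^T R b vanishes at the optimum
grad = -2 * M.T @ Q @ (g - M @ b) + 2 * R @ b
print(np.linalg.norm(grad) < 1e-8)
```

The key point mirrored by the sketch is that no differential Riccati equation is integrated: the optimal gains follow from one linear algebraic solve in the expansion-coefficient space.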