Most distributed optimization methods used in distributed model predictive control (DMPC) are gradient-based. Gradient-based optimization algorithms are known to have low per-iteration complexity, but the number of iterations needed to achieve satisfactory accuracy can be large. This is undesirable for distributed optimization in DMPC. Rather, the number of iterations should be kept low to reduce communication requirements, even if the complexity within each iteration is significant. By incorporating Hessian information into a distributed accelerated gradient method in a well-defined manner, we significantly reduce the number of iterations needed to achieve satisfactory accuracy in the solutions, compared to distributed methods that are purely gradient-based. Further, we provide convergence rate results and iteration complexity bounds for the developed algorithm.
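To illustrate the underlying idea, the following is a minimal sketch, not the paper's distributed algorithm: an accelerated gradient method on a toy quadratic problem, where scaling the gradient step by a block-diagonal approximation of the Hessian stands in for the locally computable Hessian information each subsystem could use. The problem data, the block partition, and the function name `accelerated_gradient` are all illustrative assumptions.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x'Hx - b'x, so grad f(x) = Hx - b.
# This is an assumed example problem, not one from the paper.
rng = np.random.default_rng(0)
n, block = 40, 8                       # 5 hypothetical "subsystems" of 8 variables
A = rng.standard_normal((n, n))
H = A.T @ A + np.eye(n)                # positive-definite Hessian
b = rng.standard_normal(n)
x_star = np.linalg.solve(H, b)         # exact minimizer, for the stopping test

def accelerated_gradient(M, iters=500, tol=1e-8):
    """Accelerated gradient with a fixed scaling matrix M approximating H."""
    L_chol = np.linalg.cholesky(M)
    Li = np.linalg.inv(L_chol)
    eigs = np.linalg.eigvalsh(Li @ H @ Li.T)   # spectrum of M^{-1} H
    L_max, mu = eigs[-1], eigs[0]
    beta = (np.sqrt(L_max / mu) - 1) / (np.sqrt(L_max / mu) + 1)  # momentum
    x = x_prev = np.zeros(n)
    for k in range(iters):
        y = x + beta * (x - x_prev)            # momentum extrapolation
        g = H @ y - b                          # gradient at extrapolated point
        x_prev, x = x, y - np.linalg.solve(M, g) / L_max  # Hessian-scaled step
        if np.linalg.norm(x - x_star) < tol:
            return k + 1                       # iterations to reach tolerance
    return iters

# Keep only the diagonal Hessian blocks, mimicking per-subsystem information.
M_block = np.zeros_like(H)
for i in range(0, n, block):
    M_block[i:i+block, i:i+block] = H[i:i+block, i:i+block]

print("iterations, plain (M = I):    ", accelerated_gradient(np.eye(n)))
print("iterations, block Hessian M:  ", accelerated_gradient(M_block))
```

The scaled iteration is accelerated gradient in the metric induced by M: the momentum and step-size parameters are set from the condition number of M^{-1}H rather than of H itself, which is what reduces the iteration count when M captures the dominant Hessian structure.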