We present a 3D reconstruction algorithm designed to support various autonomous vehicle navigation applications. The algorithm reconstructs a scene using only a single moving camera: video frames captured at different points in time allow us to determine relative depths in the scene. The original reconstruction process computed a point cloud from feature matching and depth triangulation. In an improved version of the algorithm, we utilize optical flow features to create an extremely dense representation model; although dense, this model is hindered by its low disparity resolution. A third algorithmic modification introduces a preprocessing step of nonlinear super resolution. With this addition, the accuracy and quantity of features are significantly increased, since the number of features is directly proportional to the resolution and high-frequency content of the input images. Our final contribution, a set of additional pre- and post-processing steps, filters noise points and mismatched features, completing the presentation of our Dense Point-cloud Representation (DPR) technique. We measure the success of DPR by evaluating the visual appeal, density, usability, and computational expense of the reconstruction technique, and compare it with two state-of-the-art techniques.
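The depth triangulation mentioned above can be illustrated with a minimal sketch. The linear (DLT) triangulation below and the example camera matrices are illustrative assumptions, not the paper's exact pipeline: given projection matrices for two frames of the moving camera and one matched feature, it recovers the 3D point.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature pair.

    P1, P2: 3x4 camera projection matrices for the two frames.
    x1, x2: (u, v) pixel coordinates of the matched feature.
    Returns the 3D point in the world frame.
    """
    # Each view contributes two rows to the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical setup: the camera translates 1 unit along x between frames.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.5, -0.2, 4.0])  # a scene point 4 units ahead
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

Running each matched feature through such a triangulation yields the point cloud; in practice the matches come from a feature detector (sparse version) or optical flow (dense version), and noisy or mismatched correspondences are what the filtering steps are designed to remove.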