In this paper, we present an algorithm for haptically rendering point clouds without first building a corresponding polygonal mesh. Our algorithm deduces the necessary surface information, which is inherently unavailable in point clouds, by embedding neighborhood knowledge in each surface point and by using bounding boxes to imply surfaces across the voids between neighboring points in the cloud. Collision detection in our approach reduces to checking line segments (representing the motion of the haptic probe) against these bounding boxes, while force response (3-DOF) is based on our adaptation of the standard god-object/proxy method to point clouds. This adaptation infers the required constraint surfaces from the cloud's surface points by exploiting their knowledge of their neighbors. Our algorithm has a runtime that is highly insensitive to (though not independent of) the complexity of the point cloud, and it is designed to relay all the coordinate information in the point cloud to the extent allowed by the haptic device's resolution.
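The collision-detection step described above checks the segment swept by the haptic probe between updates against axis-aligned bounding boxes. As a minimal, hedged sketch (not the paper's actual implementation; the box construction from point neighborhoods and the function names are our own illustration), a standard slab test can decide whether such a segment intersects a box:

```python
def segment_hits_aabb(p0, p1, lo, hi, eps=1e-12):
    """Slab test: does the segment from p0 to p1 intersect the
    axis-aligned box with min corner lo and max corner hi?
    Points are (x, y, z) tuples; returns True/False."""
    t_min, t_max = 0.0, 1.0  # parametric extent of the segment
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < eps:
            # Segment is parallel to this slab; it must already lie inside it.
            if p0[a] < lo[a] or p0[a] > hi[a]:
                return False
        else:
            # Parametric entry/exit times for this slab.
            t0 = (lo[a] - p0[a]) / d
            t1 = (hi[a] - p0[a]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_min = max(t_min, t0)
            t_max = min(t_max, t1)
            if t_min > t_max:  # intervals no longer overlap: miss
                return False
    return True

# Probe motion crossing a unit box implied between neighboring points:
print(segment_hits_aabb((-1, 0.5, 0.5), (2, 0.5, 0.5), (0, 0, 0), (1, 1, 1)))  # True
print(segment_hits_aabb((-1, 2, 2), (2, 2, 2), (0, 0, 0), (1, 1, 1)))          # False
```

In a full renderer, this test would run against the set of small boxes spanning the gaps between neighboring cloud points, typically pruned by a spatial hierarchy so runtime stays nearly independent of cloud size.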