We propose a framework to reconstruct complete object trajectories across the fields of view of multiple semi-overlapping cameras. The framework comprises three main steps: local trajectory and object metadata extraction; trajectory transformation onto a common virtual top view of the scene (ground plane); and global trajectory reconstruction. Trajectory metadata are extracted via object detection and tracking in each sensor (partial view), followed by a homographic projection onto a common ground plane. We then associate the projections generated by the same moving object and refine them by interpolating gaps caused by detection inaccuracies. Finally, we link the resulting trajectory fragments to reconstruct full traces across the entire camera network. We demonstrate the proposed algorithm on a challenging sports scenario (a football match), where objects (players) move in close proximity and thus generate ambiguous observations.
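The projection step can be sketched as follows: each camera's image point is mapped to the common ground plane by a 3x3 homography. This is a minimal illustration, not the paper's implementation; the homography values and point coordinates below are hypothetical.

```python
def apply_homography(H, point):
    """Project an image point (u, v) onto the ground plane via a 3x3 homography H.

    The point is lifted to homogeneous coordinates, multiplied by H, and
    de-homogenized by dividing by the third component.
    """
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

# Hypothetical homography for one camera view (values are illustrative only).
H = [[1.0, 0.0, 5.0],
     [0.0, 2.0, -3.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (10.0, 20.0)))  # -> (15.0, 37.0)
```

In practice one such homography would be estimated per camera (e.g. from field markings), so that tracks from all partial views land in a single ground-plane coordinate frame before association.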