This paper examines the use of three background subtraction algorithms -- Mixture of Gaussians (MOG), Visual Background Extractor (ViBe), and Pixel-Based Adaptive Segmentation (PBAS) -- to detect events of interest within uncontrolled outdoor avian nesting video for the Wildlife@Home project. Many computer vision techniques fail in this domain due to the low frame rates and resolution of battery-powered surveillance cameras, combined with the cryptic coloration (camouflage) of the animals. Modifications to ViBe and PBAS are presented that provide more robust results on this challenging video and address issues caused by the cryptic coloration of the species the project monitors. These algorithms were run on over 250 hours of video, and their output was compared to human observations generated by Wildlife@Home's project scientists and volunteer citizen scientists. All three algorithms detect events accurately; however, the modified versions of ViBe and PBAS produce far fewer false positives. This is especially true for the Interior Least Tern (Sternula antillarum) and Piping Plover (Charadrius melodus) video, which does not suffer from as much moving vegetation as the Sharp-tailed Grouse (Tympanuchus phasianellus) footage. These results provide initial justification for utilizing Wildlife@Home's 2,000+ volunteered computers to analyze the project's 85,000 hours of avian nesting video, so that this information can be integrated into the Wildlife@Home user interface. Further, the videos and human observations used to test these algorithms have been made available as part of Wildlife@Home's first data release, to encourage future study by computer vision researchers.
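
To make the general approach concrete, the sketch below shows a minimal Mixture-of-Gaussians background subtraction loop using OpenCV. This is an illustrative assumption, not the paper's implementation or its modified ViBe/PBAS variants; the video file name, parameter values, and the simple foreground-pixel event heuristic are all hypothetical placeholders.

```python
import cv2

# Illustrative sketch: flag frames where the MOG2 foreground mask is large,
# as a stand-in for "event of interest" detection in nesting video.
cap = cv2.VideoCapture("nest_video.avi")  # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,       # number of frames used to model the background
    varThreshold=16,   # squared Mahalanobis distance threshold for foreground
    detectShadows=False,
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # per-pixel foreground/background mask
    # Hypothetical heuristic: >1% foreground pixels suggests a possible event.
    if cv2.countNonZero(mask) > 0.01 * mask.size:
        print("possible event at frame", int(cap.get(cv2.CAP_PROP_POS_FRAMES)))

cap.release()
```

A per-frame threshold like this is deliberately simple; with cryptically colored birds and moving vegetation, the foreground mask is noisy, which is the motivation for the more robust modified algorithms the paper presents.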