This paper presents an overview of a novel multimodal system under development at UC San Diego for vehicle detection and traffic-flow analysis. A Distributed Multimodal Array (DiMMA) framework is presented for the sensory data acquisition, processing, analysis, fusion, and "active" control mechanisms needed to recognize objects, events, and activities that have multimodal signatures. The sensing modalities currently under investigation include video, audio, seismic, magnetic, and passive infrared. Feature extraction and data fusion techniques are being studied to improve robustness and to characterize the advantages and disadvantages of each sensing modality. Preliminary results from this rapidly deployable system are discussed, along with possible future extensions, including laser range scanners, geophones, pneumatic road tubes, and traditional inductive loops.