The presented asynchronous, time-based CMOS dynamic vision and image sensor is based on a QVGA (304×240) array of fully autonomous pixels combining event-based change detection with PWM imaging circuitry. Exposure measurements are initiated and carried out locally by each individual pixel that detects a brightness change in its field of view. Pixels therefore do not rely on external timing signals; they independently and asynchronously request access to an (asynchronously arbitrated) output channel when they have new illumination values to communicate. Communication uses the address-event representation (AER); gray levels are encoded in inter-event intervals. Pixels that are not visually stimulated produce no output. This pixel-autonomous, massively parallel operation ideally results in lossless video compression through complete temporal-redundancy suppression at the focal plane; compression factors depend on scene activity. Owing to the time-based encoding of the illumination information, a very high intra-scene dynamic range is achieved: 143 dB static and 125 dB at 30 fps equivalent temporal resolution. A novel time-domain correlated double sampling (TCDS) method yields an array FPN of <0.25%. The SNR is >56 dB (9.3 bit) for illuminance >10 lx.
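The PWM gray-level encoding described above can be sketched as follows. This is a minimal illustration, not the sensor's actual interface: it assumes each pixel emits a pair of exposure events whose inter-event interval is inversely proportional to pixel illumination, so a gray value is recovered as a constant divided by that interval. The function name `decode_gray` and the scaling constant `k` are illustrative assumptions.

```python
def decode_gray(t_start: float, t_end: float, k: float = 1.0) -> float:
    """Recover a gray level from a PWM inter-event interval.

    t_start, t_end: timestamps (seconds) of the two exposure events
    emitted by one pixel; brighter pixels integrate faster, so the
    interval is shorter and the recovered value is larger.
    k is an illustrative scaling constant (assumption).
    """
    dt = t_end - t_start
    if dt <= 0:
        raise ValueError("event timestamps must be strictly increasing")
    return k / dt

# A brightly lit pixel (1 ms interval) decodes to a higher value
# than a dim one (10 ms interval):
bright = decode_gray(0.000, 0.001)
dark = decode_gray(0.000, 0.010)
assert bright > dark
```

Because the encoding is time-based rather than voltage-based, the representable intensity range is limited only by the usable interval range, which is consistent with the wide intra-scene dynamic range reported above.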