Attention plays an important role in natural vision. Implementing realistic attentional processes in artificial vision systems could greatly improve their performance. However, existing models of attention do not adequately capture all of its complex effects on neural activity. In particular, existing models cannot reproduce recently reported effects such as shifting and scaling of receptive fields. Here we suggest that many of these effects arise naturally from feedback connections between visual areas (which redistribute top-down attentional modulation) and local, nonspecific short-range inhibition (which produces competition between stimuli that is automatically scaled to receptive field size). We show that a simple model with two reciprocally connected layers and short-range inhibition can generate many known effects of attention, including receptive field shift and resizing. Due to its conceptual simplicity, the model may be easily integrated into a broad range of computer vision systems.
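The mechanism described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a hypothetical two-layer toy model with assumed parameters (Gaussian layer-1 tuning, a layer-2 unit pooling with broad Gaussian weights, a multiplicative top-down gain centered on the attended location, and short-range divisive inhibition). It probes the layer-2 unit with single stimuli across space and estimates its receptive-field center, with and without attention directed off-center:

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian tuning curve."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def rf_center(probes, attend_mu=None):
    """Estimate the receptive-field center of a single layer-2 unit.

    Hypothetical sketch: layer-1 units have narrow Gaussian RFs; the
    layer-2 unit pools them with broad Gaussian weights; top-down
    feedback multiplies layer-1 gain near the attended location; and
    short-range divisive inhibition normalizes layer-1 activity locally.
    All parameter values are illustrative assumptions.
    """
    l1_centers = np.linspace(-10, 10, 81)        # layer-1 preferred positions
    l2_weights = gaussian(l1_centers, 0.0, 3.0)  # layer-2 unit pools around 0

    # Top-down feedback: multiplicative gain centered on the attended location
    if attend_mu is None:
        gain = np.ones_like(l1_centers)
    else:
        gain = 1.0 + 2.0 * gaussian(l1_centers, attend_mu, 2.0)

    inhib_kernel = gaussian(np.arange(-5, 6), 0.0, 2.0)  # short-range pool
    responses = []
    for probe in probes:
        drive = gaussian(l1_centers, probe, 1.0) * gain
        # Short-range inhibition: divisive normalization by local activity
        inhib = np.convolve(drive, inhib_kernel, mode="same")
        l1 = drive / (1.0 + inhib)
        responses.append(float(l2_weights @ l1))
    responses = np.array(responses)

    # RF center = response-weighted centroid of probe positions
    return float((probes * responses).sum() / responses.sum())

probes = np.linspace(-10, 10, 201)
base = rf_center(probes)                  # no attention: RF centered near 0
shifted = rf_center(probes, attend_mu=4)  # attention at +4 pulls the RF
```

Because the inhibitory pool only partially normalizes the attentional gain (the `1.0 +` term in the denominator), the layer-2 unit's centroid is drawn toward the attended location, reproducing the receptive-field shift; widening or narrowing the inhibition pool correspondingly rescales the effective RF.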