FPGA devices in reconfigurable computers (RCs) allow datapaths, memories, and processing elements (PEs) to be customized to achieve highly efficient algorithm implementations. However, the maximum speedup on RCs is bounded by the bandwidth available between the microprocessors (µPs) and the FPGA hardware accelerators. In this paper, an image processing architecture is presented that fully exploits this bandwidth to achieve the maximum possible speedup. The architecture can implement any convolution operation between an image and a kernel, and comprises four fully pipelined components: a line buffer, a data window, an array of PEs, and a data-concatenating block. Multiple image processing algorithms, such as digital filters, edge detectors, and image transforms, have been successfully implemented on this architecture. In all cases, the maximum throughput is upper-bounded by the µP-FPGA I/O bandwidth, regardless of the complexity of the algorithm. This end-to-end throughput has been measured to be 1.2 GB/s on the Cray XD1 and 2.1 GB/s on the SGI RC100.
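The pipeline described above can be sketched in software to make the data flow concrete: a line buffer retains the most recent K image rows, a K×K data window slides across them, and the PE array performs one multiply-accumulate per kernel tap. This is an illustrative sketch only; the names (`stream_convolve`, `line_buffer`) are not from the paper, and the kernel is assumed to be pre-flipped (so the loop computes a correlation).

```python
from collections import deque

def stream_convolve(image, kernel):
    """2D 'valid' convolution of a row-streamed image, structured the way
    a pipelined FPGA datapath would process it: line buffer -> data
    window -> PE array. Assumes the kernel is already flipped."""
    K = len(kernel)
    W = len(image[0])
    out = []
    line_buffer = deque(maxlen=K)              # holds the last K image rows
    for row in image:
        line_buffer.append(row)
        if len(line_buffer) < K:
            continue                           # pipeline still filling
        out_row = []
        for x in range(W - K + 1):
            # K x K data window extracted from the line buffer
            window = [line_buffer[i][x:x + K] for i in range(K)]
            # PE array: one multiply-accumulate per kernel tap
            acc = sum(window[i][j] * kernel[i][j]
                      for i in range(K) for j in range(K))
            out_row.append(acc)
        out.append(out_row)
    return out
```

Because each output pixel needs only the current window, the structure streams at one pixel per step once the line buffer is full, which is why the architecture's throughput is limited by I/O bandwidth rather than by kernel complexity.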