A significant challenge in creating machines with artificial vision is designing systems that can process visual information as efficiently as the brain. To address this challenge, we identify key algorithms that model the processes of attention and recognition in the mammalian visual cortex. This paper presents Cover, an FPGA framework for generating systems that can potentially emulate the visual cortex. We have designed accelerators for cortical models of attention and recognition and integrated them to realize an end-to-end attention-recognition system. Evaluation on a Dinigroup multi-FPGA platform shows high performance and accuracy for both attention and recognition, with speedups over existing CPU, GPU, and FPGA implementations. Results show that our end-to-end system, which emulates the cortex, achieves near real-time speeds on high-resolution images. The system is applicable to many artificial vision tasks, such as augmented reality and autonomous vehicle navigation.