Simple motion detection by background subtraction

This is a project (code in the GitHub repository) that my team has been working on in EC551 Advanced Digital Design with Verilog and FPGA.

We only managed to build a prototype, which, due to the time limit, was what we actually submitted as our final project for that class. Much more improvement is needed, so we are still planning to get it onto a proper, decent track.

What we have done so far can be watched here:

Introduction

In a nutshell, this project detects moving objects in the camera view using a weighted-average background method, implemented on an FPGA board. The devices we were planning to use are:

And the devices we finally used are:

The purpose of this post is mainly to write down the failures we went through, in the hope that they might be helpful to others. The implementation itself, which I won't dwell on here, is simple and will vary depending on the approach and equipment.

Roadmap

Specification and module distribution

We planned to use the Nexys board, simply because it was the only board we had in class. It is not a comfortable choice by any means: the block memory in its FPGA is merely 574 Kb or so, which is hardly enough to store an image (for example, a 640×480 grayscale frame at 8 bits per pixel already needs about 2.4 Mb). The formula we need is

background = (127 * background + new frame) / 128  

which means that we must store at least one entire frame if the update is done pixel by pixel. That is manageable if we downsample or cut the resolution. The board also has SRAM, but we did not use it.
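As a rough illustration of how this update maps to hardware, here is a minimal Verilog sketch of the per-pixel weighted-average update. The module name, port widths, and the motion threshold are my assumptions for the sketch, not the code we actually submitted.

```verilog
// Minimal sketch (assumed names/widths): update one background pixel per clock.
// Implements background_new = (127 * background_old + pixel_in) / 128,
// i.e. an IIR filter with alpha = 1/128, using a shift instead of a divide.
module bg_update #(
    parameter PIXEL_W = 8
) (
    input  wire                 clk,
    input  wire                 valid_in,   // new camera pixel available
    input  wire [PIXEL_W-1:0]   pixel_in,   // current frame pixel
    input  wire [PIXEL_W-1:0]   bg_old,     // background pixel read from frame buffer
    output reg  [PIXEL_W-1:0]   bg_new,     // updated background, written back
    output reg                  motion      // 1 if |pixel - background| exceeds threshold
);
    localparam THRESHOLD = 8'd20;           // assumed motion threshold

    wire [PIXEL_W+7:0] weighted = (bg_old * 8'd127) + pixel_in;  // 127*bg + new
    wire [PIXEL_W-1:0] updated  = weighted >> 7;                 // divide by 128
    wire [PIXEL_W-1:0] diff     = (pixel_in > bg_old) ? (pixel_in - bg_old)
                                                      : (bg_old - pixel_in);

    always @(posedge clk) begin
        if (valid_in) begin
            bg_new <= updated;
            motion <= (diff > THRESHOLD);
        end
    end
endmodule
```

The divide by 128 becomes a 7-bit right shift, and the motion flag is just a thresholded absolute difference. The real cost is that every pixel needs a read of the old background and a write of the new one to a frame buffer, which is exactly where the memory constraint above comes from.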

The reason we turned to the Atlys board is that we failed to make the VmodCAM work on the Nexys board.

Working on Camera

In the early phase, when we were working like horses trying to get that camera running, we had already noticed that this board might not get along well with the camera, even though they are from the same company.

The company says on their website that the VHDCI interface on the Nexys is typically used for output rather than for streaming data in from a camera. The only suggestion they give is to switch to the Atlys board, for which they also provide support code for the camera in VHDL.

We kept working on it anyway, since we didn't want to surrender without a fight and we didn't want to buy the Atlys board. But after several desperate days of attempts, it still did not stream any data. The control code we used mainly consists of two state machines: one switches among IDLE, POWER_UP, CONFIGURATION, and STREAMING; the other is a small one used only during the CONFIGURATION stage to transmit I2C commands that write the camera's registers.
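For reference, here is a minimal sketch of what such a top-level controller can look like in Verilog. The state names follow the post, but the handshake signals (power_good, cfg_done, i2c_start) are placeholders I made up for the sketch, not our actual interface.

```verilog
// Sketch of the top-level camera control FSM (signal names are assumptions).
module cam_ctrl (
    input  wire clk,
    input  wire rst,
    input  wire power_good,  // camera supplies stable
    input  wire cfg_done,    // I2C sub-FSM finished writing all registers
    output reg  i2c_start,   // kick off the small I2C state machine
    output reg  streaming    // high once the camera should be sending pixels
);
    localparam IDLE          = 2'd0,
               POWER_UP      = 2'd1,
               CONFIGURATION = 2'd2,
               STREAMING     = 2'd3;

    reg [1:0] state;

    always @(posedge clk) begin
        if (rst) begin
            state     <= IDLE;
            i2c_start <= 1'b0;
            streaming <= 1'b0;
        end else begin
            case (state)
                IDLE:          state <= POWER_UP;
                POWER_UP:      if (power_good) begin
                                   state     <= CONFIGURATION;
                                   i2c_start <= 1'b1;   // start register writes over I2C
                               end
                CONFIGURATION: begin
                                   i2c_start <= 1'b0;
                                   if (cfg_done) begin
                                       state     <= STREAMING;
                                       streaming <= 1'b1; // expect pixel clock/data from camera
                                   end
                               end
                STREAMING:     ;   // stay here; frame capture happens elsewhere
                default:       state <= IDLE;
            endcase
        end
    end
endmodule
```

The second, smaller FSM would sit behind the i2c_start/cfg_done handshake and serialize the register writes to the camera over I2C during the CONFIGURATION stage.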

The best result we got up to that point was that, when we entered the CONFIGURATION stage and started transmitting, the LED we had set up to indicate the pixel clock blinked once or twice. That might mean the camera did respond to us, but it then stopped, and the pixel clock, which the camera should provide continuously, never ran.

So maybe it is true that this board is just not Mr. Right. I don't know. I would be more than excited to hear of any successful implementation.

Then we bought the Atlys board anyway. By adapting the VHDL code from their website, we finally got the monitor to display what the camera saw. One caveat: this board does not have a VGA interface, so we had to use an HDMI monitor.

TBC..

22 December 2012