
03 January 2017

Detect object movement

    Here is our small surveillance camera application. It uses the camera of the device. The user points the device at a scene and does not have to watch the camera view all the time: when something in the image changes (there is activity in the area the camera is observing), the application draws a rectangle on the screen around the region that has changed and plays a sound. A minimal sketch of this detection step is shown after the library list below. The main features are:
    - feature to select what source to use - camera, video file, or sequence of images.
    - feature to ignore movement caused by the camera itself moving - image stabilization.
    - feature to play a sound when a difference is detected.

    The libraries that we considered for this project are listed below. Initially we wanted to use the Marvin framework, but it uses Java packages that are not available on Android, and porting the framework to Android would be a project of its own. That is why our next choice was the Catalano Framework, which can be used in an Android application without any modifications.

Marvin Image Processing Framework

Processing

JMagick - Java wrapper for ImageMagick API

Catalano Framework

OpenCV - (Open Source Computer Vision)
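
    As a rough illustration of the detection step, here is a minimal sketch using the Catalano Framework's FastBitmap, Grayscale, Difference, and Threshold classes. The Android camera plumbing, the drawing of the rectangle, and the sound playback are left out, and the threshold value is an arbitrary noise cutoff; class and method names follow the Catalano API as best we can tell, so treat this as a sketch of the technique rather than the exact code of the project:

import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.Difference;
import Catalano.Imaging.Filters.Grayscale;
import Catalano.Imaging.Filters.Threshold;

public class MotionDetector {

    // Returns the bounding box {minX, minY, maxX, maxY} of the changed
    // region between two frames, or null if nothing changed.
    public int[] detect(FastBitmap previous, FastBitmap current) {
        // Work on grayscale copies so we compare intensity only.
        FastBitmap prevGray = new FastBitmap(previous);
        FastBitmap currGray = new FastBitmap(current);
        new Grayscale().applyInPlace(prevGray);
        new Grayscale().applyInPlace(currGray);

        // Per-pixel difference of the two frames; the result stays in currGray.
        Difference difference = new Difference(prevGray);
        difference.applyInPlace(currGray);

        // Keep only differences above a noise threshold (25 is an arbitrary choice).
        new Threshold(25).applyInPlace(currGray);

        // Bounding box of all pixels that survived the threshold.
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int row = 0; row < currGray.getHeight(); row++) {
            for (int col = 0; col < currGray.getWidth(); col++) {
                // Catalano's getGray takes the row first, then the column.
                if (currGray.getGray(row, col) > 0) {
                    if (col < minX) minX = col;
                    if (row < minY) minY = row;
                    if (col > maxX) maxX = col;
                    if (row > maxY) maxY = row;
                }
            }
        }
        return (maxX < 0) ? null : new int[]{minX, minY, maxX, maxY};
    }
}

When the returned box is not null, it is the rectangle we draw on the screen, and the cue to play the sound.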

    For the image stabilization feature we considered using phase correlation or convolution, but decided to use a filter we were already using, provided by the Catalano Framework - Catalano.Imaging.Filters.Difference.
We thought that it was using convolution to calculate the difference, but after looking at the source it turns out it is not; it simply computes a per-pixel difference of the two images. If it were using a convolution kernel, it would probably produce better results in finding differences.
Instead of performing the image stabilization over the whole image, we use a subset of it. We take a patch of the previous frame and a slightly larger region of the current frame, and calculate the difference at increasing offsets; the offset with the smallest difference tells us how far the camera has shifted. Had we used phase correlation, we would have done much the same in the frequency domain, looking for the shift at which the frequencies match the most.
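
The offset search is essentially a small block-matching loop. The sketch below works on plain grayscale arrays to keep it self-contained; the patch position, search radius, and sum-of-absolute-differences metric are illustrative choices on our part, not necessarily the exact parameters of the project:

public class OffsetFinder {

    // Finds the (dx, dy) shift of the current frame relative to the
    // previous one by minimizing the sum of absolute differences (SAD)
    // between a central patch of the previous frame and shifted patches
    // of the current frame. Frames are grayscale, indexed [row][col].
    // The caller must pick patchSize and searchRadius so that the
    // shifted patch always stays inside the frame.
    public static int[] findShift(int[][] prev, int[][] curr,
                                  int patchSize, int searchRadius) {
        int top = prev.length / 2 - patchSize / 2;
        int left = prev[0].length / 2 - patchSize / 2;

        int bestDx = 0, bestDy = 0;
        long bestSad = Long.MAX_VALUE;

        // Try every offset inside the search window and keep the best one.
        for (int dy = -searchRadius; dy <= searchRadius; dy++) {
            for (int dx = -searchRadius; dx <= searchRadius; dx++) {
                long sad = 0;
                for (int r = 0; r < patchSize; r++) {
                    for (int c = 0; c < patchSize; c++) {
                        int a = prev[top + r][left + c];
                        int b = curr[top + r + dy][left + c + dx];
                        sad += Math.abs(a - b);
                    }
                }
                if (sad < bestSad) {
                    bestSad = sad;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
        }
        return new int[]{bestDx, bestDy};
    }
}

Once the shift with the smallest difference is known, the current frame can be translated by the opposite amount before running the Difference filter, so that camera shake is not reported as movement.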
When there is movement in different parts of the screen - probably different objects - the whole area is reported as one region rather than as individual objects, because we decided that splitting the areas is not that important. To add this functionality, we would just have to calculate the changed pixels and group them into clusters, so that separate moving objects can be told apart on the screen.
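
If we wanted to separate the objects, one simple way to do the grouping is a flood fill over the thresholded difference image: every group of touching changed pixels becomes its own cluster with its own bounding box. A sketch over a plain boolean mask (the types and names here are ours, for illustration only):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class MotionClusters {

    // Groups touching "changed" pixels into clusters and returns one
    // bounding box {minX, minY, maxX, maxY} per cluster.
    public static List<int[]> findClusters(boolean[][] changed) {
        int rows = changed.length, cols = changed[0].length;
        boolean[][] visited = new boolean[rows][cols];
        List<int[]> boxes = new ArrayList<>();

        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (!changed[r][c] || visited[r][c]) continue;

                // Flood fill one connected component, tracking its extent.
                int minX = c, minY = r, maxX = c, maxY = r;
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{r, c});
                visited[r][c] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    minX = Math.min(minX, p[1]); maxX = Math.max(maxX, p[1]);
                    minY = Math.min(minY, p[0]); maxY = Math.max(maxY, p[0]);
                    int[][] neighbors = {
                        {p[0] - 1, p[1]}, {p[0] + 1, p[1]},
                        {p[0], p[1] - 1}, {p[0], p[1] + 1}
                    };
                    for (int[] n : neighbors) {
                        if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                                && changed[n[0]][n[1]] && !visited[n[0]][n[1]]) {
                            visited[n[0]][n[1]] = true;
                            stack.push(n);
                        }
                    }
                }
                boxes.add(new int[]{minX, minY, maxX, maxY});
            }
        }
        return boxes;
    }
}

Each box in the returned list could then be drawn as its own rectangle, one per moving object.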
 
Repository of the project at GitHub:
https://github.com/ektodorov/sentinel