Tracking Colored Pixels in Videos

This work was adapted from code written by Dr. Adrian Rosebrock (https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/).

ColorTrack tracks the positions of objects in a video stream, with each object specified by its color. The code can be downloaded below under Files. Each object is identified by a circle drawn around its largest diameter, and the tracked position corresponds to the center of this circle. Multiple colors can be tracked simultaneously, but each object must be a different, distinct color. The positions can be saved to a .csv file for further analysis. The program is written in Python 3 and requires NumPy, Matplotlib, and OpenCV 3.4; later versions of OpenCV are incompatible with this program. To install specific package versions, Anaconda (https://www.anaconda.com/distribution/) is recommended. If you do not already have Python installed, Anaconda will handle that as well.
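The ball-tracking approach this program was adapted from masks a color range in HSV space and fits the smallest enclosing circle to the largest masked contour. The snippet below is a minimal sketch of that idea, not the full color_track.py; the HSV bounds for yellow and the window handling are illustrative assumptions.

import cv2
import numpy as np

# Illustrative HSV bounds for "yellow"; the ranges used by color_track.py may differ.
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([35, 255, 255])

cap = cv2.VideoCapture("tennis_swing.mp4")
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    # OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x drops the first
    # value, which is likely one reason the program is pinned to OpenCV 3.4.
    _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(largest)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
        print(x, y)  # tracked position = circle center, in pixels
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()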

ColorTrack should be run from the command line. You specify a video file to stream with the flag '-i' followed by the path to the file; if this flag is omitted, the program instead tries to stream video from an available webcam. The flag '-c' followed by any of these characters controls which colors are tracked:

r: red
o: orange
y: yellow
g: green
b: blue
k: black
w: white
e: grey

A .csv output file is specified with '-o' followed by the name of the file; if the file already exists, it will be overwritten. Example usage to stream from a file named tennis_swing.mp4, output to a new file called tennis_output.csv, and track the colors yellow and green:

python.exe color_track.py -i tennis_swing.mp4 -o tennis_output.csv -c y g

Other features, such as cropping or flipping the video stream, displaying a graph of the tracked positions, or real-time graphing, can also be specified with flags. For a list of these, use the help flag, '-h'.
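For illustration, the command-line interface described above could be parsed with Python's argparse along the lines of the sketch below. The short flags and color codes come from the text, but the long option names, defaults, and help strings are assumptions, not the actual argument handling in color_track.py.

import argparse

# Color codes listed above; long option names and defaults are assumptions.
COLOR_CODES = {"r": "red", "o": "orange", "y": "yellow", "g": "green",
               "b": "blue", "k": "black", "w": "white", "e": "grey"}

parser = argparse.ArgumentParser(description="Track colored objects in a video stream.")
parser.add_argument("-i", "--input", help="path to a video file (omit to use a webcam)")
parser.add_argument("-o", "--output", help="path to a .csv file for the tracked positions")
parser.add_argument("-c", "--colors", nargs="+", choices=list(COLOR_CODES),
                    default=["y"], help="one-letter codes of the colors to track")
args = parser.parse_args()

print("Tracking colors:", [COLOR_CODES[c] for c in args.colors])

Parsing the example command above with this sketch would give args.input == 'tennis_swing.mp4', args.output == 'tennis_output.csv', and args.colors == ['y', 'g'].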

A kinematic analysis of the tracked positions can be performed with the .csv position data. The steps of the analysis are shown in the attached example file swing.xlsx, which was converted from an output .csv file. The data were collected with a 240 fps camera: a yellow band was tied to a tennis player's elbow, and the position of that band was tracked over the course of a single stationary swing. We first list the time at which each video frame occurs and the associated elbow positions along both the x and y axes. Then we take the derivative to calculate velocity by dividing the change in position between frames by the change in time. Similarly, we can determine the elbow's acceleration in pixels by dividing the change in velocity between frames by the change in time. This results in the following (only the kinematics for the x axis are shown here):

Time (s)       Position in x (px)   Velocity in x (px/s)   Acceleration in x (px/s/s)
0              518
0.004166667    519                  240
0.008333333    516                  -720                   -230400
0.0125         516                  0                      172800
0.016666667    519                  720                    172800
0.020833333    517                  -480                   -288000

Notice that a value is lost from the velocity and two values are lost from the acceleration when taking the derivative. Also, if the physical dimensions of any object in the video are known, the pixel units above can be converted into something more meaningful.
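The same finite differences can be reproduced with NumPy. The sketch below uses the frame times and x positions from the table above; the column layout of an actual color_track .csv output may differ.

import numpy as np

t = np.arange(6) / 240.0                        # frame times at 240 fps (s)
x = np.array([518, 519, 516, 516, 519, 517])    # elbow x position (px)

vx = np.diff(x) / np.diff(t)       # velocity: one value shorter than x
ax = np.diff(vx) / np.diff(t[1:])  # acceleration: two values shorter than x

print(vx)  # 240, -720, 0, 720, -480 (px/s)
print(ax)  # -230400, 172800, 172800, -288000 (px/s/s)

# If the physical size of an object in the frame is known, pixel values can be
# rescaled, e.g. meters_per_pixel = known_length_m / known_length_px.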

Files

You can see the kinematics graphed in swing.xlsx. Download the code in color_track.zip.

References

Rosebrock, A. (2015). Ball Tracking with OpenCV. Retrieved November 1, 2018, from https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
