Thursday, May 15, 2014

Structured light camera calibration

Hello,

Things have been super crazy with work and finishing my thesis, so I've fallen behind on updates.  I wanted to get back to my discussion on structured light, since I wrote the intro but never finished the series.  I also want to commit to posting more often so I can showcase the cool work I've been doing.

So on to camera calibration.....

There are quite a few great tools out there that do camera calibration for you.  One example is the GML camera calibration toolbox, which can automatically detect grid corners and solve for an excellent calibration.  But I ran into an interesting issue with this toolbox that made it insufficient for structured light calibration.
When performing laser calibration and motor calibration, it is necessary to calculate the extrinsic transformation between the camera and the calibration pattern.  What I found with GML was that this extrinsic transformation was not stable.  I discovered this when I couldn't calibrate motor motion using extrinsic values extracted with GML: small, constant motor displacements produced large variations in the recovered extrinsic parameters.  When I instead selected the checkerboard corners manually with the MATLAB camera calibration toolbox, the extrinsic parameters were much more consistent.  My guess is that the difference comes down to the optimization routines used in GML.
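A quick way to see this kind of instability is to compare the translation recovered between consecutive calibration poses against the known constant motor step.  Here's a minimal sketch of that check in Python (pure standard library; the pose values are made up to illustrate a 5 mm motor step, not real calibration output):

```python
import math

def translation_step(t0, t1):
    """Euclidean distance between two extrinsic translation vectors (mm)."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(t0, t1)))

def step_stats(translations):
    """Mean and standard deviation of consecutive pose displacements.

    A stable extrinsic solution should give a small std relative to the
    mean when the motor moves by a constant amount between captures.
    """
    steps = [translation_step(a, b)
             for a, b in zip(translations, translations[1:])]
    mean = sum(steps) / len(steps)
    var = sum((s - mean) ** 2 for s in steps) / len(steps)
    return mean, math.sqrt(var)

# Hypothetical camera-to-pattern translations (mm) for a 5 mm motor step.
poses = [(0.0, 0.0, 100.0), (0.1, 0.0, 105.0),
         (0.0, 0.1, 110.1), (0.1, 0.0, 114.9)]
mean, std = step_stats(poses)
```

With GML-style instability the standard deviation would be a large fraction of the mean step; with a good calibration it should be small.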

So with that being said, the right combination of MATLAB toolboxes and some custom code can make a completely automated calibration routine.  Today we'll talk about camera calibration.

For just camera calibration, we can use the AMCC toolbox.  This is a modification of the MATLAB camera calibration toolbox that adds automatic checkerboard extraction, which I consider a must since manually selecting checkerboard corners is a huge pain.

First, print out a checkerboard pattern and attach it to a rigid, flat surface.  I like to use the patterns provided in GML's install directory since they're pre-made.
Collect images of the pattern with your camera using a consistent naming convention.  I like to call them Left-*.jpg and Right-*.jpg when using a stereo setup, and just Image-*.jpg otherwise.
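Sticking to a strict naming convention pays off because the toolbox finds your images by pattern.  A tiny sketch of that kind of filtering in Python (the filenames here are hypothetical, just showing which ones the Image-*.jpg convention picks up):

```python
import fnmatch

def matching_images(filenames, base="Image-"):
    """Return the files that follow the 'base*.jpg' naming convention."""
    return sorted(n for n in filenames if fnmatch.fnmatch(n, base + "*.jpg"))

# Hypothetical folder listing: two calibration images, two strays.
names = ["Image-01.jpg", "Image-02.jpg", "notes.txt", "Left-01.jpg"]
calib = matching_images(names)
# Only the Image-*.jpg files survive the filter.
```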
The images should look similar to this (without the laser for now though)


Place about twenty images of the pattern in different poses into a folder.  Enter that folder in MATLAB and edit auto_mono_calibrator_efficient (provided by the AMCC toolbox).
Add / edit these lines at the top of auto_mono_calibrator_efficient.

dX = 18.7452;
dY = 18.7452;
nx_crnrs = 7;
ny_crnrs = 4;
proj_tol = 2.0;
format_image = 'jpg';
calib_name = 'Image-';

Here dX and dY are the measured sizes of the checkerboard squares in mm, nx_crnrs and ny_crnrs are the number of interior checkerboard corners along each axis, and proj_tol is the reprojection tolerance (in pixels) used when rejecting bad detections.  format_image is the image format, and calib_name is the base name of the images.
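The toolbox uses dX, dY, and the corner counts to build the 3-D model of the pattern: each interior corner sits at a multiple of the square size on the Z = 0 plane of the pattern's own coordinate frame.  A quick sketch of what those world coordinates look like (plain Python, using the values from the snippet above):

```python
def checkerboard_world_points(nx_crnrs, ny_crnrs, dX, dY):
    """World coordinates (mm) of the interior corners, on the Z = 0 plane."""
    return [(i * dX, j * dY, 0.0)
            for j in range(ny_crnrs)
            for i in range(nx_crnrs)]

# 7 x 4 interior corners with 18.7452 mm squares, as configured above.
pts = checkerboard_world_points(7, 4, 18.7452, 18.7452)
# 28 corners total; the farthest corner sits at (6*dX, 3*dY, 0).
```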

Now run auto_mono_calibrator_efficient and it should spit out the calibration data for this camera.
For a Logitech webcam running at 640x480, I got the following results:

Focal Length:          fc = [ 783.70794   771.22158 ] ± [ 8.81920   9.13420 ]
Principal point:       cc = [ 237.23567   270.25039 ] ± [ 12.64706   6.05165 ]
Skew:             alpha_c = [ 0.00000 ] ± [ 0.00000  ]   => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion:            kc = [ -0.06032   0.06316   -0.00330   -0.02552  0.00000 ] ± [ 0.02146   0.06905   0.00272   0.00393  0.00000 ]
Pixel error:          err = [ 0.20168   0.14721 ]
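For reference, fc, cc, and kc plug into the standard Bouguet/OpenCV pinhole model: a 3-D point in the camera frame is normalized by depth, warped by the radial and tangential distortion terms in kc = [k1 k2 p1 p2 k3], then scaled by the focal lengths and offset by the principal point.  Here's a sketch of that projection in Python using the numbers above (the 3-D point is arbitrary, chosen only so it lands inside the 640x480 frame):

```python
def project(point, fc, cc, kc):
    """Project a 3-D point (camera frame, mm) to pixels, Bouguet model."""
    X, Y, Z = point
    x, y = X / Z, Y / Z                       # normalized image coordinates
    k1, k2, p1, p2, k3 = kc                   # radial (k) and tangential (p)
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fc[0] * xd + cc[0], fc[1] * yd + cc[1]

# Calibration results from the webcam above.
fc = (783.70794, 771.22158)
cc = (237.23567, 270.25039)
kc = (-0.06032, 0.06316, -0.00330, -0.02552, 0.00000)

u, v = project((50.0, 30.0, 500.0), fc, cc, kc)
```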

Keep in mind the pixel error values really should fall between 0.1 and 1.  If they don't, something is very wrong.
Things that can go wrong include:
1) Blurry / poorly exposed images.
2) Incorrect input number of corners, or incorrect correspondences. (Checkerboard shouldn't be too skewed from the image sensor)
The AMCC toolbox is fairly robust, so it will likely reject bad images and still provide a good result.  And since it's automatic, there's no frustration in hand-selecting corners in 130 images only to find half of them are too blurry or dark (a special project involving NIR cameras).

Next I'll discuss how to calibrate cameras with OpenCV for those who don't have Matlab.

Monday, March 10, 2014

The start of my flexible computer vision program.

Hello all,

My apologies for not posting for a while, I've been incredibly busy with finishing my thesis.

I wanted to blog about the new computer vision program I've been working on.  This program will hopefully help me develop computer vision algorithms faster and it could be a good teaching tool to demonstrate how computer vision works.

This program is designed to give a flexible interface to OpenCV using Qt as the user interface.  My motivation is to create a program that lets me easily build a processing pipeline and then modify that pipeline's parameters.
Some key features of this are:

Selecting multiple input sources.
Storing a history of all filters applied, along with their settings.
Creating a processing pipeline and then viewing the results with different input sources.
Saving the pipeline to a text file, which can then be loaded and executed by a command line program.
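That last feature really just needs each pipeline step recorded as a filter name plus its parameters, one per line, so a headless runner can replay them in order.  A hypothetical sketch of that round trip in Python (the filter names and file layout are illustrative, not the program's actual format):

```python
def save_pipeline(steps, path):
    """Write each step as 'name key=value key=value ...' on its own line."""
    with open(path, "w") as f:
        for name, params in steps:
            args = " ".join(f"{k}={v}" for k, v in sorted(params.items()))
            f.write(f"{name} {args}".strip() + "\n")

def load_pipeline(path):
    """Parse the text file back into (name, params) steps."""
    steps = []
    with open(path) as f:
        for line in f:
            name, *args = line.split()
            params = dict(a.split("=", 1) for a in args)
            steps.append((name, params))
    return steps

# Hypothetical two-step pipeline: blur, then threshold.
steps = [("blur", {"ksize": "5"}), ("threshold", {"value": "128"})]
save_pipeline(steps, "pipeline.txt")
loaded = load_pipeline("pipeline.txt")
```

Keeping the format plain text means the GUI and the command line runner only have to agree on filter names, not on any binary serialization.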

So here's an early example of usage:

Main Window:



Select image source: 



After images are selected:


Once the source images are loaded, you can create a processing pipeline with different filters.  A temporary image is shown with the current filter settings; as settings are changed, you can view the results in real time.  Once you are satisfied, you can commit the changes to the filter history.
After you've built up some operations, you can then see what the operations look like on other images simply by clicking on a different source image.

This is still the very early stages of development; I've only had two days to work on this so far.  But I'm very excited about how it will make image processing easier for me in the future.
My goal is to add a section for extracting features from a processed image and then performing machine learning on those statistics.  I also plan on adding an image labeling section to make this the go-to application for performing learning.