Structure from Motion and 3D reconstruction on the easy in OpenCV 2.3+ [w/ code]


Getting a motion map

The basic thing when doing reconstruction from pairs of images is knowing the motion: how much “a pixel has moved” from one image to the other. This gives you the ability to reconstruct its distance from the camera(s). So our first goal is to try and recover that motion from a pair of images.

In calibrated horizontal stereo rigs this is called Disparity, and it refers to the horizontal motion of a pixel. And OpenCV actually has some very good tools to recover horizontal disparity, that can be seen in this sample.
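
For reference, a minimal sketch of what recovering disparity with OpenCV’s block matcher can look like (variable names and parameter values here are my own illustration, not taken from that sample):

// leftRectified/rightRectified are a rectified, calibrated stereo pair (8-bit grayscale)
StereoBM bm(StereoBM::BASIC_PRESET, 64 /*numDisparities*/, 21 /*SAD window size*/);
Mat disparity;
bm(leftRectified, rightRectified, disparity, CV_16S); // per-pixel horizontal disparity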

But in our case we don’t have a calibrated rig as we are doing monocular (one camera) depth reconstruction, or in other words: Structure from motion.

You can go about getting a motion map in many different ways, but two canonical ones are optical flow and feature matching.
Also, I will stick to what OpenCV has to offer, but obviously there is a whole lot of other work out there.

Input pair of images; rotation and translation are unknown

Optical Flow

In optical flow you basically try to “track the pixels” from image 1 to image 2, usually assuming a pixel can only move within a certain window in which you search. OpenCV offers several ways to do optical flow, but I will focus on the newer and nicer one: Farneback’s method for dense optical flow.
The word dense means we look for the motion of every pixel in the image. This is usually costly, but Farneback’s method reduces to linear systems that are easy to solve, and OpenCV has a rocking implementation of it, so it basically flies.
Running the function on two images will produce a motion map, however my experiments show that this map is wrong a fair bit of the time… To cope with that, I am running it iteratively, leveraging the fact that this OF method can accept an initial guess.
An example of using the Farneback method exists in the samples directory of OpenCV’s repo: here.
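
Here is a minimal sketch of that idea (the file names, parameter values and the second refinement pass are my own assumptions, not the sample’s):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;

int main() {
    // two views of the scene, loaded as grayscale (file names are placeholders)
    Mat prev = imread("view1.png", 0), next = imread("view2.png", 0);

    Mat flow; // output: 2-channel float image, per-pixel (dx,dy) motion
    calcOpticalFlowFarneback(prev, next, flow,
                             0.5,   // pyramid scale
                             3,     // pyramid levels
                             15,    // averaging window size
                             3,     // iterations per pyramid level
                             5,     // polynomial expansion neighborhood
                             1.2,   // polynomial expansion sigma
                             0);

    // one way to iterate: feed the previous result back in as an initial guess
    calcOpticalFlowFarneback(prev, next, flow, 0.5, 3, 15, 3, 5, 1.2,
                             OPTFLOW_USE_INITIAL_FLOW);
    return 0;
}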

Dense O-F using Farneback

Feature Matching

The other way of getting motion is matching features between the two images.
In each image we extract salient features and invariant descriptors, and then match the two sets of features.
It’s very easily done in OpenCV and widely covered by examples and tutorials.
This method, however, will not provide a dense motion map; it will provide a very sparse one at best, so the depth reconstruction will also be sparse. We may talk about how to overcome that by hacking some segmentation methods, like superpixels and graph-cuts, in a different post.
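
For illustration, a rough sketch of the matching step using SURF (as in the figure below); the Hessian threshold and the nonfree header are my own assumptions, and in OpenCV 2.4+ SURF lives in the nonfree module:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SURF (nonfree module in OpenCV 2.4+)

using namespace cv;
using namespace std;

int main() {
    Mat img1 = imread("view1.png", 0), img2 = imread("view2.png", 0);

    // detect salient keypoints and compute invariant SURF descriptors
    SurfFeatureDetector detector(400); // Hessian threshold, an arbitrary choice
    SurfDescriptorExtractor extractor;
    vector<KeyPoint> kpts1, kpts2;
    Mat desc1, desc2;
    detector.detect(img1, kpts1);  extractor.compute(img1, kpts1, desc1);
    detector.detect(img2, kpts2);  extractor.compute(img2, kpts2, desc2);

    // brute-force matching with L2 distance (suitable for SURF)
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // collect matched point coordinates; these can later be pruned with
    // findFundamentalMat + RANSAC, as in the figure caption below
    vector<Point2f> imgpts1, imgpts2;
    for (size_t i = 0; i < matches.size(); i++) {
        imgpts1.push_back(kpts1[matches[i].queryIdx].pt);
        imgpts2.push_back(kpts2[matches[i].trainIdx].pt);
    }
    return 0;
}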

SURF features matching, with Fundamental matrix pruning via RANSAC

A hybrid method

Another way I am working on to get motion is a hybrid between feature matching and optical flow.
Basically the idea is to perform feature matching first, and then O-F. When the motion is big and features move quite a lot in the image, O-F sometimes fails (because pixel movement is usually confined to a search window).
After we get feature pairs, we can try to recover a global movement in the image, and use that movement as an initial guess for O-F.
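
A rough sketch of that idea, reusing the matched point lists from the snippet above; estimateRigidTransform is one way to get the global motion, and the function name and parameter values are my own choices:

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;
using namespace std;

// Recover a global 2D rigid transform from the sparse matches, turn it into a
// dense initial flow field, and let Farneback refine it.
Mat hybridFlow(const Mat& prev, const Mat& next,
               const vector<Point2f>& imgpts1, const vector<Point2f>& imgpts2)
{
    // 2x3 [R|t] with uniform scale, no shear; may come back empty on failure
    Mat rigid = estimateRigidTransform(imgpts1, imgpts2, false);

    // seed every pixel with the displacement predicted by the global transform
    Mat flow(prev.size(), CV_32FC2);
    for (int y = 0; y < flow.rows; y++)
        for (int x = 0; x < flow.cols; x++) {
            double nx = rigid.at<double>(0,0)*x + rigid.at<double>(0,1)*y + rigid.at<double>(0,2);
            double ny = rigid.at<double>(1,0)*x + rigid.at<double>(1,1)*y + rigid.at<double>(1,2);
            flow.at<Point2f>(y, x) = Point2f(float(nx - x), float(ny - y));
        }

    // dense O-F, starting from the global-motion guess
    calcOpticalFlowFarneback(prev, next, flow, 0.5, 3, 15, 3, 5, 1.2,
                             OPTFLOW_USE_INITIAL_FLOW);
    return flow;
}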

The rigid transform flow recovered from sparse feature matching

Estimating Motion

Once we have a motion map between the two images, it should pose no problem to recover the motion of the camera. The motion is described by the 3×4 matrix P, which is composed of two elements: P = [R|t], the rotational element R and the translational element t.
H&Z give us a bunch of ways of recovering the P matrices for both cameras in Chapter 9 of their book, the central one being the Fundamental Matrix. This special 3×3 matrix encodes the epipolar constraint between the images; to put it simply, for each point x in image 1 and corresponding point x' in image 2, the equation x'ᵀFx = 0 holds.
How does that help us? Well, H&Z also prove that if you have F, you can infer the two P matrices. And if you have (sufficient) point matches between the images, which we have, you can find F! Hurray!
This is easy to see in the linear sense: F has 9 entries, but it is defined only up to scale, so given enough point pairs we can solve for it in a least-squares sense. But F is better estimated in a more robust way, and OpenCV takes care of all of this for us in the function findFundamentalMat, which offers several methods for recovering F, linear and non-linear.
However, H&Z also point to a problem with using F right away: projective ambiguity. This means the recovered camera matrices may not be the “real” ones, but may instead have gone through some 3D projective transformation. To cope with this, we will use the Essential Matrix instead, which is sort of the same thing (it encodes the epipolar constraint over points) but for calibrated cameras. Using the Essential Matrix removes the projective ambiguity and provides a Metric reconstruction, which means the 3D points are true up to a similarity transform (rotation, translation and uniform scale) rather than up to an arbitrary projective transformation.

cv::FileStorage fs;
fs.open("camera_calibration.yml", cv::FileStorage::READ);
Mat K;
fs["camera_matrix"] >> K; // intrinsic calibration matrix

vector<uchar> status; // RANSAC inlier mask
Mat F = findFundamentalMat(imgpts1, imgpts2, FM_RANSAC, 0.1, 0.99, status);
Mat E = K.t() * F * K; //according to HZ (9.12)

Now let’s assume one camera is P = [I|0], meaning it hasn’t moved or rotated. Getting the second camera matrix, P’ = [R|t], is done as follows:

SVD svd(E);
Matx33d W(0,-1,0,   //HZ 9.13
          1,0,0,
          0,0,1);
Matx33d Winv(0,1,0,
             -1,0,0,
             0,0,1);
Mat_<double> R = svd.u * Mat(W) * svd.vt; //HZ 9.19
Mat_<double> t = svd.u.col(2); //u3
Matx34d P1(R(0,0), R(0,1), R(0,2), t(0),
           R(1,0), R(1,1), R(1,2), t(1),
           R(2,0), R(2,1), R(2,2), t(2));

Looks good, now let’s move on to reconstruction.

Reconstruction via Triangulation

Once we have two camera matrices, P and P’, we can recover the 3D structure of the scene. This can be seen simply if we think about it as ray intersection: we have the two camera centers in space (one at 0,0,0 and one at t), and we have the location of each point on the image plane of image 1 and on the image plane of image 2. If we shoot a ray from one camera center through its image point, and another ray from the other camera, the intersection of the two rays must be the real location of the object in space.
In real life, none of that works. The rays usually will not intersect (so H&Z refer to the mid-point algorithm, which they dismiss as a bad choice), and ray intersection in general is inferior to other triangulation methods.
H&Z go on to describe their “optimal” triangulation method, which optimizes the solution based on the error from reprojection of the points back to the image plane.
I have implemented the linear triangulation methods they present, and wrote a post about it not long ago: Here.
I also added the Iterative Least Squares method that Hartley presented in his article “Triangulation”, which is said to perform very well and very fast.
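
For reference, a minimal sketch of the linear least-squares version, following HZ 12.2; it assumes the image points were already converted to normalized camera coordinates (multiplied by K⁻¹), and the exact form may differ slightly from the code in the post linked above:

#include <opencv2/core/core.hpp>

using namespace cv;

// Linear (DLT) triangulation of one point pair from two camera matrices.
// u and u1 are measurements in normalized camera coordinates (K^-1 applied).
Mat_<double> linearTriangulation(Point2d u,  const Matx34d& P,
                                 Point2d u1, const Matx34d& P1)
{
    // build A X = B from the constraint x × (P X) = 0 in both views,
    // fixing the homogeneous coordinate of X to 1
    Matx43d A(u.x*P(2,0)-P(0,0),    u.x*P(2,1)-P(0,1),    u.x*P(2,2)-P(0,2),
              u.y*P(2,0)-P(1,0),    u.y*P(2,1)-P(1,1),    u.y*P(2,2)-P(1,2),
              u1.x*P1(2,0)-P1(0,0), u1.x*P1(2,1)-P1(0,1), u1.x*P1(2,2)-P1(0,2),
              u1.y*P1(2,0)-P1(1,0), u1.y*P1(2,1)-P1(1,1), u1.y*P1(2,2)-P1(1,2));
    Matx41d B(-(u.x*P(2,3)-P(0,3)),
              -(u.y*P(2,3)-P(1,3)),
              -(u1.x*P1(2,3)-P1(0,3)),
              -(u1.y*P1(2,3)-P1(1,3)));

    Mat_<double> X;
    solve(Mat(A), Mat(B), X, DECOMP_SVD); // least-squares 3D point (X,Y,Z)
    return X;
}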

“Depth Map”

3D reconstruction

A word of caution: many, many times the reconstruction will fail because the Fundamental matrix came out wrong. The results will just look awful, and nothing like a true reconstruction. To cope with this, you may want to insert a check that makes sure the two P matrices are not completely bogus (you could check for a reasonable rotation, for example). If the P matrices derived from the F matrix look strange, you can discard this F matrix and compute a new one.
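
One possible form for such a check, as a sketch (the function name and tolerance are arbitrary choices of mine): a proper rotation matrix recovered from E should have a determinant of +1, so anything far from that signals a bad F.

#include <cmath>
#include <opencv2/core/core.hpp>

using namespace cv;

// returns false when R is clearly not a valid rotation, hinting that the
// F/E estimate should be discarded and recomputed from different matches
bool checkCoherentRotation(const Mat_<double>& R) {
    return std::fabs(determinant(R) - 1.0) < 1e-07;
}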

Example of when things go bad…

Toolbox and Framework

I created a small toolbox of the various methods I spoke about in this post, with a very simple UI. It basically allows you to load two images and then try the different methods on them and see the results.
It’s using FLTK3 for the GUI, and PCL (with a VTK backend) for visualization of the resulting 3D point cloud.
It also includes a few classes with a simple API that lets you get the feature matches, the motion map, the camera matrices from the motion, and finally the 3D point cloud.

FLTK GUI

Code & Where to go next

The code, as usual, is up for grabs at github:

https://github.com/royshil/SfM-Toy-Library

Now that you have a firm grasp of SfM :) you can go on to visit the following projects, which implement a much more robust solution:

http://phototour.cs.washington.edu/bundler/
http://code.google.com/p/libmv/
http://www.cs.washington.edu/homes/ccwu/vsfm/
http://www.inf.ethz.ch/personal/chzach/opensource.html

And Wikipedia points to some interesting libraries and code as well: http://en.wikipedia.org/wiki/Structure_from_motion

Source:

http://www.morethantechnical.com/2012/02/07/structure-from-motion-and-3d-reconstruction-on-the-easy-in-opencv-2-3-w-code/

One comment on “Structure from Motion and 3D reconstruction on the easy in OpenCV 2.3+ [w/ code]”

  1. Calin Dragos George
    July 28, 2013

Very good tutorial for distance estimation with stereo vision and OpenCV. I added a link to your article in my post about uses of stereo vision systems in robotics, tutorials and resources.
