Hello, I have been investigating how to create an orthomosaic from Snavely's Bundler output. CloudCompare sort of does that.
From what I understood, it applies some ortho-rectification to the images (with parameters estimated via least squares) and then projects the images and saves them individually.
However, I do not understand what is being estimated by the least squares in the computeOrthoRectificationParams() function: as far as I can tell, it does not account for the rotation and translation matrices output by Bundler. If it does, how?
Another question: how would I project the images onto a single plane image, accounting for the translation?
Thank you
Bruno
Orthomosaic Computation
Re: Orthomosaic Computation
Hi,
The ortho-rectification method tries to find the parameters of the 'collinearity equation' (http://en.wikipedia.org/wiki/Collinearity_equation). These equations transform a point in object space (in our case a plane in 3D onto which the images are projected - by default the XY plane) into the corresponding pixel in the 2D image. The orientation of the image is implicitly taken into account, as we use the 3D keypoints output by Bundler to conduct the optimization (and the orientation/translation has already been applied to those 3D keypoints). I can't remember exactly why we had to use this optimization instead of using only the information given by Bundler, but I believe something was missing (hard to tell what right now!).
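To make that a bit more concrete: once the 3D keypoints have been projected onto the chosen plane, the problem boils down to estimating the 8 coefficients of a plane-to-image projective mapping from the (plane point, image pixel) pairs. Here is a rough sketch of how such a fit can be set up as a linear least-squares problem (using Eigen for the linear algebra - this is not the actual computeOrthoRectificationParams() code, which may solve it differently):

// Sketch only: fit x' = (a0 + a1*x + a2*y) / (1 + c1*x + c2*y)
//              and y' = (b0 + b1*x + b2*y) / (1 + c1*x + c2*y)
// from pairs of plane coordinates (x, y) and image pixels (x', y').
#include <vector>
#include <Eigen/Dense>

struct Pair
{
	double x, y;   // 3D keypoint projected onto the plane
	double xp, yp; // corresponding 2D pixel in the image
};

// Returns the 8 parameters in the order (a0, a1, a2, b0, b1, b2, c1, c2).
Eigen::VectorXd fitCollinearity(const std::vector<Pair>& pairs)
{
	const int n = static_cast<int>(pairs.size()); // at least 4 pairs are needed
	Eigen::MatrixXd A(2 * n, 8);
	Eigen::VectorXd b(2 * n);

	for (int i = 0; i < n; ++i)
	{
		const Pair& p = pairs[i];
		// x' * (1 + c1*x + c2*y) = a0 + a1*x + a2*y
		A.row(2 * i)     << 1, p.x, p.y, 0, 0,   0,   -p.x * p.xp, -p.y * p.xp;
		b(2 * i)         = p.xp;
		// y' * (1 + c1*x + c2*y) = b0 + b1*x + b2*y
		A.row(2 * i + 1) << 0, 0,   0,   1, p.x, p.y, -p.x * p.yp, -p.y * p.yp;
		b(2 * i + 1)     = p.yp;
	}

	// linear least-squares solution of A * params = b
	return A.colPivHouseholderQr().solve(b);
}

Each Bundler keypoint visible in the image gives one such pair, so the system is usually heavily over-determined.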
By default CC projects all the images on the same plane (the XY plane by default) and in the same reference system. But with the latest 2.5.6 beta version you can choose another default plane (XZ or YZ) or even input a custom plane (as a 4x4 matrix). We don't handle the fusion of all images, as this is a very tricky process and CC is definitely not an image processing tool. Moreover, Microsoft ICE (for instance) does this very well.
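Regarding the custom plane: conceptually this just means bringing the 3D points into the plane's local frame before doing the usual XY-plane processing. I don't remember the exact convention CC expects for the 4x4 matrix, but the idea is along these lines (sketch only):

#include <Eigen/Geometry>

// T is assumed here to map the default XY plane to the custom plane;
// applying its inverse expresses a 3D point in the plane's local frame,
// after which the local Z coordinate can simply be dropped.
Eigen::Vector2d toPlaneCoords(const Eigen::Matrix4d& T, const Eigen::Vector3d& P)
{
	Eigen::Affine3d planeToWorld(T);
	Eigen::Vector3d local = planeToWorld.inverse() * P;
	return local.head<2>(); // 2D coordinates in the plane
}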
Daniel, CloudCompare admin
Re: Orthomosaic Computation
Hello Daniel, thank you for your answer.
So this process is a bit different from what I expected. Do you use the camera R and t matrices in this process, or do you estimate them all again with conjugate gradient?
Re: Orthomosaic Computation
As I said they are implicitly used. I guess we could use them explicitly instead (and avoid the optimization process) but there might be something missing... I'd be interested to see such a solution of course.
Daniel, CloudCompare admin
Re: Orthomosaic Computation
Ok, I am going to try to use it, and if you are interested I can show you some results. Thank you very much.
Re: Orthomosaic Computation
Hey, I am trying to use the R matrix from Bundler to compute the a[3], b[3] and c[3] vectors. It is not working.
I have been struggling to understand the following equations in the orthoRectifyAsImage function.
Here you are computing the image extrema:
xi = -halfWidth;
yi = -halfHeight;
qi = 1.0 + c1*xi + c2*yi;
corners[0] = (a0 + a1*xi + a2*yi) / qi;
corners[1] = (b0 + b1*xi + b2*yi) / qi;
Here you are computing the mapping between the input image and the ortho image:
double yip = minC[1] + static_cast<double>(j)*_pixelSize;
double q = (c2*xip - a2)*(c1*yip - b1) - (c2*yip - b2)*(c1*xip - a1);
double p = (a0 - xip)*(c1*yip - b1) - (b0 - yip)*(c1*xip - a1);
double yi = p / q;
I understand you are using the collinearity equations, but I don't understand the assumptions and operations involved. In the equations from the Wikipedia article you cited:
- why don't you use (Z - Z0)?
- which transformation are you applying in each step (direct or inverse)?
Perhaps you can point me to your source?
Re: Orthomosaic Computation
Oh, when I said to use the transformation explicitly, I wasn't necessarily thinking of the collinearity equation. What we want is only to deduce the 2D pixel associated with a 3D point. So I guess that projecting the 3D point into the camera frame and then applying the focal and the other intrinsic parameters could do. It's just a guess though ;)
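Something along these lines, following the projection model described in Bundler's own documentation (just a sketch, not CC code):

// Camera parameters as output by Bundler (bundle.out)
struct BundlerCamera
{
	double R[3][3]; // world -> camera rotation
	double t[3];    // translation
	double f;       // focal length (in pixels)
	double k1, k2;  // radial distortion coefficients
};

// Projects a 3D point X (world coordinates) to 2D image coordinates,
// expressed relative to the image center (x to the right, y up).
void projectPoint(const BundlerCamera& cam, const double X[3], double& u, double& v)
{
	// P = R * X + t (camera coordinates)
	double P[3];
	for (int i = 0; i < 3; ++i)
		P[i] = cam.R[i][0] * X[0] + cam.R[i][1] * X[1] + cam.R[i][2] * X[2] + cam.t[i];

	// p = -P / P.z (in Bundler's convention the camera looks down the -z axis)
	double px = -P[0] / P[2];
	double py = -P[1] / P[2];

	// radial distortion: r(p) = 1 + k1*|p|^2 + k2*|p|^4
	double r2 = px * px + py * py;
	double r  = 1.0 + cam.k1 * r2 + cam.k2 * r2 * r2;

	// final pixel coordinates (relative to the image center)
	u = cam.f * r * px;
	v = cam.f * r * py;
}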
And to answer your questions:
- when I developed this component, my main source was http://wtlab.iis.u-tokyo.ac.jp/~wataru/ ... /cp9-6.htm (so pretty much the same as Wikipedia)
- we use the 'flat plane' case here I think (see the last paragraph of that page). In the method documentation I wrote:
Collinearity equation:
* x'i = (a0+a1.xi+a2.yi)/(1+c1.xi+c2.yi)
* y'i = (b0+b1.xi+b2.yi)/(1+c1.xi+c2.yi)
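For what it's worth, here is a small standalone sketch (not the actual CC code) that illustrates both points: why the (Z - Z0) terms disappear when all the object points lie on a plane, and how the p/q expressions in orthoRectifyAsImage() follow from inverting the equation above (it is just Cramer's rule on a 2x2 linear system):

// Sketch only.
//
// Full collinearity equations: with the camera at (X0, Y0, Z0), M the
// world -> camera rotation and f the focal length, let
//   (U, V, W) = M * (X - X0, Y - Y0, Z - Z0)
// then
//   x' = -f * U / W,   y' = -f * V / W
// If every object point lies on a plane Z = Zp, then (Z - Z0) = (Zp - Z0)
// is a constant, so U, V and W are all affine functions of X and Y alone.
// Dividing numerator and denominator by the constant part of W (assuming
// it is non-zero) gives exactly the 8-parameter form documented above:
//   x' = (a0 + a1*X + a2*Y) / (1 + c1*X + c2*Y)
//   y' = (b0 + b1*X + b2*Y) / (1 + c1*X + c2*Y)
// i.e. (Z - Z0) has not been dropped: it is constant and gets absorbed
// into the coefficients (the mapping is a plane-to-image homography).

// Inverse mapping: given a pixel (xp, yp) of the original image, recover
// the corresponding point (X, Y) on the plane. Cross-multiplying the two
// equations above gives a linear 2x2 system, solved here by Cramer's rule.
bool imageToPlane(double a0, double a1, double a2,
                  double b0, double b1, double b2,
                  double c1, double c2,
                  double xp, double yp,
                  double& X, double& Y)
{
	// (c1*xp - a1) * X + (c2*xp - a2) * Y = a0 - xp
	// (c1*yp - b1) * X + (c2*yp - b2) * Y = b0 - yp
	double D = (c1*xp - a1) * (c2*yp - b2) - (c2*xp - a2) * (c1*yp - b1);
	if (D == 0.0)
		return false; // degenerate configuration

	X = ((a0 - xp) * (c2*yp - b2) - (c2*xp - a2) * (b0 - yp)) / D;
	Y = ((c1*xp - a1) * (b0 - yp) - (a0 - xp) * (c1*yp - b1)) / D;
	return true;
}

The q and p computed in orthoRectifyAsImage() are the same quantities with both the numerator and the denominator negated (q = -D), so yi = p / q gives the same result.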
Daniel, CloudCompare admin
Re: Orthomosaic Computation
What I don't understand for now is that you are not projecting the 3D points onto the orthorectified image: you are using the 2D points from the original image.
Re: Orthomosaic Computation
Indeed we "orthorectify" the images, we don't generate an orthophoto from a point cloud.
P.S. : if we had a very dense point cloud we could do it but we wouldn't need any camera or photo to do this! A simple ortho projection would do ;)
Daniel, CloudCompare admin
Re: Orthomosaic Computation
That I understand. What I don't understand is why this needs another parameter estimation and does not use the outputs from Bundler.
Do you understand how the full equations, x = (Z - Z0)... and y = (Z - Z0)..., degenerate into those simple equations? I think the key to the problem lies there.
Another question: how do you project the images into the 3D space that is shown in the main window?
Sorry to take up your time, but if I can get anywhere I think this could bring some new features to CloudCompare.