OpenCV: rotation/translation vector to OpenGL modelview matrix

I'm trying to use OpenCV to do some basic augmented reality. The way I'm going about it is using findChessboardCorners to get a set of points from a camera image. Then I create a 3D quad along the z = 0 plane and use solvePnP to get the pose (rotation and translation) relating the imaged points to the planar points. From that, I figure I should be able to set up a modelview matrix which will allow me to render a cube with the right pose on top of the image.

The documentation for solvePnP says that it outputs a rotation vector "that (together with [the translation vector]) brings points from the model coordinate system to the camera coordinate system." I think that's the opposite of what I want; since my quad is on the plane z = 0, I want a modelview matrix which will transform that quad to the appropriate 3D plane.
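For reference, here's how I understand that mapping — a minimal sketch (not my actual code below; it assumes rvec and tvec are CV_64F outputs of solvePnP) of expanding them into the 4x4 model-to-camera matrix the documentation describes:

// Expand solvePnP's rvec/tvec into a 4x4 model-to-camera matrix.
cv::Mat rotation;
cv::Rodrigues(rvec, rotation);  // 3x1 axis-angle vector -> 3x3 rotation matrix

cv::Mat modelToCamera = cv::Mat::eye(4, 4, CV_64F);
rotation.copyTo(modelToCamera(cv::Rect(0, 0, 3, 3)));  // top-left 3x3 block = R
tvec.copyTo(modelToCamera(cv::Rect(3, 0, 1, 3)));      // last column = t
// Since R is orthonormal, the inverse (camera-to-model) is [R^T | -R^T t].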

I thought that by performing the opposite rotations and translations in the opposite order I could calculate the correct modelview matrix, but that doesn't seem to work. While the rendered object (a cube) does move with the camera image and seems roughly correct translationally, the rotation doesn't work at all; it rotates on multiple axes when it should only be rotating on one, and sometimes in the wrong direction. Here's what I'm doing so far:

std::vector<Point2f> corners;
bool found = findChessboardCorners(*_imageBuffer, cv::Size(5,4), corners,
                                      CV_CALIB_CB_FILTER_QUADS |
                                      CV_CALIB_CB_FAST_CHECK);
if(found)
{
  drawChessboardCorners(*_imageBuffer, cv::Size(5, 4), corners, found);  // same pattern size as above

  std::vector<double> distortionCoefficients(5);  // camera distortion
  distortionCoefficients[0] = 0.070969;   // k1 (radial)
  distortionCoefficients[1] = 0.777647;   // k2 (radial)
  distortionCoefficients[2] = -0.009131;  // p1 (tangential)
  distortionCoefficients[3] = -0.013867;  // p2 (tangential)
  distortionCoefficients[4] = -5.141519;  // k3 (radial)

  // Since the image was resized, we need to scale the found corner points
  float sw = _width / SMALL_WIDTH;
  float sh = _height / SMALL_HEIGHT;
  std::vector<Point2f> board_verts;
  board_verts.push_back(Point2f(corners[0].x * sw, corners[0].y * sh));
  board_verts.push_back(Point2f(corners[15].x * sw, corners[15].y * sh));
  board_verts.push_back(Point2f(corners[19].x * sw, corners[19].y * sh));
  board_verts.push_back(Point2f(corners[4].x * sw, corners[4].y * sh));
  Mat boardMat(board_verts);

  std::vector<Point3f> square_verts;
  square_verts.push_back(Point3f(-1, 1, 0));                              
  square_verts.push_back(Point3f(-1, -1, 0));
  square_verts.push_back(Point3f(1, -1, 0));
  square_verts.push_back(Point3f(1, 1, 0));
  Mat squareMat(square_verts);

  // Transform the camera's intrinsic parameters into an OpenGL camera matrix
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();

  // Camera parameters
  double f_x = 786.42938232; // Focal length in x axis
  double f_y = 786.42938232; // Focal length in y axis (usually the same?)
  double c_x = 217.01358032; // Camera principal point x
  double c_y = 311.25384521; // Camera principal point y


  cv::Mat cameraMatrix(3,3,CV_32FC1);
  cameraMatrix.at<float>(0,0) = f_x;
  cameraMatrix.at<float>(0,1) = 0.0;
  cameraMatrix.at<float>(0,2) = c_x;
  cameraMatrix.at<float>(1,0) = 0.0;
  cameraMatrix.at<float>(1,1) = f_y;
  cameraMatrix.at<float>(1,2) = c_y;
  cameraMatrix.at<float>(2,0) = 0.0;
  cameraMatrix.at<float>(2,1) = 0.0;
  cameraMatrix.at<float>(2,2) = 1.0;

  Mat rvec(3, 1, CV_64F), tvec(3, 1, CV_64F);  // CV_64F so the at<double> reads below are valid
  solvePnP(squareMat, boardMat, cameraMatrix, distortionCoefficients, 
               rvec, tvec);

  _rv[0] = rvec.at<double>(0, 0);
  _rv[1] = rvec.at<double>(1, 0);
  _rv[2] = rvec.at<double>(2, 0);
  _tv[0] = tvec.at<double>(0, 0);
  _tv[1] = tvec.at<double>(1, 0);
  _tv[2] = tvec.at<double>(2, 0);
}

Then in the drawing code...

GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, -tv[1], -tv[0], -tv[2]);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[0], 1.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[1], 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[2], 0.0f, 0.0f, 1.0f);

The vertices I'm rendering create a cube of unit length around the origin (i.e. from -0.5 to 0.5 along each edge). I know that OpenGL's transformation functions apply transformations in "reverse order," so the above should rotate the cube along the z, y, and then x axes, and then translate it. However, it seems like it's being translated first and then rotated, so perhaps Apple's GLKMatrix4 works differently?
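For what it's worth, GLKit's convenience functions post-multiply just like the fixed-function glTranslatef/glRotatef calls, so the last call issued is the first transform applied to the vertices. A quick sanity check I could run (a sketch, not my actual code):

// GLKMatrix4Rotate(m, ...) should equal m * R, i.e. the rotation is
// applied to the vertices before whatever m does.
GLKMatrix4 m = GLKMatrix4MakeTranslation(1.0f, 2.0f, 3.0f);
GLKMatrix4 a = GLKMatrix4Rotate(m, M_PI_2, 0.0f, 0.0f, 1.0f);
GLKMatrix4 r = GLKMatrix4MakeRotation(M_PI_2, 0.0f, 0.0f, 1.0f);
GLKMatrix4 b = GLKMatrix4Multiply(m, r);  // the same thing, multiplied by hand
// a and b should come out numerically identical.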

This question seems very similar to mine, and in particular coder9's answer seems like it might be more or less what I'm looking for. However, I tried it and compared the results to my method, and the matrices I arrived at in both cases were the same. I feel like that answer is right, but that I'm missing some crucial detail.


You have to make sure the axes are facing the correct direction. In particular, the y and z axes face opposite directions in OpenGL and OpenCV, so that the xyz basis stays right-handed. You can find some information and code (with an iPad camera) in this blog post.
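As an illustration of that flip (my own sketch, not code from the blog post; modelToCamera stands for the 4x4 [R|t] built from solvePnP's output), a common convention is to negate the y and z rows before handing the matrix to OpenGL:

// OpenCV: y down, z forward; OpenGL: y up, z backward.
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) =  1.0;  // x stays the same
cvToGl.at<double>(1, 1) = -1.0;  // flip y
cvToGl.at<double>(2, 2) = -1.0;  // flip z
cvToGl.at<double>(3, 3) =  1.0;
cv::Mat glModelView = cvToGl * modelToCamera;
// Note that OpenGL stores matrices column-major, so transpose this
// before loading it (or when copying it into a GLKMatrix4).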

-- Edit -- Ah, OK. Unfortunately, I used these resources to go the other way round (OpenGL -> OpenCV) to test some algorithms. My main issue was that the row order of the images is inverted between OpenGL and OpenCV (maybe this helps).
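If that row-order inversion is the issue, the fix is a one-liner (a sketch; img is a placeholder name):

// OpenCV stores the top image row first, while OpenGL's pixel-transfer
// convention puts the bottom row first, so flip vertically in between.
cv::Mat flipped;
cv::flip(img, flipped, 0);  // flipCode 0 = flip around the x axis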

When simulating cameras, I came across the same projection matrices that can be found here and in the generalized projection matrix paper. The paper quoted in the comments of the blog post also shows a link between computer-vision and OpenGL projections.
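For completeness, here is a sketch of building an OpenGL projection matrix from the pinhole intrinsics used above, following the convention in those references. The function name and the near/far values are mine, and the exact signs depend on your image-origin convention, so treat it as a starting point rather than a drop-in:

// Build an OpenGL projection matrix from pinhole intrinsics.
// width/height are the image size the intrinsics were calibrated for;
// assumes the modelview already flips y/z as above and the image
// origin is at the top-left (OpenCV convention).
void buildProjection(double fx, double fy, double cx, double cy,
                     double width, double height,
                     double nearPlane, double farPlane,
                     float out[16])  // column-major, ready to hand to OpenGL
{
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = (float)(2.0 * fx / width);                 // (row 0, col 0)
    out[5]  = (float)(2.0 * fy / height);                // (row 1, col 1)
    out[8]  = (float)(1.0 - 2.0 * cx / width);           // (row 0, col 2)
    out[9]  = (float)(2.0 * cy / height - 1.0);          // (row 1, col 2)
    out[10] = (float)(-(farPlane + nearPlane) / (farPlane - nearPlane));
    out[11] = -1.0f;                                     // (row 3, col 2)
    out[14] = (float)(-2.0 * farPlane * nearPlane / (farPlane - nearPlane));
}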


I'm not an iOS programmer, so this answer might be misleading! If the problem is not in the order of applying the rotations and the translation, then I suggest using a simpler and more commonly used coordinate system.

The points in the corners vector have their origin (0,0) at the top-left corner of the image, with the y axis pointing towards the bottom of the image. In math we are often used to thinking of the coordinate system with the origin at the center and the y axis towards the top of the image. From the coordinates you're pushing into board_verts, I'm guessing you're making the same mistake. If that's the case, it's easy to transform the positions of the corners with something like this:

// width and height are the dimensions of the image the corners came from
for (size_t i = 0; i < corners.size(); i++) {
  corners[i].x -= width / 2;                   // move the origin to the image center
  corners[i].y = -corners[i].y + height / 2;   // flip y so it points up
}

Then you call solvePnP(). Debugging this is not that difficult: just print the positions of the four corners and the estimated R and T, and see if they make sense. Then you can proceed to the OpenGL step. Please let me know how it goes.
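A minimal sketch of that kind of sanity check (assuming rvec and tvec hold the solvePnP results):

// Print the estimated pose for a quick sanity check.
cv::Mat R;
cv::Rodrigues(rvec, R);  // 3x1 rotation vector -> 3x3 rotation matrix
std::cout << "R = " << std::endl << R << std::endl;
std::cout << "t = " << tvec.t() << std::endl;
// With the board roughly facing the camera, R should be close to identity
// (up to axis conventions) and the z component of t should be positive.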
