Source Code Debugging Notes for Visual SLAM: Fourteen Lectures from Theory to Practice, 2nd Edition (Practice, Chapters 7-14)

zhangrelay, posted on 2021/07/15 04:10:32

See also: Source Code Debugging Notes for Visual SLAM: Fourteen Lectures from Theory to Practice, 2nd Edition (Theory, Chapters 1-6)


Chapters 7 and 8: Visual Odometry 1 & 2

The examples require OpenCV 4. With only OpenCV 3 installed, cmake fails as follows:


  
ROS:~/SLAM/slambook2/ch8/build$ cmake ..
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:8 (find_package):
  Could not find a configuration file for package "OpenCV" that is compatible
  with requested version "4".
  The following configuration files were considered but not accepted:
    /usr/share/OpenCV/OpenCVConfig.cmake, version: 3.2.0
-- Configuring incomplete, errors occurred!
See also "/home/relaybot/SLAM/slambook2/ch8/build/CMakeFiles/CMakeOutput.log".

To install OpenCV 4, see the earlier post "Ubuntu安装OpenCV4记录" (notes on installing OpenCV 4 on Ubuntu).
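For quick reference, here is a minimal build-from-source sketch. The 4.1.2 tag and the /usr/local prefix are taken from the cmake output below; everything else is the standard OpenCV CMake flow, and the linked post remains the authoritative walkthrough:

cd ~
git clone -b 4.1.2 https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
sudo make install
# If OpenCV 4 lives in a non-default prefix, point CMake at it instead:
#   cmake -DOpenCV_DIR=<prefix>/lib/cmake/opencv4 ..

With OpenCV 4.1.2 installed under /usr/local, cmake now configures successfully: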


  
ROS:~/SLAM/slambook2/ch8/build$ cmake ..
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenCV: /usr/local (found suitable version "4.1.2", minimum required is "4")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/relaybot/SLAM/slambook2/ch8/build
ROS:~/SLAM/slambook2/ch8/build$ make

If compilation then fails, the errors for each file are as follows:

optical_flow.cpp:


  
$ make
Scanning dependencies of target optical_flow
[ 25%] Building CXX object CMakeFiles/optical_flow.dir/optical_flow.cpp.o
/home/relaybot/SLAM/slambook2/ch8/optical_flow.cpp: In function ‘int main(int, char**)’:
/home/relaybot/SLAM/slambook2/ch8/optical_flow.cpp:143:37: error: ‘CV_GRAY2BGR’ was not declared in this scope
     cv::cvtColor(img2, img2_single, CV_GRAY2BGR);
                                     ^~~~~~~~~~~
CMakeFiles/optical_flow.dir/build.make:62: recipe for target 'CMakeFiles/optical_flow.dir/optical_flow.cpp.o' failed
make[2]: *** [CMakeFiles/optical_flow.dir/optical_flow.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/optical_flow.dir/all' failed
make[1]: *** [CMakeFiles/optical_flow.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

The fix is to replace CV_GRAY2BGR with COLOR_GRAY2BGR (OpenCV 4 no longer exposes the old C-style constant by default).
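As a quick standalone check of the renamed constant, a minimal snippet; the image size is arbitrary and only for illustration, and the commented include shows the alternative of keeping the legacy macro:

#include <opencv2/opencv.hpp>
// #include <opencv2/imgproc/types_c.h>  // alternative: keeps the legacy CV_GRAY2BGR macro available

int main() {
    cv::Mat gray = cv::Mat::zeros(480, 640, CV_8UC1);  // stand-in for a grayscale input such as img2
    cv::Mat color;
    cv::cvtColor(gray, color, cv::COLOR_GRAY2BGR);     // OpenCV 4 spelling of the conversion code
    return 0;
}

The full corrected optical_flow.cpp is reproduced below.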


  
//
// Created by Xiang on 2017/12/19.
//

#include <opencv2/opencv.hpp>
#include <string>
#include <chrono>
#include <Eigen/Core>
#include <Eigen/Dense>

using namespace std;
using namespace cv;

string file_1 = "./LK1.png";  // first image
string file_2 = "./LK2.png";  // second image

/// Optical flow tracker and interface
class OpticalFlowTracker {
public:
    OpticalFlowTracker(
        const Mat &img1_,
        const Mat &img2_,
        const vector<KeyPoint> &kp1_,
        vector<KeyPoint> &kp2_,
        vector<bool> &success_,
        bool inverse_ = true, bool has_initial_ = false) :
        img1(img1_), img2(img2_), kp1(kp1_), kp2(kp2_), success(success_), inverse(inverse_),
        has_initial(has_initial_) {}

    void calculateOpticalFlow(const Range &range);

private:
    const Mat &img1;
    const Mat &img2;
    const vector<KeyPoint> &kp1;
    vector<KeyPoint> &kp2;
    vector<bool> &success;
    bool inverse = true;
    bool has_initial = false;
};

/**
 * single level optical flow
 * @param [in] img1 the first image
 * @param [in] img2 the second image
 * @param [in] kp1 keypoints in img1
 * @param [in|out] kp2 keypoints in img2, if empty, use initial guess in kp1
 * @param [out] success true if a keypoint is tracked successfully
 * @param [in] inverse use inverse formulation?
 */
void OpticalFlowSingleLevel(
    const Mat &img1,
    const Mat &img2,
    const vector<KeyPoint> &kp1,
    vector<KeyPoint> &kp2,
    vector<bool> &success,
    bool inverse = false,
    bool has_initial_guess = false
);

/**
 * multi level optical flow, scale of pyramid is set to 2 by default
 * the image pyramid will be create inside the function
 * @param [in] img1 the first pyramid
 * @param [in] img2 the second pyramid
 * @param [in] kp1 keypoints in img1
 * @param [out] kp2 keypoints in img2
 * @param [out] success true if a keypoint is tracked successfully
 * @param [in] inverse set true to enable inverse formulation
 */
void OpticalFlowMultiLevel(
    const Mat &img1,
    const Mat &img2,
    const vector<KeyPoint> &kp1,
    vector<KeyPoint> &kp2,
    vector<bool> &success,
    bool inverse = false
);

/**
 * get a gray scale value from reference image (bi-linear interpolated)
 * @param img
 * @param x
 * @param y
 * @return the interpolated value of this pixel
 */
inline float GetPixelValue(const cv::Mat &img, float x, float y) {
    // boundary check
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= img.cols) x = img.cols - 1;
    if (y >= img.rows) y = img.rows - 1;
    uchar *data = &img.data[int(y) * img.step + int(x)];
    float xx = x - floor(x);
    float yy = y - floor(y);
    return float(
        (1 - xx) * (1 - yy) * data[0] +
        xx * (1 - yy) * data[1] +
        (1 - xx) * yy * data[img.step] +
        xx * yy * data[img.step + 1]
    );
}

int main(int argc, char **argv) {
    // images, note they are CV_8UC1, not CV_8UC3
    Mat img1 = imread(file_1, 0);
    Mat img2 = imread(file_2, 0);

    // key points, using GFTT here.
    vector<KeyPoint> kp1;
    Ptr<GFTTDetector> detector = GFTTDetector::create(500, 0.01, 20); // maximum 500 keypoints
    detector->detect(img1, kp1);

    // now lets track these key points in the second image
    // first use single level LK in the validation picture
    vector<KeyPoint> kp2_single;
    vector<bool> success_single;
    OpticalFlowSingleLevel(img1, img2, kp1, kp2_single, success_single);

    // then test multi-level LK
    vector<KeyPoint> kp2_multi;
    vector<bool> success_multi;
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    OpticalFlowMultiLevel(img1, img2, kp1, kp2_multi, success_multi, true);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    auto time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cout << "optical flow by gauss-newton: " << time_used.count() << endl;

    // use opencv's flow for validation
    vector<Point2f> pt1, pt2;
    for (auto &kp: kp1) pt1.push_back(kp.pt);
    vector<uchar> status;
    vector<float> error;
    t1 = chrono::steady_clock::now();
    cv::calcOpticalFlowPyrLK(img1, img2, pt1, pt2, status, error);
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cout << "optical flow by opencv: " << time_used.count() << endl;

    // plot the differences of those functions
    Mat img2_single;
    cv::cvtColor(img2, img2_single, COLOR_GRAY2BGR);
    for (int i = 0; i < kp2_single.size(); i++) {
        if (success_single[i]) {
            cv::circle(img2_single, kp2_single[i].pt, 2, cv::Scalar(0, 250, 0), 2);
            cv::line(img2_single, kp1[i].pt, kp2_single[i].pt, cv::Scalar(0, 250, 0));
        }
    }

    Mat img2_multi;
    cv::cvtColor(img2, img2_multi, COLOR_GRAY2BGR);
    for (int i = 0; i < kp2_multi.size(); i++) {
        if (success_multi[i]) {
            cv::circle(img2_multi, kp2_multi[i].pt, 2, cv::Scalar(0, 250, 0), 2);
            cv::line(img2_multi, kp1[i].pt, kp2_multi[i].pt, cv::Scalar(0, 250, 0));
        }
    }

    Mat img2_CV;
    cv::cvtColor(img2, img2_CV, COLOR_GRAY2BGR);
    for (int i = 0; i < pt2.size(); i++) {
        if (status[i]) {
            cv::circle(img2_CV, pt2[i], 2, cv::Scalar(0, 250, 0), 2);
            cv::line(img2_CV, pt1[i], pt2[i], cv::Scalar(0, 250, 0));
        }
    }

    cv::imshow("tracked single level", img2_single);
    cv::imshow("tracked multi level", img2_multi);
    cv::imshow("tracked by opencv", img2_CV);
    cv::waitKey(0);

    return 0;
}

void OpticalFlowSingleLevel(
    const Mat &img1,
    const Mat &img2,
    const vector<KeyPoint> &kp1,
    vector<KeyPoint> &kp2,
    vector<bool> &success,
    bool inverse, bool has_initial) {
    kp2.resize(kp1.size());
    success.resize(kp1.size());
    OpticalFlowTracker tracker(img1, img2, kp1, kp2, success, inverse, has_initial);
    parallel_for_(Range(0, kp1.size()),
                  std::bind(&OpticalFlowTracker::calculateOpticalFlow, &tracker, placeholders::_1));
}

void OpticalFlowTracker::calculateOpticalFlow(const Range &range) {
    // parameters
    int half_patch_size = 4;
    int iterations = 10;
    for (size_t i = range.start; i < range.end; i++) {
        auto kp = kp1[i];
        double dx = 0, dy = 0; // dx,dy need to be estimated
        if (has_initial) {
            dx = kp2[i].pt.x - kp.pt.x;
            dy = kp2[i].pt.y - kp.pt.y;
        }

        double cost = 0, lastCost = 0;
        bool succ = true; // indicate if this point succeeded

        // Gauss-Newton iterations
        Eigen::Matrix2d H = Eigen::Matrix2d::Zero();    // hessian
        Eigen::Vector2d b = Eigen::Vector2d::Zero();    // bias
        Eigen::Vector2d J;  // jacobian
        for (int iter = 0; iter < iterations; iter++) {
            if (inverse == false) {
                H = Eigen::Matrix2d::Zero();
                b = Eigen::Vector2d::Zero();
            } else {
                // only reset b
                b = Eigen::Vector2d::Zero();
            }

            cost = 0;

            // compute cost and jacobian
            for (int x = -half_patch_size; x < half_patch_size; x++)
                for (int y = -half_patch_size; y < half_patch_size; y++) {
                    double error = GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y) -
                                   GetPixelValue(img2, kp.pt.x + x + dx, kp.pt.y + y + dy);  // Jacobian
                    if (inverse == false) {
                        J = -1.0 * Eigen::Vector2d(
                            0.5 * (GetPixelValue(img2, kp.pt.x + dx + x + 1, kp.pt.y + dy + y) -
                                   GetPixelValue(img2, kp.pt.x + dx + x - 1, kp.pt.y + dy + y)),
                            0.5 * (GetPixelValue(img2, kp.pt.x + dx + x, kp.pt.y + dy + y + 1) -
                                   GetPixelValue(img2, kp.pt.x + dx + x, kp.pt.y + dy + y - 1))
                        );
                    } else if (iter == 0) {
                        // in inverse mode, J keeps same for all iterations
                        // NOTE this J does not change when dx, dy is updated, so we can store it and only compute error
                        J = -1.0 * Eigen::Vector2d(
                            0.5 * (GetPixelValue(img1, kp.pt.x + x + 1, kp.pt.y + y) -
                                   GetPixelValue(img1, kp.pt.x + x - 1, kp.pt.y + y)),
                            0.5 * (GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y + 1) -
                                   GetPixelValue(img1, kp.pt.x + x, kp.pt.y + y - 1))
                        );
                    }
                    // compute H, b and set cost;
                    b += -error * J;
                    cost += error * error;
                    if (inverse == false || iter == 0) {
                        // also update H
                        H += J * J.transpose();
                    }
                }

            // compute update
            Eigen::Vector2d update = H.ldlt().solve(b);

            if (std::isnan(update[0])) {
                // sometimes occurred when we have a black or white patch and H is irreversible
                cout << "update is nan" << endl;
                succ = false;
                break;
            }

            if (iter > 0 && cost > lastCost) {
                break;
            }

            // update dx, dy
            dx += update[0];
            dy += update[1];
            lastCost = cost;
            succ = true;

            if (update.norm() < 1e-2) {
                // converge
                break;
            }
        }

        success[i] = succ;

        // set kp2
        kp2[i].pt = kp.pt + Point2f(dx, dy);
    }
}

void OpticalFlowMultiLevel(
    const Mat &img1,
    const Mat &img2,
    const vector<KeyPoint> &kp1,
    vector<KeyPoint> &kp2,
    vector<bool> &success,
    bool inverse) {

    // parameters
    int pyramids = 4;
    double pyramid_scale = 0.5;
    double scales[] = {1.0, 0.5, 0.25, 0.125};

    // create pyramids
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    vector<Mat> pyr1, pyr2; // image pyramids
    for (int i = 0; i < pyramids; i++) {
        if (i == 0) {
            pyr1.push_back(img1);
            pyr2.push_back(img2);
        } else {
            Mat img1_pyr, img2_pyr;
            cv::resize(pyr1[i - 1], img1_pyr,
                       cv::Size(pyr1[i - 1].cols * pyramid_scale, pyr1[i - 1].rows * pyramid_scale));
            cv::resize(pyr2[i - 1], img2_pyr,
                       cv::Size(pyr2[i - 1].cols * pyramid_scale, pyr2[i - 1].rows * pyramid_scale));
            pyr1.push_back(img1_pyr);
            pyr2.push_back(img2_pyr);
        }
    }
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    auto time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cout << "build pyramid time: " << time_used.count() << endl;

    // coarse-to-fine LK tracking in pyramids
    vector<KeyPoint> kp1_pyr, kp2_pyr;
    for (auto &kp: kp1) {
        auto kp_top = kp;
        kp_top.pt *= scales[pyramids - 1];
        kp1_pyr.push_back(kp_top);
        kp2_pyr.push_back(kp_top);
    }

    for (int level = pyramids - 1; level >= 0; level--) {
        // from coarse to fine
        success.clear();
        t1 = chrono::steady_clock::now();
        OpticalFlowSingleLevel(pyr1[level], pyr2[level], kp1_pyr, kp2_pyr, success, inverse, true);
        t2 = chrono::steady_clock::now();
        auto time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
        cout << "track pyr " << level << " cost time: " << time_used.count() << endl;

        if (level > 0) {
            for (auto &kp: kp1_pyr)
                kp.pt /= pyramid_scale;
            for (auto &kp: kp2_pyr)
                kp.pt /= pyramid_scale;
        }
    }

    for (auto &kp: kp2_pyr)
        kp2.push_back(kp);
}
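After this change the target builds. Note that the image paths are hard-coded as ./LK1.png and ./LK2.png, so the program must be started from a directory containing those files. A minimal run sketch, assuming the test images sit in ch8 as in the book's repository:

cd ~/SLAM/slambook2/ch8
./build/optical_flow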

However, direct_method.cpp still fails to compile:


  
make
[ 50%] Built target optical_flow
Scanning dependencies of target direct_method
[ 75%] Building CXX object CMakeFiles/direct_method.dir/direct_method.cpp.o
/home/relaybot/SLAM/slambook2/ch8/direct_method.cpp: In function ‘void DirectPoseEstimationSingleLayer(const cv::Mat&, const cv::Mat&, const VecVector2d&, std::vector<double, std::allocator<double> >, Sophus::SE3d&)’:
/home/relaybot/SLAM/slambook2/ch8/direct_method.cpp:206:35: error: ‘COLOR_GRAY2BGR’ was not declared in this scope
     cv::cvtColor(img2, img2_show, COLOR_GRAY2BGR);
                                   ^~~~~~~~~~~~~~
/home/relaybot/SLAM/slambook2/ch8/direct_method.cpp:206:35: note: suggested alternative:
In file included from /usr/local/include/opencv4/opencv2/opencv.hpp:74:0,
                 from /home/relaybot/SLAM/slambook2/ch8/direct_method.cpp:1:
/usr/local/include/opencv4/opencv2/imgproc.hpp:542:5: note: ‘COLOR_GRAY2BGR’
     COLOR_GRAY2BGR = 8,
     ^~~~~~~~~~~~~~
CMakeFiles/direct_method.dir/build.make:62: recipe for target 'CMakeFiles/direct_method.dir/direct_method.cpp.o' failed
make[2]: *** [CMakeFiles/direct_method.dir/direct_method.cpp.o] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/direct_method.dir/all' failed
make[1]: *** [CMakeFiles/direct_method.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

The fix used here is to add using namespace cv; to the program (qualifying the constant as cv::COLOR_GRAY2BGR works equally well).

The corrected program is as follows:


  
#include <opencv2/opencv.hpp>
#include <sophus/se3.hpp>
#include <boost/format.hpp>
#include <pangolin/pangolin.h>
#include <chrono>
#include <mutex>

using namespace std;
using namespace cv;

typedef vector<Eigen::Vector2d, Eigen::aligned_allocator<Eigen::Vector2d>> VecVector2d;

// Camera intrinsics
double fx = 718.856, fy = 718.856, cx = 607.1928, cy = 185.2157;
// baseline
double baseline = 0.573;
// paths
string left_file = "./left.png";
string disparity_file = "./disparity.png";
boost::format fmt_others("./%06d.png");    // other files

// useful typedefs
typedef Eigen::Matrix<double, 6, 6> Matrix6d;
typedef Eigen::Matrix<double, 2, 6> Matrix26d;
typedef Eigen::Matrix<double, 6, 1> Vector6d;

/// class for accumulator jacobians in parallel
class JacobianAccumulator {
public:
    JacobianAccumulator(
        const cv::Mat &img1_,
        const cv::Mat &img2_,
        const VecVector2d &px_ref_,
        const vector<double> depth_ref_,
        Sophus::SE3d &T21_) :
        img1(img1_), img2(img2_), px_ref(px_ref_), depth_ref(depth_ref_), T21(T21_) {
        projection = VecVector2d(px_ref.size(), Eigen::Vector2d(0, 0));
    }

    /// accumulate jacobians in a range
    void accumulate_jacobian(const cv::Range &range);

    /// get hessian matrix
    Matrix6d hessian() const { return H; }

    /// get bias
    Vector6d bias() const { return b; }

    /// get total cost
    double cost_func() const { return cost; }

    /// get projected points
    VecVector2d projected_points() const { return projection; }

    /// reset h, b, cost to zero
    void reset() {
        H = Matrix6d::Zero();
        b = Vector6d::Zero();
        cost = 0;
    }

private:
    const cv::Mat &img1;
    const cv::Mat &img2;
    const VecVector2d &px_ref;
    const vector<double> depth_ref;
    Sophus::SE3d &T21;
    VecVector2d projection; // projected points

    std::mutex hessian_mutex;
    Matrix6d H = Matrix6d::Zero();
    Vector6d b = Vector6d::Zero();
    double cost = 0;
};

/**
 * pose estimation using direct method
 * @param img1
 * @param img2
 * @param px_ref
 * @param depth_ref
 * @param T21
 */
void DirectPoseEstimationMultiLayer(
    const cv::Mat &img1,
    const cv::Mat &img2,
    const VecVector2d &px_ref,
    const vector<double> depth_ref,
    Sophus::SE3d &T21
);

/**
 * pose estimation using direct method
 * @param img1
 * @param img2
 * @param px_ref
 * @param depth_ref
 * @param T21
 */
void DirectPoseEstimationSingleLayer(
    const cv::Mat &img1,
    const cv::Mat &img2,
    const VecVector2d &px_ref,
    const vector<double> depth_ref,
    Sophus::SE3d &T21
);

// bilinear interpolation
inline float GetPixelValue(const cv::Mat &img, float x, float y) {
    // boundary check
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= img.cols) x = img.cols - 1;
    if (y >= img.rows) y = img.rows - 1;
    uchar *data = &img.data[int(y) * img.step + int(x)];
    float xx = x - floor(x);
    float yy = y - floor(y);
    return float(
        (1 - xx) * (1 - yy) * data[0] +
        xx * (1 - yy) * data[1] +
        (1 - xx) * yy * data[img.step] +
        xx * yy * data[img.step + 1]
    );
}

int main(int argc, char **argv) {

    cv::Mat left_img = cv::imread(left_file, 0);
    cv::Mat disparity_img = cv::imread(disparity_file, 0);

    // let's randomly pick pixels in the first image and generate some 3d points in the first image's frame
    cv::RNG rng;
    int nPoints = 2000;
    int boarder = 20;
    VecVector2d pixels_ref;
    vector<double> depth_ref;

    // generate pixels in ref and load depth data
    for (int i = 0; i < nPoints; i++) {
        int x = rng.uniform(boarder, left_img.cols - boarder);  // don't pick pixels close to boarder
        int y = rng.uniform(boarder, left_img.rows - boarder);  // don't pick pixels close to boarder
        int disparity = disparity_img.at<uchar>(y, x);
        double depth = fx * baseline / disparity; // you know this is disparity to depth
        depth_ref.push_back(depth);
        pixels_ref.push_back(Eigen::Vector2d(x, y));
    }

    // estimates 01~05.png's pose using this information
    Sophus::SE3d T_cur_ref;

    for (int i = 1; i < 6; i++) {  // 1~5
        cv::Mat img = cv::imread((fmt_others % i).str(), 0);
        // try single layer by uncomment this line
        // DirectPoseEstimationSingleLayer(left_img, img, pixels_ref, depth_ref, T_cur_ref);
        DirectPoseEstimationMultiLayer(left_img, img, pixels_ref, depth_ref, T_cur_ref);
    }
    return 0;
}

void DirectPoseEstimationSingleLayer(
    const cv::Mat &img1,
    const cv::Mat &img2,
    const VecVector2d &px_ref,
    const vector<double> depth_ref,
    Sophus::SE3d &T21) {

    const int iterations = 10;
    double cost = 0, lastCost = 0;
    auto t1 = chrono::steady_clock::now();
    JacobianAccumulator jaco_accu(img1, img2, px_ref, depth_ref, T21);

    for (int iter = 0; iter < iterations; iter++) {
        jaco_accu.reset();
        cv::parallel_for_(cv::Range(0, px_ref.size()),
                          std::bind(&JacobianAccumulator::accumulate_jacobian, &jaco_accu, std::placeholders::_1));
        Matrix6d H = jaco_accu.hessian();
        Vector6d b = jaco_accu.bias();

        // solve update and put it into estimation
        Vector6d update = H.ldlt().solve(b);
        T21 = Sophus::SE3d::exp(update) * T21;
        cost = jaco_accu.cost_func();

        if (std::isnan(update[0])) {
            // sometimes occurred when we have a black or white patch and H is irreversible
            cout << "update is nan" << endl;
            break;
        }
        if (iter > 0 && cost > lastCost) {
            cout << "cost increased: " << cost << ", " << lastCost << endl;
            break;
        }
        if (update.norm() < 1e-3) {
            // converge
            break;
        }

        lastCost = cost;
        cout << "iteration: " << iter << ", cost: " << cost << endl;
    }

    cout << "T21 = \n" << T21.matrix() << endl;
    auto t2 = chrono::steady_clock::now();
    auto time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cout << "direct method for single layer: " << time_used.count() << endl;

    // plot the projected pixels here
    cv::Mat img2_show;
    cv::cvtColor(img2, img2_show, COLOR_GRAY2BGR);
    VecVector2d projection = jaco_accu.projected_points();
    for (size_t i = 0; i < px_ref.size(); ++i) {
        auto p_ref = px_ref[i];
        auto p_cur = projection[i];
        if (p_cur[0] > 0 && p_cur[1] > 0) {
            cv::circle(img2_show, cv::Point2f(p_cur[0], p_cur[1]), 2, cv::Scalar(0, 250, 0), 2);
            cv::line(img2_show, cv::Point2f(p_ref[0], p_ref[1]), cv::Point2f(p_cur[0], p_cur[1]),
                     cv::Scalar(0, 250, 0));
        }
    }
    cv::imshow("current", img2_show);
    cv::waitKey();
}

void JacobianAccumulator::accumulate_jacobian(const cv::Range &range) {

    // parameters
    const int half_patch_size = 1;
    int cnt_good = 0;
    Matrix6d hessian = Matrix6d::Zero();
    Vector6d bias = Vector6d::Zero();
    double cost_tmp = 0;

    for (size_t i = range.start; i < range.end; i++) {

        // compute the projection in the second image
        Eigen::Vector3d point_ref =
            depth_ref[i] * Eigen::Vector3d((px_ref[i][0] - cx) / fx, (px_ref[i][1] - cy) / fy, 1);
        Eigen::Vector3d point_cur = T21 * point_ref;
        if (point_cur[2] < 0)   // depth invalid
            continue;

        float u = fx * point_cur[0] / point_cur[2] + cx, v = fy * point_cur[1] / point_cur[2] + cy;
        if (u < half_patch_size || u > img2.cols - half_patch_size || v < half_patch_size ||
            v > img2.rows - half_patch_size)
            continue;

        projection[i] = Eigen::Vector2d(u, v);
        double X = point_cur[0], Y = point_cur[1], Z = point_cur[2],
            Z2 = Z * Z, Z_inv = 1.0 / Z, Z2_inv = Z_inv * Z_inv;
        cnt_good++;

        // and compute error and jacobian
        for (int x = -half_patch_size; x <= half_patch_size; x++)
            for (int y = -half_patch_size; y <= half_patch_size; y++) {

                double error = GetPixelValue(img1, px_ref[i][0] + x, px_ref[i][1] + y) -
                               GetPixelValue(img2, u + x, v + y);
                Matrix26d J_pixel_xi;
                Eigen::Vector2d J_img_pixel;

                J_pixel_xi(0, 0) = fx * Z_inv;
                J_pixel_xi(0, 1) = 0;
                J_pixel_xi(0, 2) = -fx * X * Z2_inv;
                J_pixel_xi(0, 3) = -fx * X * Y * Z2_inv;
                J_pixel_xi(0, 4) = fx + fx * X * X * Z2_inv;
                J_pixel_xi(0, 5) = -fx * Y * Z_inv;

                J_pixel_xi(1, 0) = 0;
                J_pixel_xi(1, 1) = fy * Z_inv;
                J_pixel_xi(1, 2) = -fy * Y * Z2_inv;
                J_pixel_xi(1, 3) = -fy - fy * Y * Y * Z2_inv;
                J_pixel_xi(1, 4) = fy * X * Y * Z2_inv;
                J_pixel_xi(1, 5) = fy * X * Z_inv;

                J_img_pixel = Eigen::Vector2d(
                    0.5 * (GetPixelValue(img2, u + 1 + x, v + y) - GetPixelValue(img2, u - 1 + x, v + y)),
                    0.5 * (GetPixelValue(img2, u + x, v + 1 + y) - GetPixelValue(img2, u + x, v - 1 + y))
                );

                // total jacobian
                Vector6d J = -1.0 * (J_img_pixel.transpose() * J_pixel_xi).transpose();

                hessian += J * J.transpose();
                bias += -error * J;
                cost_tmp += error * error;
            }
    }

    if (cnt_good) {
        // set hessian, bias and cost
        unique_lock<mutex> lck(hessian_mutex);
        H += hessian;
        b += bias;
        cost += cost_tmp / cnt_good;
    }
}

void DirectPoseEstimationMultiLayer(
    const cv::Mat &img1,
    const cv::Mat &img2,
    const VecVector2d &px_ref,
    const vector<double> depth_ref,
    Sophus::SE3d &T21) {

    // parameters
    int pyramids = 4;
    double pyramid_scale = 0.5;
    double scales[] = {1.0, 0.5, 0.25, 0.125};

    // create pyramids
    vector<cv::Mat> pyr1, pyr2; // image pyramids
    for (int i = 0; i < pyramids; i++) {
        if (i == 0) {
            pyr1.push_back(img1);
            pyr2.push_back(img2);
        } else {
            cv::Mat img1_pyr, img2_pyr;
            cv::resize(pyr1[i - 1], img1_pyr,
                       cv::Size(pyr1[i - 1].cols * pyramid_scale, pyr1[i - 1].rows * pyramid_scale));
            cv::resize(pyr2[i - 1], img2_pyr,
                       cv::Size(pyr2[i - 1].cols * pyramid_scale, pyr2[i - 1].rows * pyramid_scale));
            pyr1.push_back(img1_pyr);
            pyr2.push_back(img2_pyr);
        }
    }

    double fxG = fx, fyG = fy, cxG = cx, cyG = cy;  // backup the old values
    for (int level = pyramids - 1; level >= 0; level--) {
        VecVector2d px_ref_pyr; // set the keypoints in this pyramid level
        for (auto &px: px_ref) {
            px_ref_pyr.push_back(scales[level] * px);
        }

        // scale fx, fy, cx, cy in different pyramid levels
        fx = fxG * scales[level];
        fy = fyG * scales[level];
        cx = cxG * scales[level];
        cy = cyG * scales[level];
        DirectPoseEstimationSingleLayer(pyr1[level], pyr2[level], px_ref_pyr, depth_ref, T21);
    }
}

 

Chapters 9 and 10: Backend 1 & 2

The examples in these chapters compile without any problems.

Chapter 11: Loop Closure Detection

You first need to build and install the third-party package DBoW3, as sketched below.
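A minimal build-and-install sketch for the bundled DBoW3, assuming the standard CMake flow (the directory matches the install log shown further down):

cd ~/SLAM/slambook2/3rdparty/DBoW3
mkdir build && cd build
cmake ..
make
sudo make install   # installs headers to /usr/local/include/DBoW3 and libDBoW3.so to /usr/local/lib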

Building ch11 afterwards may still fail with the following error:


  
ROS:~/SLAM/slambook2/ch11/build$ make
Scanning dependencies of target gen_vocab
[ 16%] Building CXX object CMakeFiles/gen_vocab.dir/gen_vocab_large.cpp.o
make[2]: *** No rule to make target '/usr/local/lib/libDBoW3.a', needed by 'gen_vocab'. Stop.
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/gen_vocab.dir/all' failed
make[1]: *** [CMakeFiles/gen_vocab.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Cause:

Looking at ~/slambook2/ch11/CMakeLists.txt, we find:


  
# dbow3
# dbow3 is a simple lib so I assume you installed it in default directory
set( DBoW3_INCLUDE_DIRS "/usr/local/include" )
set( DBoW3_LIBS "/usr/local/lib/libDBoW3.a" )

But what the installation actually produced is:


  
ROS:~/SLAM/slambook2/3rdparty/DBoW3/build$ sudo make install
[sudo] password for relaybot:
[ 60%] Built target DBoW3
[ 73%] Built target create_voc_step0
[ 86%] Built target demo_general
[100%] Built target create_voc_step1
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/lib/cmake/FindDBoW3.cmake
-- Installing: /usr/local/lib/cmake/DBoW3/DBoW3Config.cmake
-- Installing: /usr/local/lib/libDBoW3.so.0.0.1
-- Installing: /usr/local/lib/libDBoW3.so.0.0
-- Installing: /usr/local/lib/libDBoW3.so
-- Installing: /usr/local/include/DBoW3/BowVector.h
-- Installing: /usr/local/include/DBoW3/DBoW3.h
-- Installing: /usr/local/include/DBoW3/Database.h
-- Installing: /usr/local/include/DBoW3/DescManip.h
-- Installing: /usr/local/include/DBoW3/FeatureVector.h
-- Installing: /usr/local/include/DBoW3/QueryResults.h
-- Installing: /usr/local/include/DBoW3/ScoringObject.h
-- Installing: /usr/local/include/DBoW3/Vocabulary.h
-- Installing: /usr/local/include/DBoW3/exports.h
-- Installing: /usr/local/include/DBoW3/quicklz.h
-- Installing: /usr/local/include/DBoW3/timers.h
-- Installing: /usr/local/bin/demo_general
-- Set runtime path of "/usr/local/bin/demo_general" to ""
-- Installing: /usr/local/bin/create_voc_step0
-- Set runtime path of "/usr/local/bin/create_voc_step0" to ""
-- Installing: /usr/local/bin/create_voc_step1
-- Set runtime path of "/usr/local/bin/create_voc_step1" to ""

The fix: in CMakeLists.txt, change /usr/local/lib/libDBoW3.a to /usr/local/lib/libDBoW3.so.
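In other words, the DBoW3 block of ch11/CMakeLists.txt becomes (only the library path changes; everything else is kept from the original snippet above):

# dbow3
# dbow3 is a simple lib so I assume you installed it in default directory
set( DBoW3_INCLUDE_DIRS "/usr/local/include" )
set( DBoW3_LIBS "/usr/local/lib/libDBoW3.so" )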

After that, everything builds fine.

Chapter 12: Mapping

Compiles normally; no issues.

Chapter 13: Practice: Designing a SLAM System

You need to build 3rdparty/googletest first; otherwise cmake reports the following error:


  
ROS:~/SLAM/slambook2/ch13/build$ cmake ..
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenCV: /usr (found suitable version "3.2.0", minimum required is "3.1")
-- Found Glog: /usr/include
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find GTest (missing: GTEST_LIBRARY GTEST_MAIN_LIBRARY)
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.10/Modules/FindGTest.cmake:196 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  CMakeLists.txt:38 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/relaybot/SLAM/slambook2/ch13/build/CMakeFiles/CMakeOutput.log"

Once googletest is installed, everything works.
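For reference, a minimal sketch of building the bundled googletest, assuming it uses the standard CMake build (the path follows the repository layout used above; after installation, CMake's FindGTest can locate GTEST_LIBRARY and GTEST_MAIN_LIBRARY):

cd ~/SLAM/slambook2/3rdparty/googletest
mkdir build && cd build
cmake ..
make
sudo make install   # installs libgtest*.a and the gtest headers under /usr/local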

Chapter 14: SLAM: Present and Future

Study the various open-source SLAM projects on your own; a recommended starting point is OpenSLAM.


Appendices A and B cover the mathematical foundations and must be mastered.

Appendix C is an introduction to ROS; refer to one of my earlier blog posts.


These notes only record the process of reproducing the book's examples on my own machine. What really matters is:

SLAM theory and practice!

SLAM theory and practice!

SLAM theory and practice!


The detailed teaching material for each chapter will be updated before the course starts.


 

Source: zhangrelay.blog.csdn.net, author: zhangrelay. Copyright belongs to the original author; please contact the author for permission to repost.

Original link: zhangrelay.blog.csdn.net/article/details/103169383
