OpenCV C++: ORB Feature Detection and Matching
阿新 • Published: 2018-12-23
Feature points of an image can be loosely understood as its visually salient points: contour points, bright points inside darker regions, dark points inside brighter regions, and so on.
ORB stands for Oriented FAST and Rotated BRIEF. It uses the FAST (Features from Accelerated Segment Test) algorithm to detect keypoints.
Like BRISK and AKAZE, ORB works in two stages: keypoint detection and keypoint description. Detection is derived from the FAST algorithm, which operates on the gray values around a candidate point: it examines a circle of pixels surrounding the candidate, and if enough pixels in that neighborhood differ sufficiently in gray value from the candidate, the candidate is declared a keypoint. The descriptor is an improved version of BRIEF (Binary Robust Independent Elementary Features).
ORB combines the FAST detector with the BRIEF descriptor, improving and optimizing both. The ORB algorithm is reportedly about 100 times faster than SIFT and about 10 times faster than SURF.
ORB was designed to address two weaknesses of BRIEF: sensitivity to noise and the lack of rotation invariance.
For the theory in detail, see the following two articles:
Reference: https://blog.csdn.net/gaotihong/article/details/78712017
Reference: https://blog.csdn.net/guoyunfei20/article/details/78792770
Code:
```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

Mat img1, img2;
void ORB_demo(int, void*);

int main(int argc, char** argv)
{
    img1 = imread("D:/test/box.png");
    img2 = imread("D:/test/box_in_scene.png");
    if (!img1.data || !img2.data) {
        cout << "Image not found!" << endl;
        return -1;
    }
    namedWindow("ORB_demo", WINDOW_AUTOSIZE);
    ORB_demo(0, 0);
    imshow("input image of box", img1);
    imshow("input image of box_in_scene", img2);
    waitKey(0);
    return 0;
}

/*--------------- Detection and matching --------------*/
void ORB_demo(int, void*)
{
    double t1 = getTickCount();

    // Keypoint detection (keep at most 400 keypoints)
    Ptr<ORB> detector = ORB::create(400);
    vector<KeyPoint> keypoints_obj;
    vector<KeyPoint> keypoints_scene;

    // Descriptors
    Mat descriptor_obj, descriptor_scene;

    // Detect keypoints and compute descriptors in one call
    detector->detectAndCompute(img1, Mat(), keypoints_obj, descriptor_obj);
    detector->detectAndCompute(img2, Mat(), keypoints_scene, descriptor_scene);

    double t2 = getTickCount();
    double t = (t2 - t1) * 1000 / getTickFrequency();

    // Feature matching: ORB descriptors are binary, so FLANN needs an LSH index
    FlannBasedMatcher fbmatcher(makePtr<flann::LshIndexParams>(20, 10, 2));
    vector<DMatch> matches;
    fbmatcher.match(descriptor_obj, descriptor_scene, matches);

    // Find the min/max match distances
    double minDist = 1000;
    double maxDist = 0;
    for (int i = 0; i < descriptor_obj.rows; i++) {
        double dist = matches[i].distance;
        if (dist < minDist) minDist = dist;
        if (dist > maxDist) maxDist = dist;
    }

    // Keep only the best matches
    vector<DMatch> goodmatches;
    for (int i = 0; i < descriptor_obj.rows; i++) {
        double dist = matches[i].distance;
        if (dist < max(2 * minDist, 0.02)) {
            goodmatches.push_back(matches[i]);
        }
    }

    Mat orbImg;
    drawMatches(img1, keypoints_obj, img2, keypoints_scene, goodmatches, orbImg,
                Scalar::all(-1), Scalar::all(-1), vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //---------- Mark the located object with a rectangle ----------
    vector<Point2f> obj;
    vector<Point2f> scene;
    for (size_t i = 0; i < goodmatches.size(); i++) {
        obj.push_back(keypoints_obj[goodmatches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[goodmatches[i].trainIdx].pt);
    }

    // findHomography needs at least 4 point pairs
    if (goodmatches.size() >= 4) {
        // Estimate the perspective (homography) matrix
        Mat H = findHomography(obj, scene, RANSAC);

        vector<Point2f> obj_corner(4);
        vector<Point2f> scene_corner(4);
        obj_corner[0] = Point2f(0, 0);
        obj_corner[1] = Point2f((float)img1.cols, 0);
        obj_corner[2] = Point2f((float)img1.cols, (float)img1.rows);
        obj_corner[3] = Point2f(0, (float)img1.rows);

        // Project the object corners into the scene image
        perspectiveTransform(obj_corner, scene_corner, H);

        // Shift by img1.cols because orbImg shows the two images side by side
        Mat resultImg = orbImg.clone();
        for (int i = 0; i < 4; i++) {
            line(resultImg, scene_corner[i] + Point2f((float)img1.cols, 0),
                 scene_corner[(i + 1) % 4] + Point2f((float)img1.cols, 0),
                 Scalar(0, 0, 255), 2, 8, 0);
        }
        imshow("result image", resultImg);
    }

    cout << "ORB runtime: " << t << " ms" << endl;
    cout << "min distance: " << minDist << endl;
    cout << "max distance: " << maxDist << endl;
    imshow("ORB_demo", orbImg);
}
```
Here I also print the runtime of ORB keypoint detection and descriptor computation: about 1000 ms, i.e. roughly 1 s, as shown in the figure below:
The detection and matching results: