
Image Feature Point Matching (Video Quality Diagnosis, Frame Shake Detection)

In video quality diagnosis we often need to detect "frame shake". To do this, we sample one frame out of every N frames of the video, detect feature points on the two most recently sampled frames, and then match them against each other.
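As a rough sketch of that sampling step (the video file name test.avi and the interval N below are placeholders, not values from the original setup), one frame out of every N can be grabbed with cv::VideoCapture and paired with the previously sampled frame:

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main()
{
	// Open the video to be diagnosed (placeholder file name)
	VideoCapture cap("test.avi");
	if (!cap.isOpened())
		return -1;

	const int N = 10;                    // sampling interval (assumed value)
	Mat frame, currSample, prevSample;
	int frameIdx = 0;

	while (cap.read(frame))
	{
		if (frameIdx % N == 0)
		{
			prevSample = currSample.clone();   // keep the previous sample
			frame.copyTo(currSample);          // store the new sample
			if (!prevSample.empty())
			{
				// prevSample and currSample are the two frames that the
				// feature detection / matching code below would compare
			}
		}
		++frameIdx;
	}
	return 0;
}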

Of course, many problems arise in this process, for example mismatched feature points.

This article focuses on feature point matching and on methods for removing mismatched points.

Main task: two photos are taken of the same object, where the second image differs only by rotation and scale. We extract feature points from both images and then match them so that the points correspond to each other as closely as possible.

Below, this article uses SURF features and matches the feature points with both a brute-force matcher (BFMatcher) and a Flann-based matcher.

1. BFMatcher (brute-force matcher)

The first piece of code is adapted from the tutorial on the official OpenCV website:

#include "stdafx.h"
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;


int _tmain(int argc, _TCHAR* argv[])
{
	Mat img_1 = imread( "haha1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
	Mat img_2 = imread( "haha2.jpg", CV_LOAD_IMAGE_GRAYSCALE );

	if( !img_1.data || !img_2.data )
	{ return -1; }

	//-- Step 1: Detect the keypoints using SURF Detector
	//Threshold for hessian keypoint detector used in SURF
	int minHessian = 15000;

	SurfFeatureDetector detector( minHessian );

	std::vector<KeyPoint> keypoints_1, keypoints_2;

	detector.detect( img_1, keypoints_1 );
	detector.detect( img_2, keypoints_2 );

	//-- Step 2: Calculate descriptors (feature vectors)
	SurfDescriptorExtractor extractor;

	Mat descriptors_1, descriptors_2;

	extractor.compute( img_1, keypoints_1, descriptors_1 );
	extractor.compute( img_2, keypoints_2, descriptors_2 );

	//-- Step 3: Matching descriptor vectors with a brute force matcher
	BFMatcher matcher(NORM_L2,false);
	vector< DMatch > matches;
	matcher.match( descriptors_1, descriptors_2, matches );
	
	//-- Draw matches
	Mat img_matches;
	drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );

	//-- Show detected matches
	imshow("Matches", img_matches );

	waitKey(0);

	return 0;
}

Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets.

The above is the documentation description of BFMatcher. In the code above I deliberately set the SURF Hessian threshold very high (15000); otherwise the picture is completely covered with match lines and unreadable. The result of running the code above:

As the figure shows, there are many mismatches. The book defines two kinds of matching errors:

False-positive matches: the feature points are present in both images, but the correspondence between them is wrong;

False-negative matches: a feature point is missing from one of the images, so the correct correspondence is lost.

We only care about the first case. There are two solutions. The first is to set the second argument of the BFMatcher constructor to true, which enables a cross-match filter.

BFMatcher matcher(NORM_L2,true);  

The idea, as the documentation puts it, is "to match train descriptors with the query set and vice versa. Only common matches for these two matches are returned. Such techniques usually produce best results with minimal number of outliers when there are enough matches."

In other words, the descriptors are matched in both directions and a match is kept only when the two directions agree. When there are enough matching feature points, this usually produces the best result with the fewest outliers.
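For reference, the cross check can also be written out by hand. The snippet below is only a sketch of that symmetric test (not the library's internal implementation) and reuses descriptors_1 and descriptors_2 from the code above:

	// Match in both directions and keep only the pairs on which both agree
	BFMatcher crossMatcher(NORM_L2, false);
	vector<DMatch> matches12, matches21, symMatches;

	crossMatcher.match(descriptors_1, descriptors_2, matches12);
	crossMatcher.match(descriptors_2, descriptors_1, matches21);

	for (size_t i = 0; i < matches12.size(); i++)
	{
		const DMatch& forward = matches12[i];
		if (forward.trainIdx < (int)matches21.size())
		{
			const DMatch& backward = matches21[forward.trainIdx];
			// Accept the match only if the backward match points to the same pair
			if (backward.queryIdx == forward.trainIdx &&
				backward.trainIdx == forward.queryIdx)
				symMatches.push_back(forward);
		}
	}

symMatches then plays the same role as the matches returned when crossCheck is set to true.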

Result:


As you can see, there are fewer mismatched line segments than in the first figure.

2. Flann-based matcher

The Flann-based matcher uses a fast approximate nearest neighbor search algorithm to find correspondences (internally it relies on FLANN, a fast third-party library for approximate nearest neighbors).

Usage:

FlannBasedMatcher matcher1;
matcher1.match(descriptors_1, descriptors_2, matches );
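If more control over the index is needed, the Flann-based matcher can also be constructed with explicit index and search parameters. The snippet below is only a sketch and its values are illustrative defaults rather than tuned settings (the kd-tree index expects floating-point descriptors, which SURF produces):

FlannBasedMatcher matcher2(new flann::KDTreeIndexParams(4),  // 4 randomized kd-trees
                           new flann::SearchParams(32));     // 32 leaf checks per query
vector<DMatch> flannMatches;
matcher2.match(descriptors_1, descriptors_2, flannMatches);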

Result:


Next comes the second method for removing mismatched points: KNN matching.

We perform KNN matching first with K=2. Two nearest descriptors are returned for each match. The match is returned only if the distance ratio between the first and second matches is big enough (the ratio threshold is usually near two).


#include "stdafx.h"
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;


int _tmain(int argc, _TCHAR* argv[])
{
	Mat img_1 = imread( "test.jpg", CV_LOAD_IMAGE_GRAYSCALE );
	Mat img_2 = imread( "test1.jpg", CV_LOAD_IMAGE_GRAYSCALE );

	if( !img_1.data || !img_2.data )
	{ return -1; }

	//-- Step 1: Detect the keypoints using SURF Detector
	//Threshold for hessian keypoint detector used in SURF
	int minHessian = 1500;

	SurfFeatureDetector detector( minHessian );

	std::vector<KeyPoint> keypoints_1, keypoints_2;

	detector.detect( img_1, keypoints_1 );
	detector.detect( img_2, keypoints_2 );

	//-- Step 2: Calculate descriptors (feature vectors)
	SurfDescriptorExtractor extractor;

	Mat descriptors_1, descriptors_2;

	extractor.compute( img_1, keypoints_1, descriptors_1 );
	extractor.compute( img_2, keypoints_2, descriptors_2 );

	//-- Step 3: Match descriptor vectors with a brute force matcher,
	//   using k-nearest-neighbour matching (k = 2) so a ratio test can be applied
	BFMatcher matcher(NORM_L2, false);
	vector< DMatch > matches;                   // matches that pass the ratio test
	vector< vector< DMatch > > matches2;        // raw kNN matches, 2 per query descriptor

	// Keep a match only if its distance is at most 1/1.5 of the second-best distance
	const float minRatio = 1.f / 1.5f;
	matcher.knnMatch( descriptors_1, descriptors_2, matches2, 2 );
	for (size_t i=0; i<matches2.size(); i++)
	{
		const cv::DMatch& bestMatch       = matches2[i][0];
		const cv::DMatch& secondBestMatch = matches2[i][1];
		float distanceRatio = bestMatch.distance / secondBestMatch.distance;
		// Keep the match only if the best match is clearly better than the
		// second best, i.e. the second-best distance is at least 1.5x larger
		if (distanceRatio < minRatio)
		{
			matches.push_back(bestMatch);
		}
	}

	//-- Draw matches
	Mat img_matches;
	drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );

	//-- Show detected matches
	imshow("Matches", img_matches );

	waitKey(0);

	return 0;
}

Here I set the SURF Hessian threshold to 1500. Result:


A homography transform can be used to refine the result further:

findHomography computes the optimal 3x3 projective transformation (homography) matrix H between corresponding 2D point pairs, using either a least-squares method or RANSAC.

	//-- Refine: estimate a homography with RANSAC and keep only the inlier matches
	const size_t minNumberMatchesAllowed = 8;

	// Not enough point pairs for a meaningful homography estimate
	if (matches.size() < minNumberMatchesAllowed)
		return -1;
	// Prepare data for cv::findHomography
	std::vector<cv::Point2f> srcPoints(matches.size());
	std::vector<cv::Point2f> dstPoints(matches.size());

	for (size_t i = 0; i < matches.size(); i++)
	{
		// queryIdx indexes keypoints_1 (the query set passed to knnMatch),
		// trainIdx indexes keypoints_2 (the train set)
		srcPoints[i] = keypoints_1[matches[i].queryIdx].pt;
		dstPoints[i] = keypoints_2[matches[i].trainIdx].pt;
	}

	// Find homography matrix and get inliers mask
	std::vector<unsigned char> inliersMask(srcPoints.size());
	Mat homography = findHomography(srcPoints, dstPoints, CV_RANSAC, 3.0, inliersMask);

	std::vector<cv::DMatch> inliers;
	for (size_t i=0; i<inliersMask.size(); i++)
	{
		if (inliersMask[i])
			inliers.push_back(matches[i]);
	}

	matches.swap(inliers);

This snippet simply continues on from the previous code. Result:
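As a possible follow-up (a sketch only, not part of the original code), the inliers that survived the homography filter can be redrawn, and for the frame shake scenario the translation terms of the estimated 3x3 matrix give a rough idea of how far the picture has shifted between the two sampled frames:

	// Draw the matches that survived the RANSAC homography filter
	Mat img_refined;
	drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_refined );
	imshow("Refined matches", img_refined);

	// findHomography returns a 3x3 CV_64F matrix; its last column contains the
	// translation terms, a rough indicator of the shift between the two frames
	if (!homography.empty())
	{
		double dx = homography.at<double>(0, 2);
		double dy = homography.at<double>(1, 2);
		cout << "approximate shift: (" << dx << ", " << dy << ")" << endl;
	}
	waitKey(0);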