
[Feature Matching] ORB: Principle and Source-Code Analysis


To meet real-time requirements, earlier posts covered the fast keypoint detector FAST and the binary descriptor BRIEF. The ORB algorithm described here combines the speed advantages of FAST and BRIEF and improves on both; moreover, ORB is free to use.

Ethan Rublee et al. proposed ORB in the 2011 paper "ORB: An Efficient Alternative to SIFT or SURF". It combines FAST and BRIEF, adds an orientation to each FAST keypoint so that the descriptor becomes rotation invariant, and builds an image pyramid to handle scale, although the paper does not elaborate on the pyramid construction. The paper's experiments show ORB is far faster than SIFT and SURF while achieving comparable matching quality.

-------------------------------------------------------------------------------------------------------------------------------

Overview of the paper's core ideas:

1. Build an image pyramid and extract FAST keypoints on every level; score them with the Harris corner response, sort by response, and keep the top N keypoints.

2. oFAST: compute a dominant orientation for each keypoint by the intensity-centroid method: within the circular neighborhood of radius r around the keypoint, compute the intensity centroid; the vector from the patch center to the centroid defines the keypoint's dominant orientation.

Define the patch moments, with x, y ∈ [-r, r]:

$$m_{pq} = \sum_{x,y} x^p y^q I(x,y)$$

Centroid position:

$$C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$$

Dominant orientation:

$$\theta = \operatorname{atan2}(m_{01},\ m_{10})$$
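For intuition, here is a minimal sketch of this computation; the function name centroidOrientation is ours, and it uses a square patch for simplicity rather than the circular patch used by the OpenCV code analyzed below:

// Minimal sketch of the intensity-centroid orientation (hypothetical helper,
// not OpenCV API). img must be CV_8UC1 and the (2r+1)x(2r+1) patch around pt
// must lie fully inside the image.
#include <opencv2/core/core.hpp>
#include <cmath>

static float centroidOrientation(const cv::Mat& img, cv::Point pt, int r)
{
    double m01 = 0, m10 = 0;
    for (int v = -r; v <= r; ++v)
        for (int u = -r; u <= r; ++u)
        {
            uchar I = img.at<uchar>(pt.y + v, pt.x + u);
            m10 += u * I;  // intensity weighted by x offset
            m01 += v * I;  // intensity weighted by y offset
        }
    return (float)(std::atan2(m01, m10) * 180.0 / CV_PI);  // angle in degrees
}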

3. rBRIEF: to gain rotation invariance, each keypoint's patch is first rotated to its dominant orientation (steered BRIEF). Experiments show, however, that the resulting descriptor bits have means far from 0.5 and are strongly correlated across dimensions, so the descriptor loses discriminative power and matching quality suffers. The paper therefore learns the binary tests from about 300k training keypoints. For each keypoint, a patch of size wp = 31 is taken, and every candidate test compares the mean intensities of two sub-windows of size wt = 5, giving N = (wp - wt) * (wp - wt) sub-windows per patch;

choosing two of the N sub-windows to compare forms one candidate bit of the descriptor, and the paper enumerates M = 205590 such tests:

       ---------------------------------------------------------------------------------

1. Run all M tests on every training patch, yielding an M-dimensional binary vector per patch (each dimension is 0 or 1);

2. Order the M tests by the distance of their mean (over all patches) from 0.5, forming a vector T;

3. Greedy search: move the first test of T into a result set R; then take the next test of T and compute its correlation with every test already in R; if any correlation exceeds a threshold, discard the test, otherwise append it to R;

4. Repeat step 3 until R holds 256 tests; if T is exhausted with fewer than 256 tests, raise the threshold and run the search again (a sketch of this loop follows the list);

       ----------------------------------------------------------------------------------
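A hedged sketch of the greedy selection loop, assuming tests is already sorted by distance of the mean from 0.5 and that corr() (a stand-in, not a real library call) returns the correlation of two tests measured over the training set:

#include <vector>

struct BinaryTest { int x1, y1, x2, y2; };  // the two sub-window positions

double corr(const BinaryTest& a, const BinaryTest& b);  // assumed available

std::vector<BinaryTest> greedySelect(const std::vector<BinaryTest>& tests,
                                     double threshold)
{
    std::vector<BinaryTest> R;
    for (size_t i = 0; i < tests.size() && R.size() < 256; ++i)
    {
        bool tooCorrelated = false;
        for (size_t j = 0; j < R.size(); ++j)
            if (corr(tests[i], R[j]) > threshold) { tooCorrelated = true; break; }
        if (!tooCorrelated)
            R.push_back(tests[i]);
    }
    return R;  // if R.size() < 256, the caller raises the threshold and retries
}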

rBRIEF: the 256 test pairs obtained this way have low inter-dimension correlation and good discriminative power;

                                        

(Figure: distribution of descriptor bit means, before training vs. after training.)

---------------------------------------------------------------------------------------------------------------------------------

ORB algorithm steps, following the OpenCV source:

1. Build the scale pyramid;

The pyramid has n levels; unlike SIFT, each level holds a single image;

The scale of level s is

$$scale_s = Factor^{\,s}$$

where Factor is the scale factor (default 1.2) and the original image sits at level 0;

The image size at level s is

$$Size_s = \left(\frac{W}{scale_s}\right) \times \left(\frac{H}{scale_s}\right)$$

where W x H is the size of the original image.
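A small sketch that prints each level's size under the default Factor = 1.2 (the example dimensions W, H are ours):

#include <cmath>
#include <cstdio>

int main()
{
    const double factor = 1.2;                 // default scale factor
    const int W = 640, H = 480, nlevels = 8;   // example input size and levels
    for (int s = 0; s < nlevels; ++s)
    {
        double scale = std::pow(factor, (double)s);  // scale of level s
        std::printf("level %d: %d x %d\n", s,
                    (int)(W / scale + 0.5), (int)(H / scale + 0.5));
    }
    return 0;
}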

2. Detect FAST keypoints at every scale. On each level, the number of keypoints n to extract is computed by the allocation formula shown below; the detections are first sorted by FAST response and the top 2n kept, then re-sorted by Harris response, keeping the top n as that level's keypoints;
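The allocation formula is the geometric series implemented in computeKeyPoints below: with $f = 1/scaleFactor$ and $L$ levels, level 0 is assigned

$$n_0 = N\,\frac{1-f}{1-f^{L}}$$

and each higher level receives $f$ times the count of the level below it, so the per-level counts sum (up to rounding) to the requested total $N$; the remainder goes to the top level.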

3. Compute each keypoint's dominant orientation (centroid method);

4. Rotate each keypoint's patch to its dominant orientation and run the τ test on the optimal 256 point pairs selected by the training procedure in step 3 above, forming a 256-bit descriptor that occupies 32 bytes;

$$\tau(p;\,x,\,y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & p(x) \ge p(y) \end{cases} \qquad f_n(p) = \sum_{1 \le i \le n} 2^{\,i-1}\,\tau(p;\,x_i,\,y_i), \quad n = 256$$

5. Match descriptors by Hamming distance;
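Matching speed comes from the fact that the Hamming distance reduces to an XOR plus a popcount; a minimal sketch for two 32-byte ORB descriptors:

#include <bitset>

typedef unsigned char uchar;

int hamming32(const uchar a[32], const uchar b[32])
{
    int dist = 0;
    for (int i = 0; i < 32; ++i)
        dist += (int)std::bitset<8>(a[i] ^ b[i]).count();  // popcount of XOR
    return dist;
}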

---------- OpenCV Source-Code Analysis -------------------------------------------------------

ORB class definition, located in ..\features2d.hpp. Constructor parameters:

nfeatures: total number of keypoints to retain;

scaleFactor: pyramid scale factor;

nlevels: number of pyramid levels;

edgeThreshold: border margin in which no features are detected;

firstLevel: the pyramid level that holds the original image;

WTA_K: how each descriptor element is formed; WTA_K = 2 means plain pairwise comparison;

scoreType: corner response used for ranking, either Harris or FAST;

patchSize: size of the neighborhood patch around each keypoint;

/*!
 ORB implementation.
*/
class CV_EXPORTS_W ORB : public Feature2D
{
public:
    // the size of the signature in bytes
    enum { kBytes = 32, HARRIS_SCORE=0, FAST_SCORE=1 };

    CV_WRAP explicit ORB(int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31,  // constructor
        int firstLevel = 0, int WTA_K=2, int scoreType=ORB::HARRIS_SCORE, int patchSize=31 );

    // returns the descriptor size in bytes
    int descriptorSize() const;   // descriptor size in bytes, 32 by default
    // returns the descriptor type
    int descriptorType() const;  // descriptor type: 8-bit integer

    // Compute the ORB features and descriptors on an image
    void operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints) const;

    // Compute the ORB features and descriptors on an image
    void operator()( InputArray image, InputArray mask, vector<KeyPoint>& keypoints,    // detect keypoints and compute descriptors
                     OutputArray descriptors, bool useProvidedKeypoints=false ) const;

    AlgorithmInfo* info() const;

protected:

    void computeImpl( const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors ) const;  // compute descriptors
    void detectImpl( const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const;  // detect keypoints

    CV_PROP_RW int nfeatures;      // total number of keypoints
    CV_PROP_RW double scaleFactor; // pyramid scale factor
    CV_PROP_RW int nlevels;        // number of pyramid levels
    CV_PROP_RW int edgeThreshold;  // border margin
    CV_PROP_RW int firstLevel;     // level of the original image
    CV_PROP_RW int WTA_K;          // descriptor element formation; default 2, pairwise comparison
    CV_PROP_RW int scoreType;      // corner response type
    CV_PROP_RW int patchSize;      // neighborhood patch size
};
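A hedged usage sketch against this OpenCV 2.x interface (the image file names are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("img1.png", 0);   // load as grayscale
    cv::Mat img2 = cv::imread("img2.png", 0);

    cv::ORB orb(500, 1.2f, 8);                  // nfeatures, scaleFactor, nlevels
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb(img1, cv::Mat(), kp1, desc1);           // detect and describe
    orb(img2, cv::Mat(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-checked Hamming matching
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    return 0;
}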

Keypoint extraction and descriptor formation: this function detects FAST keypoints on an image and/or computes their descriptors.

_image: input image;

_mask: mask image;

_keypoints: the detected (or provided) keypoints;

_descriptors: if empty, only keypoints are detected and no descriptors are computed;

_useProvidedKeypoints: if true, the function only computes descriptors for the provided keypoints;
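A short sketch of the two call modes these flags produce (the input img is assumed to be a valid CV_8UC1 image):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

void demoCallModes(const cv::Mat& img)
{
    cv::ORB orb;
    std::vector<cv::KeyPoint> kps;
    cv::Mat descriptors;

    orb(img, cv::Mat(), kps);                     // detect only, no descriptors
    orb(img, cv::Mat(), kps, descriptors, true);  // useProvidedKeypoints = true:
                                                  // describe the given kps only
}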

/** Compute the ORB features and descriptors on an image
 * @param img the image to compute the features and descriptors on
 * @param mask the mask to apply
 * @param keypoints the resulting keypoints
 * @param descriptors the resulting descriptors
 * @param do_keypoints if true, the keypoints are computed, otherwise used as an input
 * @param do_descriptors if true, also computes the descriptors
 */
void ORB::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                      OutputArray _descriptors, bool useProvidedKeypoints) const
{
    CV_Assert(patchSize >= 2);

    bool do_keypoints = !useProvidedKeypoints;
    bool do_descriptors = _descriptors.needed();

    if( (!do_keypoints && !do_descriptors) || _image.empty() )
        return;

    //ROI handling
    const int HARRIS_BLOCK_SIZE = 9;  // border needed by the Harris response
    int halfPatchSize = patchSize / 2;  // patch radius
    int border = std::max(edgeThreshold, std::max(halfPatchSize, HARRIS_BLOCK_SIZE/2))+1;  // take the largest required border

    Mat image = _image.getMat(), mask = _mask.getMat();
    if( image.type() != CV_8UC1 )
        cvtColor(_image, image, CV_BGR2GRAY);  // convert to grayscale

    int levelsNum = this->nlevels;  // number of pyramid levels

    if( !do_keypoints )   // keypoints were provided, skip detection
    {
        // if we have pre-computed keypoints, they may use more levels than it is set in parameters
        // !!!TODO!!! implement more correct method, independent from the used keypoint detector.
        // Namely, the detector should provide correct size of each keypoint. Based on the keypoint size
        // and the algorithm used (i.e. BRIEF, running on 31x31 patches) we should compute the approximate
        // scale-factor that we need to apply. Then we should cluster all the computed scale-factors and
        // for each cluster compute the corresponding image.
        //
        // In short, ultimately the descriptor should
        // ignore octave parameter and deal only with the keypoint size.
        levelsNum = 0;
        for( size_t i = 0; i < _keypoints.size(); i++ )
            levelsNum = std::max(levelsNum, std::max(_keypoints[i].octave, 0));  // highest level used by the provided keypoints
        levelsNum++;
    }

    // Pre-compute the scale pyramids
    vector<Mat> imagePyramid(levelsNum), maskPyramid(levelsNum);  // build the scale pyramid
    for (int level = 0; level < levelsNum; ++level)
    {
        float scale = 1/getScale(level, firstLevel, scaleFactor);  // scale of this level
        /*
        static inline float getScale(int level, int firstLevel, double scaleFactor)
        {
            return (float)std::pow(scaleFactor, (double)(level - firstLevel));
        }
        */
        Size sz(cvRound(image.cols*scale), cvRound(image.rows*scale));  // image size of this level
        Size wholeSize(sz.width + border*2, sz.height + border*2);
        Mat temp(wholeSize, image.type()), masktemp;
        imagePyramid[level] = temp(Rect(border, border, sz.width, sz.height));
        if( !mask.empty() )
        {
            masktemp = Mat(wholeSize, mask.type());
            maskPyramid[level] = masktemp(Rect(border, border, sz.width, sz.height));
        }

        // Compute the resized image
        if( level != firstLevel )    // compute this level's image
        {
            if( level < firstLevel )
            {
                resize(image, imagePyramid[level], sz, 0, 0, INTER_LINEAR);
                if (!mask.empty())
                    resize(mask, maskPyramid[level], sz, 0, 0, INTER_LINEAR);
            }
            else
            {
                resize(imagePyramid[level-1], imagePyramid[level], sz, 0, 0, INTER_LINEAR);
                if (!mask.empty())
                {
                    resize(maskPyramid[level-1], maskPyramid[level], sz, 0, 0, INTER_LINEAR);
                    threshold(maskPyramid[level], maskPyramid[level], 254, 0, THRESH_TOZERO);
                }
            }

            copyMakeBorder(imagePyramid[level], temp, border, border, border, border,  // pad the image border
                           BORDER_REFLECT_101+BORDER_ISOLATED);
            if (!mask.empty())
                copyMakeBorder(maskPyramid[level], masktemp, border, border, border, border,
                               BORDER_CONSTANT+BORDER_ISOLATED);
        }
        else
        {
            copyMakeBorder(image, temp, border, border, border, border,  // pad all four borders
                           BORDER_REFLECT_101);
            if( !mask.empty() )
                copyMakeBorder(mask, masktemp, border, border, border, border,
                               BORDER_CONSTANT+BORDER_ISOLATED);
        }
    }

    // Pre-compute the keypoints (we keep the best over all scales, so this has to be done beforehand)
    vector < vector<KeyPoint> > allKeypoints;
    if( do_keypoints )  // detect keypoints
    {
        // Get keypoints, those will be far enough from the border that no check will be required for the descriptor
        computeKeyPoints(imagePyramid, maskPyramid, allKeypoints,  // detect keypoints on every level, see analysis (1) below
                         nfeatures, firstLevel, scaleFactor,
                         edgeThreshold, patchSize, scoreType);

        // make sure we have the right number of keypoints
        /*vector<KeyPoint> temp;

        for (int level = 0; level < n_levels; ++level)
        {
            vector<KeyPoint>& keypoints = all_keypoints[level];
            temp.insert(temp.end(), keypoints.begin(), keypoints.end());
            keypoints.clear();
        }

        KeyPoint::retainBest(temp, n_features_);

        for (vector<KeyPoint>::iterator keypoint = temp.begin(),
             keypoint_end = temp.end(); keypoint != keypoint_end; ++keypoint)
            all_keypoints[keypoint->octave].push_back(*keypoint);*/
    }
    else  // keypoints were provided
    {
        // Remove keypoints very close to the border
        KeyPointsFilter::runByImageBorder(_keypoints, image.size(), edgeThreshold);

        // Cluster the input keypoints depending on the level they were computed at
        allKeypoints.resize(levelsNum);
        for (vector<KeyPoint>::iterator keypoint = _keypoints.begin(),
             keypointEnd = _keypoints.end(); keypoint != keypointEnd; ++keypoint)
            allKeypoints[keypoint->octave].push_back(*keypoint);    // bucket the provided keypoints by level

        // Make sure we rescale the coordinates
        for (int level = 0; level < levelsNum; ++level)   // rescale keypoint coordinates to each level
        {
            if (level == firstLevel)
                continue;

            vector<KeyPoint> & keypoints = allKeypoints[level];
            float scale = 1/getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;   // rescale
        }
    }

    Mat descriptors;        
    vector<Point> pattern;

    if( do_descriptors )  // compute descriptors
    {
        int nkeypoints = 0;
        for (int level = 0; level < levelsNum; ++level)
            nkeypoints += (int)allKeypoints[level].size();  // total keypoints over all levels
        if( nkeypoints == 0 )
            _descriptors.release();
        else
        {
            _descriptors.create(nkeypoints, descriptorSize(), CV_8U);  // one descriptor row per keypoint
            descriptors = _descriptors.getMat();
        }

        const int npoints = 512;  // 512 points = 256 pairs, giving a 256-bit (32-byte) descriptor
        Point patternbuf[npoints];
        const Point* pattern0 = (const Point*)bit_pattern_31_;  // positions of the 256 learned point pairs

        if( patchSize != 31 )
        {
            pattern0 = patternbuf;
            makeRandomPattern(patchSize, patternbuf, npoints);
        }

        CV_Assert( WTA_K == 2 || WTA_K == 3 || WTA_K == 4 );

        if( WTA_K == 2 )  // WTA_K == 2: plain pairwise comparison
            std::copy(pattern0, pattern0 + npoints, std::back_inserter(pattern));
        else
        {
            int ntuples = descriptorSize()*4;
            initializeOrbPattern(pattern0, pattern, ntuples, WTA_K, npoints);
        }
    }

    _keypoints.clear();
    int offset = 0;
    for (int level = 0; level < levelsNum; ++level)  // compute descriptors level by level
    {
        // Get the features and compute their orientation
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypoints = (int)keypoints.size();  // keypoints at this level

        // Compute the descriptors
        if (do_descriptors)
        {
            Mat desc;
            if (!descriptors.empty())
            {
                desc = descriptors.rowRange(offset, offset + nkeypoints);
            }

            offset += nkeypoints;  // row offset into the descriptor matrix
            // preprocess the resized image
            Mat& workingMat = imagePyramid[level];
            //boxFilter(working_mat, working_mat, working_mat.depth(), Size(5,5), Point(-1,-1), true, BORDER_REFLECT_101);
            GaussianBlur(workingMat, workingMat, Size(7, 7), 2, 2, BORDER_REFLECT_101);  // Gaussian smoothing
            computeDescriptors(workingMat, keypoints, desc, pattern, descriptorSize(), WTA_K);  // compute this level's descriptors, see (3)
        }

        // Copy to the output data
        if (level != firstLevel)  // map keypoint coordinates back to the original image
        {
            float scale = getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale; 
        }
        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());  // append this level's keypoints to the output
    }
}

(1) Keypoint detection: computeKeyPoints

imagePyramid: the scale pyramid built above

/** Compute the ORB keypoints on an image
 * @param image_pyramid the image pyramid to compute the features and descriptors on
 * @param mask_pyramid the masks to apply at every level
 * @param keypoints the resulting keypoints, clustered per level
 */
static void computeKeyPoints(const vector<Mat>& imagePyramid,
                             const vector<Mat>& maskPyramid,
                             vector<vector<KeyPoint> >& allKeypoints,
                             int nfeatures, int firstLevel, double scaleFactor,
                             int edgeThreshold, int patchSize, int scoreType )
{
    int nlevels = (int)imagePyramid.size();  // number of pyramid levels
    vector<int> nfeaturesPerLevel(nlevels);

    // fill the extractors and descriptors for the corresponding scales
    float factor = (float)(1.0 / scaleFactor);
    float ndesiredFeaturesPerScale = nfeatures*(1 - factor)/(1 - (float)pow((double)factor, (double)nlevels));  // features desired at level 0 (geometric series)

    int sumFeatures = 0;
    for( int level = 0; level < nlevels-1; level++ )   // allocate a feature budget to each level
    {
        nfeaturesPerLevel[level] = cvRound(ndesiredFeaturesPerScale);
        sumFeatures += nfeaturesPerLevel[level];
        ndesiredFeaturesPerScale *= factor;
    }
    nfeaturesPerLevel[nlevels-1] = std::max(nfeatures - sumFeatures, 0);  // the top level takes the remainder

    // Make sure we forget about what is too close to the boundary
    //edge_threshold_ = std::max(edge_threshold_, patch_size_/2 + kKernelWidth / 2 + 2);

    // pre-compute the end of a row in a circular patch
    int halfPatchSize = patchSize / 2;           // precompute the circular patch boundary for each keypoint
    vector<int> umax(halfPatchSize + 2);
    int v, v0, vmax = cvFloor(halfPatchSize * sqrt(2.f) / 2 + 1);
    int vmin = cvCeil(halfPatchSize * sqrt(2.f) / 2);
    for (v = 0; v <= vmax; ++v)           // largest in-circle column offset per row
        umax[v] = cvRound(sqrt((double)halfPatchSize * halfPatchSize - v * v));
    // Make sure we are symmetric
    for (v = halfPatchSize, v0 = 0; v >= vmin; --v)
    {
        while (umax[v0] == umax[v0 + 1])
            ++v0;
        umax[v] = v0;
        ++v0;
    }

    allKeypoints.resize(nlevels);

    for (int level = 0; level < nlevels; ++level)
    {
        int featuresNum = nfeaturesPerLevel[level];
        allKeypoints[level].reserve(featuresNum*2);

        vector<KeyPoint> & keypoints = allKeypoints[level];

        // Detect FAST features, 20 is a good threshold
        FastFeatureDetector fd(20, true);      
        fd.detect(imagePyramid[level], keypoints, maskPyramid[level]);  // FAST detection

        // Remove keypoints very close to the border
        KeyPointsFilter::runByImageBorder(keypoints, imagePyramid[level].size(), edgeThreshold);  // drop keypoints too close to the border

        if( scoreType == ORB::HARRIS_SCORE )
        {
            // Keep more points than necessary as FAST does not give amazing corners
            KeyPointsFilter::retainBest(keypoints, 2 * featuresNum);  // sort by FAST response, keep the top 2*featuresNum

            // Compute the Harris cornerness (better scoring than FAST)
            HarrisResponses(imagePyramid[level], keypoints, 7, HARRIS_K);  // compute each corner's Harris response
        }

        //cull to the final desired level, using the new Harris scores or the original FAST scores.
        KeyPointsFilter::retainBest(keypoints, featuresNum);  // sort by Harris (or FAST) response, keep the top featuresNum

        float sf = getScale(level, firstLevel, scaleFactor);

        // Set the level of the coordinates
        for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
             keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
        {
            keypoint->octave = level;  // pyramid level
            keypoint->size = patchSize*sf;  // keypoint size scaled to the original image
        }

        computeOrientation(imagePyramid[level], keypoints, halfPatchSize, umax);  // compute orientations, see analysis (2)
    }
}
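For intuition, with the default patchSize = 31 (halfPatchSize = 15) the first loop fills, for each row offset v, the largest column offset that stays inside the circle of radius 15:

$$u_{max}[v] = \operatorname{round}\!\left(\sqrt{15^2 - v^2}\right), \qquad 0 \le v \le v_{max} = \lfloor 15\sqrt{2}/2 + 1 \rfloor = 11$$

and the second loop mirrors these values so that the discretized circle is exactly symmetric in u and v.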
(2) Compute each keypoint's dominant orientation (intensity-centroid method):
static void computeOrientation(const Mat& image, vector<KeyPoint>& keypoints,
                               int halfPatchSize, const vector<int>& umax)
{
    // Process each keypoint
    for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),  // compute the dominant orientation of every keypoint
         keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
    {
        keypoint->angle = IC_Angle(image, halfPatchSize, keypoint->pt, umax);  // intensity-centroid angle
    }
}
static float IC_Angle(const Mat& image, const int half_k, Point2f pt,
                      const vector<int> & u_max)
{
    int m_01 = 0, m_10 = 0;

    const uchar* center = &image.at<uchar> (cvRound(pt.y), cvRound(pt.x));

    // Treat the center line differently, v=0
    for (int u = -half_k; u <= half_k; ++u)
        m_10 += u * center[u];

    // Go line by line in the circular patch
    int step = (int)image.step1();
    for (int v = 1; v <= half_k; ++v)    // process the symmetric rows +v and -v together
    {
        // Proceed over the two lines
        int v_sum = 0;
        int d = u_max[v];
        for (int u = -d; u <= d; ++u)
        {
            int val_plus = center[u + v*step], val_minus = center[u - v*step];
            v_sum += (val_plus - val_minus);  // for m_01 the two rows differ only in sign
            m_10 += u * (val_plus + val_minus);
        }
        m_01 += v * v_sum;  // accumulate m_01 over the row pair
    }

    return fastAtan2((float)m_01, (float)m_10);  // orientation angle in degrees
}
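The row-pair trick in IC_Angle follows from splitting the moment sums over the symmetric rows $+v$ and $-v$: for each $v \ge 1$,

$$m_{10} \mathrel{+}= \sum_{u} u\,[I(u,v) + I(u,-v)], \qquad m_{01} \mathrel{+}= v \sum_{u} [I(u,v) - I(u,-v)]$$

while the center row $v = 0$ contributes only to $m_{10}$, which is why it is handled separately first.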
(3) Descriptor computation
static void computeDescriptors(const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors,
                               const vector<Point>& pattern, int dsize, int WTA_K)
{
    //convert to grayscale if more than one color
    CV_Assert(image.type() == CV_8UC1);
    //create the descriptor mat, keypoints.size() rows, BYTES cols
    descriptors = Mat::zeros((int)keypoints.size(), dsize, CV_8UC1);

    for (size_t i = 0; i < keypoints.size(); i++)
        computeOrbDescriptor(keypoints[i], image, &pattern[0], descriptors.ptr((int)i), dsize, WTA_K);
}
static void computeOrbDescriptor(const KeyPoint& kpt,
                                 const Mat& img, const Point* pattern,
                                 uchar* desc, int dsize, int WTA_K)
{
    float angle = kpt.angle; 
    //angle = cvFloor(angle/12)*12.f;
    angle *= (float)(CV_PI/180.f);
    float a = (float)cos(angle), b = (float)sin(angle);

    const uchar* center = &img.at<uchar>(cvRound(kpt.pt.y), cvRound(kpt.pt.x));
    int step = (int)img.step;

#if 1
    // value of one pixel after rotating the pattern point by the keypoint angle
    #define GET_VALUE(idx) \
        center[cvRound(pattern[idx].x*b + pattern[idx].y*a)*step + \
               cvRound(pattern[idx].x*a - pattern[idx].y*b)]
#else
    float x, y;
    int ix, iy;
    // rotated pattern point, sampled with bilinear interpolation
    #define GET_VALUE(idx) \
        (x = pattern[idx].x*a - pattern[idx].y*b, \
        y = pattern[idx].x*b + pattern[idx].y*a, \
        ix = cvFloor(x), iy = cvFloor(y), \
        x -= ix, y -= iy, \
        cvRound(center[iy*step + ix]*(1-x)*(1-y) + center[(iy+1)*step + ix]*(1-x)*y + \
                center[iy*step + ix+1]*x*(1-y) + center[(iy+1)*step + ix+1]*x*y))
#endif

    if( WTA_K == 2 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 16)  // each descriptor is 32 bytes long
        {
            int t0, t1, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1);
            val = t0 < t1;
            t0 = GET_VALUE(2); t1 = GET_VALUE(3);
            val |= (t0 < t1) << 1;
            t0 = GET_VALUE(4); t1 = GET_VALUE(5);
            val |= (t0 < t1) << 2;
            t0 = GET_VALUE(6); t1 = GET_VALUE(7);
            val |= (t0 < t1) << 3;
            t0 = GET_VALUE(8); t1 = GET_VALUE(9);
            val |= (t0 < t1) << 4;
            t0 = GET_VALUE(10); t1 = GET_VALUE(11);
            val |= (t0 < t1) << 5;
            t0 = GET_VALUE(12); t1 = GET_VALUE(13);
            val |= (t0 < t1) << 6;
            t0 = GET_VALUE(14); t1 = GET_VALUE(15);
            val |= (t0 < t1) << 7;

            desc[i] = (uchar)val;
        }
    }
    else if( WTA_K == 3 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 12)
        {
            int t0, t1, t2, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1); t2 = GET_VALUE(2);
            val = t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0);

            t0 = GET_VALUE(3); t1 = GET_VALUE(4); t2 = GET_VALUE(5);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 2;

            t0 = GET_VALUE(6); t1 = GET_VALUE(7); t2 = GET_VALUE(8);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 4;

            t0 = GET_VALUE(9); t1 = GET_VALUE(10); t2 = GET_VALUE(11);
            val |= (t2 > t1 ? (t2 > t0 ? 2 : 0) : (t1 > t0)) << 6;

            desc[i] = (uchar)val;
        }
    }
    else if( WTA_K == 4 )
    {
        for (int i = 0; i < dsize; ++i, pattern += 16)
        {
            int t0, t1, t2, t3, u, v, k, val;
            t0 = GET_VALUE(0); t1 = GET_VALUE(1);
            t2 = GET_VALUE(2); t3 = GET_VALUE(3);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val = k;

            t0 = GET_VALUE(4); t1 = GET_VALUE(5);
            t2 = GET_VALUE(6); t3 = GET_VALUE(7);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 2;

            t0 = GET_VALUE(8); t1 = GET_VALUE(9);
            t2 = GET_VALUE(10); t3 = GET_VALUE(11);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 4;

            t0 = GET_VALUE(12); t1 = GET_VALUE(13);
            t2 = GET_VALUE(14); t3 = GET_VALUE(15);
            u = 0, v = 2;
            if( t1 > t0 ) t0 = t1, u = 1;
            if( t3 > t2 ) t2 = t3, v = 3;
            k = t0 > t2 ? u : v;
            val |= k << 6;

            desc[i] = (uchar)val;
        }
    }
    else
        CV_Error( CV_StsBadSize, "Wrong WTA_K. It can be only 2, 3 or 4." );

    #undef GET_VALUE
}

References:

Ethan Rublee et al., "ORB: An Efficient Alternative to SIFT or SURF", ICCV 2011.

http://www.cnblogs.com/ronny/p/4083537.html