Recent changes in Python + OpenCV (the official OpenCV Python tutorial still seems out of date?): notes from stitching a panorama

While studying recently, I noticed that OpenCV has changed quite a bit, and the official OpenCV Python tutorial apparently has not been updated yet, which has tripped up a lot of learners. Here is a short list of the changes; I hope it is useful.

The project here is panoramic image stitching. The SIFT and SURF APIs deserve their own article, which I may write later if there is demand; this post focuses on the problems that have confused people for a while.

I am using Python 3.x.

First, installation: you must run pip install opencv_python and then pip install opencv-contrib-python==3.3.0.10, in that order, and the second package must be pinned to that version. SIFT and SURF are patented algorithms, and newer releases such as opencv-contrib-python 3.4.5.20 no longer ship them, so we cannot call them there. If the install throws an error, don't panic; just rerun the command, and it usually succeeds.
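To check that the pinned version actually landed, a quick sanity test like the following should work (a minimal sketch; the exact version string you see may differ):

import cv2
print(cv2.__version__)              # should report 3.3.0 if the pin took effect
print(hasattr(cv2, 'xfeatures2d'))  # True means the contrib (SIFT/SURF) modules are available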

Next, the SIFT and SURF constructor calls have also changed; see this thread for details:

http://answers.opencv.org/question/52130/300-python-cv2-module-cannot-find-siftsurforb/

The change seems fairly recent, and I hit plenty of dead ends before getting it to work.

 

# This code only works after the following changes:
# sift = cv.xfeatures2d_SIFT().create()  becomes:
sift = cv2.xfeatures2d.SIFT_create()

hessian = 400
# surf = cv2.SURF(hessian)  becomes:
surf = cv2.xfeatures2d.SURF_create(hessian)
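Once created, both detectors are used the same way. A minimal usage sketch (assuming a grayscale test image 6.jpg is on disk):

import cv2
img = cv2.imread('6.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.xfeatures2d.SIFT_create()
kp, des = sift.detectAndCompute(img, None)  # keypoints plus 128-dim descriptors
print(len(kp), des.shape)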

 

Below are two code samples, adapted from other people's posts, with the error-raising parts and the points that needed fixing already corrected. I hope they help. If you find other bugs, feel free to leave a comment.

Example 1

 

Input images: 6.jpg and 7.jpg

Result image: the stitched panorama

#coding: utf-8
import numpy as np
import cv2

leftgray = cv2.imread('6.jpg')
rightgray = cv2.imread('7.jpg')

hessian = 400
surf = cv2.xfeatures2d.SURF_create(hessian)  # was: surf = cv2.SURF(hessian); Hessian threshold 400, higher means fewer features
kp1, des1 = surf.detectAndCompute(leftgray, None)   # find keypoints and descriptors
kp2, des2 = surf.detectAndCompute(rightgray, None)

FLANN_INDEX_KDTREE = 0                                       # parameters for the FLANN matcher
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)    # configure the index with 5 kd-trees
searchParams = dict(checks=50)                               # number of checks during the search
# FlannBasedMatcher: currently the fastest feature matcher (nearest-neighbor search)
flann = cv2.FlannBasedMatcher(indexParams, searchParams)     # create the matcher
matches = flann.knnMatch(des1, des2, k=2)                    # two nearest neighbors per descriptor

good = []                                # keep only the good matches
for m, n in matches:
    if m.distance < 0.7 * n.distance:    # keep the match if the nearest distance is under 0.7x the second-nearest
        good.append(m)

src_pts = np.array([kp1[m.queryIdx].pt for m in good])  # matched points in the query (left) image
dst_pts = np.array([kp2[m.trainIdx].pt for m in good])  # matched points in the train (right) image
H = cv2.findHomography(src_pts, dst_pts)                # returns (matrix, mask)

h, w = leftgray.shape[:2]
h1, w1 = rightgray.shape[:2]
shft = np.array([[1.0, 0, w], [0, 1.0, 0], [0, 0, 1.0]])
M = np.dot(shft, H[0])                   # projection from the left image into the right image's frame, shifted by w
dst_corners = cv2.warpPerspective(leftgray, M, (w * 2, h))  # perspective warp; the new canvas can hold both images
cv2.imshow('tiledImg1', dst_corners)     # show; the first image is already in position
dst_corners[0:h, w:w * 2] = rightgray    # paste the second image on the right
#cv2.imwrite('tiled.jpg', dst_corners)
cv2.imshow('tiledImg', dst_corners)
cv2.imshow('leftgray', leftgray)
cv2.imshow('rightgray', rightgray)
cv2.waitKey()
cv2.destroyAllWindows()
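The only non-obvious step above is M = np.dot(shft, H[0]): the homography maps left-image coordinates into the right image's frame, and shft then shifts the result w pixels to the right so it lands next to where rightgray is pasted into the 2*w-wide canvas. A toy check of that composition (with an identity homography standing in for the real one, and a hypothetical width w):

import numpy as np

w = 640                                   # hypothetical left-image width
H = np.eye(3)                             # identity stand-in for the real homography
shft = np.array([[1.0, 0, w], [0, 1.0, 0], [0, 0, 1.0]])
p = np.array([100.0, 50.0, 1.0])          # a point in homogeneous coordinates
q = shft @ H @ p
print(q[:2] / q[2])                       # [740. 50.]: the same point, shifted by w in x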


Example 2

Input images: test1.jpg and test2.jpg

Result image: the stitched panorama

 

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

if __name__ == '__main__':
    top, bot, left, right = 100, 100, 0, 500
    img1 = cv.imread('test1.jpg')
    img2 = cv.imread('test2.jpg')
    srcImg = cv.copyMakeBorder(img1, top, bot, left, right, cv.BORDER_CONSTANT, value=(0, 0, 0))
    testImg = cv.copyMakeBorder(img2, top, bot, left, right, cv.BORDER_CONSTANT, value=(0, 0, 0))
    img1gray = cv.cvtColor(srcImg, cv.COLOR_BGR2GRAY)
    img2gray = cv.cvtColor(testImg, cv.COLOR_BGR2GRAY)
    
    # This code only works after the following change:
    # sift = cv.xfeatures2d_SIFT().create()
    sift = cv.xfeatures2d.SIFT_create()
    
    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1gray, None)
    kp2, des2 = sift.detectAndCompute(img2gray, None)
    # FLANN parameters
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # Need to draw only good matches, so create a mask
    matchesMask = [[0, 0] for i in range(len(matches))]

    good = []
    pts1 = []
    pts2 = []
    # ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.7*n.distance:
            good.append(m)
            pts2.append(kp2[m.trainIdx].pt)
            pts1.append(kp1[m.queryIdx].pt)
            matchesMask[i] = [1, 0]

    draw_params = dict(matchColor=(0, 255, 0),
                       singlePointColor=(255, 0, 0),
                       matchesMask=matchesMask,
                       flags=0)
    img3 = cv.drawMatchesKnn(img1gray, kp1, img2gray, kp2, matches, None, **draw_params)
    plt.imshow(img3), plt.show()

    rows, cols = srcImg.shape[:2]
    MIN_MATCH_COUNT = 10
    if len(good) > MIN_MATCH_COUNT:
        src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
        warpImg = cv.warpPerspective(testImg, np.array(M), (testImg.shape[1], testImg.shape[0]), flags=cv.WARP_INVERSE_MAP)

        for col in range(0, cols):
            if srcImg[:, col].any() and warpImg[:, col].any():
                left = col
                break
        for col in range(cols-1, 0, -1):
            if srcImg[:, col].any() and warpImg[:, col].any():
                right = col
                break

        res = np.zeros([rows, cols, 3], np.uint8)
        for row in range(0, rows):
            for col in range(0, cols):
                if not srcImg[row, col].any():
                    res[row, col] = warpImg[row, col]
                elif not warpImg[row, col].any():
                    res[row, col] = srcImg[row, col]
                else:
                    srcImgLen = float(abs(col - left))
                    testImgLen = float(abs(col - right))
                    alpha = srcImgLen / (srcImgLen + testImgLen)
                    res[row, col] = np.clip(srcImg[row, col] * (1-alpha) + warpImg[row, col] * alpha, 0, 255)

        # opencv is bgr, matplotlib is rgb
        res = cv.cvtColor(res, cv.COLOR_BGR2RGB)
        # show the result
        plt.figure()
        plt.imshow(res)
        plt.show()
    else:
        print("Not enough matches are found - {}/{}".format(len(good), MIN_MATCH_COUNT))
        matchesMask = None
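A side note on Example 2: the per-pixel blending loop at the end is easy to read but slow in pure Python. An equivalent vectorized sketch, reusing the same srcImg, warpImg, left, and right variables from the script (my own rewrite, not from the original post, so treat it as a starting point):

import numpy as np

src_has = srcImg.any(axis=2)              # pixels with content in srcImg
warp_has = warpImg.any(axis=2)            # pixels with content in warpImg

# Same per-column weight as the loop: alpha = |col-left| / (|col-left| + |col-right|)
col_idx = np.arange(srcImg.shape[1], dtype=np.float64)
d_left = np.abs(col_idx - left)
d_right = np.abs(col_idx - right)
alpha = (d_left / np.maximum(d_left + d_right, 1e-9))[None, :, None]

blended = np.clip(srcImg * (1 - alpha) + warpImg * alpha, 0, 255)
res = np.where(~src_has[..., None], warpImg,
               np.where(~warp_has[..., None], srcImg, blended)).astype(np.uint8)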