
Lane Line Detection

Project description:

Design an algorithm with OpenCV that processes video recorded by a vehicle's front-facing camera, detects the lane ahead of the vehicle, and computes the lane curvature.

Project code on GitHub: https://github.com/yang1688899/CarND-Advanced-Lane-Lines

Here are the results on a few test images:

[image: lane detection results on test images]


Implementation steps:

  • Compute the camera calibration matrix and distortion coefficients from a provided set of chessboard images.
  • Undistort the images.
  • Apply gradient thresholds, color thresholds, etc. to produce a binary image that clearly captures the lane lines.
  • Apply a perspective transform to get a bird's-eye view of the binary image.
  • Detect the pixels belonging to the lane lines and use them to find the lane boundaries.
  • Compute the lane curvature and the vehicle's position relative to the lane center.
  • Annotate the image with the lane area, the lane curvature, and the vehicle position.

Camera Calibration

Here we use OpenCV to compute the camera calibration matrix and distortion coefficients from a set of chessboard images. First we need the world coordinates of the chessboard's inner corners ("object points") and the corresponding image coordinates ("image points"). Assuming the chessboard lies in the (x, y) plane so that the z coordinate of every inner corner is 0, the object points are identical for every image in the set. The image points can then be obtained by passing a grayscale version of each chessboard image, together with the number of inner corners per row and column, to cv2.findChessboardCorners().


import cv2
import numpy as np

def get_obj_img_points(images, grid=(9, 6)):
    object_points = []
    img_points = []
    for img in images:
        # Generate the object points (identical for every chessboard image)
        object_point = np.zeros((grid[0]*grid[1], 3), np.float32)
        object_point[:, :2] = np.mgrid[0:grid[0], 0:grid[1]].T.reshape(-1, 2)
        # Convert to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the image points (inner-corner coordinates)
        ret, corners = cv2.findChessboardCorners(gray, grid, None)
        if ret:
            object_points.append(object_point)
            img_points.append(corners)
    return object_points, img_points

Passing the object_points and img_points obtained above to cv2.calibrateCamera() yields the camera calibration matrix and distortion coefficients; cv2.undistort() can then be used to obtain the undistorted image.

def cal_undistort(img, objpoints, imgpoints):
    # img.shape[1::-1] is the image size as (width, height)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    dst = cv2.undistort(img, mtx, dist, None, mtx)
    return dst
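
Note that cal_undistort recalibrates from scratch on every call. Since the calibration depends only on the chessboard points, a practical variant (a sketch, not the project's code) computes the calibration once and reuses it; the object_points, img_points, and test_imgs here are the ones produced in the next section:

# Sketch: calibrate once, then reuse mtx/dist for every frame
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, img_points, test_imgs[0].shape[1::-1], None, None)

def undistort(img):
    # Reuse the cached calibration instead of recomputing it per image
    return cv2.undistort(img, mtx, dist, None, mtx)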

Below is a before/after comparison for one of the chessboard images:

[image: chessboard image before and after undistortion]

Undistorting the Test Images

The code is as follows:

# Load the chessboard images
cal_imgs = utils.get_images_by_dir('camera_cal')
# Compute object_points and img_points
object_points, img_points = utils.calibrate(cal_imgs, grid=(9, 6))
# Load the test images
test_imgs = utils.get_images_by_dir('test_images')

# Undistort the test images
undistorted = []
for img in test_imgs:
    img = utils.cal_undistort(img, object_points, img_points)
    undistorted.append(img)

Test images before and after undistortion: [image: test images before/after undistortion]

Thresholding

Here we apply gradient thresholds, color thresholds, etc. to the undistorted images to capture the pixels where the lane lines are. (Here, "gradient" refers to the gradient of color change.)

The following function uses cv2.Sobel() to compute the color-change derivative along the x or y axis and thresholds it to produce a binary image:

def abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
    # Convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Use cv2.Sobel() to compute the derivative in the x or y direction
    if orient == 'x':
        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    if orient == 'y':
        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1))
    # Scale to 8 bit, then apply the threshold
    scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
    binary_output = np.zeros_like(scaled_sobel)
    binary_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1

    return binary_output

Testing shows that thresholding the x-direction gradient between 35 and 100 produces a binary image that captures the lane lines fairly clearly:

x_thresh = utils.abs_sobel_thresh(img, orient='x', thresh_min=35, thresh_max=100)

Below is a before/after comparison of this filter applied to the test images: [image: x-gradient threshold results on test images]

The drawback of this method is that when the road surface is relatively light and the lane line is yellow, the lane line is not captured at all (third, sixth, and seventh images); in other situations it captures the lane lines quite well.

Next, let's try thresholding on the overall gradient magnitude:

def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):
    # Convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Take both Sobel x and y gradients
    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
    sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
    # Calculate the gradient magnitude
    gradmag = np.sqrt(sobelx**2 + sobely**2)
    # Rescale to 8 bit
    scale_factor = np.max(gradmag)/255 
    gradmag = (gradmag/scale_factor).astype(np.uint8) 
    # Create a binary image of ones where threshold is met, zeros otherwise
    binary_output = np.zeros_like(gradmag)
    binary_output[(gradmag >= mag_thresh[0]) & (gradmag <= mag_thresh[1])] = 1

    # Return the binary image
    return binary_output

mag_thresh = utils.mag_thresh(img, sobel_kernel=9, mag_thresh=(50, 100))

[image: gradient-magnitude threshold results on test images]

The result is still unsatisfactory (see the third, sixth, and seventh images). When the road surface is relatively light and the lane line is yellow, the color gradient is small; to capture the lane lines the lower threshold would have to be reduced, but that would also pick up a large number of noise pixels and make the result even worse.

What about a color threshold? Below is thresholding on the S channel of the HLS color space:

def hls_select(img,channel='s',thresh=(0, 255)):
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    if channel=='h':
        channel = hls[:,:,0]
    elif channel=='l':
        channel=hls[:,:,1]
    else:
        channel=hls[:,:,2]
    binary_output = np.zeros_like(channel)
    binary_output[(channel > thresh[0]) & (channel <= thresh[1])] = 1
    return binary_output

s_thresh = utils.hls_select(img, channel='s', thresh=(180, 255))

[image: HLS S-channel threshold results on test images]

In the regions where the road surface is relatively light and the lane line is yellow, the lane lines are still captured clearly; elsewhere, however, the results are poor (fourth and eighth images).

To cope with varying road conditions, several thresholding methods therefore need to be combined.

Below is the final thresholding combination:

def thresholding(img):
    x_thresh = utils.abs_sobel_thresh(img, orient='x', thresh_min=10, thresh_max=230)
    mag_thresh = utils.mag_thresh(img, sobel_kernel=3, mag_thresh=(30, 150))
    dir_thresh = utils.dir_threshold(img, sobel_kernel=3, thresh=(0.7, 1.3))
    hls_thresh = utils.hls_select(img, thresh=(180, 255))
    lab_thresh = utils.lab_select(img, thresh=(155, 200))
    luv_thresh = utils.luv_select(img, thresh=(225, 255))
    # Thresholding combination
    thresholded = np.zeros_like(x_thresh)
    thresholded[((x_thresh == 1) & (mag_thresh == 1)) | ((dir_thresh == 1) & (hls_thresh == 1)) | (lab_thresh == 1) | (luv_thresh == 1)] = 1

    return thresholded
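
The combination above calls three helpers that live in the project's utils module but are not shown in this article: utils.dir_threshold (gradient-direction threshold), utils.lab_select (B channel of the LAB color space, which responds strongly to yellow), and utils.luv_select (L channel of the LUV color space, which responds strongly to white). Below is a minimal sketch of plausible implementations, following the same pattern as abs_sobel_thresh and hls_select; see the repo for the actual code:

def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
    # Threshold on the gradient direction: arctan(|dy| / |dx|)
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
    sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
    absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))
    binary_output = np.zeros_like(absgraddir, dtype=np.uint8)
    binary_output[(absgraddir >= thresh[0]) & (absgraddir <= thresh[1])] = 1
    return binary_output

def lab_select(img, thresh=(0, 255)):
    # B channel of LAB: high values correspond to yellow
    b_channel = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)[:, :, 2]
    binary_output = np.zeros_like(b_channel)
    binary_output[(b_channel > thresh[0]) & (b_channel <= thresh[1])] = 1
    return binary_output

def luv_select(img, thresh=(0, 255)):
    # L channel of LUV: high values correspond to white
    l_channel = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)[:, :, 0]
    binary_output = np.zeros_like(l_channel)
    binary_output[(l_channel > thresh[0]) & (l_channel <= thresh[1])] = 1
    return binary_output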

[image: combined threshold results on test images]

Perspective Transform

這裡使用"cv2.getPerspectiveTransform()"來獲取變形矩陣(tranform matrix),把閾值過濾後的二進位制圖片變形為鳥撒視角。

Below are the chosen source points and destination points:

Source        Destination
585, 460      320, 0
203, 720      320, 720
1127, 720     960, 720
695, 460      960, 0

Define a function to obtain the transform matrix and its inverse:

def get_M_Minv():
    src = np.float32([[(203, 720), (585, 460), (695, 460), (1127, 720)]])
    dst = np.float32([[(320, 720), (320, 0), (960, 0), (960, 720)]])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    return M, Minv

然後使用"cv2.warpPerspective()"傳入相關值獲得變形圖片(wrapped image)

thresholded_warped = cv2.warpPerspective(thresholded, M, img.shape[1::-1], flags=cv2.INTER_LINEAR)

Original image and warped result: [image: original vs. bird's-eye view]

Thresholded binary image after warping: [image: warped thresholded binary images]

Detecting the Lane Boundaries

The binary image above still contains some noise pixels, so to detect the lane boundaries accurately we first need to determine which pixels belong to the lane lines.

First we locate the base points of the lane lines (the x coordinates where the lanes appear at the bottom of the image). Since lane-line pixels are concentrated within a limited range along the x axis, we split the image in half: the peaks of the pixel distributions along x in the left and right halves are very likely the lane-line base points.

Below is the pixel distribution along the x axis for a test image:

[image: histogram of pixel counts along the x axis]
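
In code, this amounts to summing the bottom half of the warped binary image column-wise and taking the argmax of each half (the same lines open find_line below):

# Column-wise pixel counts over the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])              # left lane base point
rightx_base = np.argmax(histogram[midpoint:]) + midpoint  # right lane base point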

Once the base points are located, sliding-window polynomial fitting is used to find the lane boundaries. Here, nine 200px-wide sliding windows are used to locate the pixels of each lane line:

def find_line(binary_warped):
    # Take a histogram of the bottom half of the image
    histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
    # Find the peak of the left and right halves of the histogram
    # These will be the starting point for the left and right lines
    midpoint = histogram.shape[0]//2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    
    # Choose the number of sliding windows
    nwindows = 9
    # Set height of windows
    window_height = binary_warped.shape[0]//nwindows
    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    # Current positions to be updated for each window
    leftx_current = leftx_base
    rightx_current = rightx_base
    # Set the width of the windows +/- margin
    margin = 100
    # Set minimum number of pixels found to recenter window
    minpix = 50
    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []
    
    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = binary_warped.shape[0] - (window+1)*window_height
        win_y_high = binary_warped.shape[0] - window*window_height
        win_xleft_low = leftx_current - margin
        win_xleft_high = leftx_current + margin
        win_xright_low = rightx_current - margin
        win_xright_high = rightx_current + margin
        # Identify the nonzero pixels in x and y within the window
        good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & 
        (nonzerox >= win_xleft_low) &  (nonzerox < win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & 
        (nonzerox >= win_xright_low) &  (nonzerox < win_xright_high)).nonzero()[0]
        # Append these indices to the lists
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)
        # If you found > minpix pixels, recenter next window on their mean position
        if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
    
    # Concatenate the arrays of indices
    left_lane_inds = np.concatenate(left_lane_inds)
    right_lane_inds = np.concatenate(right_lane_inds)
    
    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds] 
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds] 
    
    # Fit a second order polynomial to each
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    
    return left_fit, right_fit, left_lane_inds, right_lane_inds

Below are the results of the sliding-window polynomial fit:

[image: sliding-window fit visualized on warped binary images]
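
When processing video, re-running the sliding windows on every frame is unnecessary: once a fit exists, the next frame can simply be searched within a margin around the previous polynomials. A minimal sketch of this optimization (the repo implements an equivalent helper; the names here are illustrative):

def find_line_by_previous(binary_warped, left_fit, right_fit, margin=100):
    # Select lane pixels within +/- margin of the previous frame's fits
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) &
                      (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin)))
    right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) &
                       (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin)))
    # Refit second-order polynomials to the newly selected pixels
    left_fit = np.polyfit(nonzeroy[left_lane_inds], nonzerox[left_lane_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_lane_inds], nonzerox[right_lane_inds], 2)
    return left_fit, right_fit, left_lane_inds, right_lane_inds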

Calculating Lane Curvature and Vehicle Position Relative to Lane Center

The fit coefficients returned by find_line (left_fit, right_fit) are used to compute the lane curvature and the vehicle's position relative to the lane center:
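
For a second-order fit x = Ay² + By + C, the radius of curvature at a point y follows the standard formula, which the code below evaluates at the bottom of the image after converting pixels to meters:

R = (1 + (2Ay + B)²)^(3/2) / |2A|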

def calculate_curv_and_pos(binary_warped,left_fit, right_fit):
    # Define y-value where we want radius of curvature
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
    leftx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    rightx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    # Define conversions in x and y from pixels space to meters
    ym_per_pix = 30/720 # meters per pixel in y dimension
    xm_per_pix = 3.7/700 # meters per pixel in x dimension
    y_eval = np.max(ploty)
    # Fit new polynomials to x,y in world space
    left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)
    right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)
    # Calculate the new radii of curvature
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
    
    curvature = ((left_curverad + right_curverad) / 2)
    #print(curvature)
    # Measure lane width and positions on the bottom row of the image (y = 719)
    lane_width = np.absolute(leftx[719] - rightx[719])
    lane_xm_per_pix = 3.7 / lane_width
    veh_pos = (((leftx[719] + rightx[719]) * lane_xm_per_pix) / 2.)
    # Assume the camera sits at the vehicle's centerline, i.e. the image midpoint
    cen_pos = ((binary_warped.shape[1] * lane_xm_per_pix) / 2.)
    distance_from_center = veh_pos - cen_pos
    return curvature,distance_from_center

Processing the Original Image and Displaying the Information

Use the inverse transform matrix to warp the lane detected in the bird's-eye binary image back onto the original image and highlight the lane area:

def draw_area(undist,binary_warped,Minv,left_fit, right_fit):
    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    # Create an image to draw the lines on
    warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
    
    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    
    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
    
    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (undist.shape[1], undist.shape[0])) 
    # Combine the result with the original image
    result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)
    return result

使用"cv2.putText()"方法處理原圖展示車道曲率及車輛相對車道中心位置資訊:

def draw_values(img, curvature, distance_from_center):
    font = cv2.FONT_HERSHEY_SIMPLEX
    radius_text = "Radius of Curvature: %sm" % (round(curvature))

    if distance_from_center > 0:
        pos_flag = 'right'
    else:
        pos_flag = 'left'

    cv2.putText(img, radius_text, (100, 100), font, 1, (255, 255, 255), 2)
    center_text = "Vehicle is %.3fm %s of center" % (abs(distance_from_center), pos_flag)
    cv2.putText(img, center_text, (100, 150), font, 1, (255, 255, 255), 2)
    return img
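
Putting it all together, a single frame can be processed end to end roughly as follows (a sketch chaining the functions defined above; it assumes the calibration results computed earlier):

# Sketch: full single-image pipeline using the functions defined above
undist = utils.cal_undistort(img, object_points, img_points)
thresholded = thresholding(undist)
M, Minv = get_M_Minv()
thresholded_warped = cv2.warpPerspective(thresholded, M, img.shape[1::-1], flags=cv2.INTER_LINEAR)
left_fit, right_fit, left_lane_inds, right_lane_inds = find_line(thresholded_warped)
curvature, distance_from_center = calculate_curv_and_pos(thresholded_warped, left_fit, right_fit)
result = draw_area(undist, thresholded_warped, Minv, left_fit, right_fit)
result = draw_values(result, curvature, distance_from_center)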

Below are the results on the test images:

[image: final annotated test images]