
V-rep Learning Notes: Vision Sensors 2


  The vision sensor's property dialog also contains the following options:

  • Ignore RGB info (faster): if selected, the RGB information of the sensor (i.e. the color) will be ignored so that it can operate faster. Use this option if you only rely on the depth information of the sensor. Check this option to speed things up when only the depth image is needed.
  • Ignore depth info (faster): if selected, the depth information of the sensor will be ignored so that it can operate faster. Use this option if you do not intend to use the depth information of the sensor. Check this option when only the color image is needed.
  • Packet1 is blank (faster): if selected, then V-REP won't automatically extract specific information from acquired images, so that it can operate faster. Use this option if you do not intend to use the first packet of auxiliary values returned by API functions simReadVisionSensor or simHandleVisionSensor. Check this option when you do not need the API to extract specific information from the image (that information is stored in Packet1).

  15 auxiliary values (default): the values are calculated over all the image pixels and represent the minimum of intensity, red, green, blue, depth value, the maximum of intensity, red, green, blue, depth value, and the average of intensity, red, green, blue, depth value. On a higher resolution image, computation might slow down vision sensors, and if those values are not used, their calculation can be turned off in the vision sensor properties (Packet1 is blank (faster)). In other words, Packet1 stores statistics of the image (the minimum, maximum, and average of intensity, RGB, and depth); these are computed by traversing every pixel, so the computation becomes slow for high-resolution images:

$$Packet1 = \{I_{min},R_{min},G_{min},B_{min},D_{min},\quad I_{max},R_{max},G_{max},B_{max},D_{max},\quad I_{avg},R_{avg},G_{avg},B_{avg},D_{avg}\}$$

  The vision sensor's Z-axis points along the viewing direction, its Y-axis is the up direction, and its X-axis is perpendicular to both, pointing to the sensor's left. As shown in the figure below, with the sensor placed directly in front of the robot, the captured image matches what a human eye would see, with no mirroring. The image's X-axis runs opposite to the vision sensor's X-axis, while the Y-axes point the same way.

[Figure: vision sensor placed in front of the robot and the image it captures]

  • Depth map

  To display the depth information in a simple way, set up the filter as follows:

[Figure: depth-map filter settings]

  Build the following model in the scene: four cubes of identical size (with the Renderable property enabled), each 0.5 m on a side, at heights of 0 m, 0.25 m, 0.5 m, and 1 m above the ground. Place the camera 1 mm below the ground, set the near clipping plane to its minimum (1.00e-04 m) and the far clipping plane to 1 m; set the orthographic size of the view to 1 m and both the X and Y resolution to 2.

[Figure: scene with the four cubes and the resulting 2×2 depth map]
  Click the start-simulation button and the depth map will be shown in a floating view. Lower brightness (darker pixels, smaller depth values) means closer to the camera; higher brightness (whiter pixels, larger depth values) means farther from the camera. To obtain the actual depth values, call the simGetVisionSensorDepthBuffer function:

table depthBuffer = simGetVisionSensorDepthBuffer(number sensorHandle, number posX=0,number posY=0, number sizeX=0,number sizeY=0)

  The one-dimensional table depthBuffer holds the depth data of the specified region of the image. Note that the values in the table are not real distances but normalized ones: the point closest to the sensor maps to 0 and the farthest to 1, so if the clipping-plane positions are known, the true depth can be recovered. Returned values are in the range of 0~1 (0=closest to sensor (i.e. close clipping plane), 1=farthest from sensor (i.e. far clipping plane)). Since both the horizontal and vertical resolution of the image were set to 2, depthBuffer contains the depth of 4 pixels:

  -- pixel(1,1) distance is at depthBuffer[1]
  -- pixel(1,2) distance is at depthBuffer[2]
  -- pixel(2,1) distance is at depthBuffer[3]
  -- pixel(2,2) distance is at depthBuffer[4]

  The following code reads the depth of these 4 pixels and prints them to the status bar; the result is: 0.50, 0.25, 1.00, 0.00

if (sim_call_type==sim_childscriptcall_initialization) then

    -- Put some initialization code here
    sensor = simGetObjectHandle("Vision_sensor")

end


if (sim_call_type==sim_childscriptcall_sensing) then

    -- Put your main SENSING code here
    depthMap = simGetVisionSensorDepthBuffer(sensor)
    info = string.format("%.2f,%.2f,%.2f,%.2f",depthMap[1],depthMap[2],depthMap[3],depthMap[4])
    simAddStatusbarMessage(info)

end
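
  Since the buffer is normalized, recovering metric distances requires the clipping-plane positions. Below is a minimal sketch, assuming the clipping planes configured above (the two constants are copied from that setup, not queried from the sensor):

-- Convert the normalized depth buffer to metric distances.
-- nearClip/farClip are assumed to match the sensor settings above.
local nearClip = 1.00e-04  -- near clipping plane [m]
local farClip  = 1.0       -- far clipping plane [m]

local depthMap = simGetVisionSensorDepthBuffer(sensor)
for i = 1, #depthMap do
    local meters = nearClip + depthMap[i]*(farClip - nearClip)
    simAddStatusbarMessage(string.format("pixel %d: %.4f m", i, meters))
end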

  In addition, the specific data in Packet1 mentioned above can be read with the simReadVisionSensor function (if additional filter components return values, then they will be appended as packets to the first packet):

number result,table auxiliaryValuePacket1,table auxiliaryValuePacket2,... = simReadVisionSensor(number visionSensorHandle)

  Testing shows the minimum (Packet1[5]), maximum (Packet1[10]), and average (Packet1[15]) depth over the image are 0, 1, and 0.44 respectively; the average agrees with the four pixel values above, since (0.50+0.25+1.00+0.00)/4 = 0.4375 ≈ 0.44.
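
  A minimal sketch of reading these values in the sensing part of a child script (the indices follow the Packet1 layout listed earlier):

-- Packet1[5]/[10]/[15] are the minimum/maximum/average depth values:
result, packet1 = simReadVisionSensor(sensor)
if (packet1) then
    info = string.format("depth min=%.2f max=%.2f avg=%.2f", packet1[5], packet1[10], packet1[15])
    simAddStatusbarMessage(info)
end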

  • Color image

  Now modify the model above: color the four cubes red, green, blue, and black, and set the filter back to the default "Original image to work image → Work image to output image".

[Figure: scene with the four colored cubes and the captured 2×2 image]

  To obtain the pixels' RGB values, use the simGetVisionSensorImage function:

table/string imageBuffer = simGetVisionSensorImage(number sensorHandle, number posX=0,number posY=0,number sizeX=0,number sizeY=0,number returnType=0)

  For simGetVisionSensorImage, the returned one-dimensional table imageBuffer has length sizeX*sizeY*3, with RGB values in the range 0~1 (0 = minimum intensity, 1 = maximum intensity). For a grayscale image (sensorHandle: handle of the vision sensor. Can be combined with sim_handleflag_greyscale (simply add sim_handleflag_greyscale to sensorHandle), if you wish to retrieve the grey scale equivalent), the image buffer contains only intensity values. imageBuffer[1], imageBuffer[2], and imageBuffer[3] hold the RGB values of pixel (1,1); likewise imageBuffer[4], imageBuffer[5], and imageBuffer[6] hold the RGB values of pixel (1,2), and so on.

  Calling simGetVisionSensorImage and printing the RGB values of the 4 pixels gives:

  0.00, 0.40, 0.00 → green
  0.70, 0.00, 0.00 → red
  0.00, 0.00, 0.00 → black
  0.00, 0.00, 1.00 → blue

  The RGB components returned depend on how each object's color is configured (its diffuse, specular, and emissive components, etc.). Here the blue cube's emissive component was set to the maximum of 1, while those of the green and red cubes are fairly small, which is why the color intensities differ.
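
  A minimal sketch of the call that produces the output above (assuming the same 2×2 sensor, so imageBuffer holds 2*2*3 = 12 values):

-- Print the RGB triplet of each of the 4 pixels of the 2x2 image:
imageBuffer = simGetVisionSensorImage(sensor)
for p = 0, 3 do
    info = string.format("pixel %d: %.2f, %.2f, %.2f", p+1, imageBuffer[p*3+1], imageBuffer[p*3+2], imageBuffer[p*3+3])
    simAddStatusbarMessage(info)
end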

  The code to obtain the grayscale image is as follows:

-- Adding sim_handleflag_greyscale to the sensor handle returns intensity
-- values, one per pixel instead of an RGB triplet:
imageBuffer = simGetVisionSensorImage(sensor + sim_handleflag_greyscale)
info = string.format("%.2f,%.2f,%.2f,%.2f",imageBuffer[1],imageBuffer[2],imageBuffer[3],imageBuffer[4])
simAddStatusbarMessage(info)

  As mentioned above, some filters can also return image information; that information is stored in Packet2, Packet3, and so on (if additional filter components return values, then they will be appended as packets to the first packet). The Blob Detection filter, for instance, returns information about the connected regions in the image. In computer vision, a blob is a connected region of the image composed of pixels with similar color, texture, or other features; blob analysis binarizes the image to separate foreground from background and then runs connected-region detection to extract the blobs.

[Figure: blob detection filter settings]

  Call simReadVisionSensor to obtain the connected-region information; Packet2 will contain the following data:

Packet2 = {blob count, dataSizePerBlob, blob1 size, blob1 orientation, blob1 position x, blob1 position y, blob1 width, blob1 height, blob2 size,blob2 orientation, blob2 position x, blob2 position y, blob2 width, blob2 height,...}

  In Tools → User settings, uncheck "Hide console window" to open the console window (the print function writes its output there), then add the following code to the child script:

if (sim_call_type==sim_childscriptcall_initialization) then

    -- Put some initialization code here

    sensor = simGetObjectHandle("Vision_sensor")

end


if (sim_call_type==sim_childscriptcall_actuation) then

end


if (sim_call_type==sim_childscriptcall_sensing) then

    -- Put your main SENSING code here
    result, t0, t1 = simReadVisionSensor(sensor) 
    if (t1) then -- in t1 we should have the blob information if the camera was set-up correctly
        blobCount=t1[1]
        dataSizePerBlob=t1[2]
        print(tostring(blobCount).." blobs")

        -- Now we go through all blobs:
        for i=1,blobCount,1 do
            blobSize=t1[2+(i-1)*dataSizePerBlob+1]
            blobOrientation=t1[2+(i-1)*dataSizePerBlob+2] --This value is given in radians and represents the relative orientation of the blob's bounding box
            blobPos={t1[2+(i-1)*dataSizePerBlob+3],t1[2+(i-1)*dataSizePerBlob+4]} --{position x, position y}
            blobBoxDimensions={t1[2+(i-1)*dataSizePerBlob+5],t1[2+(i-1)*dataSizePerBlob+6]} --{width, height}
            
            print("blob"..tostring(i)..":  position:{"..tostring(blobPos[1])..","..tostring(blobPos[2]).."} orientation:"..tostring(blobOrientation*180/math.pi)) 
        end
    end

end


if (sim_call_type==sim_childscriptcall_cleanup) then

end

