
Viewing Camera Device Information with librealsense2


1. Reading Device Information with librealsense2

The librealsense2 API can detect all RealSense devices connected to the system (e.g., an SR300) and, for each device, read its serial number, the intrinsics of the color and depth cameras (the intrinsic matrix parameters, the distortion coefficients, and the distortion model), and the transformation matrix between the color and depth camera coordinate systems.
The API is used as follows:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <iostream>
#include <cstdio>
#include <cstring>
#include <opencv2/opencv.hpp>

int main(int argc, char * argv[]) try
{
    rs2::log_to_console(RS2_LOG_SEVERITY_ERROR);
    /// Create a librealsense context for managing devices
    rs2::context ctx;
    auto devs = ctx.query_devices();                          ///Get the device list
    int device_num = devs.size();
    std::cout << "device num: " << device_num << std::endl;   ///Number of devices

    ///Only one device is connected here, so we inspect device 0.
    ///Throws an rs2::error exception if no device is connected.
    rs2::device dev = devs[0];

    ///Each device has a unique serial number; with several devices connected,
    ///this number identifies the device you want to start.
    char serial_number[100] = {0};
    strcpy(serial_number, dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
    printf("serial_number: %s\n", serial_number);

    ///Configuration object for the depth and color streams fetched from the device pipeline
    rs2::config cfg;
    ///Configure the color stream: resolution 640*480, format BGR, 30 fps.
    ///By default any device is configured; to configure a specific device,
    ///pass its index in the device list:
    ///int indx = 0; ///device 0
    ///cfg.enable_stream(RS2_STREAM_COLOR, indx, 640, 480, RS2_FORMAT_BGR8, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    ///Configure the depth stream: resolution 640*480, format Z16, 30 fps
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

    ///Create the RealSense pipeline that wraps the actual camera device
    rs2::pipeline pipe;
    pipe.start(cfg);  ///Start the camera pipeline with the given configuration

    rs2::frameset data;
    data = pipe.wait_for_frames();  ///Wait for one frame; the default timeout is 5 s
    rs2::depth_frame depth = data.get_depth_frame();  ///Get the depth image data
    rs2::video_frame color = data.get_color_frame();  ///Get the color image data
    rs2::stream_profile dprofile = depth.get_profile();
    rs2::stream_profile cprofile = color.get_profile();

    ///Get the color camera intrinsics
    rs2::video_stream_profile cvsprofile(cprofile);
    rs2_intrinsics color_intrin = cvsprofile.get_intrinsics();
    std::cout << "\ncolor intrinsics: ";
    std::cout << color_intrin.width << " " << color_intrin.height << " ";
    std::cout << color_intrin.ppx << " " << color_intrin.ppy << " ";
    std::cout << color_intrin.fx << " " << color_intrin.fy << std::endl;
    std::cout << "coeffs: ";
    for (auto value : color_intrin.coeffs)
        std::cout << value << " ";
    std::cout << std::endl;
    std::cout << "distortion model: " << color_intrin.model << std::endl;  ///Distortion model

    ///Get the depth camera intrinsics
    rs2::video_stream_profile dvsprofile(dprofile);
    rs2_intrinsics depth_intrin = dvsprofile.get_intrinsics();
    std::cout << "\ndepth intrinsics: ";
    std::cout << depth_intrin.width << " " << depth_intrin.height << " ";
    std::cout << depth_intrin.ppx << " " << depth_intrin.ppy << " ";
    std::cout << depth_intrin.fx << " " << depth_intrin.fy << std::endl;
    std::cout << "coeffs: ";
    for (auto value : depth_intrin.coeffs)
        std::cout << value << " ";
    std::cout << std::endl;
    std::cout << "distortion model: " << depth_intrin.model << std::endl;  ///Distortion model

    ///Get the extrinsics from the depth camera to the color camera,
    ///i.e. the transform P_color = R * P_depth + T
    rs2_extrinsics extrin = dprofile.get_extrinsics_to(cprofile);
    std::cout << "\nextrinsics of depth camera to color camera: \nrotation: " << std::endl;
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            float value = extrin.rotation[3*i + j];
            std::cout << value << " ";
        }
        std::cout << std::endl;
    }
    std::cout << std::endl;
    std::cout << "translation: ";
    for (auto value : extrin.translation)
        std::cout << value << " ";
    std::cout << std::endl;

    while (1)
    {
        ///Wait for the next frame; the default timeout is 5 s
        data = pipe.wait_for_frames();
        rs2::depth_frame depth = data.get_depth_frame();  ///Get the depth image data
        rs2::video_frame color = data.get_color_frame();  ///Get the color image data
        int color_width = color.get_width();
        int color_height = color.get_height();
        int depth_width = depth.get_width();
        int depth_height = depth.get_height();
        if (!color || !depth) break;  ///Exit if no data is received

        ///Wrap the color and depth images as OpenCV matrices
        cv::Mat image(cv::Size(color_width, color_height), CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
        cv::Mat depthmat(cv::Size(depth_width, depth_height), CV_16U, (void*)depth.get_data(), cv::Mat::AUTO_STEP);

        ///Display
        cv::imshow("image", image);
        cv::imshow("depth", depthmat);
        cv::waitKey(1);
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    ///Catch exceptions from the camera device
    std::cerr << "RealSense error calling " << e.get_failed_function()
              << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << "Other error : " << e.what() << std::endl;
    return EXIT_FAILURE;
}
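To build the example on Linux, something along these lines should work, assuming librealsense2 and OpenCV are installed system-wide (the binary name rs_info and the opencv4 pkg-config module name are assumptions; adjust them for your installation):

g++ main.cpp -o rs_info -std=c++11 -lrealsense2 `pkg-config --cflags --libs opencv4`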

Compiling and running the code above prints the relevant parameters to the screen, as shown below:
[Figure: console output showing the device count, serial number, color/depth intrinsics with distortion coefficients and models, and the depth-to-color extrinsics]
From the output we can see that one device is connected, with serial number 541142001592, along with the intrinsics, distortion coefficients, and distortion models of the color and depth cameras. The color camera's distortion model is None and all of its distortion coefficients are 0, meaning the color camera is modeled as distortion-free; the depth camera's distortion model is Inverse Brown-Conrady with nonzero coefficients, meaning the depth camera is distorted. The distortion coefficients are, from left to right: k1, k2, p1, p2, k3. In the rsutil.h file under the librealsense2 installation path, the function rs2_project_point_to_pixel, which projects a 3D point onto the 2D image plane, carries the comment "//assert(intrin->model != RS2_DISTORTION_INVERSE_BROWN_CONRADY); // Cannot project to an inverse-distorted image". In other words, for the device I am using, 3D points cannot be projected directly onto the depth camera's 2D image plane. For background on camera intrinsics and distortion models, see my other post on camera intrinsics and distortion models.
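As a minimal guard sketch against this limitation (depth_intrin is the variable from the listing in section 1; the message text is just an example):

///Guard sketch: the depth stream of this device uses the inverse
///Brown-Conrady model, so rs2_project_point_to_pixel should not be
///used to project 3D points into the depth image.
if (depth_intrin.model == RS2_DISTORTION_INVERSE_BROWN_CONRADY)
    std::cout << "depth image is inverse-distorted; project into the color image instead" << std::endl;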

2. RealSense Projection and Deprojection Functions

In the rsutil.h file under the librealsense2 installation path you can find the camera's projection and deprojection functions. The projection function maps a 3D point in the camera coordinate system to pixel coordinates on the 2D image plane; the deprojection function, given a pixel on the 2D image plane and its corresponding depth value, computes the 3D coordinates of that point in the camera coordinate system. Both functions are reproduced below.
Projection function:

/* Given a point in 3D space, compute the corresponding pixel coordinates in an image with no distortion or forward distortion coefficients produced by the same camera */
static void rs2_project_point_to_pixel(float pixel[2], const struct rs2_intrinsics * intrin, const float point[3])
{
    //assert(intrin->model != RS2_DISTORTION_INVERSE_BROWN_CONRADY); // Cannot project to an inverse-distorted image

    float x = point[0] / point[2], y = point[1] / point[2];

    if(intrin->model == RS2_DISTORTION_MODIFIED_BROWN_CONRADY)
    {
        float r2  = x*x + y*y;
        float f = 1 + intrin->coeffs[0]*r2 + intrin->coeffs[1]*r2*r2 + intrin->coeffs[4]*r2*r2*r2;
        x *= f;
        y *= f;
        float dx = x + 2*intrin->coeffs[2]*x*y + intrin->coeffs[3]*(r2 + 2*x*x);
        float dy = y + 2*intrin->coeffs[3]*x*y + intrin->coeffs[2]*(r2 + 2*y*y);
        x = dx;
        y = dy;
    }
    if (intrin->model == RS2_DISTORTION_FTHETA)
    {
        float r = sqrt(x*x + y*y);
        auto rd = (1.0f / intrin->coeffs[0] * atan(2 * r * tan(intrin->coeffs[0] / 2.0f)));
        x *= rd / r;
        y *= rd / r;
    }

    pixel[0] = x * intrin->fx + intrin->ppx;
    pixel[1] = y * intrin->fy + intrin->ppy;
}
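A minimal usage sketch, reusing color_intrin from the listing in section 1 (the 3D point is made up for illustration; rsutil.h must be included to get the function declaration):

#include <librealsense2/rsutil.h> // declares the projection/deprojection helpers

float point3d[3] = {0.1f, 0.2f, 1.0f}; // hypothetical point: 0.1 m right, 0.2 m down, 1 m ahead
float pixel2d[2];
rs2_project_point_to_pixel(pixel2d, &color_intrin, point3d);
std::cout << "projected pixel: (" << pixel2d[0] << ", " << pixel2d[1] << ")" << std::endl;

This is valid for the color camera here because its distortion model is None; per the assert comment above, it must not be used with the inverse-distorted depth intrinsics.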

Deprojection function:

/* Given pixel coordinates and depth in an image with no distortion or inverse distortion coefficients, compute the corresponding point in 3D space relative to the same camera */
static void rs2_deproject_pixel_to_point(float point[3], const struct rs2_intrinsics * intrin, const float pixel[2], float depth)
{
    assert(intrin->model != RS2_DISTORTION_MODIFIED_BROWN_CONRADY); // Cannot deproject from a forward-distorted image
    assert(intrin->model != RS2_DISTORTION_FTHETA); // Cannot deproject to an ftheta image
    //assert(intrin->model != RS2_DISTORTION_BROWN_CONRADY); // Cannot deproject to an brown conrady model

    float x = (pixel[0] - intrin->ppx) / intrin->fx;
    float y = (pixel[1] - intrin->ppy) / intrin->fy;
    if(intrin->model == RS2_DISTORTION_INVERSE_BROWN_CONRADY)
    {
        float r2  = x*x + y*y;
        float f = 1 + intrin->coeffs[0]*r2 + intrin->coeffs[1]*r2*r2 + intrin->coeffs[4]*r2*r2*r2;
        float ux = x*f + 2*intrin->coeffs[2]*x*y + intrin->coeffs[3]*(r2 + 2*x*x);
        float uy = y*f + 2*intrin->coeffs[3]*x*y + intrin->coeffs[2]*(r2 + 2*y*y);
        x = ux;
        y = uy;
    }
    point[0] = depth * x;
    point[1] = depth * y;
    point[2] = depth;
}
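A minimal usage sketch, reusing the depth frame and depth_intrin from section 1. get_distance() returns the depth at a pixel in meters, which is what the function expects as its depth argument; the choice of the image center is arbitrary:

float pixel[2] = {depth_intrin.width / 2.0f, depth_intrin.height / 2.0f}; // image center
float depth_in_meters = depth.get_distance((int)pixel[0], (int)pixel[1]);
float point3d[3];
rs2_deproject_pixel_to_point(point3d, &depth_intrin, pixel, depth_in_meters);
std::cout << "3D point: " << point3d[0] << " " << point3d[1] << " " << point3d[2] << std::endl;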

3. Coordinate Transformation Between the Depth and Color Cameras

In section 1 we obtained the transformation matrix from the depth camera to the color camera. The following RealSense function transforms a point in the depth camera coordinate system into the color camera coordinate system; in the same way, given the extrinsics in the opposite direction, it can transform a point from the color camera coordinate system into the depth camera coordinate system (a combined usage sketch follows the function):

/* Transform 3D coordinates relative to one sensor to 3D coordinates relative to another viewpoint */
static void rs2_transform_point_to_point(float to_point[3], const struct rs2_extrinsics * extrin, const float from_point[3])
{
    to_point[0] = extrin->rotation[0] * from_point[0] + extrin->rotation[3] * from_point[1] + extrin->rotation[6] * from_point[2] + extrin->translation[0];
    to_point[1] = extrin->rotation[1] * from_point[0] + extrin->rotation[4] * from_point[1] + extrin->rotation[7] * from_point[2] + extrin->translation[1];
    to_point[2] = extrin->rotation[2] * from_point[0] + extrin->rotation[5] * from_point[1] + extrin->rotation[8] * from_point[2] + extrin->translation[2];
}
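Putting the three functions together gives the usual depth-to-color pixel mapping. A minimal sketch, reusing depth, depth_intrin, color_intrin, and extrin from section 1 (the depth pixel (320, 240) is an arbitrary example; the final projection is valid because the color camera's distortion model is None):

float depth_pixel[2] = {320.0f, 240.0f};      // a pixel in the depth image
float dist = depth.get_distance(320, 240);    // its depth in meters
float depth_point[3], color_point[3], color_pixel[2];
rs2_deproject_pixel_to_point(depth_point, &depth_intrin, depth_pixel, dist); // depth pixel -> 3D point in depth frame
rs2_transform_point_to_point(color_point, &extrin, depth_point);             // depth frame -> color frame
rs2_project_point_to_pixel(color_pixel, &color_intrin, color_point);         // 3D point -> color pixel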