
OpenCV & Qt Learning, Part 3: Preliminary Image Processing


Scaling images for display in Qt

There are many ways to scale an image. In the example from Part 1, OpenCV&Qt學習之一——開啟圖片檔案並顯示 (opening and displaying an image file), the image was simply shown on the label control as-is, so with the window size fixed, an image smaller than the control ends up huddled in the top-left corner, while an image larger than the control is not shown in full. The first thing this example adds, therefore, is scaling the image to fit the window.

Since this image processing is based on both OpenCV and Qt, the scaling could be done in either one. I use OpenCV for the raw image processing and Qt for display, so the scaled display is handled on the Qt side.

QImage in Qt provides basic scaling functions that are quite powerful; the details can be looked up in Qt's built-in help.

Function prototype (as given in the Qt 4 documentation for QImage::scaled):
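QImage QImage::scaled ( const QSize & size,
                        Qt::AspectRatioMode aspectRatioMode = Qt::IgnoreAspectRatio,
                        Qt::TransformationMode transformMode = Qt::FastTransformation ) const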

This overload takes the target size directly; there is also a form that takes the width and height separately:
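QImage QImage::scaled ( int width, int height,
                        Qt::AspectRatioMode aspectRatioMode = Qt::IgnoreAspectRatio,
                        Qt::TransformationMode transformMode = Qt::FastTransformation ) const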

The function and its parameters are explained very clearly in the documentation; an excerpt:

Returns a copy of the image scaled to a rectangle defined by the given size according to the given aspectRatioMode and transformMode.


  1. If aspectRatioMode is Qt::IgnoreAspectRatio, the image is scaled to size.
  2. If aspectRatioMode is Qt::KeepAspectRatio, the image is scaled to a rectangle as large as possible inside size, preserving the aspect ratio.
  3. If aspectRatioMode is Qt::KeepAspectRatioByExpanding, the image is scaled to a rectangle as small as possible outside size, preserving the aspect ratio.

The official documentation already makes this clear, and the implementation is simple. The code is as follows:
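This is the display() slot from the full source listing later in this post; it scales the converted QImage to the label's size while keeping the aspect ratio:

void Widget::display(QImage img)
{
    QImage imgScaled;
    imgScaled = img.scaled(ui->imagelabel->size(), Qt::KeepAspectRatio);
    ui->imagelabel->setPixmap(QPixmap::fromImage(imgScaled));
}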

The display now looks like this:

[screenshot: the opened image scaled to fit the display label]

Some questions and thoughts about QImage

While looking for reference material I came across the blog post Qt中影象的顯示與基本操作 (displaying and basic manipulation of images in Qt), but a few things in it puzzled me. The relevant code from that post is as follows:
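(The snippet itself did not survive in this copy of the post; based on the two differences listed below, the pattern it used was roughly the following, with the file name and target size only standing in for whatever the original used:)

QString fileName = "input.png";
QImage *img = new QImage(fileName);                        // the image is defined as a pointer and created with new
QImage *imgScaled = new QImage;
*imgScaled = img->scaled(300, 200, Qt::KeepAspectRatio);   // '->' because img is a pointer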

Comparing this with the code I had written earlier, a few things are different:

  1. How the image is defined: that post defines it as a pointer, QImage* imgScale = new QImage.
  2. How scaled is called: mine is imgScaled = img.scaled(...), while the post uses *imgScaled = img->scaled(...). I had also written . as -> at first and couldn't find my mistake for quite a while; the compiler kept reporting base operand of '->' has non-pointer type 'QImage'. (A short sketch of the two forms follows this list.)
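In other words, whether to use . or -> simply follows whether the variable is a QImage object or a pointer to one; a minimal sketch (the file name and target size here are arbitrary):

QImage img("input.png");                                   // an object: member access with '.'
QImage scaledByValue = img.scaled(400, 300, Qt::KeepAspectRatio);

QImage *pimg = new QImage("input.png");                    // a pointer: member access with '->'
QImage scaledFromPtr = pimg->scaled(400, 300, Qt::KeepAspectRatio);
delete pimg;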

Searching the Qt help further, it turns out QImage really does have a lot of constructors:

Public Functions

QImage ( const QSize & size, Format format )
QImage ( int width, int height, Format format )
QImage ( uchar * data, int width, int height, Format format )
QImage ( const uchar * data, int width, int height, Format format )
QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
QImage ( const uchar * data, int width, int height, int bytesPerLine, Format format )
QImage ( const char * const[] xpm )
QImage ( const QString & fileName, const char * format = 0 )
QImage ( const char * fileName, const char * format = 0 )
QImage ( const QImage & image )

QImage provides constructors for all sorts of situations, and the manual describes concrete uses for each of them, but I still haven't worked out which constructors QImage image; and QImage* image = new QImage actually correspond to, or what the difference between the two really is. In the previous post, OpenCV&Qt學習之二——QImage的進一步認識 (a closer look at QImage), I said that QImage is a re-interpretation of existing data, just a format wrapper whose data still points to the original buffer. Judging from the constructor list, that depends on which constructor is used, so the statement was not entirely correct.
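For what it's worth, here is a small sketch of the distinction as I currently understand it: QImage image; and QImage* image = new QImage both use the same default constructor and differ only in stack versus heap allocation, while the constructors that take a uchar* buffer wrap the existing data without copying it. (The buffer size and file name below are made up for illustration.)

QImage imgOnStack;                       // default-constructed null image on the stack
QImage *imgOnHeap = new QImage;          // same default constructor, but the object lives on the heap
delete imgOnHeap;                        // and must be released explicitly

uchar buffer[4 * 4 * 3];                 // 4x4 raw RGB888 data owned by the caller
QImage wrapped(buffer, 4, 4, 4 * 3, QImage::Format_RGB888);  // wraps 'buffer'; no copy is made

QImage loaded("input.png");              // loads the file and owns its own pixel data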

I dug around for more material, but what is online is the same handful of articles copied back and forth, most of them fairly old, and it did not resolve my questions about the data structures involved. Qt and OpenCV demand a fair amount of C and pointer fluency, and after years of microcontroller programming my C has gone rusty. I will set this question aside for now and work it out gradually as the learning continues.

Preliminary image processing with OpenCV

The following two examples are adapted from the book OpenCV 2 Computer Vision Application Programming Cookbook, a fairly recent and quite approachable introduction.

Salt-and-pepper noise

For the basics of how image data is laid out, see this excerpt:

Fundamentally, an image is a matrix of numerical values. This is why OpenCV 2 manipulates them using the cv::Mat data structure. Each element of the matrix represents one pixel. For a gray-level image (a "black-and-white" image), pixels are unsigned 8-bit values where 0 corresponds to black and 255 corresponds to white. For a color image, three such values per pixel are required to represent the usual three primary color channels {Red, Green, Blue}. A matrix element is therefore made, in this case, of a triplet of values.

Here, adding salt-and-pepper noise to an image serves as the example for accessing individual elements of the image matrix. Salt-and-pepper noise simply means that random pixels are replaced by black or white ones, so adding it is easy: generate random row and column values and change the pixels at those positions, remembering (as the excerpt above explains) that a color image has three RGB channels to change. The program is as follows (this version writes only white pixels, i.e. the salt):

void Widget::salt(cv::Mat &image, int n)
{
    int i, j;
    for (int k = 0; k < n; k++)
    {
        i = qrand() % image.cols;
        j = qrand() % image.rows;

        if (image.channels() == 1)       // gray-level image
        {
            image.at<uchar>(j, i) = 255;
        }
        else if (image.channels() == 3)  // color image
        {
            image.at<cv::Vec3b>(j, i)[0] = 255;
            image.at<cv::Vec3b>(j, i)[1] = 255;
            image.at<cv::Vec3b>(j, i)[2] = 255;
        }
    }
}

The result of processing the Koala sample picture that ships with Windows 7 is shown below (the program itself was run under Ubuntu 12.04):

[screenshot: the Koala image with salt noise added]

Reducing the number of colors

Many kinds of processing need to visit every pixel of the image, and how best to perform this traversal is worth some thought. The following excerpt introduces the problem:

Color images are composed of 3-channel pixels. Each of these channels corresponds to the intensity value of one of the three primary colors (red, green, blue). Since each of these values is an 8-bit unsigned char, the total number of colors is 256x256x256, which is more than 16 million colors. Consequently, to reduce the complexity of an analysis, it is sometimes useful to reduce the number of colors in an image. One simple way to achieve this goal is to simply subdivide the RGB space into cubes of equal sizes. For example, if you reduce the number of colors in each dimension by 8, then you would obtain a total of 32x32x32 colors. Each color in the original image is then assigned a new color value in the color-reduced image that corresponds to the value in the center of the cube to which it belongs.

This example reduces the number of colors by operating on every pixel; the idea is covered in the excerpt above and the implementation is direct. In a color image the three channel values of each pixel are stored one after another along every row, and cv::Mat stores the channels in BGR order, so an image needs width × height × 3 uchar values of storage. Note, however, that some processors handle rows more efficiently when the row length is a multiple of 4 or 8 bytes, so rows may be padded with extra pixels; these padded pixels are neither displayed nor saved, and their values are ignored.
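A quick way to see whether a particular cv::Mat has padded rows is to compare the row stride with the payload width; a minimal sketch (image stands for any loaded cv::Mat):

#include <iostream>
#include <opencv2/core/core.hpp>

// Print the row geometry of a cv::Mat: step is the number of bytes per row, padding included.
void printRowInfo(const cv::Mat &image)
{
    std::cout << "cols * channels = " << image.cols * image.channels()
              << ", step = " << (size_t)image.step
              << ", continuous = " << image.isContinuous() << std::endl;
}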

The code implementing this is as follows:

// using .ptr and []
void Widget::colorReduce0(cv::Mat &image, int div)
{
    int nl = image.rows;                     // number of lines
    int nc = image.cols * image.channels();  // total number of elements per line

    for (int j = 0; j < nl; j++)
    {
        uchar *data = image.ptr<uchar>(j);

        for (int i = 0; i < nc; i++)
        {
            // process each pixel ---------------------
            data[i] = data[i] / div * div + div / 2;
            // end of pixel processing ----------------
        }
        // end of line
    }
}

data[i] = data[i]/div*div + div/2; performs the reduction by integer division. What I did not see at first is why div/2 is added at the end; the excerpt above actually answers this: the integer division snaps each value to the lower bound of its interval of width div, and adding div/2 moves it to the center of that interval, which is the representative color the book describes.
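A quick worked example with div = 64: a pixel value of 100 lies in the interval [64, 128); 100/64*64 = 64 is the lower bound of that interval, and 64 + 64/2 = 96 is its center, which becomes the new value.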

The result:

[screenshot: the Koala image after color reduction]

Full program source:

#include "widget.h"
#include
"ui_widget.h"
#include
<QDebug>
Widget::Widget(QWidget
*parent) :
QWidget(parent),
ui(
new Ui::Widget)
{
ui
->setupUi(this);

}

Widget::
~Widget()
{
delete ui;
}

void Widget::on_openButton_clicked()
{
QString fileName
= QFileDialog::getOpenFileName(this,tr("Open Image"),
".",tr("Image Files (*.png *.jpg *.bmp)"));
qDebug()
<<"filenames:"<<fileName;
image
= cv::imread(fileName.toAscii().data());
ui
->imgfilelabel->setText(fileName);
//here use 2 ways to make a copy
// image.copyTo(originalimg); //make a copy
originalimg = image.clone(); //clone the img
qimg = Widget::Mat2QImage(image);
display(qimg);
//display by the label
if(image.data)
{
ui
->saltButton->setEnabled(true);
ui
->originalButton->setEnabled(true);
ui
->reduceButton->setEnabled(true);
}
}

QImage Widget::Mat2QImage(
const cv::Mat &mat)
{
QImage img;
if(mat.channels()==3)
{
//cvt Mat BGR 2 QImage RGB
cvtColor(mat,rgb,CV_BGR2RGB);
img
=QImage((const unsigned char*)(rgb.data),
rgb.cols,rgb.rows,
rgb.cols
*rgb.channels(),
QImage::Format_RGB888);
}
else
{
img
=QImage((const unsigned char*)(mat.data),
mat.cols,mat.rows,
mat.cols
*mat.channels(),
QImage::Format_RGB888);
}
return img;
}

void Widget::display(QImage img)
{
QImage imgScaled;
imgScaled
= img.scaled(ui->imagelabel->size(),Qt::KeepAspectRatio);
// imgScaled = img.QImage::scaled(ui->imagelabel->width(),ui->imagelabel->height(),Qt::KeepAspectRatio);
ui->imagelabel->setPixmap(QPixmap::fromImage(imgScaled));
}

void Widget::on_originalButton_clicked()
{
qimg
= Widget::Mat2QImage(originalimg);
display(qimg);
}

void Widget::on_saltButton_clicked()
{
salt(image,
3000);
qimg
= Widget::Mat2QImage(image);
display(qimg);
}
void Widget::on_reduceButton_clicked()
{
colorReduce0(image,
64);
qimg
= Widget::Mat2QImage(image);
display(qimg);
}
void Widget::salt(cv::Mat &image, int n)
{

int i,j;
for (int k=0; k<n; k++)
{

i
= qrand()%image.cols;
j
= qrand()%image.rows;

if (image.channels() == 1)
{
// gray-level image

image.at
<uchar>(j,i)= 255;

}
else if (image.channels() == 3)
{
// color image

image.at
<cv::Vec3b>(j,i)[0]= 255;
image.at
<cv::Vec3b>(j,i)[1]= 255;
image.at
<cv::Vec3b>(j,i)[2]= 255;
}
}
}

// using .ptr and []
void Widget::colorReduce0(cv::Mat &image, int div)
{

int nl= image.rows; // number of lines
int nc= image.cols * image.channels(); // total number of elements per line

for (int j=0; j<nl; j++)
{

uchar
* data= image.ptr<uchar>(j);

for (int i=0; i<nc; i++)
{

// process each pixel ---------------------
data[i]= data[i]/div*div+div/2;

// end of pixel processing ----------------

}
// end of line
}
}
widget.h:

#ifndef WIDGET_H
#define WIDGET_H

#include <QWidget>
#include <QImage>
#include <QFileDialog>
#include <QTimer>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

namespace Ui {
class Widget;
}

class Widget : public QWidget
{
    Q_OBJECT

public:
    explicit Widget(QWidget *parent = 0);
    ~Widget();

private slots:
    void on_openButton_clicked();
    QImage Mat2QImage(const cv::Mat &mat);
    void display(QImage image);
    void salt(cv::Mat &image, int n);

    void on_saltButton_clicked();
    void on_reduceButton_clicked();
    void colorReduce0(cv::Mat &image, int div);

    void on_originalButton_clicked();

private:
    Ui::Widget *ui;
    cv::Mat image;
    cv::Mat originalimg;   // store the original img
    QImage qimg;
    QImage imgScaled;
    cv::Mat rgb;
};

#endif // WIDGET_H

The book also gives more than ten other ways to perform the same operation:

#include <iostream>
#include <cmath>   // for log()
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// using .ptr and []
void colorReduce0(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            data[i]= data[i]/div*div + div/2;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using .ptr and * ++
void colorReduce1(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            // read and write through the pointer, then advance it
            *data= *data/div*div + div/2;
            data++;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using .ptr and * ++ and modulo
void colorReduce2(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            int v= *data;
            *data++= v - v%div + div/2;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using .ptr and * ++ and bitwise
void colorReduce3(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            // parentheses are needed: '&' binds more loosely than '+'
            *data= (*data & mask) + div/2;
            data++;

            // end of pixel processing ----------------

        }
        // end of line
    }
}


// direct pointer arithmetic
void colorReduce4(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    int step= image.step;                  // effective width in bytes (padding included)
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    // get the pointer to the image buffer
    uchar *data= image.data;

    for (int j=0; j<nl; j++) {

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            *(data+i)= (*(data+i) & mask) + div/2;

            // end of pixel processing ----------------

        }
        // end of line

        data+= step; // next line
    }
}

// using .ptr and * ++ and bitwise with image.cols * image.channels()
void colorReduce5(cv::Mat &image, int div=64) {

    int nl= image.rows; // number of lines
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        // the loop bound is deliberately re-evaluated on every iteration
        for (int i=0; i<image.cols * image.channels(); i++) {

            // process each pixel ---------------------

            *data= (*data & mask) + div/2;
            data++;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using .ptr and * ++ and bitwise (continuous)
void colorReduce6(cv::Mat &image, int div=64) {

    int nl= image.rows;                    // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line

    if (image.isContinuous()) {
        // then no padded pixels
        nc= nc*nl;
        nl= 1; // it is now a 1D array
    }

    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            *data= (*data & mask) + div/2;
            data++;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using .ptr and * ++ and bitwise (continuous+channels)
void colorReduce7(cv::Mat &image, int div=64) {

    int nl= image.rows; // number of lines
    int nc= image.cols; // number of columns

    if (image.isContinuous()) {
        // then no padded pixels
        nc= nc*nl;
        nl= 1; // it is now a 1D array
    }

    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    for (int j=0; j<nl; j++) {

        uchar* data= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel: the three channels are handled explicitly

            *data= (*data & mask) + div/2; data++;
            *data= (*data & mask) + div/2; data++;
            *data= (*data & mask) + div/2; data++;

            // end of pixel processing ----------------

        }
        // end of line
    }
}


// using Mat_ iterator
void colorReduce8(cv::Mat &image, int div=64) {

    // get iterators
    cv::Mat_<cv::Vec3b>::iterator it= image.begin<cv::Vec3b>();
    cv::Mat_<cv::Vec3b>::iterator itend= image.end<cv::Vec3b>();

    for ( ; it!= itend; ++it) {

        // process each pixel ---------------------

        (*it)[0]= (*it)[0]/div*div + div/2;
        (*it)[1]= (*it)[1]/div*div + div/2;
        (*it)[2]= (*it)[2]/div*div + div/2;

        // end of pixel processing ----------------
    }
}

// using Mat_ iterator and bitwise
void colorReduce9(cv::Mat &image, int div=64) {

    // div must be a power of 2
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    // get iterators
    cv::Mat_<cv::Vec3b>::iterator it= image.begin<cv::Vec3b>();
    cv::Mat_<cv::Vec3b>::iterator itend= image.end<cv::Vec3b>();

    // scan all pixels
    for ( ; it!= itend; ++it) {

        // process each pixel ---------------------

        (*it)[0]= ((*it)[0] & mask) + div/2;
        (*it)[1]= ((*it)[1] & mask) + div/2;
        (*it)[2]= ((*it)[2] & mask) + div/2;

        // end of pixel processing ----------------
    }
}

// using MatIterator_
void colorReduce10(cv::Mat &image, int div=64) {

    // get iterators
    cv::Mat_<cv::Vec3b> cimage= image;
    cv::Mat_<cv::Vec3b>::iterator it= cimage.begin();
    cv::Mat_<cv::Vec3b>::iterator itend= cimage.end();

    for ( ; it!= itend; it++) {

        // process each pixel ---------------------

        (*it)[0]= (*it)[0]/div*div + div/2;
        (*it)[1]= (*it)[1]/div*div + div/2;
        (*it)[2]= (*it)[2]/div*div + div/2;

        // end of pixel processing ----------------
    }
}


void colorReduce11(cv::Mat &image, int div=64) {

    int nl= image.rows; // number of lines
    int nc= image.cols; // number of columns

    for (int j=0; j<nl; j++) {
        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            image.at<cv::Vec3b>(j,i)[0]= image.at<cv::Vec3b>(j,i)[0]/div*div + div/2;
            image.at<cv::Vec3b>(j,i)[1]= image.at<cv::Vec3b>(j,i)[1]/div*div + div/2;
            image.at<cv::Vec3b>(j,i)[2]= image.at<cv::Vec3b>(j,i)[2]/div*div + div/2;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// with input/output images
void colorReduce12(const cv::Mat &image, // input image
                   cv::Mat &result,      // output image
                   int div=64) {

    int nl= image.rows; // number of lines
    int nc= image.cols; // number of columns

    // allocate output image if necessary
    result.create(image.rows, image.cols, image.type());

    // created images have no padded pixels
    nc= nc*nl;
    nl= 1; // it is now a 1D array

    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    for (int j=0; j<nl; j++) {

        uchar* data= result.ptr<uchar>(j);
        const uchar* idata= image.ptr<uchar>(j);

        for (int i=0; i<nc; i++) {

            // process each pixel ---------------------

            *data++= ((*idata++) & mask) + div/2;
            *data++= ((*idata++) & mask) + div/2;
            *data++= ((*idata++) & mask) + div/2;

            // end of pixel processing ----------------

        }
        // end of line
    }
}

// using overloaded operators
void colorReduce13(cv::Mat &image, int div=64) {

    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0

    // perform color reduction
    image= (image & cv::Scalar(mask,mask,mask)) + cv::Scalar(div/2, div/2, div/2);
}
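To compare these variants the book times them against each other; a minimal sketch of such a harness, appended to the listing above, could look like this (the file name koala.jpg is only illustrative; cv::getTickCount() and cv::getTickFrequency() are OpenCV's standard timing helpers):

int main()
{
    cv::Mat image = cv::imread("koala.jpg");       // illustrative file name
    if (image.empty()) return 1;

    cv::Mat work = image.clone();
    double t = (double)cv::getTickCount();
    colorReduce0(work);                            // .ptr and []
    t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "colorReduce0:  " << t * 1000 << " ms" << std::endl;

    work = image.clone();
    t = (double)cv::getTickCount();
    colorReduce13(work);                           // overloaded operators
    t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "colorReduce13: " << t * 1000 << " ms" << std::endl;

    return 0;
}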
