V4L2 Driver Framework Analysis - Part 1
Questions to keep in mind:
(1) Which structures does the cimutils application maintain, and which does the V4L2 driver framework maintain?
(2) How is the /dev/video0 device node created?
(3) When the application opens /dev/video0, what is the in-kernel call chain and what work actually gets done?
(4) What is the in-kernel call flow after the application issues an ioctl?
(5) VIDIOC_QBUF / VIDIOC_STREAMON / VIDIOC_DQBUF: how is the video buffer queue managed? Where does the driver allocate the memory, and how are buffers enqueued and dequeued?
(6) I/O methods: what is the difference between V4L2_MEMORY_MMAP and V4L2_MEMORY_USERPTR?
The V4L2 framework is split into three layers:
Application layer: under /dev we find device nodes such as video0; a camera application opens the node to capture data and display video. The node names are uniform: video0, video1, video2... These device nodes are registered by the core layer. (struct video_device, video_register_device, /dev/video0)
Core layer: v4l2-dev.c, the bridge between the other two layers. For every device the hardware layer registers, it installs one unified interface, v4l2_fops. Being generic, these are not the operation functions of any concrete video device; a call from user space into v4l2_fops ultimately lands in the fops of the hardware layer's video_device.
Hardware layer: talks to the concrete video hardware; it allocates, sets up and registers the video_device structure. (struct video_device, video_register_device, /dev/video0)
v4l2_fops provides the unified userspace interface for video4linux2 devices:
drivers/media/v4l2-core/v4l2-dev.c
static const struct file_operations v4l2_fops = {
.owner = THIS_MODULE,
.read = v4l2_read,
.write = v4l2_write,
.open = v4l2_open,
.get_unmapped_area = v4l2_get_unmapped_area,
.mmap = v4l2_mmap,
.unlocked_ioctl = v4l2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = v4l2_compat_ioctl32,
#endif
.release = v4l2_release,
.poll = v4l2_poll,
.llseek = no_llseek,
};
drivers/media/platform/soc_camera/soc_camera.c
static struct v4l2_file_operations soc_camera_fops = {
.owner = THIS_MODULE,
.open = soc_camera_open,
.release = soc_camera_close,
.unlocked_ioctl = video_ioctl2,
.read = soc_camera_read,
.mmap = soc_camera_mmap,
.poll = soc_camera_poll,
};
Video4Linux2 (V4L2) is the Linux kernel's core driver framework for video devices; it gives upper layers a unified interface for accessing the underlying video hardware.
Camera device driver development:
Prerequisite topics:
(1) character device drivers (2) the device model (3) platform device drivers (4) the V4L2 framework (5) the I2C driver framework
Terminology:
camera: the camera as a whole, including its hardware wiring and the I2C-controlled device it contains
sensor: the I2C-controlled device itself; it is part of the camera, and the kernel implementation reflects this
camera host: the controller the camera connects to, usually embedded in the SoC
Relevant directories:
drivers/media/platform/soc_camera/ camera host drivers; generic camera-side code also lives here
drivers/media/i2c/soc_camera/ sensor drivers
(1)
A video input device in Linux consists of four main parts:
(1) Character device core: a V4L2 device is itself a character device, with all of a character device's properties, and exposes its interface to user space;
(2) V4L2 driver core: builds the kernel's standard video device driver framework and provides unified interface functions for video operations;
(3) Platform V4L2 device driver: under the V4L2 framework, implements the platform-specific part of the driver according to the platform's own characteristics, registering video_device and v4l2_dev;
(4) Sensor driver: handles power-up, clock supply, image cropping, stream on/off, and so on; implements the device control methods called from above and registers a v4l2_subdev.
The V4L2 core sources live in drivers/media/v4l2-core/ and can be grouped into four functional categories:
(1) Core module: implemented by v4l2-dev.c; allocates the character major number, registers the class, and provides the video_device register/unregister helpers;
(2) V4L2 framework: implemented by v4l2-device.c, v4l2-subdev.c, v4l2-fh.c, v4l2-ctrls.c, etc., which build the V4L2 framework;
(3) Videobuf management: implemented by videobuf2-core.c, videobuf2-dma-contig.c, videobuf2-dma-sg.c, videobuf2-memops.c, videobuf2-vmalloc.c, v4l2-mem2mem.c, etc., which handle allocation, management and release of video buffers;
(4) Ioctl framework: implemented by v4l2-ioctl.c, which builds the V4L2 ioctl framework.
Figure 1
(2)
I/O access:
V4L2 supports three I/O access methods (the kernel supports others as well, not covered here):
(1) read and write: basic frame I/O. Each frame is fetched with read(), and the data must be copied between kernel and user space, so this method can be very slow;
(2) Memory-mapped buffers (V4L2_MEMORY_MMAP): buffers are allocated in kernel space and mapped into the process address space with the mmap() system call. They can be large contiguous DMA buffers, virtual buffers created with vmalloc(), or, if the hardware supports it, buffers placed directly in the device's I/O memory;
(3) User-space buffers (V4L2_MEMORY_USERPTR): the application allocates the buffers in user space, and user space and the kernel exchange buffer pointers. Clearly no mmap() call is needed in this case, but supporting user-space buffers efficiently makes the driver's job harder.
read and write are frame I/O: every frame goes through an I/O call and a user/kernel data copy. The other two methods are streaming I/O: no memory copy is needed, so access is faster. The memory-mapped method is the most commonly used.
The memory-mapped buffer method
Data flow at the hardware level:
The camera sensor captures image data and transfers it over a parallel or MIPI interface to the CAMIF (camera interface), which can adjust the image (flip, crop, format conversion, and so on). The DMA controller then sets up a DMA channel and requests the AHB bus to move the image data into the pre-allocated DMA buffer.
Once the image data has landed in the DMA buffer, mmap maps the buffer into user space and the application can access the data directly.
Figure 2
The V4L2 driver code is under drivers/media/v4l2-core/:
videobuf-core and videobuf2-core;
v4l2-dev.c ----> video_device, video_register_device, /dev/video0
v4l2-device.c ----> v4l2_device; soc_camera_host_register ---> v4l2_device_register / v4l2_device_register_subdev
v4l2-subdev.c ----> v4l2_subdev, v4l2_i2c_new_subdev
v4l2-ioctl.c implements the ioctl framework
The video driver code lives under drivers/media/, which has many subdirectories: platform/ holds the per-SoC driver code, corresponding to video_device, while most other subdirectories (i2c, mmc, usb, tuners, radio, ...) hold subdev implementations.
drivers/media/platform/soc_camera/jz_camera_v13.c
drivers/media/i2c/soc_camera/gc2155.c
Device instance (v4l2_device)
|______ sub-device instance (v4l2_subdev)
|______ video device node (video_device)
|______ file access control (v4l2_fh)
|______ video buffer handling (videobuf/videobuf2)
struct vb2_buffer embeds the buffer structure seen by user space (simplified):
struct vb2_buffer {
	struct v4l2_buffer v4l2_buf;
};
(3)
soc_camera_device and soc_camera_host
struct soc_camera_device {
struct list_head list; /* list of all registered devices */
struct soc_camera_desc *sdesc;
struct device *pdev; /* Platform device */
struct device *parent; /* Camera host device */
struct device *control; /* E.g., the i2c client */
s32 user_width;
s32 user_height;
u32 bytesperline; /* for padding, zero if unused */
u32 sizeimage;
enum v4l2_colorspace colorspace;
unsigned char iface; /* Host number */
unsigned char devnum; /* Device number per host */
struct soc_camera_sense *sense; /* See comment in struct definition */
struct video_device *vdev;
struct v4l2_ctrl_handler ctrl_handler;
const struct soc_camera_format_xlate *current_fmt;
struct soc_camera_format_xlate *user_formats;
int num_user_formats;
enum v4l2_field field; /* Preserve field over close() */
void *host_priv; /* Per-device host private data */
/* soc_camera.c private count. Only accessed with .host_lock held */
int use_count;
struct file *streamer; /* stream owner */
union {
struct videobuf_queue vb_vidq;
struct vb2_queue vb2_vidq;
};
};
struct soc_camera_host {
struct v4l2_device v4l2_dev;
struct list_head list;
struct mutex host_lock; /* Protect pipeline modifications */
unsigned char nr; /* Host number */
u32 capabilities;
void *priv;
const char *drv_name;
struct soc_camera_host_ops *ops;
};
User space: xioctl(fd, VIDIOC_QBUF, &buf)
-------------------------------------------------------------------------------
drivers/media/v4l2-core/v4l2-ioctl.c
static int v4l_qbuf(const struct v4l2_ioctl_ops *ops,
struct file *file, void *fh, void *arg)
{
struct v4l2_buffer *p = arg;
int ret = check_fmt(file, p->type);
return ret ? ret : ops->vidioc_qbuf(file, fh, p);
}
drivers/media/platform/soc_camera/soc_camera.c
static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
.vidioc_qbuf = soc_camera_qbuf,
};
static int soc_camera_qbuf(struct file *file, void *priv,
struct v4l2_buffer *p)
{
struct soc_camera_device *icd = file->private_data;
struct soc_camera_host *ici = to_soc_camera_host(icd->parent);
WARN_ON(priv != file->private_data);
if (icd->streamer != file)
return -EBUSY;
if (ici->ops->init_videobuf)
return videobuf_qbuf(&icd->vb_vidq, p); //vb1_buffer
else
return vb2_qbuf(&icd->vb2_vidq, p); //vb2_buffer
}
When the application opens the /dev/video0 node, the kernel entry point is v4l2_open in the v4l2_fops table shown earlier (drivers/media/v4l2-core/v4l2-dev.c). v4l2_open looks up the video_device and forwards to the fops registered by the hardware layer, soc_camera_fops (drivers/media/platform/soc_camera/soc_camera.c), whose .open is soc_camera_open:
static int soc_camera_open(struct file *file)
{
ret = ici->ops->init_videobuf2(&icd->vb2_vidq, icd);
}
drivers/media/platform/soc_camera/jz_camera_v13.c
static int jz_camera_init_videobuf2(struct vb2_queue *q, struct soc_camera_device *icd)--->
vb2_queue_init(q)
{
INIT_LIST_HEAD(&q->queued_list);
}
int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b)
{
list_add_tail(&vb->queued_entry, &q->queued_list);
if (q->streaming)
__enqueue_in_driver(vb);
/* Fill buffer information for the userspace */
__fill_v4l2_buffer(vb, b);
}
list_add_tail(&vb->queued_entry, &q->queued_list);
struct vb2_queue *q
struct vb2_buffer *vb;
struct vb2_buffer {
struct list_head queued_entry;
}
struct vb2_queue {
struct list_head queued_list;
}
static void __enqueue_in_driver(struct vb2_buffer *vb)
{
struct vb2_queue *q = vb->vb2_queue;
unsigned int plane;
vb->state = VB2_BUF_STATE_ACTIVE;
atomic_inc(&q->queued_count);
/* sync buffers */
for (plane = 0; plane < vb->num_planes; ++plane)
call_memop(q, prepare, vb->planes[plane].mem_priv);
q->ops->buf_queue(vb);
}
drivers/media/platform/soc_camera/jz_camera_v13.c
static struct vb2_ops jz_videobuf2_ops = {
.buf_init = jz_buffer_init,
.queue_setup = jz_queue_setup,
.buf_prepare = jz_buffer_prepare,
.buf_queue = jz_buffer_queue,
.start_streaming = jz_start_streaming,
.stop_streaming = jz_stop_streaming,
.wait_prepare = soc_camera_unlock,
.wait_finish = soc_camera_lock,
};
User space calls ioctl:
ioctl(camera_v4l2->fd, VIDIOC_STREAMON, &type)
-----------------------------------------------------------------------------
(1)
drivers/media/v4l2-core/v4l2-ioctl.c
static int v4l_streamon(const struct v4l2_ioctl_ops *ops,
struct file *file, void *fh, void *arg)
{
return ops->vidioc_streamon(file, fh, *(unsigned int *)arg);
}
(2)
drivers/media/platform/soc_camera/soc_camera.c
static int soc_camera_streamon(struct file *file, void *priv,
enum v4l2_buf_type i)
{
if (ici->ops->init_videobuf)
ret = videobuf_streamon(&icd->vb_vidq);
else
ret = vb2_streamon(&icd->vb2_vidq, i);
if (!ret)
v4l2_subdev_call(sd, video, s_stream, 1); /* tell the sensor subdev to start streaming */
return ret;
}
struct v4l2_subdev {
const struct v4l2_subdev_ops *ops;
}
struct v4l2_subdev_ops {
const struct v4l2_subdev_video_ops *video;
};
(3)
drivers/media/i2c/soc_camera/gc2155.c
static struct v4l2_subdev_video_ops gc2155_subdev_video_ops = {
.s_stream = gc2155_s_stream, /* start video capture */
.g_mbus_fmt = gc2155_g_fmt,
.s_mbus_fmt = gc2155_s_fmt,
.try_mbus_fmt = gc2155_try_fmt,
.cropcap = gc2155_cropcap,
.g_crop = gc2155_g_crop,
.enum_mbus_fmt = gc2155_enum_fmt,
.g_mbus_config = gc2155_g_mbus_config,
#if 0
.enum_framesizes = gc2155_enum_framesizes,
.enum_frameintervals = gc2155_enum_frameintervals,
#endif
};