
Analysis of the kernel direct IO implementation and a pitfall it reveals (based on 3.10.0-693.11.1)

Linux's read and write system calls accept an O_DIRECT flag that tries to bypass the page cache and read or write the disk directly (why only "tries"? Because when the direct path fails to reach the disk, the data still goes to disk through the cache). To make direct-to-disk IO possible, direct IO comes with plenty of restrictions: the file offset must be aligned to the disk block size, the memory address must be aligned to the disk block size, and the IO size must be aligned to the disk block size as well. Even so, the direct IO implementation has a small flaw. I have already explained this flaw in my fuse analysis; if the principle behind it is not clear to you, see that fuse flaw analysis. Here I mainly analyse the direct IO implementation and point out where the flaw is introduced.
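To make those restrictions concrete, here is a minimal user-space sketch (the file name and the 512-byte alignment are assumptions for illustration; the real requirement is the logical block size of the underlying device):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const size_t align = 512;       /* assumed device block size */
        const size_t len = 4096;        /* must be a multiple of the block size */
        void *buf;
        int fd;

        /* the buffer address must be block aligned */
        if (posix_memalign(&buf, align, len) != 0)
                return 1;
        memset(buf, 'A', len);

        fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* the file offset and the size must be block aligned too,
         * otherwise the kernel returns EINVAL */
        if (pwrite(fd, buf, len, 0) < 0)
                perror("pwrite");

        close(fd);
        free(buf);
        return 0;
}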

The call path from the write system call down to direct IO is:

write --> vfs_write --> do_sync_write --> generic_file_aio_write -->__generic_file_aio_write

Direct IO comes into play starting from __generic_file_aio_write, so that is where we begin the analysis.

/* @iocb    kernel io control block declared by do_sync_write, mainly used to wait for io completion (to make the call synchronous)
 * @iov     the user buffer passed to write; the array has only one element
 * @nr_segs 1
 * @ppos    file offset to write at
 */
ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
				 unsigned long nr_segs, loff_t *ppos)
{
	// parts of the code that do not matter here are omitted
	...
	if (io_is_direct(file)) {
		loff_t endbyte;
		ssize_t written_buffered;

		// try to write directly to disk
		written = generic_file_direct_write(iocb, iov, &nr_segs, pos,
						    ppos, count, ocount);
		/*
		 * If the write stopped short of completing, fall back to
		 * buffered writes.  Some filesystems do this for writes to
		 * holes, for example.  For DAX files, a buffered write will
		 * not succeed (even if it did, DAX does not handle dirty
		 * page-cache pages correctly).
		 */
		if (written < 0 || written == count || IS_DAX(inode))
			goto out;

		pos += written;
		count -= written;

		// if not everything could be written directly to disk,
		// write the remainder through the page cache
		written_buffered = generic_file_buffered_write(iocb, iov,
						nr_segs, pos, ppos, count,
						written);
		/*
		 * If generic_file_buffered_write() returned a synchronous error
		 * then we want to return the number of bytes which were
		 * direct-written, or the error code if that was zero.  Note
		 * that this differs from normal direct-io semantics, which
		 * will return -EFOO even if some bytes were written.
		 */
		if (written_buffered < 0) {
			err = written_buffered;
			goto out;
		}

		/*
		 * We need to ensure that the page cache pages are written to
		 * disk and invalidated to preserve the expected O_DIRECT
		 * semantics.
		 */
		// after the buffered write, flush the cache to disk so that
		// this single system call still ends up with the data on disk
		endbyte = pos + written_buffered - written - 1;
		err = filemap_write_and_wait_range(file->f_mapping, pos,
						   endbyte);
		...
	}
	...
}

If the file was opened with the O_DIRECT flag, the kernel tries to write directly to disk. If the direct write did not complete everything and did not return an error, it falls back to a buffered write: the data is written into the page cache first, and the cache is then flushed to disk. That is why the man page describes O_DIRECT as "Try to minimize cache effects of the I/O to and from this file." rather than "do not use the cache".

ssize_t
generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
		unsigned long *nr_segs, loff_t pos, loff_t *ppos,
		size_t count, size_t ocount)
{
	...
	// first flush to disk any in-memory cached pages covering the region about to be written
	written = filemap_write_and_wait_range(mapping, pos, pos + write_len - 1);
	if (written)
		goto out;

	/*
	 * After a write we want buffered reads to be sure to go to disk to get
	 * the new data.  We invalidate clean cached page from the region we're
	 * about to write.  We do this *before* the write so that we can return
	 * without clobbering -EIOCBQUEUED from ->direct_IO().
	 */
	if (mapping->nrpages) {
		// once the disk has been written, the cached contents no longer match the disk, so the cache must be invalidated.
		// Why invalidate before the write rather than only afterwards? See the in-tree comment above; I have not fully figured it out
		written = invalidate_inode_pages2_range(mapping,
					pos >> PAGE_CACHE_SHIFT, end);
		/*
		 * If a page can not be invalidated, return 0 to fall back
		 * to buffered write.
		 */
		if (written) {
			if (written == -EBUSY)
				return 0;
			goto out;
		}
	}

	written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);

	/*
	 * Finally, try again to invalidate clean pages which might have been
	 * cached by non-direct readahead, or faulted in by get_user_pages()
	 * if the source of the write was an mmap'ed region of the file
	 * we're writing.  Either one is a pretty crazy thing to do,
	 * so we don't support it 100%.  If this invalidation
	 * fails, tough, the write still worked...
	 */
	// why invalidate the pages again when they were already invalidated earlier? There is a nasty scenario here
	if (mapping->nrpages) {
		invalidate_inode_pages2_range(mapping,
					      pos >> PAGE_CACHE_SHIFT, end);
	}
	...
}

generic_file_direct_write first flushes the cached pages of the region to be written down to disk, then invalidates the corresponding cache, and after the direct write finishes it invalidates the pages yet again. Why? Consider the following scenario:

A process mmaps a page over part of a file, then passes that mmapped memory to write for a direct IO write, and the file range being written happens to be exactly the range covered by the mapping.

Should the page cache for that range be invalidated after the direct IO write completes? That is the reason given in the English comment above; there may be other reasons I am not yet clear about. If you are not familiar with how mmap is implemented, see my analysis of the mmap implementation.
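A hedged sketch of that scenario (file name and sizes are assumptions, and the file is assumed to already be at least 4096 bytes long): the buffer handed to the O_DIRECT write is itself a shared mapping of the very range being written, so the page backing the mapping would go stale if it were not invalidated again after the direct write.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = open("testfile", O_RDWR | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* map the first 4096 bytes of the file; mmap returns a page-aligned
         * address, which also satisfies the O_DIRECT alignment rules */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(p, 'B', 4096);

        /* direct-write the mmapped buffer back over the same file range */
        if (pwrite(fd, p, 4096, 0) < 0)
                perror("pwrite");

        munmap(p, 4096);
        close(fd);
        return 0;
}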

generic_file_direct_write calls the direct IO method provided by the filesystem, but as with the other filesystem callbacks, most filesystems implement it as a thin wrapper around the kernel's generic direct IO code. Taking ext2 as an example, the call eventually ends up in do_blockdev_direct_IO. This function is shared by reads and writes, which is worth keeping in mind while reading it.
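For reference, the ext2 wrapper in this kernel era looks roughly like the following (quoted from memory of the 3.10 fs/ext2/inode.c and include/linux/fs.h, so minor details may differ): it simply forwards to blockdev_direct_IO with ext2_get_block, which in turn calls __blockdev_direct_IO with DIO_LOCKING | DIO_SKIP_HOLES, matching the @flags value noted in the parameter comments just below.

static ssize_t
ext2_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
			loff_t offset, unsigned long nr_segs)
{
	struct file *file = iocb->ki_filp;
	struct address_space *mapping = file->f_mapping;
	struct inode *inode = mapping->host;
	ssize_t ret;

	/* hand the whole request to the generic direct IO code,
	 * supplying ext2's block-mapping callback */
	ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
				 ext2_get_block);
	if (ret < 0 && (rw & WRITE))
		ext2_write_failed(mapping, offset + iov_length(iov, nr_segs));
	return ret;
}

/* the generic helper just fills in the remaining arguments */
static inline ssize_t blockdev_direct_IO(int rw, struct kiocb *iocb,
		struct inode *inode, const struct iovec *iov, loff_t offset,
		unsigned long nr_segs, get_block_t get_block)
{
	return __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
				    offset, nr_segs, get_block, NULL, NULL,
				    DIO_LOCKING | DIO_SKIP_HOLES);
}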

/*
 * @rw        read/write direction
 * @iocb      kernel io control block, containing the file offset and length of the IO
 * @inode     target inode
 * @bdev      block device the target lives on
 * @iov       user buffer iovec, usually a single element
 * @offset    file offset
 * @nr_segs   usually 1
 * @get_block resolves the mapping from in-file block numbers to on-disk block numbers
 * @end_io    callback invoked when the io completes, NULL here
 * @submit_io not sure what this one does, NULL here
 * @flags     flags, DIO_LOCKING | DIO_SKIP_HOLES here
 */
static inline ssize_t
do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
	struct block_device *bdev, const struct iovec *iov, loff_t offset, 
	unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
	dio_submit_t submit_io,	int flags)
{
	int seg;
	size_t size;
	unsigned long addr;
	unsigned i_blkbits = ACCESS_ONCE(inode->i_blkbits);
	unsigned blkbits = i_blkbits;
	unsigned blocksize_mask = (1 << blkbits) - 1;
	ssize_t retval = -EINVAL;
	loff_t end = offset;
	struct dio *dio;
	struct dio_submit sdio = { 0, };
	unsigned long user_addr;
	size_t bytes;
	struct buffer_head map_bh = { 0, };
	struct blk_plug plug;

	if (rw & WRITE)
		rw = WRITE_ODIRECT;

	/*
	 * Avoid references to bdev if not absolutely needed to give
	 * the early prefetch in the caller enough time.
	 */

	// is the starting file offset aligned to the inode block size, or at least to the block device's block size?
	if (offset & blocksize_mask) {
		if (bdev)
			blkbits = blksize_bits(bdev_logical_block_size(bdev));
		blocksize_mask = (1 << blkbits) - 1;
		if (offset & blocksize_mask)
			goto out;
	}

	/* Check the memory alignment.  Blocks cannot straddle pages */
	// the address and size of every iov must be aligned to the inode block size,
	// or at least to the block device's block size. Why? Because the smallest transfer
	// unit of block device io is the device block: a page maps a fixed power-of-two
	// number of blocks, i.e. a page has fixed slots for blocks, so a block cannot sit
	// at an arbitrary offset within a page.
	for (seg = 0; seg < nr_segs; seg++) {
		addr = (unsigned long)iov[seg].iov_base;
		size = iov[seg].iov_len;
		end += size;
		if (unlikely((addr & blocksize_mask) ||
			     (size & blocksize_mask))) {
			if (bdev)
				blkbits = blksize_bits(
					 bdev_logical_block_size(bdev));
			blocksize_mask = (1 << blkbits) - 1;
			if ((addr & blocksize_mask) || (size & blocksize_mask))
				goto out;
		}
	}

	/* watch out for a 0 len io from a tricksy fs */
	// this handles a zero-length read; the check further down handles reads that start beyond EOF
	if (rw == READ && end == offset)
		return 0;

	dio = kmem_cache_alloc(dio_cache, GFP_KERNEL);
	retval = -ENOMEM;
	if (!dio)
		goto out;
	/*
	 * Believe it or not, zeroing out the page array caused a .5%
	 * performance regression in a database benchmark.  So, we take
	 * care to only zero out what's needed.
	 */
	memset(dio, 0, offsetof(struct dio, pages));

	dio->flags = flags;
	if (dio->flags & DIO_LOCKING) {	// direct IO sets this flag, so the following path is taken
		if (rw == READ) {
			struct address_space *mapping =
					iocb->ki_filp->f_mapping;

			/* will be released by direct_io_worker */
			mutex_lock(&inode->i_mutex);	// lock the inode here, because the outer read system call takes no lock

			// flush the cached pages for this file range to disk first; easy to see why:
			// the write path already did this in the outer layer, but here we are in the read path
			retval = filemap_write_and_wait_range(mapping, offset,
							      end - 1);
			if (retval) {
				mutex_unlock(&inode->i_mutex);
				kmem_cache_free(dio_cache, dio);
				goto out;
			}
		}
	}

	/* Once we sampled i_size check for reads beyond EOF */
	// this handles reads that start beyond EOF; the check above handled zero-length reads
	dio->i_size = i_size_read(inode);
	if (rw == READ && offset >= dio->i_size) {
		if (dio->flags & DIO_LOCKING)
			mutex_unlock(&inode->i_mutex);
		kmem_cache_free(dio_cache, dio);
		retval = 0;
		goto out;
	}

	/*
	 * For file extending writes updating i_size before data writeouts
	 * complete can expose uninitialized blocks in dumb filesystems.
	 * In that case we need to wait for I/O completion even if asked
	 * for an asynchronous write.
	 */
	if (is_sync_kiocb(iocb)) // this condition holds here: do_sync_write/do_sync_read initialise the kiocb as synchronous
		dio->is_async = false;
	else if (!(dio->flags & DIO_ASYNC_EXTEND) &&
            (rw & WRITE) && end > i_size_read(inode))
		dio->is_async = false;
	else
		dio->is_async = true;

	dio->inode = inode;
	dio->rw = rw;

	/*
	 * For AIO O_(D)SYNC writes we need to defer completions to a workqueue
	 * so that we can call ->fsync.
	 */
	// this condition does not hold, since dio->is_async = false
	if ((dio->inode->i_sb->s_type->fs_flags & FS_HAS_DIO_IODONE2) &&
	    dio->is_async && (rw & WRITE) &&
	    ((iocb->ki_filp->f_flags & O_DSYNC) ||
	     IS_SYNC(iocb->ki_filp->f_mapping->host))) {
		retval = dio_set_defer_completion(dio);
		if (retval) {
			/*
			 * We grab i_mutex only for reads so we don't have
			 * to release it here
			 */
			kmem_cache_free(dio_cache, dio);
			goto out;
		}
	}

	/*
	 * Will be decremented at I/O completion time.
	 */
	// this condition holds, so inode_dio_begin is executed
	if (!(dio->flags & DIO_SKIP_DIO_COUNT))
		inode_dio_begin(inode);

	retval = 0;
	sdio.blkbits = blkbits;	// blkbits is either the block device's block bits or the inode's block bits;
				// to keep the analysis simple, assume the IO is aligned to the inode block size
	sdio.blkfactor = i_blkbits - blkbits;	// 0 under that assumption
	sdio.block_in_file = offset >> blkbits;	// index of the first file block of this IO

	sdio.get_block = get_block;
	dio->end_io = end_io;
	sdio.submit_io = submit_io;
	sdio.final_block_in_bio = -1;
	sdio.next_block_for_io = -1;

	dio->iocb = iocb;

	spin_lock_init(&dio->bio_lock);
	dio->refcount = 1;

	/*
	 * In case of non-aligned buffers, we may need 2 more
	 * pages since we need to zero out first and last block.
	 */
	// pages_in_io in the lines below appears to record the number of user pages that need to be mapped into kernel addresses
	if (unlikely(sdio.blkfactor))
		sdio.pages_in_io = 2;

	for (seg = 0; seg < nr_segs; seg++) {
		user_addr = (unsigned long)iov[seg].iov_base;
		sdio.pages_in_io +=
			((user_addr + iov[seg].iov_len + PAGE_SIZE-1) /
				PAGE_SIZE - user_addr / PAGE_SIZE);
	}

	blk_start_plug(&plug);

	for (seg = 0; seg < nr_segs; seg++) {
		user_addr = (unsigned long)iov[seg].iov_base;
		sdio.size += bytes = iov[seg].iov_len;

		/* Index into the first page of the first block */
		sdio.first_block_in_page = (user_addr & ~PAGE_MASK) >> blkbits; // block index, within its user page, of the first block of the seg-th iov
		sdio.final_block_in_request = sdio.block_in_file +
						(bytes >> blkbits); // block index one past the last file block that fits in the seg-th iov;
						// sdio.block_in_file is advanced inside do_direct_IO below, which is really not reader-friendly
		/* Page fetching state */
		sdio.head = 0;
		sdio.tail = 0;
		sdio.curr_page = 0;

		sdio.total_pages = 0;
		if (user_addr & (PAGE_SIZE-1)) {
			sdio.total_pages++;
			bytes -= PAGE_SIZE - (user_addr & (PAGE_SIZE - 1));
		}
		sdio.total_pages += (bytes + PAGE_SIZE - 1) / PAGE_SIZE;  // these lines compute the total number of pages this iov needs; one extra page may be required when the user address is not page-aligned
		sdio.curr_user_address = user_addr;

		retval = do_direct_IO(dio, &sdio, &map_bh); // the actual read/write worker, analysed in detail later

		dio->result += iov[seg].iov_len -
			((sdio.final_block_in_request - sdio.block_in_file) << // above, sdio.final_block_in_request = sdio.block_in_file + (bytes >> blkbits),
					blkbits);			// so this difference is the amount not yet read or written; dio->result records the completed bytes

		if (retval) {
			dio_cleanup(dio, &sdio);
			break;
		}
	} /* end iovec loop */

	if (retval == -ENOTBLK) {
		/*
		 * The remaining part of the request will be
		 * be handled by buffered I/O when we return
		 */
		retval = 0;
	}
	/*
	 * There may be some unwritten disk at the end of a part-written
	 * fs-block-sized block.  Go zero that now.
	 */
	// the idea here: for a write, when the end of the file is not aligned to the filesystem
	// block size (i_blkbits), the block that get_block allocates on disk is larger than what
	// is actually written (one fs block may contain several device blocks), so the extra
	// device blocks are zeroed here
	dio_zero_block(dio, &sdio, 1, &map_bh);

	if (sdio.cur_page) {
		ssize_t ret2;

		ret2 = dio_send_cur_page(dio, &sdio, &map_bh);
		if (retval == 0)
			retval = ret2;
		page_cache_release(sdio.cur_page);
		sdio.cur_page = NULL;
	}
	if (sdio.bio)
		dio_bio_submit(dio, &sdio);

	blk_finish_plug(&plug);

	/*
	 * It is possible that, we return short IO due to end of file.
	 * In that case, we need to release all the pages we got hold on.
	 */
	dio_cleanup(dio, &sdio);

	/*
	 * All block lookups have been performed. For READ requests
	 * we can let i_mutex go now that its achieved its purpose
	 * of protecting us from looking up uninitialized blocks.
	 */
	if (rw == READ && (dio->flags & DIO_LOCKING))
		mutex_unlock(&dio->inode->i_mutex);

	/*
	 * The only time we want to leave bios in flight is when a successful
	 * partial aio read or full aio write have been setup.  In that case
	 * bio completion will call aio_complete.  The only time it's safe to
	 * call aio_complete is when we return -EIOCBQUEUED, so we key on that.
	 * This had *better* be the only place that raises -EIOCBQUEUED.
	 */
	// at this point the bios have merely been submitted; the actual disk IO has not finished yet
	BUG_ON(retval == -EIOCBQUEUED);
	if (dio->is_async && retval == 0 && dio->result &&
	    ((rw == READ) || (dio->result == sdio.size)))
		retval = -EIOCBQUEUED;

	if (retval != -EIOCBQUEUED)
		dio_await_completion(dio);	// wait here for the IO to complete before returning

	if (drop_refcount(dio) == 0) {
		retval = dio_complete(dio, offset, retval, false);
	} else
		BUG_ON(retval != -EIOCBQUEUED);

out:
	return retval;
}

do_blockdev_direct_IO is a long function but does not do that much: for each contiguous user buffer it initialises sdio and dio and calls do_direct_IO. For a plain system call the iov has only one element. sdio and dio look like structures dedicated to direct IO; they track the user pages, the block mappings and the bio submission. Let's now look at do_direct_IO.
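As a quick aside, here is a tiny user-space sketch (with made-up values) of the pages_in_io / total_pages arithmetic seen above: a buffer that is block-aligned but not page-aligned spans one extra page.

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
        /* hypothetical values: 512-byte aligned but not page aligned */
        unsigned long user_addr = 0x10000200UL;
        unsigned long len = 8192;       /* two pages worth of data */

        /* the same formula do_blockdev_direct_IO uses for sdio.pages_in_io */
        unsigned long pages = (user_addr + len + PAGE_SIZE - 1) / PAGE_SIZE
                              - user_addr / PAGE_SIZE;

        printf("buffer spans %lu pages\n", pages);      /* prints 3 */
        return 0;
}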

/*
 * Walk the user pages, and the file, mapping blocks to disk and generating
 * a sequence of (page,offset,len,block) mappings.  These mappings are injected
 * into submit_page_section(), which takes care of the next stage of submission
 *
 * Direct IO against a blockdev is different from a file.  Because we can
 * happily perform page-sized but 512-byte aligned IOs.  It is important that
 * blockdev IO be able to have fine alignment and large sizes.
 *
 * So what we do is to permit the ->get_block function to populate bh.b_size
 * with the size of IO which is permitted at this offset and this i_blkbits.
 *
 * For best results, the blockdev should be set up with 512-byte i_blkbits and
 * it should set b_size to PAGE_SIZE or more inside get_block().  This gives
 * fine alignment but still allows this function to work in PAGE_SIZE units.
 */
 // the comment above spells out what this function does: walk the user pages and the file, produce a sequence of (page, offset, len, block) mappings, i.e. buffer_heads, and hand them to the next stage.
static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
			struct buffer_head *map_bh)
{
	const unsigned blkbits = sdio->blkbits;
	const unsigned blocks_per_page = PAGE_SIZE >> blkbits; // exactly what the name says: the number of blocks in a page, device blocks here
	struct page *page;
	unsigned block_in_page;
	int ret = 0;

	/* The I/O can start at any block offset within the first page */
	// sdio->first_block_in_page records the index, within its page, of the first block of the user buffer;
	// the IO can start at any block offset within the page, see the while loop below
	block_in_page = sdio->first_block_in_page; 

	while (sdio->block_in_file < sdio->final_block_in_request) { // iterate from the first block of the iov to the last
		// get the page that the block lives in; it is called on every iteration of the outer loop
		// because the inner loop below walks all the blocks within that page. This is where the flaw is introduced
		page = dio_get_page(dio, sdio);
		if (IS_ERR(page)) {
			ret = PTR_ERR(page);
			goto out;
		}

		while (block_in_page < blocks_per_page) { // walk every block in this page
			...
			// quite a few variables are maintained here and it is hard to follow; without going into the details,
			// the gist is that submit_page_section is called for each block to submit bios to the layer below
			ret = submit_page_section(dio, sdio, page,
						  offset_in_page,
						  this_chunk_bytes,
						  sdio->next_block_for_io,
						  map_bh);
			...
		}

		/* Drop the ref which was taken in get_user_pages() */
		page_cache_release(page);
		block_in_page = 0;	// a page is finished, reset the in-page block index to 0
	}
out:
	return ret;
}

The main thing to look at here is dio_get_page, which obtains the page used for the bio. Where does that page come from? Back when I read the buffered IO code this puzzled me: for buffered IO the bio pages come from the file's address_space, which is natural, but direct IO does not go through the buffer, and if it allocated temporary pages the efficiency would suffer.

/*
 * Get another userspace page.  Returns an ERR_PTR on error.  Pages are
 * buffered inside the dio so that we can call get_user_pages() against a
 * decent number of pages, less frequently.  To provide nicer use of the
 * L1 cache.
 */
 // get a page mapped from user space; it tries to grab everything needed in one go and records it in sdio, but returns only one page per call
static inline struct page *dio_get_page(struct dio *dio,
		struct dio_submit *sdio)
{
	if (dio_pages_present(sdio) == 0) {
		int ret;

		ret = dio_refill_pages(dio, sdio);	// pin the next batch of user pages
		if (ret)
			return ERR_PTR(ret);
		BUG_ON(dio_pages_present(sdio) == 0);
	}
	return dio->pages[sdio->head++];	// hand back the next pinned user page
}

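So the pages handed to the bios are the caller's own user pages, pinned into dio->pages by dio_refill_pages, not freshly allocated kernel pages. Abridged and quoted from memory of the 3.10 fs/direct-io.c (the write-to-a-hole fallback that substitutes ZERO_PAGE is left out), that helper looks roughly like this:

static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
{
	int ret;
	int nr_pages;

	nr_pages = min(sdio->total_pages - sdio->curr_page, DIO_PAGES);
	/* pin the user's pages so the block layer can transfer straight into/out of them */
	ret = get_user_pages_fast(
		sdio->curr_user_address,	/* start of the user buffer */
		nr_pages,			/* how many pages to pin */
		dio->rw == READ,		/* a file read writes to user memory */
		&dio->pages[0]);		/* pinned pages land here */

	if (ret >= 0) {
		sdio->curr_user_address += ret * PAGE_SIZE;
		sdio->curr_page += ret;
		sdio->head = 0;
		sdio->tail = ret;
		ret = 0;
	}
	return ret;
}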