
Ceph storage: an introduction to the file system benchmarking tool Filebench

Filebench is an automated file system benchmarking tool that measures file system performance by quickly emulating the workloads of real application servers. It can emulate not only file system micro-operations (such as copyfiles, createfiles, randomread, randomwrite) but also complex applications (such as varmail, fileserver, oltp, dss, webserver, webproxy). Filebench is well suited to benchmarking file servers, and since it is also a general workload generator, it can be used to measure file system performance more broadly.

Unlike the traditional iozone, this benchmark can emulate whole classes of workloads, such as a mail server or a file server. Compared with the well-known mail server and file server workloads, it also lets you modify workload parameters (for example, the file size the test should use). Browsing the FAST 2012 proceedings one day, I noticed that many papers used this benchmark in their evaluations, so its acceptance is likely to keep growing.

下載地址:http://sourceforge.net/apps/mediawiki/filebench/index.php?title=Main_Page

Features

Filebench includes many features to facilitate file system benchmarking:

  • Multiple workload types support via loadable personalities
  • Ships with more than 40 pre-defined personalities, including ones that describe mail, web, file, and database server behaviour
  • Easy to add new personalities using the rich Workload Model Language (WML)
  • Multi-process and multi-thread workload support
  • Configurable directory hierarchies with depth, width, and file sizes set to given statistical distributions
  • Support of asynchronous I/O and process synchronization primitives
  • Integrated statistics for throughput, latency, and CPU cycle counts per system call
  • Tested on Linux, FreeBSD, and Solaris platforms (should work for any POSIX-compliant Operating System)

Installation

Getting and Installing Filebench

Filebench is distributed as a compressed tar-ball with sources. Download the newest version of Filebench from the link above, uncompress it, and follow the regular procedure for building UNIX sources:

  • ./configure
  • make
  • sudo make install

If all of these steps complete successfully, you're ready to run Filebench. If you run into a compilation error or warning, please report it using the Bug Tracking System.

Filebench does not have any mandatory program/library dependencies except libc. If you want command-line auto-completion to work in Filebench, install libtecla before building Filebench.
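As a concrete sketch, the whole unpack-and-build sequence might look like the following (the 1.4.9 version number and tarball name are illustrative assumptions; substitute the release you actually downloaded):

tar xzf filebench-1.4.9.tar.gz        # unpack the source tarball
cd filebench-1.4.9
./configure                           # optionally pass --prefix to change install dirs
make
sudo make install                     # installs the binary and workload files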

Files in the Installation

Filebench's make install installs a single /usr/local/bin/filebench binary and a number of workload personalities in the /usr/local/share/filebench/workloads/ directory. Default installation directories may vary from distribution to distribution, and you can set them to other values when running ./configure. Workload personalities are stored in files with the .f extension. More than 40 pre-defined personalities are included in the Filebench package.

Running

Filebench generates I/O operations by executing a workload personality, which defines the workload to apply to the system and may expose various tunables to customize it. As mentioned earlier, Filebench ships with a library of these personalities, ready to use. Below we describe the case where one wants to use a pre-defined workload. If you want to define a new personality and run it, read the Writing Workload Models page.
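For orientation before diving into the pre-defined workloads: a personality is just a .f file written in WML. The following is a minimal read-only sketch, loosely modeled on the style of the shipped fileserver.f; all names, sizes, and thread counts here are illustrative assumptions, not a shipped workload:

set $dir=/mnt
set $nfiles=1000
set $meandirwidth=20
set $filesize=128k
set $nthreads=16

define fileset name=testfiles,path=$dir,size=$filesize,entries=$nfiles,dirwidth=$meandirwidth,prealloc=80

define process name=filereader,instances=1
{
  thread name=filereaderthread,memsize=10m,instances=$nthreads
  {
    flowop openfile name=openfile1,filesetname=testfiles,fd=1
    flowop readwholefile name=readfile1,fd=1,iosize=1m
    flowop closefile name=closefile1,fd=1
  }
}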

Interactively

If the directory where the filebench binary was installed is in your shell PATH, you can start Filebench by simply executing the filebench command. The Filebench prompt will appear after that:

user@host$ filebench
Filebench Version 1.4.9
IMPORTANT: Virtual address space randomization is enabled on this machine!
It is highly recommended to disable randomization to provide stable Filebench runs.
Echo 0 to /proc/sys/kernel/randomize_va_space file to disable the randomization.
WARNING: Could not open /proc/sys/kernel/shmmax file!
It means that you probably ran Filebench not as a root. Filebench will not increase shared
region limits in this case, which can lead to the failures on certain workloads.
11431: 0.000: Allocated 170MB of shared memory
filebench> quit

You can type Filebench commands now. Type quit to exit the prompt.

One can see two warnings above:

  • A lot of Linux distributions enable address space randomization. This prevents Filebench from mapping a shared memory region to the same address in different processes. Disable address space randomization (echo 0 > /proc/sys/kernel/randomize_va_space) for stable operation of multi-process workloads.
  • The second warning informs you that Filebench was not able to increase the shared memory region size. You can either:
    • Run Filebench as root
    • Increase the shared memory region size to 256MB (echo 268435456 > /proc/sys/kernel/shmmax, as root) and ignore this warning; see the sketch after this list
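Both tweaks can be applied from an unprivileged shell as in the following sketch. Note that sudo echo 0 > file does not work as intended, because the redirection is performed by the unprivileged shell; piping through tee avoids that:

echo 0 | sudo tee /proc/sys/kernel/randomize_va_space    # disable address space randomization
echo 268435456 | sudo tee /proc/sys/kernel/shmmax        # raise the shared memory limit to 256MB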

If one disables address space randomization and runs Filebench as root, the output looks much cleaner:

user@host$ sudo su
[sudo] password for user:
root@host# echo 0 > /proc/sys/kernel/randomize_va_space
root@host# go_filebench
Filebench Version 1.4.9
12102: 0.000: Allocated 170MB of shared memory
filebench> 

Now, one can load and run individual workload personalities with full control over their parameters. The following example demonstrates how to run fileserver workload personality:

root@host# go_filebench
Filebench Version 1.4.9
12324: 0.000: Allocated 170MB of shared memory
filebench> load fileserver
12462: 2.869: FileServer Version 2.2 personality successfully loaded
12462: 2.869: Usage: set $dir=<dir>
12462: 2.869:        set $meanfilesize=<size>     defaults to 131072
12462: 2.869:        set $nfiles=<value>      defaults to 10000
12462: 2.869:        set $nthreads=<value>    defaults to 50
12462: 2.869:        set $meanappendsize=<value>  defaults to 16384
12462: 2.869:        set $iosize=<size>  defaults to 1048576
12462: 2.869:        set $meandirwidth=<size> defaults to 20
12462: 2.869: (sets mean dir width and dir depth is calculated as log (width, nfiles)
12462: 2.869:        run runtime (e.g. run 60)
filebench> set $dir=/mnt
filebench> run 60
12462: 4.909: Creating/pre-allocating files and filesets
12462: 4.918: Fileset bigfileset: 10000 files, avg dir width = 20, avg dir depth = 3.1, 1240.757MB
12462: 5.280: Removed any existing fileset bigfileset in 1 seconds
12462: 5.280: making tree for filset /tmp/bigfileset
12462: 5.290: Creating fileset bigfileset...
12462: 6.080: Preallocated 7979 of 10000 of fileset bigfileset in 1 seconds
12462: 6.080: waiting for fileset pre-allocation to finish
12466: 6.080: Starting 1 filereader instances
12467: 6.081: Starting 50 filereaderthread threads
12462: 7.137: Running...
12462: 67.142: Run took 60 seconds...
12462: 67.145: Per-Operation Breakdown
statfile1            128311ops     2138ops/s   0.0mb/s      0.0ms/op     2320us/op-cpu [0ms - 0ms]
deletefile1          128316ops     2138ops/s   0.0mb/s      0.2ms/op     2535us/op-cpu [0ms - 458ms]
closefile3           128323ops     2139ops/s   0.0mb/s      0.0ms/op     2328us/op-cpu [0ms - 0ms]
readfile1            128327ops     2139ops/s 283.8mb/s      0.1ms/op     2460us/op-cpu [0ms - 267ms]
openfile2            128329ops     2139ops/s   0.0mb/s      0.0ms/op     2332us/op-cpu [0ms - 2ms]
closefile2           128332ops     2139ops/s   0.0mb/s      0.0ms/op     2332us/op-cpu [0ms - 0ms]
appendfilerand1      128337ops     2139ops/s  16.6mb/s      0.1ms/op     2377us/op-cpu [0ms - 559ms]
openfile1            128343ops     2139ops/s   0.0mb/s      0.0ms/op     2353us/op-cpu [0ms - 2ms]
closefile1           128349ops     2139ops/s   0.0mb/s      0.0ms/op     2317us/op-cpu [0ms - 1ms]
wrtfile1             128352ops     2139ops/s 265.2mb/s      0.1ms/op     2601us/op-cpu [0ms - 268ms]
createfile1          128358ops     2139ops/s   0.0mb/s      0.1ms/op     2396us/op-cpu [0ms - 267ms]
12462: 67.145: IO Summary: 1411677 ops, 23526 ops/s, (2139/4278 r/w), 565mb/s, 393us cpu/op, 0.2ms latency
12462: 67.145: Shutting down processes
root@host#

As you can see, we first loaded the fileserver personality using the load command, and Filebench located the corresponding .f file in the directory of pre-defined workloads. After that, the tunables of the workload personality can be set; we change the benchmark directory to /mnt, where the file system we want to benchmark is presumably mounted. To run the workload for 60 seconds we execute the run 60 command. In response, Filebench first creates a file system tree with the properties defined in the personality file and then spawns all required processes and threads. After the 60-second run, statistics are printed and Filebench exits.
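Since the context of this post is Ceph, a typical interactive session against a mounted Ceph file system might look like the following sketch (the mount point /mnt/cephfs and the tunable values are illustrative assumptions, not values from the run above):

filebench> load fileserver
filebench> set $dir=/mnt/cephfs
filebench> set $nfiles=50000
filebench> set $nthreads=16
filebench> run 60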

Non-interactively

If you wish to run Filebench in non-interactive mode, you can use the -f option. However, you need to add 'run <time>' to the end of the workload personality file. I prefer to create a copy of the original workload file for that:

root@host# cp /usr/local/share/filebench/workloads/fileserver.f /tmp/fileserver-noninteractive.f
root@host# vim /tmp/fileserver-noninteractive.f (add 'run 60' to the end of this file)
root@host# filebench -f /tmp/fileserver-noninteractive.f

After that, you will see the traditional Filebench output.
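The same steps can be scripted end to end; here is a minimal sketch using the paths and 60-second runtime from above, appending the run directive with echo instead of editing the file by hand:

cp /usr/local/share/filebench/workloads/fileserver.f /tmp/fileserver-noninteractive.f
echo 'run 60' >> /tmp/fileserver-noninteractive.f   # append the run directive
filebench -f /tmp/fileserver-noninteractive.f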



Two caveats called out in the documentation:
1. Many Linux distributions enable address space randomization by default (see the explanation above for what address space randomization is). For stable operation when testing multi-process workloads, you can disable it:
echo 0 > /proc/sys/kernel/randomize_va_space
2. If you want to increase Filebench's shared memory region, there are two ways:
   a. Run it as root (su).
   b. Raise the shared memory limit directly:
echo 268435456 > /proc/sys/kernel/shmmax (increases it to 256MB)

Running

You can load a workload type that others have already defined (such as fileserver), or define your own workload with the Workload Model Language. The steps below follow the documentation's demonstration of loading the pre-defined fileserver workload:
1. Start filebench
2. load fileserver (Filebench loads the corresponding .f file, which ships with the code and can also be downloaded from http://sourceforge.net/apps/mediawiki/filebench/index.php?title=Pre-defined_personalities)
3. set $dir=/mnt (benchmark the /mnt directory)
4. run 60 (run for 60 seconds; statistics are printed when the run finishes)

Non-interactive run

A single command completes the whole test, with no need to set each parameter step by step as above. Remember to append 'run <time>' to the end of the configuration file.

http://sourceforge.net/apps/mediawiki/filebench/index.php?title=Main_Page#Support
