fio --name first-run --size 1M
So what happened?
fio ran one job called “first-run”. We did not specify what that job should do, only that it should run until 1 MB has been transferred (size=1M). But what data has been transferred, to where, and how?
So fio used some default values in this case, which can be seen in Block 1 (an explicit version of the command follows after Block 1).
Block 1
- “g=0”
- this job belongs to group 0 – Groups can be used to aggregate job results.
- “rw=read”
- the default IO pattern is sequential read
- “bs=4K-4K/4K-4K/4K-4K”
- the default (read/write/trim) blocksize will be 4K
- “ioengine=sync”
- the default ioengine is synchronous, so there is no parallel (async) access
- “iodepth=1”
- by default there will be no more than 1 IO unit in flight against the device
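To make these defaults visible, the same 1 MB run can be written with all of them spelled out explicitly. This is only an illustrative sketch (the job name “first-run-explicit” is made up), but the options are standard fio ones:

fio --name=first-run-explicit --size=1M --rw=read --bs=4k --ioengine=sync --iodepth=1

This should produce essentially the same result as the short command at the top, because it only repeats what fio would have assumed anyway.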
Block 2
- “Laying out IO files…. ”
- This step creates a 1 MB file named “first-run.0.0” in the working directory, if it does not already exist
- This file is used for the data transfer (see the example after this block)
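If you would rather have fio use a specific existing file or block device instead of laying out its own “first-run.0.0”, you can point it there with the filename option. A sketch with a placeholder path:

fio --name=first-run --size=1M --filename=/path/to/testfile

The path is only an example; for a regular file fio will lay it out at that location if it does not exist yet.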
Block 3
- Name of the job and some information about it, like
- “err=0”
- no errors occurred when running this job
Block 4
- This block shows the IO statistics for the job.
- “read” (remember the default for this job is sequential read)
- “io=1024.0KB”
- the amount of data transferred from the file (1024 KB = 1 MB)
- “bw=341333KB/s”
- we transferred data at an average speed of ~333 MB per second
- “iops=85333”
- the average number of IOs per second (each IO being 4 KB in this case)
- “runt=3msec”
- The job ran for 3 milliseconds
Actually, in this case we only scheduled 256 IOs (1 MB / 4 KB) to the file, and that took only 3 milliseconds. So the value of 85333 only means that we could achieve this many IOs per second if we kept reading at that rate for a full second (see the longer run after this block).
1 s / 0.003 s ≈ 333; we completed 256 IOs in those 3 ms, so 256 × 333 ≈ 85333 IOPS
- the rest of Block 4 shows the latency distribution in detail. For more details read Part 8.
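A minimal sketch of such a longer run, assuming we simply let the same sequential read loop over the 1 MB file for 10 seconds (the job name is again just an example):

fio --name=first-run-10s --size=1M --time_based --runtime=10

With time_based the job keeps looping over the file until the runtime expires, so the reported iops then reflects a full second of real work instead of an extrapolation from 3 ms.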
Block 5
- “cpu”
- this line shows the CPU usage of the running job
- “usr=0.00%”
- this is the percentage of CPU usage of the running job at user level
- It is nearly 0; remember the job ran only for 3 ms, so there was almost no impact on the CPU
- 100% would mean that one CPU core is fully loaded (what that means in practice depends on whether Hyper-Threading is on or off)
- “sys=0.00%”
- this is the percentage of CPU usage of the running job at system/kernel level
- “ctx=8”
- The number of context switches this thread encountered while running
- “majf=0” and “minf=0”
- The number of major and minor page faults
Block 6
- this block shows the distribution of IO depths over the job's lifetime.
- “IO depths : 1=100.0%… ”
- this number shows that the job was always able to keep 1 IO unit in flight (see Block 1)
- “submit: …..4=100.0%….”
- shows how many IOs were submitted in a single call. In this case it was in the range of 1 to 4
- we know that the IO depth was always 1, so this indicates that the number of IOs submitted per call was 1 all the time (an example with a deeper queue follows after this block)
- “complete: …. 4=100.0%….”
- same as submit, but for completion calls.
- “issued:….total=r=256/w=0/d=0″…
- 256 read IOs have been issued, no writes, no discards, and none of them have been short
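To actually see IO depths greater than 1 in this block, the job needs an asynchronous ioengine and a deeper queue. A hedged sketch (libaio is Linux-specific, and direct=1 is used so the requests are not simply answered from the page cache):

fio --name=first-run-async --size=1M --ioengine=libaio --iodepth=16 --direct=1

With such a job the “IO depths” line would typically show entries above 1, and submit/complete can show batches larger than a single IO.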
Block 7
- This is the group statistic. We ran only one job belonging to group 0 (a multi-job example follows after this block)
- READ
- “io=1024KB”
- the same amount of transferred data as in the job statistics
- “aggrb=341333KB/s”
- aggregated bandwidth of all jobs/threads for group 0
- “minb=341333KB/s maxb=341333KB/s”
- The minimum and maximum bandwidth a single thread/job saw. They are identical here because only one job ran.
- “mint=3msec” and “maxt=3msec”
- Shortest and longest runtime among the jobs in the group. They are the same because we ran only one job
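Group reporting becomes more interesting with several jobs in one group. A possible sketch, assuming four parallel copies of the same read job aggregated into one report:

fio --name=first-run-multi --size=1M --numjobs=4 --group_reporting

numjobs=4 starts four clones of the job, and group_reporting collapses their statistics into a single group summary, so aggrb, minb/maxb and mint/maxt then really describe different threads.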
Block 8
- Disk statistics for the involved disks, but they look strange, don't they?
- “sda: ios=0/0”
- 0 READ IO and 0 WRITE IO on /dev/sda
- “merge=0/0” is the number of IO merges performed by the IO scheduler
- no merges here
- “ticks=0/0”
- number of ticks we kept the drive busy; apparently never in this case
- “in_queue=0”
- total time spent in the disk queue
- “util=0.00%”
- the utilization of the drive -> so nothing was done on the disk?
So what we are seeing here is probably the effect of the Linux page cache (file cache/buffer) for the ext4 file. It seems the blocks were already cached or prefetched when we read them, and Linux readahead can have an influence as well.
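One way to take the page cache out of the picture is to repeat the run with direct (unbuffered) IO, so the reads have to go to the disk. A sketch:

fio --name=first-run-direct --size=1M --direct=1

With direct=1 the disk statistics in Block 8 should show real read IOs, ticks and a non-zero utilization.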