Tools used for NVMe benchmarking

To benchmark our NVMe devices, we used two standard Linux tools: dd and fio. The test commands and the full output of each run are shown below.

dd test results

Write speed

Test command: dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync 

Command output (result):

32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 0.950341 s, 2.3 GB/s

Test command: dd if=/dev/zero of=dd.test bs=64K count=256K conv=fdatasync

Command output (result):

262144+0 records in
262144+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 9.85727 s, 1.7 GB/s
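The conv=fdatasync flag makes dd flush all data to the device before reporting, so the final sync is included in the figure. Writes still pass through the Linux page cache during the run, however; a variant with oflag=direct (a sketch, not part of the original test) bypasses the cache and usually gives lower but steadier numbers:

```shell
# Sketch: same 2 GiB write, but oflag=direct sends each 64K block straight
# to the device instead of the page cache. Expect a lower, steadier figure.
dd if=/dev/zero of=dd.test bs=64K count=32K oflag=direct conv=fdatasync
rm -f dd.test  # remove the test file when done
```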

Read speed

Test command: dd if=dd.test of=/dev/null bs=64k count=32k

Command output (result):

32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 0.108124 s, 19.9 GB/s

Test command: dd if=dd.test of=/dev/null bs=64k count=128k

Command output (result):

131072+0 records in
131072+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.356223 s, 24.1 GB/s
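A caveat on these read figures: dd.test was written moments earlier, so the reads are served largely from the Linux page cache, which is why they exceed what the device itself can sustain. To measure the device rather than RAM, drop the cache first or read with direct I/O (a sketch, not part of the original run; dropping caches needs root):

```shell
# Sketch: flush dirty pages, drop the page cache, then re-read the file
# with iflag=direct so every block actually comes off the device.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
dd if=dd.test of=/dev/null bs=64K count=32K iflag=direct
```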

fio test results

Write speed

Test command: fio --name=write_throughput --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write --group_reporting=1

Command output (result):

Starting 8 processes
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
Jobs: 6 (f=1): [f(1),_(1),f(3),W(1),f(1),_(1)][100.0%][w=7648MiB/s][w=7648 IOPS][eta 00m:00s]
write_throughput: (groupid=0, jobs=8): err= 0: pid=1208: Wed Jul  9 14:26:45 2025
  write: IOPS=7364, BW=7373MiB/s (7731MB/s)(432GiB/60061msec); 0 zone resets
    slat (usec): min=17, max=216204, avg=65.11, stdev=935.71
    clat (usec): min=690, max=420241, avg=69426.22, stdev=24771.17
     lat (usec): min=754, max=420293, avg=69491.41, stdev=24778.89
    clat percentiles (msec):
     |  1.00th=[   29],  5.00th=[   39], 10.00th=[   44], 20.00th=[   51],
     | 30.00th=[   56], 40.00th=[   62], 50.00th=[   67], 60.00th=[   72],
     | 70.00th=[   78], 80.00th=[   85], 90.00th=[   99], 95.00th=[  113],
     | 99.00th=[  142], 99.50th=[  184], 99.90th=[  255], 99.95th=[  275],
     | 99.99th=[  384]
   bw (  MiB/s): min= 4307, max= 9641, per=100.00%, avg=7374.24, stdev=129.49, samples=952
   iops        : min= 4307, max= 9641, avg=7373.63, stdev=129.48, samples=952
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.07%, 20=0.22%, 50=18.45%
  lat (msec)   : 100=72.23%, 250=9.02%, 500=0.12%
  cpu          : usr=2.54%, sys=2.38%, ctx=15815, majf=0, minf=291
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,442296,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=7373MiB/s (7731MB/s), 7373MiB/s-7373MiB/s (7731MB/s-7731MB/s), io=432GiB (464GB), run=60061-60061msec

Disk stats (read/write):
  vda: ios=143/459613, sectors=5352/937132392, merge=15/1393, ticks=8798/30429617, in_queue=30440235, util=76.03%
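fio reports bandwidth in both binary MiB/s and decimal MB/s; the two figures in the summary line are the same number in different units. A quick sanity check of the conversion (not part of the fio run):

```shell
# 7373 MiB/s expressed in decimal MB/s: 1 MiB = 1048576 B, 1 MB = 1000000 B.
awk 'BEGIN { printf "%.0f MB/s\n", 7373 * 1048576 / 1000000 }'
# prints "7731 MB/s", matching the (7731MB/s) figure in the summary above
```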

Write IOPS

Test command: fio --name=write_iops --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1

Command output (result):

Starting 1 process
write_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=590MiB/s][w=151k IOPS][eta 00m:00s]
write_iops: (groupid=0, jobs=1): err= 0: pid=1648: Wed Jul  9 14:28:19 2025
  write: IOPS=50.6k, BW=198MiB/s (207MB/s)(11.6GiB/60001msec); 0 zone resets
    slat (nsec): min=1081, max=13911k, avg=16122.79, stdev=106841.57
    clat (usec): min=27, max=16855, avg=1247.03, stdev=1062.31
     lat (usec): min=28, max=16866, avg=1263.15, stdev=1072.62
    clat percentiles (usec):
     |  1.00th=[  110],  5.00th=[  262], 10.00th=[  375], 20.00th=[  537],
     | 30.00th=[  668], 40.00th=[  898], 50.00th=[ 1287], 60.00th=[ 1483],
     | 70.00th=[ 1647], 80.00th=[ 1795], 90.00th=[ 1942], 95.00th=[ 2040],
     | 99.00th=[ 2474], 99.50th=[10683], 99.90th=[13173], 99.95th=[13566],
     | 99.99th=[14615]
   bw (  KiB/s): min=134896, max=617000, per=98.49%, avg=199491.48, stdev=97214.32, samples=119
   iops        : min=33724, max=154250, avg=49872.87, stdev=24303.57, samples=119
  lat (usec)   : 50=0.10%, 100=0.75%, 250=3.82%, 500=13.05%, 750=17.20%
  lat (usec)   : 1000=7.47%
  lat (msec)   : 2=51.12%, 4=5.76%, 10=0.16%, 20=0.57%
  cpu          : usr=7.07%, sys=61.03%, ctx=1044959, majf=0, minf=36
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,3038365,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=198MiB/s (207MB/s), 198MiB/s-198MiB/s (207MB/s-207MB/s), io=11.6GiB (12.4GB), run=60001-60001msec

Disk stats (read/write):
  vda: ios=0/3191910, sectors=0/35099352, merge=0/1195853, ticks=0/517642, in_queue=517729, util=54.26%

Read speed

Test command: fio --name=read_throughput --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read --group_reporting=1

Command output (result):

Starting 8 processes
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
read_throughput: Laying out IO file (1 file / 10240MiB)
Jobs: 8 (f=8): [R(8)][100.0%][r=23.4GiB/s][r=24.0k IOPS][eta 00m:00s]                   
read_throughput: (groupid=0, jobs=8): err= 0: pid=1661: Wed Jul  9 14:31:49 2025
  read: IOPS=24.0k, BW=23.5GiB/s (25.2GB/s)(1409GiB/60021msec)
    slat (usec): min=11, max=6548, avg=31.27, stdev=30.51
    clat (usec): min=4783, max=78718, avg=21267.04, stdev=2518.73
     lat (usec): min=4818, max=78750, avg=21298.31, stdev=2518.63
    clat percentiles (usec):
     |  1.00th=[11338],  5.00th=[19530], 10.00th=[20055], 20.00th=[20579],
     | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365],
     | 70.00th=[21627], 80.00th=[21890], 90.00th=[22414], 95.00th=[22938],
     | 99.00th=[31851], 99.50th=[33817], 99.90th=[40109], 99.95th=[44303],
     | 99.99th=[66323]
   bw (  MiB/s): min=23136, max=24999, per=100.00%, avg=24069.62, stdev=42.14, samples=954
   iops        : min=23136, max=24999, avg=24068.95, stdev=42.16, samples=954
  lat (msec)   : 10=0.61%, 20=7.71%, 50=91.67%, 100=0.04%
  cpu          : usr=0.48%, sys=11.01%, ctx=168125, majf=0, minf=291
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1442599,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=23.5GiB/s (25.2GB/s), 23.5GiB/s-23.5GiB/s (25.2GB/s-25.2GB/s), io=1409GiB (1513GB), run=60021-60021msec

Disk stats (read/write):
  vda: ios=1490241/25, sectors=3052015616/840, merge=0/81, ticks=31296269/408, in_queue=31296708, util=55.02%

Read IOPS

Test command: fio --name=read_iops --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1

Command output (result):

Starting 1 process
read_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=848MiB/s][r=217k IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=1674: Wed Jul  9 14:33:32 2025
  read: IOPS=215k, BW=841MiB/s (881MB/s)(49.3GiB/60001msec)
    slat (nsec): min=1011, max=259773, avg=1423.83, stdev=1308.05
    clat (usec): min=106, max=1626, avg=295.78, stdev=57.09
     lat (usec): min=108, max=1627, avg=297.20, stdev=57.05
    clat percentiles (usec):
     |  1.00th=[  182],  5.00th=[  208], 10.00th=[  223], 20.00th=[  241],
     | 30.00th=[  260], 40.00th=[  281], 50.00th=[  302], 60.00th=[  314],
     | 70.00th=[  326], 80.00th=[  343], 90.00th=[  367], 95.00th=[  392],
     | 99.00th=[  433], 99.50th=[  445], 99.90th=[  474], 99.95th=[  486],
     | 99.99th=[  627]
   bw (  KiB/s): min=852360, max=871264, per=100.00%, avg=861023.85, stdev=4055.12, samples=120
   iops        : min=213090, max=217816, avg=215255.99, stdev=1013.82, samples=120
  lat (usec)   : 250=25.17%, 500=74.80%, 750=0.03%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=12.13%, sys=44.02%, ctx=463774, majf=0, minf=36
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=12911582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=841MiB/s (881MB/s), 841MiB/s-841MiB/s (881MB/s-881MB/s), io=49.3GiB (52.9GB), run=60001-60001msec

Disk stats (read/write):
  vda: ios=13338105/30, sectors=106704840/848, merge=0/76, ticks=3409775/12, in_queue=3409787, util=58.07%

Get a cloud server with NVMe today!