Benchmark of the SAKURA VPS SSD 1G Plan
I signed up for the SSD plan the moment it launched at noon today and ran benchmarks on the default environment; here are the results.
CPU info
The CPU is an Intel Xeon E5-2640, as reported by /proc/cpuinfo:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Xeon(R) CPU E5-2640
stepping : 1
cpu MHz : 2499.998
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc arch_perfmon unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor xsaveopt
bogomips : 4999.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Xeon(R) CPU E5-2640
stepping : 1
cpu MHz : 2499.998
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc arch_perfmon unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor xsaveopt
bogomips : 4999.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
UnixBench
========================================================================
BYTE UNIX Benchmarks (Version 5.1.3)
System: www3241gi.sakura.ne.jp: GNU/Linux
OS: GNU/Linux -- 2.6.32-279.14.1.el6.x86_64 -- #1 SMP Tue Nov 6 23:43:09 UTC 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Intel(R) Xeon(R) CPU E5-2640 (5000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET
CPU 1: Intel(R) Xeon(R) CPU E5-2640 (5000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET
17:38:13 up 0 min, 1 user, load average: 0.00, 0.00, 0.00; runlevel 3
------------------------------------------------------------------------
Benchmark Run: Thu Dec 13 2012 17:38:13 - 18:06:25
2 CPUs in system; running 1 parallel copy of tests
Dhrystone 2 using register variables 27460863.0 lps (10.0 s, 7 samples)
Double-Precision Whetstone 2922.8 MWIPS (10.0 s, 7 samples)
Execl Throughput 3280.7 lps (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 926853.0 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 267654.7 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1128665.7 KBps (30.0 s, 2 samples)
Pipe Throughput 1978252.5 lps (10.0 s, 7 samples)
Pipe-based Context Switching 28567.4 lps (10.0 s, 7 samples)
Process Creation 9445.1 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 5657.4 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1231.1 lpm (60.0 s, 2 samples)
System Call Overhead 3329179.1 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 27460863.0 2353.1
Double-Precision Whetstone 55.0 2922.8 531.4
Execl Throughput 43.0 3280.7 762.9
File Copy 1024 bufsize 2000 maxblocks 3960.0 926853.0 2340.5
File Copy 256 bufsize 500 maxblocks 1655.0 267654.7 1617.2
File Copy 4096 bufsize 8000 maxblocks 5800.0 1128665.7 1946.0
Pipe Throughput 12440.0 1978252.5 1590.2
Pipe-based Context Switching 4000.0 28567.4 71.4
Process Creation 126.0 9445.1 749.6
Shell Scripts (1 concurrent) 42.4 5657.4 1334.3
Shell Scripts (8 concurrent) 6.0 1231.1 2051.8
System Call Overhead 15000.0 3329179.1 2219.5
========
System Benchmarks Index Score 1113.6
------------------------------------------------------------------------
Benchmark Run: Thu Dec 13 2012 18:06:25 - 18:34:36
2 CPUs in system; running 2 parallel copies of tests
Dhrystone 2 using register variables 54562641.4 lps (10.0 s, 7 samples)
Double-Precision Whetstone 5840.5 MWIPS (9.9 s, 7 samples)
Execl Throughput 7479.4 lps (29.5 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 1012680.1 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 276063.8 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1572368.5 KBps (30.0 s, 2 samples)
Pipe Throughput 3878300.4 lps (10.0 s, 7 samples)
Pipe-based Context Switching 553856.9 lps (10.0 s, 7 samples)
Process Creation 23674.3 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 9243.6 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1196.2 lpm (60.0 s, 2 samples)
System Call Overhead 4499703.6 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 54562641.4 4675.5
Double-Precision Whetstone 55.0 5840.5 1061.9
Execl Throughput 43.0 7479.4 1739.4
File Copy 1024 bufsize 2000 maxblocks 3960.0 1012680.1 2557.3
File Copy 256 bufsize 500 maxblocks 1655.0 276063.8 1668.1
File Copy 4096 bufsize 8000 maxblocks 5800.0 1572368.5 2711.0
Pipe Throughput 12440.0 3878300.4 3117.6
Pipe-based Context Switching 4000.0 553856.9 1384.6
Process Creation 126.0 23674.3 1878.9
Shell Scripts (1 concurrent) 42.4 9243.6 2180.1
Shell Scripts (8 concurrent) 6.0 1196.2 1993.7
System Call Overhead 15000.0 4499703.6 2999.8
========
System Benchmarks Index Score 2164.3
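As a sanity check: each UnixBench index is the raw result divided by its baseline and multiplied by 10, and the final System Benchmarks Index Score is the geometric mean of the twelve indexes. A minimal sketch reproducing both scores above:

```python
from math import prod

def unixbench_score(indexes):
    """System Benchmarks Index Score: geometric mean of the per-test indexes."""
    return prod(indexes) ** (1.0 / len(indexes))

# Index columns from the 1-copy and 2-copy runs above
one_copy = [2353.1, 531.4, 762.9, 2340.5, 1617.2, 1946.0,
            1590.2, 71.4, 749.6, 1334.3, 2051.8, 2219.5]
two_copies = [4675.5, 1061.9, 1739.4, 2557.3, 1668.1, 2711.0,
              3117.6, 1384.6, 1878.9, 2180.1, 1993.7, 2999.8]

print(round(unixbench_score(one_copy), 1))    # ~1113.6
print(round(unixbench_score(two_copies), 1))  # ~2164.3
```

Because the score is a geometric mean, a single weak test (like the 71.4 on Pipe-based Context Switching in the 1-copy run) drags the overall index down noticeably.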
IOPing
Measurements taken with ioping.
ioping --help
Usage: ioping [-LCDRq] [-c count] [-w deadline] [-p period] [-i interval]
[-s size] [-S wsize] [-o offset] device|file|directory
ioping -h | -v
-c <count> stop after <count> requests
-w <deadline> stop after <deadline>
-p <period> print raw statistics for every <period> requests
-i <interval> interval between requests (1s)
-s <size> request size (4k)
-S <wsize> working set size (1m)
-o <offset> in file offset
-L use sequential operations (includes -s 256k)
-C use cached I/O
-D use direct I/O
-R seek rate test (same as -q -i 0 -w 3 -S 64m)
-q suppress human-readable output
-h display this message and exit
-v display version and exit
I/O 10 requests
IOPing I/O: ioping -c 10
4096 bytes from . (ext4 /dev/vda3): request=1 time=0.4 ms
4096 bytes from . (ext4 /dev/vda3): request=2 time=0.3 ms
4096 bytes from . (ext4 /dev/vda3): request=3 time=0.5 ms
4096 bytes from . (ext4 /dev/vda3): request=4 time=0.5 ms
4096 bytes from . (ext4 /dev/vda3): request=5 time=0.5 ms
4096 bytes from . (ext4 /dev/vda3): request=6 time=0.4 ms
4096 bytes from . (ext4 /dev/vda3): request=7 time=0.5 ms
4096 bytes from . (ext4 /dev/vda3): request=8 time=0.3 ms
4096 bytes from . (ext4 /dev/vda3): request=9 time=0.5 ms
4096 bytes from . (ext4 /dev/vda3): request=10 time=0.4 ms
--- . (ext4 /dev/vda3) ioping statistics ---
10 requests completed in 9005.6 ms, 2314 iops, 9.0 mb/s
min/avg/max/mdev = 0.3/0.4/0.5/0.1 ms
2314 iops
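The iops figure is derived from per-request service time, not wall-clock time (the default 1 s interval between requests is excluded). A rough cross-check from the rounded latencies printed above:

```python
# Rounded per-request latencies from the run above, in milliseconds
latencies_ms = [0.4, 0.3, 0.5, 0.5, 0.5, 0.4, 0.5, 0.3, 0.5, 0.4]

avg_ms = sum(latencies_ms) / len(latencies_ms)
iops_estimate = 1000 / avg_ms  # requests per second at that average latency

print(round(avg_ms, 2), round(iops_estimate))
```

This gives roughly 2326 iops against the reported 2314; the small gap comes from ioping rounding the displayed latencies to one decimal place.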
seek rate
IOPing seek rate: ioping -RD
--- . (ext4 /dev/vda3) ioping statistics ---
15596 requests completed in 3000.1 ms, 8551 iops, 33.4 mb/s
min/avg/max/mdev = 0.1/0.1/8.9/0.1 ms
8551 iops
sequential
IOPing sequential: ioping -RL
--- . (ext4 /dev/vda3) ioping statistics ---
6255 requests completed in 3000.1 ms, 2617 iops, 654.2 mb/s
min/avg/max/mdev = 0.3/0.4/9.4/0.1 ms
2617 iops
cached
IOPing cached: ioping -RC
38393 requests completed in 3000.0 ms, 328092 iops, 1281.6 mb/s
min/avg/max/mdev = 0.0/0.0/0.1/0.0 ms
328092 iops
fio
Benchmarks with fio.
reads.ini
The parameters are as follows:
[global]
randrepeat=1
ioengine=libaio
bs=4k
ba=4k
size=1G
direct=1
gtod_reduce=1
norandommap
iodepth=64
numjobs=1
[randomreads]
startdelay=0
filename=sb-io-test
readwrite=randread
The results:
randomreads: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.10
Starting 1 process
randomreads: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [r] [100.0% done] [44000K/0K/0K /s] [11.0K/0 /0 iops] [eta 00m:00s]
randomreads: (groupid=0, jobs=1): err= 0: pid=1660: Fri Dec 14 00:13:32 2012
read : io=1024.3MB, bw=44039KB/s, iops=11009 , runt= 23816msec
cpu : usr=2.59%, sys=11.35%, ctx=3890, majf=0, minf=90
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=262207/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=1024.3MB, aggrb=44038KB/s, minb=44038KB/s, maxb=44038KB/s, mint=23816msec, maxt=23816msec
Disk stats (read/write):
vda: ios=261361/3, merge=0/0, ticks=1379888/5, in_queue=1380424, util=98.99%
11009 iops
writes.ini
The parameters are as follows:
[global]
randrepeat=1
ioengine=libaio
bs=4k
ba=4k
size=1G
direct=1
gtod_reduce=1
norandommap
iodepth=64
numjobs=1
[randomwrites]
startdelay=0
filename=sb-io-test
readwrite=randwrite
The results:
randomwrites: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.10
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/44120K/0K /s] [0 /11.3K/0 iops] [eta 00m:00s]
randomwrites: (groupid=0, jobs=1): err= 0: pid=1663: Fri Dec 14 00:14:37 2012
write: io=1024.3MB, bw=44042KB/s, iops=11010 , runt= 23814msec
cpu : usr=2.60%, sys=11.82%, ctx=12615, majf=0, minf=26
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=0/w=262207/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1024.3MB, aggrb=44042KB/s, minb=44042KB/s, maxb=44042KB/s, mint=23814msec, maxt=23814msec
Disk stats (read/write):
vda: ios=0/260706, merge=0/0, ticks=0/1449818, in_queue=1451469, util=99.52%
11010 iops
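Since the block size is fixed at 4 KiB, the reported bandwidth and iops figures are two views of the same number (iops ≈ bandwidth / block size). A quick check of both runs:

```python
BS_KB = 4  # bs=4k from the job files

read_bw_kb_s = 44039   # randread bandwidth, KB/s
write_bw_kb_s = 44042  # randwrite bandwidth, KB/s

print(read_bw_kb_s // BS_KB)   # 11009, matching the randread iops
print(write_bw_kb_s // BS_KB)  # 11010, matching the randwrite iops
```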
Other notes
I haven't checked whether this is already the case on the other VPS plans, but after poking around the default CentOS 6 install a bit, I found the following differences from a stock environment. There may be other differences, but I don't know CentOS well enough to spot them.
- The EPEL repository is enabled
- The fastestmirror plugin is installed
- The RPMs from the Development Tools group are installed
Summary
This is the UnixBench result for a Sakura VPS 1G in the Osaka region: the score improved from 1563.0 there to 2164.3 here.
I wanted to include an IOPS comparison as well, but the other VPSes I rent only have ext3 environments, so I gave up on a like-for-like test. I will just note that running fio with the same parameters as above (fio reads.ini) on a SAKURA VPS 1G ext3 environment yielded 527 iops, versus the 11009 iops on the SSD ext4 environment shown above, roughly a 20x difference. The plan costs a little more than the regular ones, but judged on IOPS alone, the I/O performance clearly stands out.
I'm grateful to Sakura Internet for making such a good server available at this price, and I still need to figure out what to use it for. You could also say I signed up on impulse.
2012/12/14 0:18 update
Updated the results to fio 2.0.10, since the fio version originally used was old, and added the summary.
2013/01/25 1:44 update
Corrected transcription errors in the iops figures and in the config files.