IO Test for EBS Volumes: RAID5's performance
Amazon's sales people always tell us that EBS is very safe and will never fail. If that were true, RAID0 would be the obvious choice. But the world is full of surprises: before committing to RAID0, which clearly beats the other levels on raw performance, search Google for "EBS failure" and see the truth for yourself. Hence the need to test the other RAID levels.
Since there is no way to put a hardware RAID card in an EC2 instance, software RAID is the way to build RAID there; a rough sketch of such a setup follows.
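As an illustration (not the exact commands used in this test), assembling a 10-volume RAID5 array with mdadm might look like this; the device names /dev/xvdf through /dev/xvdo, the chunk size, and the filesystem choice are all assumptions:

```bash
# Assemble 10 attached EBS volumes into one RAID5 md device.
# Device names and chunk size are illustrative; adjust for your instance.
sudo mdadm --create /dev/md0 \
    --level=5 \
    --raid-devices=10 \
    --chunk=256 \
    /dev/xvd[f-o]

# Wait for the initial resync to finish before benchmarking,
# otherwise the rebuild traffic skews the numbers.
cat /proc/mdstat

# Optional: put a filesystem on top if testing through a filesystem
# rather than against the raw md device.
sudo mkfs.ext4 /dev/md0
```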
In the following test, I use 10 EBS volumes and mdadm to construct the software RAID. Here are the results:
| Block size | | | | | |
|------------|---|---|---|---|---|
| 16K | 6.335 Mb/s | 6.424 Mb/s | 83.614 Mb/s | 6.704 Mb/s | 28.333 Mb/s |
| 512K | 18.500 Mb/s | 17.978 Mb/s | 90.349 Mb/s | 25.765 Mb/s | 81.220 Mb/s |
| 1M | 14.207 Mb/s | 13.710 Mb/s | 91.478 Mb/s | 18.509 Mb/s | 78.220 Mb/s |
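I won't claim which tool produced these figures, but as an assumed reproduction, a simple sequential-write sweep over the same block sizes with dd might look like this (the target path and write count are illustrative):

```bash
# Hypothetical block-size sweep; dd as the benchmark tool is an assumption.
# oflag=direct bypasses the page cache so we measure EBS throughput,
# not memory speed. Scale count so each run writes a comparable total.
for bs in 16K 512K 1M; do
    dd if=/dev/zero of=/mnt/raid/testfile bs=$bs count=2048 oflag=direct
done
```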
mdadm is very hard to tune, because there are hardly any performance tips to be found, apart from this post:
Tuning Ubuntu mdadm RAID5/6
The key point in that post is tuning "stripe_cache_size" for RAID5. I tried several values on EBS, but none of them made a real difference. I guess EBS behaves differently from a local magnetic disk.
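For reference, stripe_cache_size lives in sysfs and only exists for RAID5/6 arrays; setting it looks like the sketch below (the md device name and the value 8192 are illustrative, not the specific values I tried):

```bash
# stripe_cache_size is measured in pages (usually 4 KiB each) per array;
# 8192 pages = 32 MiB of stripe cache. The value here is illustrative.
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size

# Confirm the new setting took effect.
cat /sys/block/md0/md/stripe_cache_size
```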