I have 4x2TB disks and I want to create a well-performing RAID5 array (the server is an HP N40L MicroServer with 8 GB RAM, booting from a 64 GB SSHD). The OS is CentOS 6.3, x86_64.
I created the raid array with this command:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
When I then do:
mdadm --examine /dev/sda1
…I am told my “Chunk Size” is 512K (apparently this is mdadm’s new default value).
Now I want to format the array with XFS. I am told (at http://www.mythtv.org/wiki/Optimizing_Performance#Optimizing_XFS_on_RAID_Arrays) that “sunit” is equal to my chunk size, expressed as a number of 512-byte blocks; so, in my case, 512 KiB = 1024 512-byte blocks. Similarly, “swidth” is the number of effective disks in my array times sunit. In my case, I have 4 disks in RAID 5, so 3 effective disks, and 3×1024=3072. Therefore, I formatted my new array with the command:
mkfs.xfs -b size=4096 -d sunit=1024,swidth=3072 /dev/md0
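For reference, the sunit/swidth arithmetic can be sanity-checked in the shell (the chunk size and disk count below are the ones from my array):

```shell
# RAID5 geometry for mkfs.xfs, expressed in 512-byte sectors.
CHUNK_KB=512                        # mdadm chunk size in KiB
DISKS=4                             # total disks in the array
DATA_DISKS=$((DISKS - 1))           # RAID5 loses one disk's worth to parity
SUNIT=$((CHUNK_KB * 1024 / 512))    # chunk size as 512-byte sectors
SWIDTH=$((SUNIT * DATA_DISKS))      # one full stripe of data
echo "sunit=$SUNIT swidth=$SWIDTH"  # -> sunit=1024 swidth=3072
```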
I now have two questions. The above command gave me this error:
mkfs.xfs -b size=4096 -d sunit=1024,swidth=3072 /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
[...]
…and I want to know whether that means I’ve done something wrong, whether I’ll end up with a sub-optimal file system in some way, or whether I can simply ignore the warning for some reason.
The second question is simply whether I have calculated the XFS parameters correctly, or whether I’m barking up the wrong tree entirely (if it helps, the array will mostly store large music and video files). Have I understood “chunk size” and “stripe size” correctly, for example? Is the block size of 4096 in my mkfs command optimal? And so on.
I would appreciate any advice on this.
The XFS log doesn’t support stripe units larger than 256 KiB (that is exactly what the warning in your output is telling you), so the simplest fix is to re-make your RAID array with a 256 KiB chunk. This is the --chunk parameter of mdadm --create.
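A sketch of that fix, assuming the same four partitions as in your original command (note that --create destroys the existing array contents):

```shell
# Re-create the array with a 256 KiB chunk (wipes existing data!)
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 \
      --chunk=256 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# With a 256 KiB chunk: sunit = 256*1024/512 = 512, swidth = 3*512 = 1536
mkfs.xfs -b size=4096 -d sunit=512,swidth=1536 /dev/md0
```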
A 4k block size might be too small for your intended usage; if you were storing lots of small files, 4k would be closer to ideal. XFS supports block sizes right up to 64k, although note that Linux can only mount a filesystem whose block size does not exceed the kernel page size (typically 4 KiB on x86_64). It’s quicker to read and write contiguous blocks, but you lose some space to the overhead of larger block sizes.
You can only allocate in whole blocks, so select your block size based on the size of the files you expect to be dealing with. With a 4 KiB block size, a 1 KiB file takes up 4 KiB of space (1 block), and a 65 KiB file takes up 68 KiB (17 blocks). With a 64 KiB block size, a 1 KiB file takes up 64 KiB (one block) and a 65 KiB file takes up 128 KiB (2 blocks).
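That rounding can be checked with a one-line helper (illustrative only; `alloc` is a made-up name, not an XFS tool):

```shell
# A file occupies whole blocks, so on-disk usage is rounded up:
#   allocated = ceil(file_size / block_size) * block_size
alloc() { echo $(( ($1 + $2 - 1) / $2 * $2 )); }

alloc $((65 * 1024)) 4096    # 65 KiB file, 4 KiB blocks  -> 69632  (68 KiB)
alloc $((65 * 1024)) 65536   # 65 KiB file, 64 KiB blocks -> 131072 (128 KiB)
alloc 1024 4096              # 1 KiB file, 4 KiB blocks   -> 4096   (4 KiB)
```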
If you’re dealing in small files then you’ll waste a lot of space with a large block size. If you’re dealing in hundreds-of-gigabyte video files then you probably don’t care about 64kb here or there, and the performance advantage of the larger block size makes more of a difference.
One other thing to understand is Allocation Groups (AGs). Allocations in different AGs can proceed in parallel, and the XFS allocator tries to put each directory in a different AG. A common rule of thumb is one AG per physical device.
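If you want to experiment with the AG count, mkfs.xfs lets you set it explicitly with the agcount option; this is purely illustrative, and the default mkfs.xfs picks is usually reasonable:

```shell
# Hypothetical example: force 4 allocation groups instead of the default
mkfs.xfs -d agcount=4,sunit=1024,swidth=3072 /dev/md0
```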
Have a read of the XFS documentation to understand how the filesystem is built.
Make some educated guesses and decide which factors matter most to you. Get some files that represent your production data (or a copy of the actual production data) and run benchmarks on what is important to you. Pick a concrete metric: how quickly does your video or audio software read and write files with different block sizes? How does having multiple audio/video engineers accessing files concurrently affect throughput with different AG counts?
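As a crude starting point for such benchmarks (assuming the array is mounted at /mnt/array, a hypothetical path), you can measure sequential throughput with dd while bypassing the page cache; real-workload tests matter more than this:

```shell
# Write then read back a 4 GiB test file with O_DIRECT to skip caching
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 oflag=direct
dd if=/mnt/array/testfile of=/dev/null bs=1M iflag=direct
rm /mnt/array/testfile
```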
XFS is designed for massive hundreds-of-terabytes filesystems, living on SANs worth more than a house, storing massive uncompressed media files like professional movie studios would need. If you’re using this to store your pirated music and TV shows on a cheap Linux box then just use ext4, it’ll be much easier to troubleshoot and fix if you ever run into problems.