RAID 6 that can read at least 1000 Mbit/s?


Problem :

I purchased a Dell PERC 6/i which I expected to be able to read at 1000 Mbps. There is not much to be done about it now, but there are some things I would like to know for another time.

I have configured it with four 2 TByte drives and RAID 6.
It has 256 MByte of RAM and a transfer rate of 300 Mbps.
The benchmark test showed:

Min read rate: 136.3 Mbps
Max read rate: 329.6 Mbps
Avg read rate: 242.2 Mbps

What could I have done to get at least 1000 Mbps?

Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.)

How would a RAID 10 have performed compared to RAID 6 or RAID 5?

Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller?

UPDATE:
The drives are SATA III (6 Gbps).
http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2TB)

UPDATE 2:
I was asked what transfer rate I had expected. I will put my answer here, although it is irrelevant; I did not ask whether the measured speed could be correct, or about what I had expected.

I calculated the numbers assuming a loss of 1/3.
Example: 4 x 3 Gbps (SATA II, 12 Gbps total) would provide 8 Gbps.
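
In code form, the calculation above (the 1/3 loss is just my assumption, not a measured figure):

    # 4 drives x 3 Gbps SATA II links, minus the assumed 1/3 loss
    echo "$(( 4 * 3000 * 2 / 3 )) Mbps"   # prints "8000 Mbps", i.e. 8 Gbps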

So what should I have done to get the 1000 Mbps or …. bump?

UPDATE 3:
Screenshot of the benchmark test:

screenshot of benchmark test
https://docs.google.com/file/d/0B6swvDCUiDn9WmtZcEFJUTdZRE0/edit

UPDATE 4:
Screenshot of hdparm:
screenshot of hdparm

But it seems that the fault is in the switch.
Screenshot of the network test (iperf):

screenshot of network test
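
A minimal sketch of this kind of network test (iperf 2 syntax; the address is a placeholder for the machine running the server side):

    # on the machine with the RAID array
    iperf -s
    # on a client on the other side of the switch
    iperf -c 192.168.1.10 -t 30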

Solution :

According to the technical specifications page, the controller is capable of 300 MB/s (2,400 Mbps).
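Since megabits and megabytes are easy to mix up in this thread, here is a quick sanity check of the figures involved (assuming 8 bits per byte and ignoring link-encoding overhead):

    # the controller's 300 MB/s expressed in megabits
    echo "$(( 300 * 8 )) Mbit/s"    # 2400 Mbit/s, i.e. 2.4 Gbps
    # the stated 1000 Mbit/s target expressed in megabytes
    echo "$(( 1000 / 8 )) MB/s"     # 125 MB/s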

It’s not clear whether this is the maximum when running in RAID 6. Note that RAID levels with parity require parity calculations and this can slow the overall transfer rates. So the maximum might be applicable for a RAID level that’s less demanding of the controller.

The parity calculations can come into play during reads, especially if one of the disks has failed and the controller has to “fill in the blanks”.
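
A toy illustration of that "filling in the blanks": with single parity (RAID 5), the parity block is simply the XOR of the data blocks, so a missing block can be recomputed from the survivors. RAID 6 adds a second, differently computed parity (commonly Reed-Solomon based), but the idea is the same. The byte values below are made up for the example:

    # parity = XOR of the data blocks
    d1=0x3A; d2=0xC5
    parity=$(( d1 ^ d2 ))
    # if the disk holding d1 dies, d1 is rebuilt from what is left
    echo "$(( parity ^ d2 ))"   # prints 58, i.e. 0x3A again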

That said, the RAID controller might not be the bottleneck. Take 2 disks with a sequential read speed of 20 MB/s each in RAID 0: the fact that the controller can push 2.4 Gbps doesn't mean much if the disks can't feed it data at that rate, and such an array would still top out at roughly 40 MB/s.

Per-disk sequential read performance is not limited by the interface (SATA III in your case). That can be the case for SSDs, but not for HDDs.

Measure your disks independently of the RAID controller. They are likely to deliver about 100 MB/s each, which is consistent with the benchmark result you provided.
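
A minimal way to do that under Linux, assuming the drives show up as /dev/sdb through /dev/sde (the device names are placeholders; run as root, and only read from the raw devices, never write):

    # rough per-drive read benchmark
    for dev in /dev/sd{b,c,d,e}; do
        hdparm -t "$dev"
    done

    # or a plain sequential read that bypasses the page cache
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct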

There are many determining factors in a hard drive or controller’s reported speeds when under use:

  1. The speed of the hard drive
  2. The speed of the controller/processor

For the speed of the hard drive, you also need to know that hard drives will not run at their fastest possible speed all the time, especially desktop consumer drives, which are rated at up to 7,200 RPM. There are server drives that spin at 10K or 15K RPM; they are much more expensive, and much faster at seeking data.

There is also the cache to take into account. The cache holds data read from the spinning platters, which is then fed as a burst to the controller. I don't know of any hard drive with a cache large enough to keep delivering data at the full 6 Gbps… The most I've seen so far is 64 MB, but I'm sure there are hard drives with larger caches available somewhere.

The next part is the controller or the CPU. Once the data leaves the hard drive for the rest of the system, the processor, whether it's a dedicated controller or the CPU, needs to process the data and decide what happens with it. There is a bottleneck here as well, as the processor has to do calculations on the data, and possibly manipulate it, before it does anything else (e.g. send it to the network card). Likewise, if antivirus programs are running and scanning the files being accessed, this will also slow things down (I realize this isn't part of your question, but it's not something to rule out).

In theory, multiple drives increase speed and redundancy; this is true in most situations, but it is not always the best trade-off. For the fastest speed, you give up all redundancy and go with RAID 0 (striping). The issue is that if one drive fails, you typically lose the data across all of the drives (at least, the controller says you do). With RAID 1, you lose half the space, but you have redundancy if one drive fails and can rebuild easily.

RAID 10 offers the best of both worlds: the controller can read from all 4 drives at the same time and does not spend time calculating parity, while still allowing you to lose 1 drive (or 2, if they happen to be in different mirror pairs). RAID 5 and RAID 6 both require parity calculations for every write (and for reads when the array is degraded), which slows down the processing of the data; in return, their redundancy allows 1 (RAID 5) or 2 (RAID 6) drives to fail while keeping the data intact.
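Related to the software-RAID part of the question: if you wanted to compare RAID 10 against RAID 6 on the onboard SATA ports with Linux software RAID, a minimal sketch with mdadm might look like the following. The device names are placeholders, and creating an array destroys whatever is on those disks, so only do this with empty test drives:

    # build a 4-disk RAID 10 array; adjust the device names to your system
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    # ...benchmark it, then tear it down and rebuild the same disks as RAID 6
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sd[b-e]
    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]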

The data sheet you provided also shows the maximum sustained read speed topping out at 210 MB/s, which is roughly 1.7 Gbps and nowhere near the 6 Gbps that SATA III is rated at. So it's easy to see that you will not get the full 6 Gbps from that drive, or from most consumer-grade drives.

You need a better RAID controller. Your PERC 6/i tops out at roughly 300 MB/s (2.4 Gbps), and with RAID 6 parity overhead on top of that it is a relatively low-performance card.

There are better cards out there; a quick Google search turned one up at random with a maximum transfer rate of 1,850 MB/s, the equivalent of roughly 14.8 Gbps. Even after factoring in RAID calculations, latencies, etc., a card like that should easily sustain 1 Gbps transfer rates.

Your drives and the PCIe x8 link are more than capable of sustaining 1 Gbps.
