RAID: A Redundant Array of Independent Disks is a system that uses multiple hard drives to share or replicate data among the drives.
Listed below are the standard RAID array configurations. The list does not include proprietary, nonstandard RAID systems such as Double Parity, RAID 1.5, or RAID 7.
Concatenation (JBOD, RAID 0): Combines multiple disks without redundancy. JBOD ("Just a Bunch Of Disks") concatenates drives and is used to turn several odd-sized drives into one larger, more useful drive: for example, a 3 GB, 15 GB, 5.5 GB, and 12 GB drive can be combined into a single 35.5 GB logical drive. RAID 0, by contrast, stripes data across the drives. These configurations provide the best performance and storage efficiency in I/O-intensive environments because there is no parity-related overhead, but without redundancy every drive becomes a single point of failure.
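The concatenation example above can be sketched in a few lines: the capacity is the sum of the members, and a logical offset maps to whichever drive's range it falls in. The drive sizes are taken from the example; the function names are illustrative, not part of any real storage API.

```python
# A minimal sketch of JBOD concatenation: logical space is laid out
# end-to-end across odd-sized drives. Sizes (in GB) come from the
# example in the text.
drive_sizes = [3, 15, 5.5, 12]

def total_capacity(sizes):
    """JBOD capacity is simply the sum of all member drives."""
    return sum(sizes)

def locate(logical_gb, sizes):
    """Map a logical offset (in GB) to (drive_index, offset_within_drive)."""
    for i, size in enumerate(sizes):
        if logical_gb < size:
            return i, logical_gb
        logical_gb -= size
    raise ValueError("offset beyond end of concatenated volume")

print(total_capacity(drive_sizes))  # 35.5
print(locate(20.0, drive_sizes))    # (2, 2.0): the third drive
```

Note that losing any one member drive destroys the portion of the logical volume stored on it, which is why every drive is a point of failure.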
RAID 1: Creates an exact copy (or mirror) of all data on two or more disks. This is useful for setups where redundancy is more important than using all the disks' maximum storage capacity. Since each member can be addressed independently if the other fails, reliability increases linearly with the number of members.
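The mirroring behaviour can be illustrated with a toy model, assuming a simple in-memory block store (the `Mirror` class and its methods are invented for this sketch, not a real driver interface):

```python
# Toy RAID 1 mirror: every write goes to all surviving members; a read
# can be served by any surviving member.
class Mirror:
    def __init__(self, members=2, blocks=8):
        self.disks = [[None] * blocks for _ in range(members)]
        self.failed = set()

    def write(self, block, data):
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                disk[block] = data

    def read(self, block):
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                return disk[block]
        raise IOError("all mirror members have failed")

m = Mirror()
m.write(0, b"payload")
m.failed.add(0)       # simulate losing one member
print(m.read(0))      # data is still served by the surviving disk
```

Because every member holds a full copy, usable capacity is that of a single disk regardless of how many members the mirror has.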
RAID 2: Stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to run in perfect tandem. This level of RAID is not currently used.
RAID 3: Uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will by definition be spread across all members of the set and will reside in the same location, so any I/O operation requires activity on every disk.
RAID 4: Uses block-level striping with a dedicated parity disk. RAID 4 looks similar to RAID 3 except that it stripes at the block, rather than the byte level. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously.
RAID 5: Uses block-level striping with parity data distributed across all member disks. RAID 5 is one of the most popular RAID levels, and is frequently used in both hardware and software implementations. Virtually all storage arrays offer RAID 5.
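The parity arithmetic behind RAID 4 and RAID 5 is a bitwise XOR across the data blocks of a stripe, which is what lets any single lost block be rebuilt from the survivors. A minimal sketch, with arbitrary stripe contents:

```python
# RAID 5 parity math: parity = XOR of all data blocks in the stripe,
# so any one missing block equals the XOR of the remaining blocks
# plus the parity block.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(stripe)

# Simulate losing the second data block and rebuilding it from the rest.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```

In a real RAID 5 array the parity block for each stripe rotates across the member disks, which avoids the dedicated-parity-disk bottleneck of RAID 4.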
RAID 6: Uses block-level striping with two independent parity blocks per stripe, distributed across all member disks, so the array can survive the failure of any two members. It is not one of the original RAID levels.
RAID 0+1: Used for both replicating and sharing data among disks. The difference between RAID 0+1 and RAID 10 is the order in which the two levels are layered: RAID 0+1 is a mirror of stripes (two RAID 0 arrays mirrored by RAID 1). Consider an example: six 120 GB drives set up as RAID 0+1 yield a maximum of 360 GB of usable space, spread across two striped arrays. The advantage is that when a hard drive fails in one of the RAID 0 stripes, the missing data can be rebuilt from the other array.
RAID 10: Sometimes called RAID 1+0, is similar to RAID 0+1 except that the RAID levels are layered in the reverse order: RAID 10 is a stripe of mirrors. One drive from each RAID 1 set can fail without losing data. However, if the failed drive is not replaced, the single working drive in that set becomes a single point of failure for the entire array. Unlike RAID 0+1, the "sub-arrays" do not all have to be upgraded at once. RAID 10 is often the primary choice for high-load databases because of its faster write speeds, since there is no parity to calculate.
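The "stripe of mirrors" layout can be sketched as a placement function: logical blocks are striped round-robin across mirror pairs, and each block lands on both disks of its pair. The disk numbering here is an illustrative convention, not mandated by any standard:

```python
# RAID 10 block placement sketch: stripe across mirror pairs, then
# write the block to both members of the chosen RAID 1 pair.
def raid10_placement(block, pairs=3):
    pair = block % pairs              # RAID 0 striping across mirror sets
    return (2 * pair, 2 * pair + 1)   # both members of the RAID 1 pair

for b in range(6):
    print(b, raid10_placement(b))
# block 0 -> disks (0, 1), block 1 -> disks (2, 3), block 2 -> disks (4, 5), ...
```

Losing one disk from a pair leaves every block readable; losing both disks of the same pair loses a third of every stripe, which is why the surviving member of a degraded pair is a single point of failure.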
RAID 50: Combines the block-level striping with distributed parity of RAID 5 with the straight block-level striping of RAID 0: it is a RAID 0 array striped across RAID 5 elements. A dataset with 5 blocks would have 3 blocks written to the 1st RAID 5 set and the other 2 blocks written to the 2nd RAID 5 set. The configuration of the RAID sets determines the overall fault tolerance. RAID 50 improves upon the performance of RAID 5, particularly during writes, and provides better fault tolerance than a single RAID level does. This level is recommended for applications that require high fault tolerance, capacity, and random-positioning performance.
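The 5-block example above follows from the RAID 0 layer distributing blocks round-robin over the RAID 5 sets; a small sketch (function name is illustrative):

```python
# RAID 50 block distribution sketch: the outer RAID 0 layer stripes
# blocks round-robin across the underlying RAID 5 sets.
def raid50_distribution(n_blocks, n_sets=2):
    counts = [0] * n_sets
    for block in range(n_blocks):
        counts[block % n_sets] += 1   # round-robin across RAID 5 sets
    return counts

print(raid50_distribution(5))  # [3, 2]: 3 blocks to set 1, 2 to set 2
```

Within each RAID 5 set, the blocks it receives are then striped with rotating parity as described under RAID 5, so the array survives one disk failure per RAID 5 set.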