Setting Up a Fake Raid Array on PCLinuxOS

by Agapov Sergey (darkgfan)

Editor's Note: Many users may not be familiar with exactly what a RAID array is. According to the Webopedia Computer Dictionary it is:

RAID (rād)
Redundant Array of Independent (or Inexpensive) Disks, a category of disk drives that employ two or more drives in combination for fault tolerance and performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers. RAID allows you to store the same data redundantly (in multiple places) in a balanced way to improve overall performance.

There are a number of different RAID levels:

  • Level 0 — Striped Disk Array without Fault Tolerance: Provides data striping (spreading out blocks of each file across multiple disk drives) but no redundancy. This improves performance but does not deliver fault tolerance. If one drive fails then all data in the array is lost.
  • Level 1 — Mirroring and Duplexing: Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks and the same write transaction rate as single disks.
  • Level 2 — Error-Correcting Coding: Not a typical implementation and rarely used, Level 2 stripes data at the bit level rather than the block level.
  • Level 3 — Bit-Interleaved Parity: Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service simultaneous multiple requests, also is rarely used.
  • Level 4 — Dedicated Parity Drive: A commonly used implementation of RAID, Level 4 provides block-level striping (like Level 0) with a parity disk. If a data disk fails, the parity data is used to create a replacement disk. A disadvantage to Level 4 is that the parity disk can create write bottlenecks.
  • Level 5 — Block Interleaved Distributed Parity: Provides block-level data striping with error correction (parity) information distributed across all disks. This results in excellent performance and good fault tolerance. Level 5 is one of the most popular implementations of RAID.
  • Level 6 — Independent Data Disks with Double Parity: Provides block-level striping with parity data distributed across all disks.
  • Level 0+1 — A Mirror of Stripes: Not one of the original RAID levels, two RAID 0 stripes are created, and a RAID 1 mirror is created over them. Used for both replicating and sharing data among disks.
  • Level 10 — A Stripe of Mirrors: Not one of the original RAID levels, multiple RAID 1 mirrors are created, and a RAID 0 stripe is created over these.
  • Level 7: A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.
  • RAID S: (also called Parity RAID) EMC Corporation's proprietary striped parity RAID system used in its Symmetrix storage systems.

This article is devoted to all owners of so-called fake-raid disk arrays. But let's rein in our mighty troika and discuss the details first.

As you may know, there are two ways to classify RAID arrays: by logical organization, which gives us the infamous JBOD (linear), 0, 1, 0+1, 5, 10 and so on; and by the kind of device that implements them: software, hardware, and fake-raid arrays.

Hardware RAID cards are still expensive (but they are the best choice, indeed). Software RAID is a very good choice, too: all you need are hard disk drives and free ports. Besides, it has a simple rescue mechanism.
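
If you do have spare drives and free ports, a software array really is only a couple of commands away. Here is a minimal sketch using mdadm for a two-disk level 0 array (the device names /dev/sdb and /dev/sdc are only examples; substitute your own drives):

    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # cat /proc/mdstat

The first command builds the striped array as /dev/md0, and the second shows its current status.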

But what about fake-raid? It's built into almost every motherboard, and found on add-in cards priced below $30. Any advantages? In performance it strides side-by-side with software-based RAID. If at all possible, I strongly recommend using software RAID (to avoid problems in the future). But if there are no free ports and there is no true hardware RAID card nearby, you may have no other choice, and our only way is to "fake" it a little.

We'll focus on building a level zero ("0") array, based on the Silicon Image 3112 chipset, so let's roll.

  1. I suppose you have already installed your RAID controller and connected the hard drives to it. And, of course, you will need to select the necessary mode in the controller BIOS during the boot up of your system. For me, it was "ZERO mode." If not, now is a good time to do it.

  2. Don't worry about device drivers; your system will load them automatically once you install the dmraid and dmsetup utilities.

    In a terminal session, type:

    # apt-get install dmraid
    

    dmsetup is a required dependency, and it will be installed along with dmraid. Just select "Yes" if you are prompted.
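
    If you want to be sure both tools landed, you can ask each one for its version (a quick sanity check; the exact output will differ between releases):

    # dmraid -V
    # dmsetup version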

  3. After those tasks, two files should appear in the /dev/mapper folder. They have such frightful names as:

    sil_aiabcddccecj
    sil_aiabcddccecj1
    

    You will also find two symbolic links to them (one for each) in the /dev folder, named dm-0 and dm-1.
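
    You can check both with a quick listing (the letter soup after "sil_" is generated from the array metadata, so yours will differ):

    # ls -l /dev/mapper
    # ls -l /dev/dm-*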

    If that didn't happen, we must set up our array manually:

    # dmraid -l
    

    This lists all supported controllers; remember, we only need "sil".

    asr : Adaptec HostRAID ASR (0,1,10)
    ddf1 : SNIA DDF1 (0,1,4,5,linear)
    hpt37x : Highpoint HPT37X (S,0,1,10,01)
    hpt45x : Highpoint HPT45X (S,0,1,10)
    isw : Intel Software RAID (0,1,01)
    jmicron : JMicron ATARAID (S,0,1)
    lsi : LSI Logic MegaRAID (0,1,10)
    nvidia : NVidia RAID (S,0,1,10,5)
    pdc : Promise FastTrack (S,0,1,10)
    sil : Silicon Image(tm) Medley(tm) (0,1,10)
    via : VIA Software RAID (S,0,1,10)
    dos : DOS partitions on SW RAIDs
    

    Entering the following command in a terminal session:

    # dmraid -r
    

    shows all disks in our array, just to ensure that all is going right:

    /dev/sdc: sil, "sil_ajaddfbgejde", stripe, ok, 976771072 sectors, data@ 0
    /dev/sdb: sil, "sil_ajaddfbgejde", stripe, ok, 976771072 sectors, data@ 0
    

    We can activate our array with the following command, typed into a terminal session:

    # dmraid -ay -f sil

    where "sil" is our controller name, which we fetched above. After activation, you can check the status:

    # dmraid -s

    which should report our array as an active set.

    Take notice that the "-f sil" part is sometimes unnecessary. It's just a precaution to prevent accidents (they happen sometimes). Often, plain "dmraid -ay" is enough and all works well. Either way, both forms will be useful.

    If something goes wrong, we can easily reset our configuration and start from the beginning:

    # dmsetup remove_all -f
    

    will forcibly remove all of our device-mapper block devices.
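
    To confirm the reset worked, list what device-mapper still knows about; after a successful remove_all, our sil devices should be gone:

    # dmsetup ls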

  4. Now, let us take it a step further, and make a filesystem on our array:

    # mkfs.ext3 -m 0 -L raid_multimedia /dev/dm-1
    

    "-L raid_multimedia" — is only a label. Use it if you need. "-m 0" sets our volume space reserve. It defaults to 5% of the drive volume. It's wise to change it (just imagine what it will it be, if you have 1TB raid — approximately 50GB won't be available).

    Remember that dm-0 is our phantom RAID block device, and dm-1 is our main partition on it.
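
    If you are not sure which is which, the mapping tables will tell you: the whole array appears as a "striped" target, while the partition on it appears as a "linear" target:

    # dmsetup table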

  5. After that, the last thing we need to do is mount our fresh partition and enter the information in fstab:

    # mkdir -p /mnt/raid
    # mount -t ext3 /dev/dm-1 /mnt/raid
    

    Your fstab line should look something like this:

    /dev/dm-1 /mnt/raid ext3 auto,user,rw,async 1 3
    

    It's possible to change the last digit to 0 if you don't want the partition to be checked at boot. And it won't be a mistake to change the dump field from 1 to 0, either. Just set it up as you like.
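
    Before rebooting, it's worth testing the new entry. Mount everything listed in fstab and check that the array comes up with the expected size:

    # mount -a
    # df -h /mnt/raid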

    Furthermore, you can adjust the periodic filesystem checking, using this:

    # tune2fs -c 0 -i 1m /dev/dm-1
    

    Here "-c 0" disables the check based on the number of mounts, while "-i 1m" schedules an automatic check once a month, which is the interval I recommend.
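
    You can read the new schedule back from the superblock to confirm it took effect:

    # tune2fs -l /dev/dm-1 | grep -i -e count -e interval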