NetAdminTools.com
 




Configuring Software RAID 5 on GNU/Linux
Topic: GNU/Linux   Date: 2005-01-26


Subject

We are running Gentoo on sparc64:

srv-1 root # uname -a
Linux srv-1 2.4.27-sparc #1 SMP Fri Dec 31 08:43:34 PST 2004 sparc64 
sun4u TI UltraSparc II  (BlackBird) GNU/Linux

The first step in configuring RAID 5 is to figure out which devices are available:

srv-1 root # dmesg | grep SCSI
SCSI subsystem driver Revision: 1.00
esp0: IRQ 4,7e0 SCSI ID 7 Clk 40MHz CCYC=25000 CCF=8 TOut 167 NCR53C9XF(espfast)
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   CD-ROM                             ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
Type:   Direct-Access                      ANSI SCSI revision: 02
esp0: target 0 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sda: 8385121 512-byte hdwr sectors (4293 MB)
esp0: target 1 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sdb: 8385121 512-byte hdwr sectors (4293 MB)
esp0: target 9 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sdc: 35378533 512-byte hdwr sectors (18114 MB)
esp0: target 10 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sdd: 35378533 512-byte hdwr sectors (18114 MB)
esp0: target 11 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sde: 35378533 512-byte hdwr sectors (18114 MB)
esp0: target 12 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sdf: 35378533 512-byte hdwr sectors (18114 MB)
esp0: target 13 [period 100ns offset 15 20.00MHz FAST-WIDE SCSI-II]
SCSI device sdg: 35378533 512-byte hdwr sectors (18114 MB)

The 18 GB disks are housed in a Sun StorEdge MultiPack cabinet. We need a raidtab file describing the array:

srv-1 root # cat /etc/raidtab
raiddev /dev/md0
raid-level      5
nr-raid-disks   5
persistent-superblock   1
chunk-size      32k
parity-algorithm        left-symmetric
device  /dev/sdc
raid-disk       0
device  /dev/sdd
raid-disk       1
device  /dev/sde
raid-disk       2
device  /dev/sdf
raid-disk       3
device  /dev/sdg
raid-disk       4
srv-1 root # 
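On most later distributions, raidtools has been superseded by mdadm. If raidtools is unavailable, an equivalent array could be created with a single command; this is a sketch using the same device names and parameters as the raidtab above, which are assumptions about your hardware:

```shell
# Create a 5-disk RAID 5 array with a 32 KB chunk size and
# left-symmetric parity layout, mirroring the raidtab above.
# Requires root and destroys data on the listed devices.
mdadm --create /dev/md0 \
      --level=5 \
      --raid-devices=5 \
      --chunk=32 \
      --layout=left-symmetric \
      /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# Record the array so mdadm can assemble it at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```

With mdadm there is no raidtab; the array geometry lives in the persistent superblocks and, optionally, in /etc/mdadm.conf.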

Let's partition our disks, using partition type fd so the kernel can autodetect the RAID members at boot. (Strictly speaking, once the disks carry fd partitions, the raidtab device entries would normally reference the partitions, /dev/sdc1 through /dev/sdg1, rather than the whole disks.)

 
srv-1 root # fdisk /dev/sdc
Command (m for help): p
Disk /dev/sdc (Sun disk label): 19 heads, 248 sectors, 7506 cylinders
Units = cylinders of 4712 * 512 bytes
Device Flag    Start       End    Blocks   Id  System
Command (m for help): n
Partition number (1-8): 1
First cylinder (0-7506): 
Last cylinder or +size or +sizeM or +sizeK (0-7506, default 7506): 
Using default value 7506
Command (m for help): t
Partition number (1-8): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): 
Command (m for help): p
Disk /dev/sdc (Sun disk label): 19 heads, 248 sectors, 7506 cylinders
Units = cylinders of 4712 * 512 bytes
Device Flag    Start       End    Blocks   Id  System
/dev/sdc1             0      7506  17684136   fd  Linux raid autodetect
Command (m for help): 
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
srv-1 root # 
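The same partitioning has to be repeated on sdd through sdg. Rather than walking through fdisk four more times, the partition table can be copied with sfdisk; a sketch, assuming your sfdisk build handles the disk label in use (support for Sun labels varies):

```shell
# Dump sdc's partition table and replay it onto the other
# array members. Destroys any existing partitioning on the targets.
for disk in sdd sde sdf sdg; do
    sfdisk -d /dev/sdc | sfdisk /dev/$disk
done
```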

We will need the raidtools package. Your distribution will vary, but with Gentoo:

srv-1 root # emerge raidtools 
Calculating dependencies ...done!
>>> emerge (1 of 1) sys-fs/raidtools-1.00.3-r2 to /
>>> md5 src_uri ;-) raidtools-1.00.3.tar.gz
.
.
.
>>> sys-fs/raidtools-1.00.3-r2 merged.
>>> Recording sys-fs/raidtools in "world" favorites file...
>>> clean: No packages selected for removal.
>>> Auto-cleaning packages ...
>>> No outdated packages were found on your system.
* GNU info directory index is up-to-date.

We need to create the /dev/md0 device:

srv-1 root # 
srv-1 root # ls -l /dev/md0
ls: /dev/md0: No such file or directory
srv-1 root # mknod /dev/md0 b 9 0
srv-1 root # 
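The arguments to mknod are the md driver's block major number (9) and a minor number that selects the array. If you expect to build more arrays later, the remaining nodes can be created in one loop; a sketch — on devfs or udev systems these nodes are created automatically:

```shell
# md devices use block major 9; the minor number is the array index.
# /dev/md0 already exists, so create the next few.
for minor in 1 2 3; do
    mknod /dev/md$minor b 9 $minor
done
```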

Let's create the RAID 5 array itself (the filesystem comes later):

srv-1 root # mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdc, 17689266kB, raid superblock at 17689152kB
disk 1: /dev/sdd, 17689266kB, raid superblock at 17689152kB
disk 2: /dev/sde, 17689266kB, raid superblock at 17689152kB
disk 3: /dev/sdf, 17689266kB, raid superblock at 17689152kB
disk 4: /dev/sdg, 17689266kB, raid superblock at 17689152kB
srv-1 root #
srv-1 root # cat /proc/mdstat
Personalities : [raid5] 
read_ahead 1024 sectors
md0 : active raid5 
scsi/host0/bus0/target13/lun0/disc[4] 
scsi/host0/bus0/target12/lun0/disc[3] 
scsi/host0/bus0/target11/lun0/disc[2] 
scsi/host0/bus0/target10/lun0/disc[1] 
scsi/host0/bus0/target9/lun0/disc[0]
70756608 blocks level 5, 32k chunk, algorithm 2 [5/5] [UUUUU]
[>....................]  resync =  1.3% (233496/17689152) 
finish=90.7min speed=3205K/sec
unused devices: <none>
srv-1 root #
srv-1 root # mkfs.ext3 /dev/md0
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
8847360 inodes, 17689152 blocks
884457 blocks (5.00%) reserved for the super user
First data block=0
540 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
srv-1 root # 
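One refinement worth considering: mke2fs can be told the RAID chunk size so block and inode bitmaps are spread evenly across the member disks. With a 32 KB chunk and 4 KB filesystem blocks the stride is 32768 / 4096 = 8. A sketch — e2fsprogs of this vintage spells the option -R stride; newer releases use -E stride:

```shell
# stride = chunk size / filesystem block size = 32768 / 4096 = 8
mkfs.ext3 -R stride=8 /dev/md0
```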

Now, where should we put this? /srv works for us:

srv-1 root # ls /srv
ls: /srv: No such file or directory
srv-1 root # mkdir /srv
srv-1 root #

Consult the Filesystem Hierarchy Standard for guidance on mount points for new filesystems. We add this line to /etc/fstab:

/dev/md0                /srv    ext3    defaults        1 2
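Before rebooting, it's worth confirming that the fstab entry mounts cleanly; a hypothetical session, since the output depends on your system:

```shell
# Mount everything in fstab that isn't already mounted,
# then confirm /srv is backed by the array.
mount -a
df -h /srv
grep /srv /proc/mounts
```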

On boot, though:

* Checking all filesystems...fsck.ext3: Invalid argument while trying to open 0
/dev/md0:                                                                       
The superblock could not be read or does not describe a correct ext2            
filesystem.  If the device is valid and it really contains an ext2              
filesystem (and not swap or ufs or something else), then the superblock         
is corrupt, and you might try running e2fsck with an alternate superblock:      
e2fsck -b 8193                                                      
* Fsck could not correct all errors, manual repair needed                      
[ !! ]                                                                        
Give root password for maintenance                                              
(or type Control-D for normal startup):                                         
bash-2.05b# 
bash-2.05b# raidstart /dev/md0
bash-2.05b# e2fsck /dev/md0
e2fsck 1.35 (28-Feb-2004)                                                       
/dev/md0: clean, 11/8847360 files, 285845/17689152 blocks                       
bash-2.05b# 

Hmmm... it appears the RAID array is not being started at boot, so fsck runs against a device that does not exist yet. We solved this by running raidstart immediately after the root filesystem is remounted read/write, editing /etc/init.d/checkroot:

ebegin "Remounting root filesystem read/write"
mount / -n -o remount,rw &>/dev/null
# Start the array as soon as root is writable, before checkfs runs:
/sbin/raidstart /dev/md0
if [ "$?" -ne 0 ]
then

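A likely cause of the boot-time failure: partition-type fd autodetection only works when the md driver and the RAID 5 personality are built into the kernel, not loaded as modules. You can check a running kernel's configuration like this — a sketch, assuming /proc/config.gz support was compiled in (otherwise inspect the kernel source's .config):

```shell
# If these come back as =m (module) rather than =y (built in),
# boot-time autodetection of fd partitions will not assemble the array.
zcat /proc/config.gz | grep -E 'CONFIG_(BLK_DEV_MD|MD_RAID5)='
```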
It appears that checkfs ought to handle this itself; regardless, the edit above works for our purposes. For more information on software RAID configuration, see the Software-RAID-HOWTO.

