HOW TO CONFIGURE LINUX LVM (LOGICAL VOLUME MANAGER) USING SOFTWARE RAID 5
Introduction
In this article we are going to learn how to configure Linux LVM on a Software RAID 5 partition. Software RAID 5 and LVM are two of the most useful storage features in Linux. RAID 5 uses striping with distributed parity to store data across multiple hard disks, while Linux LVM (Logical Volume Manager) lets you create, extend, resize and rename Logical Volumes. By configuring Linux LVM on top of a RAID 5 partition we get the benefits of both: flexible volume management plus protection against a single disk failure. Refer to the diagram below:
For more background on Linux LVM and Software RAID, read the articles below:
- HOW TO CONFIGURE RAID 5 (SOFTWARE RAID) IN LINUX USING MDADM
- HOW TO INCREASE EXISTING SOFTWARE RAID 5 STORAGE CAPACITY IN LINUX
- HOW TO CONFIGURE SOFTWARE RAID 1 (DISK MIRRORING) USING MDADM IN LINUX
Follow the steps below to configure Linux LVM on a Software RAID 5 partition:
Configure Software RAID 5
As a first step we have to configure Software RAID 5, which requires a minimum of three hard disks. Here I have three hard disks, i.e. /dev/sdb, /dev/sdc and /dev/sdd. Refer to the sample output below.
[root@localhost ~]# fdisk -l        # List available Disks and Partitions

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000817a9

Device Boot Start End Blocks Id System
/dev/sda1 * 1 39 307200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 39 2350 18566144 83 Linux
/dev/sda3 2350 2611 2097152 82 Linux swap / Solaris

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
So let’s go ahead and create a partition on each hard disk and change the partition ID to the Software RAID type, i.e. “fd”.
Partitioning the Disk : /dev/sdb
[root@localhost ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x69615a6b.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +3000M
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partitioning the Disk : /dev/sdc
[root@localhost ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x86e0c23d.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +3000M
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partitioning the Disk : /dev/sdd
[root@localhost ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xea36e552.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +3000M
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
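If you prefer a non-interactive approach, the same layout can also be scripted. This is an optional sketch (not part of the original walkthrough) that assumes the parted utility is installed; parted's "raid" flag corresponds to fdisk's partition type fd.

[root@localhost ~]# parted -s /dev/sdb mklabel msdos                  # Create a new MBR partition table
[root@localhost ~]# parted -s /dev/sdb mkpart primary 1MiB 3000MiB    # Create an ~3000 MB primary partition
[root@localhost ~]# parted -s /dev/sdb set 1 raid on                  # Mark partition 1 as a RAID member

Repeat the same commands for /dev/sdc and /dev/sdd if you go this route.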
So we have successfully created three partitions, i.e. /dev/sdb1, /dev/sdc1 and /dev/sdd1, and changed their partition IDs to the Software RAID type. Refer to the sample output below.
[root@localhost ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000817a9

Device Boot Start End Blocks Id System
/dev/sda1 * 1 39 307200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 39 2350 18566144 83 Linux
/dev/sda3 2350 2611 2097152 82 Linux swap / Solaris

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x69615a6b

Device Boot Start End Blocks Id System
/dev/sdb1 1 383 3076416 fd Linux raid autodetect

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x86e0c23d

Device Boot Start End Blocks Id System
/dev/sdc1 1 383 3076416 fd Linux raid autodetect

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea36e552

Device Boot Start End Blocks Id System
/dev/sdd1 1 383 3076416 fd Linux raid autodetect
Now our next step is to create and start the Software RAID 5 array. To do so, refer to the command below.
[root@localhost ~]# mdadm -C /dev/md0 --level=raid5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
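Once the array is created, mdadm builds the parity in the background. As an optional check (not shown in the original output), you can watch the array state and the initial sync progress through /proc/mdstat:

[root@localhost ~]# cat /proc/mdstat               # Shows md0, its member partitions and any resync progress
[root@localhost ~]# watch -n 2 cat /proc/mdstat    # Optional: refresh the view every 2 seconds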
After creating and starting the Software RAID 5 array you will see a new device, /dev/md0, in your partition list. Refer to the output below.
[root@localhost ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000817a9

Device Boot Start End Blocks Id System
/dev/sda1 * 1 39 307200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 39 2350 18566144 83 Linux
/dev/sda3 2350 2611 2097152 82 Linux swap / Solaris

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x69615a6b

Device Boot Start End Blocks Id System
/dev/sdb1 1 383 3076416 fd Linux raid autodetect

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x86e0c23d

Device Boot Start End Blocks Id System
/dev/sdc1 1 383 3076416 fd Linux raid autodetect

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea36e552

Device Boot Start End Blocks Id System
/dev/sdd1 1 383 3076416 fd Linux raid autodetect

Disk /dev/md0: 6295 MB, 6295650304 bytes   ----> Software RAID 5 Partition
2 heads, 4 sectors/track, 1537024 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
To check the details of the Software RAID 5 partition you can use the mdadm command with the --detail option. Refer to the command below.
[root@localhost ~]# mdadm --detail /dev/md0 # Check the Software RAID 5 Partition Details
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 12 18:00:52 2017
Raid Level : raid5
Array Size : 6148096 (5.86 GiB 6.30 GB)
Used Dev Size : 3074048 (2.93 GiB 3.15 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Jun 12 18:03:05 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : e219bc02:6d632e29:1730eb49:fb94359c
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
After configuring Software RAID 5 we have to save the array configuration in the /etc/mdadm.conf file, otherwise the array may not be assembled automatically with the same name after a reboot. To do so you can use the command below.
[root@localhost ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf # Save the RAID 5 Configurations
Confirm the saved Software RAID 5 configuration in the /etc/mdadm.conf file.
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=e219bc02:6d632e29:1730eb49:fb94359c
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
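With the ARRAY definition saved, the array can be reassembled from this file whenever it is stopped. A quick optional sketch of that check, safe only while nothing is using /dev/md0:

[root@localhost ~]# mdadm --stop /dev/md0        # Stop the RAID 5 array
[root@localhost ~]# mdadm --assemble --scan      # Reassemble all arrays listed in /etc/mdadm.conf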
Configure Linux LVM on Software RAID 5 Partition
Now we are all set to configure Linux LVM (Logical Volume Manager) on the Software RAID 5 partition, i.e. /dev/md0. Let’s go ahead and create a Physical Volume on it.
[root@localhost ~]# pvcreate /dev/md0 # Create Physical Volume
Physical volume "/dev/md0" successfully created
You can check the details of the Physical Volume using the pvdisplay command. Refer to the sample output below.
[root@localhost ~]# pvdisplay # Check Details of Physical Volume
"/dev/md0" is a new physical volume of "5.86 GiB"
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size 5.86 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID S0rEU3-vS7w-dbAJ-jEHC-0bMx-5E2a-Np3ChR
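If you prefer a one-line summary over the full pvdisplay output, the short-form LVM commands report the same information in compact form (optional):

[root@localhost ~]# pvs                                                # Compact Physical Volume summary
[root@localhost ~]# pvs -o pv_name,vg_name,pv_size,pv_free /dev/md0    # Select specific columns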
After creating the Physical Volume, the second step of the Linux LVM configuration is to create a Volume Group from it. To do so we use the vgcreate command.
Here I am creating a Volume Group named vgroup001 using the vgcreate command. Refer to the sample output below.
[root@localhost ~]# vgcreate vgroup001 /dev/md0 # Create Volume Group
Volume group "vgroup001" successfully created
[root@localhost ~]# vgdisplay vgroup001
--- Volume group ---
VG Name vgroup001
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.86 GiB
PE Size 4.00 MiB
Total PE 1500
Alloc PE / Size 0 / 0
Free PE / Size 1500 / 5.86 GiB
VG UUID S7t4dy-OKu2-6WeB-XcGF-YxBB-YCJT-KXZpG3
Now we have the Volume Group vgroup001, so let’s go ahead and create Logical Volumes. Here I am going to create two Logical Volumes named lvolume001 and lvolume002.
Creating the first Logical Volume, i.e. lvolume001 (size: 2 GB):
[root@localhost ~]# lvcreate -L 2G -n lvolume001 vgroup001
Logical volume "lvolume001" created
Creating the second Logical Volume, i.e. lvolume002 (size: 1 GB):
[root@localhost ~]# lvcreate -L 1G -n lvolume002 vgroup001
Logical volume "lvolume002" created
To check the details of the Logical Volumes you can use the lvdisplay command. Refer to the sample output below.
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vgroup001/lvolume001
LV Name lvolume001
VG Name vgroup001
LV UUID ZJQdZW-KlcU-yZDl-Z36I-9e5B-1R28-CFpeXO
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2017-06-12 18:06:56 -0700
LV Status available
# open 0
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:0

--- Logical volume ---
LV Path /dev/vgroup001/lvolume002
LV Name lvolume002
VG Name vgroup001
LV UUID jtqBEJ-Ovtq-TWZy-UVUJ-jXNY-2Ys8-vPNQCd
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2017-06-12 18:07:13 -0700
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:1
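The same details are available in compact form through the lvs command (optional):

[root@localhost ~]# lvs vgroup001    # One line per Logical Volume: name, VG, attributes and size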
After creating the Logical Volumes we have to format both of them with a file system. Here I am formatting the first Logical Volume with the ext4 file system.
[root@localhost ~]# mkfs.ext4 /dev/vgroup001/lvolume001
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
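Note that mke2fs detected Stride=128 and Stripe width=256 automatically. These values follow from the RAID geometry: the chunk size is 512 KiB and the ext4 block size is 4 KiB, so stride = 512 / 4 = 128 blocks, and with two data disks in a 3-disk RAID 5 the stripe width is 2 x 128 = 256 blocks. If automatic detection ever fails, the same values can be passed explicitly; this is an optional sketch using the geometry above:

[root@localhost ~]# mkfs.ext4 -E stride=128,stripe-width=256 /dev/vgroup001/lvolume001    # Same geometry, set by hand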
Now create a mount point directory and give it appropriate permissions. Here I am creating a directory named /lvm and giving full access to everyone.
[root@localhost ~]# mkdir /lvm
[root@localhost ~]# chmod -R 777 /lvm/
You can mount the Logical Volume temporarily using the command below.
[root@localhost ~]# mount /dev/vgroup001/lvolume001 /lvm
Confirm the Mount Points.
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.5G 15G 15% /
tmpfs 935M 224K 935M 1% /dev/shm
/dev/sda1 291M 39M 238M 14% /boot
/dev/mapper/vgroup001-lvolume001 2.0G 67M 1.9G 4% /lvm
For permanent mounting you have to make an entry in the /etc/fstab file. Here I made an entry as per my LVM setup. Refer to the sample below.
[root@localhost ~]# nano /etc/fstab
/dev/vgroup001/lvolume001 /lvm ext4 defaults 0 0
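The device-mapper path works equally well in /etc/fstab, so as an optional alternative the same entry could be written as:

/dev/mapper/vgroup001-lvolume001 /lvm ext4 defaults 0 0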
Now mount all entries from /etc/fstab using the command below.
[root@localhost ~]# mount -a
You can also unmount and then mount the Logical Volume again using the commands below.
[root@localhost ~]# umount /lvm/        # Unmount the Logical Volume
[root@localhost ~]# mount /lvm          # Mount the Logical Volume
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.5G 15G 15% /
tmpfs 935M 224K 935M 1% /dev/shm
/dev/sda1 291M 39M 238M 14% /boot
/dev/mapper/vgroup001-lvolume001 2.0G 67M 1.9G 4% /lvm
As you can see below, lvolume001 is now mounted (open count 1) and ready to store data.
[root@localhost ~]# lvdisplay /dev/vgroup001/lvolume001
--- Logical volume ---
LV Path /dev/vgroup001/lvolume001
LV Name lvolume001
VG Name vgroup001
LV UUID ZJQdZW-KlcU-yZDl-Z36I-9e5B-1R28-CFpeXO
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2017-06-12 18:06:56 -0700
LV Status available
# open 1
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:0
Now let’s go ahead and format our second Logical Volume, i.e. lvolume002. Refer to the command below.
[root@localhost ~]# mkfs.ext4 /dev/vgroup001/lvolume002
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Create a directory named /lvm2 and then temporarily mount the Logical Volume using the commands below.
[root@localhost ~]# mkdir /lvm2
[root@localhost ~]# mount /dev/vgroup001/lvolume002 /lvm2/
Confirm the Mount Points.
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.6G 15G 16% /
tmpfs 935M 224K 935M 1% /dev/shm
/dev/sda1 291M 39M 238M 14% /boot
/dev/mapper/vgroup001-lvolume001 2.0G 67M 1.9G 4% /lvm
/dev/mapper/vgroup001-lvolume002 1008M 34M 924M 4% /lvm2
For permanent mounting add the line below to the /etc/fstab file.
[root@localhost ~]# nano /etc/fstab
/dev/vgroup001/lvolume002 /lvm2 ext4 defaults 0 0
After creating the two Logical Volumes, let’s check what has changed in the Physical Volume and the Volume Group.
As you can see below, the Total PE of the Physical Volume is 1500 and the available Free PE is 732. PE stands for Physical Extent: with a PE size of 4.00 MiB, 1500 PE equals roughly 5.86 GiB, and the two Logical Volumes consume 512 + 256 = 768 PE (3 GB), leaving 732 PE free. For more information on Linux LVM I have already published an article on LVM (Logical Volume Manager) configuration.
[root@localhost ~]# pvdisplay /dev/md0
--- Physical volume ---
PV Name /dev/md0
VG Name vgroup001
PV Size 5.86 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1500
Free PE 732
Allocated PE 768
PV UUID S0rEU3-vS7w-dbAJ-jEHC-0bMx-5E2a-Np3ChR
Let’s check the Volume Group information after creating the two Logical Volumes.
As you can see below: Cur LV = 2 (current Logical Volumes), Open LV = 2 (open Logical Volumes), Total PE = 1500, and Free PE = 732.
[root@localhost ~]# vgdisplay vgroup001
--- Volume group ---
VG Name vgroup001
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.86 GiB
PE Size 4.00 MiB
Total PE 1500
Alloc PE / Size 768 / 3.00 GiB
Free PE / Size 732 / 2.86 GiB
VG UUID S7t4dy-OKu2-6WeB-XcGF-YxBB-YCJT-KXZpG3
After completing the LVM configuration we have to make sure the Volume Group and its Logical Volumes are active (the LVM metadata itself is already stored on the Physical Volume). We can do so using the vgchange command. Refer to the command below.
[root@localhost Desktop]# vgchange -a y vgroup001        # Activate the Volume Group and its Logical Volumes
2 logical volume(s) in volume group "vgroup001" now active
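You can confirm that both Logical Volumes are active with lvscan (an optional check):

[root@localhost ~]# lvscan    # Lists every Logical Volume and whether it is ACTIVE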
Now I want to test something interesting. Let’s first store some data on both Logical Volumes. Here I am creating some files in both Logical Volumes, i.e. in /lvm and /lvm2.
Creating Files in First Logical Volume :
[root@localhost ~]# cd /lvm
[root@localhost lvm]# touch file{1,2,3,4,5}.txt
[root@localhost lvm]# ls
file1.txt file2.txt file3.txt file4.txt file5.txt lost+found
Creating Files in Second Logical Volume :
[root@localhost ~]# cd /lvm2/
[root@localhost lvm2]# touch test{1,2,3,4,5}.txt
[root@localhost lvm2]# ls
lost+found test1.txt test2.txt test3.txt test4.txt test5.txt
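To make the failure test more convincing, you can optionally record checksums of the test files now and compare them again after the disk replacement (this step is not part of the original test):

[root@localhost ~]# md5sum /lvm/*.txt /lvm2/*.txt > /root/lvm-checksums.md5    # Record checksums before the failure
[root@localhost ~]# md5sum -c /root/lvm-checksums.md5                          # Re-run after the recovery to verify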
What I actually want to know is what happens if one of the three hard disks in the Software RAID 5 array fails, and what impact that has on my LVM setup and the available data. For that I am going to mark one member as failed. To do so, refer to the command below. Here I am failing the RAID member /dev/sdb1.
[root@localhost Desktop]# mdadm /dev/md0 -f /dev/sdb1        # Mark a hard disk as faulty in Software RAID 5
mdadm: set /dev/sdb1 faulty in /dev/md0

Confirm the faulty hard disk in the array details:
[root@localhost Desktop]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 12 18:00:52 2017
Raid Level : raid5
Array Size : 6148096 (5.86 GiB 6.30 GB)
Used Dev Size : 3074048 (2.93 GiB 3.15 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Jun 13 10:31:11 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : e219bc02:6d632e29:1730eb49:fb94359c
Events : 20
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
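A quick optional way to see the degraded state is /proc/mdstat; it should now report only two working members:

[root@localhost Desktop]# cat /proc/mdstat    # md0 is running degraded with /dev/sdb1 marked as failed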
Now remove the faulty hard disk from the array using the command below.
[root@localhost Desktop]# mdadm /dev/md0 -r /dev/sdb1        # Remove faulty Harddisk from Software RAID 5
mdadm: hot removed /dev/sdb1 from /dev/md0

[root@localhost Desktop]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 12 18:00:52 2017
Raid Level : raid5
Array Size : 6148096 (5.86 GiB 6.30 GB)
Used Dev Size : 3074048 (2.93 GiB 3.15 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jun 13 10:31:57 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : e219bc02:6d632e29:1730eb49:fb94359c
Events : 23
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
Now we have to add a new hard disk as a replacement for the faulty one. Here I have a new hard disk, i.e. /dev/sde (shown in the fdisk -l excerpt below). To add it we follow the same partitioning process we used earlier when configuring Software RAID 5.
Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
So create a partition on the /dev/sde hard disk and change its partition ID to the Software RAID type, i.e. “fd”.
[root@localhost ~]# fdisk /dev/sde
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xca235717.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +3000M
Command (m for help): t # Change the Partition ID
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Now add the new partition to the Software RAID 5 array using the command below.
[root@localhost ~]# mdadm /dev/md0 -a /dev/sde1 # Add a Harddisk in Software RAID 5
mdadm: added /dev/sde1
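After the new partition is added, mdadm rebuilds the missing data onto it in the background. As an optional step you can follow the recovery progress before running the next check:

[root@localhost ~]# watch -n 2 cat /proc/mdstat    # Shows the recovery percentage until the rebuild finishes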
Confirm that the new hard disk has been properly added to the Software RAID 5 array by using the command below.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 12 18:00:52 2017
Raid Level : raid5
Array Size : 6148096 (5.86 GiB 6.30 GB)
Used Dev Size : 3074048 (2.93 GiB 3.15 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Jun 13 10:37:41 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : e219bc02:6d632e29:1730eb49:fb94359c
Events : 52
Number Major Minor RaidDevice State
4 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
Then remount everything from /etc/fstab and check the mount points.
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 2.5G 15G 15% /
tmpfs 935M 224K 935M 1% /dev/shm
/dev/sda1 291M 39M 238M 14% /boot
/dev/mapper/vgroup001-lvolume001 2.0G 67M 1.9G 4% /lvm
/dev/mapper/vgroup001-lvolume002 1008M 34M 924M 4% /lvm2
As you can see in the output above, both Logical Volumes are intact and look good. Now let’s check the data.
As you can see below, the data is also safe and nothing has been lost.
[root@localhost ~]# ls /lvm
file1.txt file2.txt file3.txt file4.txt file5.txt lost+found

[root@localhost ~]# ls /lvm2/
lost+found test1.txt test2.txt test3.txt test4.txt test5.txt
If you found this article useful then like us, share it, and subscribe to our newsletter, or if you have something to say then feel free to leave a comment in the comment box below.