{"id":504,"date":"2018-03-11T10:01:47","date_gmt":"2018-03-11T14:01:47","guid":{"rendered":"https:\/\/blog.lufamily.ca\/kang\/?p=504"},"modified":"2024-02-15T13:59:47","modified_gmt":"2024-02-15T18:59:47","slug":"linux-lvm-super-simple-to-expand","status":"publish","type":"post","link":"https:\/\/blog.lufamily.ca\/kang\/2018\/03\/11\/linux-lvm-super-simple-to-expand\/","title":{"rendered":"Linux LVM Super Simple to Expand"},"content":{"rendered":"<p>During the Boxing Day sales event of 2017, I purchased a couple of Seagate Barracuda ST4000DM004 (4TB) hard drives. The intention was to expand our main home network storage, a network-attached storage (<strong>NAS<\/strong>) system managed by our Linux server using <strong>mdadm<\/strong> and Logical Volume Manager (<strong>LVM<\/strong>).<\/p>\n<p>However, I procrastinated until this weekend, when I finally performed the upgrade. The task went smoothly without any hiccups. I have to give due credit to the following site: <a href=\"https:\/\/raid.wiki.kernel.org\/index.php\/Growing\" target=\"_blank\" rel=\"noopener\">https:\/\/raid.wiki.kernel.org\/index.php\/Growing<\/a>. It provided very detailed information for me to follow and was a great help.<\/p>\n<p>My major concerns were whether I could do this without any data loss, and how much downtime the upgrade would require.<\/p>\n<p>I had a logical volume, named <em>\/dev\/airvideovg2\/airvideo<\/em>, which used 100% of a volume group made up of three RAID-1 multiple devices (<strong>md<\/strong>). Since I had run out of physical drive bays, to perform the upgrade I had to replace two older 2TB drives, Western Digital WDC WD20EZRX-00D8PB0, with the newer 4TB drives, freeing the old drives for other uses.<\/p>\n<p>First I had to find the md containing the 2TB pair in RAID-1 (mirror) configuration. 
I did this with a combination of <strong>lsblk<\/strong> and <strong>mdadm<\/strong> commands. For example:<\/p>\n<pre>$ lsblk\nNAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 MAJ:MIN RM\u00a0 SIZE RO TYPE\u00a0 MOUNTPOINT\nsdb\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 8:0\u00a0 \u00a0 0\u00a0 1.8T\u00a0 0 disk\n\u2514\u2500sdb1 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 8:1\u00a0 \u00a0 0\u00a0 1.8T\u00a0 0 part\n  \u2514\u2500md2\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 9:3\u00a0 \u00a0 0\u00a0 1.8T\u00a0 0 raid1\n    \u2514\u2500airvideovg2-airvideo 252:0\u00a0 \u00a0 0 \u00a0 10T\u00a0 0 lvm \u00a0 \/mnt\/airvideo\n\n$ sudo mdadm --detail \/dev\/md2\n\/dev\/md2:\n        Version : 1.2\n  Creation Time : Sat Nov 12 18:01:36 2016\n     Raid Level : raid1\n     Array Size : 1906885632 (1725.90 GiB 2000.65 GB)\n  Used Dev Size : 1906885632 (1725.90 GiB 2000.65 GB)\n   Raid Devices : 2\n  Total Devices : 2\n    Persistence : Superblock is persistent\n\n  Intent Bitmap : Internal\n\n    Update Time : Sun Mar 11 09:12:05 2018\n          State : clean \n Active Devices : 2\nWorking Devices : 2\n Failed Devices : 0\n  Spare Devices : 0\n\n           Name : avs:2  (local to host avs)\n           UUID : 3d1afb64:878574e6:f9beb686:00eebdd5\n         Events : 55191\n\n    Number   Major   Minor   RaidDevice State\n       2       8       81        0      active sync   \/dev\/sdf1\n       3       8       17        1      active sync   \/dev\/sdb1\n<\/pre>\n<p>I found that I needed to replace <em>\/dev\/md2<\/em>, which consisted of the two partitions <em>\/dev\/sdf1<\/em> and <em>\/dev\/sdb1<\/em>, one on each of the old WD drives. I had six hard drives in the server chassis, so I needed the serial numbers of the drives to ensure that I was swapping the right ones. 
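Not part of the original post: the member lookup above can also be scripted. A minimal sketch, assuming `mdadm --detail` output in the format shown; a saved sample is used here so the snippet is self-contained, but on a live system you would pipe in `sudo mdadm --detail /dev/md2` instead:

```shell
# Sketch (my own, not from the post): extract the member partitions
# of an md array from `mdadm --detail` output. The sample below is
# the device table from the output shown above.
sample='    Number   Major   Minor   RaidDevice State
       2       8       81        0      active sync   /dev/sdf1
       3       8       17        1      active sync   /dev/sdb1'

# The device path is the last field of each "active sync" line.
printf '%s\n' "$sample" | awk '/active sync/ { print $NF }'
# prints /dev/sdf1 then /dev/sdb1
```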
I used the <strong>hdparm<\/strong> command to get the serial numbers of <em>\/dev\/sdb<\/em> and <em>\/dev\/sdf<\/em>. For example:<\/p>\n<pre>$ sudo hdparm -i \/dev\/sdb\n\n\/dev\/sdb:\n\n Model=WDC WD20EZRX-00D8PB0, FwRev=0001, SerialNo=WD-WCC4M4UDRZLD\n Config={ HardSect NotMFM HdSw&gt;15uSec Fixed DTR&gt;10Mbs RotSpdTol&gt;.5% }\n RawCHS=16383\/16\/63, TrkSize=0, SectSize=0, ECCbytes=0\n BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off\n CurCHS=16383\/16\/63, CurSects=16514064, LBA=yes, LBAsects=7814037168\n IORDY=on\/off, tPIO={min:120,w\/IORDY:120}, tDMA={min:120,rec:120}\n PIO modes:  pio0 pio1 pio2 pio3 pio4 \n DMA modes:  mdma0 mdma1 mdma2 \n UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 \n AdvancedPM=no WriteCache=enabled\n Drive conforms to: unknown:  ATA\/ATAPI-4,5,6,7\n\n * signifies the current active mode\n<\/pre>\n<p>Before physically replacing a drive, I first had to remove it from the md device:<\/p>\n<pre>sudo mdadm -f \/dev\/md2 \/dev\/sdb1\nsudo mdadm -r \/dev\/md2 \/dev\/sdb1\n<\/pre>\n<p>After swapping out only <strong>one<\/strong> of the drives and replacing it with a new one, I rebooted the machine. I then partitioned the new drive using the <strong>parted<\/strong> command.<\/p>\n<pre>sudo parted \/dev\/sdb\n<\/pre>\n<p>Once within parted, execute the following commands to create a single partition using the whole drive.<\/p>\n<pre>mklabel gpt\nmkpart primary 2048s 100%\n<\/pre>\n<p>I learned that starting at sector 2048 achieves optimal alignment. Once I created the <em>\/dev\/sdb1<\/em> partition, I had to add it back into the RAID.<\/p>\n<pre>sudo mdadm --add \/dev\/md2 \/dev\/sdb1\n<\/pre>\n<p>As soon as the new partition on the new drive was added into the RAID, a drive resynchronization began automatically. The resync took more than 3 hours. 
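As a back-of-envelope check on that figure (my own arithmetic, not the post's): a RAID-1 resync copies the whole member once, so assuming a sustained sequential rate of roughly 170 MB/s on a 2TB drive:

```shell
# Rough resync-time estimate. The 170 MB/s rate is an assumption,
# not a measurement from the post.
bytes=2000000000000   # ~2 TB drive capacity in decimal bytes
rate=170000000        # assumed sustained resync rate, bytes/second
seconds=$((bytes / rate))
echo "$((seconds / 3600)) hours, $(((seconds % 3600) / 60)) minutes"
# prints 3 hours, 16 minutes
```

That lines up with the "more than 3 hours" observed for each resync pass.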
Once the resync completed, I did the same thing with the remaining old drive in the RAID and performed another resync, this time from the first new drive to the second new drive. After another 3+ hours, I could finally <strong>grow<\/strong> the RAID device.<\/p>\n<pre>sudo mdadm --grow \/dev\/md2 --bitmap none\nsudo mdadm --grow \/dev\/md2 --size max\nsudo mdadm --wait \/dev\/md2\nsudo mdadm --grow \/dev\/md2 --bitmap internal\n<\/pre>\n<p>The third command (<em>--wait<\/em>) took another 3+ hours to complete. After a full day of RAID resyncs, I had the same <em>\/dev\/md2<\/em>, now 4TB instead of 2TB in size. However, the corresponding LVM physical volume still needed to be resized:<\/p>\n<pre>sudo pvresize \/dev\/md2\n<\/pre>\n<p>Once the physical volume was resized, I could then <em>extend<\/em> the logical volume to use up the free space that was just added.<\/p>\n<pre>sudo lvresize -l +100%FREE \/dev\/airvideovg2\/airvideo\n<\/pre>\n<p>This took a few minutes, but at least it was not 3+ hours.<\/p>\n<p>Aside from the downtime needed to swap the hard drives, the logical volume remained usable throughout the resynchronization and resizing process, which was impressive. However, I then had to take the volume offline by first <strong><em>umount<\/em><\/strong>ing the volume and changing it to inactive status.<\/p>\n<pre>sudo lvchange -an \/dev\/airvideovg2\/airvideo\n<\/pre>\n<p>Note that I had to stop <em>smbd<\/em> and other services that were using the volume before I could unmount it.<\/p>\n<p>The last step was to resize the file system of the logical volume, but before I could do that I was forced to perform a file system check.<\/p>\n<pre>sudo e2fsck -f \/dev\/airvideovg2\/airvideo\nsudo resize2fs -p \/dev\/airvideovg2\/airvideo\n<\/pre>\n<p>I rebooted the machine to ensure all the new configurations held, and voila, I had upgraded my 8TB network-attached storage to 10TB! 
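Two incidental numbers are worth a sanity check (my own arithmetic, not from the post): lsblk reported the "2TB" drives as 1.8T because vendors quote decimal terabytes while lsblk uses binary TiB, and the pool grew by exactly 2TB because only one of the three mirrors was swapped:

```shell
# "2 TB" (decimal, 2 * 10^12 bytes) expressed in TiB (2^40 bytes),
# which is the unit lsblk reports:
awk 'BEGIN { printf "%.1f\n", 2 * 10^12 / 2^40 }'
# prints 1.8

# Replacing one 2TB mirror with a 4TB mirror adds 2TB to the pool:
echo $((8 + (4 - 2)))
# prints 10
```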
Okay, it was not super simple, but the upgrade process was fairly simple and painless, and the downtime was minimal. The <strong>LVM<\/strong> and <strong>mdadm<\/strong> developers did a really good job here.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>During the Boxing Day sales event of 2017, I purchased a couple of\u00a0Seagate Barracuda ST4000DM004 (4TB) hard drives. The intention was to expand our main home network storage, which is a network accessible \/ attached storage (NAS) managed by our Linux server using mdadm and Logical Volume Manager (LVM). However I procrastinated until this weekend &hellip; <a href=\"https:\/\/blog.lufamily.ca\/kang\/2018\/03\/11\/linux-lvm-super-simple-to-expand\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Linux LVM Super Simple to Expand&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[111],"tags":[5,28,6],"class_list":["post-504","post","type-post","status-publish","format-standard","hentry","category-tech","tag-nas","tag-technology","tag-ubuntu"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p7V6i8-88","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/504","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/
blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/comments?post=504"}],"version-history":[{"count":6,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/504\/revisions"}],"predecessor-version":[{"id":510,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/504\/revisions\/510"}],"wp:attachment":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/media?parent=504"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/categories?post=504"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/tags?post=504"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}