{"id":2874,"date":"2024-08-15T21:30:32","date_gmt":"2024-08-16T01:30:32","guid":{"rendered":"https:\/\/blog.lufamily.ca\/kang\/?p=2874"},"modified":"2024-08-15T21:30:32","modified_gmt":"2024-08-16T01:30:32","slug":"replacing-vdev-in-a-zfs-pool","status":"publish","type":"post","link":"https:\/\/blog.lufamily.ca\/kang\/2024\/08\/15\/replacing-vdev-in-a-zfs-pool\/","title":{"rendered":"Replacing VDEV in a ZFS Pool"},"content":{"rendered":"\n<p>Several months ago I had an old 3TB hard drive (HDD) crashed on me. Luckily it was a hard drive that is primarily used for backup purposes, so the data lost can quickly be duplicated from source by performing another backup. Since it was not critical that I replace the damaged drive immediately, it was kind of left to fester until today.<\/p>\n\n\n\n<p>Recently I acquired four additional WD Red 6TB HDD, and I wanted to install these new drives into my NAS chassis. Since I am opening the chassis, I will irradicate the damaged drive, and also take this opportunity to swap some old drives out of the ZFS pool that I created <a href=\"https:\/\/blog.lufamily.ca\/kang\/2024\/03\/02\/lvm-to-zfs-migration\/\" data-type=\"post\" data-id=\"2779\" target=\"_blank\" rel=\"noreferrer noopener\">earlier<\/a> and add these new drives into the pool.<\/p>\n\n\n\n<p>I first use the following command to add two additional mirror vdev&#8217;s each composed of the two new WD Red drives.<\/p>\n\n\n\n<pre class=\"wp-block-code has-medium-font-size\"><code>sudo zpool add vault mirror {id_of_drive_1} {id_of_drive_2}<\/code><\/pre>\n\n\n\n<p>The drive id&#8217;s is located in the following path: <code>\/dev\/disk\/by-id<\/code> and is typically prefixed with <code>ata<\/code> or <code>wwn<\/code>.<\/p>\n\n\n\n<p>This created two vdev&#8217;s into the pool, and I can remove an existing vdev. Doing so will automatically start redistributing the data on the removing vdev to the other vdev&#8217;s in the pool. 
All of this is performed while the pool is still online and servicing the NAS. To remove the old vdev, I execute the following command:<\/p>\n\n\n\n<pre class=\"wp-block-code has-medium-font-size\"><code>sudo zpool remove vault {vdev_name}<\/code><\/pre>\n\n\n\n<p>In my case, the old vdev&#8217;s name is <code>mirror-5<\/code>. <\/p>\n\n\n\n<p>Once the remove command is given, the copying of data from the old vdev to the other vdevs begins. You can check the status with:<\/p>\n\n\n\n<pre class=\"wp-block-code has-medium-font-size\"><code>sudo zpool status -v vault<\/code><\/pre>\n\n\n\n<p>The above shows the copying status and the approximate time it will take to complete the job.<\/p>\n\n\n\n<p>Once the removal is complete, the old HDDs that made up <code>mirror-5<\/code> are still labeled for ZFS use. I had to use the <code>labelclear<\/code> command to clear each drive so that I could repurpose them for backup duty. Below is an example of the command.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo zpool labelclear sdb1<\/code><\/pre>\n\n\n\n<p>The resulting pool now looks like this:<\/p>\n\n\n\n<pre class=\"wp-block-code has-small-font-size\"><code>sudo zpool list -v vault\n\n(Output truncated)\nNAME                                                    SIZE  ALLOC   FREE\nvault                                                  52.7T  38.5T  14.3T\n  mirror-0                                             9.09T  9.00T  92.4G\n    ata-ST10000VN0008-2JJ101_ZHZ1KMA0-part1                -      -      -\n    ata-WDC_WD101EFAX-68LDBN0_VCG6VRWN-part1               -      -      -\n  mirror-1                                             7.27T  7.19T  73.7G\n    wwn-0x5000c500b41844d9-part1                           -      -      -\n    ata-ST8000VN0022-2EL112_ZA1E8S0V-part1                 -      -      -\n  mirror-2                                             9.09T  9.00T  93.1G\n    wwn-0x5000c500c3d33191-part1                           -      -      -\n    ata-ST10000VN0004-1ZD101_ZA2964KD-part1                -      -      -\n  mirror-3                                             10.9T  10.8T   112G\n    wwn-0x5000c500dc587450-part1                           -      -      -\n    wwn-0x5000c500dcc525ab-part1                           -      -      -\n  mirror-4                                             5.45T  1.74T  3.72T\n    wwn-0x50014ee2b9f82b35-part1                           -      -      -\n    wwn-0x50014ee2b96dac7c-part1                           -      -      -\n  indirect-5                                               -      -      -\n  mirror-6                                             5.45T   372G  5.09T\n    wwn-0x50014ee265d315cd-part1                           -      -      -\n    wwn-0x50014ee2bb37517e-part1                           -      -      -\n  mirror-7                                             5.45T   373G  5.09T\n    wwn-0x50014ee265d315b1-part1                           -      -      -\n    wwn-0x50014ee2bb2898c2-part1                           -      -      -\ncache                                                      -      -      -\n  nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D   466G   462G  4.05G<\/code><\/pre>\n\n\n\n<p>The above <code>indirect-5<\/code> can be safely ignored. It is a placeholder left behind by the removal: reads of blocks that once lived on <code>mirror-5<\/code> are remapped through it.<\/p>\n\n\n\n<p>This time we replaced an entire vdev; another technique is to replace the individual drives within a vdev. To do this, we use the <code>zpool replace<\/code> command, possibly preceded by <code>zpool offline<\/code> to take the old drive out of service first. 
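<\/p>\n\n\n\n<p>A sketch of that sequence, using placeholder device IDs:<\/p>\n\n\n\n<pre class=\"wp-block-code has-medium-font-size\"><code>sudo zpool offline vault {old_drive_id}\nsudo zpool replace vault {old_drive_id} {new_drive_id}<\/code><\/pre>\n\n\n\n<p>Note that even after all drives in a vdev are upgraded, the extra capacity only becomes usable once the pool&#8217;s <code>autoexpand<\/code> property is set to <code>on<\/code>, or each new drive is expanded with <code>zpool online -e<\/code>.<\/p>\n\n\n\n<p>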
Replacing each old drive in the mirror in succession with a newer, larger-capacity drive increases the existing vdev&#8217;s size.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Several months ago I had an old 3TB hard drive (HDD) crash on me. Luckily, it was a drive used primarily for backup purposes, so the lost data could quickly be duplicated from the source by performing another backup. Since it was not critical that I replace the damaged drive immediately, it was &hellip; <a href=\"https:\/\/blog.lufamily.ca\/kang\/2024\/08\/15\/replacing-vdev-in-a-zfs-pool\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Replacing VDEV in a ZFS Pool&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[111],"tags":[5,28,6],"class_list":["post-2874","post","type-post","status-publish","format-standard","hentry","category-tech","tag-nas","tag-technology","tag-ubuntu"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p7V6i8-Km","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/2874","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/comments?post=2874"}],"version-history":[{"count":6,"href":"http
s:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/2874\/revisions"}],"predecessor-version":[{"id":2880,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/posts\/2874\/revisions\/2880"}],"wp:attachment":[{"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/media?parent=2874"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/categories?post=2874"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.lufamily.ca\/kang\/wp-json\/wp\/v2\/tags?post=2874"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}