How can I save my drive after a stupid `sudo zpool create -f arachXXX /dev/sda`? 😄

Hello there,

Is there a way to save my drive? (I'm in deep shit…)

0. Before the accident

Before that stupid amateur mistake, the following commands showed my previous "disk state":

nfg@coco:~/temp$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525167 sectors, 931.5 GiB
Model: Expansion
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): AB88A2E1-5B2D-40C5-BE22-436DC27B5582
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525133
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624         5244927   2.0 GiB     8200
   3         5244928         9439231   2.0 GiB     BE00
   4         9439232      1953525133   927.0 GiB   BF00

nfg@coco:~/temp$ sudo blkid /dev/sda2
/dev/sda2: UUID="41f51b83-daae-4680-aa1d-e6ba7de1a0b1" TYPE="crypto_LUKS" PARTUUID="eb860651-953a-6542-8a98-5453509b8c4f"

nfg@coco:~/temp$ sudo blkid /dev/sda3
/dev/sda3: LABEL="bpool" UUID="1983260328483891507" UUID_SUB="2280284370650918900" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="daec9167-7629-e54b-85d1-8d7995a9f35c"

nfg@coco:~/temp$ sudo blkid /dev/sda4
/dev/sda4: LABEL="rpool" UUID="11453632911775052663" UUID_SUB="12417951916163988317" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="47bedec4-6433-4b4e-8b55-35765f9bbfe7"

1. The fatal error

sudo zpool create -f arachXXX /dev/sda

… oops! :astonished:

2. Is it possible to roll back?

Now my drive looks like this, and I want to know if it's possible to repair it…

nfg@coco:~$ sudo gdisk -l /dev/sda
[sudo] Mot de passe de nfg :                                  
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525167 sectors, 931.5 GiB
Model: Expansion       
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 22ED8AD5-96F2-5544-979B-153237CD9C9E
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525133
Partitions will be aligned on 2048-sector boundaries
Total free space is 3436 sectors (1.7 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      1953507327   931.5 GiB   BF01  zfs-5ff1ad3692280185
   9      1953507328      1953523711   8.0 MiB     BF07  

Any idea?

Please… I need help (I'm really in deep shit…)

°J°

Export the pool if you haven't already.

You should be able to use gdisk to delete the new partitions and recreate them as they were. The EFI system partition is likely damaged beyond repair. The others may or may not have been damaged. If they have a filesystem on them, see if fsck can repair them.

To emphasize: create a new partition table and recreate the old partitions WITHOUT FORMATTING THE PARTITIONS. If the data is still there, you may be able to import the pools, but there is no guarantee.
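One way to keep that reversible (a sketch; the file names are made up): collect the first-aid commands in a plan file for review instead of running them blind, and back up even the broken table before touching it, so any edit can be undone.

```shell
# Sketch only: keep an escape hatch before editing the disk. The commands are
# written to a hypothetical plan file for review rather than executed here;
# run each line by hand once you are sure of it.
cat > rescue-plan.txt <<'EOF'
sudo sgdisk --backup=sda-gpt-broken.bin /dev/sda
sudo sfdisk -d /dev/sda > sda-gpt-broken.sfdisk
EOF
cat rescue-plan.txt
```

`sgdisk --backup` keeps a binary copy of the current GPT that `sgdisk --load-backup` can restore later; the `sfdisk -d` dump is the human-readable equivalent.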

@mgerdts these are ZFS pools, there is no fsck for ZFS. An import and scrub may perform some repairs if possible.
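Once the partitions are back, that import-and-scrub path might look like this (a sketch written to a plan file, not executed; the pool names bpool and rpool come from the blkid output earlier in the thread):

```shell
# Sketch: after the partitions are recreated, try importing the pools and
# scrubbing to let ZFS verify and, where possible, repair data. Collected in
# a plan file for review; run the lines by hand.
cat > scrub-plan.txt <<'EOF'
sudo zpool import -f rpool
sudo zpool import -f bpool
sudo zpool scrub rpool
zpool status -v
EOF
cat scrub-plan.txt
```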

Good luck!

Ahh, I didn't realize you had existing pools. Good luck!

Thanks a lot, guys, for your support: all this lifts my spirits. Long live free software, and strength to its communities! (I live and work in France, GMT+1, so sorry for my poor English and the jet lag. :slight_smile: )

Job in progressā€¦

nfg@coco:~$ zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
arachide   928G   164K   928G        -         -     0%     0%  1.00x    ONLINE  -

nfg@coco:~$ sudo zpool export arachide
[...]

It seems to be taking a long time… is that normal? :sweat:

May I kill (^C) the process? (Waiting for answers before doing anything else: I will follow your wise advice step by step…)

Yes Sir!

After that, I will try to:

  • RECREATE OLD PARTITIONS,
  • WITH THE EXACT START AND END SECTOR NUMBERS
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525167 sectors, 931.5 GiB
Model: Expansion
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): AB88A2E1-5B2D-40C5-BE22-436DC27B5582
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525133
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624         5244927   2.0 GiB     8200
   3         5244928         9439231   2.0 GiB     BE00
   4         9439232      1953525133   927.0 GiB   BF00
  • AND WITHOUT FORMATTING THE PARTITIONS!

Are you OK with that strategy?

And, in another terminal, nothing, no response from this command:

nfg@coco:~$ zpool list
[...]

Phase 2? Shall I try to RECREATE OLD PARTITIONS?

If your pool is actually available right now, I'd back up all the data before trying anything else.

In theory, just recreating the partition table will be enough to save it. But in practice… yeah, like I said, if the pool is actually online and mounted, the first thing I'd be doing is trying to replicate the data off it, then try the fancy tricks to bring it back to life in place, but know that if they go wrong, at least I've still got all my data.
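Replicating the data off first might look like this (a sketch, again written to a plan file rather than executed; it assumes a healthy destination pool named `backup` exists, which is hypothetical):

```shell
# Sketch: replicate everything off the damaged pool before any in-place
# surgery. "backup" is an assumed destination pool on another disk.
# Collected in a plan file for review; run the lines by hand when ready.
cat > replicate-plan.txt <<'EOF'
sudo zfs snapshot -r rpool@rescue
sudo zfs send -R rpool@rescue | sudo zfs receive -d backup
EOF
cat replicate-plan.txt
```

`zfs send -R` sends the pool's whole dataset tree with properties, and `zfs receive -d` recreates the dataset hierarchy under the destination pool.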


:white_check_mark: Properly back up the partition table: done:

  1. The drive to rescue:
    nfg@coco:~$ lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
    [...]
    sda               8:0    0 931,5G  0 disk  
    ├─sda1            8:1    0 931,5G  0 part  
    └─sda9            8:9    0     8M  0 part  
    [...]
    
  2. The backup command:
    nfg@coco:~$ sudo sfdisk -d /dev/sda > temp/arachide-partition-table.broken
    
  3. The result:
    nfg@coco:~$ cat temp/arachide-partition-table.broken 
    label: gpt
    label-id: 22ED8AD5-96F2-5544-979B-153237CD9C9E
    device: /dev/sda
    unit: sectors
    first-lba: 34
    last-lba: 1953525133
    sector-size: 512
    
    /dev/sda1 : start=  2048,       size=  1953505280, type=6A898CC3-1DD2-11B2-99A6-080020736631, uuid=17E15941-AA1C-0C43-BD3C-613CF2FA27D2, name="zfs-5ff1ad3692280185"
    /dev/sda9 : start=  1953507328, size=       16384, type=6A945A3B-1DD2-11B2-99A6-080020736631, uuid=FFDE9FD8-0F89-ED46-BE62-84A91734F1CA
    

[FR] I'd like to live in theory, because in theory everything goes well. :slight_smile:

So, that's the plan (in theory): the old table was:

 nfg@coco:~/temp$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525167 sectors, 931.5 GiB
Model: Expansion
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): AB88A2E1-5B2D-40C5-BE22-436DC27B5582
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525133
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624         5244927   2.0 GiB     8200
   3         5244928         9439231   2.0 GiB     BE00
   4         9439232      1953525133   927.0 GiB   BF00

nfg@coco:~/temp$ sudo blkid /dev/sda2
/dev/sda2: UUID="41f51b83-daae-4680-aa1d-e6ba7de1a0b1" TYPE="crypto_LUKS" PARTUUID="eb860651-953a-6542-8a98-5453509b8c4f"

nfg@coco:~/temp$ sudo blkid /dev/sda3
/dev/sda3: LABEL="bpool" UUID="1983260328483891507" UUID_SUB="2280284370650918900" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="daec9167-7629-e54b-85d1-8d7995a9f35c"

nfg@coco:~/temp$ sudo blkid /dev/sda4
/dev/sda4: LABEL="rpool" UUID="11453632911775052663" UUID_SUB="12417951916163988317" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="47bedec4-6433-4b4e-8b55-35765f9bbfe7"

Step #1 — Manually recreate the partition table with gdisk

  • Disk identifier (GUID) = AB88A2E1-5B2D-40C5-BE22-436DC27B5582
  • Partitions 1, 2, 3 and 4:
    • Start (sector)
    • End (sector)
    • Code
    • Name (only for sda1, with "EFI System Partition")
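As an alternative to stepping through gdisk interactively, the same layout can be expressed as an sfdisk script (a sketch: starts and sizes are computed as end − start + 1 from the pre-accident gdisk listing, and the type GUIDs are the standard equivalents of gdisk codes EF00, 8200, BE00 and BF00):

```shell
# Sketch: the old layout as an sfdisk script. Review carefully, then apply
# with:  sudo sfdisk /dev/sda < restore-arachide.sfdisk   (destructive!)
cat > restore-arachide.sfdisk <<'EOF'
label: gpt
label-id: AB88A2E1-5B2D-40C5-BE22-436DC27B5582
unit: sectors

/dev/sda1 : start=2048,    size=1048576,    type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, name="EFI System Partition"
/dev/sda2 : start=1050624, size=4194304,    type=0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
/dev/sda3 : start=5244928, size=4194304,    type=6A82CB45-1DD2-11B2-99A6-080020736631
/dev/sda4 : start=9439232, size=1944085902, type=6A85CF4D-1DD2-11B2-99A6-080020736631
EOF
cat restore-arachide.sfdisk
```

Writing the table this way means nothing is formatted: sfdisk only rewrites the GPT entries, which is exactly the "recreate without formatting" advice above.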

The result is:

nfg@coco:~$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525167 sectors, 931.5 GiB
Model: Expansion       
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): AB88A2E1-5B2D-40C5-BE22-436DC27B5582
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525133
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System Partition
   2         1050624         5244927   2.0 GiB     8200  
   3         5244928         9439231   2.0 GiB     BE00  
   4         9439232      1953525133   927.0 GiB   BF00  

Step #2 — Work in progress… (see you tomorrow)

:sleeping:

I wish you luck. I can't add anything to your plan.

(I didn't reply earlier because we were out of town to see the solar eclipse that crossed North America yesterday. If you get a chance to watch the one that crosses Spain in 2026, I highly recommend it.)

Wow… no, I didn't get that chance… but your story reminds me of the total lunar eclipse of 4 April 1996: I was there! :wink:

My old film photographs…

Anyway…

So…

Step #3 — Set the right partition parameters

Before the disaster:

nfg@coco:~/temp$ sudo blkid /dev/sda2
/dev/sda2: UUID="41f51b83-daae-4680-aa1d-e6ba7de1a0b1" TYPE="crypto_LUKS" PARTUUID="eb860651-953a-6542-8a98-5453509b8c4f"

nfg@coco:~/temp$ sudo blkid /dev/sda3
/dev/sda3: LABEL="bpool" UUID="1983260328483891507" UUID_SUB="2280284370650918900" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="daec9167-7629-e54b-85d1-8d7995a9f35c"

nfg@coco:~/temp$ sudo blkid /dev/sda4
/dev/sda4: LABEL="rpool" UUID="11453632911775052663" UUID_SUB="12417951916163988317" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="47bedec4-6433-4b4e-8b55-35765f9bbfe7"

Now, with gdisk, I set this:

Command (? for help): i
Partition number (1-4): 2
Partition GUID code: 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F (Linux swap)
Partition unique GUID: 41F51B83-DAAE-4680-AA1D-E6BA7DE1A0B1
First sector: 1050624 (at 513.0 MiB)
Last sector: 5244927 (at 2.5 GiB)
Partition size: 4194304 sectors (2.0 GiB)
Attribute flags: 0000000000000000
Partition name: ''

Command (? for help): i
Partition number (1-4): 3
Partition GUID code: 6A82CB45-1DD2-11B2-99A6-080020736631 (Solaris boot)
Partition unique GUID: 26E3A7C6-EEA4-754D-8E42-E7D3E4CF6286
First sector: 5244928 (at 2.5 GiB)
Last sector: 9439231 (at 4.5 GiB)
Partition size: 4194304 sectors (2.0 GiB)
Attribute flags: 0000000000000000
Partition name: ''

Command (? for help): i
Partition number (1-4): 4
Partition GUID code: 6A85CF4D-1DD2-11B2-99A6-080020736631 (Solaris root)
Partition unique GUID: 9FE1BD0E-1FA8-3643-8B9F-4237468FB14C
First sector: 9439232 (at 4.5 GiB)
Last sector: 1953525133 (at 931.5 GiB)
Partition size: 1944085902 sectors (927.0 GiB)
Attribute flags: 0000000000000000
Partition name: ''

And with blkid, I now have this:

nfg@coco:~/.tor-browser$ sudo blkid /dev/sda2
/dev/sda2: UUID="41f51b83-daae-4680-aa1d-e6ba7de1a0b1" TYPE="crypto_LUKS" PARTUUID="41f51b83-daae-4680-aa1d-e6ba7de1a0b1"

nfg@coco:~/.tor-browser$ sudo blkid /dev/sda3
/dev/sda3: LABEL="bpool" UUID="1983260328483891507" UUID_SUB="2280284370650918900" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="26e3a7c6-eea4-754d-8e42-e7d3e4cf6286"

nfg@coco:~/.tor-browser$ sudo blkid /dev/sda4
/dev/sda4: LABEL="rpool" UUID="11453632911775052663" UUID_SUB="12417951916163988317" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="9fe1bd0e-1fa8-3643-8b9f-4237468fb14c"

With gdisk I can reset the UUIDs correctly, and everything seems to be OK.

But I can't find a way to set the correct value for PARTUUID…

Any idea? (waiting before touching anything else :slight_smile: )
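In case it helps: gdisk can change a partition's unique GUID from its expert menu (`x`, then `c`, then the partition number), and util-linux's sfdisk can do it non-interactively with `--part-uuid`. A sketch that builds the three commands from the original PARTUUIDs in the pre-accident blkid output (collected in a plan file so nothing runs by accident):

```shell
# Sketch: restore the original PARTUUIDs that blkid reported before the
# accident. The commands are written to a plan file for review; run them
# by hand (they rewrite GPT entries only, no formatting).
while read -r part uuid; do
  echo "sudo sfdisk --part-uuid /dev/sda $part $uuid"
done <<'EOF' > partuuid-plan.txt
2 eb860651-953a-6542-8a98-5453509b8c4f
3 daec9167-7629-e54b-85d1-8d7995a9f35c
4 47bedec4-6433-4b4e-8b55-35765f9bbfe7
EOF
cat partuuid-plan.txt
```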

Step #4 — Mounting the encrypted ZFS filesystem, just like that

:heart_eyes:

Inspired by Mounting the encrypted zfs filesystem:

root@xubuntu:~# zfs list
no datasets available

root@xubuntu:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
[...]
sda        8:0    0 931,5G  0 disk
├─sda1     8:1    0   512M  0 part
├─sda2     8:2    0     2G  0 part
├─sda3     8:3    0     2G  0 part
└─sda4     8:4    0   927G  0 part

root@xubuntu:~# zpool import -f rpool

root@xubuntu:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   920G   619G   301G        -         -     5%    67%  1.00x    ONLINE  -

root@xubuntu:~# cryptsetup open /dev/zvol/rpool/keystore rpool-keystore
Saisissez la phrase secrète pour /dev/zvol/rpool/keystore :

root@xubuntu:~# ls /mnt/archide-keystore

root@xubuntu:~# mount /dev/mapper/rpool-keystore /mnt/archide-keystore

root@xubuntu:~# ls /mnt/archide-keystore
lost+found  system.key

root@xubuntu:~# cat /mnt/archide-keystore/system.key | zfs load-key -L prompt rpool

root@xubuntu:~# umount /mnt/archide-keystore

root@xubuntu:~# cryptsetup close rpool-keystore

root@xubuntu:~# zfs list
NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool                                              619G   272G      192K  /
rpool/ROOT                                        7.57G   272G      192K  none
rpool/ROOT/ubuntu_5btsrf                          7.57G   272G     5.56G  /
rpool/ROOT/ubuntu_5btsrf/srv                       192K   272G      192K  /srv
rpool/ROOT/ubuntu_5btsrf/usr                       580K   272G      192K  /usr
rpool/ROOT/ubuntu_5btsrf/usr/local                 388K   272G      388K  /usr/local
rpool/ROOT/ubuntu_5btsrf/var                      2.01G   272G      192K  /var
rpool/ROOT/ubuntu_5btsrf/var/games                 192K   272G      192K  /var/games
rpool/ROOT/ubuntu_5btsrf/var/lib                  1.99G   272G     1.82G  /var/lib
rpool/ROOT/ubuntu_5btsrf/var/lib/AccountsService   244K   272G      244K  /var/lib/AccountsService
rpool/ROOT/ubuntu_5btsrf/var/lib/NetworkManager    296K   272G      296K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_5btsrf/var/lib/apt               118M   272G      118M  /var/lib/apt
rpool/ROOT/ubuntu_5btsrf/var/lib/dpkg             64.3M   272G     64.3M  /var/lib/dpkg
rpool/ROOT/ubuntu_5btsrf/var/log                  14.0M   272G     14.0M  /var/log
rpool/ROOT/ubuntu_5btsrf/var/mail                  192K   272G      192K  /var/mail
rpool/ROOT/ubuntu_5btsrf/var/snap                 1.13M   272G     1.13M  /var/snap
rpool/ROOT/ubuntu_5btsrf/var/spool                 252K   272G      252K  /var/spool
rpool/ROOT/ubuntu_5btsrf/var/www                   192K   272G      192K  /var/www
rpool/USERDATA                                     611G   272G      192K  /
rpool/USERDATA/nfengone_npb9i1                     611G   272G      611G  /home/nfengone
rpool/USERDATA/root_npb9i1                        1.32M   272G     1.32M  /root
rpool/keystore                                     518M   273G     63.4M  -

root@xubuntu:~# zfs list | grep home
rpool/USERDATA/nfengone_npb9i1                     611G   272G      611G  /home/nfengone

root@xubuntu:~# mkdir /mnt/nfengone@arachide; zfs set mountpoint=/mnt/nfengone@arachide rpool/USERDATA/nfengone_npb9i1

root@xubuntu:~# zfs mount rpool/USERDATA/nfengone_npb9i1

root@xubuntu:~# ls /mnt/nfengone@arachide
Bureau  coco.back  DISTILIBRE  Documents  Images  jmn@versatil.fr@partage.versatil.fr  LISTEL  marion  mireille  Modèles  Musique  ndata  nfengone  OXA  oxalis  pavot.kdbx  Public  rosalie  RZM  snap  Téléchargements  temp  Vidéos  vs  VS