====== Run "zpool labelclear" when reusing disks that were members of another pool ======
We should run the ''zpool labelclear'' command to erase stale ZFS pool information from the target disks whenever we create a new pool out of disks that used to belong to another pool.
Trying to create a new pool with such reused disks usually fails with the following message:
<code>
# zpool create ztank da0p3
invalid vdev specification
use '-f' to override the following errors:
/dev/da0p3 is part of potentially active pool 'zroot'
</code>
The warning is kind, but the ''zroot'' pool in this example has definitely been destroyed, and I really do want to create ''ztank'' with ''da0p3''. Although I feel ZFS should just clear the label itself, ZFS can even undo a ''zpool destroy'', which is probably why it leaves the label in place. FYI: for restoring a destroyed pool, see the ''zpool import -D'' option.
However, we can sometimes create a new pool on disks that still carry the old label without seeing this warning at all. If that happens, it is a real nightmare: the administrative information ends up holding both the invalid old pool and the valid new pool on the same disks, which is an obvious mess. The following is a reproduction log:
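When the disk really is free to reuse, a minimal sketch of the flow I mean looks like this (using the example names ''da0p3'' and ''ztank'' from above):
<code bash>
# Confirm there is a stale label on the partition (zdb -l dumps it).
zdb -l /dev/da0p3

# Erase the old ZFS label. -f forces clearing even if the label claims
# the device belongs to a potentially active pool.
zpool labelclear -f /dev/da0p3

# Now the new pool can be created without the "-f" override.
zpool create ztank da0p3
</code>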
<code>
# zpool status
  pool: newtank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newtank     ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors

  pool: oldtank
 state: UNAVAIL
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        oldtank                  UNAVAIL      0     0     0
          1234567890123456789    UNAVAIL      0     0     0

errors: No known data errors
</code>
The log shows two pools, ''newtank'' and ''oldtank'', as if they consisted of different storage. In fact, ''oldtank'' is an already-destroyed pool that consisted of ''da0p3'', and that same disk is now a member of ''newtank''. Somehow ZFS still recognises the invalid ''oldtank''. I don't get it...
If that happens, it is past saving. We can't destroy ''oldtank'' because the pool does not actually exist, and we can't do anything with the vdev numbered "1234567890123456789" either. And just because nothing else works, DO NOT EXECUTE ''zpool labelclear'' AT THIS STAGE; otherwise both pools will be destroyed. _:(´ཀ`」∠):_ (This is from my own real experience...)
For the above reason, never forget to run ''zpool labelclear'' before creating a new pool on reused disks.
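As a quick sanity check before creating a pool on reused disks, it may also help to see what ZFS can still find on the attached devices (a sketch; the partition name is just the example from above):
<code bash>
# List importable pools discovered on attached disks.
zpool import

# Additionally list destroyed pools that could still be recovered.
zpool import -D

# If an old pool still shows up on the disk you are about to reuse,
# clear its label first.
zpool labelclear -f /dev/da0p3
</code>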
----
**(2017-11-14 Update)**
The problem occurred again at just the right moment, so I took a screenshot.
{{ :blog:2017:appear_previous_corrupted_zpool.png |}}
The steps were:
- Destroyed the previous ''zroot'' pool (red) and ran ''labelclear''.
- Created a new ''zroot'' pool (green).
- Installed FreeBSD 11.0-RELEASE from scratch.
- Ran ''freebsd-update'' to upgrade to 11.1-RELEASE.
- Rebooted the system, but it failed to boot at the ''Trying to mount root'' stage.
  * In retrospect, I suspect the previous pool reappeared at this step and the system tried to mount it.
- Booted the system with ''kernel.old'' and tried to roll back to the latest 11.0-RELEASE with ''freebsd-update''.
- Broke the system completely. The boot loader could no longer even load the kernel or the zfs module.
- Booted from installer media, ran ''zpool import'', and got the screenshot above.
I am fairly sure I ran ''labelclear''. I wondered whether I had used the wrong ''zpool.cache'' file, but I have no idea.
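One thing that could be checked in such a situation is the cache file itself; on FreeBSD the stock location is ''/boot/zfs/zpool.cache''. A hedged sketch, in case it helps someone:
<code bash>
# Dump the pool configuration recorded in the cache file that the loader
# and kernel consult at boot time.
zdb -C -U /boot/zfs/zpool.cache

# If the recorded configuration looks stale, regenerate the cache
# for the live pool.
zpool set cachefile=/boot/zfs/zpool.cache zroot
</code>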
----
**(2017-11-16 Update)**
I... I need to tell you something.
The system still recognised the old ''zroot'' even though I had run ''labelclear'' and zero-filled each partition with ''dd'' once more. Only after I ran ''zpool labelclear da0'' against the whole disk did it finally disappear. Of course, doing that breaks the primary GPT table. (It can still be recovered from the secondary table, though.) I have no idea why a pool label was sitting in the GPT area, as if I had created the pool on the whole disk without partitioning. How did this happen!?
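For reference, the broken primary GPT can usually be repaired from the intact secondary copy with ''gpart'' (a sketch; run it against the whole device that was label-cleared):
<code bash>
# The table is reported as CORRUPT while only the primary copy is damaged.
gpart show da0

# Rebuild the primary GPT from the secondary copy at the end of the disk.
gpart recover da0
</code>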
Conclusion:
Do run ''zpool labelclear'' on every target device ''/dev/daX'' and on every partition ''/dev/daXpY''. Zero-fill the entire disk as well if you want to be absolutely sure.
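In command form, the conclusion is roughly the following (a sketch with the example device names from this post; the whole-disk steps destroy the partition table, so only run them when you intend to wipe the disk):
<code bash>
# Clear ZFS labels from every partition that was ever a pool member...
zpool labelclear -f /dev/da0p3

# ...and from the whole device as well (this clobbers the primary GPT;
# see the gpart recover note above).
zpool labelclear -f /dev/da0

# Optional heavy-handed finish: zero the entire disk. Slow, but it removes
# every trace, including the partition table itself.
dd if=/dev/zero of=/dev/da0 bs=1m
</code>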