Reinstall c1 after failed disk.

2025-02-02 12:43:54 +00:00
parent 3c3e96dc72
commit cb6b27f00c
4 changed files with 10 additions and 3 deletions


@@ -21,3 +21,9 @@ adding a new gluster node to the compute volume, with c3 having failed:
* c1: gluster peer detach c3
same procedure used later to replace 192.168.1.2 with 192.168.1.73
replacing a failed / reinstalled gluster node's brick (c1 in this case); all commands run on c2:
* gluster volume remove-brick compute replica 2 c1:/persist/glusterfs/compute/brick1 force
* gluster peer detach c1
* gluster peer probe 192.168.1.71 (not c1 because switching to IPs to avoid DNS/tailscale issues)
* gluster volume add-brick compute replica 3 192.168.1.71:/persist/glusterfs/compute/brick1
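the steps above can be sketched as a small dry-run script; it prints each gluster command instead of executing it, so the sequence can be reviewed first. node name, IP, and brick path are taken from the notes above; adjust for your cluster.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the brick replacement steps; prints the commands
# instead of running them. Swap the body of run() to actually execute.
set -eu

FAILED=c1                           # hostname of the failed/reinstalled node
NEW_IP=192.168.1.71                 # its IP, used instead of the hostname
BRICK=/persist/glusterfs/compute/brick1

run() { echo "+ $*"; }              # change to: run() { "$@"; } to execute

# drop the dead brick, shrinking the replica count from 3 to 2
run gluster volume remove-brick compute replica 2 "$FAILED:$BRICK" force
# forget the old peer entirely
run gluster peer detach "$FAILED"
# re-add the node by IP to avoid DNS/tailscale issues
run gluster peer probe "$NEW_IP"
# re-add its brick, growing the replica count back to 3
run gluster volume add-brick compute replica 3 "$NEW_IP:$BRICK"
```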