glusterfs setup on c1:
* for h in c1 c2 c3; do ssh $h sudo mkdir /persist/glusterfs/compute; done
* gluster peer probe c2
* gluster peer probe c3
* gluster volume create compute replica 3 c{1,2,3}:/persist/glusterfs/compute/brick1
* gluster volume start compute
* gluster volume bitrot compute enable
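quick sanity check after setup (not from the original run; standard gluster commands, and the mount point is just an example):
  # all three peers should show State: Peer in Cluster (Connected)
  gluster peer status
  # volume should be Started with all three bricks online
  gluster volume status compute
  gluster volume info compute
  # mount the volume from any client to test reads/writes
  sudo mount -t glusterfs c1:/compute /mnt/compute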
mysql credentials:
* Put secrets/mysql_root_password into a Nomad var named secrets/mysql.root_password
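a minimal sketch with the nomad CLI, assuming the secret sits in a local file secrets/mysql_root_password and the dot splits the var path (secrets/mysql) from the item key (root_password):
  # create/update the variable; the item value comes from the file
  nomad var put secrets/mysql root_password="$(cat secrets/mysql_root_password)"
  # read it back to verify
  nomad var get secrets/mysql
A job can then render it in a template block with {{ with nomadVar "secrets/mysql" }}{{ .root_password }}{{ end }}.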
postgres credentials:
* Put secrets/postgres_password into a Nomad var named secrets/postgresql.postgres_password
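same pattern, under the same path/key assumption:
  nomad var put secrets/postgresql postgres_password="$(cat secrets/postgres_password)"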
adding a new gluster node (zippy) to the compute volume after c3 failed:
(instructions from https://icicimov.github.io/blog/high-availability/Replacing-GlusterFS-failed-node/)
* zippy: sudo mkdir -p /persist/glusterfs/compute
* c1: gluster peer probe 192.168.1.2 (by IP, because the zippy hostname resolved to a Tailscale address)
* c1: gluster volume replace-brick compute c3:/persist/glusterfs/compute/brick1 192.168.1.2:/persist/glusterfs/compute/brick1 commit force
* c1: gluster volume heal compute full
* c1: gluster peer detach c3
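to watch the heal drain and confirm the swap (standard gluster status commands, not from the original notes):
  # per-brick list of entries still needing heal; should drop to zero
  gluster volume heal compute info
  # compact counts instead of a full listing, on gluster >= 3.13
  gluster volume heal compute info summary
  # zippy's brick should be online and c3 gone from the peer list
  gluster volume status compute
  gluster peer status
If the detach fails because c3 is unreachable, gluster peer detach c3 force is the usual escape hatch.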