In a cloud environment, we have a cluster of GlusterFS nodes (participating in a Gluster volume) and clients (that mount the Gluster volumes). These nodes are created with HashiCorp Terraform.
Once the cluster is up and running, if we want to change the Gluster machine configuration, e.g. increasing the compute size from 4 CPUs to 8 CPUs, Terraform can recreate the nodes with the new configuration. The existing Gluster nodes are destroyed and new instances are created, but with the same IPs. On the newly created instances, the volume creation command fails, saying the brick is already part of a volume:
sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0
volume create: VolName: failed: /mnt/ppshare/brick0 is already part of a volume
But no volumes are present on this instance.
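One way to double-check both sides of this, assuming the brick path from the create command above (gluster volume info lists the volumes glusterd knows about; getfattr ships with the attr package):

# No volumes should be listed on the freshly created node
sudo gluster volume info
# Dump any extended attributes left on the brick directory; a leftover
# trusted.glusterfs.volume-id is what the create command objects to
sudo getfattr -m . -d -e hex /mnt/ppshare/brick0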
I understand that if I have to expand or shrink a volume, I can add or remove bricks from the existing volume. Here, though, I'm changing the compute size of the node, so it has to be recreated. I don't understand why it says the brick is already part of a volume, since it is a new machine altogether.
It would be very helpful if someone could explain why it says the brick is already part of a volume, and where the volume/brick information is stored, so that I can recreate the volume successfully.
I also tried the steps below, from this link, to clear the GlusterFS volume-related attributes from the mount, but with no luck: https://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/.
apt-get install attr
cd /glusterfs
for i in $(attr -lq .); do setfattr -x trusted.$i .; done
attr -lq /glusterfs (for testing, the output should be empty)
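For reference, the cleanup usually suggested for reusing a brick targets the brick directory itself; note that the blog post's loop runs against /glusterfs, whereas the brick in the failing command is /mnt/ppshare/brick0. A sketch of the same idea applied to that path (the attribute names are the ones GlusterFS sets on a brick, and the hidden .glusterfs directory also has to go):

# Remove the volume markers from the brick directory
sudo setfattr -x trusted.glusterfs.volume-id /mnt/ppshare/brick0
sudo setfattr -x trusted.gfid /mnt/ppshare/brick0
# Remove the brick's internal metadata store
sudo rm -rf /mnt/ppshare/brick0/.glusterfs
# Restart glusterd so it drops any stale state (on systemd machines)
sudo systemctl restart glusterd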
Simply put "force" in the end of "gluster volume create ..." command.
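That is, the exact command from the question with the force keyword appended, which makes volume create proceed past the safety check that produced the error:

sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 force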