- We are on vSphere 6.7 U3 and using VDS 6.6
- All of our hosts have 4 uplinks: 2 for management network; 2 for guest network (connected to VDS)
- Each cluster has its own VDS
This discussion stems from an outage last year during a VDS upgrade. I am interested in setting up a 'standby' VDS for our production environment that is a replica of the connected, in-use VDS. In the event of a failed upgrade, I would like to switch all of our hosts over to the standby switch.
What would be the proper method for accomplishing this? I was told the easiest way would be to select all of the affected hosts, migrate one uplink at a time to the standby VDS (VDS2), and then migrate each virtual machine's networking to the corresponding EPG on VDS2. I realize this would be some work, since I would have to do it for each and every VM, but we could at least be intentional about which ones we move first (critical VMs).
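For what it's worth, here is a rough PowerCLI sketch of that two-step process (move one uplink per host, then re-point VM vNICs). This assumes a connected vCenter session and that VDS2 already exists with matching port groups; the names `VDS2`, `Prod`, `vmnic2`, `Guest-PG`, and `Guest-PG-VDS1` are placeholders for your environment, not real objects from my setup:

```powershell
# Sketch only -- assumes PowerCLI, Connect-VIServer already run, and a
# pre-built standby switch "VDS2" with port groups mirroring VDS1.
$vds2 = Get-VDSwitch -Name 'VDS2'

foreach ($esx in Get-Cluster 'Prod' | Get-VMHost) {
    # Join the host to the standby switch with ONE uplink first,
    # leaving the other uplink on VDS1 during the move.
    Add-VDSwitchVMHost -VDSwitch $vds2 -VMHost $esx -ErrorAction SilentlyContinue
    $pnic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name 'vmnic2'
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds2 `
        -VMHostPhysicalNic $pnic -Confirm:$false
}

# Then re-point VM network adapters at the equivalent port group on VDS2.
# Filtering or ordering this list is where you would prioritize critical VMs.
$pg2 = Get-VDPortgroup -VDSwitch $vds2 -Name 'Guest-PG'
Get-Cluster 'Prod' | Get-VM |
    Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq 'Guest-PG-VDS1' } |
    Set-NetworkAdapter -Portgroup $pg2 -Confirm:$false
```

In a real outage you would probably run the VM migration in batches rather than one big pipeline, so the critical VMs land first.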
In testing this in the lab, when I move one uplink on the guest side to VDS2, the other uplink goes 'down' and shows as disconnected on VDS1. As long as I move all of the VMs I care about to an EPG on VDS2, they continue to work (good). Is this the expected behavior? I suppose moving both uplinks would be a cleaner move, but moving only one initially would save time in the event of a major outage.
Another thing I noticed when moving uplinks between VDSes: if I move all uplinks from VDS1 to VDS2, VDS1 still shows the host as connected. To cleanly remove it, I have to edit VDS1 and remove the host.
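That cleanup step can also be scripted. A minimal sketch of the explicit host removal, again assuming placeholder names (`VDS1`, `Prod`) and a connected PowerCLI session:

```powershell
# Sketch -- run only AFTER all uplinks and VM networking have left VDS1.
# Hosts keep showing as connected to VDS1 until membership is removed.
$vds1 = Get-VDSwitch -Name 'VDS1'
foreach ($esx in Get-Cluster 'Prod' | Get-VMHost) {
    Remove-VDSwitchVMHost -VDSwitch $vds1 -VMHost $esx -Confirm:$false
}
```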
In summary, the method I am pursuing in the event of a hosed VDS1 is: move all uplinks to the replica VDS2, migrate all VMs' networking to the matching EPGs on VDS2, then remove VDS1 from each host. This seems like quite a painful exercise (which I could alleviate with some scripting), but it would be SOME sort of fallback. Is there a better method? Am I barking up the right tree?