Upgrading network switches – VNX/VMware iSCSI


Hi all,

We are replacing our data centre network switches (Juniper to Cisco). As the first step of the migration I want to move the iSCSI connections – the iSCSI vmkernel ports on our ESXi hosts that connect to the target ports on our VNX5200. Can anyone advise on the best practice for doing this?

This is our setup:

3 ESXi hosts

Each host has 2 vSwitches dedicated to iSCSI, as follows:

vSwitch1 – subnet to storage processor ports A0 and B0

vSwitch2 – subnet to storage processor ports A1 and B1

The physical vmnics connecting the hosts to the switch are aggregated links, which will be reconnected to equivalent port-channel groups on the new Cisco switch.

What is the best way to carry out the migration without causing disruption, path failures, data loss, etc.?

Do I move one subnet at a time, e.g. A0 and B0 together with the associated vmnics on the host end? Presumably, if I disconnect them, all iSCSI traffic will fail over to A1 and B1?
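Before pulling either subnet, it's probably worth confirming from each host that every LUN already has active paths on the subnet that will remain, so the failover actually has somewhere to go. A minimal sketch of the checks from the ESXi shell (run on each host; output fields are from `esxcli`, nothing here is specific to my environment):

```shell
# List every storage path with its state; before disconnecting the
# vSwitch1 subnet, each VNX LUN should show active paths on both
# subnets (i.e. via A0/B0 targets and via A1/B1 targets).
esxcli storage core path list | grep -E "Runtime Name|Target Identifier|State:"

# Per-device view: path selection policy and working paths for each
# LUN, so you can see how many paths each device currently has.
esxcli storage nmp device list
```

If any LUN shows dead or missing paths on the surviving subnet, fix that first, otherwise disconnecting the other subnet takes the device offline instead of failing over.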

Or, alternatively, is it better to move one storage processor at a time, e.g. B0 and B1 to the Cisco switch? Presumably the LUNs would still be accessible via storage processor A, to which they will have failed over. The vmnics on the hosts would then be split between the Juniper and the Cisco.
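If you go per-SP, you can confirm from the host side which SP the active paths are landing on before and after the move. A rough sketch (the VNX presents each SP port as a separate iSCSI target, so the target identifier tells you which SP a path uses):

```shell
# Show, per path, which target (SP port) it uses and its state.
# After moving SP B's ports, the active/working paths should all
# be on SP A targets until the move is verified.
esxcli storage core path list | grep -E "Runtime Name|Target Identifier|State:"
```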

Whichever option I choose, once half the iSCSI connections are on the Cisco and the other half on the Juniper, how can I check that iSCSI is actually working over the new Cisco connections? Is a simple ping from the host to the VNX target IP enough, or do I need to do something else?
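A plain ping from the management network isn't conclusive, because iSCSI traffic goes out the dedicated vmkernel ports. A sketch of host-side checks, assuming vmk1 is an iSCSI vmkernel port and 10.10.1.50 is a VNX target IP (both are placeholders for your addressing):

```shell
# Ping the VNX target from the specific iSCSI vmkernel interface.
# If you use jumbo frames, send a full-size non-fragmenting packet
# to catch MTU mismatches on the new switch (drop -d -s 8972 for
# a standard 1500 MTU path).
vmkping -I vmk1 -d -s 8972 10.10.1.50

# Confirm iSCSI sessions are established to the targets that were
# moved to the Cisco switch.
esxcli iscsi session list

# Confirm the paths over the moved links have come back as active
# rather than dead.
esxcli storage core path list | grep "State:"
```

Only once the session and path counts are back to their pre-move values on every host would I consider the Cisco side verified.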

Once I know it's working on the Cisco side, presumably it should be a simple case of migrating the other subnet or storage processor.

Would appreciate any advice.

Thanks in advance.
