Wondering if someone could bless this design or suggest an alternative.
I have three servers connected via a pair of stacked switches (dedicated to iSCSI and vMotion) to an iSCSI SAN. Currently I am using the built-in 1Gbps ports as follows:
vSwitch0
nic0 – LAN
vSwitch1
nic1 – iSCSI
nic2 – iSCSI
nic3 – iSCSI (vMotion enabled)
The iSCSI links are spread across the two switches so that if one goes down, connectivity to the SAN is not lost. Each host has only about five VMs on it, so to gain some more density without saturating the single LAN port, I have obtained new Intel I350-T4 NICs that I will be installing this weekend.
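In case it helps to make the current setup concrete, here is a minimal pyVmomi (Python) sketch of how I can check which VMkernel ports are bound to the software iSCSI adapter on each host. The vCenter address, credentials, and the vmhba33 adapter name are just placeholders for my environment:

```python
# Minimal sketch: list which VMkernel ports are bound to the software iSCSI
# adapter on each host. vCenter address, credentials, and the vmhba33 adapter
# name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: self-signed cert
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    bound = host.configManager.iscsiManager.QueryBoundVnics(
        iScsiHbaName='vmhba33')                 # software iSCSI HBA (placeholder)
    print(host.name, [b.vnicDevice for b in bound])

view.DestroyView()
Disconnect(si)
```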
The new design will be as follows (vCenter is on the LAN):
vSwitch0
Broadcom nic0 – LAN LAG1
Broadcom nic1 – LAN LAG2
vSwitch1
Broadcom nic2 – iSCSI vMotion LAG1
Broadcom nic3 – iSCSI vMotion LAG2
vSwitch2
Intel nic0 – iSCSI
Intel nic1 – iSCSI
Intel nic2 – iSCSI
Intel nic3 – iSCSI
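Here is a rough pyVmomi sketch of how I picture scripting the vSwitch2 part on one host: one iSCSI port group per Intel uplink, each pinned to a single active NIC (the rest unused), with a VMkernel port added and bound to the software iSCSI adapter. The vmnic names, IP range, and vmhba33 are assumptions for my environment, so correct me if the layout itself is wrong:

```python
# Rough sketch for one host: build vSwitch2 on the four Intel ports, one iSCSI
# port group per uplink with a single active NIC (others unused), add a
# VMkernel port to each, and bind it to the software iSCSI adapter.
# vmnic names, the IP range, and vmhba33 are assumptions for illustration.
from pyVmomi import vim

def build_iscsi_vswitch(host, uplinks, ip_base='10.10.10.'):
    netsys = host.configManager.networkSystem

    # vSwitch2 backed by the four Intel ports
    vss_spec = vim.host.VirtualSwitch.Specification()
    vss_spec.numPorts = 128
    vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)
    netsys.AddVirtualSwitch(vswitchName='vSwitch2', spec=vss_spec)

    for i, vmnic in enumerate(uplinks, start=1):
        pg_name = f'iSCSI-{i}'

        # One port group per uplink; the teaming override pins it to a single
        # active NIC and leaves the rest unused (needed for iSCSI port binding)
        pg_spec = vim.host.PortGroup.Specification()
        pg_spec.name = pg_name
        pg_spec.vlanId = 0                       # or the iSCSI VLAN, if tagged
        pg_spec.vswitchName = 'vSwitch2'
        pg_spec.policy = vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=[vmnic], standbyNic=[])))
        netsys.AddPortGroup(portgrp=pg_spec)

        # VMkernel port with a static IP on the iSCSI subnet
        vnic_spec = vim.host.VirtualNic.Specification()
        vnic_spec.ip = vim.host.IpConfig(
            dhcp=False,
            ipAddress=ip_base + str(10 + i),
            subnetMask='255.255.255.0')
        vmk = netsys.AddVirtualNic(portgroup=pg_name, nic=vnic_spec)

        # Bind the new vmk to the software iSCSI adapter (vmhba33 = placeholder)
        host.configManager.iscsiManager.BindVnic(iScsiHbaName='vmhba33',
                                                 vnicDevice=vmk)

# e.g. build_iscsi_vswitch(host, ['vmnic4', 'vmnic5', 'vmnic6', 'vmnic7'])
```

My understanding is that the one-active-uplink-per-port-group layout is what lets the software iSCSI initiator multipath (e.g. round robin) across all four Intel ports instead of teaming them, but I'd appreciate a sanity check on that too.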
I'm not sure if I should configure VLANs on the switches, or maybe run Fault Tolerance and vMotion as two different VLANs on vSwitch1 over the 2Gbps LAG?
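If the VLAN route makes sense, I think the vSwitch side is just two port groups with different VLAN tags on vSwitch1, assuming the upstream switch ports are trunked. A small sketch, reusing netsys from the snippet above and with made-up VLAN IDs and names:

```python
# Split vMotion and FT onto separate VLANs over the same 2Gbps LAG on vSwitch1.
# VLAN IDs 20/21 and the port group names are made up; netsys is the host's
# networkSystem object, as in the previous sketch.
from pyVmomi import vim

for name, vlan in (('vMotion', 20), ('FT-Logging', 21)):
    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = name
    pg_spec.vlanId = vlan
    pg_spec.vswitchName = 'vSwitch1'
    pg_spec.policy = vim.host.NetworkPolicy()    # inherit the vSwitch teaming (the LAG)
    netsys.AddPortGroup(portgrp=pg_spec)
```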
I can grasp the concepts, but flipping all the right switches is the tough part.
Appreciate anyone’s insight.