I am working on an implementation plan that uses Dell R640 servers with dual-port 100Gb Mellanox adapters installed. These adapters were selected specifically because they support NVMe-oF. The storage is a Pure Storage X70 array that also has 100Gb NVMe-oF-capable interfaces, and everything is cabled through two Cisco Nexus 3k 100Gb switches. What I was hoping to do was create a single vSwitch on each host that uses both Mellanox 100Gb interfaces as uplinks: standard active/standby teaming for Management, active/active for the VM Network, and active/unused for A-side and B-side vMotion. I would also create two vmkernel ports on separate VLANs for RDMA traffic, each using a single NIC as its active uplink with the other uplink set to "unused".
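For context, the teaming layout I'm describing looks roughly like this in esxcli. The vSwitch, port-group, and vmnic names below are placeholders for my environment, not the actual ones in use:

```shell
# Single vSwitch with both 100Gb Mellanox ports as uplinks
# (vmnic4/vmnic5 are placeholder names)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch1

# Management port group: active/standby teaming
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=Mgmt --active-uplinks=vmnic4 --standby-uplinks=vmnic5

# RDMA A-side port group: single active uplink, the other left unused
# (listing only one vmnic as active and none as standby marks the other unused)
esxcli network vswitch standard portgroup add --portgroup-name=NVMe-A --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=NVMe-A --active-uplinks=vmnic4
```

The B-side RDMA port group would mirror NVMe-A with vmnic5 as its only active uplink.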
I have this all set up and have added my two software NVMe over RDMA adapters, each tied to the appropriate physical uplink adapter. When I go to mount my storage, I can map the B side of the Pure array on its RDMA adapter, but I can never get the A side to connect on the other RDMA adapter. I opened a case with VMware support two weeks ago, but so far they haven't been able to tell me why this doesn't work, or whether sharing the uplinks between RDMA traffic and other services (the way we do with iSCSI traffic) is simply not supported.
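In case it helps with troubleshooting, these are the kinds of checks and connect attempts I mean. The adapter names, IP addresses, and subsystem NQN below are placeholders, not the real values from my hosts or array:

```shell
# Verify the RDMA devices map to the intended physical NICs
esxcli rdma device list

# Verify the software NVMe-oF adapters (e.g. vmhba64/vmhba65) are present
esxcli nvme adapter list

# Discover and connect to one side of the array over its RDMA adapter
# (IP and NQN are placeholder values)
esxcli nvme fabrics discover -a vmhba64 -i 10.0.10.10 -p 4420
esxcli nvme fabrics connect  -a vmhba64 -i 10.0.10.10 -p 4420 \
    -s nqn.2010-06.com.purestorage:flasharray.example
```

The B-side connection (the one that works for me) is the same sequence against the other adapter and the B-side controller IP; it's the A-side equivalent that never completes.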
I will say that if I separate the NVMe vmkernel ports onto separate vSwitches, each with only one physical uplink, I can connect to both sides of the Pure Storage array just fine. That would be acceptable, except it would require cabling separate 10Gb Ethernet to the Dell servers to carry all traffic other than NVMe-oF.
Does anyone have experience with NVMe-oF yet who can confirm whether sharing the uplinks with other types of traffic is supported? Thanks!