We are in the middle of an upgrade. We have installed SCVMM 2016, reinstalled one Hyper-V host with Windows Server 2016 Datacenter, and added it to VMM. We configured the host's networking in SCVMM 2016 as follows (the same as the previous config, which worked), then added the host to the existing cluster (running at the 2012 R2 functional level):
1 Logical switch for Management, Live Migration and CSV, with one vNIC for each of the three roles (3 vNICs in total)
1 Logical switch for VM Networks
2 Logical switches for iSCSI, with a vNIC under each. The iSCSI storage arrays are StorSimple 5520 and 8100 (see the verification sketch below the list).
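In case it helps, this is roughly how we verify that layout on the host after deployment. A minimal sketch using the built-in Hyper-V cmdlets; it only reads state, and any names in the output are of course specific to our environment:

```powershell
# List the logical switches VMM deployed and the physical NICs behind them
Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription

# List the host (management OS) vNICs and the switch each one sits under;
# Status should report Ok for a healthy vNIC
Get-VMNetworkAdapter -ManagementOS |
    Select-Object Name, SwitchName, Status, MacAddress
```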
This works just fine. However, if we put the host in maintenance mode and reboot it, both iSCSI vNICs come up in an unplugged state. We have disabled VMQ (the server only has 1 Gb NICs, so it is not needed anyway), but that does not solve it.
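For reference, this is how we disabled VMQ and how the failure looks after a reboot. A sketch only; the physical NIC names are placeholders for our Intel ports:

```powershell
# Check VMQ state, then disable it on the physical iSCSI NICs
Get-NetAdapterVmq | Select-Object Name, Enabled
Disable-NetAdapterVmq -Name "iSCSI-pNIC-1", "iSCSI-pNIC-2"

# After the reboot, the two iSCSI vNICs no longer report Ok:
Get-VMNetworkAdapter -ManagementOS |
    Select-Object Name, SwitchName, Status
```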
By then the vNICs have also disappeared from VMM. The only way to get the iSCSI vNICs up and running again is to recreate them in VMM; then everything works until the next reboot. The physical NICs are Intel Gigabit ET quad-port adapters.
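This is the workaround we run from the VMM server after each reboot, as a rough sketch. It assumes the virtualmachinemanager PowerShell module; the host, switch, and network names ("host01", "iSCSI1") are placeholders, and the exact parameters (e.g. a port classification or static IP settings) may differ per environment:

```powershell
Import-Module virtualmachinemanager

# Placeholders: "host01" is the Hyper-V host, "iSCSI1" is the logical
# switch / VM network the vNIC should be reattached to
$vmHost    = Get-SCVMHost -ComputerName "host01"
$lSwitch   = Get-SCLogicalSwitch -Name "iSCSI1"
$vmNetwork = Get-SCVMNetwork -Name "iSCSI1"

# Recreate the host vNIC on the logical switch that is already deployed
New-SCVirtualNetworkAdapter -VMHost $vmHost -Name "iSCSI1" `
    -LogicalSwitch $lSwitch -VMNetwork $vmNetwork
```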
Does anybody know what could cause this?