I always advise my customers to build virtual switches using NICs from both the motherboard and add-on cards, mixing them. The reason is simple: redundancy.
Using, for example, two NICs, one from the motherboard and the other from an add-on card, the virtual switch will keep working even if one of those components fails.
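As a minimal sketch, this is how such a mixed-uplink vSwitch could be built from the ESX service console (the vmnic numbers are examples; check your own hardware mapping first):

```shell
# List physical NICs to see which are onboard and which sit on add-on cards
esxcfg-nics -l

# Create the virtual switch (skip this step if vSwitch0 already exists)
esxcfg-vswitch -a vSwitch0

# Link one onboard NIC and one add-on NIC as uplinks,
# so the vSwitch survives the failure of either component
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0

# Verify that both uplinks are attached
esxcfg-vswitch -l
```

The same pairing can be done from the vSphere Client, of course; the point is simply that the two uplinks must not share the same physical chipset.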
However, many techies do not follow this scheme, and sooner or later they run into trouble.
My latest case came from a customer with a vSphere cluster built by former consultants. All nodes had four NICs, two from the motherboard and two from an add-on card. Suddenly, after a reboot, one of the hosts was no longer able to connect to vCenter and rejoin the HA cluster.
After some tests over the phone, I decided to go to the customer's site. At the console (ESX 4.0) the network connection was up, but the host could only ping itself.
I had a suspicion, confirmed after a few commands: vSwitch0 had vmnic0 and vmnic1 assigned, and both came from the motherboard chipset; they were no longer visible inside ESX, probably because of a hardware failure or a corrupted driver. Since the HA cluster was suffering from this node being offline, we went for a fast solution. I checked that the physical switch had the correct settings for vmnic2, then I ran:
esxcfg-vswitch -L vmnic2 vSwitch0
and the management network came online again.
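For readers hitting the same symptom, the diagnosis we did by hand maps roughly onto these service-console commands (the vmnic names are those of this specific host):

```shell
# Check which physical NICs ESX can still see;
# vmnic0 and vmnic1 missing from this list points at the dead onboard chipset
esxcfg-nics -l

# Show vSwitch0 and its currently assigned (now dead) uplinks
esxcfg-vswitch -l

# Attach the surviving add-on NIC as a new uplink to restore management traffic
esxcfg-vswitch -L vmnic2 vSwitch0
```

Once the management network is back, the permanent fix is to leave vmnic2 (or another add-on NIC) attached as a standby uplink instead of removing it again.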
If vmnic2 had been added to vSwitch0 from the beginning, this outage would never have happened.