One way to divide bandwidth depends on CPU scheduling: every VM gets a chance to use the NIC whenever it gets CPU cycles. But nowadays there are many other ways to share the same NIC among multiple VMs where the CPU is not involved at all and packets are copied directly into the VM's memory space (e.g. using the DPDK library, SR-IOV, etc.).
Could you please explain how the total available network bandwidth is divided among the VMs, and how we can make sure that one VM does not disturb another VM (in terms of network bandwidth)?
Edit 1: I am more interested in the concept than in any particular hypervisor. In fact, any approach that solves the problem is good enough for me.
I think you may be approaching this incorrectly.
When you have multiple VMs, the host's networking layer acts like a bridge (or a router): it simply handles data as it is put onto the wire, the same way a switch does – it does not do QoS or otherwise control packets.
The efficiency with which data is put onto the wire depends somewhat on the virtual NIC driver used – in short, this all takes place beneath the TCP/IP level.
You can impose packet shaping, QoS, different routing policies, etc. on the host box (if using Linux you can also use things like ebtables, VLANs, etc.) or on an upstream router, and presumably you could force the NICs into 10-megabit or 100-megabit and/or half-duplex mode to crudely limit performance, but for the most part it's a first-come, first-served system – same as an unmanaged switch.
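To make the packet-shaping option concrete, here is a minimal sketch of rate-limiting one VM from a Linux host using `tc` with an HTB qdisc on the VM's tap device. The interface name `tap0` is a hypothetical placeholder – substitute the tap/vnet device your hypervisor actually created for that VM, and adjust the rate to taste.

```shell
# Attach an HTB qdisc to the VM's tap device (tap0 is assumed; check
# with "ip link" to find the real name) and make class 1:10 the default.
tc qdisc add dev tap0 root handle 1: htb default 10

# Cap the VM at 100 Mbit/s, allowing short bursts.
tc class add dev tap0 parent 1: classid 1:10 htb rate 100mbit burst 15k

# Inspect the configuration and per-class counters.
tc -s qdisc show dev tap0
tc -s class show dev tap0
```

Because the cap sits on the host side of the virtual NIC, the guest cannot simply reconfigure its way around it – which is exactly the property you want when one VM must not starve the others.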
In short, you normally wouldn’t impose limits on the individual VMs unless it was a business requirement, in which case you would either put them behind a router that handles it or turn the host into a router. Remember that each virtual NIC on each VM has its own MAC address, so you can do some control based on that – although I’m sure this can be subverted. If you need to prevent it being subverted, you would probably need to create separate bridges – possibly on separate VLANs – to isolate the VMs from each other, and then manage those.
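The separate-bridges idea above can be sketched with the Linux `ip` and `bridge` tools. All names here (`br-vm1`, `br-vm2`, `tap1`, `tap2`) are hypothetical placeholders for whatever your hypervisor creates; the point is simply that VMs on different bridges can only reach each other through an upstream router, where you can police the traffic.

```shell
# Create one bridge per tenant/VM group (names are assumed examples).
ip link add name br-vm1 type bridge
ip link add name br-vm2 type bridge

# Attach each VM's tap device to its own bridge.
ip link set tap1 master br-vm1
ip link set tap2 master br-vm2
ip link set br-vm1 up
ip link set br-vm2 up

# Alternatively, on a single VLAN-aware bridge, give each VM's port its
# own VLAN ID so frames are tagged and kept apart in hardware/software:
#   bridge vlan add dev tap1 vid 10 pvid untagged
#   bridge vlan add dev tap2 vid 20 pvid untagged
```

Since a guest can spoof its own MAC but cannot move its tap device to a different host-side bridge or change its port's VLAN membership, this isolation holds even against an uncooperative VM.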