
I'm working on a project that monitors virtual machines' vGPU usage. The hypervisor is vCenter, we have NVIDIA A16 cards installed on the vCenter hosts, and we have assigned A16 vGPUs to a couple of Windows VMs on one host. These vGPUs are allocated on the same GPU chip.

I tried to use the nvidia-smi command to retrieve vGPU usage both on the host and in the VMs. On the host I used nvidia-smi vgpu, and in the VMs I used nvidia-smi. But it turned out that the metrics reported by nvidia-smi were always different from what the Windows OS reported inside the VM.
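For reference, the commands were roughly along these lines (the query fields and the 1-second interval below are illustrative, not necessarily the exact invocations I used):

```
# On the host: per-vGPU engine utilization
nvidia-smi vgpu -u

# Inside the Windows guest: utilization as reported by the guest driver,
# refreshed every second (interval chosen only as an example)
nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1
```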

For example, the usage from nvidia-smi could be as low as 6%, while the usage shown in Windows Task Manager was consistently around 15%.


We would prefer to trust the metrics provided by the guest OS, as they reflect the real demand of the use case.

My questions are: what is the meaning and source of the nvidia-smi metrics? Why are the results so different? Can I somehow adjust the measurement so that it reflects the real guest demand?

Thanks for any pointers!


1 Answer


The sample periods, and the points in time at which Task Manager and nvidia-smi take their measurements, may differ, which leads to different usage percentages.

As per the documentation

utilization.gpu

Percent of time over the past sample period during which one or more kernels was executing on the GPU. The sample period may be between 1 second and 1/6 second depending on the product.

utilization.memory

Percent of time over the past sample period during which global (device) memory was being read or written. The sample period may be between 1 second and 1/6 second depending on the product.

Try applying a constant, unchanging load and then check whether the two readings match.
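To compare like with like, you can also log the nvidia-smi readings at a fixed interval and average them over the same window you are watching in Task Manager, rather than comparing single instantaneous samples. A minimal sketch (the 1-second interval and the output file name are just examples):

```
# Inside the guest: log timestamped utilization once per second for later averaging
nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory --format=csv -l 1 > gpu_util.csv
```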
