Nvidia Tesla P4 doesn't decode streams

Moderators: HeneryH, TimG

kolt
Posts: 2
Joined: Tue Jan 31, 2023 10:26 pm

Nvidia Tesla P4 doesn't decode streams

Post by kolt » Tue Jan 31, 2023 10:35 pm

I want to move Blue Iris onto a Windows 10 VM using Proxmox. After importing a backup and viewing the cameras from a web browser, the cameras are understandably laggy with a lot of frame loss. I've read that using a GPU for decoding can help, so I installed and passed through an Nvidia Tesla P4 and set all the cameras to use Nvidia NVDEC for decode, but it doesn't appear to work. The CPU still seems to be doing the decoding: the hardware monitor in MSI Afterburner shows no load on the GPU, and its usage doesn't appear in Task Manager either. I've installed the latest drivers and even ran BTC mining to confirm the card takes load, which it did. Is there anything I need to configure for this to work?
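One possible explanation (an assumption, not something confirmed in the post): Tesla cards typically run in TCC rather than WDDM mode on Windows, and Task Manager does not show engine usage for TCC-mode GPUs, so the decoder could be working even though nothing shows up there. nvidia-smi, which ships with the driver, reports decoder load directly; a quick check might look like:

```shell
# Show per-engine utilization once per second; the "dec" column is NVDEC load
nvidia-smi dmon -s u

# Or dump the utilization section, which includes a Decoder figure
nvidia-smi -q -d UTILIZATION
```

If the Decoder figure stays at 0% while the cameras are live, BI really isn't using NVDEC; if it climbs, the card is decoding and only Task Manager's reporting was misleading.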

Thank you
PaulDaisy
Posts: 85
Joined: Mon Jan 16, 2023 5:06 pm

Re: Nvidia Tesla P4 doesn't decode streams

Post by PaulDaisy » Wed Feb 01, 2023 12:01 am

I would be curious to see any feedback on this too. From what I've read, the Nvidia decode implementation in BI is not very beneficial; the Intel decode is implemented better. However, if you use CPAI, that does use CUDA for sure. I'd be very interested to know whether your P4 works with CPAI and what the processing times are with the YOLO models. Do you have an Nvidia video card as well?
kolt
Posts: 2
Joined: Tue Jan 31, 2023 10:26 pm

Re: Nvidia Tesla P4 doesn't decode streams

Post by kolt » Wed Feb 01, 2023 5:10 am

PaulDaisy wrote: Wed Feb 01, 2023 12:01 am I would be curious to see any feedback on this too. From what I read, the Nvidia decode implementation in BI is not very beneficial, the Intel decode is implemented better. However, if you use CPAI, that does use CUDA for sure. I'd be very interested to know if your P4 works with CPAI and what the processing times are with YOLO models. Do you have a Nvidia video card as well?
The thing is, the processors are E5-2697 v2, and with 8 vCPUs assigned the VM shows about 60-70% CPU usage, which is OK, but the video is very choppy. The FPS keeps jumping up and down and is super unstable. I just thought this Tesla P4 might show some improvement over using the processors alone.
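For reference, a minimal Proxmox passthrough sketch, assuming the card was passed through from the CLI (the VMID 100 and PCI address are placeholders; `pcie=1` also requires the q35 machine type):

```shell
# Find the Tesla P4's PCI address on the Proxmox host (varies per system)
lspci -nn | grep -i nvidia

# Pass the card through to VM 100 as a PCIe device (placeholder VMID/address)
qm set 100 -hostpci0 01:00.0,pcie=1

# Use the host CPU type so the guest gets the full E5-2697 v2 feature set,
# which can also help software decoding if the GPU path still fails
qm set 100 -cpu host
```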
louyo
Posts: 94
Joined: Sat Apr 18, 2020 1:16 am
Location: South Florida, US

Re: Nvidia Tesla P4 doesn't decode streams

Post by louyo » Wed Feb 01, 2023 1:53 pm

For reference (maybe oranges to apples).

I am running one test system with BI in a W10 VM on an old workstation (E5-1650 v3) running ESXi 7.0.3. Only 3 cameras, 2 of them wireless. I am running the latest BI release and the latest CPAI in a Docker container inside a Debian 11.5 VM.
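For anyone wanting to replicate the Docker side, a minimal sketch, assuming the official codeproject/ai-server image and its default port 32168 (and the NVIDIA Container Toolkit installed on the Docker host so `--gpus` works):

```shell
# Run CodeProject.AI Server with GPU access; image tag and port are assumptions
docker run -d --name codeproject-ai \
  --gpus all \
  -p 32168:32168 \
  codeproject/ai-server:gpu
```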

The GPU is an Nvidia P620 with PCIe passthrough. It only has 2 GB of VRAM.

I access the system 3 ways: the ESXi console, TightVNC, and the web interface. The VNC and web connections are via VPN (BI is on a separate LAN from the ESXi web server).

CPU wanders from about 9-25%, depending on how many interfaces I open and housekeeping and such.

GPU shows 1-2%.

I run the cameras at 1080, at 15fps, with substreams, direct to disk, etc, etc. Group settings, webcasting: Default max FPS is set to 15.

My alert times run from 60-80 ms; some of that delay is created by the use of Docker.

Lag: when I have the web interface running and connect via TightVNC, I see some lag, typically a delay of 1 second or less and sometimes a 1-second dropout. With the web interface only, I don't see any lag or delay. With the ESXi console, I see a very small delay (lag?) but never a skip. Although I have had some database problems lately, recording playback is smooth.

We have a similar test setup at a client site, with 5 cloned cameras (out of 11) and ESXi on an older Dell server, separate from the main BI system. Performance is similar to the above, with almost no lag using the BI console via TightVNC over VPN. CPU runs in the 30-40% range, with the same Nvidia P620. Considering that 3 of the cameras reach BI through an NVR, performance is very good. AI eliminates all the false alerts created by headlights.

When the dust settles, we will purchase a better GPU and put more cameras on AI.

Lou

--
An intellectual is someone who can listen to the William Tell Overture and not think of the Lone Ranger.

¯\_(ツ)_/¯