DeepStack Analysis - Fine Tuning Settings

We have created a lot of DeepStack self-help content.
The DeepStack + Blue Iris article covers the integration and proper setup.
The DeepStack Gotchas article collects learnings from past tickets.
Understanding the DeepStack analysis (.dat) functionality is very useful when trying to work out what DeepStack is doing.
See the Fine Tuning Settings article.

Introduction

BI combines AI with your motion settings to provide very intelligent alerts. This is why a camera facing a parking lot will alert on a moving car but not on a parked car. This article is a deep dive into the DeepStack Analysis functionality.

The webinar associated with this article is The DeepStack Analysis Feature. The webinar provides a demo of the BI software while going through the article content.

Playback window article/webinar
The Playback window article is a deep dive into the playback window, which has powerful functionality for troubleshooting motion and DeepStack settings. Setting the motion sensor correctly is just as big a part of getting smart alerts from your cameras, if not bigger.

Best Practice

Clone the camera

Leverage BI functionality as much as possible to make your life easier. Fine tuning your Motion and DeepStack settings to deliver smarter alerts is as much an art as a science. Therefore, use camera clones to compare before and after results so you know for yourself whether you made alerts better or introduced more issues.

Cloning cameras is easy in BI. When adding a camera, select Copy from another camera.

deepstack fine tuning - clone.png

Unselect Clone master when creating the duplicate. Camera settings -> General tab.

deepstack fine tuning - clone master.png


DeepStack Settings

playback deepstack camera settings.png

Instead of walking through all the settings (see the DeepStack article), this article highlights the key settings that may need to be changed for fine tuning. The other settings are either on/off by default or set based on user preference; they should have no bearing on the fine tuning process.
  • Save DeepStack analysis details: This is THE KEY SETTING. With it, BI makes understanding what is happening in the software easy.
    It is easy to check whether DeepStack analysis is active: simply go to your Alerts folder and see if *.dat files start to populate after camera alerts.
    Definitely unselect this after fine tuning is completed; the feature consumes both CPU and storage resources.
  • Hide cancelled alerts on timeline and 'all alerts': I prefer this unselected while fine tuning because I like to leverage the Alerts list to see all alerts that are processed in BI.
    If this feature were active, the "nothing found" alerts would be listed in the "Cancelled alerts" folder.
    Many users select this feature once fine tuning is completed.

    deepstack debug clipslist.PNG
  • Use main stream if available: Unselected during fine tuning.
    This setting is important because the motion sensor is applied to the sub stream if you have connected two streams to your camera.
    Motion and object overlap analysis is a huge value add that makes BI AI alerts accurate, so I want DeepStack to analyze the exact same frame that was used to detect motion, keeping the overlap analysis as accurate as possible.
    It is user preference whether to leave this unselected after fine tuning; I leave it unselected. However, if you have really good synchronization between your main and sub stream and you feel you are getting more accurate AI object detection from the high-res main frame, then selecting the main stream may make sense. Keep in mind that analyzing high resolution images also adds CPU/GPU load.
  • Other optional settings I find useful at times
    • Recording tab: Set recording to When triggered. Uncheck Combine or cut video each. This way a new BVR is created for each trigger. Because you have a BVR for each trigger event, it's easy to replay a missed alert/false alert, tweak your motion settings, and observe whether the tweaks improve your alerts. If you cannot figure out an issue, it is also easy to send the short BVR of the motion trigger to support for review!
    • Trigger tab: Leave Motion overlays off. Camera settings -> Trigger tab -> Motion Sensor. Highlight: Do not highlight motion.
      The overlays may interfere with DeepStack, resulting in more missed objects, so it is best to turn overlays off when using DeepStack.
      Engineering will be addressing this issue.
      FYI, with D2D recording, turning motion overlays on/off is less important since they are saved as metadata regardless. Therefore, even if you do not see highlights in Live view, you can still see them during playback.
      If you selected Re-encode when recording, then unfortunately no motion overlays are available if they are turned off for Live view.
    • Trigger tab: Set Add to alerts list = Hi-res JPEG files.
      With the *.dat files now created by the Save DeepStack analysis details option, having the JPEG images makes finding the corresponding *.dat file easy, especially when you are stuck and need to consult support.
      With the JPEG images, when you right click on an alert -> Open containing folder, BI will find the JPEG for you.

      playback alerts open containing folder.png

      After doing so, the preceding *.dat file in Windows Explorer is the one associated with the selected Alert, which makes identifying *.dat files for alerts of interest much easier (a small script sketching this pairing follows this list).

      deepstack alerts folder.PNG
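
Side note: if you want to script this pairing instead of eyeballing Windows Explorer, here is a minimal Python sketch of the same idea. The folder path is a placeholder for your own Alerts folder, and the script simply assumes (per the tip above) that the *.dat file sorting immediately before an alert JPEG belongs to that alert.

# Minimal sketch (not part of Blue Iris): pair each alert JPEG with the
# *.dat file that precedes it when the Alerts folder is sorted by name.
from pathlib import Path

ALERTS = Path(r"C:\BlueIris\Alerts")   # placeholder; point at your own Alerts folder

last_dat = None
for entry in sorted(ALERTS.iterdir(), key=lambda p: p.name):
    if entry.suffix.lower() == ".dat":
        last_dat = entry
    elif entry.suffix.lower() in (".jpg", ".jpeg"):
        # Per the tip above, the preceding .dat belongs to this alert image.
        print(f"{entry.name} <- {last_dat.name if last_dat else 'no .dat found'}")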
Motion sensor
This section explains default settings that work well with DeepStack.

Motion sensor vs AI
You often face a choice between reducing motion sensitivity and having AI take more responsibility for filtering alerts. AI can consume a lot of CPU/GPU resources, so you will need to balance how much filtering is done by the motion sensor vs AI to work within your CPU/GPU budget.

:idea: Object detection: Turning object detection off is a major no-no. If BI does NOT identify objects, BI will NOT send frames to DeepStack for processing. You have basically turned DeepStack OFF.

Algorithm: Simple vs Edge vector? I lean towards Simple because it is best to increase the sensitivity of the motion sensor and trust the AI to cancel false triggers.

deepstack fine tuning_motion sensor.png


Object travels: Same as above; it is best to increase the sensitivity of the motion sensor and trust the AI to cancel false triggers.

Object crosses zones: I like having a Zone B that triggers the camera at an ideal location where the AI has a good look at the object. See Pro Tip 1 below for details.

deepstack fine tuning_object detection.png

DeepStack analysis

Understanding why DeepStack did not alert has just become a lot easier with the "Save DeepStack analysis" feature. Now BI can show you exactly which images (i.e. frames/samples) were processed by DeepStack when a motion trigger fired, so you can understand why an alert was or was not sent.

In order to get the feature to work:
  • First check "Save DeepStack analysis" in Camera settings -> Trigger tab -> Artificial Intelligence. This setting will start creating *.dat files in the Alerts folder containing the metadata for each motion trigger.
  • Open the Status -> DeepStack window.
  • Double click any motion trigger in the Alerts List and the DeepStack Status window will populate with the DeepStack metadata, making it much easier to understand what is going on.
save deepstack analysis.png

This example highlights an alert that was cancelled by DeepStack.

Now you see exactly what BI does when making alert decisions based on your settings. Tweaking settings based on missed alerts has become much easier.

The logs also provide data consistent with the DeepStack analysis data.

deepstack log verification.PNG

From the logs, motion was detected at 11:00:51.907. DeepStack cancelled the alert at 11:01:01.504, roughly 10s later.
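
As a quick sanity check on those two timestamps (plain Python, nothing BI-specific):

from datetime import datetime

fmt = "%H:%M:%S.%f"
motion = datetime.strptime("11:00:51.907", fmt)   # motion detected (from log)
cancel = datetime.strptime("11:01:01.504", fmt)   # alert cancelled (from log)
print((cancel - motion).total_seconds())          # 9.597 -> roughly 10s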


DeepStack tab

Status button -> DeepStack tab
deepstack analysis.png

Frame Analysis Window

The below image connects the AI settings in BI to the output from the DeepStack tab.

Log DeepStack.png
  • BI will always analyze the trigger leading edge image (T=0). This is the frame that caused BI to trigger.
  • + real-time images = 2. Tells BI the number of frames to sample beyond the trigger leading edge image (T=0, T=1, T=2).
  • Begin analysis with motion-leading image. Tells BI to also sample the motion leading edge (T-1).
  • Make note of the No object found and Motion detected symbols when a frame is analyzed.
  • The asterisk marks the first frame where DeepStack identified any object in the list,
    or, if no object was found, the last frame sampled. In this case, the alert was cancelled, so the asterisk marks the last frame sampled.
    FYI, this frame is also the DeepStack Alert Image. It is saved to the database and is the image that appears in the Alerts list.

Confirmed vs Cancelled Alert

The * frame easily tells you whether an Alert was confirmed or cancelled.

deepstack fine tuning - confirmed vs cancelled.png


Note, DeepStack STOPS analyzing further frames as soon as a frame identifies any object of interest. Therefore, if you tell DeepStack to analyze 10 additional frames and an object of interest is found in the trigger frame (T=0), BI will not continue sampling the 10 additional frames; it fires the Alert immediately.

Thus, objects that appear later within the same motion trigger are NOT identified, nor does BI fire subsequent alerts. This is also why you may have set DeepStack to analyze 10 images, yet DeepStack stopped after analyzing two frames: an object on the list was found in the second frame, so BI stopped processing the motion trigger and sent the alert.


:?: Review Trigger Tab article / webinar if you want a refresher on the meaning of Motion leading edge and/or Trigger leading edge.

Other motion trigger settings that should be considered when determining AI settings.
Break time: Camera settings -> Trigger tab. Default = 10s
Many users select a + real-time images setting that spans the Break time, then set analyze one each to 750ms or 1s.
If your CPU/GPU can handle the load, this lets BI sample every second or less, which should result in few missed alerts. A minimal sketch of this sampling loop follows.
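
To make the sampling behavior concrete, here is a minimal sketch of the loop described above. It illustrates the logic only, not BI's implementation; detect() is a hypothetical stand-in for a DeepStack inference call.

# Illustration of the sampling loop (not BI internals). detect() is a
# hypothetical stand-in for a DeepStack call that returns found objects.
def run_trigger(detect, break_time_s=10.0, interval_s=1.0, motion_leading=True):
    n_samples = int(break_time_s / interval_s)    # "+ real-time images"
    frames = (["T-1"] if motion_leading else [])  # motion leading edge
    frames += ["T=0"]                             # trigger leading edge, always analyzed
    frames += [f"T+{i * interval_s:g}s" for i in range(1, n_samples + 1)]

    for frame in frames:
        objects = detect(frame)
        if objects:
            # First frame with an object of interest: BI stops sampling,
            # marks this frame with * and confirms the alert.
            return ("confirmed", frame, objects)
    return ("cancelled", frames[-1], [])          # * marks the last frame sampled

# Example: an object appears on the second real-time sample, so only a few
# of the configured frames are actually analyzed.
print(run_trigger(lambda f: ["car"] if f == "T+2s" else []))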

DeepStack Image

The image has a wealth of information.
  • The black areas show the areas ignored by Motion and DeepStack.
  • The annotation(s) shows what DeepStack identified.
    Blue indicates BI believes the person is static, i.e. not moving.
  • The yellow motion highlight shows where BI identified motion.
    This image shows why BI cancelled the alert: there was no overlap between the motion and the DeepStack object (the identified person). A rectangle-overlap sketch follows the image.
deepstack frame.png
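
BI's actual overlap test is internal to the software, but the idea can be sketched with simple rectangle intersection. Below is a hypothetical illustration, assuming boxes as (x, y, width, height) in pixels; the numbers are made up:

# Illustration only (not BI internals): do two boxes overlap at all?
def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Rectangles overlap if they intersect on both axes.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

motion = (600, 300, 120, 200)    # yellow motion highlight (made-up numbers)
person = (100, 320, 80, 180)     # DeepStack "person" box, far from the motion
print(overlaps(motion, person))  # False -> no overlap, so the alert is cancelled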

:idea: Most common gotcha!

The example above highlights the power of the DeepStack analysis feature. The issue here is that the main stream is used for DeepStack analysis and, as we know, the BI motion sensor always works from the sub stream. The motion is far ahead of the DeepStack object, an obvious clue that the main stream and the sub stream are not synchronized.
Fix:
  • Tell DeepStack to use the sub stream for analysis. Use main stream if available should be UNSELECTED.
  • OR uncheck "Use RTSP/stream timecode" if you want to continue processing the main stream.
    IP Config dialog for the Camera connector settings.
    deepstack fine tuning_rtsp setting.png

Final thoughts

DeepStack analysis is great because it will tell you why alerts were missed. At the end of the day, you basically have 3 levers to play with:
  • The sub stream is where motion detection occurs (if you set up dual streams). Running DeepStack analysis on the sub stream may provide better accuracy with regard to object/motion overlap.
    Running DeepStack on the sub stream will also provide faster inference (fewer CPU resources). If you have a CPU-based DeepStack deployment, this may be of value since BI and DeepStack are both competing for the CPU.
  • Alternatively, run more samples per trigger. My break time is 10s, so I sample every second to improve my chances of finding objects.
    deepstack_fine tuning_samples.png
  • A final trick is to play with motion zones so the object is in an ideal position for accurate DeepStack analysis. (Pro Tip 1 below)

    Motion fine tuning and leveraging the playback window are also good to know when tweaking alerts.

Pro Tips

Pro Tip 2: Nothing found alerts are the alerts that would have been false alerts from your motion settings if AI were not available.

See the Fine Tuning Motion settings article for details.

deepstack fine tuning nothing found.png



Pro Tip 1: Use Zones to capture better views of objects so AI is more accurate.

Issue
DeepStack said this is not a car (no annotation) because the headlights confused the AI.

deepstack miss.PNG

You can see for yourself in the image below what is going on. The BI motion sensor identified the moving object, no problem. The headlights confused DeepStack, so it cancelled the alert.

deepstack-headlights_optimized.png

If DeepStack sees a car (a different alert below), you see the annotation. The overlap of the DeepStack object and the motion detection object (turn on Motion Highlight: Rectangles or Highlight / Rectangle) further confirms the static object test: because the motion and the DeepStack object overlap, it's a moving car, not a parked car.
deepstack alert image confirmed.png

To resolve the issue, BI provides two very important levers. With the motion sensor, you can control where/when the camera triggers. You can thus give DeepStack the best shot at identifying objects.

Second, BI allows you to sample multiple frames per trigger. If you have the default 10s break time, then sample every 1s. This gives DeepStack 10 samples (one per sec) to identify objects. Redundancy leads to improved accuracy.
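
A rough back-of-the-envelope shows why the redundancy helps: if DeepStack identifies the object in any single frame with probability p, and we treat samples as independent (a simplifying assumption, since consecutive frames are correlated), the chance that at least one of n samples succeeds is 1 - (1 - p)^n:

# Simplified model: probability at least one of n samples finds the object.
p = 0.6                                   # hypothetical per-frame detection rate
for n in (1, 3, 10):
    print(n, round(1 - (1 - p) ** n, 4))  # 1: 0.6, 3: 0.936, 10: ~0.9999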

The power of Motion sensors with AI!

Goal: Very accurate alerts when cars enter and leave. The solution below is from a different camera.

deepstack-fine-tuning---gate_optimized.png

Solution:

Zone A: Set to entire scene.

deepstack fine tuning zone A.png

Zone B: Set to ideal location for AI object recognition.

deepstack fine tuning zone B.png

Zone crossing: Set to Zone B

deepstack fine tuning zone crossing.png

Set the motion sensor to force BI to alert when the car (object) is positioned perfectly in Zone B, giving the AI the best opportunity to identify the object (a sketch of the crossing idea follows).
Also note Object travels is unselected: it is not needed because I know the AI can accurately identify a car in Zone B, so Object travels does not improve accuracy but could waste CPU resources.
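
The zone-crossing idea, sketched in code purely as an illustration (BI evaluates this internally; the zone predicates and coordinates below are made up):

# Illustration only (not BI internals): trigger when a tracked object that
# was seen in Zone A is later seen in Zone B.
def crossed_a_to_b(positions, in_zone_a, in_zone_b):
    seen_in_a = False
    for point in positions:              # (x, y) object positions over time
        if in_zone_a(point):
            seen_in_a = True
        if seen_in_a and in_zone_b(point):
            return True                  # object entered Zone B after Zone A
    return False

in_a = lambda p: True                                        # Zone A: entire scene
in_b = lambda p: 400 <= p[0] <= 700 and 300 <= p[1] <= 500   # Zone B: near the gate
print(crossed_a_to_b([(100, 100), (450, 350)], in_a, in_b))  # True -> trigger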

Also note that I increased Min. object size.
By doing so, BI correctly cancels street noise, because cars at a distance in the street never meet the minimum object size threshold (a size-gate sketch follows the image below).

deepstack fine tuning min obj size.png
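
The Min. object size gate can be pictured as a simple area threshold; a hypothetical sketch (units and internals in BI may differ):

# Hypothetical illustration of a minimum-object-size gate (not BI internals).
MIN_OBJECT_PIXELS = 5000                 # made-up threshold for illustration

def passes_size_gate(width_px, height_px):
    return width_px * height_px >= MIN_OBJECT_PIXELS

print(passes_size_gate(40, 30))          # distant car on the street -> False
print(passes_size_gate(200, 120))        # car at the gate -> True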

Other advantages:
  • You can probably reduce the number of samples that need to go to DeepStack, saving CPU/GPU resources.
  • Using min. object size to cancel motion from the street had negligible impact on CPU utilization. Removing the street from Zone A is still an option if needed (see below).

Prior solution:

deepstack-fine-tuning-prior-solution_optimized.png

The prior solution just had a Zone A that blocked the street in the distance to reduce false alerts. This solution was functional; however, occasionally the motion sensor would identify the object (car) quite far from the camera (near the gate), the AI was not able to classify the object as a car, and a nothing found cancelled alert resulted.

Setting the motion sensor to trigger when the camera has a clear view of the car improved accuracy!




Next steps

If you cannot resolve the issue yourself, send the following information.
  • Describe the issue.
  • A short BVR capturing the issue.
    There are a couple of options to make smaller BVR files, in Camera settings -> Record tab.

    If you uncheck Combine or cut video, BI will cut the BVR file after each motion trigger.
    Or you can leave Combine or cut video checked and change the time length to something small, like 15 min.
  • Your current camera settings. Camera settings -> General tab -> Export
  • *.dat file associated with the Alert.