DeepStack Analysis - Fine Tuning Settings


Post by varghesesa » Sun Aug 15, 2021 10:40 pm

Introduction

BI leverages AI with your motion settings in order to provide very intelligent alerts. This is why a camera facing a parking lot will alert on a moving car but not alert on a parked car. This article is a deep dive into the DeepStack Analysis functionality.

The webinar associated with this article is The DeepStack Analysis Feature. The webinar provides a demo of the BI software while going through the article content.

See the Fine Tuning Motion settings article/webinar as well.
Setting your motion settings correctly is as big a part, if not bigger, of getting smart alerts from your cameras. You first need to understand the Trigger tab and how to fine tune your motion settings.



Best Practice

Clone the camera

Leverage BI functionality as much as possible to make your life easier. Fine tuning your Motion and DeepStack settings to deliver smarter alerts is as much an art as a science. Therefore, use camera clones to compare before and after results so you know for yourself whether you made alerts better or introduced more issues.

Cloning cameras is easy in BI. When adding a camera, select Copy from another camera.

deepstack fine tuning - clone.png

Unselect Clone master when creating the duplicate. Camera settings -> General tab.

deepstack fine tuning - clone master.png



DeepStack Settings


playback deepstack camera settings.png

Instead of walking through all the settings (see DeepStack article), this article highlights the key settings that may need to be changed for Fine tuning. The other settings are either on/off by default or changed based on user preference. They should have no bearing on the Fine Tuning process.
  • Save DeepStack analysis details: This is THE KEY SETTING. BI now makes understanding what is happening in the software so easy.
    Easy to check that DeepStack analysis is active: simply go to your Alerts folder and see whether *.dat files start to populate after camera alerts (see the sketch following this settings list).
    Definitely unselect it after fine tuning is completed; this feature consumes both CPU and storage resources.
  • Hide cancelled alerts on timeline and 'all alerts': I prefer this unselected while fine tuning because I like to leverage the Alerts list to see all alerts that are processed in BI.
    If this feature were active, the "nothing found" alerts would be listed in the "Cancelled alerts" folder.
    Many users select this feature once fine tuning is completed.

    deepstack debug clipslist.PNG
  • Use main stream if available: Unselected during fine tuning.
    This setting is important because the motion sensor is applied to the sub stream if you have connected two streams to your camera.
    Motion and object overlap analysis is a huge value add that makes BI AI alerts accurate, so I want DeepStack to analyze the exact same frame that was used to detect motion, keeping the overlap analysis as accurate as possible.
    Whether to leave this unselected after fine tuning is user preference; I leave it unselected. However, if you have really good synchronization between your main and sub stream and you feel you are getting more accurate AI object detection from the high-res main frame, then selecting the main stream may make sense. See the Motion - Fine Tuning Settings article for details.
    Keep in mind, analyzing high resolution images also takes more CPU/GPU load.
  • Other optional settings I find useful at times
    • Recording tab: Set recording to When triggered. Uncheck Combine or cut video each. This way a new BVR is created for each trigger. Because you have a BVR for each trigger event, it is easy to replay a missed or false alert, tweak your motion settings, and observe whether the tweaks improve your alerts. If you cannot figure out an issue, it is also easy to send the short BVR of the motion trigger to support for review!
    • Trigger tab: Leave Motion overlays off. Camera settings -> Trigger tab -> Motion Sensor. Highlight: Do not highlight motion.
      The overlays may interfere with DeepStack, resulting in more missed objects. It is best to turn overlays off when using DeepStack.
      Engineering will be addressing this issue.
      FYI, with D2D recording, turning motion overlays on/off is less important since they are saved as metadata regardless. Therefore, even if you do not see highlights in Live view, you can still see them during playback.
      If you selected Re-encode when recording, then unfortunately, no motion overlays are available if they are turned off for Live View.
    • Trigger tab: Set Add to alerts list = Hi-res JPEG files.
      With the *.dat files now created with Save DeepStack analysis details option, having the JPEG images makes finding the corresponding *.dat file easy, especially when you are stuck and need to consult with support.
      With the JPEG images, when you right-click on an alert -> Open containing folder, BI will find the JPEG for you.

      playback alerts open containing folder.png

      After doing so, the preceding *.dat file in Windows Explorer is the *.dat file associated with the selected Alert. This makes identifying *.dat files for alerts of interest easier in Windows Explorer.

      deepstack alerts folder.PNG
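
If you prefer to check the Alerts folder from a script rather than Windows Explorer, here is a minimal sketch (Python, with an assumed Alerts folder path - adjust it to your own install) that lists the most recent *.dat files and pairs each alert JPEG with the *.dat file written just before it:

# Minimal sketch: confirm DeepStack *.dat files are populating and pair each
# alert JPEG with the *.dat file created just before it.
# ASSUMPTION: the Alerts folder path below is an example; point it at your own.
from pathlib import Path

ALERTS_DIR = Path(r"C:\BlueIris\Alerts")  # assumed location, adjust as needed

def files_by_mtime(pattern):
    """Return files matching pattern, sorted oldest to newest by modified time."""
    return sorted(ALERTS_DIR.glob(pattern), key=lambda p: p.stat().st_mtime)

dat_files = files_by_mtime("*.dat")
jpg_files = files_by_mtime("*.jpg")

print(f"{len(dat_files)} .dat files found (Save DeepStack analysis details is working)"
      if dat_files else "No .dat files yet - check the camera's AI settings")

# For each alert JPEG, the matching .dat is the newest one written before it.
for jpg in jpg_files[-10:]:                       # last 10 alerts only
    earlier = [d for d in dat_files if d.stat().st_mtime <= jpg.stat().st_mtime]
    match = earlier[-1].name if earlier else "none found"
    print(f"{jpg.name}  ->  {match}")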



DeepStack analysis


Understanding why DeepStack did not alert has just become a lot easier with the "Save DeepStack analysis" feature. Now BI can show you exactly which images (i.e., frames/samples) were processed by DeepStack when a motion trigger fired, so you can understand why an alert was or was not sent.

In order to get the feature to work:
  • First check "Save DeepStack analysis" in Camera settings -> Trigger tab -> Artificial Intelligence. This setting will start creating *.dat files in the Alerts folder pertaining to the meta data for each motion trigger.
  • Open the Status -> DeepStack window.
  • Double click any motion trigger in the Clips List and the DeepStack Status window will populate with the DeepStack meta data making it much easier to understand what is going on.
save deepstack analysis.png

This example highlights an alert that was cancelled by DeepStack. Double-clicking on an alert while the Status -> DeepStack window is open now populates the Status window with the data associated with the DeepStack analysis.

Now you see exactly what BI does when making alert decisions based on your settings. Tweaking settings based on missed alerts has become much easier.

The logs also provide data consistent with the DeepStack analysis data.

deepstack log verification.PNG

From the logs, motion was detected at 11:00:51.907. DeepStack cancelled the alert at 11:01:01.504 roughly 10s later.
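If you want to double-check that gap yourself, a quick sketch using the two timestamps from the log excerpt above:

# Quick check of the gap between the motion trigger and the DeepStack cancel,
# using the two timestamps from the log excerpt above.
from datetime import datetime

fmt = "%H:%M:%S.%f"
triggered = datetime.strptime("11:00:51.907", fmt)
cancelled = datetime.strptime("11:01:01.504", fmt)

delta = (cancelled - triggered).total_seconds()
print(f"DeepStack cancelled the alert {delta:.3f}s after the trigger")  # ~9.6s, roughly the 10s Break time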




Understanding the DeepStack tab. Status button -> DeepStack tab.

deepstack analysis.png

Frame Analysis Window

The below image connects the AI settings in BI to the output from the DeepStack tab.

Log DeepStack.png
  • BI will always analyze the trigger leading edge image (T=0). This is the frame that caused BI to trigger.
  • + real-time images = 2: tells BI the number of additional frames to sample beyond the trigger leading edge image, i.e. T=1 and T=2 (see the sampling sketch after this list).
  • Begin analysis with motion-leading image: tells BI to also sample the motion leading edge (T-1).
  • Make note of No object found and Motion detected symbols when a frame is analyzed.
  • The asterisk marks the first frame where DeepStack identified any object on the list.
    If no object was found, it marks the last frame sampled. In this case the alert was cancelled, so the asterisk marks the last frame sampled.
    FYI, this frame is also the DeepStack Alert Image. It is saved to the database and is the image that appears in the Alerts list.
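
To make the sampling schedule concrete, here is a small sketch that models my reading of the settings above (treat the model itself as an assumption, not BI source code): the motion-leading image, the trigger leading edge at T=0, and the additional + real-time images spaced by the analyze one each interval.

# Rough model (an assumption, not BI source code) of which images DeepStack is
# asked to analyze for a single motion trigger, based on the AI settings above.
def analysis_schedule(realtime_images, analyze_each_ms, begin_with_motion_leading):
    frames = []
    if begin_with_motion_leading:
        frames.append("motion-leading image (T-1, captured when motion first began)")
    frames.append("trigger leading edge image (T=0)")
    for i in range(1, realtime_images + 1):
        frames.append(f"real-time image T={i} (~{i * analyze_each_ms} ms after T=0)")
    return frames

# + real-time images = 2 as in the example; 1000 ms is an assumed value for
# the "analyze one each" interval.
for frame in analysis_schedule(2, 1000, begin_with_motion_leading=True):
    print(frame)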

Confirmed vs Cancelled Alert

The * frame easily tells you whether an Alert was confirmed or cancelled.

deepstack fine tuning - confirmed vs cancelled.png


Note, DeepStack STOPS analyzing further frames as soon as a frame contains any object of interest.
Therefore, if you tell DeepStack to analyze 10 additional frames and an object of interest is found in the trigger frame (T=0), BI will not continue sampling the remaining frames.
BI then fires the Alert (if any).
Thus, objects that appear later within the same motion trigger are NOT identified, nor does BI fire subsequent alerts.
This is also why you may have set DeepStack to analyze 10 images, yet DeepStack stopped after analyzing two frames:
an object on the list was found in the second frame, so BI stopped processing the motion trigger and sent the alert. A small sketch of this early-stop behaviour follows.
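
The per-frame detections below are made up, and the loop is only my reading of the behaviour described above, not BI code:

# Made-up per-frame results to illustrate the early-stop behaviour described
# above: BI stops analyzing further frames as soon as a frame contains an
# object on the "to confirm" list, then fires the alert.
frames = [
    {"t": 0, "objects": []},            # trigger frame: nothing found
    {"t": 1, "objects": ["person"]},    # object of interest found here
    {"t": 2, "objects": ["person"]},    # never analyzed
    # ... up to the "+ real-time images" you configured
]
to_confirm = {"person", "car"}

for frame in frames:
    if to_confirm.intersection(frame["objects"]):
        print(f"Object found at T={frame['t']}; alert fired, remaining frames skipped")
        break
else:
    print("No object of interest in any sampled frame; alert cancelled")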


** Review Trigger Tab article / webinar if you want a refresher on the meaning of Motion leading edge and/or Trigger leading edge.

Other motion trigger settings should also be considered when determining AI settings.
Break time: Camera settings -> Trigger tab. Default = 10s.
Many users set + real-time images equal to the Break time. For example, with the default 10s Break time and analyze one each = 1s, + real-time images = 10 covers the entire trigger window.
If your CPU/GPU has the ability to handle the load, this allows BI to sample every 1s or less, which should result in few missed alerts.
Users then set analyze one each to 750ms or 1s.

DeepStack Image

The image has a wealth of information.
  • The black areas show the areas ignored by Motion and DeepStack.
  • The annotation shows what DeepStack identified.
    Blue indicates BI believes the person is static, i.e. not moving.
  • The yellow motion highlight shows where BI motion is identified.
    This image shows why BI cancelled the alert: there was no overlap between the motion and the DeepStack object (the identified person). A simple overlap sketch follows the image below.
deepstack frame.png
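
As an illustration of the overlap idea only (this is not BI's actual algorithm), a simple rectangle-intersection check captures why this alert was cancelled: the motion rectangle and the DeepStack object rectangle never touch. The box coordinates below are invented for the example.

# Illustration of the motion/object overlap idea (not BI's actual algorithm):
# if the motion rectangle and the DeepStack object rectangle do not intersect,
# the detection is treated as unrelated to the motion and the alert is cancelled.
def overlap(a, b):
    """a, b are (x, y, w, h) rectangles; return True if they intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

motion_box = (100, 200, 80, 120)    # example: yellow motion highlight
person_box = (400, 210, 60, 150)    # example: blue "static" person annotation

print("overlap -> alert confirmed" if overlap(motion_box, person_box)
      else "no overlap -> alert cancelled")   # prints "no overlap -> alert cancelled"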

Most common gotcha!

The example above highlights the power of the DeepStack analysis feature. The issue here is that the main stream is used for DeepStack analysis while, as we know, the BI motion sensor always works from the sub stream.
The motion is far ahead of the DeepStack object. This is an obvious clue that the main stream and the sub stream are not synchronized.
Fix:
  • Tell DeepStack to use the sub stream for analysis. Use main stream if available should be UNSELECTED.
  • OR uncheck "Use RTSP/stream timecode" if you want to continue processing the main stream.
    This setting is in the camera's IP Config dialog (camera connection settings).
    deepstack fine tuning_rtsp setting.png



Pro Tips

Pro Tip 1: "Nothing found" alerts are the alerts that would have been false alerts from your motion settings if AI were not available.

See Fine Tuning Motions settings article for details.

deepstack fine tuning nothing found.png



Pro Tip 2: Use Zones to capture better views of objects so AI is more accurate.

Goal: Very accurate alerts when cars enter and leave.

deepstack-fine-tuning---gate_optimized.png

Solution:

Zone A: Set to entire scene.

deepstack fine tuning zone A.png

Zone B: Set to ideal location for AI object recognition.

deepstack fine tuning zone B.png

Zone crossing: Set to Zone B

deepstack fine tuning zone crossing.png

Set the motion sensor to force BI to Alert when the car (object) is positioned perfectly in Zone B, giving AI the best opportunity to identify the object.
Also note Object travels is unselected. It is not needed because I know AI can accurately identify a car in Zone B, so Object travels does not improve accuracy and could waste CPU resources.

Also note I increased Min. object size (a small sketch of the idea follows the image below).
By doing so, BI correctly cancels street noise because the min object size threshold is never met by cars at a distance in the street.

deepstack fine tuning min obj size.png
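
To picture the min-object-size idea, a small sketch comparing bounding-box areas against a threshold (how BI measures object size internally is an assumption here; the numbers are invented):

# Sketch of the min-object-size idea: distant cars on the street produce small
# motion boxes, so a size threshold filters them out while nearby cars pass.
# ASSUMPTION: this simply compares bounding-box area in pixels to a threshold;
# it is not BI's internal measurement.
MIN_OBJECT_AREA = 5000   # example threshold, in pixels

def big_enough(box):
    x, y, w, h = box
    return w * h >= MIN_OBJECT_AREA

nearby_car  = (300, 400, 180, 90)    # ~16200 px: triggers
distant_car = (900, 120, 40, 20)     # ~800 px: cancelled as street noise

for name, box in [("nearby car", nearby_car), ("distant car", distant_car)]:
    print(name, "-> trigger" if big_enough(box) else "-> ignored (too small)")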

Other advantages:
  • You can probably reduce the number of samples that need to go to DeepStack, saving CPU/GPU resources.
  • Using min. object size to cancel motion from the street had negligible impact on CPU utilization.
    Removing the street from Zone A is still available if needed.

Prior solution:

deepstack-fine-tuning-prior-solution_optimized.png

The prior solution just had a Zone A which blocked the street in the distance to reduce false alerts.
This solution was functional; however, occasionally the motion sensor would identify the object quite far from the camera, the AI could not classify the object as a car, and the result was a "nothing found" cancelled alert.

Setting the motion sensor to trigger when the camera has a clear view of the car improved accuracy!




Next steps

If you cannot resolve the issue yourself, send the following information (a small sketch for bundling these files follows the list).
  • Describe issue.
  • *.dat file
  • A short bvr capturing the issue.
    A couple of options make for smaller BVR files, both in Camera settings -> Record tab.

    If you uncheck Combine or cut video each, BI will cut the BVR file after each motion trigger.
    Or you can leave Combine or cut video each checked and change the time length to something small, like 15 min.
  • Your current camera settings. Camera settings -> General tab -> Export
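
If it helps, here is a minimal sketch for bundling those items into a single zip to attach to your support request. Every path and file name below is a placeholder; substitute the actual *.dat, BVR, JPEG and exported camera settings files for the alert in question.

# Minimal sketch for bundling the items listed above into one zip for support.
# All paths below are placeholders; use the real files for the alert in question.
from pathlib import Path
from zipfile import ZipFile

files_to_send = [
    Path(r"C:\BlueIris\Alerts\frontdoor.20210815_110051.dat"),   # placeholder .dat
    Path(r"C:\BlueIris\New\frontdoor.20210815_110051.bvr"),      # placeholder short BVR
    Path(r"C:\BlueIris\Alerts\frontdoor.20210815_110051.jpg"),   # placeholder alert JPEG
    Path(r"C:\BlueIris\frontdoor_camera_export.reg"),            # placeholder: whatever Export produced
]

with ZipFile("support_bundle.zip", "w") as bundle:
    for f in files_to_send:
        if f.exists():
            bundle.write(f, arcname=f.name)
        else:
            print(f"skipping missing file: {f}")
print("Wrote support_bundle.zip - attach it along with a description of the issue.")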