Blue Iris integrated directly with the Deepstack open-source AI project in the 5.4.0 release. This is an on-premises-only AI solution requested by many users.
Deepstack (deepstack.cc) started as a private AI company (December 2018) located in Nigeria that recently open-sourced its work (March 2019).
If you prefer to watch the webinar associated with this article, check out our YouTube channel. Webinar name: Deepstack AI.
The value of AI in surveillance is improved accuracy in detecting objects and fewer false alerts. AI draws attention to activity that matters by identifying objects a human might miss through lapses in attention or poor visibility at night. AI also suppresses events that do not matter, such as shadows moving in the wind, changes in sunlight, flies, birds, squirrels, rain, etc. AI is becoming a very effective tool for outdoor cameras.
- Green check: AI confirmation, i.e. AI found an object, but there is no unique icon per object type.
- Flag: Used to easily filter clip list based on trigger events with AI confirmation.
- Cancelled: AI found nothing.
- Occupied State per camera:
  - Same object type: +/- 10% confidence level
  - Same location: +/- 5%
  - Time limit: 1 hour
  - Object color (recently added)
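The occupied-state heuristics above can be sketched as a comparison function. This is purely my own illustration of the idea (the names, normalized coordinates, and exact threshold checks are assumptions, not BI's actual code):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "car"
    confidence: float   # 0-100, as reported by the AI
    x: float            # object location in the frame, normalized 0-1
    y: float
    seen_at: float      # timestamp in seconds

def looks_static(prev: Detection, cur: Detection) -> bool:
    """Hypothetical sketch of the occupied-state test: treat the new
    detection as the same parked object if type, confidence, location,
    and time all roughly match the earlier detection."""
    return (
        cur.label == prev.label                         # same object type
        and abs(cur.confidence - prev.confidence) <= 10  # +/- 10% confidence
        and abs(cur.x - prev.x) <= 0.05                  # +/- 5% location
        and abs(cur.y - prev.y) <= 0.05
        and cur.seen_at - prev.seen_at <= 3600           # within 1 hour
    )
```

The recently added object-color heuristic would be a fifth condition in the same spirit.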
Gotcha: Sometimes occupied state results in false cancellations. In the image above, the second car is actually a different car, so it should have triggered an alert. Object color was recently added as another heuristic, making occupied state even smarter. In addition, users can turn off static-object detection per camera: Camera settings -> Trigger tab -> Artificial Intelligence, then uncheck "Detect/ignore static objects".
- Orange: High level of confidence.
- Yellow: Confidence level <67%
- Blue: Found something, but occupied state cancelled the alert.
- Red: Cancelled alert based on "To cancel" field in Camera settings.
- 5.4.3: Blue Iris AI detection now takes motion detection into account before triggering an alert. The parked car should not alert!
The Action Map for alerts has been extended so you can trigger alerts on specific objects. You can finally get the alert when the dog jumps on the couch!
Particularly handy if you are using facial recognition and want to be alerted only on unrecognized faces: use the tag "unknown".
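The object-specific alerting can be sketched roughly like this. This is a hypothetical filter over Deepstack's prediction labels, not BI's actual implementation (the function name and structure are my own):

```python
def should_fire(predictions, wanted_tags):
    """Return True if any detected label matches a configured tag.
    `predictions` mimics Deepstack's response format: a list of
    dicts, each with at least a "label" key."""
    labels = {p["label"] for p in predictions}
    return bool(labels & set(wanted_tags))

# e.g. alert only when the dog is detected:
# should_fire([{"label": "dog"}, {"label": "couch"}], {"dog"})  -> True
```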
Deepstack AI Installation
Depending on your machine, you can choose among several Windows installation options. Below is the link to the Windows installation page. The CPU version is very easy to install: just run the installer.
Gotcha!: The DeepStack website needs some cleanup. Some pages on the site point to a legacy installer that does not work! Stick to the URL above and you will be fine. In particular, the broken installer is named DeepStack.Installer.exe. My hunch is this version was created around 2019, before the company decided to open source their software. Another clue that you have the wrong version: the DeepStack splash page shows Activation Key information.
If you choose the wrong version, you will find BI cannot start DeepStack automatically, and it cannot communicate with the server either, producing the errors below in the Status -> Log.
Other useful Deepstack resources:
According to the documentation, the GPU installation also requires CUDA 10.1 and cuDNN. The latest version of CUDA is 11.2; I took a chance with CUDA 11.2 and it seems to be working fine for me. I will share issues if I come across any. For those who are curious, CUDA is an API created by NVIDIA to make parallel programming on NVIDIA GPUs easier.
Once installed, connect BI to Deepstack.
- Specify which port Deepstack should use. 83 is the default; I randomly picked 5432 in the diagram. As long as it is NOT the same port as the BI web server, you are fine. I'm assuming you installed Deepstack on your BI machine, so IP address = 127.0.0.1. BI also allows Deepstack to run on a separate server.
- It is best to install Deepstack in the default location (C:\DeepStack). If you choose a different location, specify it accordingly.
- Hit Start now.
- Use Test in browser link to confirm Deepstack is running.
Deepstack browser confirmation below.
BI Status Log Error - Deepstack not running or unreachable
If BI cannot communicate with Deepstack, BI will let you know.
Assuming you have "Auto start/stop with BI" enabled, the simple fix is to stop Deepstack if it's running and restart BI.
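If you prefer to check from a script rather than the Test in browser link, here is a minimal sketch of a reachability probe. The port (5432) matches the example above; adjust it to whatever you configured in BI. The function name is my own:

```python
import urllib.error
import urllib.request

def deepstack_reachable(host="127.0.0.1", port=5432, timeout=2.0) -> bool:
    """Return True if something answers HTTP on the Deepstack port."""
    try:
        urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error page
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out: Deepstack not running
```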
It is feasible to run BI on one machine and Deepstack on a completely separate machine. A separate machine could mean completely separate hardware, a VM, or a Docker container.
One user stated he did so successfully by running a Docker container in an Ubuntu 18.04 VM. In his implementation the IP Address for the Docker container was 10.32.1.9.
The docker command was:
Code:
docker run -d \
  --name=deepstack \
  -p 80:5000 \
  -v /opt/deepstack-storage:/datastore:rw \
  -e "VISION-DETECTION=True" \
  -e "VISION-FACE=True" \
  deepquestai/deepstack
Once set up, similar to the non-distributed system above, check/confirm that you can access the Deepstack confirmation page from a browser on the BI server (Test in browser link). In BI, the AI settings based on the Docker IP address mentioned above would be:
If you see traffic on the AI machine but no objects in BI, you likely did not enable vision detection (VISION-DETECTION=True).
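One way to confirm detection is actually enabled is to POST a test image to Deepstack's /v1/vision/detection endpoint (with curl or Postman) and inspect the JSON it returns. A sketch of interpreting the response, assuming Deepstack's documented response shape (the helper name is my own):

```python
import json

def detection_enabled(response_body: str) -> bool:
    """Deepstack answers {"success": true, "predictions": [...]} when
    VISION-DETECTION is active; with it disabled, "success" is false
    and an error message comes back instead."""
    data = json.loads(response_body)
    return bool(data.get("success"))
```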
Blue Iris AI Camera settings
Camera settings -> Trigger tab -> Artificial Intelligence
List of Objects Available for detection
Best to check deepstack.cc for current list of objects.
Code:
person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush.
Blue Iris before Artificial Intelligence
Use Case: Better safe than sorry. AI is great, but I'd rather be notified on all motion alerts.
Use Case: So many false alerts. I trust my AI and its notifications.
Use Case: I trust my AI but I still need my other alerts like mqtt commands to trigger (home automation).
- Turn off "Hide cancelled alerts on timeline and 'all alerts'". It's easier to understand what is going on when all alerts are easily visible in the clip list. Take advantage of BI functionality.
- Use the motion detection fine tuning that is already part of Blue Iris.
- Use continuous recording so you can also discover missed alerts caused by BI motion detection.
- Now you can find out what Deepstack recognizes when processing an alert. Great tool if you are wondering why an alert was missed.
- Select an alert and start playback.
- Stop the alert clip where you want Deepstack to analyze the image.
- Take a snapshot. Saving to the clipboard directory (default) is fine.
- From the clip list, select the jpg image that you just created. (may need to select clipboard icon on the clip list)
- Right click -> Analyze image with Deepstack.
- See the annotation.
Simple. Uncheck 'Fire "On alert" actions only when confirmed'. This will result in BI sending all alerts like before. Send yourself an email, SMS, or push notification with the attached image (which is the image used to process the event). If there are no annotations, then Deepstack missed the object.
High CPU usage with Deepstack.
AI / Deepstack is very computationally heavy. If your CPU usage jumps and you don't have a GPU, the best you can do is throttle the calls to Deepstack.
- Turn on Deepstack only for outdoor cameras. BI motion settings are usually fine for indoor cameras.
- Reduce number of images sampled per alert.
- Tighten your BI motion sensor settings so cameras do not trigger an alert so frequently.
Shout out to our users for sharing issues and solutions!
For some reason, the BlueIris software is ignoring what the Deepstack server says in terms of confidence and the label. This seems to happen during the second, third or sometimes fourth image that Deepstack analyzes. I have confirmed with a packet capture that despite these additional images being sent back to the BlueIris software with a successful response, BlueIris ignores it and goes to cancel the alert. I confirm these images meet the criteria of the minimum confidence percentage and "to confirm". I would like to also mention that if DeepStack responds back with a prediction that is within the criteria on the first image, it works OK.
Fix: I just needed to change the keyframe interval to be the same value as the FPS so it is equal to 1.00 (as displayed from Blue Iris). The detection is working much better now. See Camera stream optimization article for details.
Alternative fix: If for some reason you cannot adjust the key frame interval on the camera, identify the current key frame interval (Status -> Cameras tab). If the value is 0.25, i.e. 1/4, i.e. 1 key frame every 4s, then set the pre-trigger buffer (Camera settings -> Record tab) greater than or equal to 4s. This will guarantee BI at least one key frame to process.
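The arithmetic behind the alternative fix: the key frame value shown in BI is key frames per second, so its reciprocal is the key frame interval, and the pre-trigger buffer must be at least that long. A quick sketch:

```python
def min_pretrigger_seconds(key_frame_rate: float) -> float:
    """key_frame_rate is the value from Status -> Cameras (key frames
    per second). The pre-trigger buffer must span at least one key
    frame interval so BI always has a key frame to process."""
    return 1.0 / key_frame_rate

# e.g. a rate of 0.25 means one key frame every 4 s,
# so the pre-trigger buffer should be >= 4 s.
```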
Deepstack always returning nothing found. (Another shout out to the community for making everyone smarter)
This could imply the Deepstack server is returning server errors and BI is just reporting back "nothing found".
BI will eventually report Deepstack errors in the Status -> Log to make users aware of such issues more easily.
Until then, one user creatively used Postman to identify the issue. If you are a developer, you are probably familiar with Postman; it's a very popular tool for testing and debugging APIs. If you are new to Postman, there is lots of content on the internet about how to use it. Using Postman, the user discovered that object detection requests to Deepstack were returning 500 error codes. 500 errors mean the server had a problem processing the request.
FYI, a 400 error means the request you created to send to Deepstack is not correct. Make sure you are sending well-formed requests to Deepstack before drawing conclusions about the server.
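The diagnostic logic above boils down to standard HTTP status-code classes. A small sketch of the interpretation (the function and its messages are my own illustration):

```python
def diagnose(status: int) -> str:
    """Map an HTTP status code from a Deepstack request to a likely cause."""
    if 200 <= status < 300:
        return "ok"  # Deepstack processed the request successfully
    if 400 <= status < 500:
        return "client error: fix the request you are sending"
    if 500 <= status < 600:
        return "server error: Deepstack itself has a problem (consider reinstalling)"
    return "unexpected status"
```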
Once the user realized their Deepstack installation was not working correctly, they reinstalled Deepstack. Deepstack does provide error logs: "C:\Users\[Username]\AppData\Local\DeepStack\logs" (%LOCALAPPDATA%\DeepStack\logs). Understanding the logs was challenging, so the best course of action was to reinstall Deepstack.
The user also noted that uninstalling Deepstack may not flush out all the files. The user went back into the installation directory (C:\DeepStack) and manually deleted any remaining files.
Furthermore, the user noted it's good to go into Task Manager and make sure no old Deepstack processes are still running. Kill any of the following processes if they exist: server.exe, python.exe, redis-server.exe.
FYI, installing / using AI Tool for debugging may be worthwhile as BI continues to build out the debugging capabilities. In AI Tool, error messages will pop up right away if old Deepstack versions are still running.
Another clue that Deepstack is not running properly: run Deepstack manually and observe the response time in the console. If Deepstack is taking a minute to respond, there is probably something wrong.
Front door camera
The front door view includes the road and therefore produces a lot of unnecessary alerts. All you really want is people coming to the house or property.
So you create a zone in front of your house to reduce the noise from the road.
No alerts! The problem is that the camera triggers when a person's feet cross Zone B. The image being processed by Deepstack is just the feet! So Deepstack will often return nothing found.
The fix! Make sure to always define a Zone A which is the overall area of interest on the camera. The image processed by Deepstack is the overlap of all the zones, thus the entire person is sent to Deepstack for processing and the appropriate alert is sent.
One important point that I forgot to mention in the webinar, but that did come up in the Q&A, is how to tell BI to only be concerned with motion in Zone B. Simple, see below. By setting "Object crosses zones" to B, I am telling BI to only worry about motion in Zone B. Notice that with AI, I uncheck the "Object travels" and "Object size exceeds" settings. With AI, the motion sensor settings can usually be simplified!
Person detection & Facial detection
Keep in mind that the current implementation of Deepstack runs detections in series, not in parallel. So if you have both activated in Deepstack, Deepstack will alert on person detection first and never alert on facial recognition. Most people would probably expect Deepstack to first detect the person and then proceed to recognize the face, but this does not seem to be the case.
Person detected in the image below even though both person detection and facial recognition were activated.
With just facial recognition activated.
Next steps / Submit a ticket
- Describe issue.
- Any steps you have taken so far to resolve the issue. This will help us understand the issue.
- Camera settings. Camera settings -> General tab -> Export