AI Expectations?

General discussion about Blue Iris
tardigrade
Posts: 21
Joined: Fri Dec 29, 2023 5:35 am

Re: AI Expectations?

Post by tardigrade »

Well, I definitely have something set incorrectly. Initially, when I first activated AI, it was detecting people and even small animals (dropped from 3000 alerts on wiggly bushes to just 10 of people, cars, etc.). Now the system has not detected anything in the last 36 hours (including me standing talking to the neighbor for twenty minutes), except last night three cameras (in series) alerted on something at 0200 but there is no movement in the videos. Think I may have a ghost... <snicker>
p3ter
Posts: 6
Joined: Thu Apr 15, 2021 9:44 am

Re: AI Expectations?

Post by p3ter »

Warning: Cynicism Overload.

For the current generation of AI, and without massive specific investment in training on your own cameras, my expectation is that we all waste a huge amount of time, energy, money, and CPU cycles playing with AI, only to ultimately decide "move on, there is nothing to see here - YET". But if your goal is to start yet another time-consuming hobby, and if you think making a computer do clever stuff is fun (you don't have a dog you can teach tricks?), then knock yourself out! Many will disagree with me, and as long as they are having fun, I won't invest energy trying to burst their bubbles...

It turns out, however, that I am not alone in this position - the Gartner Group currently places Generative AI at 'the Peak of Inflated Expectations'
https://www.gartner.com/en/newsroom/pre ... chnologies, which means that 2024 should be the year many commercial AI offerings enter 'the Trough of Disillusionment'.

I could write pages and pages of text on 'why', but nobody would read them. The short version: any AI which is 'Pre-Trained' should come with a MASSIVE 'your mileage may vary' warning. And I'm not knocking AI itself - AI is definitely here to stay, and definitely has a future in analysing security camera images. But the way it is currently implemented, and the level of training available today, are definite red flags.

Using AI as a positive classifier "I'm sure this image contains a car" means that if AI is sure there is a Car in the image, then there is very probably a car in the image. But AI 100% definitely can and will miss valid cars, and will be very confident about it! AI doesn't care if someone is about to steal your stuff, AI doesn't have any understanding of 'context', or 'risk'; and if you Google something like 'adversarial images AI' you will see AI can in certain conditions confidently state that a Banana is a Toaster.

I would love to see AI implemented as a weighted negative classifier for alert confidence in future - i.e. set up your Triggers based on existing, simple rules (motion, size, contrast, direction/zones), then ask AI to take a look and weed out the stuff which is 'uninteresting': based on a customizable confidence level, AI would determine that it is, say, 98% confident the image contains only an uninteresting trigger, and would not Alert. But using AI to cancel a trigger and choose not to save a clip is inherently a gamble that you can only eventually lose.

Basically, AI could complement Alerts as follows:
  • I'm 80+% sure this object is a [threat object] based on your training - LEVEL 1 ALERT!
  • I'm not sure what this is - ALERT!
  • I'm 80+% sure this image contains uninteresting motion objects [rain/snow/shadows/leaves/dog/cat] based on your training - DO NOT ALERT!
In other words, AI should sit quietly in the background, learning from things you do every day. Every natural action of Flagging Alerts, Deleting Alerts, etc. would quietly teach the AI, and AI would use that input to raise or lower the confidence of future alerts, to the point at which you could confidently know that you have not missed anything (relevant motion is still saved as clips) but have saved a massive amount of time by not needing to review unnecessary 'definitely not interesting' Alerts.
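The three-tier scheme above can be sketched as a simple decision rule. To be clear, this is purely illustrative - the labels, the 80% threshold, and the `classify_alert()` helper are my own made-up names, not part of Blue Iris or any AI module:

```python
# Hypothetical sketch of the three-tier alert logic described above.
# All names and thresholds here are illustrative assumptions.

THREAT_LABELS = {"person", "car"}                          # objects you care about
BENIGN_LABELS = {"rain", "snow", "shadow", "leaves", "dog", "cat"}
CONFIDENCE_THRESHOLD = 0.80                                # the "80+%" figure

def classify_alert(label: str, confidence: float) -> str:
    """Map a single AI detection (label + confidence) to an alert tier."""
    if label in THREAT_LABELS and confidence >= CONFIDENCE_THRESHOLD:
        return "LEVEL 1 ALERT"    # confident it's a threat object
    if label in BENIGN_LABELS and confidence >= CONFIDENCE_THRESHOLD:
        return "DO NOT ALERT"     # confident it's uninteresting motion
    return "ALERT"                # not sure what it is: fail safe, alert

print(classify_alert("person", 0.95))   # confident threat
print(classify_alert("leaves", 0.90))   # confident benign
print(classify_alert("fox", 0.60))      # unknown/unsure
```

Note that the "unsure" branch defaults to alerting, which is the whole point of the argument: the clip is still saved, and AI only suppresses what it is highly confident is uninteresting.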

I have analysed camera images that have led to discussions/arguments between multiple people: 'is that a fox or a bobcat?' Humans are still much smarter than AI, and in most situations where you were unsure, AI still has no chance. AI is currently beating humans only in VERY limited situations: "Here are thousands of images, taken in EXACTLY the same size/position/contrast/exposure conditions - all positively or negatively identified (and confirmed by multiple other clinical tests) as showing cancerous cells. Does this new image contain cancerous cells?"
Given that you could create the same conditions with your own camera feeds, and given that you invested the same level of effort in training, would AI be as successful at identifying threats on your camera stream? Maybe. But you need to multiply the training for every weather condition, every season, every new threat...
And going back to the 'I'm sure this banana is a toaster' example - if we did arrive in a world where AI was the leading method of identifying threats in camera feeds, bad actors would learn from this and adapt. Your house will be burglarized by a Banana, and AI will never bother to warn you.
TimG
Posts: 2178
Joined: Tue Jun 18, 2019 10:45 am
Location: Nottinghamshire, UK.

Re: AI Expectations?

Post by TimG »

Bananas eh ? I was always told to watch out for a homicidal maniac with a bunch of loganberries :shock:

Interesting write up by the way :D
Forum Moderator.
Problem ? Ask and we will try to assist, but please check the Help file.
louyo
Posts: 166
Joined: Sat Apr 18, 2020 1:16 am

Re: AI Expectations?

Post by louyo »

Bananas? Loganberries?
Crap, I thought it was the boogeyman.
(CPAI is doing a good job of cancelling alerts caused by headlights at night, although it thoroughly irritated the cat when it called it a person)