So our MV cameras think the concrete mixer is a person.
Unfortunately this is skewing our analytics. At what point does the camera decide it's not a person and give up?
To enable this feature, navigate to an individual camera feed and select the ‘Settings’ tab, then scroll to ‘Privacy Window’ and draw up to 10 boxes within the video feed. These boxes can overlap to cover irregularly shaped areas.
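To illustrate how overlapping boxes compose into an irregularly shaped masked area, here is a minimal sketch. The `Box` type, `is_masked` helper, and pixel-coordinate convention are all hypothetical, not part of any Meraki API; only the 10-box limit comes from the feature description above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """A hypothetical axis-aligned privacy window in frame coordinates."""
    x1: int
    y1: int
    x2: int
    y2: int

    def contains(self, x: int, y: int) -> bool:
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2


MAX_PRIVACY_WINDOWS = 10  # per the feature description above


def is_masked(x: int, y: int, windows: list[Box]) -> bool:
    """A pixel is masked if it falls inside ANY window, so overlapping
    boxes compose into an irregularly shaped masked region."""
    if len(windows) > MAX_PRIVACY_WINDOWS:
        raise ValueError("at most 10 privacy windows per camera")
    return any(w.contains(x, y) for w in windows)


# Two overlapping boxes forming an L-shaped masked area:
l_shape = [Box(0, 0, 50, 100), Box(0, 0, 100, 50)]
```

For example, with `l_shape` above, a pixel at (75, 25) is masked by the second box, while (75, 75) falls in neither box and stays visible.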
That's not really a solution though, is it? You would still want to see if the mixer was moved, stolen, etc. A privacy window is not the same as an analytics exclusion.
I guess what I was more after was a way of tagging this as NOT being a person, as part of the ML.
Cheers, good idea, but overall it would be better if you could somehow tag regions that the camera suspects are people, to help train the ML. Thanks for the suggestion.
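The tagging idea above could, in principle, be approximated as a post-filter: drop any "person" detection whose bounding box sits mostly inside a user-tagged "not a person" region. This is a hypothetical sketch only; the function names, box format, and 0.7 threshold are illustrative assumptions, not a Meraki feature.

```python
# Boxes are (x1, y1, x2, y2) tuples in frame coordinates (hypothetical format).

def overlap_fraction(det: tuple, region: tuple) -> float:
    """Fraction of the detection box's area covered by the tagged region."""
    ix1, iy1 = max(det[0], region[0]), max(det[1], region[1])
    ix2, iy2 = min(det[2], region[2]), min(det[3], region[3])
    if ix1 >= ix2 or iy1 >= iy2:
        return 0.0  # no overlap
    inter = (ix2 - ix1) * (iy2 - iy1)
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return inter / det_area


def filter_detections(detections, not_person_regions, threshold=0.7):
    """Drop 'person' detections that fall mostly inside a tagged region."""
    kept = []
    for det in detections:
        if any(overlap_fraction(det, r) >= threshold for r in not_person_regions):
            continue  # likely the tagged static object (e.g. the concrete mixer)
        kept.append(det)
    return kept


mixer_region = [(100, 100, 200, 200)]  # where the mixer sits in frame
dets = [(110, 110, 190, 190),          # the mixer, misdetected as a person
        (300, 300, 350, 400)]          # a genuine person elsewhere
```

Note this only cleans up analytics downstream; unlike the poster's request, it would not feed back into model training.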
Machine Learning (ML) based Computer Vision (CV) implementations like those in the MV12, MV22, & MV72 get better over time as their models are trained on a larger and more diverse set of real-world data. Training does not happen on the camera; it is done at Meraki, which then generates an updated model. The MV cameras' ML CV model is updated when a camera updates its firmware.
The data used to train the camera model is currently sourced only from specific development cameras. In the future we hope to enable customers to submit their own video data to help train and improve the model. Video clips such as the concrete mixer being falsely detected are high-value pieces of training data and would improve our model. For example, we have already updated the camera model to "train out" detection of specific pieces of gym equipment as people.