New Object Detection Model Selector and Analytics Debug Mode

Saralyn
Meraki Employee

We have two new features to share today that we think will be of particular interest to those of you using the MV Sense API for any kind of third-party application. First, we've added a new, experimental object detection model option to all second-generation MV cameras. For the MV12 and MV22 series cameras, the experimental option is for person (body) detection. For MV32 cameras, the experimental model is for person (head) detection. For the MV72 series cameras, the experimental model covers both person (body) detection and vehicle detection.

 

[Screenshot: Object detection model selection]

 

 

The default object detection model is optimized to suit the majority of use cases, but the new experimental object detection model may perform better in certain environments. The new object detection models are available on all second-generation MV cameras, and on a camera with an MV Sense license applied, the new Analytics Debug feature can be used to help evaluate model performance. Analytics Debug mode provides a detailed output of the object detection model, including each detected object's object ID, its confidence percentage, and the bounding box coordinates (x0, y0), (x1, y1) indicating its location. This can be helpful when troubleshooting and debugging applications that use the camera's object detection data via MQTT or dashboard API calls.
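To give a concrete idea of what consuming this output can look like, here is a minimal sketch of an MQTT subscriber that prints the same fields shown in Analytics Debug mode for each detection. The broker address, camera serial, topic format, and field names below are assumptions for illustration only; check the MV Sense documentation and your own camera's published messages for the exact topic and payload format.

# Minimal sketch: print object ID, confidence, and bounding box for each
# detection published by an MV camera over MQTT. Topic and field names are
# assumptions; verify them against the messages your camera actually sends.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "192.0.2.10"        # placeholder: your MQTT broker address
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"  # placeholder: your camera serial
TOPIC = "/merakimv/" + CAMERA_SERIAL + "/raw_detections"  # assumed topic format

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    for obj in payload.get("objects", []):
        # Field names mirror the Analytics Debug overlay; adjust as needed.
        print(
            "oid={} confidence={} box=({},{})-({},{})".format(
                obj.get("oid"), obj.get("confidence"),
                obj.get("x0"), obj.get("y0"), obj.get("x1"), obj.get("y1"),
            )
        )

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion argument
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()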

 

[Screenshot: Analytics Debug mode output]

 

To turn on Analytics Debug mode, select the "Show Objects" button when viewing historical video. Once selected, a new "Show details" option will appear; select that to enable the debug mode.

 

[Screenshot: "Show Objects" and "Show details" controls]

 

To access the model picker, navigate to the Camera Settings page, then MV Sense. There, you'll see the object detection model drop-down selector, allowing you to choose between the default and experimental models for the camera. For more information, including tips on troubleshooting and guidelines for model selection, see the MV Object Detection documentation article.

 

We think these new features are exciting, and we hope they give you a little insight into what we've been working on in the area of analytics. We'd love to hear feedback from those of you who are using the object detection data from the cameras, especially with MV Sense. If you try out the new experimental detection model, what type of environment was it in, and how did it work for you?

DR1

Hello,

 

I saw your post and signed up to ask this question. We're in the middle of trialling a full stack of network, Wi-Fi, and Meraki cameras for a fairly large museum space.

 

We're interested in using the MV32, the MV22x, or both for counting the number of people at any given moment in very large rooms with 12' high ceilings. Some of these rooms will be quite dark.

 

There could be 100 people in a room, with people streaming in and out and the count rising and falling constantly.

 

Regarding per camera object detection:

Is there any upper limit on the maximum number of object detections at a time per camera that we should be aware of? This detail isn't explored or mentioned anywhere in the documentation. It seems likely that the performance hit on the hardware would scale up to some kind of hard limit, or that detections would start to lose fidelity due to buffering.

 

Along these lines, do you know if there's already a feature in the dashboard that displays the quantity of a given type of object detected (people, in this case)? If not, can this be extrapolated from the API?
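For reference, the sketch below is roughly what we've been imagining on the API side. The endpoint path, header, and response shape are my guesses from skimming the Dashboard API docs, so please correct me if this isn't how it actually works.

# Rough sketch (assumptions only): pulling a live person count per camera
# from the Dashboard API's camera analytics. Double-check the endpoint path
# and response shape against the current API documentation.
import requests

API_KEY = "your-dashboard-api-key"   # placeholder
SERIAL = "Q2XX-XXXX-XXXX"            # placeholder camera serial

resp = requests.get(
    f"https://api.meraki.com/api/v1/devices/{SERIAL}/camera/analytics/live",
    headers={"X-Cisco-Meraki-API-Key": API_KEY},
    timeout=10,
)
resp.raise_for_status()

# Expecting something like {"ts": "...", "zones": {"0": {"person": N}, ...}},
# where zone "0" would be the full frame and other keys user-defined zones.
for zone_id, counts in resp.json().get("zones", {}).items():
    print(f"zone {zone_id}: {counts.get('person')} person(s)")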

 

Thank you,

Darius
