We have two new features to share today that we think will be of particular interest to those of you using the MV Sense API for third-party applications. First, we've added a new, experimental object detection model option to all second-generation MV cameras. For MV12 and MV22 series cameras, the experimental option is for body person detection. For MV32 cameras, the experimental model is for head person detection. For MV72 series cameras, the experimental model covers both body person detection and vehicle detection.
The default object detection model is optimized for the majority of use cases, but the new experimental model may perform better in certain environments. While the new object detection models are available on all second-generation MV cameras, on cameras with an MV Sense license applied the new Analytics Debug feature can be used to help evaluate model performance. Analytics Debug mode provides a detailed output of the object detection model, including each detected object's Object ID, its confidence %, and the bounding coordinates (X0,Y0),(X1,Y1) indicating its location. This can be helpful when troubleshooting and debugging applications that consume the camera's object detection data via MQTT or dashboard API calls.
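As a rough illustration of how an application might consume those debug fields, here is a minimal sketch that filters detections from a JSON payload by confidence. The JSON field names ("objects", "oid", "confidence", "x0"...) are hypothetical placeholders modeled on the values the debug view shows (Object ID, confidence %, and (X0,Y0),(X1,Y1) coordinates) — consult the MV Sense documentation for the actual MQTT and API payload schema.

```python
import json

def parse_detections(payload: str, min_confidence: float = 50.0):
    """Filter detected objects from a camera analytics payload.

    NOTE: the field names used here are illustrative placeholders for
    the values shown in Analytics Debug mode (Object ID, confidence %,
    and bounding coordinates); the real MV Sense schema may differ.
    """
    data = json.loads(payload)
    detections = []
    for obj in data.get("objects", []):
        if obj.get("confidence", 0.0) < min_confidence:
            continue  # drop low-confidence detections
        detections.append({
            "oid": obj["oid"],
            "confidence": obj["confidence"],
            # bounding box as ((X0, Y0), (X1, Y1))
            "bbox": ((obj["x0"], obj["y0"]), (obj["x1"], obj["y1"])),
        })
    return detections

# Example payload mirroring the fields the debug view displays
sample = json.dumps({
    "objects": [
        {"oid": 12, "confidence": 87.5,
         "x0": 0.10, "y0": 0.20, "x1": 0.35, "y1": 0.80},
        {"oid": 13, "confidence": 32.0,
         "x0": 0.50, "y0": 0.10, "x1": 0.60, "y1": 0.40},
    ],
})
print(parse_detections(sample))  # only the 87.5%-confidence object passes
```

In a real integration, a payload like this would arrive on an MQTT topic or from a dashboard API call, and the confidence threshold would be tuned per deployment while evaluating the default versus experimental models.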
To turn on Analytics Debug mode, select the "Show Objects" button when viewing historical video. Once selected, a new "Show details" option will appear; select it to enable debug mode.
To access the model picker, navigate to Camera Settings, then MV Sense. There, you'll see the object detection model drop-down selector, which lets you choose between the default and experimental models for the camera. For more information, including troubleshooting tips and guidelines for model selection, see the MV Object Detection documentation article.
We think these new features are exciting, and we hope they give you a little insight into what we've been working on in the area of analytics. We'd love to hear feedback from those of you using the object detection data from the cameras, especially with MV Sense. If you try out the new experimental detection model, let us know what type of environment you used it in and how it worked for you.