Enabled MV Sense - now how to use webhooks for object detection, positioning, etc.?

AmeyGadre
Conversationalist


Hello members,

 

We recently enabled MV Sense and custom CV on one of our cameras. After deploying a TensorFlow model, I want to understand a few things about webhooks.

 

1. It appears webhooks need to be enabled/added at the network alert level - how do we make sure we are only listening to or receiving events from the specific camera on which Sense is enabled?

2. How do we make sure the webhook payload includes the positioning of detected objects, and does it include a snapshot link?

3. How do we set the frequency or rate at which the webhook will be called with data?

 

5 Replies
alemabrahao
Kind of a big deal

Webhooks in Meraki are configured at the network level, not per camera, but you can filter events programmatically by camera serial.
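As a minimal sketch of that filtering, assuming the standard Meraki webhook payload (which carries a `deviceSerial` field) and a made-up camera serial:

```python
# Filter network-wide Meraki webhook alerts down to the cameras we care about.
# Payload shape follows the Meraki webhook docs; the serials are invented.

WATCHED_SERIALS = {"Q2XX-XXXX-XXXX"}  # hypothetical serial of the Sense-enabled camera

def is_watched_camera(payload: dict) -> bool:
    """Return True only for alerts coming from a watched camera."""
    return payload.get("deviceSerial") in WATCHED_SERIALS

# Example payloads (values invented):
alert_from_sense_cam = {"deviceSerial": "Q2XX-XXXX-XXXX", "alertType": "Object detected"}
alert_from_other_cam = {"deviceSerial": "Q2YY-YYYY-YYYY", "alertType": "Motion detected"}

print(is_watched_camera(alert_from_sense_cam))  # True
print(is_watched_camera(alert_from_other_cam))  # False
```

In a real receiver this check would sit at the top of your webhook handler, so alerts from every other device in the network are dropped immediately.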

 

Webhooks do not directly include snapshot images. In this case you can use the Snapshot API (POST /devices/{serial}/camera/generateSnapshot) to request a snapshot at the time of detection. Combine the timestamp from the webhook with the Snapshot API to retrieve a relevant image.
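A sketch of that combination, assuming a placeholder serial and timestamp (the alert's `occurredAt` value would be used in practice):

```python
# Build a Snapshot API request for the moment a webhook alert fired.
# Endpoint per the Dashboard API v1 (generateSnapshot); serial, timestamp,
# and API key are placeholders. Send with any HTTP client, e.g. requests.

BASE = "https://api.meraki.com/api/v1"

def snapshot_request(serial: str, occurred_at: str):
    """Return the URL and JSON body for a snapshot at the webhook's timestamp."""
    url = f"{BASE}/devices/{serial}/camera/generateSnapshot"
    body = {"timestamp": occurred_at}  # ISO 8601, taken from the webhook alert
    return url, body

url, body = snapshot_request("Q2XX-XXXX-XXXX", "2025-06-16T12:00:00Z")
print(url)
# With requests (not executed here):
# resp = requests.post(url, json=body,
#                      headers={"X-Cisco-Meraki-API-Key": "<your key>"})
# resp.json()["url"] then points at the snapshot image once it is ready.
```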

 

If you need more control, consider using MQTT instead of webhooks, which allows for more granular data streaming and filtering.
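For a sense of what the MQTT path looks like: MV Sense publishes per-camera topics such as `/merakimv/<serial>/raw_detections`, and a client library like paho-mqtt would subscribe and hand each message to a callback. The sketch below only shows the message handling; the serial and payload values are invented, with the payload shape based on the MV Sense MQTT docs.

```python
# Handle an MV Sense raw_detections MQTT message. A broker subscription
# (e.g. via paho-mqtt) would deliver each message's bytes to handle_message.
import json

SERIAL = "Q2XX-XXXX-XXXX"                     # hypothetical camera serial
TOPIC = f"/merakimv/{SERIAL}/raw_detections"  # per-camera detections topic

def handle_message(payload: bytes) -> list:
    """Return detected objects, each with normalized bounding-box coordinates."""
    data = json.loads(payload)
    return data.get("objects", [])

# Example payload in the shape MV Sense publishes (values invented):
sample = json.dumps({
    "ts": 1750100800000,
    "objects": [{"oid": 1, "type": "person",
                 "x0": 0.1, "y0": 0.2, "x1": 0.3, "y1": 0.6}],
}).encode()

for obj in handle_message(sample):
    print(obj["type"], obj["x0"], obj["y0"])  # person 0.1 0.2
```

This is where object positioning lives: the bounding-box coordinates stream continuously over MQTT, which is why it gives finer-grained control than network-level webhooks.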

 

 

MV Camera Intelligence - REST API Endpoints - Meraki MV Camera API - Cisco Meraki Developer Hub

 

MV Sense Custom Computer Vision - Cisco Meraki Documentation

 

https://documentation.meraki.com/MT/MT_General_Articles/MT_MQTT_Setup_Guide

 

https://developer.cisco.com/meraki/mv-sense/mqtt/

 

 

I am not a Cisco Meraki employee. My suggestions are based on documentation of Meraki best practices and day-to-day experience.

Please, if this post was useful, leave your kudos and mark it as solved.
AmeyGadre
Conversationalist

Thank you for your response. Does this mean it's of virtually no use to install a TensorFlow model and use webhooks if I need to make a snapshot call anyway? I am just wondering what my model would do.

alemabrahao
Kind of a big deal

Using a TensorFlow model plus webhooks makes sense when you:

1. want real-time object detection on the camera,

2. need to trigger actions (e.g., alerts, logs) when specific objects are detected,

3. only want to fetch snapshots selectively, not continuously, or

4. want to analyze object positions or behaviors over time.

As mentioned, webhooks do not include images, so you must call the Snapshot API separately. Webhooks span the entire network, so you must filter by camera serial in your code. You cannot control the webhook frequency directly, since it is detection-based. In general, your model is useful for intelligent detection and automation. If your only goal is to get images, simpler methods may be sufficient, but for intelligent, event-driven workflows this setup is powerful.
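The "selective snapshot" pattern described above can be sketched as a small gate in front of the Snapshot API call; the labels, confidence field, and threshold here are illustrative, not fields guaranteed by any particular model's output.

```python
# Only fetch a snapshot when a detection is worth it: an interesting label
# with high enough confidence. Labels, fields, and threshold are examples.

INTERESTING = {"person", "vehicle"}
MIN_CONFIDENCE = 0.8

def should_fetch_snapshot(detection: dict) -> bool:
    """Decide whether a detection warrants a Snapshot API call."""
    return (detection.get("label") in INTERESTING
            and detection.get("confidence", 0.0) >= MIN_CONFIDENCE)

print(should_fetch_snapshot({"label": "person", "confidence": 0.92}))  # True
print(should_fetch_snapshot({"label": "person", "confidence": 0.40}))  # False
print(should_fetch_snapshot({"label": "dog", "confidence": 0.95}))     # False
```

This is what keeps the model useful even though snapshots are fetched separately: the model decides *when* an image is worth retrieving.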
PhilipDAth
Kind of a big deal

I guess that depends on how good your recognition model is. If it only triggers at a high probability, then it shouldn't need further processing.

 

Note that you do need to allocate an MV Sense API licence to a camera, so I wouldn't expect you to get notification events from cameras without a licence.

 

[Screenshot attached: PhilipDAth_0-1750100800689.png]

 

AmeyGadre
Conversationalist

Thank you Philip and Alessandro!!
