I want to play with three things, all in real time or near real time. All of them require an image to be extracted and passed on to another system for further processing.
It is not clear to me specifically which API(s) I should use.
First, processing simple motion-based alerts. Should I simply configure a standard motion-based alert and use a webhook to push it to me?
Next, processing an event where a person is detected in real time. Specifically, which API should I use to get this event and retrieve an image of that detected person (or people)?
Next, processing an event where a vehicle is detected [on an MV72] in real time. Specifically, which API should I use to get this event and retrieve an image of the vehicle that was detected?
Great question, and we too would be interested. We'll be watching this topic, and maybe @GeorgeB could shed some light for us all.
Hi @PhilipDAth, how are you?
You have asked some good questions here.
Webhooks are an excellent option for anything motion-detection related. They're time-sensitive, so that's what you want.
You can use an MQTT topic as a trigger to fire off a POST request to the Meraki Dashboard snapshot API. Just bear in mind that obtaining a snapshot directly from a camera will take anywhere between 6 and 12 seconds per request. MQTT topics distinguish between a person and a vehicle, but only on the MV72.
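For concreteness, here's a minimal sketch of using an MQTT message as the trigger. The payload shape (`ts` plus a `counts` object with `person`/`vehicle` keys) and the broker wiring are assumptions drawn from this thread, not a documented contract:

```javascript
// Decide from a parsed MQTT payload whether to fire a snapshot request.
// The payload shape ({ ts, counts: { person, vehicle } }) is an assumption;
// check the topics your camera actually publishes.
function shouldTriggerSnapshot(payload, objectType) {
  const counts = payload.counts || {};
  return (counts[objectType] || 0) > 0;
}

// Hypothetical wiring with the `mqtt` npm package (not executed here):
// const client = require("mqtt").connect("mqtt://broker.local:1883");
// client.on("message", (topic, msg) => {
//   const payload = JSON.parse(msg.toString());
//   if (shouldTriggerSnapshot(payload, "person")) {
//     // POST to the Dashboard snapshot API here
//   }
// });
```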
@pgospodarczyk86, as best as I can see, MQTT only returns counts of people and the lux light level, not that motion has been detected.
How can you use MQTT as a trigger? Are there some other kinds of messages it can return?
Hi folks, as you have indicated, MQTT can be used with the camera's object detection model to return, in real time (sub-second), the light level, people in the frame, and, on MV72s, vehicles in the frame. Motion by itself is not something that MQTT currently detects.
Webhooks can be used to alert on motion events, and are typically sent up to a few minutes after the fact. Also, for second-generation cameras (anything apart from MV21/MV71), there is a link in the webhook to the motion recap composite image if that feature is enabled.
For both MQTT real-time object detection and webhook alerts, you can also retrieve a snapshot using the API, which can then be passed on to a CV engine if needed for additional analysis.
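As a sketch of the webhook branch: a receiver only needs to pick the image link out of the alert body. The field name below (`imageUrl`) follows what's described later in this thread; verify it against a real payload from your dashboard:

```javascript
// Pull the motion recap image link out of a webhook POST body, if present.
// The field name is an assumption based on this thread, not a schema guarantee.
function extractImageUrl(webhookBody) {
  if (!webhookBody || typeof webhookBody.imageUrl !== "string") return null;
  return webhookBody.imageUrl;
}
```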
I think I have enough to work this out now.
If using MQTT it will be something like:
Then either (depending on use case):
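A rough end-to-end sketch of such a flow in node.js; `requestSnapshot` and `fetchImage` are placeholders for the POST to the Dashboard snapshot API and the GET of the returned URL, not real SDK functions:

```javascript
// Build the snapshot request body for the two cases: a live frame ("now",
// no timestamp) vs. the frame at the moment the MQTT event fired.
function snapshotBody(tsMs, live) {
  return live ? {} : { timestamp: new Date(tsMs).toISOString() };
}

// Placeholder flow (commented out; requestSnapshot/fetchImage are hypothetical):
// async function onMotionEvent(tsMs) {
//   const { url } = await requestSnapshot(snapshotBody(tsMs, false)); // POST
//   await new Promise((r) => setTimeout(r, 5000)); // give the camera time
//   return fetchImage(url);                        // GET the jpeg
// }

console.log(snapshotBody(1581000000000, false)); // { timestamp: '2020-02-06T14:40:00.000Z' }
```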
I've had a good play with MQTT and retrieving snapshots now. It's not been as good as I hoped.
The MQTT broker sees messages frequently; I might get 5 per second. Once I get a motion event I am interested in, I have to:
* Wait 60s (!!!)
* Call the snapshot API using the supplied timestamp
  - If the call fails, wait another 30s and repeat the above
* Wait 5s
* Retrieve the URL from the snapshot message
  - If the call fails, wait another 5s and repeat the above
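The steps above can be sketched as a retry loop. The waits and attempt counts mirror the list, and the actual API call is injected so the logic can be tested without a camera:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Wait, call, and retry on failure, as in the steps above. `callApi` stands in
// for the POST to the snapshot API (or the later GET of the image URL).
async function callWithRetry(callApi, { initialWaitMs, retryWaitMs, maxAttempts }) {
  await sleep(initialWaitMs);               // e.g. the 60s pre-wait
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callApi();               // snapshot POST / image GET
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      await sleep(retryWaitMs);             // e.g. the 30s (or 5s) retry wait
    }
  }
}
```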
Having to wait 60s (sometimes much longer) from receiving the MQTT message before being able to call the snapshot API causes a big lag, and makes the whole concept of "real time" unworkable.
Another serious issue is the limit of 5 API calls per second (which the snapshot API is subject to). Even with a small number of cameras, I often get snapshot API requests queueing up far faster than the dashboard API can service them.
If you were a retail chain with hundreds or thousands of cameras, the limit of 5 requests per second would kill the ability to use the system. This is a really serious issue preventing the system from being scaled up to large deployments.
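To put numbers on that, here's a back-of-envelope model of a 5-requests-per-second cap showing how quickly even a modest burst of cameras backs up the queue (my own model of the limit, not Meraki code):

```javascript
// Given request arrival times (ms) and a per-second cap, compute when each
// queued call would actually be sent under a simple fixed-spacing model.
function scheduleCalls(arrivalTimesMs, perSecond = 5) {
  const minGapMs = 1000 / perSecond; // 200 ms between calls at 5 req/s
  let nextFreeAt = -Infinity;
  return arrivalTimesMs.map((t) => {
    const sendAt = Math.max(t, nextFreeAt);
    nextFreeAt = sendAt + minGapMs;
    return sendAt;
  });
}

// 20 cameras all firing at t=0: the last snapshot call waits 3.8 seconds.
console.log(scheduleCalls(new Array(20).fill(0)).pop()); // 3800
```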
I'm considering using motion-based webhooks now that I have lost the ability to do real-time processing using MQTT. I'm hoping webhooks are not rate-limited like the dashboard API.
Oh crap. I was expecting the webhook version of motion alerts to be like the email version. However, it seems the motion webhook does not include the image URL.
So it seems it is impossible to do image processing at a rate greater than 5 images per second.
That doc doesn't reflect the latest, and webhooks do include an imageUrl parameter that for MV12/22/72 models link to the motion recap image. Also, if you're using MQTT to trigger snapshots, the wait time should only be a few seconds if coded correctly.
>That doc doesn't reflect the latest,
@CameronMoody the documentation at
is becoming stale. Who looks after that? There are at least two corrections that need to be made now.
I'm using node.js. I wait for MQTT to send me a message I'm interested in. Potentially the event might have been recorded just 50ms ago. Because I know the Meraki API doesn't respond straight away I use setTimeout to call my code at a later time. In the example below, it calls a function called processSnapshot which makes a request to the snapshot API to get a URL of the image.
setTimeout(processSnapshot, 60000, netID, cameraSerial, new Date(ts).toISOString(), processPerson, 0)
The 60000 causes the code to be called in 60s. If I make the number smaller, the API returns error codes. Basically, you can't request a snapshot for an event that the camera has only just recorded; you have to wait until it has had it for 60s or so.
Note this only happens if you supply a timestamp to get the exact moment the camera detected the person/vehicle. If you leave the timestamp out, you can request the frame "now" - except it isn't really now.
Once I have the snapshot response I have to wait another 5s or so and then request the actual image.
I don't think I have coded it wrong. The only difference between it working and not working are the timer values.
If you really think I should get a response to my snapshot request within a couple of seconds do you think perhaps there is a fault with the shard I am on? I'd open a case with support ... but I doubt they could help. Because this is an MQTT triggered event you can't just supply some code (such as curl) to reproduce the issue.
I am now very interested in using Webhooks. This would save me having to use the dashboard API, and get around the 5 requests per second limit. I'm assuming that if you retrieve the image via the URL in the Webhook that it doesn't count against you?
Also, if you have any influence (and yes, I have made a wish), I would love the JPEG images to be higher quality (i.e. less compression). I often pass the images to subsequent systems for machine processing, and there is a noticeable quality deterioration between the recorded video and the single JPEG returned.
MQTT is pretty much live, real-time messages, telling you multiple times a second (as you have noted) the objects seen. If your receiver is also processing in real time, you'll see objects/counts in the message and can fire off a snapshot request right away. That POST API call doesn't include a timestamp, as there's no reason to, since you're asking for a live image of what's going on right now. The only wait involved is a few seconds for that image to be ready to be retrieved (GET), and typically that takes less than 10 seconds (around five, as you noted too).
I have a camera pointed at my apartment front door, and with MQTT, logic that tells me immediately when somebody is seen when I'm not at home. When I come back home, I get an alert consistently within seconds with a snapshot, sometimes before I even have a chance to take off my shoes completely. (This workflow is part of what you can build in the Adventure API lab at cs.co/adventure).
To answer the other questions, using the imageUrl in webhooks does not count as an API call for the rate limit, and our API team will look at updating documentation on the Developer Hub.
>To answer the other questions, using the imageUrl in webhooks does not count as an API call for the rate limit, and our API team will look at updating documentation on the Developer Hub.
While you are fixing documentation errors, the document refers to the MQTT timestamp as "Current time in epoch seconds UTC time". It should actually be "Current time in epoch milliseconds UTC time". An example of where it is wrong is:
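To illustrate why the units matter (my own snippet, not the doc's example): JavaScript's `Date` takes milliseconds, so the published value works directly, while code written against "epoch seconds" would multiply by 1000 and land tens of thousands of years in the future.

```javascript
const ts = 1581000000000; // an MQTT-style "ts" value: epoch milliseconds

// Correct: Date takes milliseconds directly.
console.log(new Date(ts).toISOString()); // 2020-02-06T14:40:00.000Z

// Wrong: treating the value as seconds and converting to ms lands far in the future.
console.log(new Date(ts * 1000).getUTCFullYear() > 9999); // true
```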
>That POST API call doesn't include a timestamp, as there's no reason to why since you're asking for a live image of what's going on right now. The only wait time involved is a few seconds for that image to be ready to be retrieved (GET), and typically that takes less than 10 seconds (around five as you noted too).
You're 100% wrong on this one. This is a link to the documentation for the snapshot API.
Here is the information about the parameter:
Also, here is a link to your own code where you specified the timestamp - except you set it to "none".
So there 100% is a timestamp parameter. The MQTT message gives me the timestamp. If I request a snapshot using that timestamp, I get a picture with the person in it. If I make the same call without specifying the parameter, the person is usually not in the frame. In fact, I suspect it returns a frame from 60s ago.
So in my case, due to camera processing delay, I have to use the timestamp.
I have tried power cycling the camera, and the same behaviour happens. It could be a time synchronisation issue between the shard and the camera, but I suspect that would break a lot of other things.
I'm using the 3.66 firmware.
I suspect that if you modify your code to actually ask for a snapshot at the time the event happened, you'll experience this same issue.
You may be lucky enough that your code hangs around long enough that when you retrieve the snapshot with no timestamp, you are still in the frame.
OK, sorry I wasn't clear: I know the POST API call can include a timestamp variable, and what I meant was that you wouldn't be specifying it in "that call" that's made when MQTT is running live. Calm down, no need to try to prove Cunningham's Law by linking to code that I wrote, which by the way has it set to None as an optional parameter, so can be set, but doesn't need to be.
From my testing and that of many other Meraki Engineers, if you request (POST) the snapshot API without timestamp specified, you get a current image. That's what I mean when I said taking a snapshot of that apartment camera shows me walking in the doorway, and not 60 seconds before I entered the apartment. Attached below as proof (timestamp not specified nor used when requesting the image).
Also, there is no 3.66 firmware yet. 🙂
Another issue that I'm seeing after retrieving quite a few images (maybe a hundred or more) is the snapshot API gives you a URL that simply returns a 404 - even a minute later. I've tried manually retrieving the URLs in my web browser maybe 10 minutes later as well. Some of them just never work. Only a small percentage.
I know the URL is correct, because when you use the wrong URL to retrieve the image you get back a single page saying "error" as opposed to a 404.
I just had a play with using the timestamp parameter and not using it.
I put a clock in front of the camera. I did a simple single snapshot call without the timestamp parameter. The eventual image I get is within 1000ms of the MQTT message. So this is good [enough].
And then I went to use it for real and ran into my problem. I hit the 5-requests-per-second API limit and start queueing the requests. My eventual API request can be some time later than the MQTT trigger, so the result is useless.
So I have to use the timestamp parameter. I can see what happens: using the timestamp, I initially get good response times from the snapshot API, and then, as the queue builds up, it takes longer and longer to respond. Adding in a 60s delay was masking the problem.
I think I am screwed trying to do real-time image processing using MQTT.
I don't think I have any choice but to use webhooks. I hope webhooks will scale to much higher numbers.
So there currently is an issue with firmware 3.33 through 3.36, where the POST call is successful and returns a URL, but the GET on that URL will return the 404 you're seeing a good percentage of the time. If you downgrade to 3.32 that will fix the issue with the GET, and most (just about all) URLs returned should be successfully retrieved.