Hi all. My vMX Azure VM had the status "virtual machine agent status not ready" this morning. In the Meraki Dashboard, the vMX showed offline as well. This happened earlier this year; the resolution then was to stop and start the VM in Azure three times (third time's the charm?). Today, however, doing so did not resolve the issue. After stopping/starting the VM in Azure, the vMX would regain connectivity in Meraki for a few minutes, then go offline again. So I redeployed the vMX: removed it from the Meraki network, added it back, deleted the Meraki vMX app in Azure, then deployed again. The VM is up and the vMX has been connected in the Meraki dashboard for about an hour now; however, the status of the VM in Azure is still "virtual machine agent status not ready". In the past, this status hasn't appeared unless there was a problem. For any of you with vMXs in production: do you see this status in Azure too? Is it a status to ignore? How has vMX performance been for you?
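In case it helps anyone comparing notes, you can pull the raw VM agent status with the Azure CLI rather than squinting at the portal. This is just a sketch: it assumes you're already signed in with `az login`, and the resource group and VM names (`my-rg`, `my-vmx`) are placeholders for your own.

```shell
# Show the VM agent's reported status for the vMX VM.
# A healthy agent reports code "ProvisioningState/succeeded" with
# displayStatus "Ready"; an unhealthy one shows "Not Ready".
az vm get-instance-view \
  --resource-group my-rg \
  --name my-vmx \
  --query "instanceView.vmAgent.statuses" \
  --output table
```

Watching this while the vMX flaps in the Meraki dashboard might show whether the agent state actually tracks the outage or is just noise.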
Have you opened a support case?
No, I've not opened a support case with Meraki or Azure. I assumed Meraki would tell me it's a Microsoft issue, lol.
It's not an issue, as this agent is Windows-based (something like VMware Tools).
The Meraki vMX is Linux-based.
I've seen this on my vMX for four years and still have no problems.
Good point about the vMX being Linux-based. Good to know you see the message on your VM too with no problems.
If you point your web browser at the public IP address of the vMX, what does the local status page say (you might have to enable this via the Meraki dashboard if it is disabled)? Next time this happens (hopefully it won't), it would be worth seeing what that page is reporting.
I've never had a customer's VMX fail in Azure yet, so you are pretty unlucky.
The local status page has the check mark indicating no problems, but of course right now there is no problem. Next time (hopefully not!) I'll check to see what it is reporting.
I'd agree with the unluckiness, lol.
Just for anyone else: I had a customer who experienced exactly the same symptoms over the weekend. Logged a case with support, who advised that we were hitting a couple of suspected bugs. Only rebooting the vMX a number of times brought it back.
The vMX was running 18.105, and 18.106 also looks to be affected. (No reported cases for 18.107 as of yet.)
Thank you for sharing. My vMX in question was on 18.106 when it was having problems. When I destroyed it and deployed a new one, I upgraded it to the latest firmware, 18.107.