vMX in Azure...Concentrator mode with Limited NAT mode❓❓❓❓❓

RobinsonRoca
Getting noticed


Can anyone PLEASE explain to me why the vMX only operates in Concentrator mode? I know that it can be converted to NAT mode through the Meraki support backend, but that only provides a "Limited NAT Mode", which basically overload-NATs to the inside.  This is a virtual appliance, a piece of software that can be given as much compute and memory as needed, so why couldn't the Meraki programmers create a full-fledged MX firewall instead of this bastardized piece of virtual hardware?  Someone has got to give me a good answer for this, lol, and don't say hardware limitations.🤣

alemabrahao
Kind of a big deal

Check this link:  https://documentation.meraki.com/MX/MX_Installation_Guides/vMX_Setup_Guide_for_Microsoft_Azure

I am not a Cisco Meraki employee. My suggestions are based on documentation of Meraki best practices and day-to-day experience.

Please, if this post was useful, leave your kudos and mark it as solved.

Thanks alemabrahao, I know very well how to build the vMX in Azure, I don't need the doc.  That's not my problem.  My problem is why would Meraki build an appliance with such a handicap?

alemabrahao
Kind of a big deal

Well, in my opinion, it's a way for you to route traffic to machines that you have in Azure without having to create a VPN tunnel for each branch. You can concentrate all routing on the hub, and the branches would have routes via SD-WAN.

Of course, this is in case you don't want to publish these servers to the internet.


But my question is this, and I’m not so sure why so many don’t understand this…

 

If I have Virtual Workstations (AVD), or for that matter, servers I want to create NATs for, why is the firewall not capable of being configured like any other Meraki MX firewall?  Meraki can’t seem to answer that question.

 

I have built Cisco FTD firewalls in Azure; they form VPN tunnels and can be used in routed mode to allow full-blown NAT. I have used other vendors’ firewalls to do the same, all in Azure.  Why, specifically, does the Meraki firewall HAVE to be used as a single-arm concentrator, and if you want it in routed mode, it’s a “workaround” to get “some” NATing functionality?

 

I need Meraki staff to step in on this conversation. Believe you me, I will be at Cisco Live, at the Meraki desk, looking for the BU owner to get a real answer…

MyHomeNWLab
A model citizen

To put it another way, "Why can't vMX (Virtual MX) be in Routed Mode (NAT Mode)?".
 
It is related to DC-DC Failover.
To make the Meraki vMX redundant, you must use DC-DC Failover, because the vMX does not support Warm Spare (VRRP).
The implementation differs from on-premises because the public cloud's infrastructure manages the IP addresses and route tables.
 
A redundant vMX hub in DC-DC Failover must be in Concentrator Mode, not Routed Mode (NAT Mode), because duplicate subnets are not allowed in Routed Mode (NAT Mode):
every MX in Routed Mode (NAT Mode) that joins Auto VPN must advertise unique subnets.
 
Auto VPN routes are statically configured, and there must not be more than one route to a given destination.
In effect, a packet would have to ask: "Where do I go if there are duplicate routes in Routed Mode (NAT Mode)?"
With a DC-DC Failover topology, however, there is redundancy between the DCs, so the destination can be reached via either DC.
 
Concentrator Mode allows duplicate routes to be advertised into Auto VPN.
It is only with those duplicated routes that DC-DC Failover is possible.
 
For these reasons, it would make virtually no sense to run a vMX in Routed Mode (NAT Mode) in a production environment, even if it were possible.
 
As for why Meraki chose these specifications, that is another topic.
This product characteristic is a difficult one.
PhilipDAth
Kind of a big deal

It's pretty hard to create a fully-fledged "MX" for a public cloud environment due to the restrictions imposed on what can be done.  It would almost have to be a new code base to provide similar functionality.

 

The majority of customers I have just want to extend their SD-WAN into the cloud easily, and for that purpose, the product is spot on.

Fady
Meraki Employee

@RobinsonRoca I don't disagree that Meraki should have a fully functional vMX in NAT mode that operates as a stateful firewall in the cloud, but I don't see it deployed to terminate SD-WAN tunnels; you would still need to use the vMX as a VPN concentrator, mainly because of routing.

Currently, only VPN concentrators can be configured with BGP to exchange routes with your cloud environment automatically. We have also seen adoption of Azure vWAN and AWS Cloud WAN, which are all based on configuring BGP; hence the need for a VPN concentrator in the environment.
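For what it's worth, the vWAN side of that BGP peering can be sketched with the Azure CLI. Every name, ASN, and IP below is a placeholder, and flag spellings can differ between Azure CLI versions, so treat this as a rough sketch rather than a recipe:

```shell
# Placeholder names/values throughout; verify against your Azure CLI version.
# Create a BGP peering from a Virtual WAN hub to the vMX concentrator's
# private IP and ASN, so routes are exchanged automatically.
az network vhub bgpconnection create \
  --resource-group my-rg \
  --vhub-name my-vhub \
  --name vmx-peer \
  --peer-asn 64512 \
  --peer-ip 10.1.0.4
```

The matching ASN and peer settings would then be configured on the vMX's BGP page in the Meraki dashboard.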

 

That is my 2 cents

CShackelford
New here

I've just deployed my first vMX in Azure and have similar frustrations.  I have it working fine in passthrough mode, but my colleague doing the Azure cloud config cannot figure out how to present the same Azure servers both on the tunnel and make them publicly accessible from Azure.  Meraki support does not seem to know either.

There isn't much to do - on the Azure side you just add a route via the vMX for everything sitting behind MXs.  On the vMX side you just add the Azure subnets as "local subnets".
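If it helps, the Azure side of that can be sketched with the Azure CLI. The resource names and IPs here are just placeholders:

```shell
# Placeholder names/IPs; adjust for your environment.
# Send all branch traffic (everything behind the MXs) to the vMX's
# private IP as a next-hop virtual appliance.
az network route-table route create \
  --resource-group my-rg \
  --route-table-name spoke-rt \
  --name to-branches \
  --address-prefix 10.0.0.0/8 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.1.0.4
```

The route table still has to be associated with the workload subnets, and the vMX's Azure NIC needs IP forwarding enabled, or the next-hop will blackhole the traffic.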

No, I get that part, and we have that working.  Azure has a route for 10.0.0.0/8 pointing to the vMX, and all the terrestrial branches are tunneled to the vMX and working.   Our issue is how to also present these same Azure servers on fixed public IPs natively out of Azure so they can be accessed by internet users.  I'm not that versed on the Azure side, so sorry if my description isn't clear.

Just assign a public IP address to them in Azure.  Then create a DNS entry pointing to that public IP address.
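As a rough Azure CLI sketch of that (all resource names below are placeholders, not anything from your environment):

```shell
# Placeholder names; substitute your own resource group, NIC, etc.
# Create a static public IP and bind it to the server VM's NIC.
az network public-ip create \
  --resource-group my-rg \
  --name web01-pip \
  --sku Standard \
  --allocation-method Static

az network nic ip-config update \
  --resource-group my-rg \
  --nic-name web01-nic \
  --name ipconfig1 \
  --public-ip-address web01-pip

# Print the allocated address to point your DNS record at.
az network public-ip show \
  --resource-group my-rg \
  --name web01-pip \
  --query ipAddress --output tsv
```

You'll also need an NSG rule allowing the inbound ports you want reachable from the internet.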

In addition to what @PhilipDAth suggested, you can enable client VPN on those vMXs to create a secure tunnel for your internet users (assuming those are your corp users).
