I am relatively new to Meraki and trying to understand how this scenario might work.
Currently over 100 spokes connect to a single HA pair at the DC, and BGP is used for sharing DC networks. We have been hitting the limits of the MX appliances and are looking to distribute the tunnel load across multiple hubs: Hub 1 with 50+ spokes and Hub 2 with 50+ spokes, preferably with both hubs configured at all 100 spokes (priority 1,2 vs. 2,1) to provide failover similar to DC-to-DC failover, but with a single DC.
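The alternating-priority idea above could be scripted when pushing spoke VPN settings (for example via the Meraki Dashboard API). This is a minimal sketch of just the ordering logic; the hub IDs are hypothetical placeholders, and the actual API call to apply the list is not shown:

```python
# Sketch: split spokes across two hubs with opposite failover priority,
# so each hub is primary for roughly half the fleet.
# "N_hub1"/"N_hub2" are made-up hub network IDs, not real values.

def hub_priority(spoke_index, hub_a="N_hub1", hub_b="N_hub2"):
    """Return the ordered hub list for one spoke: even-indexed spokes
    prefer hub_a (priority 1), odd-indexed spokes prefer hub_b."""
    if spoke_index % 2 == 0:
        return [hub_a, hub_b]
    return [hub_b, hub_a]

# First four spokes alternate which hub is primary.
order = [hub_priority(i) for i in range(4)]
```

Each spoke would then be configured with its ordered hub list, so a hub failure only fails over half the fleet instead of all 100 spokes at once.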
Potentially we would like to have at least 3 hubs per DC.
1. Would this result in DC routing tables all using a single Hub due to selection by router ID?
2. If only 1 hub is configured at each spoke (either Hub 1 or Hub 2) and manual intervention is used for failover, will both hubs still share all 100 spokes' routes through BGP?
3. As the answers to 1 and 2 may mean this won't work, is there another approach that would let multiple hubs serve spokes from a single DC?
1. The MX does AS prepending on routes advertised from the secondary concentrator.
2. Yes, all MXs know all routes through iBGP.
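The effect of that prepending can be illustrated with a tiny best-path sketch: BGP prefers the advertisement with the shorter AS path, so the secondary concentrator's prepended route loses the tiebreak. The ASN and next-hop names below are illustrative, not taken from this thread:

```python
def best_path(paths):
    """Pick the advertisement with the shortest AS path, mimicking the
    BGP best-path tiebreaker that AS prepending relies on."""
    return min(paths, key=lambda p: len(p["as_path"]))

# Hypothetical example: both concentrators advertise the same prefix,
# but the secondary prepends its own ASN an extra time.
primary = {"next_hop": "primary-mx", "as_path": [64512]}
secondary = {"next_hop": "secondary-mx", "as_path": [64512, 64512]}

best = best_path([secondary, primary])
```

With the prepended path being longer, eBGP peers consistently choose the primary concentrator until it disappears, at which point only the secondary's route remains.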
In order to maintain the integrity of the route table for all MXs in the SD-WAN fabric, Meraki has implemented protection both inbound and outbound on the VPN concentrator. To protect the integrity of the route tables inbound, there is a configurable receive limit. In addition, there is an AS Path ACL placed outbound for all eBGP peers. This AS Path ACL ensures that the Meraki ASN is always the originating ASN, which means Meraki's SD-WAN fabric will never be transit between two DCs. By default this is on; it can be disabled by clicking the checkbox under Allow transit.
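The outbound AS Path ACL described above can be modeled as a simple filter: only routes whose AS path contains nothing but the local ASN (i.e. routes the fabric itself originated) are advertised, so routes learned from another DC's eBGP peer are never re-advertised. This is a conceptual sketch of the behavior, not Meraki's actual implementation:

```python
def advertise_ok(as_path, local_asn):
    """Outbound filter sketch: permit only routes originated inside the
    local ASN, so the SD-WAN fabric never acts as transit between DCs."""
    return all(asn == local_asn for asn in as_path)

# A locally originated route passes; a route learned from another
# DC's peer (foreign ASN 65001 in the path) is filtered out.
local_route_ok = advertise_ok([64512], 64512)
transit_route_ok = advertise_ok([64512, 65001], 64512)
```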
Routes are advertised to eBGP and iBGP peers under the conditions described in the documentation:
https://documentation.meraki.com/MX/Networks_and_Routing/BGP
It sounds like you need bigger MXs at the DCs to cope with more spokes. Adding more hubs and load balancing just complicates things.
An MX85 can support 200 spokes. An MX95 can support 500 spokes.
https://meraki.cisco.com/product-collateral/mx-sizing-guide/?file
Thank you for the replies.
If I understand correctly, the secondary MX would be creating a longer AS path to cause the primary MX to be chosen as the route to a spoke network. All MXs are in a single AS, as they are all part of the same organization; would this work with more than 2 MXs, given there would only be two AS numbers?
I have reviewed the BGP page and understand these statements; however, the following is not clear when we introduce multiple MXs. MX 1 and MX 2 will both have identical local networks, as they are providing connectivity to the same DC. Will it be permitted to add the same local networks on multiple MXs? (I know this is prevented when an MX in routed mode exists in the organization.) And would they even need to be added this way, given these routes will be shared through BGP anyway?
The MX models are MX450s with about 120 spoke sites. The limitation is the number of flows (500k); tunnels, traffic, etc. are not close to becoming an issue.
I am struggling to believe you are hitting any limitations with MX450 hubs with only 120 spokes. It seems very improbable.
Have you spoken to support about the limitation you are hitting?
It was support that indicated we are exceeding the limit of flows the MX can handle. I haven't found a way to verify/monitor the overall flow count on an MX.
The symptoms that initiated the support request were extremely high latency and packet loss.
Strange; I have a client with over 700 spokes (the hubs are MX450s too) and we haven't had any problems. The only difference is that we are not using BGP.
I have a client with just under 300 spokes on MX250's, no BGP, no issue. Most of the spokes have 300Mb/s fibre connections.
I have clients with around 20-30 spokes going over the limit of an MX450. With a default route in the VPN, it's not that hard.
We have 20 Hubs of MX450 and we are balancing 50-120 sites depending on the size. Depending on the load it is pretty easy to hit over 80% utilization.
No full tunnel. BGP is on.
Did you do that balancing manually by setting the hub priority, or is there a way to set a percentage per hub, like:
hub 1: 30%
hub 2: 50%
hub 3: 20%
No, the way we do it is via templates (when possible):
Template 1: HUB1_DC1
Template 2: HUB2_DC1
Template 3: HUB1_DC2
Template 4: HUB2_DC2
That way the 4 hubs across the 2 DCs each get around 25% of the load (assuming each spoke carries the same load, which in practice it never does).
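That template split can be sketched as a simple round-robin assignment of spokes to templates, which is roughly what produces the ~25% share per hub. The spoke and template names here are illustrative only:

```python
def assign_templates(spokes, templates):
    """Round-robin spokes across templates so each template (and its
    bound hub) carries an approximately equal share of the fleet."""
    plan = {t: [] for t in templates}
    for i, spoke in enumerate(spokes):
        plan[templates[i % len(templates)]].append(spoke)
    return plan

# 100 hypothetical spokes across the 4 templates named above.
plan = assign_templates(
    [f"spoke{n}" for n in range(100)],
    ["HUB1_DC1", "HUB2_DC1", "HUB1_DC2", "HUB2_DC2"],
)
```

Note this only equalizes spoke counts, not traffic; as the post above says, per-spoke load is never actually uniform.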
Thanks for responding. That's what I meant by doing it "manually".