VPLS Circuit Latency

Twitch
Building a reputation

Hello to the Crew - for those of you sending traffic over VPLS circuits, what kind of latency numbers do you see?

 

We currently have six of our locations running over a Cogent VPLS with a variety of LECs providing the last mile, including Verizon and Segra.

 

The circuits have 28 - 40 ms of latency continuously, which seems very high to me. I see that at each site and in all directions between sites, regardless of the physical distance involved. 

 

When I called and spoke to a tech support guy at Cogent, I was told he saw the same thing on his side and, basically, "that's just the way it is." When I tried to question why it was so high, he gave some half answer and disconnected the call.

 

We have Cisco 2901 routers running at the edge of our VPLS cloud. Circuits are 100 Mbps. If I run an extended ping from a VPLS router interface to another VPLS router interface, with nothing but the provider's network in-between, I still see 28 - 40 ms of latency.
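
For reference, this is the shape of the test I run (the address and interface name here are placeholders, not our real ones); sourcing the ping from the VPLS-facing interface keeps everything except the provider's cloud out of the path:

    ping 198.51.100.2 repeat 500 size 1400 source GigabitEthernet0/0

A few hundred repeats gives a usable min/avg/max spread rather than a one-off reading.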

 

Is anyone seeing similar numbers on their circuits?

 

Thanks.

 

Twitch

 

 

15 REPLIES
DarrenOC
Kind of a big deal

@cmr - one for you.

Darren OConnor | doconnor@resalire.co.uk
https://www.linkedin.com/in/darrenoconnor/

I'm not an employee of Cisco/Meraki. My posts are based on Meraki best practice and what has worked for me in the field.
cmr
Kind of a big deal

@Twitch what distances do you have?  We have two VPLS networks where the longest link is just over 400 miles (as the crow flies) and the latencies from the remote MX to the primary datacenter time server (via SD-WAN link to the DC concentrator MX) vary as below:

 

<5 miles - 0.7ms

~50 miles - 2.6ms

~200 miles - 7.5ms

~400 miles - 14ms
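
(As a rough sanity check, assuming light propagates through fibre at roughly 200 km per millisecond: 400 miles is about 645 km, so round-trip propagation alone comes to 2 x 645 / 200 ≈ 6.5 ms, and the remaining ~7 ms of the 14 ms is switching and overlay overhead along the path. By the same arithmetic, a tail of a few miles should contribute well under 1 ms.)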

 

Circuits are a mix of 100Mb and 1Gb fibre bearers that are part-provisioned (the bearer runs faster than the purchased bandwidth).

 

We generally terminate the VPLS tails with a dumb Cisco 5-port 1Gb switch (to give us enough ports for the HA pairing) and then go straight into the MXes. Why do you have a Cisco 2901 inline, or are you not running SD-WAN over the VPLS network?

 

Our older MPLS network has latency generally 2-4ms higher than the VPLS networks.

Twitch
Building a reputation

@cmr - We have distances of roughly 6 miles, 430 miles, and 750 miles. Latency is generally in the same range regardless of the distance.

 

We are not running SD-WAN over the VPLS. We had SD-WAN running over our broadband internet connections to each site, but my boss wanted to get away from the site-to-site VPNs, so he decided to switch over to VPLS circuits between each site instead.

 

We are using the 2901s to route traffic between sites via OSPF. My boss's original plan was to use VLAN tagging over the VPLS, but he did not want to pay for the QinQ service, so we went with routing instead.
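
For context, the routing setup is nothing exotic; each 2901 looks roughly like the sketch below (addresses made up for illustration). Because VPLS presents every site with the same Layer 2 segment, each router's VPLS interface sits in one shared subnet and OSPF treats the cloud as a single broadcast network:

    interface GigabitEthernet0/0
     description VPLS hand-off
     ip address 10.255.0.11 255.255.255.0
    !
    router ospf 1
     network 10.255.0.0 0.0.0.255 area 0
     network 10.11.0.0 0.0.255.255 area 0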

 

Your latency is far superior, to say the least. Which company provides your circuits?

 

 

cmr
Kind of a big deal

@Twitch we are primarily in the UK and the VPLS is provided by a company called Exponential-e; the last-mile provider is generally Openreach (formerly part of British Telecom, the old national carrier). Are you able to test without the 2901s? Although they can forward 100Mbps+ of data packets, they are only recommended for up to 25Mbps full duplex when running anything other than basic routing... Hopefully you aren't trying to run them with any access rules or cryptography?

Twitch
Building a reputation

@cmr - we are running basic routing at this point. No crypto or access rules.

 

When the circuits first went live we tested with laptops connected directly to the VPLS hand-off at several sites. Latency numbers were the same as what we see with the 2901s running, 25 - 40 msec.

 

I did not know that tidbit about the limitations on the 2901s.

cmr
Kind of a big deal

@Twitch are the tail circuits full fibre, copper or wireless? If the first, then that is truly terrible latency!

 

At home I have a consumer fibre connection running off a GPON infrastructure and a second consumer connection where the last half mile is copper. Connecting from home to the office over 40 miles away has a latency of about 5ms over the former and 7ms over the latter.

Twitch
Building a reputation

@cmr - fiber to the demarc, copper hand-off to the 2901 gig port.

 

 

cmr
Kind of a big deal

@Twitch that's horrible latency! How bad was the link over the old broadband connections?

Twitch
Building a reputation

@cmr - Over two of our remaining SD-WAN site-to-site VPNs, Virginia to NJ is running 32 ms, and Virginia to SC is running 18 ms out of the MX.

 

The same company that is providing our VPLS circuits is also providing the DIA out of our Virginia data center.

 

The latency is horrible. I agree. I have asked my boss to check the SLA that our provider gave us to see what kind of numbers they guarantee. Like I said in the original post, the tech support guy basically cut me off when I questioned why the latency is so bad. He said it's "normal" for VPLS circuits, and they were seeing the same numbers internal to their network.

 

 

cmr
Kind of a big deal

@Twitch sounds like a plan. 

 

We've had VPLS circuits with the low latencies mentioned above for 8 years, so there's definitely an argument worth having. In the UK, VPLS has lower latency than MPLS or DIA, as it should be a cleaner infrastructure. We also see pretty much zero packet loss...

Twitch
Building a reputation

@cmr - Speaking of output drops, here are the current stats from our 2901. Basically, it's getting killed. We have a gig internal interface connected to the Meraki switch stack, and traffic leaves the 2901 out a FastEthernet interface set to 100/full duplex per the provider's request.

 

[screenshot: interface statistics from the 2901 showing the output drops]

 

At the time we put the 2901s in, we were not aware of the 25 Mbps throughput limitation. The output buffer of Gig 0/0 (the VPLS interface) is filling up, and inbound traffic from Gig 0/1 is getting dropped because there is no space available in the buffer.
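
For anyone following along, the drops show up in the standard interface counters:

    show interfaces GigabitEthernet0/0 | include rate|drops

From what I have read, a deeper output queue (something like "hold-queue 4096 out" on the VPLS interface) can absorb bursts, but it only trades drops for queueing delay; it does not fix the 25 Mbps processing ceiling.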

 

We have ordered 1000 Mbps for our VPLS circuit here at the DC, and will be replacing the 2901s with 4331s, hopefully with the Boost license (my preference), or the Performance license so we have at least 300 Mbps of throughput instead of 25. If the 4331 works out well here, we will probably replace all of the 2901s at the remote sites as well.
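
If it helps anyone else sizing the 4331s: my understanding is that the throughput level is license-enforced and can be checked and raised from the CLI once the license is installed (exact steps vary by IOS-XE release):

    show platform hardware throughput level
    !
    configure terminal
     platform hardware throughput level boost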

 

 

 

DarrenOC
Kind of a big deal

Crikey! That interface is being hammered! Good spot by @cmr. I was never aware of the Boost license until Charles mentioned it to me for one of his own sites many years ago.

Darren OConnor | doconnor@resalire.co.uk
https://www.linkedin.com/in/darrenoconnor/

I'm not an employee of Cisco/Meraki. My posts are based on Meraki best practice and what has worked for me in the field.
cmr
Kind of a big deal

@Twitch when we used routers with our VPLS before moving to MXs, we actually used a routed port on a Cisco L3 switch at the DC.  Due to using only Ethernet and keeping the speeds similar, this worked better than a router.  Do you have a 3750, 3850 or 9300 that you could use?

Twitch
Building a reputation

@cmr - We don't have any of those devices on hand. I'm sure we could get some.

 

"we actually used a routed port on a Cisco L3 switch at the DC" - Since we're currently using an Ethernet port on the router, doesn't this effectively accomplish the same thing, or are you saying that due to router interface throughput/overhead, the switch is actually more efficient at passing the traffic in/out of the VPLS circuits?

 

How were you guys routing the traffic to the remote sites?

 

 

cmr
Kind of a big deal

@Twitch a Layer 3 switch is effectively a router without the flexibility to support multiple protocols or acceleration/transcoding cards etc. As it only has to deal with the single protocol of (usually) Ethernet, its hardware is normally focussed on that and that alone. Technologies such as cut-through switching (where only the frame header is examined before forwarding begins) are often implemented, as opposed to a router's store-and-forward method, massively reducing latency and processing requirements.
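
To make that concrete, the routed-port setup we used was essentially just this on the L3 switch (interface and addressing are illustrative):

    interface GigabitEthernet1/0/1
     description VPLS hand-off (routed port)
     no switchport
     ip address 10.255.0.1 255.255.255.0

"no switchport" takes the port out of Layer 2 switching so it behaves like a router interface, while forwarding still happens in the switch ASICs.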

 

Obviously, features like stateful firewall rules just don't appear on most L3 switches, but as you (and we) aren't using those, a switch can often give you better throughput for the cost.
