Hey Team, I saw some fresh posts on this old thread and wanted to weigh in, mostly to respond to the "what the heck" and "what kind of company are you running" sentiments. First, have a fresh read of this blog post; maybe it will start to smooth things over just a little: https://meraki.cisco.com/blog/2017/08/mea-culpa-and-what-happens-next/
We store customer-uploaded objects (floor plans, custom logos on splash pages, etc.) in a separate service from network configuration data due to the nature of the files themselves. Those data stores AND their backups were accessible from a single back-end system, and it was during routine engineering maintenance over a weekend that the accidental deletion happened. That system has since been updated: the data stores and their backups are no longer accessible from a single interface, so such a data loss cannot happen again.
While these comments are my own and not Meraki's, I believe this was a failure whose recovery is worth celebrating. The bad news is that it caused headaches for a lot of customers; the good news is that Meraki handled it incredibly well. If there is any silver lining, it's that the team responded quickly, took full ownership, notified customers via the website, Dashboard, and email, identified exactly which assets each customer lost, and built tools to restore them within days. A new link appeared in Dashboard shortly after the announcement that let customers see exactly which of their assets had been deleted, and we quickly built tools to make replacing those files as simple as possible. The data cache and checksum-matching tools for bulk file re-upload worked brilliantly. All was right in most customers' worlds within a couple of weeks.
Very few vendors could have recovered from a fumble that large and complex in that time frame. I was proud to be part of the Meraki team that pulled together in a crisis the way we did. From the sales teams and engineers in the field (like me) taking rightfully angry calls and emails from customers, to the product teams, to the engineering and Dashboard teams building recovery tools, it became a textbook play for turning a loss into a win.
We owned the problem, fully admitted what happened, and were completely transparent about it. The crisis was not averted; it hit hard, and we showed up to fix what we broke. We put out excellent documentation, rapid updates, and tools. I and many other engineers worked extra hours to rebuild AP floor-plan maps for customers who did not have, or could not find, local backup copies of their floor plans.
It was a true demonstration that Meraki cares and did everything possible to ensure customer satisfaction and retain loyalty. Yes, there were some rough days and weeks and heated discussions, but in the end not a single one of my customer relationships was weakened; some were unchanged, and many actually came out stronger.
As for network configurations, none were impacted by that data loss. All of your network configs were safe, sound, and backed up all along. In fact, there are tertiary backups in a different data center, so there is never just one copy of your configuration in the cloud, but as many as three. Even if the whole cloud were down and Dashboard inaccessible, you would lose the ability to make active changes via Dashboard, but not the ability to run your network: Meraki equipment has dual-partition NVRAM and always holds both its current and last-known-good firmware and config file. So I could argue the config lives in as many as four places, not three. The whole architecture obviates the need for on-site TFTP servers, or even old-school physical media backups sitting on top of the equipment.
Also, for anyone feeling like you have to take manual screenshots as backups: if you have a "golden config" for a network and want to be able to restore that configuration, that's simple enough to do by cloning it to an alternate network that doesn't have any physical devices in it. Then, if you ever need to roll back to the config from that day/time, you move the devices (by serial number) from their current network into the "backup/golden" network, and they revert to that older config.
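That clone-and-move flow can also be scripted rather than clicked through. Below is a minimal sketch, assuming the current public Meraki Dashboard API (v1) endpoints for cloning a network (`copyFromNetworkId`) and removing/claiming devices; the org ID, network IDs, serials, and helper names are placeholders of my own, not anything from Meraki:

```python
# Sketch: "golden config" backup and rollback via the Meraki Dashboard API (v1).
# ASSUMPTIONS: endpoint paths per the public v1 API docs; all IDs/serials below
# are placeholders. Uses only the standard library so the send() helper is optional.
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def clone_network_request(org_id, name, product_types, source_network_id):
    """Request that clones an existing network's config into a new, empty
    network -- the 'golden' copy with no devices in it."""
    url = f"{BASE}/organizations/{org_id}/networks"
    body = {"name": name,
            "productTypes": product_types,
            "copyFromNetworkId": source_network_id}
    return url, body

def remove_device_request(network_id, serial):
    """Request that removes one device from its current network
    (it stays in the org inventory)."""
    return f"{BASE}/networks/{network_id}/devices/remove", {"serial": serial}

def claim_devices_request(network_id, serials):
    """Request that claims devices into the golden network, so they pick up
    that network's saved configuration."""
    return f"{BASE}/networks/{network_id}/devices/claim", {"serials": serials}

def send(api_key, url, body):
    """POST one of the requests above with the API key in the auth header."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example rollback (placeholder IDs): pull a device out of its live network,
# then claim it into the golden network to revert its config.
# send(API_KEY, *remove_device_request("N_live", "Q2XX-XXXX-XXXX"))
# send(API_KEY, *claim_devices_request("N_golden", ["Q2XX-XXXX-XXXX"]))
```

The builders are split out from `send()` so you can review exactly what would be posted before touching a production org; a real script would also add error handling and respect the API's rate limits.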
So, to the original question of why I bother using your service: for one thing, because of everything I described above. Last fiscal year, Meraki had 57% year-over-year growth, and growth is still accelerating. That just does not happen in a $1.5B business in the IT industry unless there is something very special and unique about the company and its products and architecture. Sure, I'm biased, but the numbers don't lie.
Hope that helps, just wanted to share a little insight from my side on last year's data loss.