Thanks for the replies, Gents.
@RomanMD- I believe you are correct. Meraki support recommended using the transit VLAN interface that resides on the MX as the RADIUS client address - in our case 192.168.20.2; it used to be 10.0.0.2, the default gateway for our data network. I added a RADIUS client entry for 192.168.20.2 to the domain controller via Network Policy Server (based on a doc I received from support), but that didn't change anything - clients are still unable to authenticate via the VPN because a "domain controller cannot be found."
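For anyone following along, here is a rough sketch of how I sanity-check whether NPS will answer RADIUS at all, independent of the MX, using the pyrad library. The server IP, shared secret, and test account are placeholders, pyrad needs a FreeRADIUS-style dictionary file on disk, and whatever host you run it from has to be defined as a RADIUS client in NPS:

```python
# Minimal RADIUS sanity check against NPS using pyrad (pip install pyrad).
# Assumes a FreeRADIUS-format "dictionary" file is in the working directory;
# server IP, shared secret, and credentials below are placeholders.
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

srv = Client(server="10.0.0.10",          # placeholder: NPS / DC address
             secret=b"SharedSecretHere",  # must match the RADIUS client entry in NPS
             dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                           User_Name="testuser")
req["User-Password"] = req.PwCrypt("testpassword")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("Access-Accept: NPS is reachable and the shared secret matches")
else:
    print("NPS answered with code %d (reject/challenge), so RADIUS itself is up" % reply.code)
```

If this works from a host on the data network but the MX still fails, that points back at routing/source-IP rather than NPS itself.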
We are using Meraki authentication for the VPN as a temporary workaround until we can get RADIUS working again.
@PhilipDAth- You are correct - I deleted the VLAN interface that the VPN was using on the MX and moved it to the switch stack (10.0.0.0/22). The only remaining interfaces on the MX are the Management VLAN (192.168.100.0/24) and the Transit VLAN (192.168.20.0/24). All other interfaces now reside on the stack.
You are also correct regarding the default route on the stack, which points to the MX's transit VLAN IP, 192.168.20.2.
There should not be any overlapping subnets overriding one another - the client VPN pool is 10.10.10.0/24, which is unique in our environment. It seems like traffic is arriving via the client VPN but not being passed correctly to the 10.0.0.0/22 network to reach the DC for authentication. There is a static route on the MX for each subnet, pointing down to the stack's transit VLAN interface at 192.168.20.3. Internally, traffic is flowing like a hot knife through butter. Traffic on the site-to-site VPN is also working fine from all remote locations - everything is accessible.
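In case it helps anyone reproduce the check, this is roughly how I audit the MX static routes from the Dashboard API to confirm every downstream subnet really points at 192.168.20.3 (a sketch using the official meraki Python SDK; the API key and network ID are placeholders):

```python
# Quick audit of MX static routes via the Meraki Dashboard API
# (pip install meraki). API key and network ID are placeholders.
import meraki

dashboard = meraki.DashboardAPI("YOUR_API_KEY", suppress_logging=True)
network_id = "N_123456789"  # placeholder network ID

# Each downstream subnet should have a static route whose next hop is the
# stack's transit VLAN interface, 192.168.20.3.
for route in dashboard.appliance.getNetworkApplianceStaticRoutes(network_id):
    print(route["subnet"], "->", route["gatewayIp"], "enabled:", route.get("enabled"))
```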
@cmr- You are correct - users connecting via the client VPN cannot reach the DC for authentication, nor anything else on the primary data center network, 10.0.0.0/22, which is where the DC and the file shares reside.
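To separate "routing is broken" from "authentication is broken", I can run something like this from a laptop connected to the client VPN and see whether basic TCP toward the DC even completes (just a sketch; the DC address is a placeholder for whatever host on 10.0.0.0/22 you test against):

```python
# Basic reachability test from a client VPN session toward the DC.
# The target IP is a placeholder for a host on 10.0.0.0/22.
import socket

DC_IP = "10.0.0.10"  # placeholder DC address
for port, service in [(389, "LDAP"), (445, "SMB"), (88, "Kerberos")]:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((DC_IP, port))
        print(f"{service} ({port}): reachable")
    except OSError as exc:
        print(f"{service} ({port}): failed - {exc}")
    finally:
        s.close()
```

If all three fail from the VPN but work from an internal host, the problem is purely the path from the client VPN pool into 10.0.0.0/22.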
I re-used every interface - same IP scheme, gateways, and VLAN numbers. The VLANs that were living on the MX now live on the switch stack dot for dot - I essentially picked them up and set them down on the stack. In fact, the only new VLANs are the Management VLAN and the Transit VLAN, both of which live on the MX and the stack with their respective interfaces on each.
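For completeness, this is roughly how I confirm the L3 interfaces really did land on the stack with the same subnets and VLAN IDs (another Dashboard API sketch; the network and switch-stack IDs are placeholders, and the stack routing endpoints assume an MS layer 3 stack):

```python
# List the layer 3 interfaces now defined on the MS250 stack
# (pip install meraki). IDs below are placeholders.
import meraki

dashboard = meraki.DashboardAPI("YOUR_API_KEY", suppress_logging=True)
network_id = "N_123456789"   # placeholder network ID
stack_id = "1234"            # placeholder switch stack ID

for iface in dashboard.switch.getNetworkSwitchStackRoutingInterfaces(network_id, stack_id):
    print(iface.get("vlanId"), iface.get("subnet"), iface.get("interfaceIp"))
```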
Our stack consists of three MS250-48 switches. We do not yet have an active/standby setup, so all switches are active in the stack. All production VLANs have been removed from the MX, as noted earlier.
I truly appreciate your thoughts, guys. This is the only fallout from the change. If I can figure this out and get it going, I can move on to implementing OSPF.
Thanks!!
Twitch