URL blocking
Whenever a client fetches a web page on this network, the requested URL is checked against the lists configured below to determine if the request will be allowed or blocked.
Pattern matching follows these steps:
- Check if the full requested URL is on either list, e.g. http://www.foo.bar.com/qux/baz/lol?abc=123&true=false
- Cut off the protocol and leading "www" from the URL, and check if that is on either list: foo.bar.com/qux/baz/lol?abc=123&true=false
- Cut off any "GET parameters" (everything following a question mark) and check that: foo.bar.com/qux/baz/lol
- Cut off paths one by one, and check each: foo.bar.com/qux/baz, then foo.bar.com/qux, then foo.bar.com
- Cut off subdomains one by one and check those: bar.com, and then com
- Finally, check for the special catch-all wildcard, *, in either list.
If any of the above produces a match, then the request will be allowed through if it is in the whitelist and blocked otherwise. (That is, the whitelist takes precedence over the blacklist.)
If there is no match, the request is allowed, subject to the category filtering settings above.
HTTPS requests can also be blocked. Because the URL in an HTTPS request is encrypted, only the domain checks will be performed (www.foo.bar.com, foo.bar.com, bar.com, com, and the special catch-all *).
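
To make the matching and precedence logic concrete, here is a rough sketch in Python. This is my own illustration, not anything from the vendor: the candidate patterns follow the steps quoted above, the lists are treated as plain sets of strings, and I'm reading "the whitelist takes precedence" as the whitelist winning across all of the checks rather than step by step.

```python
from urllib.parse import urlsplit

def match_candidates(url, https=False):
    """Build the patterns to check, following the steps quoted above."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    base = host[4:] if host.startswith("www.") else host   # drop a leading "www."
    candidates = []

    if not https:
        candidates.append(url)                               # full requested URL
        query = "?" + parts.query if parts.query else ""
        candidates.append(base + parts.path + query)         # protocol and "www" removed
        candidates.append(base + parts.path)                 # GET parameters removed
        path = parts.path.rstrip("/")
        while "/" in path:                                   # cut off paths one by one
            path = path.rsplit("/", 1)[0]
            candidates.append(base + path if path else base)

    # Domain checks: for HTTPS the URL is encrypted, so these are the only checks.
    domain = host if https else base
    candidates.append(domain)
    while "." in domain:                                     # cut off subdomains one by one
        domain = domain.split(".", 1)[1]
        candidates.append(domain)

    candidates.append("*")                                   # catch-all wildcard
    return list(dict.fromkeys(candidates))                   # drop duplicates, keep order

def is_allowed(url, whitelist, blacklist, https=False):
    """Whitelist takes precedence over the blacklist; no match at all means allow.
    (Category filtering from the settings above isn't modeled here.)"""
    patterns = match_candidates(url, https)
    if any(p in whitelist for p in patterns):
        return True
    return not any(p in blacklist for p in patterns)
```

With that reading, as soon as any pattern generated for a request appears in the whitelist, the blacklist entries are never consulted for that request, so a blacklist entry can't override a broader whitelist entry.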
So it doesn't sound like URL blocking will do what you want, because the whitelist takes precedence. The only other option I can see is traffic shaping.
I am not using this service so I can't test it, but I would create one rule limiting facebook.com bandwidth to 20 kbps and then another rule giving the subdomain 5 Mbps or more of bandwidth.
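
Purely to illustrate the intent (I can't verify how this service actually orders or matches its shaping rules), the effect I'd be aiming for is something like the sketch below. The subdomain name, the limits, and the "most specific rule wins" resolution are all my own placeholders for the example:

```python
# Hypothetical per-host bandwidth limits in kbps; the most specific matching
# rule wins. Check how your firewall actually evaluates overlapping rules.
SHAPING_RULES = {
    "facebook.com": 20,          # effectively unusable
    "sub.facebook.com": 5000,    # full speed for the one subdomain (placeholder name)
}

def limit_for(host):
    """Return the limit of the most specific rule whose domain suffix matches."""
    best = None
    for pattern, kbps in SHAPING_RULES.items():
        if host == pattern or host.endswith("." + pattern):
            if best is None or len(pattern) > len(best[0]):
                best = (pattern, kbps)
    return best[1] if best else None   # None = no shaping rule applies

print(limit_for("www.facebook.com"))   # 20
print(limit_for("sub.facebook.com"))   # 5000
```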