Do Facebook's web-crawling bots respect the Crawl-delay: directive in robots.txt files?
No, it doesn't respect the Crawl-delay directive, or robots.txt in general.
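For reference, this is the kind of rule we're talking about; facebookexternalhit is the user-agent string Facebook's fetcher sends, and the delay value here is only an example. In our experience it's simply ignored:

    User-agent: facebookexternalhit
    Crawl-delay: 5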
Contrary to other answers here, facebookexternalhit behaves like the meanest of crawlers. Whether it collected the URLs it requests from crawling or from Like buttons matters little when it works through every one of them at an insane rate.
We sometimes get several hundred hits per second as it goes through almost every URL on our site, and it kills our servers every time. The funny thing is that when that happens, we can see Googlebot slow down and wait for things to settle before slowly ramping back up. facebookexternalhit, on the other hand, just keeps pounding our servers, often harder than the initial burst that brought us down.
We have to run much beefier servers than our actual traffic requires, just because of facebookexternalhit. We've done tons of searching and can't find a way to slow it down.
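If you need to protect your backend, one approach (sketched here assuming an nginx front end; the zone name, rate, and burst values are purely illustrative) is to rate-limit requests whose user agent matches facebookexternalhit and answer the excess with 429. This doesn't make Facebook crawl any slower, but it keeps the surplus requests from ever reaching the application servers:

    # Hypothetical nginx sketch (http{} context): throttle facebookexternalhit
    # at the edge so excess requests get a 429 instead of hitting the backend.
    map $http_user_agent $fb_limit_key {
        default                  "";          # empty key = request not rate-limited
        "~*facebookexternalhit"  "facebook";  # one shared bucket for the crawler
    }

    limit_req_zone $fb_limit_key zone=fbcrawler:10m rate=10r/s;

    server {
        location / {
            limit_req zone=fbcrawler burst=20 nodelay;
            limit_req_status 429;
            # ...usual proxy_pass / fastcgi_pass configuration continues here
        }
    }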
How is that a good user experience, Facebook?