5.5. How tests evolved

Several of our testing methods evolved over time, from very basic beginnings to significantly more involved and advanced implementations. The badlinks plugin was a typical example: it started as a very basic test and later evolved to become more useful and comprehensive.

The initial aim of the plugin was to detect and reject emails of a particular form: unsolicited messages whose sole purpose was to advertise a website via links embedded in the body.

There were several possible approaches to programmatically detecting emails like this, but our initial approach was to extract URLs from each incoming message and compare those to a blacklist. A typical SPAM link contained several parts: a hostname, a path, and a key. The path and key components would typically vary between messages, so we were only concerned with examining the hostname, and testing that against our blacklist file.
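The check itself was simple. As a rough illustration, here is a minimal sketch of that flat-file approach in Python; the blacklist path and the URL-matching expression are assumptions made for this sketch, not the plugin's actual code:

    import re
    from urllib.parse import urlparse

    BLACKLIST_FILE = "/etc/mail/badlinks.blacklist"   # hypothetical path

    def load_blacklist(path=BLACKLIST_FILE):
        """Return the set of known-bad hostnames, one per line in the file."""
        with open(path) as fh:
            return {line.strip().lower() for line in fh if line.strip()}

    def hostnames_in(body):
        """Extract the hostname of every http:// or https:// URL in a message body."""
        urls = re.findall(r'https?://[^\s<>"]+', body)
        hosts = (urlparse(url).hostname for url in urls)
        return {host for host in hosts if host}

    def is_spam(body, blacklist):
        """Reject the message if any embedded link points at a blacklisted host."""
        return any(host in blacklist for host in hostnames_in(body))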

The blacklist was regularly updated and uploaded to each satellite MX machine, via rsync, whenever an admin added new entries to it. Since the plugin ran on every one of those machines, links were extracted from every single incoming email, and the URLs seen were recorded.

Having a global list of URLs contained in rejected emails allowed us to develop a simple online tool, which ordered URLs based upon the number of times they'd been seen, and loaded each site in a browser along with a "SPAM: Yes/No" checkbox. An admin could test several hundred sites in a short space of time, and keep current with emerging URLs easily.
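The ordering step of that tool amounted to little more than a frequency count. A minimal sketch, assuming the extracted hostnames were appended to a plain log file with one hostname per line (the filename here is purely illustrative):

    from collections import Counter

    def hosts_by_frequency(logfile="/var/log/badlinks/extracted-hosts.log"):
        """Return (hostname, count) pairs, most frequently seen first."""
        with open(logfile) as fh:
            counts = Counter(line.strip().lower() for line in fh if line.strip())
        return counts.most_common()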

It quickly became apparent that, although there was an administrative overhead, the technique itself was useful: a single site could be advertised in over 20,000 emails in the space of 48 hours.

Although testing incoming hyper-links against a flat file of known bad sites was a simple process and reasonably fast to carry out, we knew that such an approach was doomed to failure if we ever fell behind in maintaining our list. So we looked at different ways we could evolve the test and reduce the administrative burden involved in maintaining the blacklist.

One change which led to a significant improvement came from the observation that many of the sites were hosted by botnets. Detecting botnet-hosted sites could be achieved with simple DNS lookups, which made the process very lightweight.

Because botnets don't like to have a single point of failure, they would typically implement web hosting across a number of compromised machines. A site hosted in this fashion might have a name in DNS which resolved to 5+ different IP addresses, each of them a residential IP address and none of them with a reverse DNS entry.[1]

Many large or popular websites, such as http://www.microsoft.com/, are configured with redundant hosting via partners such as Akamai, and they too typically resolve to multiple IP addresses. The difference between those sites and botnet-hosted sites is obvious, though: the legitimate addresses sit in well-run hosting networks and carry proper reverse DNS entries, rather than being scattered across residential connections with no reverse DNS at all.

So, in addition to testing domain names against our list of known-bad sites, we also heuristically decided that links were bogus if the hostname section resolved to multiple IP addresses, and over half of those IP addresses were missing reverse DNS entries or were obviously hosted upon residential IP addresses.
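A sketch of that heuristic, using nothing beyond the resolver calls in Python's standard library. The "residential" test here is a crude pattern match over reverse DNS names, and both the patterns and the two-address threshold are assumptions made for this illustration rather than the rules we actually used:

    import re
    import socket

    # PTR-name fragments that often indicate consumer connections; purely
    # illustrative, not the plugin's actual rules.
    RESIDENTIAL_HINTS = re.compile(r"(dsl|dyn|dialup|cable|pool|ppp)", re.I)

    def looks_like_botnet_host(hostname, min_addresses=2):
        """Flag a hostname which resolves to multiple addresses where over
        half of them lack reverse DNS or look like residential connections."""
        try:
            _, _, addresses = socket.gethostbyname_ex(hostname)
        except socket.error:
            return False                    # unresolvable: nothing to judge

        if len(addresses) < min_addresses:  # not "multiple" addresses
            return False

        suspicious = 0
        for ip in addresses:
            try:
                ptr_name = socket.gethostbyaddr(ip)[0]
                if RESIDENTIAL_HINTS.search(ptr_name):
                    suspicious += 1         # reverse DNS looks residential
            except socket.error:
                suspicious += 1             # no reverse DNS entry at all

        # "over half" of the addresses must look suspicious
        return suspicious > len(addresses) / 2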

We also experimented with testing links against the surbl.org lookup service, but we found this didn't increase our accuracy in any significant fashion, so these lookups were abandoned.
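For reference, such a lookup is a plain DNS query against a name built from the domain being tested; a rough sketch against SURBL's combined list might look like this:

    import socket

    def listed_in_surbl(domain):
        """Return True if the domain appears in SURBL's combined list.
        A listed domain resolves under multi.surbl.org; an unlisted one
        yields NXDOMAIN, which surfaces here as a lookup error."""
        try:
            socket.gethostbyname(f"{domain}.multi.surbl.org")
            return True
        except socket.gaierror:
            return False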

Many of our tests evolved from their initial implementation, either to gain new facilities or to improve their correctness. The change from comparing sites against a blacklist to performing botnet detection was a particularly interesting evolution. Although this evolution was generally a positive one, it still qualifies for its own entry in our list of mistakes: Failure To Use DNS Caches.

Notes

[1] Wikipedia has an article on this topic: fast flux hosting.