Will Filters Kill Spam?

December 2002

(This article is derived from one I wrote for the January 2003 issue of the Computer Security Journal.)

I get about 45 spams a day, but only about one a week makes it into my inbox. If everyone had this much of their spam filtered out, spammers would give up sending it.

Will that happen?

The first generation of spam filters used rules to recognize specific spam features. Now a new generation of statistical spam filters seems to offer significantly better performance. Statistical filters look at the entire contents of each incoming email and decide whether it's spam based on its overall similarity to previous spams. This new kind of filter routinely catches over 99% of current spam with near zero false positives.

The simplest statistical filter can be described in a paragraph. Users discard all their spam in a separate trash can. At intervals, a program looks through all the user's email and, for each token, calculates the ratio of its frequency in spam to its combined frequency in spam and nonspam. For example, if "cash" occurs in 200 of 1000 spams and 3 of 500 nonspam emails, its spam probability is (200/1000) / (3/500 + 200/1000), or .971. When a new email arrives, the filter extracts all its tokens and finds the fifteen whose probabilities p1...p15 are furthest (in either direction) from .5. The probability that the mail is a spam is then

    p1 p2 ... p15
    --------------------------------------------
    p1 p2 ... p15 + (1 - p1)(1 - p2)...(1 - p15)

I use a cutoff of .9, but it doesn't matter too much where you put it, because most probabilities end up close to 0 or 1.
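
For concreteness, here is a minimal sketch of such a filter in Python. It implements just the paragraph above; the tokenizer regex, the .5 score for unseen tokens, and the assumption that both training corpora are non-empty are simplifications of mine, not features of any particular filter.

    import re
    from collections import Counter

    TOKEN = re.compile(r"[A-Za-z0-9$'-]+")     # crude tokenizer (my assumption)

    def tokenize(text):
        return set(t.lower() for t in TOKEN.findall(text))

    def train(spams, nonspams):
        """Map each token to the ratio of its frequency in spam to its
        combined frequency in spam and nonspam."""
        spam_count = Counter(t for m in spams for t in tokenize(m))
        good_count = Counter(t for m in nonspams for t in tokenize(m))
        probs = {}
        for t in set(spam_count) | set(good_count):
            s = spam_count[t] / len(spams)       # e.g. 200/1000 for "cash"
            g = good_count[t] / len(nonspams)    # e.g. 3/500
            probs[t] = s / (s + g)               # .971 for "cash"
        return probs

    def score(probs, text, n=15):
        """Combine the n token probabilities furthest from .5."""
        ps = sorted((probs.get(t, 0.5) for t in tokenize(text)),
                    key=lambda p: abs(p - 0.5), reverse=True)[:n]
        prod = comp = 1.0
        for p in ps:
            prod, comp = prod * p, comp * (1 - p)
        return prod / (prod + comp)

    # Treat a message as spam if score(probs, message) > .9 (the cutoff).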

In the past few months, many new statistical spam filters have appeared. There are now over 30 available. Apple has one, MSN has one, AOL is said to have one in beta, and you can be pretty sure Yahoo is working on one.

Some of these filters were inspired by an article I wrote called A Plan for Spam, but two open-source statistical text classifiers (a superset of spam filtering), ifile and CRM114, have been around for years, and Apple and Microsoft have also been working on statistical filtering for a while. The reason for the sudden explosion of statistical filters is probably simply that spam grew to be enough of a problem that people started paying serious attention to it, and statistical filters are what you get when you do that.

These filters don't all work exactly the same way. The algorithm described above is called a "naive Bayes classifier," because it uses a degenerate case of Bayes' Theorem to combine probabilities. Most of the new open-source filters are naive Bayesian, and so is MSN's, I suspect. Apple's filter calculates spam probabilities based on "adaptive latent semantic analysis," which as far as I can tell amounts to the same thing.

Regardless of how they calculate probabilities, these new statistical filters all share some important benefits:

1. They're very effective. Even the simplest statistical filter will catch 99% of current spam. The most effective filter I know of, Bill Yerazunis' CRM114, catches 99.8%. (Mine is lagging behind at about 99.7%.)

2. They generate few false positives. False positives, legitimate emails that are mistakenly treated as spam, are the bane of spam filtering. Statistical filters yield fewer false positives because they consider evidence of innocence as well as evidence of guilt. A token that occurs disproportionately often in your nonspam mail, like the name of a friend, will count as much toward decreasing the spam probability as a token like "cash" would to increasing it.

3. They learn. You don't have to look through piles of spam and figure out rules to identify them. Whatever's in there, the filters tend to find it. Like us, statistical filters notice that the token "cash" is a sign of spam. However, they also notice that "modalities" (used in a surprisingly high proportion of Nigerian spams) and "FF0000" (html for bright red) are even better signs of spam. And as spammers change their messages or their infrastructure, the filters adapt.

4. They let each user define what's spam. Although statistical filters could be used at the network level, ideally the probabilities should be calculated individually for each user. To the extent users' definitions of spam differ, their inboxes will reflect this.

5. They're hard to trick. There are only two ways to get past a statistical filter: use fewer bad words, or use more innocent words. Spammers can't do the latter, because the most innocent words (words related to your friends and family, your work, your interests) vary for each user. So they have to use fewer bad words. They can't use weird spellings (e.g. "Freee" instead of "Free") because filters quickly learn those. Their only option is to use vaguer and vaguer euphemisms, or simply to have some generic sounding text, and a link.

What's going to happen as this new generation of spam filters gets delivered to end-users? The most exciting possibility is that they may make spam go away.

What the spammers care most about is response rate. In any kind of direct marketing, revenue is proportional to response rate. Spammers are satisfied with a much lower response rate than direct mailers, because their costs are so much lower, but response rate is still the key to how much they make.

Filtering hits spammers right in their center of gravity: if recipients don't see the spam, they don't respond to it. If we can filter out 95% of spam, we decrease spammers' revenues by a factor of 20. If we can filter out 99.5%, we decrease revenues by a factor of 200. Spammers' costs are low, but not that low. In an article in the Detroit Free Press, one spammer said that he charged a flat fee of $22,000 to send mail to his entire list of 250 million addresses. If filters cut response rates by a factor of 100, the average value of what he was selling would sink to $220. I doubt that would even cover his costs.
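
The arithmetic here is simple enough to check in a few lines. The 95%, 99.5%, and $22,000 figures are the ones quoted above; the only premise is that revenue is proportional to response rate.

    def revenue_factor(catch_rate):
        """If a filter catches this fraction of spam, revenue falls by the
        reciprocal of the fraction that still gets through."""
        return 1 / (1 - catch_rate)

    print(revenue_factor(0.95))    # ~20: filter 95% and revenue drops 20x
    print(revenue_factor(0.995))   # ~200: filter 99.5% and revenue drops 200x
    print(22000 / 100)             # 220.0: the $22,000 mailing is now worth ~$220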

Filters should at least save us from seeing most spam. But if they can decrease spammers' response rates enough, spam will no longer pay, and the spammers will actually stop sending it.

Or so we hope. But there is an alarming possibility here. If email programs aren't designed right, spam might still be seen by the very people the spammers most want to reach.

The person who responds to spam is a rare bird. Response rates can be as low as 15 per million. That's the whole problem: spammers waste the time of a million people just to reach the 15 stupidest or most perverted.

If we want to make spam stop working, we have to somehow prevent the 15 idiots from responding to the spams that are sent to them. Otherwise the spammers will keep sending it to everyone. So, strangely enough, whether or not filtering will kill spam depends entirely on what those 15 idiots do.

The great danger is that whatever filter is most widely deployed in the idiot market will require too much effort by the user. Bayesian filters calculate spam probabilities based on the spam and nonspam mail each user receives. There have to be two kinds of deletion, ordinary delete and delete-as-spam, and the user has to delete each kind of message in the right way. The problem is, the 15 idiots are probably also the 15 users who won't bother.

As long as the 15 idiots continue to see spams, we're all going to be sent them. So whether filters put an end to spam depends on how the email software used by the idiots is designed. My guess is that idiots are pretty passive, so the key here is to make the default do the right thing. Hear me, O AOL and Microsoft: when you release Bayesian filters, don't make all the users train their own filters from scratch. Use initial filters based on mail classified by all your users. That way, as long as the user just keeps blindly clicking, most email will end up in the right corpus.

Do that, and spam will decrease, which will mean lower infrastructure costs, and thus greater profits for you.

There is one other possibility we ought to worry about, though probably not as much. What if the 15 idiots see the spam in the junk mail folder and respond to it? I don't think this happens very much. We can see that from the efforts that spammers take to prevent their mail from ending up in the junk mail folders of services like Yahoo Mail and Hotmail. But designers of email software ought to at least bear this in mind. Quietly flush spam folders, and don't encourage users to look through them and open the mails. (One good way to do this is to list some of the incriminating words, in random order, along with the subject line of each spam. Then users won't be tempted to open spams with ambiguous subject lines.)
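
As a sketch of that last suggestion: the tokens would come from the same tokenizer the filter uses, and probs from its training; the choice of five words and the function name are arbitrary.

    import random

    def spam_summary(probs, tokens, n=5):
        """Pick a few of the most incriminating tokens, in random order, to
        show next to the subject line of a filtered spam."""
        ranked = sorted(set(tokens), key=lambda t: probs.get(t, 0.5),
                        reverse=True)
        picked = ranked[:n]
        random.shuffle(picked)
        return " ".join(picked)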

If the email software implements them properly, statistical filters should send nearly all spams straight to the spam folder, and this will decrease response rates dramatically.

The spammers most threatened by filters are the opt-in spammers. Opt-in spammers are the ones on the more legitimate end of the spam spectrum. They buy your address from other companies whose terms of service (you didn't notice that in paragraph 47?) allow them to sell customers' email addresses. They're called opt-in spammers because they usually claim in the mail they send you that you have asked to receive email from them.

One of the biggest opt-in spammers claims to have 60 million unique email addresses, nearly all of them domestic. 60 million addresses is more than half the US online population. If half of US Internet users have asked to receive email from this company, it should be pretty easy to find someone who has. And yet if you start asking around among your friends, I bet you won't find a single one who remembers asking to receive their valuable offers.

The arrival of better filters is going to put an end to the fiction of opt-in, because opt-in spam is especially vulnerable to filters. Opt-in spammers don't try to conceal their identities. They don't use fake return addresses like the real hardcore spammers. The text of the email openly says who it's from. And so opt-in spam is very easy to filter. The simplest statistical filter can catch 100% of it.

Once statistical filters are widely deployed, most opt-in spam will go right into the trash. This should flush the opt-in spammers out of their present cover of semi-legitimacy. If filtering destroys their response rates, they're either going to go out of business, or they're going to have to admit what they are and start taking measures to conceal the origin and nature of their messages.

All spammers will have to, if they want to get past statistical filters. Will they be able to?

We already see plenty of evidence of spammers tweaking their messages to get past simple-minded spam filters based on specific words or patterns. What could they do to get past statistical filters, and will it work?

If a statistical filter notices that you're using spam words, you're in trouble. So the spammer has two options: prevent the filter from seeing the words, or use different words.

Spammers have been trying to prevent filters from recognizing the words in their messages for at least six months. The most common trick is to use variant spellings-- for example, to use 0 instead of O and 1 instead of l. These are no problem for Bayesian filters once the corpus reaches a decent size, and indeed are a positive aid in recognizing spam.

Spammers also try to prevent filters from recognizing the tokens in the mail by breaking them up-- for example, by using whitespace or punctuation characters in the middle of words:

li ke th.is

But this doesn't work well either. One reason is that legitimate email doesn't have many individual letters or word fragments in it, so a fragment like "ke" or "th" will tend to have an above-average spam probability. Another is that they can't do this sort of obfuscation on headers and urls, and those are enough by themselves to identify most spam. We could probably reconstruct the broken words if we had to, but this hasn't even been necessary so far.

Spammers sometimes insert html comments at random places within words, but this is also easy to ignore. In general, on the token front, it is a question of closing loopholes. There are only so many tricks spammers can use, and we deal with them individually. So far none has been insurmountable.

People sometimes ask, what if spammers sent the mail as an image? They do already, and this kind of spam is easy for filters to catch. Tokens like "img" and "href" have spam probabilities like those of pornographic terms. Plus there is the domain name and filename in the url, and, as always, the headers. On the whole, spam containing html is easy to filter. The most hardened spammers seem to know this and already avoid html in their mails. Whatever the spam of the future looks like, it probably won't contain html.

I think the only territory left to the spammers will be vocabulary. They won't be able to prevent filters from seeing the words they're using, so they'll have to use different words.

It won't work simply to pad spams with random words. In the algorithm I described earlier, only the most statistically significant fifteen words contribute to the probability, and neutral words like "onion", no matter how many there are of them, can't compete with the incriminating "viagra" for statistical significance. To outweigh incriminating words, the spammers would need to dilute their emails with especially innocent words, i.e. those that are not merely neutral but occur disproportionately often in the user's legitimate email. But these words (the names of one's friends and family, terms one uses in one's work) are different for each recipient, and the spammers have no way of figuring out what they are.
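
You can see why padding fails directly from the combining formula: a token with probability .5 multiplies the numerator and the denominator by the same amount, so it cancels out. A quick check, using made-up token probabilities of .99 and .97 to stand in for "viagra"-grade words:

    def combine(ps):
        """The combining formula from earlier."""
        prod = comp = 1.0
        for p in ps:
            prod, comp = prod * p, comp * (1 - p)
        return prod / (prod + comp)

    print(combine([0.99]))                     # 0.99
    print(combine([0.99] + [0.5] * 14))        # still 0.99: the padding cancels
    print(combine([0.99, 0.97] + [0.5] * 13))  # ~0.9997: a second bad word hurts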

I think the only thing that will work for the spammers will be to avoid using incriminating words. Interestingly, this will work to different degrees depending on the type of spam. For mortgage spams it will be deadly. How can you pitch people a mortgage without using terms like "mortgage", "rate", "lend" and "borrow"? Whereas there are other kinds of spams that could be rephrased not to use identifiable spam terms at all.

One fairly common type of spam is sent by an online dating service (oops, sorry, an irresponsible affiliate of an online dating service), pretending to be an email from a female member who wants to meet you. Just sign up and search for her profile. There can be very little in the text of such mails for a filter to notice. I've seen some where the text was indistinguishable, statistically, from an email an actual person might send you. All you have to go on there is the url, and the headers.

Of course, the url and the headers are a lot to go on. And they are what I think filtering will ultimately come to depend on. I predict that the spam of the future will look like this:

To: joeuser@aol.com
From: john@smith.com
Subject: hey

Check out this site.

http://www.randomdomain.com

That is, some fairly innocuous text, followed by a link. The domain name in the link will have to be one that hasn't been used before, or the filters will recognize it.

What will the response rate be for this type of spam? That is the big question. It will presumably be lower than for a spam loaded with sales pitches, or the spammers wouldn't bother to write the sales pitches, but how much lower? If the response rate is low enough, then forcing spam to become this generic will be enough to kill it.

To the extent spam can be made this generic and still work, there are still techniques we can use to recognize it. I can think of two ways to deal with an unknown domain. One would be to send out a crawler to look at the page you're being sent to. You could filter the page much as you would the body of an email. If widely used, this would have the amusing effect of pounding any site advertised in a spam with a large number of hits. If the overloaded server took a while to respond, that would be evidence in itself-- a kind of peer-to-spammer-to-peer filtering network.
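
A sketch of the crawler idea: the page text would then be scored with the same filter sketched earlier, e.g. score(probs, fetch_text(url)). The function name is mine, and a real crawler would also need rate limiting, caching, and robots.txt handling.

    import re
    import urllib.request

    def fetch_text(url, timeout=10):
        """Download the page a spam links to and crudely strip the html tags,
        leaving text that can be scored like the body of an email."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return re.sub(r"<[^>]+>", " ", html)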

Another way to deal with unknown domains would be to have a server to which filters could report domains seen in spams. A filter unsure about a new url could query this server and get a quick answer back. You might get false positives in the case of a link to some image or news story that was making the rounds, but that kind of email tends to be sent by friends who would already be on your whitelist.
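
A toy, in-memory version of such a clearinghouse, to show the shape of the idea; a real one would be a network service queried by many filters, and the threshold of ten reports is arbitrary.

    from collections import Counter
    from urllib.parse import urlparse

    class DomainReports:
        """Collects domains seen in spams and answers queries about new ones."""
        def __init__(self, threshold=10):
            self.threshold = threshold
            self.reports = Counter()

        def report(self, url):
            self.reports[urlparse(url).netloc.lower()] += 1

        def looks_spammy(self, url):
            return self.reports[urlparse(url).netloc.lower()] >= self.threshold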

And remember, there are still the headers. At the moment, spammers leave their fingerprints all over them. Some spam software actually identifies itself by name in the headers. One day we'll smile at the quaintness of that. But even after they clean up the obvious stuff, spammers will tend to use the same infrastructure for a while. Unless of course they use open relays. That would be retro. But I think it would put a lot of spammers out of business if they had to resort to that. Certainly clients like Disney would balk at it.

Strangely enough, the spam I expect to see in the future will be a lot like the spam of the past. There will be less of it, as there was in the past. And the spammers will be real outlaws, as they were before spam got corporate in the last couple years.

I don't want to underestimate the ingenuity of the spammers. They have found their way past every barrier we've put in front of them before. But I don't think this is inevitable. I think we just didn't work too hard on barriers till recently. Statistical spam filters represent a more serious effort to fight back. If they don't put an end to spam, they'll at least ensure that we see less of it.