Kludges and Spam Filters
November 27, 2006
Roscoe, NY
"Say you're writing a program and you discover you've done something wrong, like every time you try to use the program, a button pops up. Most programmers go in, analyze their program, find out what causes the button to pop up and cure it so it doesn't do that. [John] Draper [aka Captain Crunch] would go in and code around the button so when the bug occurs, the program knows it's made an error and fixes it, rather than avoiding the error in the first place. The joke is, if Draper were writing math routines for addition and he came up with the answer 2 + 2 = 5, he would put a clause in the program, if 2 + 2 = 5, then the answer is 4."
— Chris Espinosa as quoted in Steven Levy, Hackers: Heroes of the Computer Revolution (1984), ch. 13
This type of solution is commonly called a "kludge." The kludge doesn't address the actual problem. The kludge addresses a symptom of the problem and hopes nobody will notice the difference.
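To make Espinosa's joke concrete, here's a toy sketch of the difference between fixing the bug and coding around it (the Python and the function names are mine, purely for illustration; none of this appears in the book):

    # A buggy addition routine: it botches the one case 2 + 2.
    def buggy_add(a, b):
        if a == 2 and b == 2:
            return 5  # the underlying defect
        return a + b

    # The honest fix: correct the routine itself.
    def fixed_add(a, b):
        return a + b

    # The kludge, in the spirit of the joke: leave the defect alone
    # and special-case the wrong answer after the fact.
    def kludged_add(a, b):
        result = buggy_add(a, b)
        if a == 2 and b == 2 and result == 5:
            result = 4  # "if 2 + 2 = 5, then the answer is 4"
        return result

    print(fixed_add(2, 2))    # 4, because the cause was removed
    print(kludged_add(2, 2))  # 4, but only because the symptom was patched over

Both print 4, but only one of them is still a correct addition routine.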
Another example: When Deirdre first bought her house in the Catskills (where we moved the cats to yesterday and where we'll be staying through the month of December), we shopped around for various items we needed, and got a bread machine at Macy's for the amazing price of $19.95. But this bread machine — ToastMaster Model TBR15 — has a design flaw. The pan is too large and the wrong shape for the kneading blade, and unmixed ingredients are often left on the sides. The designers evidently recognized this flaw and decided on a software solution: Following the first kneading and rising cycle, the bread machine beeps to alert you to "scrape ingredients from sides of pan." (manual, pg. 7)
It's a kludge. It doesn't actually fix the design flaw of the bread machine. The kludge just compensates for the design flaw by shifting the burden to the user.
The common spam filter is also a kludge of sorts. It attempts to solve a specific problem, but it only addresses a symptom of that problem. The problem isn't really that I (or any of us) get a couple hundred junk emails a day. The problem exists on the other end — with the actions of the people sending these emails.
I don't use a spam filter for several reasons, but the most significant is that I want to be kept aware of how bad the problem really is. I don't want to hide the problem and pretend it's solved, because it's not solved. Even if I were to use a spam filter with 100% accuracy, I would still have to deal with other people's spam filters. I've had legitimate emails I've sent to people bounce back because of over-zealous spam blockers. Do we really think that as spam filters get more sophisticated, the number of false positives will drop? Or will the false positives just become more inexplicable because they're determined by some Bayesian neural network?
Spam is a form of man-made pollution. There are a small number of people causing it, and a very large number of people suffering from it. I'm not in favor of total prohibition, and I'm opposed to capital punishment, but if people wish to pollute our inboxes, they should have to pay for the privilege.