I think the whole idea of an automatic censor is absurd (on several levels), but putting that aside for a moment: Am I being daft or do none of the suggested solutions address the problem in the first post?
I assume you want a banned word (a particular series of characters) obscured, but not when it appears inside a larger word. Adding versions with a space at the start and end will not achieve this unless you also remove the standard version.
If you remove the standard version, then a version with a space at only the start or only the end will still incorrectly catch "cumulative" and "locum" etc. Meanwhile a version with a space at both ends will fail to catch hyphenated forms (unless those are specifically banned too), or banned words at the end of sentences or otherwise flanked by punctuation, e.g. in brackets, since those characters are not spaces.
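To make those failure modes concrete, here is a minimal sketch. The original filter's language isn't stated, so Python is purely an assumption, and `space_padded_filter` is a hypothetical name for the approach being criticised:

```python
BANNED = "cum"  # the word implied by the "cumulative"/"locum" examples

def space_padded_filter(text: str) -> str:
    """The naive approach: only replace the banned word when flanked by spaces."""
    return text.replace(f" {BANNED} ", " *** ")

# Misses the word at the end of a sentence or next to punctuation:
print(space_padded_filter("He said cum."))       # unchanged: no trailing space
print(space_padded_filter("(cum) in brackets"))  # unchanged: parentheses, not spaces

# Whereas dropping the padding entirely wrongly catches larger words:
print("cumulative".replace(BANNED, "***"))       # -> "***ulative"
```

Only the exact space-flanked form is ever caught, which is precisely the gap described above.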
You could do something more sophisticated with regular expressions but anyone who has received spam for one of the more than a sextillion potential spellings of \/i@g®@ knows that creative spelling goes a long way to defeating these kinds of defences.
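A regular expression with word boundaries handles the punctuation and hyphen cases above, but — as the spam point illustrates — creative spellings sail straight through. A sketch, again assuming Python (the filter's actual language isn't stated):

```python
import re

# \b matches a word boundary: a transition between a word character
# and a non-word character (space, punctuation, start/end of string).
pattern = re.compile(r"\bcum\b", re.IGNORECASE)

def censor(text: str) -> str:
    return pattern.sub("***", text)

print(censor("A cum, in brackets (cum)."))  # both occurrences caught
print(censor("cumulative and locum"))       # correctly left alone
print(censor("c\u00fcm"))                   # "cüm" slips through untouched
```

The boundary handling is a genuine improvement over space-padding, but a determined poster only needs one substituted character to defeat it.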
Getting back to my philosophical objections to the very idea of an automatic censor, would you be surprised if I told you that the word you are trying to censor in your example was Word of the Day at dictionary.com on Sunday August 10, 2003?
I prefer someone's suggestion that the filter send an email to notify a human who can then decide what to do.
And on a technical note, the current filter doesn't seem to differentiate between words in text and link URLs (for example). This could lead to some very odd behaviour.
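For instance, a blind substring replacement will happily rewrite the inside of a URL and break the link. A hypothetical illustration (Python is an assumption; `naive_filter` is an invented name for the behaviour being described):

```python
def naive_filter(text: str) -> str:
    """Replace the banned substring everywhere, with no notion of context."""
    return text.replace("cum", "***")

# The "cum" hiding inside "documents" gets clobbered, breaking the link:
print(naive_filter("See http://example.com/documents/"))
# -> See http://example.com/do***ents/
```

A filter that parsed out URLs (or at least skipped anything that looks like one) before substituting would avoid this particular oddity.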