Mostly, what I've seen happen is a bit more complicated. Let's say you do the experiment once, and the results come out positive. You think about it for a bit, locate a potential design flaw, tweak the experiment, and then come out with a negative result. Your original positive result gets file-drawered, and you publish the negative result in a peer-reviewed journal. Or perhaps you get another positive result, find another design flaw, and repeat the test again.
This is how science works, right? Continually eliminating sources of bias from experiments to get better results. But it also leads to "file drawer" results: your preconceived notion of what the results should be dictates when you stop looking for experimental flaws. Now, a good scientist will eventually say, "Maybe my preconceived notion was wrong" in the face of repeatedly positive results, and an even better scientist will ask, "How might this negative result have been biased by the experimental set-up?" after a negative result, but this does not happen every time. And in fields such as parapsychology, where effect sizes are so small that a positive result differs from a negative one by only a handful of hits, this sort of unintentional file-drawer bias is even more likely to occur.
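You can actually watch this stopping rule create bias in a quick simulation. Here's a minimal sketch (everything in it is an illustrative assumption, not data from any real study): a hypothetical true hit rate of 0.52 against 0.50 chance, 500 trials per experiment, and two experimenters who see identical data but "find a design flaw and rerun" whenever a run contradicts their preconception, publishing the first run that agrees with it.

```python
import random

CHANCE = 0.5        # hit rate expected under the null ("no psi")
TRUE_RATE = 0.52    # hypothetical small true effect, purely for illustration
N_TRIALS = 500      # trials per experiment
Z_CUTOFF = 1.645    # one-sided 5% significance threshold
MAX_RERUNS = 10     # how many times the experimenter will "fix" the design
N_EXPERIMENTERS = 10_000

def run_experiment(rng):
    """One experiment: N_TRIALS Bernoulli trials; return the observed hit rate."""
    hits = sum(rng.random() < TRUE_RATE for _ in range(N_TRIALS))
    return hits / N_TRIALS

def is_positive(rate):
    """Call the result 'positive' if the hit rate is significantly above chance."""
    se = (CHANCE * (1 - CHANCE) / N_TRIALS) ** 0.5
    return (rate - CHANCE) / se > Z_CUTOFF

def published_result(rng, stop_when_positive):
    """Rerun ('locate a flaw, tweak the set-up') until the outcome matches the
    experimenter's preconception, then publish that run. Earlier runs go in
    the file drawer."""
    rate = run_experiment(rng)
    for _ in range(MAX_RERUNS):
        if is_positive(rate) == stop_when_positive:
            break
        rate = run_experiment(rng)
    return rate

rng = random.Random(42)
honest   = [run_experiment(rng) for _ in range(N_EXPERIMENTERS)]
skeptic  = [published_result(rng, stop_when_positive=False) for _ in range(N_EXPERIMENTERS)]
believer = [published_result(rng, stop_when_positive=True) for _ in range(N_EXPERIMENTERS)]

for label, rates in [("honest", honest), ("skeptic", skeptic), ("believer", believer)]:
    print(f"{label:>8}: mean published hit rate = {sum(rates) / len(rates):.4f}")
```

Both biased experimenters sample from exactly the same process as the honest one, yet the skeptic's published mean drifts below the true rate and the believer's drifts above it, purely because each one stops hunting for flaws at a different point. With an effect this small, a single experimenter never sees anything that looks like misconduct.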
(I don't mean to pick on you, blutoski - I merely used "you" in the illustrative sense. And of course, I would never accuse skeptics of exhibiting such biases more than believers do.)