Oh, the experimenter effect. As far as I can tell, it’s a purely post hoc thing, invoked as an explanation only after the results have been collated. Several studies have been set up specifically to look for this effect but found nothing (two recent large-scale examples: Schlitz, Wiseman, Radin and Watt (was this 2006? Or 2007? I don’t remember), and Smith and Savva, 2008), while other experiments have found the effect. So you see, sometimes it’s there, sometimes not.
The experimenter effect can be partly explained by pro-psi researchers writing up their results in the most favourable way. A good example is York (1976), who used two measures of the results: one scored statistically significant, the other didn’t. York wrote the paper up as if only the significant measure had been used, but when Hyman read a preliminary version of the paper, he saw that the other measure had been the primary one (Hyman, JoP 49, 1985). There are other examples too.
Kennedy’s paper, written in 1976, uses data from before Honorton put in place the Parapsychological Association’s policy of publishing all results, both good and bad. A further problem with this era of parapsychology is Rhine’s habit of not naming and shaming fraudsters, so there’s no way of knowing how “clean” the data in Kennedy’s paper is (additionally, he cites Soal’s findings in one example).
People who don't accept the current collection of data haven't looked at it closely enough, or have listened to too many debunkers, or have strong personal/ideological/psychological aversions toward the subject...which can be as strong as a phobia.
So nothing we can say will ever convince you that parapsychology has not adequately demonstrated that psi can be measured in a laboratory setting?