I disagree with 3. You can always make new strategies. I mostly agree with everything else, except for the "I do not know when to stop using it". If your strategy is sound, you will have a calculation on when your system is no longer working.
My comments on this referred specifically to the paper, not to anything you or anyone else might be doing. The point was that the method described in the paper had no system, no method of analysis for finding a new strategy. Just make one up, back-test it, and use it until you decide it's not working (which can't be defined, given the constraints). But this is kind of a pointless discussion, as I can't provide the paper at the moment.
As for the rest, I don't know what to say. I respect your right not to share IP. I will note that this is how it has gone every single time I have ever discussed these things on this or any other forum. Never any citations. Never any evidence. In any individual instance I grant the poster's right not to post data, though I reserve the right to point out that they have provided zero evidence, and I will strenuously object if they end up giving financial advice to others. But it's the sum of all those actions that interests me. We have ample studies showing no positive expectation over appropriately long time periods, balanced against claims of efficacy that quickly fade away (every time!) when the light of reason is shone on them.
I never have that problem with proponents of efficient markets, or proponents of value investing, or proponents of some sort of index investing. Citations are readily proffered. I'll happily share with you any of my thoughts on investing. Not individual stocks; I'm not licensed, and don't give stock tips. But IP? It's all yours for the asking, backed up with citations to the literature if necessary. Of course, I am a nobody, but you will see the same behavior from respected value investors: Graham, Dodd, Greenblatt, Schloss, Fisher, Eveillard, Berkowitz, Buffett, Munger. You will see the same from EMT proponents and others: Fama, Malkiel, Bogle, etc., etc., etc. But no TA folks. Strange, huh?
Here's a link to a paper cited by Wikipedia as positive evidence for simple TA rules. They find some rules that generate excess returns on back-tested data. Cool! Except, read it. The rules stop working: "However, we also find that the best technical trading rule does not provide superior performance when used to trade in the subsequent 10-year post-sample period."
It's classic data mining, IMO. You can always find some algorithm to fit data. It's guaranteed (I leave DrKitten to provide the math). That is in no way evidence that something works. Furthermore, when it stops working just as soon as new data starts coming in, well, you ain't predicting the market, you are data mining.
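You can see the effect with a toy simulation. This is a hypothetical sketch, not the paper's method: it generates a pure random-walk price series (so nothing is predictable by construction), back-tests a grid of arbitrary moving-average crossover rules on the first half, and picks the in-sample winner the way a data-mined study would. The rule names, grid, and parameters are all invented for illustration.

```python
import random

random.seed(1)

def make_prices(n):
    # Random-walk prices: by construction there is no exploitable pattern.
    prices = [100.0]
    for _ in range(n):
        prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))
    return prices

def backtest(prices, fast, slow):
    # Go long whenever the fast moving average is above the slow one;
    # sum the simple next-day returns earned while long.
    total = 0.0
    for t in range(slow, len(prices) - 1):
        ma_fast = sum(prices[t - fast:t]) / fast
        ma_slow = sum(prices[t - slow:t]) / slow
        if ma_fast > ma_slow:
            total += prices[t + 1] / prices[t] - 1
    return total

prices = make_prices(2000)
in_sample, post_sample = prices[:1000], prices[1000:]

# Try ~140 arbitrary (fast, slow) rule combinations and keep the in-sample best.
rules = [(f, s) for f in range(2, 20) for s in range(21, 60, 5)]
best = max(rules, key=lambda r: backtest(in_sample, *r))

print("best rule:", best)
print("in-sample return:", backtest(in_sample, *best))
print("post-sample return:", backtest(post_sample, *best))
```

With enough rules in the grid, the in-sample winner looks profitable even though the data is pure noise; there is no reason to expect that performance to carry into the post-sample half, which is exactly the pattern the paper reports.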
The conclusion is particularly laughable. They note that the rules they developed in fact do not work beyond the data set they used to develop them, but then hypothesize this could be due to nonrepresentative data during the post-sample period, or to the markets having become more efficient. They didn't even mention the possibility that they are ex post facto fitting algorithms to data. I had this problem in my first job with a scientist at NIH, who kept changing the rules until he got some data hits on cancer incidence and mortality in US Hispanic populations. No hit on this idea? Well, look at just Cuban Americans. No hit there? How about the ones aged 25-34? No hit? How about 25-34 female Cuban Americans? A hit! Yay! Let's publish! (And let's ignore that we are now looking at 6 data elements.) Back testing data is not evidence. It's a suggestion.
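That subgroup-fishing problem is easy to demonstrate too. A minimal sketch, with entirely made-up subgroup labels and purely random "incidence" data: slice the data enough ways and, depending on the random draw, some slice will usually wander far enough from the true rate to look publishable, even though nothing real is going on.

```python
import random

random.seed(0)

# Invented subgroup labels, for illustration only.
groups = ["Cuban", "Mexican", "Puerto Rican"]
sexes = ["female", "male"]
ages = ["25-34", "35-44", "45-54", "55-64"]

hits = []
for g in groups:
    for s in sexes:
        for a in ages:
            # 50 fair coin flips per subgroup: the true "incidence" is 0.5
            # everywhere. Flag a "hit" if the count strays about two standard
            # deviations (sd ~ 3.5) from the expected 25.
            cases = sum(random.random() < 0.5 for _ in range(50))
            if abs(cases - 25) >= 8:
                hits.append((g, s, a, cases))

n_tests = len(groups) * len(sexes) * len(ages)
print(n_tests, "subgroups tested")
print("spurious 'hits':", hits)
```

Each individual test only false-alarms a few percent of the time, but across 24 slices the expected number of spurious "hits" approaches one; run enough slices (or enough back-tests) and a hit is nearly guaranteed.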
That's just one study, but a routinely quoted one.
It sure is interesting that, despite the number of claims of efficacy, no one can say "roger, this is the way to do it" or "roger, here is a paper with no special pleading".
edit: thank you for an enjoyable conversation.