Lucifuge Rofocale
Muse
a) The error was corrected, and I posted the link to the corrected paper. I don't do whack-a-mole, especially when it's based on vague non-assertions. You have already conceded that Lambert was correct when he exposed the degrees/radians screw-up. Get over it and move on.
b) You were wrong to say that the paper with the error was the one we were discussing.
c) You were wrong to say that M&M were the authors of the paper with the error.
d) You were wrong to say that those errors invalidated the original McIntyre & McKitrick paper.
e) You insist on listing names supporting AGW instead of discussing the papers.
f) You haven't seen the link on Lambert's site that you gave me, where the error, the paper, and the validity of the MBH study are being discussed by actual scientists.
g) You keep demanding that I acknowledge what McKitrick and I have already acknowledged and corrected, which I did, but it in no way affects the actual discussion here.
h) You refuse to discuss the paper, quote outdated evidence as backup, and steer the discussion away from the MBH flaws.
i) You don't want to recognize that the paper with the error has NOTHING TO DO with M&M's refutation of MBH.
So it seems you are not interested in the facts, only in character assassination. If you are interested in facts, please provide answers to these questions (from http://timlambert.org/2005/06/barton/all-comments/#comments):
Let’s not keep debating in generalities about this stuff. Appended below is McIntyre’s list of wants from www.climateaudit.org/inde…. In my opinion, all, or almost all, of it falls into the categories of “I’ve shown that X is a mistake; prove to me it wasn’t” or “I’ve shown that Y is a discrepancy; explain to me why it wasn’t” or just “I’ve shown that Z calculation is wrong; show me your work so I can point to your exact mistake,” and not “missing data.” Of all of these, 5 might be missing data and 21 might be a needed clarification about data used, although I haven’t looked at them in enough detail to know one way or the other. The rest appear to fall into the categories I outlined. Data or not, is it reasonable to ask Mike to provide this type of information?
-the selection of proxies which was supposedly done according to “clear a priori” criteria. The “clear a priori” criteria were not reported and have not been disclosed in response to inquiries. Without a statement of these “clear a priori” criteria, it is obviously impossible to replicate the proxy selections of MBH98.
-the selection of tree ring chronologies listed in the original SI according to the criteria listed in Mann et al. [2000] (which expanded on MBH98 information). [link]
-the explanation in the Corrigendum for the discrepancy between the tree ring sites listed as being used in the original SI and the tree ring sites actually used. The exclusion of the excluded sites (and the inclusion of included sites) cannot be replicated according to the stated criteria. [link]
-the data set archived in July 2004 does not match the description provided in MBH98 or in the Corrigendum SI. [link]
-the proxy rosters in each calculation step from pre-Corrigendum information, including a total of 159 series said to have been used in MBH98 [Mann et al., 2003]. [link]
-the use of 24 proxy series in the AD1450 step as reported in MBH98. [link]
-failure to use 6 available proxy series in the AD1500 step (including 5 series used in the AD1450 step). [link]
-the selection of the 1082 “dense” gridcells and 219 “sparse” gridcells according to the selection criteria stated (for the first time) in the Corrigendum SI. [link]
-the archived “sparse” and “dense” instrumental series. [link]
-the number of retained temperature PC series in each calculation step. The number retained appears to depend on short-segment standardization, which we criticized in connection with tree ring series. The number retained cannot be replicated with non-erroneous PC methods.
-the number of tree ring PC series retained in each network/calculation step according to the retention policy (Preisendorfer’s Rule N) reported at realclimate.org [link] in December 2004. No information was provided in MBH98; the Corrigendum talked about a scree test being used as well. I can replicate the illustration at realclimate for the AD1400 North American network, but as soon as you try other networks/periods, the criterion can’t be replicated. [A sketch of how a Rule N test works appears after this list.] [link]
-the Corrigendum states that PC series were re-calculated for each calculation step, but this is not correct. The actual selection of steps in which fresh calculations are made is impossible to replicate.
-is there an unreported step commencing in 1650? If you plot the confidence intervals, there is a step here, but there is no mention anywhere of a step commencing in 1650 in MBH98 or the new SI. An archived reconstructed PC also begins in 1650: what’s going on here? [link]
-the 5 archived RPCs cannot be replicated. Here I wish to emphasize that my emulations of the RPCs were identical to those of Wahl and Ammann. However, they are content if their emulation is roughly similar to MBH98; I am not. [link]
-why does the RPC replication deteriorate in the early 15th century? The 15th century is obviously a problem area. Given other issues with the 15th century, I’m really interested to see what’s going on here. [link]
-the reconstructed NH temperature series from the RPCs. [link]
-In MM03, we reported collation problems in the data set archived at Mann’s FTP site to which we were originally given access. After publication of MM03, Mann made the duplicate accounts available and a new FTP site appeared. Mann said that the collation errors in the previous accounts did not exist in the actual accounts. However, Rutherford referred to a file pcproxy (retrieved from the Wayback machine) long before our inquiry. I think that it’s quite possible that the collation errors in the first data set did not exist in the actual data set (and not much turns on this in terms of the final results), but, for good order’s sake, I’d like to see code demonstrating that collation errors were not made. I have a sneaking suspicion that they were made and that this is one of the reasons why Mann is so reluctant to show his code.
-MBH98 and Mann et al. [2000] both stated that MBH results were “robust” to presence/absence of all dendroclimatic indicators. A fortiori, this entails that MBH98 results are “robust” to the presence/absence of the bristlecones (and the PC4). Wahl and Ammann do not report that they have replicated this result – I wonder why not?
-MBH98 stated that they had done R2 cross-validation tests and Mann told Natuurwetenschap that his reconstruction passed an R2 cross-validation test. Again we can surmise the answer – I presume that Wahl and Ammann have replicated the catastrophic failure of the R2 test and have similarly replicated MBH’s withholding of this information. This is not really the type of replication that one wants. [A sketch of the verification statistics at issue appears after this list.]
-confidence interval calculations in MBH98. [link] [link]
-data citations for instrumental series. These are currently attributed only to NOAA, which is not an adequate citation.
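For anyone who hasn't met Preisendorfer's Rule N (the retention policy in the item above): here is a minimal sketch of how such a test works. This is my own illustration in Python, not MBH98's code; the Gaussian null, the 95% cutoff, and all the names are my assumptions. The idea is to keep a PC only if its eigenvalue is bigger than what pure noise of the same size would produce.

```python
import numpy as np

def rule_n(data, n_sims=1000, quantile=0.95, seed=0):
    """Number of leading PCs whose share of variance beats the same
    quantile of shares obtained from pure-noise data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_series = data.shape

    # Eigenvalue shares of the actual (properly centered) data, largest first.
    centered = data - data.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centered, rowvar=False))[::-1]
    eig = eig / eig.sum()

    # Null distribution: eigenvalue shares from same-sized Gaussian noise.
    null = np.empty((n_sims, n_series))
    for i in range(n_sims):
        noise = rng.standard_normal((n_obs, n_series))
        ev = np.linalg.eigvalsh(np.cov(noise, rowvar=False))[::-1]
        null[i] = ev / ev.sum()
    cutoff = np.quantile(null, quantile, axis=0)

    # Retain the leading run of PCs that beat their noise benchmarks.
    significant = eig > cutoff
    return n_series if significant.all() else int(np.argmin(significant))
```

The replication complaint in the list is precisely that a test of this kind, applied to networks other than AD1400 North America, does not reproduce the published retention counts.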
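And since the R2 point keeps coming up: here is what the two verification statistics in dispute actually compute, again as my own Python sketch with illustrative names. The r2 statistic is just the squared correlation over the held-out verification period; the RE statistic scores the reconstruction against the naive prediction of "always the calibration-period mean." A reconstruction can post a positive RE while failing r2 badly, which is exactly the disagreement.

```python
import numpy as np

def verification_r2(recon, instrumental):
    """Squared Pearson correlation between a reconstruction and the
    instrumental record over the verification period."""
    r = np.corrcoef(recon, instrumental)[0, 1]
    return r ** 2

def reduction_of_error(recon, instrumental, calibration_mean):
    """RE = 1 - SSE(recon) / SSE(calibration-mean prediction).
    RE > 0 is conventionally read as 'skill'."""
    sse = np.sum((instrumental - recon) ** 2)
    ref = np.sum((instrumental - calibration_mean) ** 2)
    return 1.0 - sse / ref
```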
Bill Bud Says:
July 1st, 2005 at 1:06 am
Re#77:
Tim, I seem to have missed where the issue of centered/non-centered is addressed in that link, except for comment #2, which seems to agree with what I said: if you do not center the data on its mean, you automatically increase variance (the mean squared deviation about any reference point equals the variance plus the squared distance from that point to the mean). Unless you meant main point (4), which I discuss below.
Have you ever done a PCA? One of its principal [pun intended] values is that it can be used to remove the high-frequency noise that seems omnipresent in measurement. In statistical terminology, you are discarding irrelevant variance. Unfortunately, PCA does little to settle the question of what's irrelevant and what is not.
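Here is a minimal sketch, in Python, of the denoising use of PCA I mean; the toy data and all the names are mine, purely for illustration. You project onto the leading components and reconstruct, discarding the "irrelevant variance" carried by the trailing PCs.

```python
import numpy as np

def pca_denoise(data, n_keep):
    """Reconstruct `data` (observations x series) from its first
    `n_keep` principal components only."""
    mean = data.mean(axis=0)
    centered = data - mean                   # center on the mean
    # SVD of the centered matrix gives the PCs directly.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s[n_keep:] = 0.0                         # discard trailing components
    return u @ np.diag(s) @ vt + mean

# Toy usage: one shared low-frequency signal plus independent noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
data = np.sin(t)[:, None] + 0.5 * rng.standard_normal((500, 8))
cleaned = pca_denoise(data, n_keep=1)        # keep PC1, drop the noise
```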
All of the datasets used in MBH98 are supposedly measuring the same thing, namely, global temperature. Removing one or two should not change the observed trends.
Tree ring data is at least once removed from actual measurement of the desired quantity. Each dataset may exhibit trends from variables unrelated to temperature (available water, for instance). Hopefully, when taken as an aggregate, local variations in those unrelated variables will land in the higher, discardable PCs, provided that the datasets cover the same time period and have been properly normalized.
Problems begin to arise when the datasets don’t overlap as when they are separated in time or place.
RC point (4), as promised: in general, one of the underlying assumptions of PCA is that the mean of each dataset is zero. Using the mean of the dataset means may be valid if all of the datasets properly overlap, but the validity of this must be verified. The significance criteria (3) cannot be used for this purpose: for example, if the minimum of all data were used as the reference instead of the mean, all of the PCs would pass (3). Using anything other than the dataset mean when the datasets do not overlap is highly questionable. Subsequently grafting them raises eyebrows about competence. Grafting non-overlapped datasets that used different measurement techniques is…. Well, you figure it out.
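To make the point concrete, here is a toy numerical illustration (my own construction, not MBH98's data or code): columns of pure noise with different means, where the share of variance landing on PC1 depends entirely on what you subtract before the decomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_series = 200, 10
# Pure noise, but each series has its own (nonzero) mean.
data = rng.standard_normal((n_obs, n_series)) + rng.uniform(-3, 3, n_series)

def pc1_share(mat, reference):
    """Fraction of total variance captured by PC1 when `reference`
    is subtracted instead of the column means."""
    s = np.linalg.svd(mat - reference, compute_uv=False)
    return (s[0] ** 2) / (s ** 2).sum()

print(pc1_share(data, data.mean(axis=0)))   # proper centering: small PC1
print(pc1_share(data, np.zeros(n_series)))  # no centering: PC1 balloons
print(pc1_share(data, data.min(axis=0)))    # minimum as reference: worse still
```

There is no signal here at all; the inflated PC1 is manufactured purely by the choice of reference, which is why the significance criteria in (3) can't rescue it.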
In fact, comparing two datasets that do not have overlap is questionable. Tree ring data do not simply reflect temperature. The main observed trend may not be driven by temperature at all. The significance criteria described in point (3), while statistically correct, do not provide aid in distinguishing the underlying cause of any particular PC. It’s quite possible that the causes of PC1 and PC2 could be exchanged between datasets.
Indiscriminately applying or improperly applying statistical methods can quickly lead to GIGO.
I snipped for you the relevant points that NO ONE (on Lambert's site) has been able to answer. That is the refutation of MBH. If you want the links, go to the original page and search.
So go there, look, and learn, but don't waste my time with unrelated points and lists of scientists supporting AGW. I'm learning a lot at the climateaudit and timlambert sites, and if you weren't so obtuse, you'd go there and learn too.