Arp objects, QSOs, Statistics

DeiRenDopa said:
So, are we reading the same L-C&G paper or not?
You tell me. Why don't you link us to the L-C&G paper you are referring to so everything is made clear. No need to be coy.
.
First tentative detection of anisotropy in the QSO distribution around nearby edge-on spiral galaxies. The link is to the arXiv preprint; the A&A paper is both easy to get to from this link and essentially the same in content.

Haven't we already discussed this, at some length, earlier in this thread?

ETA: that'll teach me to post so quickly; it is, I think, the very same paper included in the post of yours that I'm quoting ...
 
(part omitted)
In this case, the thing missing from BAC's posts on this topic (so far) is a clear statement, preferably quantitative, of just what the hypothesis he is testing is, and what the null hypothesis is.
I'm beginning to think you actually do have a reading problem. I've stated my hypothesis very clearly ... that the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG.

(rest omitted)
.
Leaving aside the fact that "the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG" is not a quantitative hypothesis, and leaving aside the fact that there is no null hypothesis, there is the question of what "the mainstream's explanation for quasars" actually is.

BAC, would you please state what this explanation is?

Then, from it, derive a clear, unambiguous, quantitative hypothesis that you seek to test?

At the same time, state, clearly and unambiguously, what the null hypothesis is?

Oh, and for the record, once again, the probability of finding the five quasars at the locations near NGC 3516 and with the observed redshifts is, in standard astrophysics, 1. Why? Because they have been observed to be there and with those redshifts. The kind of analysis you have presented is not, by any stretch of anyone's imagination, a "mainstream explanation", nor is it an acceptable "mainstream method", nor ... As several people who have posted to this thread have been saying, the kind of a posteriori analysis you have presented, even if it were error free, is most certainly not "mainstream".

But, perhaps, just maybe, I have completely misunderstood ... if so, would you please be kind enough to carefully, and slowly, present a mainstream analysis of the (quasar) data in the Chu et al. paper?
 
DeiRenDopa said:
Questions?
No, other than to ask you how many quasars there are in the sky in the range of visibility covered by the SDSS survey. Care to offer an estimate ... or at least an average density number? Since you seem to deem this important. :D
.
You will really, really enjoy this 2003 paper: "The Sloan Digital Sky Survey Quasar Catalog II. First Data Release".

Quite apart from the nitty-gritty discussions of the many, many difficulties the team faced, and overcame, with respect to refining their 'quasar target algorithms', this paper contains the following:
Quasars have historically been defined as the high luminosity branch of AGNs; the (somewhat arbitrary) luminosity division between quasars and Seyfert galaxies has a consensus value of M_B = -23 for [a particular set of LCDM cosmology parameters].

...

We have adopted a luminosity cutoff for the DR1 catalog of M_i < -22; this corresponds to a Paper I ("standard" quasar cosmology) absolute magnitude of M_B ~ -22.4 for an object at zero redshift with a typical AGN spectrum. An object of M_i ~ -22.4 will reach the "low-redshift" (ugri-selected) SDSS quasar selection limit at a redshift of ~0.4.
.

There are several, very important, considerations implicit in this, not least of which is the "luminosity" criterion ... it is, obviously, defined in terms of a (cosmological) model and the apparent (observed) luminosity, in a particular (optical, visual) waveband. By definition, 'type 2 quasars' are excluded, as are 'Seyferts', and 'obscured AGNs'. Not so obviously excluded are BL Lac objects (for which no redshift is/can be estimated, even if the target object were selected for a spectroscopic run), and those too hard to observe (or even detect), because of proximity to a bright star (or galaxy), fiber collisions, etc.

And so on (I particularly like the part about objects which move in, or out, of the database, due to their variability between observations!).

"an average density number" is, obviously, an important parameter in the a posteriori analyses you have presented in this thread; didn't you think that a clear, unambiguous, quantitative estimate of this number would be important in your analyses? And don't you think it important to at least discuss any differences there may be between how 'quasar' is defined, implicitly or explicitly, in Chu et al. and SDSS?
 
it is, I think, the very same paper

No kidding. :rolleyes:

So if what you claimed is true, how come the abstract of that paper states: "There is a clear excess of QSOs near the minor axis with respect to the major axis of nearby edge-on spiral galaxies"? And why does it seem that the major complaint in the paper is not about not finding this effect within 1 degree but about finding this effect beyond where Arp hypothesized ... out to 3 degrees?

But in any case, how about we add another case to the other two already mentioned ... just to make things even more interesting.

Let's look at NGC 2639. The Burbidges, Arp and Zibetti discuss it here: http://www.journals.uchicago.edu/doi/abs/10.1086/421465 "QSOs and Active Galactic Nuclei Associated with NGC 2639, 2004".

Note that in this instance, there are optical X-ray quasars, at z = 0.305 and z = 0.323, on opposite sides of the galaxy at about the same distance from the galaxy. Their separation from each other is said to be about 35 arcmin. One is within an angle of 10 degrees of the minor axis and the other is within about 40 degrees. The paper also says there are seven X-ray sources lying along the northeast minor axis. Four of them were measured spectroscopically and found to be QSO/active galactic nuclei with redshifts of z = 0.337, 0.352, 1.304, and 2.630. Now how unlikely is this configuration, DRD?

Recall the Karlsson peak values: z = 0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.63. Let's just compute the combinatorial probability of all these objects being this close to those values around one galaxy and also (with the exception of one) seemingly aligned with the galaxy's minor axis.

Following the same procedure as I used previously, let's first compute the difference between the observations and the nearest Karlsson value to each observation.

We have z = 0.305-0.30, 0.323-0.30, 0.337-0.30, 0.352-0.30, 1.304-1.41, 2.630-2.63.

So the differences are +0.005, +0.023, +0.037, +0.052, -0.106, 0.00, respectively.

Doubling those values we get 0.01, 0.046, 0.074, 0.104, 0.212 and 0.0 .

Let's start by just looking at the probability of finding the objects with z near 0.30. We can use an increment of 0.10 in that case, which should be very conservative given that one of those is actually at an increment of 0.02 and two others are significantly less than 0.10. So there are 30 different possibilities. Thus the probability is 1/((30*29*28*27)/(4*3*2*1)) = 3.6 x 10^-5.

Now factor in the probability of finding the quasar at 1.304 with an increment of 0.212. In the range 0 to 3.0 there are about 14.15 increments. Use 14. Thus the probability of finding this case is 1/14 = 0.07. For the probability of finding the z = 2.63 quasar, we will just use an increment of 0.01. So it's 1/300 = 0.0033.

Multiply them all together and we get 3.6 x 10^-5 * 0.07 * 0.0033 = 8.6 x 10^-9.

Now multiply by the probability of finding 5 quasars aligned along the minor axis (we'll ignore the one that only trends in the direction). That gives a probability of 8.6 x 10^-9 * 0.08 (same value used previously) = 6.9 x 10^-10.

Now multiply by the estimated (conservatively) total number of galaxies that might have 5 quasars near them (ignore the fact that even fewer would have 6) and we get 6.9 x 10^-10 * 20,600 = 1.4 x 10^-5 = 0.000014. The smallest yet. :)
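For readers following along, the arithmetic chain in this NGC 2639 post can be reproduced with a short Python sketch. Note the assumptions carried over from the posts above: "increments" are read as bin widths over the redshift range [0, 3], and the 0.08 minor-axis alignment factor and 20,600-galaxy count are the earlier posts' values, not independently derived.

```python
from math import comb

# Nearest-peak differences for the six NGC 2639 objects quoted above
karlsson = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.63]
observed = [0.305, 0.323, 0.337, 0.352, 1.304, 2.630]
diffs = [min((z - k for k in karlsson), key=abs) for z in observed]
# approximately [+0.005, +0.023, +0.037, +0.052, -0.106, 0.000]

# Four objects near z = 0.30: bins of width 0.10 over [0, 3] give 30 slots,
# and C(30, 4) ways to draw four of them
p_four = 1 / comb(30, 4)        # ~3.6e-5

p_mid = 0.212 / 3.0             # z = 1.304, increment 0.212 -> ~1/14
p_high = 0.01 / 3.0             # z = 2.630, increment 0.01  -> 1/300

p = p_four * p_mid * p_high     # ~8.6e-9
p *= 0.08                       # minor-axis alignment factor (from earlier posts)
p *= 20_600                     # assumed count of candidate galaxies (from earlier posts)
print(f"{p:.1e}")               # ~1.4e-5, the post's final figure
```

Whether those bin-counting steps are the right probability model is, of course, exactly what is being disputed in this thread; the sketch only shows the arithmetic is as stated.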

My goodness ... we've found a third VERY unlikely observation ... even if we assume that Arp et al had examined every single possible quasar/galaxy combination ... which they didn't.

Now let's add this latest observation into the Bayes' Theorem look at the likelihood the mainstream's theory is correct. Recall that when we last left this story, the probability that the mainstream theory is correct, Pr1(A), was down to 0.33. And that was without considering the second case of those two. What does this latest observation do to that? Well ... do I really need to tell you, DRD? It's very, very small. :D
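For what it's worth, the kind of Bayes' Theorem update being invoked here can be sketched as follows. The prior of 0.33 is the Pr1(A) value quoted in the post; the two likelihoods are placeholders for illustration only, not numbers from any cited source.

```python
def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Posterior P(H | data) via Bayes' theorem in its two-hypothesis form."""
    num = p_data_given_h * prior
    return num / (num + p_data_given_not_h * (1.0 - prior))

prior = 0.33  # the Pr1(A) value quoted in the post above

# Placeholder likelihoods (NOT from any cited source): the post's small
# configuration probability under the mainstream view, versus a high
# probability under the alternative being argued for.
posterior = bayes_update(prior, 1.4e-5, 1.0)
print(posterior)  # much smaller than the prior
```

The result is only as meaningful as the two conditional likelihoods fed in, which is the point DRD presses repeatedly in the replies below.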
 
No kidding. :rolleyes:

So if what you claimed is true, how come the abstract of that paper states: "There is a clear excess of QSOs near the minor axis with respect to the major axis of nearby edge-on spiral galaxies"?
.
You would, following your own, clearly stated, views on how to answer questions of this kind, need to ask L-C and/or G, wouldn't you?
.
And why does it seem that the major complaint in the paper is not about not finding this effect within 1 degree but about finding this effect beyond where Arp hypothesized ... out to 3 degrees?
.
Ditto.
.
But in any case, how about we add another case to the other two already mentioned ... just to make things even more interesting.

(part omitted)
.

How about we return to discussing the extent to which the L-C&G paper presents data (etc) which seems to show that whatever Chu et al. say, in their paper, there is no statistically significant minor axis anisotropy for quasars spanning the magnitude range of the quasars in Chu et al., and that there is no statistically significant minor axis anisotropy for quasars within 1°, period?

Or, to ask a more general question, within the BeAChooser version of how astronomy (etc) should be done, how are the Chu et al. and L-C&G results reconciled?
 
Leaving aside the fact that "the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG" is not a quantitative hypothesis

It most certainly is ... and the numerical results surely suggest the mainstream theory is faulty in some important manner.

there is the question of what "the mainstream's explanation for quasars" actually is. BAC, would you please state what this explanation is?

Why, the one that leads to the mainstream's claims that all redshifts equate to distance, that quasars are randomly distributed across the sky, and that they are distributed in distance in the non-quantized manner I linked to earlier. Since that's all I use from the mainstream's explanation in my calculations, that's all I need to state. My calculation simply tests whether that explanation, whatever it is, is consistent with now three observations. And it doesn't appear to be. :)

Then, from it, derive a clear, unambiguous, quantitative hypothesis that you seek to test?

I guess after all my effort you still don't understand what those probabilities I calculated mean nor why I made that Bayes' Theorem calculation. :)

At the same time, state, clearly and unambiguously, what the null hypothesis is?

I did. You just didn't recognize it ... for the umpteenth time.

Oh, and for the record, once again, the probability of finding the five quasars at the locations near NGC 3516 and with the observed redshifts is, in standard astrophysics, 1.

See. I told you folks ... he doesn't even understand what probability I was calculating nor the meaning of Bayes' Theorem. :D

As several people who have posted to this thread have been saying, the kind of a posteriori analysis you have presented, even if it were error free, is most certainly not "mainstream".

Perhaps in the astronomical community. But it is simply common sense math ... something that has been lacking in that community for decades. The methodology I used is widely used outside the field of astronomy. You'll find it in all sorts of books. Maybe it's time that astronomers learned something from other disciplines ... like electrical engineering ... instead of just ignoring them. :D
 
Originally Posted by BeAChooser
other than to ask you how many quasars there are in the sky in the range of visibility covered by the SDSS survey. Care to offer an estimate ... or at least an average density number? Since you seem to deem this important.
.
You will really, really enjoy this 2003 paper: "The Sloan Digital Sky Survey Quasar Catalog II. First Data Release"

Quite apart from the nitty-gritty discussions of the many, many difficulties the team faced, and overcame, with respect to refining their 'quasar target algorithms', this paper contains the following:

I notice that in all you quoted and pontificated on, you forgot to mention how many quasars there are ... or provide an average density number at the very least. Are you being coy, DRD, or just blowing more smoke because you sense what I'm going to do with that number if you provide it? :D
 
BAC,

I notice you still haven't addressed the major thrust of the thread: using the statistics that you are using, how is one to tell a random association of objects from a causal relationship of objects?
 
Still in catch up mode; this is post #223, star date 11 April.

As before, much of what's here has been covered subsequently, so I'll only comment on a few things.
DeiRenDopa said:
how was NGC 3516 selected?
You are handwaving because that doesn't matter as far as my calculation is concerned. The calculation is independent of whether NGC 3516 was found or not. The calculation simply determines the probability of that particular configuration being found amongst the entire population of quasar/galaxies that we could observe assuming the mainstream's theories. And that probability is exceedingly small. So the fact that Arp found that case (and several others like it) after relatively few observations (compared to the total number of quasar/galaxy associations in a sky filled with literally billions of galaxies) should be a warning flag that something may be wrong with the mainstream's theories.
(bolding added)

Assuming 'that calculation' were error free, this (bold) sentence nicely captures the problem:

* as every 'particular configuration' is unique (or nearly so), its probability will be exceedingly small, as estimated by the BAC method, no matter what 'theories' are used!

* not once, in BAC's posts before this (and, I think, to date), has "the mainstream's theories" been stated or referenced ... how then can they have been used in any calculations?

More generally, and beyond the narrow scope of just this post, there's the interesting question of how anyone, astronomer or not, should (or even could) move forward, assuming acceptance of a 'something strange' conclusion. This is, for me, much more interesting territory (more later)!
.
how many 'near NGC 3516 quasars' were known before Chu et al. planned their observations?
Again, that's handwaving because when the quasars were found doesn't affect the calculated probabilities in ANY way and because, like it or not, there are 5 quasars with redshifts suspiciously close to Karlsson values aligned with the minor axis of NGC 3516 and no other objects identified as quasars in the observed field.
.
As I noted earlier, this becomes quite interesting, if not necessarily important, when considering:

* how to reconcile the L-C&G paper (no statistically significant minor axis anisotropies in the magnitude range spanned by Chu et al., no statistically significant minor axis anisotropies within 1°) with the quasar data in the Chu et al. paper

* how to treat quasars in the environs of NGC 3516 which are/will be discovered after publication of the Chu et al. paper.
.
how are quasar redshifts distributed, in [0,3]? If they are not distributed equally (to within some bound), then probability calculations need to reflect that non-equal distribution.
I addressed that question. As I said, the frequency of redshifts is not constant over the entire range. Based on recent mainstream sources it looks like it rises from a finite value (about 1/8th of the max) near z = 0 to a max at about z = 1 to 1.5, then levels off till around z = 2.5 to 3.0, where it precipitously drops, reaching a value of only 1/40th to 1/50th of the maximum by z = 6. Thus, the low z data points (say below z = 1) are overweighted in my calculation compared to the higher z values in the range 1 to 3. That would affect the overall calculation more significantly if we observed that the separation between observed redshifts and Karlsson values, as a percentage of the spacing between Karlsson values, remained constant or trended upward or downward. But it doesn't. That spacing as a percentage goes up and down from point to point. Finding this effect is fairly involved but we can at least gauge whether ignoring it in the calculation is conservative.

In the case of NGC 3516, observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.30, 0.60, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are +0.03, +0.09, -0.03, -0.01, +0.14 which, as a percentage of the distance between the two nearest Karlsson values, are 10%, 25%, 8%, 2%, 20%. Thus, the first two values, with separations from the Karlsson value of 10% and 25%, are overweighted, compared to the ones that have separations of 8%, 2% and 20%. That means in the calculation involving the three quasars with the lowest separations, the two lowest separations are underweighted compared to the highest separation of the three. Meaning that the corrected probability from that calculation would be lower than was calculated. And in the two separate calculations to account for the effect of the other two quasars, one of the two is somewhat overweighted but the other may be slightly underweighted. So I assert that incorporating this factor into this particular calculation would LOWER the final probability from the value I determined.
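A short sketch can recompute these nearest-peak offsets and their size as a fraction of the local spacing between Karlsson peaks (using the 2.63 value for the last peak, as in the earlier posts; small rounding differences from the quoted percentages are expected):

```python
karlsson = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.63]

def offset_and_fraction(z, peaks=karlsson):
    """Offset of z from its nearest Karlsson peak, and that offset as a
    fraction of the spacing of the interval z sits in. Assumes z lies
    between the first and last peaks."""
    i = min(range(len(peaks)), key=lambda j: abs(z - peaks[j]))
    off = z - peaks[i]
    if off >= 0 and i + 1 < len(peaks):
        width = peaks[i + 1] - peaks[i]   # z lies above its nearest peak
    else:
        width = peaks[i] - peaks[i - 1]   # z lies below its nearest peak
    return off, abs(off) / width

# The five NGC 3516 redshifts quoted in the post above
for z in [0.33, 0.69, 0.93, 1.40, 2.10]:
    off, frac = offset_and_fraction(z)
    print(f"z={z:.2f}  offset={off:+.2f}  fraction={frac:.0%}")
```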

In looking this over for the NGC 5985 case, I find I made a mistake in the previous calculation. The observed z = 0.35, 0.59, 0.81, 1.97, 2.13. The spacings are therefore +0.05, -0.01, -0.15, +0.01, +0.17. In the previous calculation, I used a separation of +0.03 for the last data point instead of +0.17. That affects the calculation in a number of ways, so I'm going to redo the whole calculation before addressing the z distribution evenness issue.

Now we could do the same as before and simply calculate the combinatorial probability of finding the lowest three spacing data points, z = 0.35 (+0.05), 0.59 (-0.01) and 1.97 (+0.01). In that case, the required increment would be 0.10 and the probability would be 1/((30 * 29 * 28)/(3*2*1)) = 2 x 10^-4.

But that might significantly overestimate the probability since two of the data points are within an increment of only 0.02. So what's the combinatorial probability of finding 2 data points with an increment of 0.02? The answer is 1/((150 * 149)/(2*1)) = 9 x 10^-5. Which is less than the above estimate, so let's use it.

Now we add in the effect of the 0.35 (+0.05), 2.13 (+0.17) and 0.81 (-0.15) values. The probability of seeing the 0.35 data point with an increment of 0.10 is about 1/30 = 0.033; the probability of seeing the 0.81 data point with an increment of 0.30 is about 1/10 = 0.1; and the probability of seeing the 2.13 data point with an increment of 0.34 is about 1/8.8 = 0.11. Combined, these would reduce the 9 x 10^-5 probability estimate to only 3 x 10^-8.
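The combined 3 x 10^-8 figure follows from the stated factors; a quick sketch, again reading "increments" as bin widths over the range [0, 3] as in the posts above:

```python
from math import comb

# Two points (z = 0.59 and 1.97) each within 0.02 of a peak: bins of width
# 0.02 over [0, 3] give 150 slots, and C(150, 2) ways to draw two of them
p_pair = 1 / comb(150, 2)   # ~9e-5

# The other three points, treated independently (increment / range):
p_035 = 0.10 / 3.0          # z = 0.35, increment 0.10 -> ~1/30
p_081 = 0.30 / 3.0          # z = 0.81, increment 0.30 -> 1/10
p_213 = 0.34 / 3.0          # z = 2.13, increment 0.34 -> ~1/8.8

p = p_pair * p_035 * p_081 * p_213
print(f"{p:.0e}")           # ~3e-8
```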

Next, we must account for the actual number of quasars that might be seen near galaxies in groups of the size needed to do the above calculations. Previously, I found that the mainstream estimates there should be a total of about 410,000 quasars in the sky. And I then assumed (very conservatively, I think) that only half are near low redshift galaxies. That left us with 205,000 quasars to distribute. Then I assumed (again, very conservatively) that half of these would be distributed in quantities less than five to all the galaxies available, leaving 103,000 that are in groups of 5 near low redshift galaxies. Dividing by 5, the final result was 20,600 galaxies with at least 5 nearby quasars. Multiplying the above probability by 20,600 yields a probability of 6.2 x 10^-4.

It's at this point, however, that I notice another possible complication in my previous procedure. Since about half of the 3 x 10^-8 probability comes from only 2 quasars being together near a galaxy, the number of galaxies that might have 2 quasars is larger (by 2.5 times). Thus, the importance of those 2 quasars could be improperly diminished if I assume 20,600 as the total number of galaxies. Thus, we can expect an UPPER BOUND of 5/2 * 6.2 x 10^-4 = 0.0016 for the probability at this stage of the calculation. Let's conservatively use that.

Finally, we add in the fact that all 5 of these objects are aligned with the minor axis. As before, the alignment probability reduces the likelihood by 0.08, giving a final probability value of 0.0016 * 0.08 = 1.2 x 10^-4 for the NGC 5985 case.

(By the way, accounting for an increase in galaxy sample size in the NGC 3516 case because much of the probability only depends on 3 quasars, one can estimate an upper bound probability of 5/3 * 0.0005 (the original probability in that case) = 0.00083 ... a very, very small likelihood of that case turning up at all if we were somehow able to check every single quasar thought to be visible in the sky.)

Now let's examine your concern about the z distribution in the NGC 5985 case. The percentage of distance between the two nearest Karlsson z values for the z = 0.35, 0.59, 0.81, 1.97, 2.13 observations are 16%, 3%, 42%, 1%, 25%, respectively. In this case, the 16%, 3% and 42% data points are overweighted, while the 1% and 25% data points are underweighted. In the two quasar calculation (which used z = 0.59 and 1.97), the 3% value is somewhat overweighted. This would increase the probability at least a bit ... perhaps a factor of 2? But counteracting this is the fact that the z = 0.35 data point with a very large increment is underweighted. Likewise the z = 0.81 data point with an even bigger increment is also underweighted. But if you like, I'll still give you that factor of 2. In which case, the final probability of seeing NGC 5985, if one could check every single quasar out there, would only be 2 x 10^-4 ... again a VERY small number.

Any way you cut it, DeiRenDopa, this calculation proves that the mainstream's theory about quasars is on shaky ground. They need to reexamine that theory in light of this data or come up with an explanation why redshifts seem to be quantized around certain values and show up with a higher than expected frequency around galaxies along their minor axes. Or one has to illogically believe that Arp was REALLY LUCKY in turning up 2 cases with likelihoods of only 0.0008 and 0.0002 even if all the galaxies in the sky with quasars could be examined (which he didn't come close to doing).
.
I need to go find the paper BAC cited for 'the NGC 5985 case'; same follow-on questions would apply (obviously) - how to address newly discovered quasars in its environs, how to reconcile with the L-C&G paper, and how to move forward.
.
your calculation, on its own, would seem to apply to any set of three numbers in [0,3]
No, the three numbers that turned up aren't just any three numbers, are they? They are all close to values that were determined without any reference to the data in these particular samples. Or so I believe. :)
.
Now here's an interesting question: which, if any, of the various quasars 'near' NGC 3516 and NGC 5985 were among those in Karlsson's original paper? in papers which cited that original one (not counting the Chu et al. and whatever paper mentioned NGC 5985)?

Notice, too, that BAC here is getting very close to numerology ... suppose I find six (or is it more? less?) numbers - some transform of telephone numbers 'near' 'BAC' in some telephone directory say, and suppose I find three of these are 'near', in a manner similar to that presented in BAC's posts, to some quasar redshifts, can I thus claim that my 'calculation proves that the mainstream's theory about quasars is on shaky ground'?

This in turn illustrates nicely one reason why the kind of a posteriori approach BAC has presented, as a means of 'proving wrong', is not used.

Now if he had used this approach to test some hypothesis about Karlsson peaks, or minor axis anisotropies, the approach might have some value ...
.
As I noted earlier, if the range is [0,3], then all the Karlsson values in that range need to be included.
And as I noted in my response, there is nothing in the theory that requires quasars be at all the Karlsson values around any given galaxy at any one time.
.
This is one of the parts of BAC's posts that I remembered, and led to my asking about what hypothesis/hypotheses he intended his a posteriori approach to test.

If the hypothesis being tested is solely concerned with 'mainstream theories', then Karlsson peaks are irrelevant ... any 3/5/10 numbers in [0,3] will do.

If Karlsson peaks are important, then the hypothesis being tested cannot concern 'mainstream theories' alone.
.
Beachooser wrote:
Here's a 2005 study http://www.iop.org/EJ/abstract/1538-3881/129/5/204 that indicates an average density of 8.25 deg^-2 based on the SDSS survey then argues it should be corrected upward to 10.2 deg^-2 to make it complete. And if you go to the SDSS website (http://www.sdss.jhu.edu/ ) you find they say the effort will observe 100,000 quasars over a 10,000 deg^2 area. That also works out to about 10 quasars deg^-2. So it looks like I used a number that was 3 times too large in my earlier calculation. In this revised calculation, I will only assume the average quasar density is 10 deg^-2. That means the total number of quasars that can be seen from earth is around 410,000.
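The 410,000 figure is just the quoted density extrapolated over the whole sky. As a sketch, assuming (as the post does) that the surveyed density holds everywhere:

```python
# Full sky is ~41,253 square degrees (4*pi steradians)
sky_area_deg2 = 41_253
density_per_deg2 = 10   # the SDSS-derived figure quoted above
total = sky_area_deg2 * density_per_deg2
print(total)            # ~410,000, matching the post's round number
```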

The Chu et al. paper carefully explains how they chose which objects to observe (in order to measure redshifts); the "average density" of quasars you need to use in this part of your calculation is that which would be obtained if the search method used in Chu et al. were to be used over the whole sky. As there appears to be no effort to explain this, in any quantitative fashion, let alone estimate it, you are left with an unknown.
Let me emphasize that my calculation actually has nothing to do with the Chu et al. paper. If you don't like the "average density" of quasars that I used for all the quasars in the sky (i.e., 10/deg^2), then I'll make you the same offer I made to David. You provide that number and we will just insert it in the calculation and see what we get. I used 10/deg^2 because the SDSS study and website say that's the average density. It's why they indicate there are in total about 400,000 quasars in the visible universe. If you want me to use whatever Chu claimed is the average density over the ENTIRE sky, then just tell us what Chu says that is. But be prepared to justify it, if it happens to disagree with the SDSS estimate. After all, the SDSS estimate is based on the most complete study of quasars that is available (you folks kept telling me that) and papers have been published by mainstream astronomers that in fact conclude the SDSS study is very close to complete in its identification of the quasars that are out there in the section of the sky that was surveyed. If you don't like the assumption that the density of quasars in the quarter of the sky that was surveyed is the same as the density in the three-quarters that was not surveyed ... take it up with the SDSS authors. :) Frankly, I think you are just doing more hand-waving ... now desperate to avoid accepting what is a rather obvious conclusion for this set of calculations. :D (rest of post omitted)
.
Since this post that I'm quoting, we have, at last, started to look at the question of what a quasar is. I hope, by now, that it's clear why this is important, and why BAC's calculations depend heavily on getting clear, consistent answers ... especially if, as he claims, what he intends to test is 'mainstream theories'. (to be continued)
 
No kidding. :rolleyes:

So if what you claimed is true, how come the abstract of that paper states: "There is a clear excess of QSOs near the minor axis with respect to the major axis of nearby edge-on spiral galaxies"? And why does it seem that the major complaint in the paper is not about not finding this effect within 1 degree but about finding this effect beyond where Arp hypothesized ... out to 3 degrees?

(rest of quoted post omitted)
.
So, back to the 'hypothesis' question: what is the hypothesis that you claim this method (and data) tests?

As I mentioned in my last post, it cannot be one that is, or involves, solely 'mainstream theories', if only because 'Karlsson peaks' are not part of any mainstream theory (that I know of).

Therefore the calculations can be repeated, using the same approach, using any set of numbers in [0,3]!

And, to repeat, the L-C&G paper seems to show that there is no statistically significant minor axis alignment of quasars brighter than m_g ≈ 20, nor any statistically significant minor axis alignment within 1°.

Is it too much to ask that you write something about how to reconcile these apparently contradictory facts?
 
DeiRenDopa said:
Leaving aside the fact that "the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG" is not a quantitative hypothesis
It most certainly is ... and the numerical results surely suggest the mainstream theory is faulty in some important manner.
.
And as I have just pointed out, if this is so, then the hypothesis you claim to be testing cannot give any special status to 'Karlsson peaks', because they are not part of any 'mainstream theory'.

And, as estimates of a posteriori probability, your calculations may be correct (if error free), but then they'd apply just as well to any set of numbers in [0,3].
.
there is the question of what "the mainstream's explanation for quasars" actually is. BAC, would you please state what this explanation is?
Why, the one that leads to the mainstream's claim that all redshifts equate to distance, that quasars are randomly distributed across the sky, and that they are distributed in distance in the non-quantized manner I linked to earlier. Since that's all I use from the mainstream's explanation in my calculations, that's all I need to state. My calculation simply tests whether that explanation, whatever it is, is consistent with what are now three observations. And it doesn't appear to be. :)
.
I don't know what you used as a source that led you to conclude that this is an accurate summary! :jaw-dropp :confused:

Maybe it's worthwhile to go through this, in some detail, if only so you can understand just how inaccurate it is?
.
Then, from it, derive a clear, unambiguous, quantitative hypothesis that you seek to test?
I guess after all my effort you still don't understand what those probabilities I calculated mean nor why I made that Bayes' Theorem calculation. :)
.
As I have been trying to stress, my comments have been focussed on the errors outside the actual calculations, such as the inputs and assumptions you used.

Also, as I have pointed out, and as should now be quite clear, 'what those probabilities [BAC] calculated mean' is something quite different than what you clearly think they mean.
.
At the same time, state, clearly and unambiguously, what the null hypothesis is?
I did. You just didn't recognize it ... for the umpteenth time.
.
OK, fine ... except that a) you didn't state it, and b) what you did present isn't what you think it is (if {'mainstream'} then {'Karlsson peaks' are a no-no}).
.
See. I told you folks ... he doesn't even understand what probability I was calculating nor the meaning of Bayes' Theorem. :D
As several people who have posted to this thread have been saying, the kind of a posteriori analysis you have presented, even if it were error free, is most certainly not "mainstream".
Perhaps in the astronomical community. But it is simply common sense math ... something that has been lacking in that community for decades. The methodology I used is widely used outside the field of astronomy. You'll find it in all sorts of books. Maybe it's time that astronomers learned something from other disciplines ... like electrical engineering ... instead of just ignoring them. :D
(bolding added)

To the extent that this is an accurate summary (which it isn't, but never mind for now), it highlights one thing I am quite interested in, namely what methods you (BAC) think should be used in astronomy and astrophysics.

As something simple and straight-forward, and as an example to get us started, perhaps you'd like to share with all readers of this thread how the method you think should be used in astronomy can be used to reconcile the Chu et al. paper's quasar conclusions with those of the L-C&G ('minor axis anisotropy') paper?
 
(continued - post #223, dated 11 April)

Commenting on 'the NGC 5985 case' ...
(parts omitted)

In looking this over for the NGC 5985 case, I find I made a mistake in the previous calculation. The observed z = 0.35, 0.59, 0.81, 1.97, 2.13. The spacings are therefore +0.05, -0.01, -0.15, +0.01, +0.17. In the previous calculation, I used a separation of +0.03 for the last data point instead of +0.17. That affects the calculation in a number of ways, so I'm going to redo the whole calculation before addressing the z distribution evenness issue.

Now we could do the same as before and simply calculate the combinatorial probability of finding the lowest three spacing data points, z = 0.35 (+0.05), 0.59 (-0.01) and 1.97 (+0.01). In that case, the required increment would be 0.10 and the probability would be 1/((30*29*28)/(3*2*1)) = 2 x 10^-4.

But that might significantly overestimate the probability, since two of the data points are within an increment of only 0.02. So what's the combinatorial probability of finding 2 data points with an increment of 0.02? The answer is 1/((150*149)/(2*1)) = 9 x 10^-5. Which is less than the above estimate, so let's use it.

Now we add in the effect of the 0.35 (+0.05), 2.13 (+0.17) and 0.81 (-0.15) values. The probability of seeing the 0.35 data point with an increment of 0.10 is about 1/30 = 0.033; the probability of seeing the 0.81 data point with an increment of 0.30 is about 1/10 = 0.1; and the probability of seeing the 2.13 data point with an increment of 0.34 is about 1/8.8 = 0.11. Combined, these would reduce the 9 x 10^-5 probability estimate to only 3 x 10^-8.
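The arithmetic up to this point can be checked with a few lines, reproducing only the factors stated in the post (the binning model itself is what is under debate):

```python
from math import comb

# Two redshifts within an increment of 0.02: 150 bins of width 0.02 in [0, 3].
p_pair = 1 / comb(150, 2)          # ~9e-5

# The three remaining per-object factors quoted in the text.
p = p_pair * (1 / 30) * (1 / 10) * (1 / 8.8)
print(f"{p:.0e}")                  # → 3e-08
```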

Next, we must account for the actual number of quasars that might be seen near galaxies in groups of the size needed to do the above calculations. Previously, I found that the mainstream estimates there should be a total of about 410,000 quasars in the sky. And I then assumed (very conservatively, I think) that only half are near low redshift galaxies. That left us with 205,000 quasars to distribute. Then I assumed (again, very conservatively) that half of these would be distributed in quantities less than five to all the galaxies available, leaving 103,000 that are in groups of 5 near low redshift galaxies. Dividing by 5, the final result was 20,600 galaxies with at least 5 nearby quasars. Multiplying the above probability by 20,600 yields a probability of 5.4 x 10^-5.

It's at this point, however, that I notice another possible complication in my previous procedure. Since about half of the 3 x 10^-8 probability comes from only 2 quasars being together near a galaxy, the number of galaxies that might have 2 quasars is larger (by 2.5 times). Thus, the importance of those 2 quasars could be improperly diminished if I assume 20,600 as the total number of galaxies. Thus, we can expect an UPPER BOUND of 5/2 * 5.4 x 10^-5 = 0.000135 for the probability at this stage of the calculation. Let's conservatively use that.

Finally, we add in the fact that all 5 of these objects are aligned with the minor axis. As before, the alignment probability reduces the likelihood by 0.08, giving a final probability value of 0.000135 * 0.08 = 1.1 x 10^-5 for the NGC 5985 case.

(By the way, accounting for an increase in galaxy sample size in the NGC 3516 case because much of the probability only depends on 3 quasars, one can estimate an upper bound probability of 5/3 * 0.0005 (the original probability in that case) = 0.00083 ... a very, very small likelihood of that case turning up at all if we were somehow able to check every single quasar thought to be visible in the sky.)

Now let's examine your concern about the z distribution in the NGC 5985 case. The percentage of distance between the two nearest Karlsson z values for the z = 0.35, 0.59, 0.81, 1.97, 2.13 observations are 16%, 3%, 42%, 1%, 25%, respectively. In this case, the 16%, 3% and 42% data points are overweighted, while the 1% and 25% data points are underweighted. In the two quasar calculation (which used z = 0.59 and 1.97), the 3% value is somewhat overweighted. This would increase the probability at least a bit ... perhaps a factor of 2? But counteracting this is the fact that the z = 0.35 data point with a very large increment is underweighted. Likewise, the z = 0.81 data point with an even bigger increment is also underweighted. But if you like, I'll still give you that factor of 2. In which case, the final probability of seeing NGC 5985, if one could check every single quasar out there, would only be 2 x 10^-5 ... again a VERY small number.

Any way you cut it, DeiRenDopa, this calculation proves that the mainstream's theory about quasars is on shaky ground. They need to reexamine that theory in light of this data or come up with an explanation why redshifts seem to be quantized around certain values and show up with a higher than expected frequency around galaxies along their minor axes. Or one has to illogically believe that Arp was REALLY LUCKY in turning up 2 cases with likelihoods of only 0.0008 and 0.0002 even if all the galaxies in the sky with quasars could be examined (which he didn't come close to doing).

(rest omitted)
.
Let's start with the facts.

I think the paper in which 'the NGC 5985 case' is covered is A QSO 2.4 arcsec from a dwarf galaxy - the rest of the story, by H. Arp (the link is to the arXiv preprint abstract).

If so, then the redshifts in that paper are, in increasing order of z:

0.008 (NGC 5985, the Seyfert)
0.009 (a dSp right near a QSO, hence the title of the paper)
0.348 (a S1, type 1 Seyfert)
0.690
0.807
1.895
1.968
2.125

This doesn't align very well with what went into BAC's calculations (quoted here):
* the 0.35 object is a Seyfert, not a quasar
* the '0.59' quasar in fact has a redshift of 0.69
* the 1.90 quasar is missing.

In terms of radial distance, here is the list, ranked by distance (from the Arp paper):
12.0' (z=2.13 quasar)
25.2' (1.97)
36.9' (0.81, and the dSp)
48.2' (0.69)
54.3' (1.90)
90.4' (0.35, which is the S1)

Section 1 of the Arp paper concludes as follows: "The following sections present the full census of objects around the Seyfert NGC5985."; however, the only objects mentioned in the following sections are the QSOs, the one dSp galaxy, and the one S1 galaxy.

A quick check using NED turns up 104 (extra-galactic) objects within 30' of NGC 5985; in addition to the two in the Arp paper, there are:
* 41 radio sources (without listed redshifts)
* 30 galaxies (types unspecified, no listed redshifts)
* 4 IR sources (without listed redshifts)
* 2 x-ray sources (without listed redshifts)
* 1 emission line source (without listed redshift)
* 1 galaxy triple (without listed redshift)
* 1 group of galaxies (without listed redshift)
* 10 galaxies with redshifts (types unspecified; z = 0.006, 0.01, 0.01, 0.01, 0.07, 0.09, 0.11, 0.19, 0.19, and 0.25)
* 2 groups of galaxies (z = 0.009, 0.01)
* 1 cluster of galaxies (z = 0.14, determined photometrically)
* 4 UV excess sources (photometric redshifts of 0.98, 1.28, 1.28, and 1.78)
... and 5 QSOs (redshifts of 1.54, 1.78, 1.97, 2.13, and 3.88).

Perhaps there is some unrecognised duplication; a radio source may be the same as a galaxy, for example?

I wonder how many of the objects without redshifts are, in fact, either quasars or S1 or dSp galaxies?

I wonder how many galaxies, with redshifts, either alone or in the groups and one cluster, are, in fact, S1 or dSp galaxies?

I wonder how many of the UV excess sources are, in fact, either quasars or S1 or dSp galaxies?

I wonder how many lie near the minor axis of NGC 5985?

I wonder how consistent the numbers are with the findings presented in L-C&G?

I wonder what would happen if we looked right out to 91' from NGC 5985?

But, above all, I wonder what sort of results one would get by applying a BAC-type analysis to these 104 'near NGC 5985' objects?
 
Comparing SDSS DR1 'quasars' with Chu et al.'s 'quasars'

While we wait for BeAChooser to show us all how he reconciles his analyses of the NGC 3516 Chu et al. quasars with the null findings of L-C&G (for the relevant magnitude range and distance), I thought it might be interesting to compare the SDSS DR1 quasars with those in the Chu et al. NGC 3516 paper.

In an earlier post I looked, briefly, at the SDSS DR1 quasars, and noted that they are tightly defined as those with an absolute luminosity of Mi < -22 (or thereabouts), and bright enough, at the time, for the spectroscopic pipeline to estimate redshifts from the observed spectra. Although the paper I cited perhaps didn't say so as clearly as it could have, it seems this means the quasars, so defined, in DR1 are complete to z ~0.4. Of course, there are many caveats to be entered, not least of which is that the authors did not, in that paper, explicitly set out to estimate how closely DR1 came to meeting the design target (90% completeness).

What of the quasars in the NGC 3516 Chu et al. paper?

First, we learn that the five objects they observed had been selected by Arp, and that one had been independently confirmed as a quasar before they began their observations.

So how did Arp select these five quasar candidates, and only these five?

In an earlier paper (Arp 1997a - link is to the PDF, so be warned!), Arp took a list of x-ray sources near ~20 Seyferts, compiled by Radecke, and tried to match the 'excess' x-ray sources against objects on Palomar Schmidt blue and red plates ... how he identified which x-ray sources were 'excess' and which were not I do not know (perhaps I didn't read his paper carefully enough). He found that five x-ray sources 'near' NGC 3516 could be matched to bright blue stellar objects on the blue plate(s) ... again, it's not clear to me how many x-ray sources he did not/could not match to stellar sources in the Palomar plate(s), in this same field.

So, how would you, dear reader, go about estimating the overlap between the method Arp+Chu et al. used to select quasars with that used in compiling the SDSS DR1 catalog?
 
as every 'particular configuration' is unique (or nearly so), its probability will be exceedingly small, as estimated by the BAC method, no matter what 'theories' are used!

ABSOLUTELY FALSE. The probability of finding alignments along minor axes and redshifts near Karlsson values like those in the 3 observations I've discussed will not be nearly as small under the Arp/Narlikar redshift theory as under the mainstream theory. In fact, under the Arp/Narlikar theory, such cases are expected to regularly be found even in small samplings of the total population of quasar/galaxy combinations.

And even if theirs isn't the true explanation for many such cases actually being found, there still must be some alternative physical mechanism that the mainstream hasn't considered to explain the fact that after studying only a small portion of the total population, we've already found 3 cases (actually more than that, if truth be told). Clearly, DRD, you still don't understand what those probabilities I calculated even mean. :)

not once, in BAC's posts before this (and, I think, to date), has "the mainstream's theories" been stated or referenced

ABSOLUTELY FALSE AGAIN. I've stated (and referenced) all the portions of the mainstream theory needed to do my calculations. Do you really want to deny that mainstream theorists claim quasars are continuously distributed (as opposed to quantized) with regards to distance and therefore redshift? Do you really want to deny that the mainstream says quasars are randomly located in the sky with respect to our viewing location?

As I noted earlier, this becomes quite interesting, if not necessarily important, when considering:

* how to reconcile the L-C&G paper (no statistically significant minor axis anisotropies in the magnitude range spanned by Chu et al., no statistically significant minor axis anisotropies within 1o) with the quasar data in the Chu et al. paper

What is interesting is how you continue to misrepresent what the L-C&G paper concluded. Here's more of what the paper stated: "In the previous subsection, we showed that, given the actual distribution of SDSS QSOs, a random distribution of position angles in the RC3 galaxies should not show the level of anisotropy found; that is, the anisotropy is not due to the peculiarities of the QSO distribution itself. ... snip ... This gives 135 galaxies (instead of 71 galaxies with total coverage) ... snip ... whose significance according to Monte Carlo simulations is 4.4-sigma (a probability of 1 in 90,000). For z > 0.5 the significance according to Monte Carlo simulations is 4.8-sigma (a probability of 1 in 600,000)."

And the only place in the paper where there is a discussion involving an angular distance (I'll substitute THETA for the Greek character in the quote below) of 1 degree or less is here:

"Other values of THETAmax (see Fig. 3) show the existence of anisotropy but the best significance is for THETAmax ≈ 3 degrees. For lower angles, there seems to be a lower value of ALPHA (BAC - anisotropy), although for THETAmax < 1 degree the number of QSOs is too low to draw any conclusions."

Did you miss that qualification regarding the data and conclusions? It states very clearly that there are too few data points in that range to justify any conclusions ... in other words, YOUR conclusions. Is this your normal scientific approach, DRD ... to ignore the warnings of the authors of papers? ;)

* how to treat quasars in the environs of NGC 3516 which are/will be discovered after publication of the Chu et al. paper.

Again, this is a red herring. You haven't proven that any quasars have been discovered in the environs of NGC 3516 since the Chu paper. And given that Chu et al were specifically looking for quasars in that environ, it's rather doubtful that many, if any, have been found unless they just missed the obvious. And I doubt they did. Furthermore, you presume any new quasars would not also be aligned with respect to significant galaxy features or have redshifts close to Karlsson values. Check back in when you can actually provide new data on quasars near NGC 3516 that contradict the conclusion of the study. The ball is in your court.

Now here's an interesting question

You are full of questions. But you don't actually listen to the answers you are given nor answer anyone else's questions.

which, if any, of the various quasars 'near' NGC 3516 and NGC 5985 were among those in Karlsson's original paper?

Why didn't you go find out? Too lazy? Because unless you do, you are just throwing out another red herring, DRD, which seems to be an increasingly standard approach by you in this debate. But let me save you the effort of going to look now.

http://adsabs.harvard.edu/cgi-bin/n...T&data_type=HTML&format=&high=42ca922c9c07056 "Possible Discretization of Quasar Redshifts, Karlsson, K. G., Astronomy and Astrophysics, Vol. 13, p. 333 (1971)" clearly indicates that the basis of Karlsson's work are some lists of quasars compiled by the Burbidges in 1967, 1969 and 1970. Now I suppose you could go look in the references Karlsson cites for your answer. But you might want to first observe that according to the Chu paper (you know, the one you *claim* to have read and understood), 4 of the 5 quasars aligned with NGC 3516's minor axis were only identified as quasars by Chu himself in the late 1990s. So it's rather unlikely that they were used in Karlsson's much earlier work. Wouldn't you say? :)

And I'll leave it up to you to prove the quasars in the NGC 5985 case were included in Karlsson's work. Good luck. You'll need it. In fact, you'll need luck better than you apparently believe Arp et al. had in finding all the examples I've been doing calculations for. :D

Notice, too, that BAC here is getting very close to numerology

Notice, too, that DRD is getting even more desperate in his defense of Big Bang's theory. Numerology? ROTFLOL!

This in turn illustrates nicely one reason why the kind of a posteriori approach BAC has presented, as a means of 'proving wrong', is not used.

That's funny ... coming from someone who clearly doesn't even understand the calculations I made. :)

Now if he had used this approach to test some hypothesis about Karlsson peaks, or minor axis anisotropies, the approach might have some value ...

Tell us, DRD ... since clearly the alignment in NGC 3516 was discovered AFTER Karlsson identified his specific quantization values, wouldn't you admit that my calculation tests the hypothesis that Karlsson was right and not the mainstream which claims redshifts are not quantized? Or is the logic of that beyond you? :)

If the hypothesis being tested is solely concerned with 'mainstream theories', then Karlsson peaks are irrelevant ... any 3/5/10 numbers in [0,3] will do.

ABSOLUTELY FALSE AGAIN. You have quite a string of those going, DRD. You will not get the same probabilities if you use any 3/5/10 numbers. Let me demonstrate for NGC 3516.

Instead of the Karlsson values of z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, let's use z = 0.10, 0.48, 0.75, 1.20, 2.25, 2.80 as the values (call them DRD values). Note that I even tried to keep them around the actual data points (helping you) rather than picking *any* five.

The quasars have z = 0.33, 0.69, 0.93, 1.40, 2.10. Therefore the spacings to each of the nearest DRD values are -0.15, -0.16, +0.18, +0.20, -0.15. Now double those to find the increment to use in the calculation. Let's just assume they are all 0.15 (helping you again, by the way) so the increment will be 0.30. Between 0 and 3.0 there are 10 zones of that increment. So the probability of those 5 quasars showing up close to the DRD values is 1/((10*9*8*7*6)/(5*4*3*2*1)) = 0.004. That compares to a probability of 0.0000003 in my calculation using Karlsson's values. That's not even REMOTELY close. So you clearly don't know what you are talking about when it comes to discussing probability and statistics. You're just blowing smoke and have been since the beginning of this thread. :rolleyes:
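For what it's worth, the headline number in this comparison is easy to recompute. This is a sketch of the stated formula only; whether the binning model itself is a sound test is exactly what the two posters are disputing.

```python
from math import comb

# "DRD values" case as stated: increment 0.30, so 10 bins in [0, 3],
# and 5 quasars must land in 5 distinct bins.
p_drd = 1 / comb(10, 5)
print(f"{p_drd:.3f}")      # → 0.004

# Karlsson-values figure quoted in the text, for comparison.
p_karlsson = 3e-7
print(p_drd / p_karlsson)  # the two differ by roughly four orders of magnitude
```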

The Chu et al. paper carefully explains how they chose which objects to observe (in order to measure redshifts); the "average density" of quasars you need to use in this part of your calculation is that which would be obtained if the search method used in Chu et al. were to be used over the whole sky.

Curious. I STILL don't see you actually providing a number. I don't think you can because you not only don't understand the calculations I did, you don't even understand Chu's paper. We already know you didn't actually read the Chu paper because you claimed it found a result over a domain that the paper itself stated there was too little data to draw any conclusions. You know what you are, DRD? A TROLL ... whose only purpose is to keep this thread going I think. You've demonstrated three (or is it four times) that you didn't actually read the paper you were claiming to know about and falsely claimed supported your view of things. You best study yourself in that troll study you *claim* to be doing, because I'm tired of your dishonesty and probably won't continue "feeding the troll" for much longer. But I will thank you for motivating me to flesh out certain details of the calculations and lower the probabilities even further than initially. :D

Here's another peer reviewed paper you can ignore:

http://www.hedla.org/example_paper.pdf "Quasars Associated with NGC 613, NGC 936 and NGC 941, H. Arp, Astrophysics and Space Science, 2006, ... snip ... There are three major aims of this paper. One is to report an X-ray nucleus and radio extensions which suggest ejection of material along the narrow bar in NGC 613. Since the adjacent, high redshift quasars are found in the same line, an ejection origin for these quasars is strongly supported. The multiple arms of this barred spiral are attributed to multiple ejections. The second aim is to show a completely different but confirming example of another barred spiral, NGC 936, active in both radio and X-rays, which appears to be ejecting a pair of quasars of similar redshift in nearly the same configuration as in NGC 613."

... or mischaracterize since I don't expect you to understand it any better than you did Keel's papers ... or Chu's ... or L-C&G's ... or Arp's. :D
 
.
So, back to the 'hypothesis' question: what is the hypothesis that you claim this method (and data) tests? ... snip ... Is it too much to ask that you write something about how to reconcile these apparently contradictory facts?

My response to this is in my last post.
 
.
And as I have just pointed out, if this is so, then the hypothesis you claim to be testing cannot give any special status to 'Karlsson peaks', because they are not part of any 'mainstream theory'. .... snip ... As something simple and straight-forward, and as an example to get us started, perhaps you'd like to share with all readers of this thread how the method you think should be used in astronomy can be used to reconcile the Chu et al. paper's quasar conclusions with those of the L-C&G ('minor axis anisotropy') paper?

Ditto. My response to the claims in this post are in post #296.
 
:)


You are full of questions. But you don't actually listen to the answers you are given nor answer anyone else's questions.
...

That's funny ... coming from someone who clearly doesn't even understand the calculations I made. :)
...
Curious. I STILL don't see you actually providing a number. I don't think you can because you not only don't understand the calculations I did, you don't even understand Chu's paper. We already know you didn't actually read the Chu paper because you claimed it found a result over a domain that the paper itself stated there was too little data to draw any conclusions. You know what you are, DRD? A TROLL ... whose only purpose is to keep this thread going I think. You've demonstrated three (or is it four times) that you didn't actually read the paper you were claiming to know about and falsely claimed supported your view of things. You best study yourself in that troll study you *claim* to be doing, because I'm tired of your dishonesty and probably won't continue "feeding the troll" for much longer. But I will thank you for motivating me to flesh out certain details of the calculations and lower the probabilities even further than initially. :D

The irony is rather amazing and dreadfully wonderful, classic BAC. You still haven't addressed the inadequacies of your pet theory, and now you'll play the aggrieved martyr.

Still haven't defended your inability to distinguish between a random placement and a causal placement, have you?

Using your statistics BAC, how can you tell a random placement from a causal one?


Also, there seem to be some problems with the Hoyle/Narlikar theory, which is rather essential to your claims:
http://www.internationalskeptics.com/forums/showthread.php?t=111400
 
DeiRenDopa said:
as every 'particular configuration' is unique (or nearly so), its probability will be exceedingly small, as estimated by the BAC method, no matter what 'theories' are used!
ABSOLUTELY FALSE. The probability of finding alignments along minor axes and redshifts near Karlsson values like those in the 3 observations I've discussed will not be nearly as small under the Arp/Narlikar redshift theory as under the mainstream theory. In fact, under the Arp/Narlikar theory, such cases are expected to regularly be found even in small samplings of the total population of quasar/galaxy combinations.

And even if theirs isn't the true explanation for many such cases actually being found, there still must be some alternative physical mechanism that the mainstream hasn't considered to explain the fact that after studying only a small portion of the total population, we've already found 3 cases (actually more than that, if truth be told). Clearly, DRD, you still don't understand what those probabilities I calculated even mean. :)

(rest omitted)
.

BAC, the hypothesis you claim you are testing, per many many posts in this thread, is one concerning "mainstream theories".

There are no 'Karlsson peaks' in any such theories (that I know of; if you know differently, please provide an appropriate reference).

Therefore whatever your hypothesis is, it does not, and cannot, test "the mainstream theory".

There is no quantitative "Arp/Narlikar theory" that can be used to develop hypotheses which can be tested, quantitatively, using quasar-galaxy associations, by the kind of a posteriori method you have used. At least, despite having been asked, more than once, if you knew of any, you have not yet provided a reference to any such "Arp/Narlikar theory". Further, as readers of the Hoyle-Narlikar Theory JREF forum thread have learned, in its most recent incarnation, no such hypothesis is possible, even in principle.

The most interesting discovery, to me, that I have made through this discussion is that you seem to believe the method you use in the NGC 3516 (etc.) calculations is legitimate in the branches of science called astronomy, astrophysics, and cosmology. This suggests that you do not 'do' science in these fields the same way as the professionals do (to the extent that you may think what you are doing constitutes science at all).

If so, then I would like to explore the differences between your vision of how astronomy (etc) should be done and how it is actually done.

And rather than continue with it in this thread, I'll start a new thread, in the next day or so.
 
