Arp objects, QSOs, Statistics

I'm glad you agree because this is for the example that YOU posed where there are 10 QSOs, all with the same deltaz (0.25). Maybe I misunderstood, but I believe you said the overall probability was 0.5. That's wrong. It's 0.001, as I showed.

What I said (perhaps not perfectly clearly) is that for a typical random distribution, the probability for that distribution should be around .5 (by definition of "typical"). A typical random distribution will have an average deltaz of .25. If you could plug that average into your formula, you'd get P~.001 (which I said). Of course the actual values of P will be lower, since the points with deltaz smaller than .25 will matter more. Therefore, P is not the probability you want.
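
To put a number on "typical", here is a minimal simulation sketch (my own illustration, taking P to be the product of 2*deltaz over the 10 QSOs, with the single peak at 0.5 and z drawn uniformly between 0 and 1):

Code:
import random

def p_product(zs, peak=0.5):
    # Product over the sample of 2*|z - peak|, the "P" being debated.
    p = 1.0
    for z in zs:
        p *= 2.0 * abs(z - peak)
    return p

random.seed(1)
trials = sorted(p_product([random.random() for _ in range(10)])
                for _ in range(100000))
print("median P for flat random data:", trials[len(trials) // 2])
print("10th-90th percentile range:", trials[10000], "to", trials[90000])
# Even for purely random data the product is many orders of magnitude
# below 1, so a small P by itself says nothing about clustering.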

It's not wrong to use different deltaz.

If what you want to calculate is the confidence with which you can reject null hypothesis given the data, it is wrong.

Isn't it a fact that the probability for each QSO, by itself, can be determined individually as 2*deltaz? Just like in the above 1 QSO example. Correct?

The probability that a given QSO drawn from a flat distribution will be within deltaz of some point is proportional to 2*deltaz, yes.

And isn't it a fact that the placement of each QSO is independent of the placement of any other. They are independent phenomena ... under the assumptions of mainstream theory. And given that the placement of the QSOs are independent of one another, isn't the joint probability the product of the probabilities of the individual placements?

That computes the probability of a very specific type of data set (one with those particular deltaz values). That's not at all what you want.

Look - let's make it even simpler. Suppose there's only a single data point, drawn from a flat distribution, and it's (I don't know) .22370000. Now, what's the probability of finding that result? My god - it's zero!!! The odds of finding that value, precisely that value, are 0!! And if there is some uncertainty in the measurement, the odds are proportional to that uncertainty (which might be very very small). So should we therefore reject the theory that the data was drawn from a flat distribution? OF COURSE NOT.

So the result is inescapable, even when the deltaz of each QSO is different. It's just simple math. I'm not wrong in this case, sol. The formula is correct for 1 "quasi-Karlsson" value and 10 quasars each a different distance from that "quasi-Karlsson" value.

No, the result is totally wrong. Or to be a little more precise - the probability you are computing is the probability of finding a very specific data set - one with one point within deltaz1, another point within deltaz2, etc. But we don't care about that, because (if the null hypothesis is correct) there is nothing at all special about that data set.

It's just like my example above - ANY data set has zero probability!

The random number cases you cited resulted in small probabilities, but that's as one would expect when the probability for any given point being within that point's deltaz of the "quasi-Karlsson" value is < 1. From the set of calculations you did, it would appear that the average probability you will get in that case is around 10^-6 to 10^-7 from random samples ... assuming the distribution they are drawn from is uniform across the total range. But what probability would you get if the set of QSOs you observed were all fairly close to the midpoint ... say within 0.1? Why, around 10^-10. A thousand times less. So the real measure of whether the QSOs are inordinately close is how small the probability is compared to that average probability you get if you assume a random placement.

Yes, that's a very crude way of getting closer to the real procedure. But the point is, the way you've defined things P=10^-7 does NOT mean anything - it is perfectly consistent with the null hypothesis.

If time and time again, you get probabilities that are many, many orders of magnitude smaller than what you expect to get on average, that might be an indication of a problem in your assumptions about the distribution of z.

Agreed.

I've never suggested that the single sample probability in my calculations isn't going to be a small value even if the distribution from which the z come is really uniform and not quantized.

Yes you have - many times.

But it's hard to evaluate what that small probability means, one way or the other, with just that alone. What's important is the final probability, accounting for the maximum possible number of cases there could be with r quasars, or better yet, the number of cases actually examined before finding the observations you have that contain those r-quasar low-probability cases. If you multiply the single sample probability by the total number of cases that might possibly exist, and you still end up with probabilities much less than one, and you have found numerous such cases, then perhaps it is time to reevaluate your assumption about the distribution of z in the total interval. OK?

As I keep telling you, what you would have to do is compute what fraction of the possible data sets are MORE significant (more tightly clustered around the Karlsson values) than the one you actually measured. That fraction is something close to the real significance (for example, in the case above with 10 QSOs the typical P value was around 10^-7, so a P value like that is not at all significant, but a P value of 10^-10 would be).

You have not done such a calculation, so the smallness of the P values you were finding is completely meaningless.
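
A sketch of that kind of calculation (my own illustration, using the product-of-2*deltaz "P" and one of the random samples discussed in this thread):

Code:
import random

def p_product(zs, peak=0.5):
    # The "P" under discussion: product over the sample of 2*|z - peak|.
    p = 1.0
    for z in zs:
        p *= 2.0 * abs(z - peak)
    return p

# One of the random (flat-distribution) samples discussed in the thread.
observed = [0.284, 0.944, 0.518, 0.817, 0.340, 0.726, 0.516, 0.388, 0.687, 0.829]
p_obs = p_product(observed)

# The real significance: what fraction of data sets drawn from the null
# hypothesis (flat distribution on [0, 1]) are MORE tightly clustered,
# i.e. have an even smaller P than the observed one?
random.seed(2)
n_sim = 100000
smaller = sum(
    p_product([random.random() for _ in range(len(observed))]) <= p_obs
    for _ in range(n_sim)
)
print("observed P:", p_obs)                        # about 2.2e-06
print("fraction of null sets with smaller P:", smaller / n_sim)
# That fraction comes out to roughly 0.2-0.25, so a P of ~1e-06 is
# entirely unremarkable under the null hypothesis.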
 
What I said (perhaps not perfectly clearly) is that for a typical random distribution, the probability for that distribution should be around .5 (by definition of "typical"). A typical random distribution will have an average deltaz of .25. If you could plug that average into your formula, you'd get P~.001 (which I said). Of course the actual values of P will be lower, since the points with deltaz smaller than .25 will matter more. Therefore, P is not the probability you want.

I'm still unclear what you are trying to claim here, sol. The formula IS correct for 1 quasi-Karlsson value at 0.5 and 10 random QSOs in the interval 0 to 1. It is correct for the reasons I noted. We know the probability of any given QSO being that close to the midpoint. The QSOs' z values are independent. Therefore, their combined probability of being that near the midpoint is the product of each of their individual probabilities. That IS the probability I said I was calculating and what we want.

If what you want to calculate is the confidence with which you can reject null hypothesis given the data, it is wrong.

How one interprets that probability is another matter. That interpretation is not clear cut, as I admitted, and very much a function of other parameters that I tried to define. If you read the early parts of this thread, you will encounter my use of Bayes' Theorem as a means of expressing the probability that the mainstream theory is right given the existence of these data points whose calculated probabilities seem so out of line with what has actually been observed. You might want to check that out.

Quote:
And isn't it a fact that the placement of each QSO is independent of the placement of any other. They are independent phenomena ... under the assumptions of mainstream theory. And given that the placement of the QSOs are independent of one another, isn't the joint probability the product of the probabilities of the individual placements?

That computes the probability of a very specific type of data set (one with those particular deltaz values). That's not at all what you want.

That is indeed what we want. We want to find the probability of encountering an observation, like the one we observe, with those specific deltaz values or less. The math is clear. The z values of QSOs in any given local region (say near a galaxy) are independent of one another. And their individual probabilities, with respect to the Karlsson values, have nothing to do with one another. They are independent, hence the combined probability of seeing an observation with those specific deltaz is the product of the individual probabilities. That IS what we want.

Suppose there's only a single data point, drawn from a flat distribution, and it's (I don't know) .22370000. Now, what's the probability of finding that result? My god - it's zero!!! The odds of finding that value, precisely that value, are 0!

That's not what we are calculating here, sol. We don't seek the probability of finding a z of exactly 0.22370000, but the probability of finding an observation where deltaz (here 0.5 - 0.2237 = 0.2763) is equal to OR LESS THAN the deltaz for the observed z value of the data point. Say the data point is 1.0. Deltaz = 0.5. The probability that we will see an observation with that deltaz or less is equal to 1.0. As it should be. As it will be calculated by the formula. Say the data point is 0.5. Deltaz is 0.0. The probability that a data point will be observed that is exactly at 0.500000000000 is zero. THAT is the problem we are solving here, sol. The formula is correct. It's the mainstream theory that may be wrong. Unless you want to critique my assumptions regarding the number of quasars and their distribution. :D
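
A quick numerical check of that statement (a sketch, assuming z is drawn uniformly between 0 and 1 and the single quasi-Karlsson value sits at 0.5):

Code:
import random

# Verify that the chance of a uniform z landing within deltaz of 0.5 is 2*deltaz.
random.seed(0)
n = 200000
for deltaz in (0.5, 0.25, 0.05):
    hits = sum(abs(random.random() - 0.5) <= deltaz for _ in range(n))
    print("deltaz =", deltaz, " simulated:", hits / n, " 2*deltaz =", 2 * deltaz)
# deltaz = 0.5 covers the whole interval (probability 1); below that the
# probability falls off linearly, as the 2*deltaz formula says.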

Quote:
I've never suggested that the single sample probability in my calculations isn't going to be a small value even if the distribution from which the z come is really uniform and not quantized.

Yes you have - many times.

Challenge. Quote where I've stated the single sample probability in my calculations is going to be anywhere near 1. Dare you.

for example, in the case above with 10 QSOs the typical P value was around 10^-7, so a P value like that is not at all significant, but a P value of 10^-10 would be

Which is what I said ... but with one important, additional caveat. If the maximum number of samples that could possibly exist were only 1000 (in the above example), and you were seeing multiple observations with calculated probabilities of 10^-7 according to your model, then that would tell you that your assumption that the z distribution is uniform is probably wrong. :)

You have not done such a calculation, so the smallness of the P values you were finding is completely meaningless.

Wrong. I have done such a calculation. I've estimated the max possible number of cases in the sky for a given observation and then multiplied that by the single sample probability. And because the result has indicated a probability that is << 1, I've suggested that the assumption that z is uniformly distributed is wrong. And that conclusion is strengthened by the fact that the number the single sample probability should be multiplied by is even smaller ... the fraction of quasar/galaxy associations that have actually been investigated before finding the problematic observations.
 
Wrong. I have done such a calculation. I've estimated the max possible number of cases in the sky for a given observation and then multiplied that by the single sample probability. And because the result has indicated a probability that is << 1, I've suggested that the assumption that z is uniformly distributed is wrong.

OK, let's take a quick look at that, then.

First, let's take my case with 10 QSOs from 0 to 1. If the accuracy of each measurement is .01, there are 100^10 = 10^20 possible data sets, almost all of which are not clustered around .5. So a "P" of 10^-7 is totally meaningless (as we already knew).

Now let's take your case. You had around 7 QSOs distributed from 0 to 3. In at least one case you had a deltaz of .01, so you're assuming the measurements are at least that accurate (otherwise the formula would be wrong for another reason). So that gives 30^7 = 2*10^10 possible values for those 7 QSOs. So if the probability is not of order 10^-10, you can't reject the null hypothesis.

What were the P values you calculated :)?

Of course the method you've chosen is extremely crude - there are much more sophisticated ways to do this analysis - but with that correction it's very roughly correct. (One caveat is that if these particular galaxies were cherrypicked from a bigger data set (that is, chosen because they have small "P" values) we must also multiply by the number of data sets they were picked from.)
 
Um BAC, I think you will have a hard time finding that the mainstream says that z values or QSO locations are uniformly distributed; that is your straw man.

So why not back it up with citation and evidence.

DRD has already called you on the second one, the mainstream does not say that the QSOs are evenly distributed.

But ignorance (not seeing what others say) is your forte.

So attack your strawman in your forte. You aren't challenging the mainstream, just your misinterpretation of it.
 
Um BAC, I think you will have a hard time finding that the mainstream says that z values or QSO locations are uniformly distributed; that is your straw man.

As far as I can tell, BAC's (really Arp's) hypothesis is that the redshifts are quantized near (or at) these so-called Karlsson peaks, adjusted for the redshift of the parent galaxy. That's certainly a testable hypothesis which contradicts the mainstream view. Now it's true that QSO redshifts are not going to be uniform (for quite a few different reasons), but we could nonetheless take that to be the null hypothesis and try to reject it.

However, as I have shown, BAC's "P" value being small does NOT allow one to reject the null hypothesis with any confidence at all.
 
First, let's take my case with 10 QSOs from 0 to 1. If the accuracy of each measurement is .01, there are 100^10 = 10^20 possible data sets

Now you really have me thinking you don't understand at all what is being calculated here, sol. You have 10 QSOs with a measured z for each. If the measurements are uncertain, then as long as the uncertainty is but a small fraction of the deltaz values and not biased to any significant degree, it has little impact on the overall probability. The "true" value of some data points will be a little larger and of others a little smaller. Some probabilities will be a little larger than their "true" value and some will be a little smaller. The effect on the final probability will hopefully tend to wash out in the end.

For example, let's take the first set of random data you cited. You had z = 0.284, 0.944, 0.518, 0.817, 0.340, 0.726, 0.516, 0.388, 0.687, 0.829 and said it produced a probability of 2.3 x 10^-6. Suppose every other one of those z is too low by 0.01 and the others are too high by 0.01. That means the "true" combined probability is 2^10*|.5-.294|*|.5-.934|*|.5-.528|*|.5-.807|*|.5-.350|*|.5-.716|*|.5-.526|*|.5-.378|*|.5-.697|*|.5-.819| = 1024*.206*.434*.028*.307*.150*.216*.026*.122*.197*.319 = 5.08 x 10^-6. In other words, it doesn't significantly change.
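
That arithmetic can be checked with a short sketch (assuming the same product-of-2*deltaz formula, with the 0.01 shift alternating in sign down the list as described):

Code:
z_quoted = [0.284, 0.944, 0.518, 0.817, 0.340, 0.726, 0.516, 0.388, 0.687, 0.829]

def p_product(zs, peak=0.5):
    # Product over the sample of 2*|z - peak|.
    p = 1.0
    for z in zs:
        p *= 2.0 * abs(z - peak)
    return p

# Shift alternate entries up and down by the assumed 0.01 measurement error.
z_shifted = [z + 0.01 if i % 2 == 0 else z - 0.01 for i, z in enumerate(z_quoted)]

print("P with the quoted z values:", p_product(z_quoted))    # about 2.2e-06 (quoted as 2.3e-06 above)
print("P with the +/-0.01 shifts: ", p_product(z_shifted))   # about 5.1e-06
# A factor of roughly two, not a change of orders of magnitude.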

Now let's take your case. You had around 7 QSOs distributed from 0 to 3. In at least one case you had a deltaz of .01, so you're assuming the measurements are at least that accurate (otherwise the formula would be wrong for another reason).

Well, first of all, the measurements of z are that accurate. In fact, the values given for z are routinely quoted to an accuracy of 0.001 ... else all the studies out there citing z values of x.xxx only prove that astronomers as a rule don't understand the accuracy of their measurements at all. You are grasping at straws now, sol. Concern about whether the deltaz I used is 0.011 or 0.009 is a red herring. That doesn't affect the results by the orders of magnitude that would be required to invalidate the conclusions I am reaching through my calculations.

So that gives 30^7 = 2*10^10 possible values for those 7 QSOs.

And statements like this, frankly, just show you don't even understand what's being calculated here.

What were the P values you calculated ?

For NGC 3516, NGC 5985, NGC 2639, NGC 1068, NGC 4030? That was covered in post #391 to you. The probabilities I calculated for each were 1.4, 0.12, 0.007, 0.0007, and 0.35, assuming that all possible quasar/galaxy associations (conservatively estimated) had been examined. In reality, only a fraction of those associations were examined before finding these cases so the real probabilities are much lower than those ... probably several orders of magnitude lower at least. Therefore, in each case it would be a big surprise to have found such an observation. :D

Of course the method you've chosen is extremely crude - there are much more sophisticated ways to do this analysis

Yeah ... I keep getting told that by folks who seem very reluctant to even attempt their own estimate of the probability in each of these cases ... other than to claim it's 1.0. ;)

One caveat is that if these particular galaxies were cherrypicked from a bigger data set (that is, chosen because they have small "P" values) we must also multiply by the number of data sets they were picked from.)

Go back and look at what I did in post #391, sol. I multiplied them by a number much larger than the number of quasar/galaxy associations that had actually been examined when those cases were found ... what I believe is a very conservative estimate of the total number of observable quasar/galaxy associations that are possible in the sky as a whole given current technology. Again, you are just demonstrating, like all the rest of the Big Bang defenders on this thread, that you haven't actually read what I posted or understood it. This grows very tiresome.

And by the way, one more point. Your description that these cases are "cherry picked" is not correct. I have not picked them because I did a calculation and found they were the low probability ones. They were simply some of the cases I found in the literature where claims were being made about the position of the quasars relative to galaxy features such as the minor axis ... not about the redshifts relative to Karlsson values. I'd be happy to add any observation with a large number of quasars near a galaxy to this list that you like and calculate its expected probability given the mainstream's assumptions. I did that with NGC 4030, suggested by DRD. I've also added quasars in some of the examples when their presence was identified by others. If you want to pick some cases to add to the list in order to see what we find, be my guest and offer them up. But keep in mind that even if we encounter some cases where the probabilities calculated are more in line with what we'd expect, that doesn't invalidate the other cases. No claim is made that all quasars are the result of some process related to Karlsson values. But I guess we can cross that bridge when your side actually does begin to offer real observations that fit the mainstream assumptions rather than increase doubt. :)
 
You're just not getting it, BAC. I don't know how to be any more clear. Your "probabilities" are totally wrong.

Let me ask you a very simple question. Take my example - 10 values between 0 and 1, with one Karlsson peak at .5. Let's say each value is measured to 2 significant figures.

Answer the following basic question: given such a data set, how small does P have to be before we can reject the hypothesis (say with 95% confidence) that the data set was drawn from a flat random distribution?

If you can't answer that question, your method is useless. If you can, we will go on from there.
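
For what it's worth, here is a sketch of one way to answer that question numerically (same assumptions as before: 10 values, one peak at 0.5, a flat null hypothesis, and P = product of 2*deltaz):

Code:
import random

def p_product(zs, peak=0.5):
    # Product over the sample of 2*|z - peak|.
    p = 1.0
    for z in zs:
        p *= 2.0 * abs(z - peak)
    return p

# Simulate the null hypothesis many times and take the 5th percentile of P:
# only an observed P below that value can be rejected at 95% confidence.
random.seed(3)
n_sim = 200000
null_ps = sorted(p_product([random.random() for _ in range(10)])
                 for _ in range(n_sim))
threshold = null_ps[int(0.05 * n_sim)]
print("reject the flat distribution at 95% confidence only if P <", threshold)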
 
As far as I can tell, BAC's (really Arp's) hypothesis is that the redshifts are quantized near (or at) these so-called Karlsson peaks, adjusted for the redshift of the parent galaxy. That's certainly a testable hypothesis which contradicts the mainstream view. Now it's true that QSO redshifts are not going to be uniform (for quite a few different reasons), but we could nonetheless take that to be the null hypothesis and try to reject it.

However, as I have shown, BAC's "P" value being small does NOT allow one to reject the null hypothesis with any confidence at all.

For reasons that DRD discussed, QSOs are not evenly distributed but clumpy to some extent; on the z-shift I have misspoken.

The Karlsson values are a little vague and strange, and somewhat broad as well.

BAC will refuse to test against a null hypothesis, which is one point of the thread; there is no normative distribution being compared against, just an average value.
 
Um BAC, I think you will have a hard time finding that the mainstream says that z values or QSO locations are uniformly distributed; that is your straw man.

Um David, as I pointed out to DRD previously ... posts I guess you read without actually reading ... the uniform distribution is an approximation that captures the essence of the mainstream model. As I've stated, it accounts for the fact that the mainstream claims the frequency distribution of z has no periodicity to it ... in other words, there are no blips (increased frequency) at specific redshifts. Second, over the range from 0 to 3, I showed previously that even if we alter the weight of the various data points to reflect the real non-uniform distribution, it doesn't alter the conclusions. The probabilities are still too small to be easily explained away and in some cases, a better accounting of the distribution's shape actually lowers the probability even further. But since you raise this concern again, let's look at the effect on one of the cases I calculated. :)

First, the real distribution (according to mainstream documents) has the frequency at various z rising from about one-eighth the maximum value (call it zero) near z=0 to a maximum around z equals 1 to 1.5. Then it stays roughly constant out to a z of about 2.5 to 3.0. Then it drops rather quickly back to a value of only 1/40th to 1/50th the maximum at z=6. Thus, the low z data points (below z = 1) will tend to be overweighted if we assume a uniform distribution. In other words, the impact on the total probability calculated of those z < 1 data points is more than it should be. Similarly, one can see that the impact of the higher z values is less than it should be under the real distribution.

Let's look at a real case. For NGC 3516, the observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are 0.03, 0.09, 0.03, 0.01, 0.14. The contribution of each data point to making the probability low (0.00025 in my most recent calculation) is directly proportional to the size of each of the above spacings. Now let's see if we can weight them to account for the real frequency distribution. We gave too much weight to the data points that are at z = 0.33, 0.69 and 0.93. The data points at z = 1.40 and 2.10 are underweighted. The power law weighting that I suggested would raise each of the individual probabilities to the power w = z*1.201, capped at w <= 1.20. So

P = ((2*7)/3)^5 * (0.03^0.40 * 0.09^0.83 * 0.03^1.15 * 0.01^1.20 * 0.14^1.20) = 2213 * .25 * .14 * .018 * 0.004 * .095 = 0.00053.

Now my gut feeling is that weighting is conservative ... that the real effect of the non-uniform nature of the distribution compared to the uniform one is less than that. And as it is, it only raised the probability by about a factor of two.
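
For reference, here is a minimal sketch that reproduces the unweighted figures quoted above (the 2213 prefactor and the 0.00025), assuming the prefactor is ((2*7)/3)^n as in the formula, i.e. 7 Karlsson peaks over a total z range of 3; the weighted variants depend on a frequency model that isn't fully pinned down here, so the sketch sticks to the unweighted number:

Code:
# QSO redshifts near NGC 3516 and the Karlsson values quoted above.
z_obs = [0.33, 0.69, 0.93, 1.40, 2.10]
karlsson = [0.30, 0.60, 0.96, 1.41, 1.96, 2.64]

# Spacing of each QSO from its nearest Karlsson value.
spacings = [min(abs(z - k) for k in karlsson) for z in z_obs]
print("spacings:", [round(s, 2) for s in spacings])   # 0.03, 0.09, 0.03, 0.01, 0.14

# Unweighted "P": prefactor ((2*7)/3)^n times the product of the spacings.
n = len(z_obs)
prefactor = ((2 * 7) / 3) ** n
p = prefactor
for s in spacings:
    p *= s
print("prefactor:", round(prefactor))                 # about 2213
print("unweighted P:", p)                             # about 0.00025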

Another, perhaps more reasonable way of weighting the effect is simply to ratio the individual probabilities by the ratio of the frequency of the non-uniform (real) distribution to the uniform distribution at each of their z values. In other words, for the first point with P1 = 0.03, the frequency of the non-uniform case is about .33/(1/1.2) = 0.4 of the assumed uniform distribution frequency. So the *real* probability of that point should be 0.03/0.4 = 0.075, not 0.25 as calculated with power law weighting. Doing something similar for the other cases would yield P = 2213 * 0.075 * 0.109 * 0.026 * 0.00833 * 0.117 = 0.00046. Not all that different from 0.00025.

And please note that the above methods do not always result in higher probabilities. As I showed previously with some of them, it can actually produce a lower probability estimate. In any case, I welcome any constructive suggestions you might have, David, as to the proper way of accounting for the non-uniform nature of the mainstream's z frequency distribution. Constructive, David. Else I'll simply continue to ignore you.
 
You're just not getting it, BAC. I don't know how to be any more clear. Your "probabilities" are totally wrong.

Sol, you're just not getting it. I don't know how to be any more clear. You don't even seem to understand what probabilities are being calculated here (I'm not calculating the probability of z or deltaz being a specific number). And the formula I'm now using is correct ... as I've amply demonstrated using your own examples.

Take my example - 10 values between 0 and 1, with one Karlsson peak at .5. Let's say each value is measured to 2 significant figures.

Why 2 significant figures? Why not 3 as the mainstream appears to think it can measure?

Answer the following basic question: given such a data set, how small does P have to be before we can reject the hypothesis (say with 95% confidence) that the data set was drawn from a flat random distribution?

Well that depends on how many total samples might be possible, how many cases with very low probabilities have actually been observed, and how much of the total population has been sampled to get those observations. You see, sol, the fact that you forgot that shows that you just don't get it. :D
 
Let's look at a real case. For NGC 3516, the observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are 0.03, 0.09, 0.03, 0.01, 0.14 .

Sorry, BAC, that last spacing is 0.54. We will have to redo the calculation, but I haven't the time now...
 
Um David, as I pointed out to DRD previously ... posts I guess you read without actually reading ... the uniform distribution is an approximation that captures the essence of the mainstream model.
In terms of ?

As DRD pointed out, the mainstream model does not say that QSOs will be evenly distributed. So that is a strawman. As I said to Sol, I evidently misspoke about z-shifts. And I will say it to you: I was wrong.

Yet you have not said how your method or Arp's shows that the distribution of QSOs near Arp galaxies is any different from the distribution of QSOs in general.

So again, you might want to address that.

Maybe not.

I did ask you a bunch of questions about what hypothesis you are promoting. There seem to be two: that QSOs have a higher association with Arp galaxies than would be 'normal'. Yes?

And that there are these Karlsson peaks which somehow indicate that the z-shifts of QSOs near Arp galaxies are aberrant. Yes?
As I've stated, it accounts for the fact that the mainstream claims the frequency distribution of z has no periodicity to it ... in other words, there are no blips (increased frequency) at specific redshifts. Second, over the range from 0 to 3, I showed previously that even if we alter the weight of the various data points to reflect the real non-uniform distribution, it doesn't alter the conclusions. The probabilities are still too small to be easily explained away and in some cases, a better accounting of the distribution's shape actually lowers the probability even further. But since you raise this concern again, let's look at the effect on one of the cases I calculated. :)
I asked about the proximity distribution of QSOs; will you answer about that? I don't recall mentioning the z-shift in the OP. I did ask if you had some good introductory sites on the Karlsson peak phenomenon.
First, the real distribution (according to mainstream documents) has the frequency at various z rising from about one-eighth the maximum value (call it zero) near z=0 to a maximum around z equals 1 to 1.5. Then it stays roughly constant out to a z of about 2.5 to 3.0. Then it drops rather quickly back to a value of only 1/40th to 1/50th the maximum at z=6. Thus, the low z data points (below z = 1) will tend to be overweighted if we assume a uniform distribution. In other words, the impact on the total probability calculated of those z < 1 data points is more than it should be. Similarly, one can see that the impact of the higher z values is less than it should be under the real distribution.

Let's look at a real case. For NGC 3516, the observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are 0.03, 0.09, 0.03, 0.01, 0.14. The contribution of each data point to making the probability low (0.00025 in my most recent calculation) is directly proportional to the size of each of the above spacings. Now let's see if we can weight them to account for the real frequency distribution. We gave too much weight to the data points that are at z = 0.33, 0.69 and 0.93. The data points at z = 1.40 and 2.10 are underweighted. The power law weighting that I suggested would raise each of the individual probabilities to the power w = z*1.201, capped at w <= 1.20. So

P = ((2*7)/3)^5 * (0.03^0.40 * 0.09^0.83 * 0.03^1.15 * 0.01^1.20 * 0.14^1.20) = 2213 * .25 * .14 * .018 * 0.004 * .095 = 0.00053.

Now my gut feeling is that weighting is conservative ... that the real effect of the non-uniform nature of the distribution compared to the uniform one is less than that. And as it is, it only raised the probability by about a factor of two.
Okay, what about the proximity distribution of QSOs and Arp galaxies?
Another, perhaps more reasonable way of weighting the effect is simply to ratio the individual probabilities by the ratio of the frequency of the non-uniform (real) distribution to the uniform distribution at each of their z values. In other words, for the first point with P1 = 0.03, the frequency of the non-uniform case is about .33/(1/1.2) = 0.4 of the assumed uniform distribution frequency. So the *real* probability of that point should be 0.03/0.4 = 0.075, not 0.25 as calculated with power law weighting. Doing something similar for the other cases would yield P = 2213 * 0.075 * 0.109 * 0.026 * 0.00833 * 0.117 = 0.00046. Not all that different from 0.00025.

And please note that the above methods do not always result in higher probabilities. As I showed previously with some of them, it can actually produce a lower probability estimate. In any case, I welcome any constructive suggestions you might have, David, as to the proper way of accounting for the non-uniform nature of the mainstream's z frequency distribution. Constructive, David. Else I'll simply continue to ignore you.

You already are; I am trying to converse with you.
 
Why 2 significant figures? Why not 3 as the mainstream appears to think it can measure?

Fine - use 3.

Well that depends on how many total samples might be possible, how many cases with very low probabilities have actually been observed, and how much of the total population has been sampled to get those observations. You see, sol, the fact that you forgot that shows that you just don't get it. :D

Suppose we've looked at one place in the sky and seen 10 QSOs. That's it - we've only looked in that region, and no more observations have yet been made. The QSOs have some z values, and from that you compute your P. If you think you need to assume something about how many more QSOs might eventually be observed (which is wrong), go ahead and assume something.

So - what's the answer? How small does P have to be to reject the hypothesis that the QSOs are uniformly distributed in z?
 
Um David, as I pointed out to DRD previously ... posts I guess you read without actually reading ... the uniform distribution is an approximation that captures the essence of the mainstream model. As I've stated, it accounts for the fact that the mainstream claims the frequency distribution of z has no periodicity to it ... in other words, there are no blips (increased frequency) at specific redshifts. Second, over the range from 0 to 3, I showed previously that even if we alter the weight of the various data points to reflect the real non-uniform distribution, it doesn't alter the conclusions. The probabilities are still too small to be easily explained away and in some cases, a better accounting of the distribution's shape actually lowers the probability even further. But since you raise this concern again, let's look at the effect on one of the cases I calculated. :)

First, the real distribution (according to mainstream documents) has the frequency at various z rising from about one-eighth the maximum value (call it zero) near z=0 to a maximum around z equals 1 to 1.5. Then it stays roughly constant out to a z of about 2.5 to 3.0. Then it drops rather quickly back to a value of only 1/40th to 1/50th the maximum at z=6. Thus, the low z data points (below z = 1) will tend to be overweighted if we assume a uniform distribution. In other words, the impact on the total probability calculated of those z < 1 data points is more than it should be. Similarly, one can see that the impact of the higher z values is less than it should be under the real distribution.

Let's look at a real case. For NGC 3516, the observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are 0.03, 0.09, 0.03, 0.01, 0.14. The contribution of each data point to making the probability low (0.00025 in my most recent calculation) is directly proportional to the size of each of the above spacings. Now let's see if we can weight them to account for the real frequency distribution. We gave too much weight to the data points that are at z = 0.33, 0.69 and 0.93. The data points at z = 1.40 and 2.10 are underweighted. The power law weighting that I suggested would raise each of the individual probabilities to the power w = z*1.201, capped at w <= 1.20. So

P = ((2*7)/3)^5 * (0.03^0.40 * 0.09^0.83 * 0.03^1.15 * 0.01^1.20 * 0.14^1.20) = 2213 * .25 * .14 * .018 * 0.004 * .095 = 0.00053.

Now my gut feeling is that weighting is conservative ... that the real effect of the non-uniform nature of the distribution compared to the uniform one is less than that. And as it is, it only raised the probability by about a factor of two.

Another, perhaps more reasonable way of weighting the effect is simply to ratio the individual probabilities by the ratio of the frequency of the non-uniform (real) distribution to the uniform distribution at each of their z values. In other words, for the first point with P1 = 0.03, the frequency of the non-uniform case is about .33/(1/1.2) = 0.4 of the assumed uniform distribution frequency. So the *real* probability of that point should be 0.03/0.4 = 0.075, not 0.25 as calculated with power law weighting. Doing something similar for the other cases would yield P = 2213 * 0.075 * 0.109 * 0.026 * 0.00833 * 0.117 = 0.00046. Not all that different from 0.00025.

And please note that the above methods do not always result in higher probabilities. As I showed previously with some of them, it can actually produce a lower probability estimate. In any case, I welcome any constructive suggestions you might have, David, as to the proper way of accounting for the non-uniform nature of the mainstream's z frequency distribution. Constructive, David. Else I'll simply continue to ignore you.

BAC, there are many errors, mis-statements, and so on in this post of yours.

Many, perhaps most, of these I have tried to get you to address, in many, many posts in this thread.

Central to the problems is your approach - what is the hypothesis you are testing (quantitatively)? what is the null hypothesis? how, in the calculations you perform, are you controlling for the many, many, many selection effects and biases? And so on.

For whatever reason, you have either not answered my questions, not responded to my posts, or dismissed what I wrote without (it is patently obvious) understanding.

Worse, on at least two occasions, I have offered you some ideas on how to show your approach might/could/should (or not) work, outside the specific, a posteriori, cases you are so obviously enamoured with. In neither case did you even bother to respond.

May I ask why you seem so immune to engaging in a real discussion? Particularly if, as seems clear, you are convinced that you have found something truly remarkable (that thousands, or more, of professionals who've worked on similar topics for decades, have missed)?
 
Hmm, in this paper hot off the press Arp says that he went looking for QSOs that matched the peaks, hardly what I would call good sampling, especially when he says later it fits like a key in a lock.

Well... he is still a great astronomer even if I disagree with his methods.

http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.2591v1.pdf

Great find on that paper.

I agree with your comments:

Bad sampling, good astronomer.

I have always had a soft heart for the rogue underdogs.

Keith
 

It seems I was insufficiently clear.

Here's an example of what I mean by different inputs:

Which galaxy/galaxies do I choose to run the calculations on? How do I know if I've chosen the 'right' galaxy (or kind of galaxy)? Can I choose any old low redshift galaxy? Or must it be a galaxy found in one of Arp's papers?

Having chosen a galaxy, how far out do I look for quasars? 30'? 60'? 90'? more? How do I decide how far out I should look, in general?

Having decided how far out to go, how do I decide which objects within the circle to choose? Only those which are BSOs on Palomar Schmidt plates AND are x-ray sources? how about an AGN which is in a type 2 Seyfert? or a type 2 quasar? a BL Lac object?

Having selected my set of 'quasars', how do I decide which ones are 'on' the minor axis of my chosen galaxy? Do I say the cutoff is 45° (the criterion L-C&G used)? Or is it 25° (what we might infer from reading some of the Arp et al. papers)? Or can I arbitrarily select a criterion?

Having selected my 'minor axis quasars', how do I calculate which Karlsson peak each is 'near'? Do I use one of Karlsson's papers for those peaks? or one of Arp's? In calculating 'distance' from a peak, what do I do if the redshift is 'near' the midpoint between two peaks? How do I incorporate stated observational uncertainty in the published redshifts?

So you see, BAC, there are a lot of things one must decide before even starting any calculation ... and I freely confess to not knowing how to make any of the decisions in the chain I briefly outlined above.

And to reinforce my point: if you, BAC, are the only person who can say whether the inputs have been selected correctly or not (before a calculation begins), how can your approach be said to be objective?

BeAChooser hasn't replied to this post of mine yet, but I don't want to wait, so I've proceeded to do some 'BAC approach' research anyway.

In light of BAC's oft stated (usually with great conviction, certainty, etc) opinion that his approach can be used to test whether a particular configuration of 'quasars' has an unbelievably low probability of occurring, using 'mainstream assumptions', 'mainstream theory', or 'mainstream models', I'll present my results in stages.

I would like BAC - or anyone else who thinks they understand his approach sufficiently well to successfully apply it - to calculate the 'probability' of these configurations, in the same way as he has done for the Chu et al. 'quasars' around NGC 5985 (or was it NGC 3516's quasars?).

First, here are the redshifts of the 33 'QSOs' NED lists as being within 30' of a bright, low redshift spiral galaxy (z = 0.00411):

0.159193, 0.162, 0.336373, 0.421634, 0.607041, 0.7359, 0.7361, 0.777445, 0.7954, 0.8798, 1.04486, 1.0928, 1.2489, 1.2649, 1.371, 1.4574, 1.4791, 1.5363, 1.5962, 1.61306, 1.6569, 1.73016, 1.7623, 1.8089, 1.841, 1.9192, 1.9593, 1.9728, 2.13761, 2.19801, 2.2645, 2.49, and 2.57.

Next, here are the quasars that lie 'predominantly along' a preferred direction (one such is the 'minor axis', the other is a different 'preferred direction'; I'm not saying which is which):

a) 0.162, 0.336373, 1.371, 1.4791, 1.61306, 2.13761, 2.19801

b) 0.607041, 0.8798, 1.04486, 1.73016, 1.841, 1.9192, 2.57.

I have also come across a 'redshift relationship' in the astronomical literature that I am in the process of converting to something equivalent to the Karlsson peaks. Once I am done, I'd like BAC to calculate the 'probabilities' using these (call them the 'Amaik peaks', for now) too.
 
Here are the "Amaik peaks":

0.04
0.43
0.78
1.13
1.56
2.33
2.43.

It would also be interesting to see what the 'BAC approach probabilities' are, using some random, a posteriori values. Just for fun, instead of random, I have chosen this set of seven values: 0.38, 0.75, 1.13, 1.5, 1.88, 2.25, 2.63.

In forthcoming posts I will present the redshifts of quasars found within 60' of a different, low-redshift, bright spiral galaxy (than the one in my last post), and note those which lie 'predominantly' along the minor axis (as well as a different 'preferred direction'); I will also do the same for quasars within 60' of a random position in the sky, and two random directions, such that the 'predominantly along' objects do not overlap. I am, as per the last post, most interested in having BeAChooser calculate the probabilities of each 'predominantly along' set being 'near' Karlsson peaks, Amaik peaks, and the 'random' values, for all six sets.
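
If it helps move things along, here is a sketch of how such a calculation might go, assuming the recipe used earlier in the thread (a product over the QSOs of 2*K*deltaz/R, with K peaks over an assumed z range R = 3, deltaz being the distance to the nearest peak, and 0.06 added to the six Karlsson values quoted earlier since that recipe counts 7 peaks); whether those are the right inputs is, of course, part of what is in question:

Code:
# Karlsson peaks (the six values quoted earlier plus 0.06, since the earlier
# prefactor counts 7 peaks), the "Amaik peaks" listed above, and the
# arbitrary comparison set, all treated the same way.
PEAK_SETS = {
    "Karlsson": [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64],
    "Amaik":    [0.04, 0.43, 0.78, 1.13, 1.56, 2.33, 2.43],
    "chosen":   [0.38, 0.75, 1.13, 1.50, 1.88, 2.25, 2.63],
}

# The two 'predominantly along a preferred direction' subsets quoted above.
SUBSETS = {
    "a": [0.162, 0.336373, 1.371, 1.4791, 1.61306, 2.13761, 2.19801],
    "b": [0.607041, 0.8798, 1.04486, 1.73016, 1.841, 1.9192, 2.57],
}

def p_recipe(zs, peaks, z_range=3.0):
    # The recipe used earlier in the thread: product over QSOs of
    # 2*K*deltaz/R, where deltaz is the distance to the nearest peak,
    # K the number of peaks, and R the assumed total z range.
    k = len(peaks)
    p = 1.0
    for z in zs:
        deltaz = min(abs(z - peak) for peak in peaks)
        p *= 2.0 * k * deltaz / z_range
    return p

for subset_name, zs in SUBSETS.items():
    for peak_name, peaks in PEAK_SETS.items():
        print(subset_name, peak_name, p_recipe(zs, peaks))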

I will also, later, comment on the variations in the areal density of quasars.
 
