Arp objects, QSOs, Statistics

The two with the 'same' redshift as those in the Arp paper seem to be quite different quasars:

1537+595 NED01 and NED02 are both listed as being 7.5' from NGC 5985, and have redshifts of 2.132 and 1.968 (respectively); the two 'Arp' quasars have redshifts listed as 2.125 (12.0' distant) and 1.968 (25.2') (respectively).

Ok. Now that you clarify things, I'll agree that some additional quasars should be added in this case to the calculation. Let's do that:

Revised calculation for NGC 5985.

There are quasars at z = 0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132 and 3.88. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48, the spacings to the nearest Karlsson values are 0.09, 0.15, 0.06, 0.008, 0.008, 0.164 and 0.172 (I will ignore the z = 3.88 datum as it's outside the range of Karlsson values and of the quasar distribution that's defined). The increments are then 0.18, 0.30, 0.12, 0.016, 0.016, 0.328 and 0.344, so n equals 16, 10, 25, 187, 187, 9 and 8. The weighting factors (whether I should use them is a question) are 0.83 and 0.97 for the first two, and 1.201 for all the rest.

Now, P1G = 7! * (1/16)^0.83 * (1/10)^0.97 * (1/25)^1.201 * (1/187)^1.201 * (1/187)^1.201 * (1/9)^1.201 * (1/8)^1.201 = 5040 * 0.100 * 0.107 * 0.021 * 0.0019 * 0.0019 * 0.071 * 0.082 = 2.3 x 10^-8 (compared to 6.8 x 10^-7 previously), with an unweighted probability of 5 x 10^-7 (compared to 5 x 10^-6 previously)
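To make the arithmetic checkable, here is the same product in a few lines of Python (the n values and weights are the ones listed above):

```python
import math

# n values and weighting factors from the revised NGC 5985 calculation
n = [16, 10, 25, 187, 187, 9, 8]
w = [0.83, 0.97, 1.201, 1.201, 1.201, 1.201, 1.201]

# Weighted: P1G = 7! * product of (1/n_i)^w_i
p_weighted = math.factorial(7) * math.prod((1 / ni) ** wi for ni, wi in zip(n, w))

# Unweighted: P = 7! * product of (1/n_i)
p_unweighted = math.factorial(7) * math.prod(1 / ni for ni in n)

print(f"weighted   P1G = {p_weighted:.2e}")   # ~2.3e-08
print(f"unweighted P   = {p_unweighted:.2e}") # ~5.0e-07
```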

Since NmaxGwithr = 1724 for r = 7,

PzTotalNGC5985 = 2.3 x 10^-8 * 1724 = 4 x 10^-5 = 0.00004 (compared to 0.007 previously). DRD, without even looking at the alignment probability, which will be basically unchanged from before since I seem to have used 7 quasars in that calculation with only 4 aligned, this is a very unlikely observation even were we able to examine every possible quasar/galaxy association in the sky. Wouldn't you agree? This is like my holding what you believe is a regular deck of cards, watching me shuffle them, and then beginning to turn over cards. I turn over the first 4 cards and find 4 of a kind. You think "what luck". I turn over the next 4 cards and reveal another 4 of a kind. You think "incredible" and "what are the odds of this happening". But if the next 4 cards turn out to be another 4 of a kind, you should start to wonder whether your assumptions about the deck and the shuffling are correct. :)
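To put rough numbers on the card analogy: assuming a fair 52-card deck, the chance that the first four cards off the top are four of a kind, and that the next four are as well, can be computed directly:

```python
# Probability the first 4 cards dealt from a fair 52-card deck are four of a kind:
# 13 possible ranks, cards drawn without replacement, any order.
p_first = 13 * (4/52) * (3/51) * (2/50) * (1/49)

# Probability the NEXT 4 cards are also four of a kind, given the first 4 were:
# 12 ranks remain intact among the 48 remaining cards.
p_second = 12 * (4/48) * (3/47) * (2/46) * (1/45)

p_both = p_first * p_second
print(f"first quad: {p_first:.2e}")   # ~4.8e-05
print(f"both quads: {p_both:.2e}")    # ~3.0e-09
```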
How about I give you the RA and Dec of all five, and you tell us whether they are sufficiently close to the minor axis to warrant including in your calculations?
Do you think it will help your case? :D
 
(continued)

BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?

For example, do you expect that 'all' 'quasars' 'near' the 'minor axis' of such galaxies will have redshifts that are 'near' one of the 'Karlsson peaks'?

And for each of the key words (near (as in 'near' a low redshift galaxy), low redshift, galaxy, all, quasars, near (as in 'near' the minor axis), minor axis, and Karlsson peak), do you have a priori quantitative values/expressions?

One of my reasons for asking these questions is that you have to hand all that you need to test the model ... SDSS DR6 contains consistent, high quality data on an awful lot of 'low redshift galaxies' and 'quasars', perhaps enough to test your model far, far more extensively than can be done with just a handful of objects selected from papers by Arp et al.

In principle, it would be relatively easy to simply download the galaxy and quasar data from the SDSS data server and analyse it yourself.

Of course, to avoid the sort of a posteriori issues that are so prominent in this thread, you'd need to state the hypotheses you intend to test, including the null hypothesis/ses, very clearly, before you begin the analysis. And, even better, if you create a mock catalogue (or 10,000 mock catalogues), and do some Monte Carlo analyses on these, before you start on the real data, you'd be in even better shape to address the inevitable questions about confirmation bias ...
 
DeiRenDopa said:
The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks',
I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.
The ones, and only the ones, that you have selected?

Or all quasars 'predominantly along' the minor axes?

Out to what distance?

How many 'particular galaxies'?

Or all galaxies of that kind (whatever kind that is)?

I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D

Well, the answer is, as you have already stated, 1 ... because they are observed to be there.

And no, I can't do any calculations of the kind you have done, if only because I (still) don't (yet) understand what those calculations are! See my last post.

In fact, I'd challenge any reader of this thread to repeat the kind of 'probability estimates' you have come up with, using a different set of input data ....
 
DeiRenDopa said:
L-C&G did, in fact, count quasars more than once,
So you think their method overcounted quasars by a factor of 4? :rolleyes:
I'll allow any reader sufficiently interested to go check for themselves (L-C&G are quite open about what they did, and why ...)
{snip}
But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists, in the "Number of pairs" column (meaning galaxy-QSO pairs), "8698" on the "1.0°" line of the theta_max column.
Again, the L-C&G paper states very clearly that the number of samples is insufficient to draw any conclusions. You seem to want to pick and choose ... accept some results and ignore others.

Indeed.

Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?

Normally, when you increase the sample size by a factor of several dozen (or more), you expect a signal that is statistically significant in the smaller sample to become blindingly obvious in the bigger one ... yet this did not, apparently, happen; in fact the opposite happened - how come?
 
I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.

I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D

And your probability method would show that a full house in five-card, no-draw poker comes from a non-random process.

You can't tell an honest dealer from a cheat with your method.
 
{snip}

Do you think it will help your case? :D
I don't know what 'case' you think I'm making (other than to try to understand what you are doing), but in any case, this thread is (as I understand it) an examination of the statistical methods you (and Arp et al.) have used ...

So far my comments have been limited to the inputs - 'what is a quasar?', how was X selected?, what about Y?, etc - rather than the actual calculations themselves.

But maybe I did present a 'case'; if so, would you please point to the post(s) in which I did so?
 
{snip}

There are quasars at z = 0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132 and 3.88. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48, the spacings to the nearest Karlsson values are 0.09, 0.15, 0.06, 0.008, 0.008, 0.164 and 0.172 (I will ignore the z = 3.88 datum as it's outside the range of Karlsson values and of the quasar distribution that's defined). The increments are then 0.18, 0.30, 0.12, 0.016, 0.016, 0.328 and 0.344, so n equals 16, 10, 25, 187, 187, 9 and 8. The weighting factors (whether I should use them is a question) are 0.83 and 0.97 for the first two, and 1.201 for all the rest.

{snip}
As I noted earlier, if the model being tested is 'quantized redshifts', and if those redshifts are 'Karlsson peaks', then first we need a precise definition of those peaks, and a calculation of the z values of those peaks to at least 3 decimal places.

Then we need the stated observational uncertainty of the observed redshifts.

Then, if a relevant peak is more than 3 sigma from an observed redshift, it counts as a 'miss'.

This of course makes the calculations a lot simpler - like heads and tails of flipping a coin.

However, if the model being tested is not 'quantized redshifts', then what is it?

In the absence of stated observational uncertainties we can't do a calculation of how well the data match the model; however, we can sketch one.

Suppose each redshift has an observational, 1 sigma, uncertainty of 0.001.

Then none of the quasars has an observed redshift within 3 sigma of a Karlsson peak (assuming BAC's numbers can be independently verified/validated) - the closest are two of 0.008.

Thus the NGC 5985 data are clearly inconsistent with a model that says quasars ejected (predominantly) along the minor axes of active galaxies have quantized redshifts corresponding to the Karlsson peaks.
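A sketch of this hit/miss test, using the redshifts and Karlsson peaks quoted earlier in the thread and the illustrative 1 sigma of 0.001 (both assumptions, as stated):

```python
# Observed quasar redshifts near NGC 5985 and the Karlsson peaks, as quoted above
zs    = [0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]
peaks = [0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48]
sigma = 0.001  # assumed observational uncertainty

# Distance from each redshift to its nearest peak
dists = [min(abs(z - p) for p in peaks) for z in zs]

# A 'hit' is a redshift within 3 sigma of a peak
hits = sum(d <= 3 * sigma for d in dists)
print(f"nearest-peak distances: {[round(d, 3) for d in dists]}")
print(f"hits within 3 sigma: {hits} of {len(zs)}")  # 0 of 7
```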
 
I am not an expert, nor have I stated that I am one

I'm sorry. I guess I misunderstood your boasts that you are "used to population sampling", "trained in sampling theory, practiced sampling theory and read a lot of publications regarding sampling theory and research articles using sampling theory" and bold assertions that I don't understand probability. To me that seemed like a claim of expertise. :)

Bayesian theory is used in very specific situations and the population distribution of traits is not one of them.

Actually, you might be surprised at how widely used Bayesian theory is outside of your world, David. Maybe that's why you get over 5 million hits if you plug in "Bayesian" into Google. :)

How can your methodology show a difference in placement association of QSOs in relation to Arp galaxies?

Now you want to change your question? Actually, David, I don't think you even know the question. I'm not sure you even know what we are discussing here. :D

BeAChooser wrote: More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

And your numbers would be exactly the same.

Compared to what, David? Be more specific if you want me to understand what you are saying. Be aware that I already demonstrated to DRD that if one plugs in purely random values for z between 0-3 into my calculations instead of the observed z, one obtains VERY different results.

You do know that Bayes' theorem is no longer used in most statistical settings, like epidemiology and drug efficacy, don't you?

Like I said, David, you apparently are unaware how widely used Bayes Theorem is outside your little domain. But then I'm not sure why you're even mentioning Bayes Theorem with regard to the above since that's an aspect of my analysis that comes AFTER the probability I described above is calculated.

If I gave you a random placement of QSOs around galaxies and a causal placement of QSOs around galaxies, your method would give the same numbers.

You are completely wrong, David. I already demonstrated that to DRD for one of the observations by picking a set of z out of my head and calculating a probability to compare to the one generated using the observed z in that case. Did you miss that?

I tell you what, why don't you prove it to yourself. Take the case of NGC 5985 that I just recalculated above and instead of using the observed z in that calculation, generate groups of 7 random values between 0.000 and 3.000 and plug them into the method. Ignore the weighting factors since that won't affect the conclusion. Compile a list of the probabilities you get and then compare the average of those probabilities to the unweighted probability I calculated using the 7 observed z. I predict your results won't even be close, David. Care to bet? :D
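Here's a minimal sketch of exactly that exercise, assuming a uniform z distribution on [0, 3] and simplifying by using the continuous factor increment/3 directly rather than a truncated integer n (that simplification shifts the numbers only slightly):

```python
import math
import random

peaks = [0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48]

def recipe_probability(zs):
    """Unweighted probability per the recipe above: P = r! * prod(increment_i / 3),
    where increment_i is twice the spacing to the nearest Karlsson peak."""
    p = math.factorial(len(zs))
    for z in zs:
        spacing = min(abs(z - pk) for pk in peaks)
        p *= (2 * spacing) / 3.0
    return p

random.seed(0)  # reproducible
trials = 20000
mean_p = sum(recipe_probability([random.uniform(0, 3) for _ in range(7)])
             for _ in range(trials)) / trials

print(f"mean probability for random z: {mean_p:.2e}")
```

With this setup the random-z average lands around 10^-4, hundreds of times larger than the ~5 x 10^-7 obtained from the observed redshifts, which is the comparison being proposed.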

Why are you using Bayes' theorem in a method which is discredited and not used in comparable studies?

Bayes theorem isn't used in the above calculation of probability. Or have you not even figured that out, David? Hmmmm?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values.

No, that is not true. That is like saying that you should not get a full house in stud, no-draw poker unless it is non-random.

I'd love to play poker with you. Tell us, David ... knowing the likely distribution of cards in a shuffled deck, do you expect to see 4 of a kind if I turn over the first 4 cards in the deck? Do you expect to see the next four cards also be 4 of a kind? Because the probability of seeing those quasar observations I've calculated work out to LESS than the probability of seeing 4 of a kind in the above example.

You haven't demonstrated anything except that your use of Bayes' theorem is misplaced.

You keep talking about Bayes Theorem and I'm not sure why given that the methodology in my last several posts doesn't involve it. And frankly, I don't think you even understand Bayes Theorem, David.

So, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z.

No, it is not. It would give the same values for a random and a causal set.

Go ahead, David. Do that little calculation I suggested. Prove me wrong. Prove you get no significant difference in probability if you randomly draw z from a uniform distribution and compare it to what you get from drawing z from a distribution where all the values are close to the specific Karlsson values. Here's your chance to definitively prove me wrong, David. So go for it. :D
 
BAC, you are basically saying that someone who gets a full house in a stud poker game can only do so via a non-random process.

David, I've said nothing of the sort. I grow weary of you again. Do the little Monte Carlo analysis I suggested and stop embarrassing yourself.

2.8301 x 10^-9

And do you expect to get a specific hand ... say a Royal Flush in Spades ... in the first few thousand hands, David? Apparently so. :D
 
{snip}

So how would you use your method to determine whether a placement is causal or random, which is the issue of the thread?

I say that the statistics that you and Arp use cannot do that. You say that they can, so show me.

Thanks for paying some attention.

While you are at it, why don't you address the issue of sample bias?

I know that you have your own thoughts, but given the fact that Arp's and your analyses are not based upon distributions of frequency, how can you tell a random placement of QSOs around a galaxy from a causal one?

That is the thrust of the thread.
 
David, I've said nothing of the sort. I grow weary of you again. Do the little Monte Carlo analysis I suggested and stop embarrassing yourself.



And do you expect to get a specific hand ... say a Royal Flush in Spades ... in the first few thousand hands, David? Apparently so. :D

I expect that they will fall around a random distribution, unless the dealer is cheating. So the hands should follow distribution patterns. You can only tell whether it is cheating or random from a large number of hands.

Duh.

I notice again that you have not answered the question:

How can your method tell a random placement of QSOs around a galaxy from a causal one?

Perhaps you will talk about that and how small samples have a higher chance for bias?
 
A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk

Your definition ASSUMES that you know for a fact that a quasar is a black hole. But you don't. You have only inferred that from your belief that redshift always equates to distance. Because quasars are relatively bright and (you think) very far away, and their energy output can change rapidly, you have inferred that it must be a black hole. But if redshift is not always related to distance as is the subject of this thread, then there would be no need to hypothesize a black hole to explain the energy output of a quasar.
 
First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks'

That's nonsense, DRD. Many an important discovery has been made because someone observed that existing theory could not explain an observation or outright contradicted it. And once they were convinced of that, scientists then went looking for an explanation. I don't have to have a physical model for why quasar redshifts might tend towards Karlsson peaks to show that they do, contrary to what the mainstream model would claim.

Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.

Since you and David are apparently unable to show that the probabilities I've calculated are wrong, you resort to word games. I've never claimed ... nor has anyone ... not even Karlsson ... that quasar redshifts have precise numbers. The term quantized has been used here to denote a marked tendency TOWARD certain values. What term would you prefer I use for that? I'll be happy to oblige as long as it captures the essence of what is meant.

Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.

Why would you expect the values in different studies to be *precisely* the same given that they are based on statistical analysis of data? Whether the first peak is at 0.06 or 0.062 and the third peak at 0.96 or 0.953 is of no account. It doesn't affect the calculated probabilities in any significant way. You are simply obfuscating in order to avoid directly addressing those probability estimates and the methodology by which they were derived.

A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5

This is absolutely bogus logic because no one is claiming that redshifts are exactly at Karlsson peaks in the Karlsson model. We don't know the physical model accounting for an apparent TENDENCY of redshifts to be near Karlsson values. But like it or not, the calculations clearly show there is such a tendency and maybe if astronomers were actually doing their job and not ignoring anything that threatens their model full of gnomes they'd be investigating why.

Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks'

Actually, that issue isn't settled. More than one study (I've referenced several now) has looked at the data and indeed found evidence that quasar redshifts have a tendency towards Karlsson peaks.

Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...

Haven't you been listening to what I've been saying at all, DRD? I've never made a claim about ALL redshifts being quantized. It is your side that makes claims regarding ALL redshifts ... namely, that they ALL equate to distance and that ALL high redshift quasars are randomly located with respect to low redshift, local galaxies.

Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ...

This is again a bogus argument. The mainstream has a model that says quasars are randomly distributed across the sky independent of low redshift galaxies. Their model says that the redshifts of these objects come from a continuous distribution with no peaks at Karlsson values. The mainstream has supplied estimates for the number of quasars and the number of low redshift galaxies. I've made some additional (conservative) assumptions about the distribution of those quasars with respect to those galaxies. Using all the above, I've then calculated the probability that we'd see certain observations that indeed we have already seen. The results can be viewed as a prediction (made long ago) by the very nature of your model. So all I'm doing, therefore, is demonstrating that your model failed that prediction. I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. Your model therefore fails the test. And this argument of yours about posteriori and a priori is nothing more than a smoke screen to help you avoid that glaring fact.
 
If it's the 'mainstream' understanding of how 'quasars' are distributed, then the numbers chosen as inputs need to be replaced by estimates of AGNs

Wrong, because you've only ASSUMED that quasars are AGNs whose black holes have turned on. Otherwise you can't explain quasars at all. I'm saying that assumption is clearly wrong if the redshifts are indeed quantized (and you know full well what I mean by that term). I'm saying that assumption is clearly wrong if there are indeed high redshift quasars that are on this side of low redshift galaxies ... as also seems to be clear in some observations.

Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).

Actually, your example is completely irrelevant to the calculations I made. Nor if I did that would it help your case. I didn't throw out any possible quasar/galaxy associations because their galaxies were not of the type in the cases I examined. If I had, the probability of seeing those cases would be even lower.

In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.

By subset, I presume you mean the specific galaxies for which I did calculations? That you make this statement demonstrates that you STILL don't understand the calculation methodology at all. What am I to do folks? He's the *expert*. :rolleyes:

The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.

If you put more galaxies in one location in the sky, yet the quasar locations remain random with respect to those galaxies, that doesn't improve the probability of seeing galaxies with more quasars. It probably lowers it. Or it simply makes no difference in the larger picture, because you've lowered the probability of seeing large quasar/galaxy associations due to the rest of the quasar sample. Your argument is again superfluous. Irrelevant. And by "clustered", how far apart are they from our viewing angle? Show us how this would specifically affect my calculation because, frankly, I think you are handwaving ... throwing out red herrings.

I think the 'low redshift galaxies' numbers (etc) are also incorrect

Well, you are going to have to be a little more specific. :)
 
BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?

Well, that depends on how many low redshift galaxies we look at. Right? It has to be expressed probabilistically. Right? And I've done that. I've calculated the probability ... with what I believe to be mainstream model assumptions, mainstream observations regarding quasars and low redshift galaxies, and some (what I believe to be) conservative assumptions about quasar placement relative to low redshift galaxies ... of seeing certain observations. Since those probabilities were << 1, even if all the possible quasar/galaxy associations in the sky were presumably looked at, I surely wouldn't expect to see them if only a small fraction of possible quasar/galaxy associations had been looked at. Would you?

For example, do you expect that 'all' 'quasars' 'near' the 'minor axis' of such galaxies will have redshifts that are 'near' one of the 'Karlsson peaks'?

No, and I think I've said that. In fact, some of those specific galaxies I did calculations for had quasars that weren't near Karlsson peaks. That fact is accounted for in those calculations.

One of my reasons for asking these questions is that you have to hand all that you need to test the model ... SDSS DR6 contains consistent, high quality data on an awful lot of 'low redshift galaxies' and 'quasars', perhaps enough to test your model far, far more extensively than can be done with just a handful of objects selected from papers by Arp et al.

Probably so. But I'm not an astronomer so it's not my job to do that. You are. Right? :D
 
This is again a bogus argument. The mainstream has a model that says quasars are randomly distributed across the sky independent of low redshift galaxies. Their model says that the redshifts of these objects come from a continuous distribution with no peaks at Karlsson values. The mainstream has supplied estimates for the number of quasars and the number of low redshift galaxies. I've made some additional (conservative) assumptions about the distribution of those quasars with respect to those galaxies. Using all the above, I've then calculated the probability that we'd see certain observations that indeed we have already seen. The results can be viewed as a prediction (made long ago) by the very nature of your model. So all I'm doing, therefore, is demonstrating that your model failed that prediction. I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. Your model therefore fails the test. And this argument of yours about posteriori and a priori is nothing more than a smoke screen to help you avoid that glaring fact.

Hi BeAChooser.
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests. What it means is that each time you do a test (deal the cards) there is a probability that the event will happen. So you can get a royal flush on any deal - the first, second or millionth. Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.
 
BeAChooser wrote: And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you?

Well, the answer is, as you have already stated, 1 ... because they are observed to be there.

An answer which demonstrates you are either deliberately avoiding the question because you know the answer doesn't help your case or demonstrates you don't understand a VERY simple concept where probability analysis is concerned. :)

In fact, I'd challenge any reader of this thread to repeat the kind of 'probability estimates' you have come up with, using a different set of input data ....

Wow! Insulting our readers. Come on DRD ... you don't think our readers are capable of

- coming up with a random set of r values of z between 0.00 and 3.00 to replace the ones I used as observations in one of the calculations I did?

- finding the difference between those z values and the nearest Karlsson values to each?

- doubling those differences to get a set of increments?

- dividing 3.0 by those increments to get a set of n?

- finding the probability with this formula: P = r! * (1/n_1) * (1/n_2) * ... * (1/n_r)?

- Comparing that probability to the one I got for that case using the observed z?

You should have more faith in their and your abilities. :D
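Those steps fit in a few lines of Python (here I truncate 3/increment to an integer n, matching the n values used above; the small epsilon just guards against floating-point round-off):

```python
import math

peaks = [0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48]

def unweighted_p(zs):
    # Steps: spacing to nearest peak -> increment = 2*spacing ->
    # n = int(3/increment) -> P = r! * prod(1/n_i)
    p = math.factorial(len(zs))
    for z in zs:
        spacing = min(abs(z - pk) for pk in peaks)
        n = int(3.0 / (2 * spacing) + 1e-9)  # epsilon guards round-off
        p *= 1.0 / n
    return p

# The observed NGC 5985 redshifts (z = 3.88 excluded, as above)
observed = [0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]
print(f"P = {unweighted_p(observed):.2e}")  # ~5.0e-07
```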
 
Hiya BAC, I have two sets of number pairs from 1-100.
One set is causal and would be noticed as such over a large number of trials of generation; one set is random (both values are just (x, x) with x = 1-100).

I assure you that one set is very biased and weighted. I will show the generation in two days.

Here they are in random order

Set 1
(56,81)(15,77)(53,8)(29,23)(54,60)(53,34)(63,71)(64,57)(38,66)(56,16)

Set 2
(10,78)(60,18)(53,77)(74,15)(27,80)(63,80)(32,61)(9,79)(10,40)(8,56)

Which set is random and which set is weighted? If a set is weighted, what is the algorithm?
 
Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?

First of all, you misrepresent the report. The report doesn't claim there are 9000 quasars ... just 9000 QSO-galaxy pairs. That might correspond to fewer than 9000 quasars if any of the 1 deg radius circles around each of the 71 galaxies overlap, so that quasars can be paired with more than one galaxy. If the area searched around each galaxy is π x 1^2 ≈ 3 deg^2 and none of those circles overlap, that's 213 deg^2. Now the SDSS surveyors said that the average density was about 15 quasars per square degree. So we'd expect to see about 3195 quasars in that much area. That suggests there are overlaps. And if there are on average 15 quasars per deg^2, and most of the alignments I've studied occur over distances only a fraction of a degree in diameter, the number of quasar/galaxy associations in that sample like the ones in the observations I've studied (with r's of 4 to 8) must be very, very small. Perhaps there isn't even a case like that in the sample? So perhaps that sample isn't a fair representation of the population as a whole? Which leads right back to the questions I ask you. What makes you sure that a small group of galaxies with associated quasars would contain enough cases with 4 or 5 or 6 aligned quasars to show up at all, if the probabilities I've calculated for such alignments are accurate? Or if one or two did, what makes you sure the statistical approach wouldn't mask the effect? Furthermore, what makes you think that minor axes are the only alignments that quasars can have with respect to galaxy features? Perhaps some of those other alignments are more important but also harder to detect, because the features are more difficult to detect than minor axis alignment?
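The area arithmetic is easy to check (here using π itself rather than rounding it to 3, which is why the numbers come out slightly higher than 213 and 3195; the overlap question itself can't be settled without the actual positions):

```python
import math

galaxies = 71
radius_deg = 1.0
qso_density = 15  # quasars per square degree, the SDSS figure quoted above

area_each = math.pi * radius_deg ** 2     # ~3.14 deg^2 per galaxy
total_area = galaxies * area_each         # ~223 deg^2 if no circles overlap
expected_qsos = qso_density * total_area  # ~3350 distinct quasars expected

print(f"non-overlapping search area: {total_area:.0f} deg^2")
print(f"expected distinct quasars:   {expected_qsos:.0f}")
print("galaxy-QSO pairs in Table 1: 8698")
```

Since 8698 pairs exceeds the ~3350 distinct quasars expected in that much sky, either the circles overlap heavily or quasars are being counted against more than one galaxy.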
 
