Arp objects, QSOs, Statistics

As I noted earlier, if the model being tested is 'quantized redshifts', and if those redshifts are 'Karlsson peaks', then first we need a precise definition of those peaks, and a calculation of the z values of those peaks to at least 3 decimal places.

As I noted earlier, the term "quantized" has never been used with regard to quasar redshifts to suggest anything other than that there is a tendency for z values to be closer to certain Karlsson values than the mainstream model with no peaks whatsoever would imply. My calculation tests exactly that.

Then we need the stated observational uncertainty of the observed redshifts.

Then, if a relevant peak is more than 3 sigma from an observed redshift, it counts as a 'miss'.

This of course makes the calculations a lot simpler - like counting heads and tails when flipping a coin.
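For concreteness, here is a minimal sketch of the hit/miss test described above. The peak list is the commonly quoted set of Karlsson values, and the uncertainty in the example is an invented placeholder, not a value anyone in this thread has committed to:

Code:
# Hypothetical 3-sigma hit/miss test for 'Karlsson peaks'.
KARLSSON_PEAKS = [0.061, 0.30, 0.60, 0.96, 1.41, 1.96]  # commonly quoted values

def is_hit(z_obs, sigma_z, peaks=KARLSSON_PEAKS):
    """Return True if the nearest peak lies within 3 sigma of z_obs."""
    nearest = min(peaks, key=lambda p: abs(p - z_obs))
    return abs(nearest - z_obs) <= 3 * sigma_z

print(is_hit(0.305, 0.002))  # True: within 3 sigma of the 0.30 peak
print(is_hit(0.400, 0.002))  # False: a 'miss'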

However, if the model being tested is not 'quantized redshifts', then what is it?

I've decided you don't really want to know, DRD, since what we get from you is deliberately coy obfuscation ... like the above.

Tell you what, DRD. You tell us what the expected probability of seeing an observation like NGC 3516 or NGC 5985 is according to the mainstream model. And the answer isn't one. That's coy obfuscation. :D
 
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests.

Well, duh!!!

I've never said or implied that a low probability means it can never happen or that it can only happen after many tests. Where do you get the idea I have ... from listening to David? Well, that might be your obstacle to real understanding here.

Look at it this way. Suppose we make a $100 bet (not a real bet, because we aren't supposed to do that, but a hypothetical bet). I'm going to take a normal 52-card deck, shuffle it, and deal one card face up. We will see what it is, then I'll repeat the process ... collect the card that was dealt, shuffle the deck, and deal another card. And we'll repeat that process ... say, 4 times. How about you bet me 100 dollars that I will turn up the ace of spades within the first 4 cards? If I do, you win 100 dollars. If I don't, I win the 100. Will you take that bet? Of course not. The same thing is going on here. The probabilities I calculated are telling me we shouldn't be seeing so many aces after so few observations. But we are, and that suggests there might be a serious problem with the deck of cards. Maybe it's actually got a dozen or so aces in it.
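For what it's worth, the odds in that hypothetical bet are easy to check. A minimal sketch, assuming the standard model of independent draws with replacement (illustrative only, not anyone's actual methodology):

Code:
import random

# Analytic: P(at least one ace of spades in 4 independent draws,
# each from a freshly shuffled full deck) = 1 - (51/52)**4
p_analytic = 1 - (51 / 52) ** 4
print(f"analytic: {p_analytic:.4f}")   # about 0.075

# Quick Monte Carlo check of the same number.
trials = 100_000
hits = sum(
    any(random.randrange(52) == 0 for _ in range(4))  # card 0 = ace of spades
    for _ in range(trials)
)
print(f"simulated: {hits / trials:.4f}")

At roughly a 7.5% chance of the ace appearing, the bet as offered is indeed a poor one for the taker.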

Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.

But suppose the probability of seeing a given outcome turn up is still only 0.001 even after hundreds of thousands of trials? That's the situation here. That's what the probability calculation indicates. That the probability of seeing those observations is still VERY low even if we were to sample every quasar/galaxy association observable in the sky. Yet here we are, already having seen at least 4 such highly improbable events after looking at only a fraction of the possible quasar/galaxy associations. Surely that result contains some information about the assumptions underlying that probability calculation. Surely.
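To put a number on that intuition, a sketch under an assumed, illustrative input: if the expected count of such configurations over the whole observable sample were only about 0.001, the Poisson probability of seeing four or more anywhere is astronomically small.

Code:
from math import exp, factorial

lam = 0.001  # assumed expected count over ALL observable associations

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events given expectation lam."""
    return exp(-lam) * lam**k / factorial(k)

p_ge_4 = 1 - sum(poisson_pmf(k, lam) for k in range(4))
print(f"P(>= 4 events) = {p_ge_4:.1e}")  # about 4e-14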

Now I've laid that calculation and its assumptions out in black and white for all to see and critique. If you can show a problem with the calculation method, the parameter values used in the calculation or the assumptions I made along the way, do so. Otherwise, I shall conclude the probability is correct and that can only mean that there is something crooked about the mainstream's deck of cards. :D
 
Hiya BAC, I have two sets of double numbers from 1-100

I'm not going to waste my time looking at this because it has no relation to the actual methodology I used, David. If you want to engage me in further debate, then do the relatively simple Monte Carlo analysis I suggested to you. If you aren't sure what you have to do, then see the post I made to DRD a few posts back. You made specific claims about what my model would show and the only way to prove your claims is to do this analysis. If you aren't willing to do that, we can only suspect that you know you were wrong or you aren't capable of doing that simple Monte Carlo analysis.
 
Well, duh!!!
I've never said or implied that a low probability means it can never happen or that it can only happen after many tests. Where do you get the idea I have ... from listening to David? Well, that might be your obstacle to real understanding here.

This is what you said:
I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do.
I may have misinterpreted the "Yet I do" part. Maybe you mean that even with a low probability it is possible to get the hand on the first deal. In that case it is possible to get the alignment on the first observation or the first few observations (especially if you are only looking for alignments and not collecting observations without alignments).
 
Quote:
I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do.

I may have misinterpreted the "Yet I do" part. Maybe you mean that even with a low probability it is possible to get the hand on the first deal.

Perhaps you don't understand what is meant by the word "expect"?

In that case it is possible to get the alignment on the first observation or the first few observations (especially if you are only looking for alignments and not collecting observations without alignments).

Sure it is POSSIBLE. But is it PROBABLE? Again, if the probability calculation says that there's only a 1 in 1000 chance of seeing a certain observation even if we examine every single quasar/galaxy association in the sky, do you expect to see 4 such observations in the first couple thousand observations? And if you do see 4, doesn't that suggest a problem with the assumptions underlying the probability calculation? You are free to try to identify where the problem lies in my calculation, because I laid it out in black and white, step by step. I maintain the problem lies in the mainstream assumption that quasar redshifts are not quantized and that quasars are not local. If you want to disagree with that, then you need to specifically show where else in the calculation the problem might be. For example, do you disagree with the maximum number of observable quasars I chose and the way I distributed them amongst low redshift galaxies? If not, where do you think the problem in the calculation lies?
 
DeiRenDopa said:
A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk
Your definition ASSUMES that you know for a fact that a quasar is a black hole. But you don't. You have only inferred that from your belief that redshift always equates to distance. Because quasars are relatively bright and (you think) very far away, and their energy output can change rapidly, you have inferred that it must be a black hole. But if redshift is not always related to distance, as is the subject of this thread, then there would be no need to hypothesize a black hole to explain the energy output of a quasar.

Here's how the post of mine you are quoting from begins: "A quick summary of contemporary, mainstream, answer to 'what's a quasar?'"

Note:

* I am not presenting my (DeiRenDopa's) definition of anything

* if you think that what I presented, albeit brief, is an inaccurate summary, then please say so

* the sets of observations, coupled with well-tested physics theories, that have led to the current view of what a quasar/AGN is, are very extensive

* those observations, and physics theory, are considerably broader and deeper than the few words in your post.

If you are interested in understanding more details of how the contemporary definition came to be what it is, why not start a new thread on just that topic? I think it's a fascinating story, one well worth taking the time to come to grips with.

At a deeper level, your post hints at questions bordering on the philosophical ... for example, if all we detect are photons, how can we conclude anything about the 'reality' of what emits (or absorbs) those photons?

In between there's also the question of just how much physics is embedded in the very language we use ('photon', for example, or 'electron'), much less in the equipment (telescopes, spectroscopes, etc) we use to do astronomy.
 
Sure it is POSSIBLE. But is it PROBABLE? Again, if the probability calculation says that there's only a 1 in 1000 chance of seeing a certain observation even if we examine every single quasar/galaxy association in the sky, do you expect to see 4 such observations in the first couple thousand observations? And if you do see 4, doesn't that suggest a problem with the assumptions underlying the probability calculation? You are free to try to identify where the problem lies in my calculation, because I laid it out in black and white, step by step. I maintain the problem lies in the mainstream assumption that quasar redshifts are not quantized and that quasars are not local. If you want to disagree with that, then you need to specifically show where else in the calculation the problem might be. For example, do you disagree with the maximum number of observable quasars I chose and the way I distributed them amongst low redshift galaxies? If not, where do you think the problem in the calculation lies?

I actually do not see any problem with the calculation. There may be a problem with the use of a posteriori statistics.
And yes there can be 4 such observations in the first couple thousand observations. A probability of N/M does not mean that you will see exactly N events in M observations. It means that for each observation there is a chance of N/M that it will be the given event.

Even if your calculation is correct in all respects, there is still another step to go. The statistics show a possible correlation between quasars and low redshift galaxies. But as you well know a correlation does not always mean causation. So the next step is a viable method of causing this subset of quasars to be associated with this subset of galaxies. However this is off-topic.
 
Hi BeAChooser.
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests. What it means is that each time you do a test (deal the cards) there is a probability that the event will happen. So you can get a royal flush on any deal - the first, second or millionth. Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.

True or false, BAC?
 
There may be a problem with the use of a posteriori statistics.

How can using observations obtained after a theory is postulated to question whether the theory is right be a problem? If it were, no progress at all would be made in science. Mind you, I'm not trying to prove my theory of redshift is right ... only that some details of the mainstream's theory are probably wrong. That's why this a posteriori argument is bogus.

And yes there can be 4 such observations in the first couple thousand observations.

Yes, there can ... but again ... are they likely? Would you bet your life on seeing four such observations in the first couple thousand observations, when each has a probability of 10⁻⁶ on any given observation? Of course not.

Even if your calculation is correct in all respects, there is still another step to go. The statistics show a possible correlation between quasars and low redshift galaxies. But as you well know a correlation does not always mean causation. So the next step is a viable method of causing this subset of quasars to be associated with this subset of galaxies.

Yes, but first we need to come to some agreement that there is a problem. Looks like you might be willing to get on board, but the folks that control Big Astronomy and most of its funding and instruments don't seem willing at all. :)
 
Well, duh!!!

I've never said or implied that a low probability means it can never happen or that it can only happen after many tests. Where do you get the idea I have ... from listening to David? Well, that might be your obstacle to real understanding here.

Look at it this way. Suppose we make a $100 bet (not a real bet, because we aren't supposed to do that, but a hypothetical bet). I'm going to take a normal 52-card deck, shuffle it, and deal one card face up. We will see what it is, then I'll repeat the process ... collect the card that was dealt, shuffle the deck, and deal another card. And we'll repeat that process ... say, 4 times. How about you bet me 100 dollars that I will turn up the ace of spades within the first 4 cards? If I do, you win 100 dollars. If I don't, I win the 100. Will you take that bet?
Well, I have seen people bet on similar situations; more likely they would want some sort of odds.
Of course not. The same thing is going on here. The probabilities I calculated are telling me we shouldn't be seeing so many aces after so few observations.
Excuse me, but you have overextended yourself here.

You have not demonstrated anything.

And how could you tell if it was random or not?

Suppose you drew, replaced, and shuffled three times, then shuffled and drew once more, so that you had four draws.

The chance for each draw is 1/52 for the Ace of Spades.

It does not change. Each time you draw after the shuffle there is a 1/52 chance that it will be the Ace of Spades.

So you would not demonstrate anything by drawing the Ace of Spades four times after a replacement and shuffle.

It could be random.

I thought you were at least as smart as I am, and actually smarter.

So what if you drew the Ace of Spades four times after a replacement and shuffle? You can't prove that it was not random chance ... unless you double-blind an observation of the deck after the shuffle and prior to the draw.

How would you show that it wasn't random chance?
But we are, and that suggests there might be a serious problem with the deck of cards. Maybe it's actually got a dozen or so aces in it.
Answer this: how, from four draws, could you make any conclusion about a deck that has the card replaced and shuffled each time?

Please, it is crucial to your argument.

The chance for each draw is 1/52.

It is not (1/52)² on the second draw, assuming one prior AS. It is 1/52.

It is not (1/52)³ on the third draw, assuming two prior AS. It is 1/52.

It is not (1/52)⁴ on the fourth draw, assuming three prior AS. It is 1/52.

Each time the chance of drawing the Ace of Spades is 1/52 ! Assuming replacement and a clean shuffle it will be 1/52. The probability of drawing a card does not change from the cards drawn before or after. If we assume replacement and a clean shuffle.
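The independence point is easy to verify numerically. A small sketch (nothing here depends on the astronomy):

Code:
import random

# With replacement and a clean shuffle, the chance of the ace of
# spades is 1/52 on every draw, regardless of what came before.
trials = 1_000_000
draws_after_ace = aces_after_ace = 0
prev_was_ace = False
for _ in range(trials):
    card = random.randrange(52)  # card 0 = ace of spades
    if prev_was_ace:
        draws_after_ace += 1
        aces_after_ace += (card == 0)
    prev_was_ace = (card == 0)

print(f"1/52 = {1 / 52:.4f}")
print(f"P(ace | previous ace) = {aces_after_ace / max(draws_after_ace, 1):.4f}")

Both numbers come out near 0.019, as independence requires.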

But suppose the probability of seeing a given outcome turn up is still only 0.001 even after hundreds of thousands of trials? That's the situation here. That's what the probability calculation indicates.
Not really, because of potential sampling error or bias from a small sample.

You have not compared Arp's galaxies to normative galaxies and random points on the sky.

So you cannot say that it is not from random chance, because you don't know what the random chance actually is.
That the probability of seeing those observations is still VERY low even if we were to sample every quasar/galaxy association observable in the sky.
You don't know that, you just assert it. When you sample 1000 normative galaxies and 1000 random points on the sky, then you would have something to compare them to. That is the point of taking representative samples in a census.

Yet here we are, already having seen at least 4 such highly improbable events after looking at only a fraction of the possible quasar/galaxy associations.
So a dealer who deals a royal flush in hearts in five-card no-draw poker is cheating?
Surely that result contains some information about the assumptions underlying that probability calculation. Surely.
Not really. If you sampled 1000 normative galaxies and 1000 random points, then you might have a clue; if you sampled 10,000, that would be better; and if you sampled 100,000 normative galaxies and random spots, I would say it would be very close to definitive, should Arp's sample rise more than one standard deviation above the mean, especially for the random points sample.
Now I've laid that calculation and its assumptions out in black and white for all to see and critique. If you can show a problem with the calculation method, the parameter values used in the calculation or the assumptions I made along the way, do so. Otherwise, I shall conclude the probability is correct and that can only mean that there is something crooked about the mainstream's deck of cards. :D

No, it means you can't tell a random placement from a causal one.
 
I'm not going to waste my time looking at this because it has no relation to the actual methodology I used, David. If you want to engage me in further debate, then do the relatively simple Monte Carlo analysis I suggested to you. If you aren't sure what you have to do, then see the post I made to DRD a few posts back. You made specific claims about what my model would show and the only way to prove your claims is to do this analysis. If you aren't willing to do that, we can only suspect that you know you were wrong or you aren't capable of doing that simple Monte Carlo analysis.

Considering the thread is meant to be about the alleged association between Arp galaxies and QSOs, a method for telling random placement from causal placement is crucial.

I know you are trying to shift the conversation over to one that you would like, and I appreciate that. I am not even going to say that it is a diversion.

But you still haven't answered the question: how can you tell that Arp's associations of galaxies and QSOs are not just random placement?

I will ask you again because that is the point of the thread.

So you give a decent answer to mine and I will run three trials of yours.

How is one to tell that Arp's galaxy/QSO associations are not just due to random optical alignment? (Exclude the redshift sorting for now.)

As I said give me a reasonable explanation and I will do exactly as you have asked. And I will give you three trials as I stated before.

However, I feel that discussing the sorting of redshifts at this point would be avoiding my main intent.

I maintain that the Arp galaxy/QSO association could be just from random placement and that representative sampling would be the best way to determine if there is an association.

How does your method tell a random placement from a causal one?

And believe me, the weighting I gave the causal set is very heavy. I think that a sample of ten for both sets would show which one is weighted random and which one is pure random, maybe even as low as five. I tried to make it easy. I really think a plot of five samples would show the difference, maybe even three samples.

My point is that you have to have the samples to compare to tell the difference. From a small sample you can't tell even when there is an obvious causal effect.
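A minimal sketch of the kind of comparison being proposed here. The weighting scheme below is a made-up stand-in for whatever David's two number sets actually use; the point is only that a weighted-random set and a pure-random set separate visibly once the sample is large enough:

Code:
import random

random.seed(1)

def pure_random(n):
    """Values drawn uniformly from 0-100: no association at all."""
    return [random.uniform(0, 100) for _ in range(n)]

def weighted_random(n, bias=3.0):
    """Values biased toward the low end: a crude stand-in for a
    causal association (the bias exponent is invented)."""
    return [100 * random.random() ** bias for _ in range(n)]

for n in (5, 10, 100):
    a, b = pure_random(n), weighted_random(n)
    print(n, round(sum(a) / n, 1), round(sum(b) / n, 1))

The means sit near 50 and 25 respectively, but with only a handful of samples the scatter can mask even that strong a bias, which is the point being made.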
 
How can using observations obtained after a theory is postulated to question whether the theory is right be a problem? If it were, no progress at all would be made in science. Mind you, I'm not trying to prove my theory of redshift is right ... only that some details of the mainstream's theory are probably wrong. That's why this a posteriori argument is bogus.



Yes, there can ... but again ... are they likely? Would you bet your life on seeing four such observations in the first couple thousand observations, when each has a probability of 10⁻⁶ on any given observation? Of course not.
That is your intuitive feeling but that does not make it so. [gentle]

If you compared it to 10000 'normative samples' then you would be making a better bet.
Yes, but first we need to come to some agreement that there is a problem. Looks like you might be willing to get on board, but the folks that control Big Astronomy and most of its funding and instruments don't seem willing at all. :)


It would be dirt cheap, BAC. The samples are not really large as of yet, but the Sloan survey is large enough to do a tentative representative sample of at least 10,000 galaxies and 10,000 random points; then there would be a baseline against which to truly compare Arp's association.
And it would be very definitive: if Arp's galaxies showed an association that was one standard deviation above the mean, and maybe even two, that would rock astronomy hard. Especially if the normative samples were really large, like 100,000, and some sorting mechanism were found from the sampling to indicate which galaxies are most likely to have associated QSOs. The ability to demonstrate correlation would rise, as would potential causality.
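The comparison being described reduces to a simple z-score against the random-point baseline. A sketch with invented counts, purely to show the arithmetic:

Code:
from statistics import mean, stdev

# Hypothetical counts of aligned QSOs per field for 10,000 random
# sky points (all numbers invented for illustration).
baseline = [0, 1, 0, 0, 2, 0, 1, 0, 0, 1] * 1000

arp_field_count = 4  # invented count for one Arp galaxy field

mu, sigma = mean(baseline), stdev(baseline)
print(f"baseline mean={mu:.2f}, sd={sigma:.2f}, "
      f"z-score={(arp_field_count - mu) / sigma:.1f}")

A field sitting several standard deviations above the random-point mean is what 'rising above the baseline' would look like quantitatively.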

It would be definitive and undeniable, especially with large comparative samples and a way to determine which galaxies show the association.

Monumental even!

Like Arp Library and Arp Hall and Arp Chairs, all that good stuff.

Which is why I think another representative sample should be the AGNs, because if Arp is right that would be a confirming sample that would strengthen the case.

Seriously, it would be cheap and very effective. If it even showed that Arp's galaxies came close to one standard deviation above the mean for the random points, that would be mighty cool!

Really, not even just the Nobel but a real theory that would be remembered for at least two hundred years.
 
This is what you said:

I may have misinterpreted the "Yet I do" part. Maybe you mean that even with a low probability it is possible to get the hand on the first deal. In that case it is possible to get the alignment on the first observation or the first few observations (especially if you are only looking for alignments and not collecting observations without alignments).

Yes it is, and it was a sorted sample to some extent.
 
How can using observations obtained after a theory is postulated to question whether the theory is right be a problem? If it were, no progress at all would be made in science. Mind you, I'm not trying to prove my theory of redshift is right ... only that some details of the mainstream's theory are probably wrong. That's why this a posteriori argument is bogus.
I suspect that the a posteriori argument logic point is something like: a configuration of a system is observed, the odds of it happening by chance are calculated, the odds are low thus that configuration is unlikely and there may be a cause for the configuration. But this logic applies to any configuration.
As an example, here is a set of hypothetical observations which, by the above logic, would have at least one and probably several different causes (a short sketch of this multiplicity follows the list):
Observation 1: A low redshift galaxy with a quasar aligned to the minor axis (low probability therefore has cause A).
Observation 2: A low redshift galaxy with a quasar aligned to the major axis (low probability therefore has cause B).
Observation 3: A low redshift galaxy with a quasar aligned to a spiral arm (low probability therefore has cause C).
Observation 4: A low redshift galaxy with a quasar aligned to a jet from the galaxy (low probability therefore has cause D).
Observation 5: A low redshift galaxy with a quasar aligned to an arbitrary angle between the minor and major axis (low probability therefore has cause E).
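A sketch of why this multiplicity matters (the per-alignment probability is a placeholder): even if each alignment type is individually unlikely, the chance that at least one of them turns up somewhere grows rapidly with the number of galaxies examined.

Code:
# Assume, purely for illustration, that each of the 5 alignment types
# above has probability p = 0.01 of looking 'significant' for a given
# galaxy, and that m galaxies are examined.
p, types = 0.01, 5

for m in (1, 10, 100):
    p_any = 1 - (1 - p) ** (types * m)
    print(f"{m:4d} galaxies: P(some 'significant' alignment) = {p_any:.2f}")

# 1 galaxy: 0.05; 10 galaxies: 0.39; 100 galaxies: 0.99 -- some
# low-probability configuration becomes near-certain to appear.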

Yes, there can ... but again ... are they likely? Would you bet your life on seeing four such observations in the first couple thousand observations, when each has a probability of 10⁻⁶ on any given observation? Of course not.
Of course not, but that does not mean that it cannot happen.

Yes, but first we need to come to some agreement that there is a problem. Looks like you might be willing to get on board, but the folks that control Big Astronomy and most of its funding and instruments don't seem willing at all. :)
Luckily you added the smiley :) otherwise people will begin to think that you are one of those conspiracy theory nuts!

As for "getting on board", I am not even on the gangplank. Statistics are nice but there still is the problem of going from a correlation to a cause. If a suggested cause is that the quasars are ejected by the galaxy then I would like to see a bit of evidence such as some of the following: one in the process of being ejected, shock waves from the ejection process, signs of disruption along the exit path(s), lines of quasars stretching out from the galaxy, no sign that the quasars are embedded in a galaxy (like the mainstream quasars), evidence that the quasars have internal structure (unlike mainstream quasars). An explanation of the high observed redshift of the quasars would be good. Note that any cosmological theory that has matter creation close to massive bodies (black holes or their equivalent in the theory) is wrong - see the Hoyle-Narlikar Theory thread which includes other defects of that theory. It would laso ne nice if the quasar containing 2 black holes is explained.
 
DeiRenDopa said:
First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks'
That's nonsense, DRD. Many an important discovery has been made because someone observed that existing theory could not explain an observation or outright contradicted it. And once they were convinced of that, scientists then went looking for an explanation. I don't have to have a physical model for why quasar redshifts might tend towards Karlsson peaks to show that they do, contrary to what the mainstream model would claim.
That may, or may not, be so.

However, the part of your post that I quoted, when I wrote the words above, is as follows (I added some bolding): "Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?"

In other words, you seem (to me at least) to be saying that there are physical models (or at least physics) underlying the Karlsson peaks. If, in fact, there is no such physical model (or even physics), then I agree that my 'First' is not applicable.

However, if 'Karlsson peaks' are purely empirical, then there is still a requirement for rigour in the analysis; indeed, there is a much stronger requirement for hypothesis development and testing, if only because there is no other path to testing.
Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.
Since you and David are apparently unable to show that the probabilities I've calculated are wrong, you resort to word games. I've never claimed ... nor has anyone ... not even Karlsson ... that quasar redshifts fall at precise values. The term quantized has been used here to denote a marked tendency TOWARD certain values. What term would you prefer I use for that? I'll be happy to oblige as long as it captures the essence of what is meant.
I'm glad you clarified that point … you see, the word 'quantized' has a default meaning much like the way I interpreted it.

You can use any term you like, but, as with 'quasar' you really need to take a lot more trouble to define what you mean.

This is especially true because you have stated that what you are presenting is your own opinion only; among other things this means you cannot rely upon anything in any textbook (or even any papers), but must always be prepared to spell out what you mean, if asked.

So, in the case of 'Karlsson peaks', can you please state, in as much quantitative detail as possible, just what you mean?

Specifically, what is the functional form of the expected 'marked tendency TOWARD certain values'? And what, exactly, are those 'certain values'?
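One purely illustrative way to make such a statement quantitative (not a model anyone in this thread has endorsed) is a mixture: a fraction of redshifts drawn from narrow Gaussians centred on the quoted Karlsson values, the rest from a smooth background. The widths and mixing fraction below are invented placeholders:

Code:
import random

PEAKS = [0.061, 0.30, 0.60, 0.96, 1.41, 1.96]  # commonly quoted values
SIGMA = 0.01   # assumed peak width
F_PEAK = 0.5   # assumed fraction of redshifts 'attracted' to peaks

def draw_redshift():
    """One redshift from the peaked-plus-smooth mixture."""
    if random.random() < F_PEAK:
        return random.gauss(random.choice(PEAKS), SIGMA)
    return random.uniform(0.0, 2.5)  # smooth stand-in background

print([round(draw_redshift(), 3) for _ in range(10)])

Any such functional form, once written down, could be fitted to a quasar catalogue and tested against a no-peaks null hypothesis.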

Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.
Why would you expect the values in different studies to be *precisely* the same given that they are based on statistical analysis of data? Whether the first peak is at 0.06 or 0.062 and the third peak at 0.96 or 0.953 is of no account. It doesn't affect the calculated probabilities in any significant way. You are simply obfuscating in order to avoid directly addressing those probability estimates and the methodology by which they were derived.
I now understand that there is, shall we say, a certain amount of leeway in the numbers.

The main question still remains though: what is the hypothesis concerning 'Karlsson peaks' being tested?

A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5
This is absolutely bogus logic because no one is claiming that redshifts are exactly at Karlsson peaks in the Karlsson model. We don't know the physical model accounting for an apparent TENDENCY of redshifts to be near Karlsson values. But like it or not, the calculations clearly show there is such a tendency and maybe if astronomers were actually doing their job and not ignoring anything that threatens their model full of gnomes they'd be investigating why.
Indeed.

However, if the hypothesis being tested involves something to do with Karlsson peaks, then it certainly needs to be sharpened up a lot.

For example, is it expected that a tendency towards Karlsson peaks will be found only for 'quasars' 'near' the 'minor axes' of 'large' 'active' galaxies? If so, then let's have some clear, quantitative definitions of all the key terms.

Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks'
Actually, that issue isn't settled. More than one study (I've referenced several now) has looked at the data and indeed found evidence that quasar redshifts have a tendency towards Karlsson peaks.
Ah yes, more studies by Bell (if I remember correctly)!

In any case, without an a priori clear, quantitative statement of the empirical relationship ('quasar redshifts have a tendency towards Karlsson peaks'), there is nothing anyone can do to actually formulate a hypothesis to test, much less actually do such a test … and all that's necessary before the results of different tests can be compared, for possible inconsistencies for example.

Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...
Haven't you been listening to what I've been saying at all, DRD? I've never made a claim about ALL redshifts being quantized. It is your side that makes claims regarding ALL redshifts ... namely, that they ALL equate to distance and that ALL high redshift quasars are randomly located with respect to low redshift, local galaxies.
Now I'm quite confused.

Do I understand that you claim 'quasars' (however you choose to define them) are a heterogeneous class? That some are at distances commensurate with their redshifts (via the Hubble relationship) while others are not?

If so, then how do you go about choosing which 'quasars' are which?

Specifically, how can you test any hypothesis concerning Karlsson peaks without addressing the question of how to decide which quasars are at cosmological distances and which are not?

Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ...
This is again a bogus argument. The mainstream has a model that says quasars are randomly distributed across the sky independent of low redshift galaxies. Their model says that the redshifts of these objects come from a continuous distribution with no peaks at Karlsson values. The mainstream has supplied estimates for the number of quasars and the number of low redshift galaxies. I've made some additional (conservative) assumptions about the distribution of those quasars with respect to those galaxies. Using all the above, I've then calculated the probability that we'd see certain observations that indeed we have already seen. The results can be viewed as a prediction (made long ago) by the very nature of your model. So all I'm doing, therefore, is demonstrating that your model failed that prediction. I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. Your model therefore fails the test. And this argument of yours about a posteriori and a priori is nothing more than a smoke screen to help you avoid that glaring fact.
This is, more or less, the case I thought you were trying to make.

However:

* why do your calculations involve 'Karlsson peaks'?

* why have you not stated the hypothesis you claim to be testing, in a way that is derived directly from the model you claim it tests?

* the quasars etc have already been observed, so what you have is a post-diction, not a prediction (assuming that you eventually can present a clear, quantitative hypothesis derived from the relevant astrophysics, including the null hypothesis) … and a posteriori statistics on post-dictions is very bad science indeed.

Have you heard of the term 'confirmation bias'? If you have, then I'm sure you'll recognise that it's pretty heavily built into your approach (at least, as I understand that approach; I'm still a long way from having confidence that I do).
 
{skip}
DeiRenDopa said:
Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).
Actually, your example is completely irrelevant to the calculations I made. Nor would it help your case if I did that. I didn't throw out any possible quasar/galaxy associations because their galaxies were not of the type in the cases I examined. If I had, the probability of seeing those cases would be even lower.
I continue to be surprised, as I ask more questions, and get more answers, concerning the hypothesis/ses you think you are testing, and the approach you have taken in general.

In post #354, which I have just posted a response to, I learned for the first time (perhaps I'm in the slow class; perhaps you'd made it clear much earlier), that 'quasars' (however you end up defining them) are a heterogeneous class, for the purposes of your tests, calculations, etc.

If so, then I agree that much of what I have written is irrelevant ... until the question is answered of how you chose to put any given 'quasar' into one class (at a cosmological distance commensurate with its redshift, via the Hubble relationship, say) and not any other.

And even more fundamentally, how many classes of 'quasar' are there?

I expect much of this post of yours I'm responding to will also be largely irrelevant until this heterogeneity issue is clarified.
In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.
By subset, I presume you mean the specific galaxies for which I did calculations? That you make this statement demonstrates that you STILL don't understand the calculation methodology at all. What am I to do folks? He's the *expert*. :rolleyes:
I have already stated, several times, that I don't understand 'the calculation methodology'; would you prefer that I begin each and every one of my posts, in this thread, with a statement to that effect?

In any case, my comment is moot (for now) ... quasar heterogeneity needs to be dealt with first ...
The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.
If you put more galaxies in one location in the sky, yet the quasar locations remain random with respect to those galaxies, that doesn't improve the probability of seeing galaxies with more quasars. It probably lowers it. Or it simply makes no difference in the larger picture, because you've lowered the probability of seeing large quasar/galaxy associations due to the rest of the quasar sample. Your argument is again superfluous. Irrelevant. And by "clustered", how far apart are they from our viewing angle? Show us how this would specifically affect my calculation because, frankly, I think you are handwaving ... throwing out red herrings.
Ditto
I think the 'low redshift galaxies' numbers (etc) are also incorrect
Well, you are going to have to be a little more specific. :)
Ditto.
 
I suspect that the a posteriori argument logic point is something like: a configuration of a system is observed, the odds of it happening by chance are calculated, the odds are low thus that configuration is unlikely and there may be a cause for the configuration.

Here's an example I like: the observed size of the moon in our sky is extremely close to that of the sun (which means we get to see nice solar eclipses). In the "mainstream" theory this is a rather remarkable coincidence. In fact we could go ahead and use BACian logic to compute the odds of it being true. I haven't done so, but I expect the answer would be <1%.

So we observe a coincidence that goes against our expectation - one which has less than a 1% chance of occurring if our theories are correct. Do we therefore conclude that the standard theory (that the earth is a planet going around the sun, that the observed size of the moon compared to the sun is not important for the evolution of life, etc.) is incorrect?

The only circumstance under which that would be reasonable would be if we had an alternative theory which predicted it, and which also predicted everything else the standard theory predicts with at least as good accuracy. If the alternative theory predicts the moon and the sun will have the same size, but it also predicts the sun goes around the earth, it doesn't cut it.

So you cannot simply point out unlikely coincidences and conclude anything from them. You must remember that the world is a big place full of many, many different phenomena. Statistics predicts with great confidence that some of those phenomena will look highly significant - by chance! If it wasn't the moon and the sun, it would be the period of the year compared to the length of a day, or the radius of the orbits of the planets, or one of thousands of other potential coincidences one can think of. At least a few of those - roughly 1%, in fact - will be "significant" at the level of 1%!

And when you start talking about continuous variables like the redshifts of QSOs, which could be nearly anywhere and in nearly any configuration, the number of potential "significant" arrangements is enormous. One therefore expects to find many such false patterns - in fact if there weren't any, it would require an explanation....
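The "roughly 1%" remark is easy to demonstrate. A sketch with made-up tests: run many independent null experiments and count how many clear the 1% bar by chance alone.

Code:
import random

random.seed(0)

# 10,000 independent 'coincidence checks', each with a 1% chance of
# looking significant purely by chance under the null hypothesis.
n_tests, alpha = 10_000, 0.01
false_positives = sum(random.random() < alpha for _ in range(n_tests))
print(false_positives)  # about 100 'significant' findings with no cause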
 
DeiRenDopa said:
BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?
Well that depends on how many low redshift galaxies we look at. Right? It has to be expressed probabilistically. Right? And I've done that. I've calculated the probability ... with what I believe to be mainstream model assumptions, mainstream observations regarding quasars and low redshift galaxies, and some (what I believe to be) conservative assumptions about quasar placement relative to low redshift galaxies ... of seeing certain observations. Since those probabilities were <<1, even if all the possible quasar/galaxy associations in the sky were presumably looked at, I surely wouldn't expect to see them if only a small fraction of possible quasar/galaxy associations had been looked at. Would you?

{skip}
OK, in light of what I now know about your approach and assumptions ('quasars' are heterogeneous), how does your calculation of probability incorporate this heterogeneity?

Oh, and to repeat: your calculations include inputs and assumptions that are not found in any 'mainstream models' that I know of - 'Karlsson peaks' for example - and leave out some key components (such as lensing). I have asked you - many times - for a clear, quantitative statement of the hypothesis/ses you think you are testing (including the null hypothesis), and have yet to get a straight answer.

Without such a clear, quantitative statement, I doubt I (for one) would ever be able to understand your calculations and your approach.
 
(emphasis added)
DeiRenDopa said:
Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?
First of all, you misrepresent the report. The report doesn't claim there are 9000 quasars ... just 9000 QSO-galaxy pairs. That might correspond to less than 9000 quasars if any of the 1 deg radius circles around each of the 71 galaxies overlap so that quasars can be paired with more than one galaxy. If the area around each galaxy searched is π×1² ≈ 3 deg² and none of those circles overlap, that's about 213 deg².
Please note what I wrote, and then compare it with what you wrote.
Now the SDSS surveyors said that the average density was about 15 quasars per square degree. So we'd expect to see about 3195 quasars in that much area. That suggests there are overlaps. And if there are on average 15 quasars per deg², and most of the alignments I've studied occur over distances only a fraction of a degree in diameter, the number of quasar/galaxy associations in that sample like the ones in the observations I've studied (with r's of 4 to 8) must be very, very small. Perhaps there isn't even a case like that in the sample? So perhaps that sample isn't a fair representation of the population as a whole? Which leads right back to the questions I ask you. What makes you sure that a small group of galaxies with associated quasars would contain enough cases with 4 or 5 or 6 aligned quasars to show up at all if the probabilities I've calculated for such alignments are accurate? Or if one or two did, what makes you sure the statistical approach wouldn't mask the effect? Furthermore, what makes you think that minor axes are the only alignments that quasars can have with respect to galaxy features? Perhaps some of those other alignments are more important but also harder to detect, because the features are more difficult to detect than minor axis alignment?
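For reference, the back-of-envelope numbers in this exchange can be reproduced directly (using the rounded 3 deg² per galaxy that the post itself uses):

Code:
import math

n_galaxies = 71
area_per_galaxy = math.pi * 1.0**2   # ~3.14 deg^2; rounded to 3 above
total_area = n_galaxies * 3          # 213 deg^2, as quoted
density = 15                         # SDSS quasars per deg^2, as quoted

print(round(area_per_galaxy, 2))     # 3.14
print(total_area)                    # 213
print(total_area * density)          # 3195 expected quasars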
Many good questions.

However, if a study with a sample >~10 times larger than yours fails to find a statistically significant result - despite the fact that it was done on an a priori basis, and seems to have used a measure of anisotropy that would make that found in the 1, 2, or 3 galaxy studies you cite positively jump off the page - and if (as I now know) your hypothesis is purely empirical, how can you proceed?

I mean, it seems to me that, empirically, some galaxies (up to 5?) seem to show some sort of minor axis anisotropy, but a much larger sample of galaxies (~70) does not.

What, then, is a clear, quantitative statement of the empirical relationship that is consistent with both sets of results (and which is also not restricted to only the small sample of galaxies)?
 
