BeAChooser
Quote:
But maybe I did present a 'case'; if so, would you please point to the post(s) in which I did so?

Is that your coy way of defending the mainstream?

Quote:
But maybe I did present a 'case'; if so, would you please point to the post(s) in which I did so?
As I noted earlier, if the model being tested is 'quantized redshifts', and if those redshifts are 'Karlsson peaks', then first we need a precise definition of those peaks, and a calculation of the z values of those peaks to at least 3 decimal places.
Then we need the stated observational uncertainty of the observed redshifts.
Then, if a relevant peak is more than 3 sigma from an observed redshift, it counts as a 'miss'.
This of course makes the calculations a lot simpler - like heads and tails of flipping a coin.
However, if the model being tested is not 'quantized redshifts', then what is it?
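For concreteness, here is a rough sketch of the hit-or-miss test described above. This is only an illustration: the peak values are ones commonly quoted in the Karlsson literature, and the example redshifts and uncertainties are made-up placeholders.

```python
# A rough sketch (illustrative only) of the hit-or-miss test described above.
# The peak list uses values commonly quoted in the Karlsson literature; the
# observed redshifts and their uncertainties below are made-up placeholders.
KARLSSON_PEAKS = [0.061, 0.30, 0.60, 0.96, 1.41, 1.96]

def is_hit(z_obs, sigma_z, peaks=KARLSSON_PEAKS, n_sigma=3.0):
    """Count a redshift as a 'hit' if some peak lies within n_sigma of it."""
    return any(abs(z_obs - z_peak) <= n_sigma * sigma_z for z_peak in peaks)

observations = [(0.062, 0.001), (0.450, 0.002), (0.953, 0.003)]  # (z, sigma_z) placeholders
hits = sum(is_hit(z, s) for z, s in observations)
print(f"{hits} hits out of {len(observations)} redshifts")
```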
Quote:
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests.
Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.
Hiya BAC, I have two sets of double numbers from 1-100
Well, duh!!!
I've never said or implied that a low probability means it can never happen or that it can only happen after many tests. Where do you get the idea I have ... from listening to David? Well, that might be your problem in really understanding what's going on here.
Quote:
I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do.
I may have misinterpreted the "Yet I do" part. Maybe you mean that even with a low probability it is possible to get the hand on the first deal.
In that case it is possible to get the alignment on the first observation or the first few observations (especially if you are only looking for alignments and not collecting observations without alignments).
DeiRenDopa said:
A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk.

Your definition ASSUMES that you know for a fact that a quasar is a black hole. But you don't. You have only inferred that from your belief that redshift always equates to distance. Because quasars are relatively bright and (you think) very far away, and their energy output can change rapidly, you have inferred that it must be a black hole. But if redshift is not always related to distance, which is the subject of this thread, then there would be no need to hypothesize a black hole to explain the energy output of a quasar.
Sure it is POSSIBLE. But is it PROBABLE? Again, if the probability calculation says that there's only a 1-in-1000 chance of seeing a certain observation even if we examine every single quasar/galaxy association in the sky, do you expect to see 4 such observations in the first couple thousand observations? And if you do see 4, doesn't that suggest a problem with the assumptions underlying the probability calculation? You are free to try to identify where the problem lies in my calculation, because I laid it out in black and white, step by step. I maintain the problem lies in the mainstream's assumption that quasar redshifts are not quantized and that quasars are not local. If you want to disagree with that, then you need to specifically show where else in the calculation the problem might be. For example, do you disagree with the maximum number of observable quasars I chose and the way I distributed them amongst low redshift galaxies? If not, where do you think the problem in the calculation lies?
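To put a number on that, here is a rough sketch. The per-association probability and the number of associations examined are placeholders chosen for illustration, not the figures from the calculation being debated.

```python
# A rough sketch with placeholder numbers: if each examined quasar/galaxy
# association independently had probability p of showing an alignment like the
# ones at issue, how likely is it to see 4 or more in the first n associations?
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return 1.0 - sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k))

p = 1e-6   # assumed per-association probability (placeholder)
n = 2000   # associations examined so far (placeholder)
print(prob_at_least(4, n, p))   # ~7e-13: effectively never expected by chance
```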
Hi BeAChooser.
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests. What it means is that each time you do a test (deal the cards) there is a probability that the event will happen. So you can get a royal flush on any deal - the first, second or millionth. Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.
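As an aside, the point can be put in numbers. The royal flush probability is the standard one (4 royal flushes among the 2,598,960 possible five-card hands); the deal counts are arbitrary.

```python
# The same point in numbers: a low per-deal probability still gives a nonzero
# chance on every deal, and the chance of seeing the event at least once grows
# with the number of deals.
p_royal_flush = 4 / 2_598_960   # 4 royal flushes among C(52, 5) five-card hands

def chance_at_least_once(p, n_trials):
    """Chance of seeing the event at least once in n_trials independent deals."""
    return 1.0 - (1.0 - p) ** n_trials

for n in (1, 1_000, 1_000_000):
    print(n, chance_at_least_once(p_royal_flush, n))
# ~1.5e-6 on the first deal, ~0.0015 within a thousand deals, ~0.79 within a million
```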
There may be a problem with the use of a posteriori statistics.
And yes there can be 4 such observations in the first couple thousand observations.
Even if your calculation is correct in all respects, there is still another step to go. The statistics show a possible correlation between quasars and low redshift galaxies. But as you well know, a correlation does not always mean causation. So the next step is to propose a viable mechanism that would cause this subset of quasars to be associated with this subset of galaxies.
Quote:
Well, duh!!!
I've never said or implied that a low probability means it can never happen or that it can only happen after many tests. Where do you get the idea I have ... from listening to David? Well, that might be your problem in really understanding what's going on here.
Look at it this way. Suppose we make a $100 bet (not a real bet, because we aren't supposed to do that, but a hypothetical bet). I'm going to take a normal 52 card deck, shuffle it and deal one card face up. We will see what it is, then I'll repeat the process ... collect the card that was dealt, shuffle the deck and deal another card. And we'll repeat that process ... say 4 times. How about you bet me 100 dollars that I will turn up the ace of spades within the first 4 cards. If I do, you win 100 dollars. If I don't, I win the 100. Will you take that bet?

Well, I have seen people bet on similar situations; more likely they would want some sort of odds.
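For reference, the arithmetic behind the bet quoted above (four independent single-card deals with reshuffling, exactly as described):

```python
# The arithmetic behind the bet just quoted: one card dealt, then replaced and
# the deck reshuffled, four times in total (nothing assumed beyond what the
# post describes).
p_miss_once = 51 / 52                  # a single deal is NOT the ace of spades
p_ace_within_4 = 1 - p_miss_once ** 4  # the ace shows up at least once in 4 deals
print(p_ace_within_4)                  # ~0.075, i.e. roughly 12-to-1 against
```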
Quote:
Of course not. The same thing is going on here. The probabilities I calculated are telling me we shouldn't be seeing so many aces after so few observations.

Excuse me, but you overextended yourself here.

Quote:
But we are, and that suggests there might be a serious problem with the deck of cards. Maybe it's actually got a dozen or so aces in it.

Answer how, from four draws, you could make any conclusion about a deck that has the card replaced and shuffled?

Quote:
But suppose the probability is still only 0.001 that you'll see a given number turn up after hundreds of thousands of trials? That's the situation here. That's what the probability calculation indicates.

Not really, because of potential sampling error or bias from a small sample.

Quote:
That the probability is still VERY low of seeing those observations even if we were to sample every quasar/galaxy association observable in the sky.

You don't know that, you just assert it. When you sample 1000 normative galaxies and 1000 random points on the sky, then you would have something to compare them to. That is the point of representative samples of a census.

Quote:
Yet here we are, already having seen at least 4 such highly improbable events after only looking at a fraction of the possible quasar/galaxy associations.

So a dealer who deals a royal flush in hearts in five card no draw poker is cheating?

Quote:
Surely that result contains some information about the assumptions underlying that probability calculation. Surely.
Now I've laid that calculation and its assumptions out in black and white for all to see and critique. If you can show a problem with the calculation method, the parameter values used in the calculation or the assumptions I made along the way, do so. Otherwise, I shall conclude the probability is correct and that can only mean that there is something crooked about the mainstream's deck of cards.

Not really. If you sampled 1000 normative galaxies and 1000 random points, then you might have a clue; if you sampled 10,000 that would be better; and if you sampled 100,000 normative galaxies and random spots, then I would say it would be very close to definitive, if Arp's sample rose more than one standard deviation above the mean, especially for the random points sample.
I'm not going to waste my time looking at this because it has no relation to the actual methodology I used, David. If you want to engage me in further debate, then do the relatively simple Monte Carlo analysis I suggested to you. If you aren't sure what you have to do, then see the post I made to DRD a few posts back. You made specific claims about what my model would show and the only way to prove your claims is to do this analysis. If you aren't willing to do that, we can only suspect that you know you were wrong or you aren't capable of doing that simple Monte Carlo analysis.
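To make clear what such a test involves, here is a sketch of the kind of Monte Carlo analysis being argued about. It is not the specific analysis proposed in the thread: the patch size, galaxy count, search radius and richness threshold are placeholders, and a flat patch of sky is used instead of proper spherical geometry; only the quasar surface density is set near the ~15 per square degree figure quoted later in the discussion.

```python
# A sketch of the kind of Monte Carlo analysis being argued about -- not the
# specific analysis proposed in the thread. Patch size, galaxy count, search
# radius and the threshold k are placeholders; the quasar surface density is
# set near ~15 per square degree; a flat patch stands in for the real sphere.
import math
import random

def run_trial(patch_deg=10.0, n_gal=50, n_qso=1500, radius_deg=0.1, k=4):
    """Scatter galaxies and quasars at random; return how many galaxies end up
    with k or more quasars within radius_deg."""
    gals = [(random.uniform(0, patch_deg), random.uniform(0, patch_deg)) for _ in range(n_gal)]
    qsos = [(random.uniform(0, patch_deg), random.uniform(0, patch_deg)) for _ in range(n_qso)]
    rich = 0
    for gx, gy in gals:
        near = sum(1 for qx, qy in qsos if math.hypot(qx - gx, qy - gy) <= radius_deg)
        if near >= k:
            rich += 1
    return rich

trials = 200
hits = sum(1 for _ in range(trials) if run_trial() > 0)
print(f"{hits}/{trials} random skies had at least one galaxy with 4+ quasars nearby")
```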
Quote:
How can using observations obtained after a theory is postulated to question whether the theory is right be a problem? If it were, no progress at all would be made in science. Mind you, I'm not trying to prove my theory of redshift is right ... only that some details of the mainstream's theory are probably wrong. That's why this a posteriori argument is bogus.
Yes, there can ... but again ... are they likely? Would you bet your life on seeing four observations in the couple thousand observations that have a probability of 10^-6 in any given observation? Of course not.
Yes, but first we need to come to some agreement that there is a problem. Looks like you might be willing to get on board, but the folks that control Big Astronomy and most of its funding and instruments don't seem willing at all. :)

That is your intuitive feeling, but that does not make it so. [gentle]

This is what you said:

Quote:
I may have misinterpreted the "Yet I do" part. Maybe you mean that even with a low probability it is possible to get the hand on the first deal. In that case it is possible to get the alignment on the first observation or the first few observations (especially if you are only looking for alignments and not collecting observations without alignments).

Quote:
How can using observations obtained after a theory is postulated to question whether the theory is right be a problem? If it were, no progress at all would be made in science. Mind you, I'm not trying to prove my theory of redshift is right ... only that some details of the mainstream's theory are probably wrong. That's why this a posteriori argument is bogus.

I suspect that the a posteriori argument logic point is something like: a configuration of a system is observed, the odds of it happening by chance are calculated, the odds are low, thus that configuration is unlikely and there may be a cause for the configuration. But this logic applies to any configuration.

Quote:
Yes, there can ... but again ... are they likely? Would you bet your life on seeing four observations in the couple thousand observations that have a probability of 10^-6 in any given observation? Of course not.

Of course not, but that does not mean that it cannot happen.

Quote:
Yes, but first we need to come to some agreement that there is a problem. Looks like you might be willing to get on board, but the folks that control Big Astronomy and most of its funding and instruments don't seem willing at all. :)

Luckily you added the smiley.
DeiRenDopa said:
First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks' ...

That's nonsense, DRD. Many an important discovery has been made because someone observed that existing theory could not explain an observation or outright contradicted it. And once they were convinced of that, scientists then went looking for an explanation. I don't have to have a physical model for why quasar redshifts might tend towards Karlsson peaks to show that they do, contrary to what the mainstream model would claim.

Quote:
Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.

Since you and David are apparently unable to show that the probabilities I've calculated are wrong, you resort to word games. I've never claimed ... nor has anyone ... not even Karlsson ... that quasar redshifts have precise numbers. The term quantized has been used here to denote a marked tendency TOWARD certain values. What term would you prefer I use for that? I'll be happy to oblige as long as it captures the essence of what is meant.

Quote:
Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.

Why would you expect the values in different studies to be *precisely* the same, given that they are based on statistical analysis of data? Whether the first peak is at 0.06 or 0.062 and the third peak at 0.96 or 0.953 is of no account. It doesn't affect the calculated probabilities in any significant way. You are simply obfuscating in order to avoid directly addressing those probability estimates and the methodology by which they were derived.

Quote:
A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5.

This is absolutely bogus logic, because no one is claiming that redshifts are exactly at Karlsson peaks in the Karlsson model. We don't know the physical model accounting for an apparent TENDENCY of redshifts to be near Karlsson values. But like it or not, the calculations clearly show there is such a tendency, and maybe if astronomers were actually doing their job and not ignoring anything that threatens their model full of gnomes, they'd be investigating why.

Quote:
Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks'.

Actually, that issue isn't settled. More than one study (I've referenced several now) has looked at the data and indeed found evidence that quasar redshifts have a tendency towards Karlsson peaks.

Quote:
Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...

Haven't you been listening to what I've been saying at all, DRD? I've never made a claim about ALL redshifts being quantized. It is your side that makes claims regarding ALL redshifts ... namely, that they ALL equate to distance and that ALL high redshift quasars are randomly located with respect to low redshift, local galaxies.

Quote:
Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ...

This is again a bogus argument. The mainstream has a model that says quasars are randomly distributed across the sky independent of low redshift galaxies. Their model says that the redshifts of these objects come from a continuous distribution with no peaks at Karlsson values. The mainstream has supplied estimates for the number of quasars and the number of low redshift galaxies. I've made some additional (conservative) assumptions about the distribution of those quasars with respect to those galaxies. Using all the above, I've then calculated the probability that we'd see certain observations that indeed we have already seen. The results can be viewed as a prediction (made long ago) by the very nature of your model. So all I'm doing, therefore, is demonstrating that your model failed that prediction. I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. Your model therefore fails the test. And this argument of yours about a posteriori and a priori is nothing more than a smoke screen to help you avoid that glaring fact.

{skip}

DeiRenDopa said:
Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).

Actually, your example is completely irrelevant to the calculations I made. Nor would it help your case if I did that. I didn't throw out any possible quasar/galaxy associations because their galaxies were not of the type in the cases I examined. If I had, the probability of seeing those cases would be even lower.

Quote:
In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.

By subset, I presume you mean the specific galaxies for which I did calculations? That you make this statement demonstrates that you STILL don't understand the calculation methodology at all. What am I to do, folks? He's the *expert*.

Quote:
The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.

If you put more galaxies in one location in the sky, yet the quasar locations remain random with respect to those galaxies, that doesn't improve the probability of seeing galaxies with more quasars. It probably lowers it. Or it simply makes no difference in the larger picture, because you've lowered the probability of seeing large quasar/galaxy associations due to the rest of the quasar sample. Your argument is again superfluous. Irrelevant. And by "clustered", how far apart are they from our viewing angle? Show us how this would specifically affect my calculation because, frankly, I think you are handwaving ... throwing out a red herring.

Quote:
I think the 'low redshift galaxies' numbers (etc) are also incorrect.

Well, you are going to have to be a little more specific.
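For what it's worth, the 'null test' described above can be sketched as a simple comparison; the counts below are made up purely to show the shape of the test, not taken from any actual survey.

```python
# A sketch of the 'null test' idea: compare quasar counts near the candidate
# galaxies against (a) a control sample of other large, bright galaxies and
# (b) random sky positions. All of the counts below are made up purely to show
# the shape of the comparison.
from statistics import mean, stdev

def stands_out(candidate_counts, baseline_counts, n_sigma=1.0):
    """True if the candidate mean exceeds the baseline mean by n_sigma baseline SDs."""
    return mean(candidate_counts) > mean(baseline_counts) + n_sigma * stdev(baseline_counts)

candidates   = [4, 5, 6, 4]                    # quasars near each candidate galaxy
control_gals = [0, 1, 0, 2, 1, 0, 0, 1, 1, 0]  # near ordinary large, bright galaxies
random_sky   = [1, 0, 0, 1, 0, 2, 0, 0, 1, 0]  # around random sky positions
print(stands_out(candidates, control_gals), stands_out(candidates, random_sky))
```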
Quote:
I suspect that the a posteriori argument logic point is something like: a configuration of a system is observed, the odds of it happening by chance are calculated, the odds are low, thus that configuration is unlikely and there may be a cause for the configuration.
DeiRenDopa said:
BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?

Well, that depends on how many low redshift galaxies we look at. Right? It has to be expressed probabilistically. Right? And I've done that. I've calculated the probability ... with what I believe to be mainstream model assumptions, mainstream observations regarding quasars and low redshift galaxies, and some (what I believe to be) conservative assumptions about quasar placement relative to low redshift galaxies ... of seeing certain observations. Since those probabilities were << 1, even if all the possible quasar/galaxy associations in the sky were presumably looked at, I surely wouldn't expect to see them if only a small fraction of possible quasar/galaxy associations had been looked at. Would you?
{skip}
DeiRenDopa said:
Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?

First of all, you misrepresent the report. The report doesn't claim there are 9000 quasars ... just 9000 QSO-galaxy pairs. That might correspond to fewer than 9000 quasars if any of the 1 degree radius circles around each of the 71 galaxies overlap, so that quasars can be paired with more than one galaxy. If the area searched around each galaxy is PI*1² (about 3 deg²) and none of those circles overlap, that's 213 deg².
Now the SDSS surveyors said that the average density was about 15 quasars per square degree. So we'd expect to see about 3195 quasars in that much area. That suggests there are overlaps. And if there are on average 15 quasars per deg², and most of the alignments I've studied occur over distances only a fraction of a degree in diameter, the number of quasar/galaxy associations in that sample like the ones in the observations I've studied (with r's of 4 to 8) must be very, very small. Perhaps there isn't even a case like that in the sample? So perhaps that sample isn't a fair representation of the population as a whole?

Which leads right back to the questions I ask you. What makes you sure that a small group of galaxies with associated quasars would contain enough cases with 4, 5 or 6 aligned quasars to show up at all, if the probabilities I've calculated for such alignments are accurate? Or if one or two did, what makes you sure the statistical approach wouldn't mask the effect? Furthermore, what makes you think that minor axes are the only alignments that quasars can have with respect to galaxy features? Perhaps some of those other alignments are more important but also harder to detect, because the features involved are more difficult to detect than a minor axis alignment?
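A quick check of the arithmetic above, using only the figures quoted in the posts (a 1 degree search radius, 71 galaxies, and roughly 15 quasars per square degree):

```python
# Checking the area and quasar-count arithmetic above. The 1 degree search
# radius, the 71 galaxies and the ~15 quasars per square degree are the
# figures quoted in the posts; whether the circles overlap is the open question.
import math

n_galaxies = 71
area_per_circle = math.pi * 1.0 ** 2       # ~3.14 deg^2 (the post rounds this to 3)
total_area = n_galaxies * area_per_circle  # ~223 deg^2 (213 deg^2 with the rounded value)
expected_quasars = total_area * 15         # ~3300 (~3195 with the rounded value)
print(total_area, expected_quasars)
```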