Arp objects, QSOs, Statistics

NGC 3516 and its quasars ...

It's ~a decade since the Chu et al. paper was published, and no doubt many more quasars (and galaxies) 'near' it have been discovered.

Based on Arpian ideas, what do you predict the redshifts of those 'newly discovered' quasars (and galaxies) will be BeAChooser? Where will they be, in relation to NGC 3516?

And what would you predict for the next ten years, and the ten years after that, and ...?
 
Sorry BAC, the first part of my answer ran a little long, but it is relevant to how a large number of samples and sample sets is needed to determine association.

David, the coin toss samples are completely independent events drawn from a process with the exact same probability of producing a head every single time. Do you think that the likelihoods of quasar/cluster arrangements and redshifts are completely independent of one another?
Now you are using a priori arguments to justify your use of an a posteriori argument.

The null assumption is that there is a random distribution of the objects; in other words, that there is no pattern beforehand.

And remember that random does not mean evenly distributed. You can take a matrix of 1,000 x 1,000 cells and randomly place one hundred objects in it, using the pseudo-random generator of your choice, or six ten-sided dice, or whatever means you want for randomly placing dots in the 2D matrix.
Then place a number of dots in the matrix, clearing and placing again each time: say ten dots, one hundred dots and one thousand dots in three separate runs.

In the ten-dot placement the chance of any particular coordinate position receiving a dot is very low, 0.00001; for the 100-dot placement it is still low, 0.0001; and in the thousand-dot run it is still low, 0.001.
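Here is a rough sketch in Python of those figures (my own toy check, not anything from a real survey); the grid size and the target cell are just the numbers from the example above.

```python
# A sketch of the dot-placement figures above: the exact chance that one
# pre-chosen cell receives a dot, plus a quick Monte Carlo check for the
# 1,000-dot case (the only one common enough to see in a modest run).
import random

CELLS = 1_000 * 1_000          # the 1,000 x 1,000 matrix

for n in (10, 100, 1_000):
    print(n, n / CELLS)        # 1e-05, 1e-04, 1e-03, as quoted above

trials, hits = 20_000, 0
for _ in range(trials):
    dots = random.sample(range(CELLS), 1_000)   # 1,000 distinct random cells
    if 0 in dots:                               # cell 0 stands in for any fixed (x, y)
        hits += 1
print("simulated:", hits / trials, "expected:", 1_000 / CELLS)
```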

Yet here is the thing: patterns can arise from a totally random system. Even in the ten-dot run you could have dots next to each other from a purely random process.

Say that a dot is at coordinate position (X,Y). In the ten-dot placement, what is the chance that the adjacent position (X+1,Y) also receives a dot? The chance does not change; it is still 0.00001. It does not go up or down because there is a dot at (X,Y). The probability for each of the spots next to the placed dot is 0.00001, and there are eight spaces adjacent to the placed dot; each of them has exactly the same chance of placement, 0.00001, regardless of any prior placements. Now consider every possible combination of ten dots in the matrix, which is a very high number.

Which is a really big number.
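For what it's worth, that number is easy to compute exactly with the standard library:

```python
# Number of distinct ways to place 10 dots in a 1,000 x 1,000 grid.
# Every one of these configurations -- a straight line of ten dots included --
# is equally likely under uniform random placement.
import math

print(f"{math.comb(1_000_000, 10):.3e}")   # roughly 2.8e+53
```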


But that does not mean that a particular configuration is more or less likely than any other configuration.

A specific placement of all ten dots in a line has exactly the same chance of occurrence as any other possible configuration. That is why it is random.

So there is an equal probability for every configuration. A pattern with the dots in a line is no less likely than a pattern with the dots dispersed; they are equally likely.

So when you have a hundred dots and a thousand dots, you can get all sorts of patterns and associations, but they are still arising from a totally random process.

What you would have to do is study large numbers of configurations to determine whether there is a random distribution or a weighted distribution of some sort.

Say we have two algorithms for determining dot placement (sketched in code below).
1. Random: there is no weighting or bias to the distribution.
2. Weighted: there is a small possibility that a square near an existing dot will receive another dot at a higher rate. Say the algorithm places dots on the following basis: for each dot after the first dot has been placed, there is a 10% chance that it will be a biased dot, placed near an existing dot within a three-square radius, with the radial distance and the originating dot both randomly determined. (So, first there is a 1/10 probability of being a biased dot once dot n=1 is placed; then, if it is biased, an originating dot is randomly chosen from all existing dots, and the biased dot is placed within a three-square radius of the selected dot.)
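Here is a sketch of the two algorithms in Python. It is my reading of the description above; details such as the grid size, how "within a three square radius" is sampled, and what happens at the grid edges are assumptions on my part.

```python
import random

SIZE = 1_000      # 1,000 x 1,000 grid, as in the earlier matrix example
BIAS_P = 0.10     # chance that a dot after the first is a "biased" dot
RADIUS = 3        # biased dots land within three squares of an existing dot

def place_random(n_dots):
    """Algorithm 1: uniform random placement, no weighting or bias."""
    dots = set()
    while len(dots) < n_dots:
        dots.add((random.randrange(SIZE), random.randrange(SIZE)))
    return dots

def place_weighted(n_dots):
    """Algorithm 2: each dot after the first has a 10% chance of being placed
    within RADIUS squares of a randomly chosen existing (originating) dot."""
    dots = {(random.randrange(SIZE), random.randrange(SIZE))}
    while len(dots) < n_dots:
        if random.random() < BIAS_P:
            ox, oy = random.choice(list(dots))                   # originating dot
            x = (ox + random.randint(-RADIUS, RADIUS)) % SIZE    # wrap at edges (assumption)
            y = (oy + random.randint(-RADIUS, RADIUS)) % SIZE
        else:
            x, y = random.randrange(SIZE), random.randrange(SIZE)
        dots.add((x, y))     # landing on an occupied cell just means another try
    return dots
```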


Now say that you are given one configuration from each of the two algorithms. Are you going to be able to say which one is from the random placement and which one is from the weighted placement? Not very likely; you do not have enough samples to really tell.

The ability to distinguish the random placement from the weighted distribution requires multiple samples generated by each algorithm. With a sample size of one from each, Sn=1, it will be impossible to tell the two apart, because you cannot determine a distribution pattern for either algorithm.

It is only with a much larger Sn, such as 100, 1,000 or 10,000, that you would be able to detect the difference between the two algorithms with any sort of accuracy.

This is counterintuitive, I understand that, but I am discussing real-world things here. With Sn=1 it would be really hard to tell whether there was a difference between the algorithms; it is only as Sn gets very high that a determination could be made that algorithm number two has a weighted distribution.

Both will exhibit patterns; it is only from a comparison over a large Sn that a determination can be made with any accuracy.

So this is very comparable to determining whether QSO placement is random or not; a visible pattern is equally possible in a single configuration. A limited Sn, say of 25, is not going to give you enough data to say that there is a weighted distribution.

It is only by comparing large Sn, and looking at the patterns between configurations, that the weighting will become visible. With a large Sn, the average distance between dots will be noticeably different between the two algorithms. It will not be apparent until Sn is high enough.
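Continuing the sketch above (and reusing the place_random and place_weighted functions from it), one way to see this is to average a single summary statistic, say the mean nearest-neighbour distance, over many samples from each algorithm. The particular statistic and the sample sizes here are my choices, not anything from the discussion.

```python
import math

def mean_nn_distance(dots):
    """Average distance from each dot to its nearest neighbour."""
    pts = list(dots)
    total = 0.0
    for i, p in enumerate(pts):
        total += min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / len(pts)

def average_over_samples(placer, n_dots=100, sn=200):
    """Mean of the statistic over Sn independently generated configurations."""
    return sum(mean_nn_distance(placer(n_dots)) for _ in range(sn)) / sn

# With Sn = 1 the two numbers overlap heavily from run to run; as Sn grows,
# the weighted algorithm settles to a noticeably smaller average distance.
print("random  :", average_over_samples(place_random))     # from the sketch above
print("weighted:", average_over_samples(place_weighted))
```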
 
Hiya BAC,

The first post ran long, but I hope I made things a little clearer. The question is how you decide that there is a difference between your theorized effect and a random distribution, and it is the plague of many a science.

Back to your post, and thanks for the dialogue.
...
When you identify a sample in the population by naming it, don't you remove it from the population for drawing the next sample?
Well this is going to sound goofy because it depends upon the sampling technique you use.

In a total census survey, something like what the US government does, there is an effort to sample every member of the population. So you do not want to repeat a survey in that case, and avoiding repeats is good practice in all cases.

However it is rather more complex than that, and a real bugaboo for things like 'political opinion surveys'.

The problem is that you may want to know the opinion of 300 million people but you most likely do not want to pay for that. Or you want to estimate combat deaths in a war, which is very risky to do directly. So you try to take a representative sample of the population.

So for the political opinion survey, you want to decide what your sample size is going to be, and because of cost the sample never even approaches 1% of the population. What you try to do is make sure that your sample, or the survey itself, reflects the demographics of the country as a whole. (Which is a huge sticking point and a real source of sampling bias.)

So what you want to do is take a sample of the country and match it to the demographic makeup of the country.

Again, not as easy as it seems. If you are going to survey 2,000 people, then you don't want to just call 40 people in each state; you want the numbers called to reflect the actual population distribution. So you have to use other surveys before you even start your survey. Then you want to match socioeconomic status, education, employment and a host of other variables. So again, more surveys before you even survey, if you really want a decent sample.
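As a small illustration of that proportional-allocation step (the stratum populations below are made up, purely for the arithmetic):

```python
# Split a 2,000-person survey across strata in proportion to population,
# instead of calling the same number of people in every stratum.
def allocate(sample_size, populations):
    total = sum(populations.values())
    return {name: round(sample_size * pop / total) for name, pop in populations.items()}

# Hypothetical stratum populations, for illustration only.
strata = {"A": 39_000_000, "B": 29_000_000, "C": 21_000_000, "D": 1_000_000}
print(allocate(2_000, strata))
# -> {'A': 867, 'B': 644, 'C': 467, 'D': 22}  (rounding will not always sum exactly)
```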

This is not what is done. There are two common methods (and plenty more): the first is to pretend that you are getting a random sample by just calling two thousand people on the phone, randomly from your database, and pretending that it matches the demographics of the country.

The other one is to find ‘representative markets’, say a town that does reflect the demographic of a region, and you sample randomly in those ‘markets’ based upon the idea that they do accurately reflect the region they are meant to represent.

Now in the matter at hand, the QSOs, the best method is to determine the standard or normative values of chosen populations and subpopulations, which is what I have suggested.

You should not repeat a member in different samples, so as you suggest a member is removed from the other sample sets.

But a thousand of each of the following would give you a huge sample set to begin to look at:
-‘normal’ galaxies
-AGN galaxies
-old galaxies
-young galaxies
-random spots on the sky

Then you can learn certain things, like the areal density of QSOs in areas that are not deliberately associated with galaxies (the random points). This is the first normative sample, which you hope gives you your baseline 'noise', i.e. the random level of QSO statistics.

The others would be chosen as subpopulations that could really make or break the case for Arp's association.

What you would be looking for is that the level of QSOs associated with Arp objects rises one or more standard deviations above the noise level, the normative value for the random spots on the sky.
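In code, that comparison might look something like the sketch below; the counts are invented, and a fuller treatment would of course use far more control fields and worry about the standard error of the study-sample mean, not just the per-field scatter.

```python
import statistics

def excess_in_sigmas(study_counts, control_counts):
    """How many standard deviations the study mean sits above the control ('noise') mean."""
    mu = statistics.mean(control_counts)
    sigma = statistics.stdev(control_counts)
    return (statistics.mean(study_counts) - mu) / sigma

# Hypothetical QSO counts within some fixed radius (illustration only):
random_sky_points = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]   # the normative 'noise' sample
arp_objects       = [6, 4, 7, 5, 6]                  # the study sample
print(excess_in_sigmas(arp_objects, random_sky_points))   # roughly 2.5 sigma here
```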

So yes, you do not want the same area or galaxy represented twice in your survey. But that does not mean you can say that the QSO association is 10^-6 probable, because you don't really know what the distribution of QSOs is. You need to compare it to a representative sample of the QSO population; then you can say that there is something above the noise level, or that larger samples are needed to determine an effect, especially if it comes close to the standard deviation.

Then you take larger representative samples and larger study object samples hopefully to clarify the data.

They are of course a huge source of potential bias but it is cost versus representation.
 
Specifically, if the authors of a catalogue explicitly state their catalogue should not be used for statistical analyses, and someone proceeds to do just that, what degree of credibility do you think should be given to that someone's paper?


Hi DRD, my brain is now up to speed. I would say that you would have to be very careful to acknowledge whatever inconsistencies or rationale led the original researchers to say that their survey should not be used in that fashion, and so it would have to be used with great care and caution.
 
Sorry BAC, I hope I haven't bored you to death; here is a third lengthy post.

Look at it this way, if you have a field of walnut shells with one pea under one of the shells there is a certain probability that you will find a pea if you lift a shell. If that shell contains the pea, do you think the probability of finding a pea if you lift another shell is the same? Apparently so.
This example is not relevant to the issue at hand; in this case you stated that there is one pea under one shell.

So I don’t see how that is analogous.

I can use something you mentioned earlier, thirty quasars per field. In this case we will say the field is ten by ten, that there are thirty peas under the shells, and that each shell may have one pea or none.

So the peas are randomly distributed correct?

You pick up a shell but before you do, what is the probability that there is a pea under it?
30/100, or 0.3. Do we agree?

What is the probability that there is a pea under the next shell that you pick?
It is still the same, despite the logic that some people use. Prior to examination each shell always has a 3/10 chance of having a pea under it, no matter how many shells you have lifted and how many peas you have found. The statistic does not change. It is counterintuitive and goes contrary to the Deal or No Deal sort of logic.

So you have lifted a shell and there is a pea under it. What is the chance that if you lift the shell one to the left it will have a pea under it? 3/10.
And so on; no matter that you chose the shells in a straight row across the field, each shell will always have a 3/10 chance of having a pea under it.

Now you say, “But I have turned over 29 shells, found a pea under each, and there is only one pea left.” So the chance that the pea is under any particular one of the remaining 71 shells is 1/71, not 3/10. And in that case it is true, because you have counted the peas and you know the number to start with and how many shells are left.

But what if you were to turn over the ‘next’ shell? So there you are, about to turn over shell (5,6), and you say, “Well, I know that it is 1/71, not 3/10.” That is the Deal or No Deal logic, which is post facto sampling. If you had chosen to turn over shell (5,6) before any other shell, then what would the chance be, 1/71 or 3/10?

The random distribution says that the chance that a shell will have a pea under it is 3/10; you are using post facto information to change your knowledge of the distribution. The prior probability remains the same.
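A quick simulation of that last point (mine, using the 30-peas-in-100-shells numbers from above): if the shell is picked before any lifting, every shell has the same 30/100 chance.

```python
import random

def p_pea(target_shell, trials=100_000, shells=100, peas=30):
    """Estimate the chance a pre-chosen shell hides a pea, before any are lifted."""
    hits = 0
    for _ in range(trials):
        pea_shells = set(random.sample(range(shells), peas))   # one random pea layout
        if target_shell in pea_shells:
            hits += 1
    return hits / trials

# The same ~0.30 for any shell you name in advance.
print(p_pea(0), p_pea(55), p_pea(99))
```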

It is a bugbear and very confusing, because we do always adjust our data to support new thinking. And the average distribution won't help, because while the average is thirty per field, some fields will have 60 peas and some will have zero.


But let us say that there is a really huge field of shells, a million-by-million matrix, and we don't actually know how many peas are out there. What we want to know is how many peas there are.

If you want to sample this large field, will you turn over all the shells in a row? That is one strategy; or, if you were listening to old Ms. Battleaxe in class, you remember that you should really spread out your chosen shells, in case you run into an aberrant cluster of peas that throws your sample off.


Now say that there are apples in and amongst the shells and you decide to start looking at the shells near the apples. And you begin to notice that there are some apples that seem to have a lot of peas near them and some that appear to have no peas near them.

1. What conclusions can you draw about the association of peas and apples? Are there special apples which have more peas near them?
2. How would you decide?
3. If you have only turned over ten thousand shells within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.
4. There are 300 peas that you have found; can you say that the average occurrence of a pea is 300/10000 = 3/100, or 3%?
5. If you know that you have found an apple that has 4 peas immediately adjacent to it, can you say that the random probability of that occurring is 0.03 x 0.03 x 0.03 x 0.03, or about 8.1 x 10^-7? (See the sketch below.)


I know these questions seem silly and foolish, but they are relevant to sampling theory. And they raise the question: do you have a representative sample of peas under shells?
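On question 5 specifically, here is one way to check the naive multiplication under a uniform random model with a 3% pea rate; the 1,000-apples figure is my own, just to show how much the answer depends on how many places you look.

```python
from math import comb

p = 0.03          # assumed per-shell pea probability (300 / 10,000)
adjacent = 8      # shells immediately surrounding one apple

p_naive = p ** 4                                                  # ~8.1e-07
p_exactly_4 = comb(adjacent, 4) * p**4 * (1 - p)**(adjacent - 4)  # ~5.0e-05
p_4_or_more = sum(comb(adjacent, k) * p**k * (1 - p)**(adjacent - k)
                  for k in range(4, adjacent + 1))

# If 1,000 apples are examined, the chance of seeing *at least one* apple
# with 4+ adjacent peas is around 5% -- enormously larger than p^4 suggests.
print(f"{p_naive:.2e}  {p_exactly_4:.2e}  {1 - (1 - p_4_or_more)**1000:.3f}")
```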
 

DD, thanks for trying to make these statistical aspects easier for one like me to understand. :)

I like your analogy about the shells, peas and apples.

However, I think that the analogy is flawed in a sense.

3. If you have only turned over ten thousand shells within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.

You see, if this analogy is directly comparable to the Arp statistics, haven't they always referenced the average density of peas from the SDSS or 2dF catalogs, or similar sources? They aren't looking around the apples to determine the average density, they already have good estimates of that. The surveys have already looked at vast swaths of shells, and seen a number of peas. And these swaths were chosen to provide a good representation of the entire field of shells.

3. If you have only turned over ten thousand shells within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.

We would know the average density of peas around the apples. We can then compare this average to the average of our "swath" survey, and see if the density around the apples is the same as the "swath" density.

4. There are 300 peas that you have found; can you say that the average occurrence of a pea is 300/10000 = 3/100, or 3%?

Around an apple, yes, we could say the average occurrence of a pea is 3%.

It seems as if I have seen other astro papers discussing data reduced in Poisson statistical fashion, but I don't remember seeing the use of multiple samples of different types.
 
Wrangler, one thing to remember is that, in any of the surveys (such as 2dF and SDSS), there is an average areal density of 'quasars' (however defined), and also a measure of how that average varies across the sky (a standard deviation, for example), and a measure of how the distribution of averages differs from Gaussian (for example), and so on.

As I mentioned in an earlier post in this thread, one thing which the papers BeAChooser has cited all seem to lack is a recognition that their calculations should include more than just the respective areal averages (much less quantitative work to estimate it, etc).

There are, of course, many other factors and aspects, but they are (all?) beyond the scope of the highly simplified example DD is working through.
 
(just one cited paper)

https://ritdml.rit.edu/dspace/bitstream/1850/1788/1/SBaumArticle11-2004.pdf "The host galaxies of luminous quasars, David J. E. Floyd, Marek J. Kukula, James S. Dunlop, Ross J. McLure, Lance Miller, Will J. Percival, Stefi A. Baum and Christopher P. O’Dea, 2006"
Thanks BAC, for bringing this paper to the attention of this thread's readers.

There has been some discussion of how we can tell that quasars are at (or near) the distances implied by their redshifts.

This paper, and the earlier ones it cites, provides some detailed material on how: look at the host galaxies of quasars, and see how well they match galaxies which don't have quasars in their nuclei.

This paper also covers some models of quasars and their host galaxies, quasar evolution, and so on.

In other words, it's part of a long-standing effort to build an extensively cross-linked and self-supporting understanding of quasars ... a set of consistent models.

At another level, a good example of how astronomy, as a science, works.
 
DD, thanks for trying to make these statistical aspects easier for one like me to understand. :)

I like your analogy about the shells, peas and apples.

However, I think that the analogy is flawed in a sense.
Well, all analogies always are. It is meant more as a means of opening discussion, and of trying to get the discussion to a point where the validity of a posteriori statistics is addressed. There is apparently some use of Bayesian models, but I am more familiar with the other model; I will have to do more reading to discuss it with even a little understanding. I am used to population sampling, not belief judgments based upon the data at hand. (It just goes against all my training and biases.)
You see, if this analogy is directly comparable to the Arp statistics, haven't they always referenced the average density of peas from the SDSS or 2dF catalogs, or similar sources?
Yes and no. What I have a problem with is the calculation of the odds of an event occurring using the a posteriori method. In the science I am most familiar with (psychology), the use of population sampling is usually a huge part of any discussion. It is often abused without even getting to something like meta-analysis. So the use of a posteriori reasoning is just foreign to me.
The issue is that a density determination cannot be applied to the probability judgment the way that I have seen some do it here.

To take an average density and then compute the likelihood of an event seems, just not right. You should sample the actual distribution and then try to compare your effect sample to the representative sample. That way you are basing the representative sample upon more observation.

This is totally huge in certain fields like archaeology, and is resisted and embraced in equal measure. Modern theory says that if you have a site you want to dig at, you should randomly sample the site to see what is really there, not just do like Schliemann and dig up the spot you think looks coolest.
They aren't looking around the apples to determine the average density, they already have good estimates of that. The surveys have already looked at vast swaths of shells, and seen a number of peas. And these swaths were chosen to provide a good representation of the entire field of shells.
Yes, but I disagree that this would allow you to say that the average density is thus and such and that the probability of a certain arrangement is 10^-6 because of the average density.

That is what the first post about the dots in the matrix is about.

The ten-dot placement with all the dots in a line is just as likely as any other, more dispersed, arrangement.

The apples example is more about trying to get to what Arp, Burbidge and others have done.
We would know the average density of peas around the apples. We can then compare this average to the average of our "swath" survey, and see if the density around the apples is the same as the "swath" density.
Yes, that is something you could do, but in my example we only know the average pea density in the sample.


So again, the best procedure would be not to assume that the swaths are representative, but to wait until the known data is large enough and take samples from it.

Better would be to do the random point sample I talked about. And to also sample around galaxies.

That would mean you would choose the random points in the walnut field and sample the pea arrangements around them and that you would try to get a sample of representation around the apples as well.

Remember that an average density is just that, an average; so if your average is 3, you can still randomly have samples with 0 and 6 peas respectively. So you cannot conclude that a certain arrangement is out of line. With a field area and an average density you cannot go back and say that a particular arrangement is X likely.
(Or, even worse, in some distributions the average could be based upon values varying from 0 to 12,000.)
To do that, the best method would be large sets of representative sampling.
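To put a number on the "0 and 6 around an average of 3" point, here is the spread a simple Poisson model of random placement gives (my choice of model, for illustration):

```python
from math import exp, factorial

mean = 3.0
for k in (0, 1, 2, 3, 4, 5, 6, 8, 10):
    p = exp(-mean) * mean**k / factorial(k)   # Poisson probability of seeing k peas
    print(f"P(count = {k:2d}) = {p:.3f}")

# Counts of 0 and of 6 each turn up in roughly 1 sample in 20 (P ~ 0.05),
# so neither is evidence of anything unusual on its own.
```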
Around an apple, yes, we could say the average occurrence of a pea is 3%.
In our sample the average density is 0.03. :)
It seems as if I have seen other astro papers discussing data reduced in Poisson statistical fashion, I don't remember seeing the use of multiple samples of different types.

Well that may be, but it would seem that you and DRD have referenced some that discuss sampling bias.


Which I would say is only valid when the events are rare and a larger sample is not available.

Indicative but not definitive.
 
Wrangler, one thing to remember is that, in any of the surveys (such as 2dF and SDSS), there is an average areal density of 'quasars' (however defined), and also a measure of how that average varies across the sky (a standard deviation, for example), and a measure of how the distribution of averages differs from Gaussian (for example), and so on.

As I mentioned in an earlier post in this thread, one thing which the papers BeAChooser has cited all seem to lack is a recognition that their calculations should include more than just the respective areal averages (much less quantitative work to estimate it, etc).

There are, of course, many other factors and aspects, but they are (all?) beyond the scope of highly simplified example DD is working through.

Yup, no doubt; my example is aimed at one very specific set of methods, the a posteriori model of distribution.

The analogy could be extended to other areas (but would become unwieldy), like QSO definition in sampling.
 
Right David. :D

Well BAC it sure seems to me that you don't, unless you are being deliberately obtuse.

See, here is the deal: I have been trained in sampling theory, practiced sampling theory, and read a lot of publications regarding sampling theory and research articles using sampling theory. And not until I looked at some of this stuff did I realize that there are people who DO use a posteriori statistics in trying to find out probability distributions. So in all my reading and research in psychology, mental health, domestic violence and epidemiology, which were of professional interest to me, and then in other areas like biology, genetics, astronomy and history, I had never encountered a posteriori use like the Poisson distribution or Bayes' theorem until just recently. I knew that they existed from my class work, but I didn't know that there were people who used them in research.

So after years (since 1978) of reading and researching various topics in the social sciences, public health and epidemiology, I am shocked to learn that there are people who use a posteriori probability. I am trying to read about it, but it has never been used in my formal class work, the research I was involved in at school and in my professional life, or the extensive reading that I have done as a professional and an interested person.

So I acknowledge my bias, I acknowledge my instructors' and trainers' bias, I acknowledge the bias of the researchers I have worked with, and I acknowledge the bias of all the researchers whose articles I have read: it is very hard for me not to dismiss the a posteriori use of statistics out of hand.

I am reading and trying to wrap my mind around them, but as I said, it is hard for me not to just dismiss them out of hand as unreliable and prone to sampling error.
So David, are you ever going to get around to providing me with some numbers (for quasars, galaxies, distribution and the completeness of Arp's survey) so that I can run that calculation over to your liking? :cool:

Well, here is the rub BAC: I didn't know that there were people who actually used Bayesian statistics and other a posteriori methods. In all my training, education and reading they are just not used and are considered to be totally suspect. So if you want to use Bayes' Theorem, I have to really try to put it in context and cross-interpret it against the research bias that I already have.

In all the research I have been involved in and read and been trained in, a posteriori statistics are just ruled out of hand and never used.

So it will be a while before I can make a coherent argument and not just do as my training and bias says and dismiss them out of hand.

So I will ask you in return:

In what areas do you feel that the use of a posteriori statistics is a viable way of analyzing data?

To me it is as though someone has suggested crystal gazing as a way of prospecting for oil, I am trying to read about it and not just react from my bias.
 
Dancing David, please do not take anything in this post as a substitute for BeAChooser's views and understandings of the method(s) he used in his post125 or his understanding of those used in any of the papers by Arp et al. that he has cited.

However, I think it's not clear just how 'clean' the method in BAC's post is, and in particular I think it contains more than just a pure a posteriori analysis.

Further, although you would have no difficulty finding (pure) a posteriori analyses in early Arp et al. papers (and, it must be said, in papers by other authors three decades or so ago too), their later papers are no longer so obviously awful (except, of course, the Bell one!) ... even the Chu et al. one that BAC based his analysis on is more nuanced than how BAC has presented it.

For a fairly straight-forward example of how statistics are used in modern astronomy, you may be interested in Freedman et al. (2001), "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant". To say that it's a landmark paper would be an understatement; it has already been cited >1200 times!

For an example of much more extensive - and rigorous - use of statistics, try Dunkley et al. (2008?) "Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and Parameters from the WMAP data", and some of the papers it references. Fair warning though, some of this is really heavy duty!
 

Sure, and thanks. Like I said, it just blows my mind that someone would use Bayesian statistics. Now, if you only have three events to evaluate, then it makes sense. But I am trying to be polite about it. I have issues with p-certainty when sample sets are small as well, so it is not as though I just substitute one for the other. My favorite pet peeve is when some political survey mentions a 'margin of error', which usually refers to the sampling statistic of repetition, not actual representation.

But Bayesian statistics just blow my mind. What I have read about them does not seem to indicate they have much use except when the data involves very limited samples.
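For what it is worth, here is the sort of calculation a Bayesian treatment of a very small sample boils down to (a minimal sketch with invented numbers, using a Beta-Binomial update, which is only one of many possible models):

```python
def beta_binomial_posterior(hits, trials, prior_a=1.0, prior_b=1.0):
    """Update a Beta(prior_a, prior_b) prior for a rate after seeing `hits` in `trials`."""
    a = prior_a + hits
    b = prior_b + (trials - hits)
    return a, b, a / (a + b)          # posterior parameters and posterior mean

# Two 'successes' in three trials: posterior Beta(3, 2), mean 0.6,
# versus the raw frequency 2/3 -- the prior matters most when data are scarce.
print(beta_binomial_posterior(hits=2, trials=3))
```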
 
Thanks DRD, that first paper is amazing; almost every paragraph refers to current or past research, and they discuss the rationales of their methodology very carefully. There is some wonderful stuff about the statistics they use and the built-in problems.

And that was just the first five pages; they seem to address all the confounding factors that they can. My brain needs a rest now.

Here is an example, from page six, of what makes this paper a joy to read. (And I was skimming ahead; it looks rather juicy and tasty in many places.)
http://arxiv.org/abs/astro-ph/0012376

Since each individual secondary method is likely to be affected by its own (independent) systematic uncertainties, to reach a final overall uncertainty of ±10%, the numbers of calibrating galaxies for a given method were chosen initially so that the final (statistical) uncertainty on the zero point for that method would be only 5%. (In practice, however, some methods end up having higher weight than other methods, owing to their smaller intrinsic scatter, as well as how far out into the Hubble flow they can be applied – see §7). In Table 1, each method is listed with its mean dispersion, the numbers of Cepheid calibrators pre– and post–HST, and the standard error of the mean. (We note that the fundamental plane for elliptical galaxies cannot be calibrated directly by Cepheids; this method was not included in our original proposal, and it has the largest uncertainties. As described in §6.3, it is calibrated by the Cepheid distances to 3 nearby groups and clusters.) The calibration of Type Ia supernovae was part of the original Key Project proposal, but time for this aspect of the program was awarded to a team led by Allan Sandage.

So while they acknowledge error, it looks as if they go out of their way to talk about it; marvelous.
 
DD and DRD make some good statistical sense, IMHO.

How hard would it be for Arpians to take a look at the ~30 bright galaxies with quasars associated, and then look at another ~30 bright galaxies and examine the quasar distribution around them?

Wouldn't that at least go in the right direction, if not being ideal in terms of n samples?

That way, they could begin some tentative steps towards evaluating possible errors in their work via multiple statistical methods, which a number of these other papers demonstrate.
 
Wrangler, as I understand the Arpian idea, it's both very loose and very tight ...

Part of the problem is 'what is a quasar?' - if 'quasars' are AGNs, just like in mainstream astronomy, then it should be both very, very easy to do a test somewhat like what you describe ... and also almost impossible.

If there were a clear, quantitative relationship concerning redshift, ejector, and speed of ejectee (with respect to the ejector), then it would be very simple ... and, no doubt, long since confined to the dustbin of history.

But it's anything but; in fact, if you read the Arpian papers over the past few decades carefully, you may get intensely annoyed ... it's almost as if they are being deliberately vague and obscure, leaving as much wriggle room as possible, and committing to nothing.

For example, it used to be that all quasars were ejected from the likes of Seyfert galaxies, or other 'active' galaxies. Then it became galaxies that may have been, once, active, but are no longer. It was also, once, that the ejectees were on a linear, no-room-for-fudging path: decreasing redshift with distance, increasing maturity (read less like a quasar) with redshift, etc, etc, etc. Now, apparently, it's all up in the air, with nothing concrete.

Too, there was a time when all redshifts had to fit the Karlsson formula (or some variant); now they don't.

Whatever.

Consider this though: if it's variable mass, and if SR rules, then how come we don't detect protons (etc) with differing masses in the flood of cosmic rays? After all, we now know that a not insignificant fraction of CRs come from distances that are much greater than those to the nearest 'quasar ejecting active galaxies' (M82 is the nearest, I think).

Or how come these ejected quasars never seem to create wakes in the inter-galactic medium through which they must travel, especially in rich clusters? After all, such wakes are now well detected, from their radio and x-ray signatures.

And so on.
 
DD and DRD make some good statistical sense, IMHO.

How hard would it be for Arpians to take a look at the ~30 bright galaxies with quasars associated, and then look at another ~3o bright galaxies and examine the quasar distribution around them.

Wouldn't that at least go in the right direction, if not being ideal in terms of n samples?

That way, they could begin some tentative steps towards evaluating possible errors in their work via multiple statistical methods, which a number of these other papers demonstrate.


Regardless of Arp's sample, it is the effect sample and is permitted to be very small. I would want the normative groups to be very large: ten thousand if possible, or a thousand at least. And they should include both galaxies and random spots on the sky.

I suggest AGNs because, if Arp is right, they should show a higher-than-average QSO density, like the Arp galaxies. I suggest young galaxies and old galaxies for the same reason: there should be a differential, a difference, if Arp's model is correct.

I also agree with DRD: there should be some other measures that would say that these objects are ejected and acquiring mass, but the mechanism is beyond me, so I don't know what signatures we would see...
 
WARNING! This is a gratuitous bump!

In case BeAChooser is still with us, and reading this thread: you have not been forgotten!

I, DeiRenDopa, am patiently waiting for you to return and answer the (many) open questions about the stuff that you posted here (no doubt there are others also waiting ...).
 
But that doesn't rule out the possibility that the two phenomena are connected ... especially if it turns out the high redshift x-ray emitting object near galaxies are improbably quantized and distributed in a pattern matching the theory that Arp, Narlikar, et al have proposed for their origin and behavior.

I'm still not at all clear on what you're trying to say here; would you mind clarifying please? Specifically, I am not aware of any "theory that Arp, Narlikar, et al have proposed" which quantitatively accounts for "high redshift x-ray emitting object near galaxies [...] improbably quantized and distributed in a [specific] pattern".

Where is that theory published? Where are the specific, quantitative behaviours explicitly derived from that theory?


I suggest you look up all the papers and books that Narlikar and his associates have published. There seem to be dozens. :)

And I found this recently. It might be of some interest to you:

http://arxiv.org/pdf/physics/0608164 "A Proposed Mechanism for the Intrinsic Redshift and its Preferred Values Purportedly Found in Quasars Based on the Local-Ether Theory", Ching-Chuan Su, Department of Electrical Engineering, National Tsinghua University, Hsinchu, Taiwan, August 2006. Abstract – "Quasars of high redshift may be ejected from a nearby active galaxy of low redshift. This physical association then leads to the suggestion that the redshifts of quasars are not really an indication of their distances. In this investigation, it is argued that the high redshift can be due to the gravitational redshift as an intrinsic redshift. Based on the proposed local-ether theory, this intrinsic redshift is determined solely by the gravitational potential associated specifically with the celestial object in which the emitting sources are placed. During the process with which quasars evolve into ordinary galaxies, the fragmentation of quasars and the formation of stars occur and hence the masses of quasars decrease. Thus their gravitational potentials and hence redshifts become smaller and smaller. This is in accord with the aging of redshift during the evolution process. In some observations, the redshifts of quasars have been found to follow the Karlsson formula to exhibit a series of preferred peaks in their distributions. Based on the quasar fragmentation and the local-ether theory, a new formula is presented to interpret the preferred peaks quantitatively." :)

BeAChooser, in one way I should thank you; what you wrote is good resource material for my study on why threads like these are often so long.

Oh ... you doing a study on that? Out of curiosity, who is funding your study? Or is that just a personal interest of yours? Is there something about long threads that you don't like? Do they irritate you? Will you be publishing this study in some journal? :)

The last is, of course, just what a troll does, and I note that many JREF forum regulars have called you just that. If that's so, then it's an obvious conclusion - one reason why threads like this are so long is that people keep feeding the trolls.

So is that how you are going to escape explaining the improbabilities surrounding NGC 3516, NGC 5985 and NGC 3628? Label me a "troll" and walk away (so you don't feed the troll)? Maybe I'll do a study on people who use that ad hominem as a means of avoiding the issues. :)
 
