Arp objects, QSOs, Statistics

And now do you notice that all the objects in the NGC 5985 field are also close to the redshift quantizations theorized by Karlsson? So maybe we were too quick to dismiss that case. Care to comment on the probability of all these observations taken together? Not only do we have the probability that both have 6 high redshift objects aligned along the minor axis, but almost all the objects are relatively close (within the margin of error) to Karlsson's values, and in one case all five are even in the same numerically increasing order as Karlsson's values. Very, very curious and, given all the values that redshift could take, improbable? :D

BAC, these authors: http://www.journals.uchicago.edu/doi/abs/10.1086/432754

Have looked for periodicity in QSO redshifts. They looked at both 2dF and SDSS data.

Guess what they found?

In summary, using samples from SDSS and 2QZ, we demonstrate
that not only is there no periodicity at the predicted
frequency in log (1 + z) and z, or at any other frequency, but
there is also no strong connection between foreground active
galaxies and high-redshift QSOs. These results are against the
hypothesis that QSOs are ejected from active galaxies or have
periodic intrinsic noncosmological redshifts.

Now, it is only one paper, but I dunno if I can concede that there is quantization of QSO redshift.
 
Guess what they found?

Quote:
In summary, using samples from SDSS and 2QZ, we demonstrate
that not only is there no periodicity at the predicted
frequency in log (1 + z) and z, or at any other frequency, but
there is also no strong connection between foreground active
galaxies and high-redshift QSOs. These results are against the
hypothesis that QSOs are ejected from active galaxies or have
periodic intrinsic noncosmological redshifts.

Now, it is only one paper, but I dunno if I can concede that there is quantization of QSO redshift.

Well if there is no periodicity at all, then that makes the two cases that I described above all the more remarkable. Could it be there is periodicity in a certain type of quasar or situation that is being masked by mixing them in with the overall database?
 
I am wondering if lensing could explain both the coincidence of axial alignment, and the periodicity of z for some cases (two at least, mentioned in these recent posts).

I am reading the L-C&G paper about axial alignment ... I will post more later!
 
Minor axis preference?

I have taken a look at the L-C/G paper:

http://arxiv.org/PS_cache/astro-ph/pdf/0609/0609514v1.pdf

First thing, they have some histograms showing the number of QSOs for position angles off of the minor axis from 0 to 90 degrees. Figures 2, 10, and 13 all have different lower limits on the y-axis, which just bugs me. You'd think that there is a huge difference in the number of QSOs within 5 degrees of the minor axis, compared to the major axis 90 degrees away, and then you see the y-axis of the histogram ... what the? The excess on the minor axis is about 2 quasars? The only histogram that has a lower y-axis limit of zero is Figure 10, which shows a very high level of anisotropy ... that's probably why they let that histogram be presented with the y-min at zero!

They say that extinction on the order of ~0.1 magnitudes could explain the anisotropy that they see.

This paper: http://journals.cambridge.org/download.php?file=%2FIAU%2FIAU2004_IAUS225%2FS1743921305002218a.pdf&code=9034b9fd5b4ac953ec9cdaa4de02bcf5 gives the expected changes in QSO magnitudes from absorber type massive lenses to be up to -0.2 magnitude, so it seems possible that the extinction effect may have some bearing on the anisotropy.

I think that L-C and G are right to be cautious about the anisotropy.
 
Wrangler, DeiRenDopa, and Dancing David ... let's look closer at the probabilities of the alignment in NGC 3516 occurring by sheer coincidence. Recall, this is the case where 5 quasars aligned along a minor axis have 5 z's that individually match the quantized z's that Karlsson predicted in the 1970's (http://adsabs.harvard.edu/abs/1977A&A....58..237K Karlsson, K.G., "On the existence of significant peaks in the quasar redshift distribution", Astron. Astrophys. 58:237–240, 1977). Note that I am assuming these particular quasars weren't used in Karlsson's study so it's a prediction.

Let's simplify the problem and just say that there are 30 possible values (or small ranges) of z ... from 0 to 3.0 ... in increments of 0.1. Now how many permutations of 5 (r) ordered values from 30 (n) distinct values are there? It turns out there are n!/(n-r)! possibilities, which in this case is 17,100,720. That means the probability of picking those 5 specific z's in order from a range of 30 z's is about 5.8 x 10^-8.
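The permutation arithmetic can be checked in a few lines of Python; this sketch just re-derives the figures above under the same 30-bin, ordered-draw simplification:

```python
from math import factorial

n, r = 30, 5  # 30 redshift bins of width 0.1 over z = 0..3, 5 quasars

# Number of ordered selections (permutations) of r bins out of n: n!/(n-r)!
perms = factorial(n) // factorial(n - r)
print(perms)          # 17100720

# Chance of drawing 5 specific z bins in one specific order
p = 1 / perms
print(f"{p:.2e}")     # 5.85e-08
```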

Now the celestial sphere has about 4 x pi x (57.3)^2 ≈ 41,253 square degrees. If we assume there are 30 quasars per square degree (just to pick a number I hope is conservative, over the range of magnitudes we seem to observe) then there are roughly 1,237,500 possible quasars. That means there could be at most about 250,000 groups of 5 located next to 250,000 different galaxies.
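The sky-area figure is easy to verify too; a small sketch using the assumed density of 30 quasars per square degree from the paragraph above:

```python
import math

# Whole sky in square degrees: 4*pi steradians x (180/pi)^2 deg^2 per steradian
sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2
print(round(sky_sq_deg))        # 41253

density = 30                    # assumed quasars per square degree
total_quasars = density * sky_sq_deg
groups_of_5 = total_quasars / 5
print(round(total_quasars))     # about 1.24 million quasars
print(round(groups_of_5))       # about 250,000 candidate groups of 5
```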

In which case, the probability of finding those 5 specific z's in 250,000 samples is 250,000 x 5.8 x 10^-8 ≈ 0.015. In other words, there would be only a 1.5% chance of seeing that specific object (NGC 3516) in the heavens even if we looked at every single galaxy.

But Arp didn't study 250,000 different samples to find this one. He probably studied no more than a few percent of the galaxies, if that. So it seems to me we should factor in the probability of him actually sampling from the right group of galaxies. In other words, let's assume that Arp actually looked at 25,000 galaxies. What is the probability that his sample would have contained that specific 5 z group? 1/10? So am I correct in suggesting the chance of Arp actually coming up with this particular case for us to study is no better than 0.15 percent (0.0015)?

And in addition to that, we have to add in the fact that all 5 objects are aligned rather narrowly along the minor axis. What's the probability of seeing that happen? Well, assume we throw darts at a dart board. What's the chance that a given dart will land within a 15 degree zone extending from opposite sides of the center of the dart board? 30/360 = 1/12. So if we throw 5 independent darts, each has a 1/12 ≈ 0.083 chance, and the chance of all five landing in that zone is (1/12)^5 ≈ 4.0 x 10^-6. If there are 250,000 samples then the probability of it happening at least once is about 1.0. But again, Arp didn't look at anything near 250,000 cases. Again, he probably only looked at 1/10 that number, if that. Then the likelihood of him picking a group that contained 5 lined up is probably no more than 0.1. In which case, the probability that Arp found this one case considering all the quasars out there is no more than 0.015 percent (0.00015).
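The dart-board figure can be checked both analytically and by simulation; this sketch uses the 1/12-per-dart model from the paragraph above:

```python
import random

ZONE = 30 / 360                 # two opposite 15-degree wedges: 1/12 per dart
p_all_five = ZONE ** 5          # 5 independent darts all landing in the zone
print(f"{p_all_five:.1e}")      # 4.0e-06

# Monte Carlo check of the per-dart probability: a dart at a uniform angle
# lands in one of the opposite wedges when its angle mod 180 is below 15.
rng = random.Random(0)
trials = 200_000
hits = sum(rng.uniform(0, 360) % 180 < 15 for _ in range(trials))
print(hits / trials)            # close to 1/12 = 0.0833...
```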

And then consider the fact that not all the quasars are going to be centered around galaxies. Many of them are going to be spread out over regions where there are no galaxies. I suspect that would reduce the number of groups of 5 from which one could sample by a factor of 10 (at least), which would lower the probabilities by that same amount. So now we are talking about a probability of 0.0015 percent (0.000015).

And I also think I'm still looking at the drawing of the z's from the range 0 to 3.0 in a conservative manner. I suspect that if one looked at the problem closer, one would find that the probability of picking 5 ordered z's that lie within 0.05 of a particular z over a range from 0.00 to 3.00 is less than what I calculated above by dividing the range into 30 zones. But it's getting too late for me to ponder that one. :)
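For what it's worth, under a toy model where z is uniform on [0, 3] and a "hit" means landing within 0.05 of a given target value, the ordered-match probability can be computed directly (this assumes the five targets sit away from the ends of the range):

```python
# Each draw lands within +/-0.05 of its own target with probability 0.1/3.0,
# so five independent ordered matches have probability (1/30)^5.
tol, zmax = 0.05, 3.0
p_one = 2 * tol / zmax          # 1/30 per draw
p_five = p_one ** 5
print(f"{p_five:.1e}")          # 4.1e-08
```

Which supports the suspicion above: the continuous version comes out somewhat smaller than the 5.8 x 10^-8 from the 30-bin permutation count.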

In any case, I hope you see that even considering the full database, the probability of Arp having encountered this particular case (NGC 3516) based on pure coincidence has to be very, very small.

Or did I make another stupid math mistake? I leave it to you to tell me. :)
 
In any case, I hope you see that even considering the full database, the probability of Arp having encountered this particular case (NGC 3516) based on pure coincidence has to be very, very small.

BAC, I certainly agree that Arp encountering this out of pure coincidence is serendipitous, and the probability is very low.

However, I feel it is dangerous to infer from this probability that there is physical significance to the association.

If you wish to draw conclusions of physical association, what do you think about the ring galaxy shown here?

Ring galaxies are already rare; fewer than 5% of all galaxies are of this ring type.

Look closely inside the ring, and you will see another ring galaxy far behind the larger ring galaxy.

Or at least, we assume it is far behind.

Surely the probability of finding a unique alignment of two unusual galaxy types such as this is vanishingly small?

But, regardless of the probability involved, that probability should not be used by me to establish the basis of a physical relationship between these two objects.

It seems like that's what Arp and all are doing.
 

Attachments

  • hoag_hst.jpg
Still waiting to hear what it is ... ;)



Fair enough.

Out of curiosity, can you tell us whether these huge databases have been searched to see if there are concentrations of quasars similar to those around Arp et al.'s galaxy nuclei in cases where no galaxy is seen?

Has anyone tried to write software to look for linear arrangements of quasars? Any cases where 6 have lined up within a degree and what are the statistics of that? What are the z's in those cases, by the way?

These two comments, and a number of others you have made in the posts right above this one, get right to the core of the question I am asking in this thread, BAC: they get into observational science and the use of statistics.

And I ask that you not use peer reviewed papers as a requirement because in this case I can come up with hundreds of thousands, if not millions.

This gets right to the core of population and observations statistics.

The burden of proof is upon the person making the claim, in this case Arp et al. When someone makes claims and reports conclusions based upon data sets, it is up to them to provide controls.

I will use a number of examples from the real world (although statistics get abused there as well); in each case the person who is alleging an association must demonstrate that there is no sample bias involved, and there are different models to do that.

Forensic analysis, public health epidemiology, toxic exposure ... there are literally thousands of others.

In science the burden of proof rests on the person making a claim, they are the ones who must show that confounding factors do not apply.

Forensic analysis: did you know that when DNA evidence is presented at a trial, the person who makes the claim of a DNA match must provide control samples and demonstrate that there is not a random association? (Paternity testing is a little different because the samples may have known or suggested parameters.)

Say that a suspect is said to have a 'match' to DNA found at the crime scene. There is a process that backs this up, involving control samples and data showing the level of occurrence in a population. If the sample used has a match, the analyst must state what percentage of the sample population is in alignment with the evidence. If there is a 100% match then it is very easy to demonstrate an association, but it can still be required by the court or the defense: a hundred percent match can be compared to the overall match to the population and the frequency of occurrence. Such a challenge is not very common if the evidence sample is large and no amplification was used in either sample. But it still is done; the prosecution or the defense must state how the suspect's DNA matches the evidence, the control sample (the general population), and the probability of a match. This is made easy by the huge amount of prior data and study, but it still can be called into question and must be gone over.

It is much more of an issue when there is a degraded or very small evidence sample and amplification is used. Then there may be a set of markers, let us say twenty, on which the suspect is matched for 19 out of twenty. Either side may then call this into question, and there must be comparison to sample populations to establish the prevalence of the twenty markers (the twenty marker loci are regions of the genome present in all human beings), so it is crucial to show that the twenty markers occur with some prevalence x in the population and sub-population. This is a standard practice for people using DNA evidence in court, and while it sounds silly, sometimes they have to go out and actually sample a sub-population (say, family members who are also suspects) to determine the prevalence of the markers in that sub-population.
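The prevalence logic described above amounts to multiplying per-marker frequencies; a purely illustrative sketch (the frequencies below are made up, not real forensic data):

```python
import math

# Hypothetical population frequencies of the matched marker variants
marker_freqs = [0.10, 0.08, 0.12, 0.05]

# Random-match probability: chance a random person matches on every marker,
# assuming the markers are independent in the reference population
rmp = math.prod(marker_freqs)
print(f"{rmp:.1e}")   # 4.8e-05 with these made-up numbers
```

The independence assumption is exactly the kind of thing a sub-population sample (e.g. related suspects) is meant to test.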

Then there is the epidemiology of public health; sample bias is a common occurrence in many ways and must be controlled for. I will use food contamination. Say that a diner has two hundred customers on a given day and that fifty of them contract the symptoms of 'food poisoning' within 48 hours. Seems like it would be a slam dunk to show that the contamination was at the diner? Actually, no: the public health workers have to go and interview everyone because of potential sample bias. It is possible that those fifty people were at a sporting event prior to eating at the diner and that the food contamination was at the sporting event. So they interview everyone that they can, just so they get a good demographic of the overall population of people who ate at the diner that day, including those who did not complain of food poisoning. For one thing, it may turn out that all two hundred had symptoms and just fifty reported; that seems like a slam dunk that it was the diner, but you have to eliminate alternatives before you make that conclusion ... what if they all drank from a contaminated water supply? So good practice is to sometimes look outside the sample of people at the diner.

Then there is the whole issue of the public health of toxic exposure and subsequent effects. Cigarette smoke, industrial pollution, food patterns/heart disease are all examples of how complicated these statistical issues can become and why huge samples and sampling controls are needed.

So I am just pointing out that standard practice is for the person alleging the association to show that they have a reasonable sample and that there is no sample bias in the association they allege exists.

So in the case of Arp, or those who allege that a toxic chemical spill has harmed people, the burden of proof is on the one alleging the association. It would be up to Arp et al. to go back, replicate the events, and reconstruct the samples. This is standard practice in areas where population statistics are used, and the burden is on the one who alleges the association.
 
Wrangler, DeiRenDopa, and Dancing David ... let's look closer at the probabilities of the alignment in NGC 3516 occurring by sheer coincidence. Recall, this is the case where 5 quasars aligned along a minor axis have 5 z's that individually match the quantized z's that Karlsson predicted in the 1970's (http://adsabs.harvard.edu/abs/1977A&A....58..237K Karlsson, K.G., "On the existence of significant peaks in the quasar redshift distribution", Astron. Astrophys. 58:237–240, 1977). Note that I am assuming these particular quasars weren't used in Karlsson's study so it's a prediction.

Let's simplify the problem and just say that there are 30 possible values (or small ranges) of z ... from 0 to 3.0 ... in increments of 0.1. Now how many permutations of 5 (r) ordered values from 30 (n) distinct values are there? It turns out there are n!/(n-r)! possibilities, which in this case is 17,100,720. That means the probability of picking those 5 specific z's in order from a range of 30 z's is about 5.8 x 10^-8.

Now the celestial sphere has about 4 x pi x (57.3)^2 ≈ 41,253 square degrees. If we assume there are 30 quasars per square degree (just to pick a number I hope is conservative, over the range of magnitudes we seem to observe) then there are roughly 1,237,500 possible quasars. That means there could be at most about 250,000 groups of 5 located next to 250,000 different galaxies.
If they are evenly distributed, which would have to be demonstrated by the person alleging an association.
In which case, the probability of finding those 5 specific z's in 250,000 samples is 250,000 x 5.8 x 10^-8 ≈ 0.015. In other words, there would be only a 1.5% chance of seeing that specific object (NGC 3516) in the heavens even if we looked at every single galaxy.
See, here is the rub: you would actually need to do a sub-population control on the z values before you can draw that conclusion. The best approach would again be to choose a number of sample sets (hopefully ten thousand per set):
-old galaxies
-young galaxies
-random galaxies
-AGN
-Arp
-random point on the sky

Then you could make some assertions about the population of 'every galaxy' but you still have not sampled every galaxy.

You can't say what the probability is for 'every galaxy' unless you sample the whole set or try to sample representative subgroups and see what the actual values are. Sorry.
But Arp didn't study 250,000 different samples to find this one. He probably studied no more than a few percent of the galaxies, if that.
I am sorry, BAC (sorry there is not a soft gentle emoticon), but that makes the potential for sample bias go up; it does not make the association stronger, it makes it weaker.
So it seems to me we should factor in the probability of him actually sampling from the right group of galaxies. In other words, let's assume that Arp actually looked at 25,000 galaxies. What is the probability that his sample would have contained that specific 5 z group? 1/10? So am I correct in suggesting the chance of Arp actually coming up with this particular case for us to study is no better than 0.15 percent (0.0015)?
:soft gentle:

No, because it makes the potential for sample bias worse, not better. You cannot state the probability without trying to measure it; that is what census models in statistics try to do. If you can't survey all the members of a set (as is often the case), you try to create and sample representative groups of the set.
And in addition to that, we have to add in the fact that all 5 objects are aligned rather narrowly along the minor axis. What's the probability of seeing that happen?
:softly:
That is another thing that would have to be observed and counted in large representative samples.
Well, assume we throw darts at a dart board. What's the chance that a given dart will land within a 15 degree zone extending from opposite sides of the center of the dart board? 30/360 = 1/12. So if we throw 5 independent darts, each has a 1/12 ≈ 0.083 chance, and the chance of all five landing in that zone is (1/12)^5 ≈ 4.0 x 10^-6. If there are 250,000 samples then the probability of it happening at least once is about 1.0. But again, Arp didn't look at anything near 250,000 cases. Again, he probably only looked at 1/10 that number, if that. Then the likelihood of him picking a group that contained 5 lined up is probably no more than 0.1. In which case, the probability that Arp found this one case considering all the quasars out there is no more than 0.015 percent (0.00015).
:gentle:

A small sample makes the potential for random occurrence and sample bias higher, not lower.
And then consider the fact that not all the quasars are going to be centered around galaxies. Many of them are going to be spread out over regions where there are no galaxies. I suspect that would reduce the number of groups of 5 from which one could sample by a factor of 10 (at least), which would lower the probabilities by that same amount. So now we are talking about a probability of 0.0015 percent (0.000015).
Again, that just makes it worse, not better.

Which is why one of the sample population groups should be random points on the sky, and perhaps another a random sample of points on the sky that have no galaxies (set a magnitude parameter within an arc radius of the sampling point).

This makes the sample bias potential worse , not better.
And I also think I'm still looking at the drawing of the z's from the range 0 to 3.0 in a conservative manner. I suspect that if one looked at the problem closer, one would find that the probability of picking 5 ordered z's that lie within 0.05 of a particular z over a range from 0.00 to 3.00 is less than what I calculated above by dividing the range into 30 zones. But it's getting too late for me to ponder that one. :)

In any case, I hope you see that even considering the full database, the probability of Arp having encountered this particular case (NGC 3516) based on pure coincidence has to be very, very small.

Or did I make another stupid math mistake? I leave it to you to tell me. :)

No, it is not a stupid math mistake. It is a very common issue; it occurs in almost all areas of population sampling and statistical control.

Well BAC this gets right to the core of the issue.

It is up to Arp to demonstrate what the density of QSOs is, and to show that there is not a sample bias in that density determination. It is reasonable to state that there was a small sample originally; it happens often, and that is why there is follow-up and later revision of results. Happens all the time.
 
Can you clarify something? Are you entirely ruling out the possibility of intrinsic redshift (due to something other than motion)? Are you insisting that the entire value associated with any given z for every space object can only be interpreted as a measure of relative velocity and/or distance?
.
I'm not ruling out, or in, anything entirely.

However, no physical mechanism for 'intrinsic redshift' has been proposed (other than the Arp-Narlikar VMH) - so by definition it can't be tested in any lab here on Earth - and every paper (that I know of) on this topic that seriously tries to make the case for the existence of such a thing either contains significant methodology flaws or has conclusions that can be ruled out by later, far better observations (or analyses).

Back to the question I'm interested in: how do you go about evaluating material such as that in the various Arp et al. papers you have cited?
 
I'm puzzled by your response since I was actually agreeing with your conclusions.

I was admitting to being convinced by your argument that the quasar clustering around NGC 5985 is probably a result of lensing and that Arp was able to find one with a lot of quasars aligned along the minor axis (you must admit an improbable event) simply because there are lots of galaxies and quasars out there.

But now I think I'll take another look, because I just found another observation that involves 6 high redshift objects with a curious additional feature. But first, let's ask the question ... what are the specific redshifts of the 6 objects along the minor axis of NGC 5985?

This ... http://articles.adsabs.harvard.edu//full/1999A&A...341L...5A/L000006.000.html ... indicates the central Seyfert is at z = 0.008 and the six objects are at z = 2.13, 1.97 and 0.59 on one side and z = 0.009 (this is said to be a dwarf spiral), 0.81 and 0.35 on the other side. There is also one more object in the field (albeit not on the minor axis) at z = 1.90. Now do you notice anything about these numbers?

If not (but I'm sure you do :)), the following will provide a clue. It's a similar case, but with even one more twist.

http://www.journals.uchicago.edu/doi/abs/10.1086/305779 "Quasars around the Seyfert Galaxy NGC 3516, Chu, Wei, Hu, Zhu, Arp, 1998"

And what I find remarkable is that, although there are certainly a lot of galaxies and quasars out there, a case like this could even exist within the number of cases that has likely been studied in any great detail. Here, the x-ray sources around a very active Seyfert, NGC 3516, at z = 0.009, are not only closely aligned along the minor axis but have redshifts (z = 0.33, 0.69, 0.93, 1.40, and 2.10) which just happen to be nearly the same as Karlsson's (1971, 1977) theoretically predicted values for quantized redshift (namely z = 0.30, 0.60, 0.96, 1.41, and 1.96). Here's the layout: http://www.haltonarp.com/articles/astronomy_by_press_release/illustrations/figure_1.jpg

Add to that the fact that in 2001, the XMM Newton x-ray telescope reportedly observed two high redshift regions appear on opposite sides of NGC 3516 following an extremely powerful flare (http://www.thunderbolts.info/tpod/2004/arch/041028redshift-rosetta.htm ) and you begin to wonder ... even if all the quasars do lie within a degree of the central galaxy.

And now do you notice that all the objects in the NGC 5985 field are also close to the redshift quantizations theorized by Karlsson? So maybe we were too quick to dismiss that case. Care to comment on the probability of all these observations taken together? Not only do we have the probability that both have 6 high redshift objects aligned along the minor axis, but almost all the objects are relatively close (within the margin of error) to Karlsson's values, and in one case all five are even in the same numerically increasing order as Karlsson's values. Very, very curious and, given all the values that redshift could take, improbable? :D
.
There seem to be two different ideas at play: the improbability of a certain configuration of objects ('quasars'), and a pattern in observed (quasar) redshift distribution(s).

Taking the first one for this post.

Improbable things happen all the time - someone wins a lottery, for example, or some highly unlikely combination of circumstances leads to an unfortunate accident.

It is well known that human intuition is pretty bad when it comes to dealing with probability, risk, and so on - there are lots of excellent studies which demonstrate this, and you can even develop a powerful strategy for winning at certain games of chance by exploiting the average person's intuitions in this regard (so I'm told).

Ditto with regard to pattern recognition.

Which is one reason why it has taken many centuries to hone techniques for dealing with the tendency of the human brain to leap to conclusions of the wrong kind - from experimental design to hypothesis formulation and testing to probability theory to ...

Where direct experimentation is not possible - astronomy, geology, etc - it is even more important to make sure you get things right, by very carefully formulating hypotheses first, developing ways to test them, testing the tests before applying them, and so on.

Arp et al. provide many examples of what not to do in this regard - see Keel's comment about 'seek and ye shall find' (or whatever he said), for example, and sol invictus's comments on a posteriori probability calculations.

So let's see if we can write some toy tests that might be interesting and relevant to the 'alignment' claims ...

How about this: we know (don't we?) that stars we see are not in any way physically associated with galaxies (down to, say, 20 mag, and excluding galaxies in the Local Group, and supernovae, and ...). Fine. So what's the probability that you can find some really cool configuration ('alignment') of stars across/around a galaxy? If you go looking for such a configuration, without first stating what it is you are looking for, it's almost certain you will find one! If you then ask 'what's the probability of finding this kind of configuration?' the answer MUST be 1 (because you found one).

Over to others to add to this toy example ...

(just one obvious question: why didn't Arp et al. ever bother to do a 'control' test of their fave configurations, by using stars instead of quasars, for example?)
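Here is a minimal sketch of such a toy test; all the numbers (20 stars per field, 15-degree wedges, the "at least 5 aligned" criterion) are invented for illustration, echoing the dart-board figures earlier in the thread:

```python
import random

def chance_alignment_rate(n_points=20, need=5, n_fields=50_000, seed=42):
    """Fraction of random fields in which at least `need` of `n_points`
    uniformly scattered position angles fall in two opposite 15-degree
    wedges (per-point probability 30/360 = 1/12)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_fields):
        in_zone = sum(rng.uniform(0, 360) % 180 < 15 for _ in range(n_points))
        if in_zone >= need:
            hits += 1
    return hits / n_fields

print(chance_alignment_rate())   # roughly 0.02 for these invented numbers
```

With these invented numbers a couple of percent of purely random fields pass the test, so scanning many thousands of galaxies after the fact all but guarantees "finding" one - which is exactly the a posteriori trap being described.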
 
BAC said:
Sure. It would appear that NGC 5985 might be explained by lensing and the probability of finding 6 QSO's in a single galaxy all aligned along the minor axis is still very small. But the sample size to draw from is very large. And Arp just got lucky ... or unlucky ... in finding it. Satisfied?
me said:
May I infer from this a method you use, in evaluating material such as the Arp paper and L-C&G?

Specifically that a study using many hundreds of times more data (and data obtained from a survey that paid far, far more attention to consistency than any of Arp's published work), designed explicitly to test an Arp idea, and (apparently) based solely on that idea ... and which fails to support the Arp idea ... can be dismissed with just some ~50 words? No need to do any analyses, crunch any numbers, painstakingly examine the logic chain, the experimental design, ...?

This is an area I'm particularly interested in - as you no doubt learned from reading my posts in the other two threads - what approaches do you use when you evaluate material such as that published by Arp et al.?
I'm puzzled by your response since I was actually agreeing with your conclusions.

I was admitting to being convinced by your argument that the quasar clustering around NGC 5985 is probably a result of lensing and that Arp was able to find one with a lot of quasars aligned along the minor axis (you must admit an improbable event) simply because there are lots of galaxies and quasars out there.
.
(I added some bolding)

What I was commenting on was the ease with which you chose one possible explanation ("a result of lensing") without bothering to consider how reasonable it is, let alone even hinting at how you might go about testing it.

I mean, why so quickly (apparently) rule out a chance alignment, to take just one possible other explanation? or some combination of factors?

... and that's what I was getting at, with my question ("May I infer from this a method you use, in evaluating material such as the Arp paper and L-C&G?").
 
DeiRenDopa said:
And just because Bell used a source that said it wasn't a complete list of all objects doesn't necessarily invalidate the results.
.
May I also use this to infer something important about how you go about evaluating material by Arp et al.?
Sure, as long as I get to observe that the way you went about evaluating that study was to immediately rule it out simply because it used a source where the author hedged their bets saying *don't use this because it's incomplete*. ;)
.
Do you mind if I ask you to be more careful in how you quote, BeAChooser?

"And just because Bell used a source that said it wasn't a complete list of all objects doesn't necessarily invalidate the results." - those are words you wrote, not me.
 
Sure, as long as I get to observe that the way you went about evaluating that study was to immediately rule it out simply because it used a source where the author hedged their bets saying *don't use this because it's incomplete*. ;)
.
Here's what I actually wrote:
M.B. Bell, "Further Evidence that the Redshifts of AGN Galaxies May Contain Intrinsic Components":
All the sources listed as quasars and active galaxies in the updated Véron-Cetty/Véron
catalogue (Véron-Cetty and Véron 2006) (hereafter VCVcat) are plotted in Fig 2.
Véron-Cetty, M.P. and Véron, P. 2006, A&A, 455, 773
.
And Bell uses the data from VCVcat extensively in his paper; in fact, statistical analyses of data from VCVcat are critical to the conclusions he draws.

And what does VCVcat (a.k.a. Véron-Cetty, M.P. and Véron, P. 2006, A&A, 455, 773) have to say about the data in the catalogue (I have used bolding to ensure that the authors' intentions are crystal clear)?
.
This catalogue should not be used for any statistical analysis as it is not complete in any sense, except that it is, we hope, a complete survey of the literature.
.
As I said, earlier in this thread, this Bell paper is garbage, and should never have been published in ApJ.
.
BeAChooser, perhaps you could spend some time reading VCVcat?

Or perhaps I may use what you wrote here as a reasonable indicator of what you consider to be a valid approach in astronomical research of this kind?

To other readers: I'm curious to know what you think about the acceptability of the approach BeAChooser's comment implies, in (extra-galactic) astronomical research.

Specifically, if the authors of a catalogue explicitly state their catalogue should not be used for statistical analyses, and someone proceeds to do just that, what degree of credibility do you think should be given to that someone's paper?
 
I have taken a look at the L-C/G paper:

http://arxiv.org/PS_cache/astro-ph/pdf/0609/0609514v1.pdf

First thing, they have some histograms showing the number of QSOs at position angles off the minor axis from 0 to 90 degrees. Figures 2, 10, and 13 all have different lower limits on the y-axis, which just bugs me. You think that there is a huge difference in the number of QSOs within 5 degrees of the minor axis, compared to the major axis 90 degrees away, and then you see the y-axis of the histogram ... what the? The excess on the minor axis is about 2 quasars? The only histogram that has a lower y-axis limit of zero is Figure 10, which shows a very high level of anisotropy ... that's probably why they let that one be presented with the y-min at zero!

They say that extinction on the order of ~0.1 magnitudes could explain the anisotropy that they see.

This paper: http://journals.cambridge.org/download.php?file=%2FIAU%2FIAU2004_IAUS225%2FS1743921305002218a.pdf&code=9034b9fd5b4ac953ec9cdaa4de02bcf5 gives the expected changes in QSO magnitudes from absorber-type massive lenses as up to -0.2 magnitude, so it seems possible that the extinction effect may have some bearing on the anisotropy.
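For readers who want to see what magnitude shifts of this size mean in flux terms, the standard Pogson relation converts them directly. A minimal sketch (the 0.1 and -0.2 mag figures are the ones quoted above; the function name is just for illustration):

```python
import math

def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference (Pogson relation):
    ratio = 10^(-0.4 * delta_mag)."""
    return 10 ** (-0.4 * delta_mag)

# Extinction of ~0.1 mag dims a source by roughly 9% in flux.
print(flux_ratio(0.1))   # ~0.91
# Lensing brightening of -0.2 mag boosts flux by roughly 20%.
print(flux_ratio(-0.2))  # ~1.20
```

So the effects being discussed are changes of order 10-20% in flux, small but not negligible for a magnitude-limited survey.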

I think that L-C and G are right to be cautious about the anisotropy.
.
Indeed, and there is at least one other documented effect in the SDSS dataset that may be relevant: completeness (etc) near bright galaxies.

This is mentioned in Scranton et al. 2005, which L-C&G cite, yet L-C&G don't mention it.

I'd have to go read the Scranton et al. paper again, but I think the problem arises because the sky background around bright galaxies contains a consistent/persistent offset/bias, leading to the estimated magnitudes of quasars (and stars) having a systematic error that was not modelled or estimated before DR5 (or maybe not at all).

Given how SDSS photometry works (scan mode), one could come up with a plausible description of how quasar magnitudes could be systematically biased in a way that could lead to the observed minor axis anisotropy. Further, as L-C&G do not seem to have done any control studies - using bright galaxies other than edge-on spirals, for example - there seems to be no way to test the possibility of any such systematic error.

To their credit, L-C&G do seem to have put more effort into trying to nail down possible systematics than Arp has ever done; however, it seems that at least one possible systematic error was not investigated.
 
Re the "The Discovery of a High Redshift X-Ray Emitting QSO Very Close to the Nucleus of NGC 7319" paper (check earlier posts by BAC for a link).

Apart from the 'areal density' logic chain, and the 'alignment of a jet' one (the latter is a pretty darn good example of a posteriori logic; you can almost hear the thinking 'what else can we find about this that might make it look like it's in the foreground/interacting with NGC 7319?' ... no prior list of possible 'evidence for interaction', nor any mention of any such evidence that was not found), there's a strong inference that the spiral arm of this galaxy isn't transparent, in the sense that no background object could be seen through the arm, and certainly not in the x-ray band (because of absorption by neutral hydrogen, for example).

Of course, spiral arms are not optically thick, not even in the x-ray band, as W. Keel has shown in a series of papers, and as this Chandra PR attests (work based on discovery of a hole by Lockman et al.).
 
(parts of the BAC post omitted)

Likewise, how do you explain data suggesting quantization such as this: http://arxiv.org/abs/astro-ph/0501090 . You dismissed this paper with a comment about how one defines "quasars". So why don't you now tell us exactly what you were trying to say with the comment. How exactly has Arp et al.'s definition of quasars invalidated the conclusion that there is quantization of redshift?
.
I've now read this paper, and I must say it is vintage Arp, and about as good an example as you could expect to find of several methodological shortcomings so common in Arp et al. papers.

First, the authors do not seem to have even considered the issue of completeness, even for 'quasars' as defined by the two surveys, much less within the full context of the Arp 'ejection by active galaxies' idea (Bell does a much better job of this, although, as I have already noted, he clearly failed to read his primary source catalogue; more later).

Second, the paper is full of ad hoc, a posteriori explanations (one-sided jets, for example) ... this is bad enough when you are testing a hypothesis that is well on the way to being established and is based on solid astrophysics; it's really sloppy work for something that's so ill-founded and qualitative as Arp's 'quasar ejection' model.

Third, and perhaps most important for now, there are no controls! There's not even a statement of the null hypothesis, much less a systematic examination of possible sources of bias or error ... and certainly no quantitative testing of any such possible systematics. Worse, many such tests/controls should be fairly easy to at least state (and many easy to do, given the accessibility of the full 2dF and SDSS datasets).

Thanks for bringing this paper to my attention, BeAChooser, it reminded me, once again, why Arp has so little credibility among contemporary astronomers.
 
.
Here's what I actually wrote:
BeAChooser, perhaps you could spend some time reading VCVcat?

Or perhaps I may use what you wrote here as a reasonable indicator of what you consider to be a valid approach in astronomical research of this kind?

To other readers: I'm curious to know what you think about the acceptability of the approach BeAChooser's comment implies, in (extra-galactic) astronomical research.

Specifically, if the authors of a catalogue explicitly state their catalogue should not be used for statistical analyses, and someone proceeds to do just that, what degree of credibility do you think should be given to that someone's paper?

Without further reading and going back to read what has been said over the last three weeks, I should not really comment. (I was unable to process much information and I retain even less due to a brief recurrence of my sleep apnea.)

It will take considerable reading to understand the large amount of material in those three weeks.
 
BAC, I certainly agree that Arp encountering this out of pure coincidence is serendipitous, and the probability is very low.

However, I feel that it is dangerous to conclude from this probability that there is physical significance to the association.

Funny. The argument that it was dangerous for Arp to infer some physical significance from an apparent association has been repeatedly criticized by thread members on the grounds that the database being used to draw the association was not complete ... that Arp was severely undersampling the data.

Now I show a case where even if you include ALL the possible datapoints, the probability of that case turning up *by chance* is on the order of 10^-5 or smaller ... and I'm still told it's dangerous to draw any conclusion. ;)

And I'm advised this even though there are other cases where there seems to be an unlikely frequency of redshifts close to the quantized values initially predicted by Karlsson. Take for example NGC 5985 where there are again 5 x-ray emitters near a galaxy that are lined up along a minor axis with redshifts of 2.13, 1.97, 0.59, 0.81 and 0.35. The probability of that occurrence, even on a combinatorial basis, is 7 x 10^-6 (compared to the earlier 5.8 x 10^-8). So that would make the final probability of this case, taking all the other factors I added in the other case into account, about 10^-3.
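As an aside for readers following the numbers: probabilities like these hinge entirely on the assumptions behind them, and a quick simulation makes that concrete. A minimal sketch (not the poster's actual calculation): assume, purely for illustration, that quasar redshifts are drawn uniformly on [0, 2.8] and that "close to a Karlsson value" means within ±0.1, then count how often all five land near a peak:

```python
import random

# Karlsson's predicted redshift peaks, as quoted in this thread.
KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64]

def near_peak(z, tol=0.1):
    """True if z lies within tol of some Karlsson peak."""
    return any(abs(z - k) <= tol for k in KARLSSON)

def prob_all_near(n=5, trials=100_000, zmax=2.8, tol=0.1):
    """Monte Carlo estimate of the chance that n uniform-random
    redshifts ALL fall within tol of a Karlsson peak."""
    random.seed(1)  # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        if all(near_peak(random.uniform(0, zmax), tol) for _ in range(n)):
            hits += 1
    return hits / trials

print(prob_all_near())  # a few percent under these assumptions
```

With these particular (debatable) choices of distribution and tolerance the all-five probability comes out at the few-percent level; tighter tolerances or a different assumed redshift distribution change the answer by orders of magnitude, which is why the bare numbers in this exchange are so hard to compare.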

But the two cases aren't independent. Finding one case makes the probability of finding the other case even smaller. And these aren't the only cases.

Consider NGC 3628

http://www.eitgaastra.nl/pl/f54a.gif

which has 3 quasars at z = 1.94, 2.43 and 0.408 at the base of the east-north-east plume, coincident with the start of an optical jet, three more quasars in the southern plume along the minor axis at z = 0.995, 2.15, 1.75, plus two more quasars, with z = 2.06 and 1.46, aligned along what looks to be the opposite side major axis.

All told there are 8 high redshift, x-ray emitting objects, aligned along specific features related to a low redshift z = 0.0028 galaxy, that have redshifts suspiciously close to Karlsson's predicted values of z = .06, .30, .60, .96, 1.41, 1.96, and 2.64. What are the chances of that again occurring just by accident? And because you have to draw this case from the same limited group of clusterings around galaxies in the total sample, this low probability forces the others to be even lower.
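Readers can check the claimed closeness directly. A quick sketch tabulating each of the NGC 3628 redshifts quoted above against the nearest Karlsson value:

```python
# Karlsson peaks and the NGC 3628 quasar redshifts quoted in this post.
KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64]
NGC3628_Z = [1.94, 2.43, 0.408, 0.995, 2.15, 1.75, 2.06, 1.46]

for z in NGC3628_Z:
    nearest = min(KARLSSON, key=lambda k: abs(k - z))
    print(f"z = {z:>5}: nearest Karlsson value {nearest}, offset {z - nearest:+.3f}")
```

The offsets range from about 0.02 to about 0.21, so how "close" these are depends on what tolerance one considers a match, which is the crux of the disagreement in this thread.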

And of course that's not the last case either.

If you wish to draw conclusions of physical association, what do you think about the ring galaxy shown here.

Ring galaxies are already rare, <5% of the total number of galaxies are of this ring galaxy type.

First of all, <5% is not rare in the sense that the cases I've noted above are rare. We are talking many orders of magnitude difference in probability. Night and day.

Look closely inside the ring, and you will see another ring galaxy far behind the larger ring galaxy.

Surely the probability of finding a unique alignment of two unusual galaxy types such as this is vanishingly small?

Actually, I don't think that's true at all. Just consider the number of galaxies that are out there compared to the number of identified quasars. And you don't have to align 6 of them up with 1 galaxy ... just one.

QSOs ... ~50,000.

But in 1999 the Hubble Space Telescope team estimated 125 BILLION galaxies in the observable universe. And with a new camera, they recently observed twice as many as before. Now if even 1% of galaxies has its major axis aligned with our viewing angle, that leaves 2 BILLION galaxies. Plus, they are a pretty noticeable object. You see one in a field and it stands out ... unlike a quasar. So no, I don't think it is at all improbable that they found a ring galaxy within a ring galaxy.
 
Originally Posted by BeAChooser
...snip ... Now the surface area of a sphere has about 4 π × (57.3)^2 ≈ 41,250 square degree areas. If we assume there are 30 quasars (just to pick a number I hope is conservative) per square degree (over the range of magnitudes we seem to observe) then there are a possible 1,237,500 quasars. That means there could be at most 250,000 groups of 5 located next to 250,000 different galaxies.

If they are evenly distributed. Which would have to be demonstrated by the person alleging an association.

David, I'm trying to give your side the benefit of the doubt by doing it this way. If you don't evenly distribute them, then the chance of Arp finding one with 5 is even smaller. Plus, I'm trying to give your side the benefit of the doubt by assuming there are only 250,000 galaxies that you have to divide the quasars among. But as I noted to Wrangler above, there are actually 250,000,000,000 or more. Which is why so many observations only show 1 or 2 or no quasars at all near galaxies. The galaxies that have 1 or 2 leave fewer to form groupings of 5. Understand? Now I reduced my probabilities by 1/10th for this effect but I suspect the reduction should be MUCH larger. After all, there are a million times more galaxies to spread the quasars out amongst than I assumed. :)
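The sky-area arithmetic in the quoted post is easy to verify, and the "even distribution" question can be probed with a Poisson estimate. A minimal sketch, assuming (as above) 30 QSOs per square degree, and picking a 0.01-square-degree field size purely for illustration:

```python
import math

# Whole sky in square degrees: 4*pi steradians * (180/pi)^2 deg^2 per sr.
sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2
print(round(sky_sq_deg))        # ~41,253 square degrees

density = 30                    # assumed QSOs per square degree (from the post)
total_qsos = density * sky_sq_deg
print(round(total_qsos))        # ~1.24 million QSOs on this assumption

# If QSOs are scattered uniformly, the count in a small field of area A
# is Poisson with mean density * A. Chance of >= 5 in a 0.01 sq deg field:
mean = density * 0.01
p_ge_5 = 1 - sum(math.exp(-mean) * mean**k / math.factorial(k) for k in range(5))
print(p_ge_5)                   # on the order of 1e-5 under these assumptions
```

This only shows that tight groups of 5 are rare in any *one* small field under uniformity; turning that into the probability of finding such a group *somewhere* on the sky requires multiplying by the enormous number of fields (or galaxies) searched, which is the step being argued about here.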

... snip ...

The rest of your post, David, was a lot of hand waving. And I think most people will see that. :)
 
Funny. The argument that it was dangerous for Arp to infer some physical significance from an apparent association has been repeatedly criticized by thread members on the grounds that the database being used to draw the association was not complete ... that Arp was severely undersampling the data.

Now I show a case where even if you include ALL the possible datapoints, the probability of that case turning up *by chance* is on the order of 10^-5 or smaller ... and I'm still told it's dangerous to draw any conclusion. ;)

And I'm advised this even though there are other cases where there seems to be an unlikely frequency of redshifts close to the quantized values initially predicted by Karlsson. Take for example NGC 5985 where there are again 5 x-ray emitters near a galaxy that are lined up along a minor axis with redshifts of 2.13, 1.97, 0.59, 0.81 and 0.35. The probability of that occurrence, even on a combinatorial basis, is 7 x 10^-6 (compared to the earlier 5.8 x 10^-8). So that would make the final probability of this case, taking all the other factors I added in the other case into account, about 10^-3.

But the two cases aren't independent. Finding one case makes the probability of finding the other case even smaller. And these aren't the only cases.

(rest of post omitted)
.

The probability of all these things is 1 ... because they have actually been observed.

It might be a good idea, BeAChooser, to start at the beginning, with any one of these - but just one - and write out each step in the calculation (on a separate line), and leave the numbers out (use symbols, perhaps).

Separately - preferably before - write down, very carefully, but only in words (no numbers) an 'astronomical alignment' that you are very sure is highly unlikely. Then do the same thing - write down the individual steps in a calculation that could, in principle, be performed to determine what you think is the probability of that 'astronomical alignment'.

Finally, for this separate, 'thought experiment' case, write down as many 'essentially the same' alignments as you can.

That little exercise might help you get an insight into just how hard it is to do these kinds of calculations ab initio.

Turning to Arp et al.

Can you write down, in a symbolic fashion (i.e. equations), the essential core of Arp's underlying, physical, model for these kinds of 'quasar alignments'?
 
