Arp objects, QSOs, Statistics

You shall receive bluster, foam, and smoke, yet answer you shall not have.
Reference will be made to nonexistent posts, and claims made of your inability to comprehend, but answer you shall have none.
Vague allusions, bad statistics, and avoided issues will abound, but an answer you shall not receive.

BAC:
1. How exactly, and in what post, did you explain that your method can distinguish a random pattern from a non-random one?

And in case you forgot:

2. What mechanism and forces cause galaxies to rotate per Peratt's model of galaxy rotation? What observations have been made to support it?
3. What would keep a Lerner plasmoid of 40,000 solar masses in an area 43 AU in diameter from collapsing to a black hole?
4. What makes star clusters rotate faster around a galaxy than gravity minus dark matter can provide?
5. What part of Narlikar's theory has any observable consequence, other than your gnome of intrinsic redshift?
6. How do you explain the quantity of light elements and heavy elements through PC/PU?
7. How does either Narlikar's theory or any other PC/PU model account for the cosmic background radiation spectrum?

Seven muses and seven mysteries; you can drive a Death Star through the holes in your theory, BAC.
 
also re post #308:
BeAChooser said:
But why would one want to apply it to all 104 objects given that only a few of those various type objects are hypothesized as being ejected from that galaxy or any galaxy for that matter? Perhaps you STILL don't understand the nature of the calculations and method?
Indeed, I have admitted, more than once (I think) that I do not understand "the nature of the calculations and method".

You see, I took you at your word ("I've stated my hypothesis very clearly ... that the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG", to take just one example), and concluded that your hypothesis is a test of some proposition derivable from some mainstream theory or other.

I admit that the clues have been there, and that you have strongly hinted at them, many times.

This part of #308 was one of several that I read today that finally made the connection for me ...

The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks', and the null hypothesis has something to do with random (chance) alignments (or something).

And to test your hypothesis you have scoured the literature for papers by Arp et al. which report observations of quasars along (or near) the minor axes of active galaxies (and also something to do with Karlsson peaks).

Your selection method is principally a 'seek and ye shall find' one - the objective is to find as many 'cases' that seem to support your hypothesis as possible - and to apply an a posteriori probability calculation to them.

Other things that twigged me to this explanation include:

* no explicit statement of the hypothesis

* no null hypothesis

* repeated misunderstanding (or worse) over the need for consistency (e.g. in how 'quasars' are selected, for any part of the calculation)

* gross parody of 'mainstream theory', as what you thought you were testing

* lack of derivation of core details of hypothesis to be tested from an explicit statement of 'mainstream theory'.
 
Re #312:
BeAChooser said:
Four, you must have misinterpreted L-C&G's paper AGAIN. It's easy enough to check. Suppose each of the galaxies was more than several degrees away from each other so that those 1 degree zones did not overlap. Out to a distance of 1 degree, each galaxy would occupy about 3 square degrees. Since there are 41,250 square degrees in the entire sky, each galaxy would occupy 0.0000727 of the entire sky. So 70 galaxies would occupy 0.0051 of the entire sky. So if they had claimed that there were 8698 quasars within 1 degree of 70 galaxies, then they would have been implying a total number of observable quasars in the sky of 8698/0.0051 = 1,705,000 ... more than 4 times the number that the SDSS study estimates (and it's the best survey yet with a claim that estimate accounts for more than 90% estimate of all observable quasars).
Um, ... er, ... I think you goofed here BAC ...

L-C&G did, in fact, count quasars more than once, where the circles around the galaxies overlapped (and there is quite a bit in their paper about why this is legitimate).

I also think your arithmetic and/or reading of a source is wrong ... if only because SDSS DR6 alone has confirmed spectra of >100k 'quasars' (and the spectroscopic area is only some 7k square degrees).

But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists, in the column "Number of pairs" (meaning, galaxy-QSO pairs), "8698" on the "1.0°" line of the θmax column.
 
Re #318:
BeAChooser said:
DeiRenDopa said:
I also pointed out that there seem to be at least five other quasars within 30' of NGC 5985 (in addition to the two mentioned in the Arp paper).
Actually, the Arp paper mentioned 5 quasars. And the 5 you mentioned "in addition to the two in the Arp paper", two of those appear to have the same redshift as those in Arp's paper. As to the other 3 quasars, one is outside the 0-3 range and the other two we know nothing else about (like where they are located).
I think you missed the "within 30'" part ...

Here's what I wrote in #294:
In terms of radial distance, here is the list, ranked by distance (from the Arp paper):
12.0' (z=2.13 quasar)
25.2' (1.97)
36.9' (0.81, and the dSp)
48.2' (0.69)
54.3' (1.90)
90.4' (0.35, which is the S1)
So there are only two quasars, in the Arp paper, within 30' of NGC 5985.

The two with the 'same' redshift as those in the Arp paper seem to be quite different quasars:

1537+595 NED01 and NED02 are both listed as being 7.5' from NGC 5985, and have redshifts of 2.132 and 1.968 (respectively); the two 'Arp' quasars have redshifts listed as 2.125 (12.0' distant) and 1.968 (25.2') (respectively).

I wonder how many [of the five 'new' quasars] lie near the minor axis of NGC 5985?
There aren't 5, only 3, and one of those has a z well outside the 0-3 range of my calculation. As to the location of the other two, feel free to find out for us, DRD. I'll be happy to include them in the calculation although the probability will only end up smaller.
There are five, not three, as I just showed.

How about I give you the RA and Dec of all five, and you tell us whether they are sufficiently close to the minor axis to warrant including in your calculations?
 
A suggestion for exploring BeAChooser's approach to hypothesis formation and testing (as presented in this thread).

Let's set up a toy, or mock, universe.

Let's assume it's a flat, 2D universe, with a finite boundary, in the shape of a circle (distance R from the centre of the universe).

Let's assume the universe is populated by two kinds of objects, which we will call AG and Q.

'AG' objects are points with a double-headed arrow.

'Q' objects are also points, with a property called z, which is always an integer in the range 0 to 100 (inclusive).

We are looking at this universe from afar.

Every Q can be related to every AG by two numbers: the distance from AG to Q ('r_AG-Q'), and the angle the line AG-Q makes with respect to the AG's double-headed arrow ('β_AG-Q'); for convenience, we will express this angle as one between 0° and 90° (inclusive).

Note that we could also make our universe the surface of a sphere, with us at its centre; however, to do many of the calculations we would very likely want to do, we'd have to go brush up on spherical trig. That's perfectly OK of course, and if JREF forum members who are still reading this thread would prefer that mock universe, please say so (and if you'd prefer the 'circle' universe, say so too!).

As I understand it, BAC's hypotheses and the associated tests have to do with (some of the) Qs which are within a certain distance of an AG, and for which β_AG-Q is < some number, and for which the z of these Qs belong to a subset of the set of integers 0 to 100.
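To make the setup concrete, here is a minimal sketch of the 'circle' universe (my own illustrative code; all the counts and cut values - 70 AGs, 1000 Qs, r_max, beta_max, the z subset - are arbitrary placeholders, not anyone's hypothesis):

```python
import math
import random

random.seed(42)
R = 100.0  # radius of the circular universe, in arbitrary units

def random_point():
    # uniform position inside the circle, by rejection sampling
    while True:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return x, y

def make_ag():
    x, y = random_point()
    # the double-headed arrow is an axis, so 0-180 degrees suffices
    return {"x": x, "y": y, "axis_deg": random.uniform(0.0, 180.0)}

def make_q():
    x, y = random_point()
    return {"x": x, "y": y, "z": random.randint(0, 100)}

def r_and_beta(ag, q):
    """Return (r_AG-Q, beta_AG-Q), with beta folded into [0, 90] degrees."""
    dx, dy = q["x"] - ag["x"], q["y"] - ag["y"]
    r = math.hypot(dx, dy)
    line_deg = math.degrees(math.atan2(dy, dx)) % 180.0
    diff = abs(line_deg - ag["axis_deg"])
    return r, min(diff, 180.0 - diff)

ags = [make_ag() for _ in range(70)]
qs = [make_q() for _ in range(1000)]

# The kind of cut described above: Qs within r_max of an AG, within
# beta_max of its axis, with z in some chosen subset of 0..100.
r_max, beta_max, z_subset = 10.0, 15.0, {6, 30, 60, 96}
hits = []
for ag in ags:
    for q in qs:
        r, beta = r_and_beta(ag, q)
        if r <= r_max and beta <= beta_max and q["z"] in z_subset:
            hits.append((ag, q))
print(len(hits))
```

The selection step at the end is just the 'within a certain distance, within a certain angle, z in a chosen subset' cut described above.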

I have stated that I do not understand BAC's approach.

I hope that an exploration of this mock universe will help me to understand it.

Before proceeding, I'd like to hear from you, the readers of this post.

Specifically, to what extent do you think exploration of this mock universe will enable you to understand BAC's approach? What vital parts of his approach, if any, will exploration of this mock universe fail to clarify?

Any other comments?
 
I don't know, DRD; the BAC method seems to be based upon an even distribution of QSOs.

So if there are three per sq. degree, that means in the model that they are distributed so that there are only 3 to a square degree and that they are evenly spaced out.

So if you have 6 apparent in a square degree, then the odds of that are figured from there.

Say you just reason: there is a 1/3 chance there would be an extra QSO, since there are three per sq. degree, so the odds of an extra QSO are 1/3 and the odds of three extra QSOs are (1/3)^3 or 1/27.

At least that is the way the wacky thing looks to me.
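For what it's worth, here's a quick scatter sketch I'd use to sanity-check that intuition (the 3-per-cell density and the cell count are made up for illustration):

```python
import random

random.seed(1)

# Scatter QSOs uniformly over many unit cells at an average of 3 per cell,
# then ask how often a single cell ends up holding 6 or more.
n_cells = 100_000
counts = [0] * n_cells
for _ in range(3 * n_cells):
    counts[random.randrange(n_cells)] += 1

frac = sum(1 for c in counts if c >= 6) / n_cells
print(frac)  # ~0.08: a purely random scatter gives 6+ QSOs in ~8% of cells
```

So a random scatter puts 6 or more QSOs into roughly 8% of square degrees; the (1/3)^3 = 1/27 figure isn't how a random scatter actually behaves.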

Now the stuff about the redshifts is even stranger.

I sure hope the CDC does not use BAC's methods of calculation.

Such as here, where the calculation of appearance along the minor axis is made:
Now let's complete the calculation by again adding in the fact that all 5 objects are aligned rather narrowly along the minor axis. I'll just use the dart board example I used previously, where I found that the probability of throwing 5 darts in a row that land within a 15 degree zone extending from opposite sides of the center of the dart board as being 3.9 x 10^-6 per galaxy. And again, we have to multiply by the number of galaxies with 5 quasars that can be aligned. With only 20,600 such cases possible (conservatively), the probability of finding 5 quasars aligned along the minor axis is therefore 3.9 x 10^-6 * 20,600 = 0.08 which makes the total likelihood of encountering this one case if one carefully studied the entire quasar population equal to 0.08 * 0.0063 = ~0.0005 .

Somehow we have the ratio of distribution as being

3.9 x 10^-6 for the five QSOs to be arranged there,

so we take the total number of QSOs, multiply it by that distribution ratio, and get 0.0063.
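As far as I can tell, the quoted figures come from arithmetic like this (a sketch of my reading only; the 0.0063 is the separate z-probability quoted alongside):

```python
p_zone = 0.083            # BAC's rounded 2*15/360 for the double 15-degree zone
p_five = p_zone ** 5      # ~3.9e-6 per galaxy, as quoted
print(p_five)             # 3.9e-06
print(p_five * 20_600)    # ~0.08, after scaling by the assumed galaxy count
print(p_five * 20_600 * 0.0063)  # ~0.0005, the quoted total
```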

And somehow this goes back to Bayes' theorem in post 151
http://www.internationalskeptics.com/forums/showpost.php?p=3594665&postcount=151

which seems to require exclusive sets (it hasn't been demonstrated that you are picking balls from a bag; it seems more like counting raindrops to me):
http://www.intmath.com/Counting-probability/10_Bayes-theorem.php

but that doesn't seem to bother some people:
http://en.wikipedia.org/wiki/Bayes'_theorem#Derivation_from_conditional_probabilities

I don't know; it seems to be a philosophical movement as much as a part of modern statistics.

This is more what I am used to:
http://en.wikipedia.org/wiki/Observational_error
http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics
 
DRD,

I think your toy will help.

I hope that BAC will explain his approach using this toy universe, as clearly and succinctly as possible.
 
How can your methodology show a difference between a random placement and a causal one?
... snip ...
Show me where in this thread you have answered this specific question. Otherwise I will go to the Community Forum and call you a liar in a thread dedicated to that purpose.

David, I need to thank you. In the process of preparing a post to address your question (and show that you haven't even tried to understand my methodology), I discovered that I made a mistake in the formula I used to calculate probability.

I will describe and correct that error in my next post, where I will once again calculate the probability of observing the specific set of high red-shift quasars that have been found near several specific low-redshift galaxies, assuming the mainstream claim that z are continuously distributed over the interval z=0-3 is true.

The error I made doesn't change the conclusion I reached nor the answer I will give to your question. But it is an error which I think someone who works in the field of probability and statistics, like you, probably should have caught. From the fact that you didn't, I can only speculate that either you aren't an expert (and mind you, I'm not claiming to be one) or you didn't take the time to try to understand my methodology. :)

In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?

I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?

Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)? Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.

Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random? And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values. Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.

The question then is what is the cause. But the mainstream hasn't gotten to that point because it simply denies that any quasar redshifts might be quantized. So first things first. Let's get to the point of you accepting that at least some redshifts appear to be quantized, then we can go searching for a physical mechanism to explain that. Fair enough, David?

SO, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z. If we sample a relatively small number of small areas in the sky (say a small group of the quasar/galaxy associations) where there are r quasars, and repeatedly find that the observed z are close to a few specific values of z, then it is obvious that those values of z are more likely the result of a process that produces them preferentially than of one which produces a uniform distribution of z (and a very low probability of ever seeing those cases).
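To make this concrete, here's a rough toy sketch (illustrative only: it uses the unweighted form of my formula with nominal Karlsson values, and the 0.02 peak width is an arbitrary choice, not a claim about real quasars) comparing the statistic on z values drawn uniformly from 0-3 against values drawn near the Karlsson peaks:

```python
import math
import random

random.seed(7)
KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64]

def bac_style_p(zs, z_range=3.0):
    # Unweighted form: P = r! * product(1/n_i), with
    # n_i = z_range / (2 * miss distance to the nearest Karlsson value).
    p = math.factorial(len(zs))
    for z in zs:
        miss = min(abs(z - k) for k in KARLSSON)
        p /= z_range / (2 * miss) if miss > 0 else float("inf")
    return p

def draw_uniform(r):
    return [random.uniform(0.0, 3.0) for _ in range(r)]

def draw_peaked(r, sigma=0.02):
    return [min(3.0, max(0.0, random.gauss(random.choice(KARLSSON), sigma)))
            for _ in range(r)]

uni = sorted(bac_style_p(draw_uniform(5)) for _ in range(1000))
peaked = sorted(bac_style_p(draw_peaked(5)) for _ in range(1000))
print(uni[500], peaked[500])  # medians of the statistic under each assumption
```

In this toy the peaked draws come out orders of magnitude smaller than the uniform ones, which is all I mean by the method 'distinguishing' the two.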

Now if you can't understand the above explanation, David, then I can't help you further. If you publish your community forum attack on me, should I decide to respond at all (and I may not), I shall simply repeat this and the next post and thereby show that despite your supposed knowledge of probability and statistics, you missed a very serious and frankly very obvious error in my earlier calculation of probability, and that you still don't appear to understand rather simple logic for why my methodology can distinguish between a relatively uniform (random) distribution of z (like the mainstream's) and one (like that theorized by Karlsson) which would be more causally based. :D
 
Now, I want to explain the error I made and correct my methodology accordingly. Let's start by recalling that I claimed the probability of finding r specific values drawn without regard to order from a uniform distribution of n different values is 1/(n!/((n-r)!r!)) = r!/(n!/(n-r)!). But that's wrong.

To find the probability of a set of r specific values picked randomly from a distribution of n different values, we actually need to ratio the number of ways one can pick those r values from the distribution by the number of ways one can pick any r values from the distribution. Right?

For example, if we have a distribution with 5 possible values (call them a,b,c,d,e) and we want the probability of seeing c and d show up in a random draw of 2 values from that pool of 5 possibilities, we first need to find the number of ways we can draw c and d. Well that turns out to be r!, so the answer is 2 in that case.

Next, we need to divide by the number of ways one can draw ANY 2 values from the 5 possibilities. Note that drawing a value does not eliminate it from the pool. The formula to use here is n^r. So there are 5^2 = 25 ways of drawing 2 values from a pool containing 5 different values.

So the probability of seeing c and d in a single observation in the above example is 2/25 = 0.08 = 8 percent.

So the formula I should have used in my calculation for the probability of seeing r specific values of z picked randomly from a distribution of n different values of z is

P = r!/n^r .

Since n^r > n!/(n-r)!, we know that this probability will be somewhat smaller than what I previously calculated.
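The 2/25 figure is easy to check by brute force (a quick sketch, using the stand-in pool from the example above):

```python
from itertools import product

pool = ["a", "b", "c", "d", "e"]
draws = list(product(pool, repeat=2))              # n**r = 25 ordered draws
hits = [d for d in draws if set(d) == {"c", "d"}]  # the r! = 2 orderings of c,d
print(len(hits), len(draws), len(hits) / len(draws))  # 2 25 0.08
```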

Now, instead of the probability of r specific numbers, we want the probability of a set of r observed values falling within a certain distance of those r specific numbers. It is this miss distance that determines the value of n we use.

If all the observations fell the same miss-distance from the r specific values, then the value of n would simply be found as follows:

n = possible range of values / increment ; where increment = 2 * miss distance .

But in reality, each of the r observations will likely fall a different miss-distance from the specific value closest to it. So to handle this, instead of n^r in the denominator of the probability equation, we substitute n_i1 * n_i2 * … * n_ir, where the subscript ij indicates that n is derived from the increment that corresponds to the miss-distance for the j-th data point.

Thus, P = r! * (1/n_i1) * (1/n_i2) * … * (1/n_ir) .
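In code form, the increment rule for a single data point looks like this (a sketch; the Karlsson list here is the nominal set of peak values used later in this post):

```python
KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64]

def n_from_miss(z, karlsson=KARLSSON, z_range=3.0):
    # n = possible range of values / increment, where increment = 2 * miss distance
    miss = min(abs(z - k) for k in karlsson)
    return z_range / (2.0 * miss)

print(n_from_miss(0.33))  # miss 0.03 -> increment 0.06 -> n = 50
```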

All would be well at this point, except the distribution of quasar redshift, z, between 0 and 3.0 is not really uniform. Mainstream literature indicates that the frequency of z actually rises from a small value (about zero) near z=0 to a maximum at about z=1, then stays roughly constant until it reaches z=3, where it then rather rapidly drops back to very near zero. A uniform assumption about the distribution will overestimate the effect on probability of data points with z<1 and underestimate the effect on probability of data points with z>1 within the range z=0-3.

I treated it as such in the previous calculations and demonstrated, when challenged about this, that at least for a few of the cases in question that assumption was likely conservative because of other factors in the calculation that also affected the relative weighting of the individual data points.

But this new form of the probability equation raises an interesting possibility ... that of directly accounting for the non-uniform nature of the mainstream's distribution of z. What if we weight the terms associated with specific data points (i.e., the (1/n_ij), where j labels the data point)? Since these terms are multiplicative, the weights should be applied as exponents (a power law).

The weights should be based on the frequency of each data point relative to the average frequency they would have were they from a uniform distribution instead.

Now in our problem, the frequency rises from zero at z=0 to a maximum at z=1.0 and then stays constant to z=3.0. The area under that frequency distribution should sum to 1 over the entire range, giving this equation:

1 = 1/2 * maximum_m + 2 * maximum_m ; where the subscript m denotes the non-uniform mainstream distribution of z (the first term is the area under the rising ramp from z=0 to z=1, the second the area under the flat part from z=1 to z=3).

The maximum value of the frequency can then be found:

maximum_m = 1/2.5 = 0.400

Now we find a uniform distribution from 0 to 3 that has the same area.

1 = 3 * maximum_u

maximum_u = 0.333

The weights assigned to a given z in the non-uniform mainstream distribution depend on where it lies between 0 and 3. A uniform distribution assumption underweights the importance of any z over 1. To correct this, any z over 1 will get a weight of 0.4/0.333 = 1.201 in the analysis. Any z under 1 is overweighted if a uniform distribution is assumed. To correct this, any z under one will get a weight less than 1. At z=0.3 the weighting factor is 0.36, while at z=0.6 the weighting factor is 0.72. The weighting equation can be written thus:

w = z*1.201; w<=1.201; 0<z<=3.0 .

The effect of this when dealing with terms that are all less than one will be to make the final probability smaller if the weight is > 1 and make it larger if the weight is < 1. As it should. This may not be the exact weighting that should apply, but I do think it will serve as a first approximation in dealing with the particular concern.

So, to summarize, my new approach to calculating the probability of seeing r observed z values at any galaxy under study, given the mainstream's assumptions about quasars, will be to find an appropriate increment for each z value, determine an n from each of those increments, determine a weight to apply to each n, then multiply them all together as follows:

P_1G = r! * (1/n_i1)^w1 * (1/n_i2)^w2 * … * (1/n_ir)^wr
where

n_ik = 3.0/(2 * distance to the nearest Karlsson value) for the k-th z value,

w_k = z_k * 1.201 ; w_k <= 1.201 ; for the k-th z value,

and

r = number of z values.
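Here is the whole formula in a few lines (a sketch only; the n and w lists are the rounded figures from the NGC 3516 case worked below, so this merely reproduces the stated arithmetic):

```python
import math

def p1g(ns, ws):
    # P_1G = r! * product over data points of (1/n_ik)^w_k
    p = math.factorial(len(ns))
    for n, w in zip(ns, ws):
        p *= (1.0 / n) ** w
    return p

ns = [50, 16, 50, 150, 10]             # from increments 0.06, 0.18, 0.06, 0.02, 0.28
ws = [0.40, 0.83, 1.12, 1.201, 1.201]  # w_k = min(1.201, 1.201 * z_k)
print(p1g(ns, ws))  # ~4.8e-6, matching the quoted 4.7e-6 to rounding
```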

Any problems with that folks?

Now the probability of finding a particular observation, given the mainstream's assumptions, obviously goes up with the number of quasar/galaxy associations that are studied. So what is the probability of seeing a given observation if we were to look at all the quasar/galaxy associations that are possible in the sky? I think this is a useful indication of whether finding half a dozen (or so) very low probability cases after examining only a fraction of the possible quasar/galaxy associations is indicative of a problem. If the probability of finding the case is still very low even if we looked at all the possible associations, then it's highly likely there is a problem in the mainstream's theory regarding the cause of redshift (for quasars at least). Whether the mainstream proponents will admit this or not is another matter. :)

So the next question to answer is: what is the maximum possible number of galaxies in the sky with r associated quasars? I shall call that quantity N_maxGwithr.

To find this, I will be assuming a number of things (some revised from previous calculations as well, based on better information). Those I'm debating are always free to offer specific alternative values for these parameters. If they don't, I can only assume they agree with them and that any complaints are merely hand-waving in stubborn defense of the mainstream theory.

First, the SDSS survey is a relatively complete sampling of all observable quasars. The SDSS website indicates it accounts for over 90%, in the areas that have been surveyed. But I'm going to conservatively assume that they only found 75% of the quasars that exist and could be observed in that survey. So I will increase the number of quasars SDSS found in the portion of the sky they surveyed by 33% and then use that to compute the number of observable quasars across the entire sky.

Now the SDSS surveyors say (according to http://cas.sdss.org/dr6/en/sdss/release/ ) that their DR6 effort (the latest) found 104,140 quasars in a 6860 deg^2 area. That works out to about 15 quasars deg^-2. Following what I stated above, I'm going to increase that to 20 quasars deg^-2. Now the whole sky covers about 4*PI*(180/PI)^2 = 41,250 square degrees, so if there are 20 quasars per square degree (over the range of magnitudes we can observe) then there are a possible 825,000 observable quasars in the sky. Anyone want to claim that's not a conservative estimate for the total number of observable quasars in the sky?

Next, there is the question of how those quasars are distributed with respect to low redshift galaxies and to each other. For now, I will conservatively assume that only half of them are near low redshift galaxies. I think that's a VERY conservative assumption and would be very interested in any data that would further refine that parameter. After all, there aren't that many low redshift galaxies. In fact, I showed earlier in the thread (post #223) that perhaps no more than 1 percent of the galaxies lie within z = 0.01, which equates to 1-2 galaxies per square degree of sky. I found a source that indicated most of the galaxies in near field surveys are smaller than 30-40 arcsec in diameter ... meaning they occupy only a fraction of any given square degree field (because 1 arcsec is 1/3600th of a degree). I noted a source that indicates even in galaxy groups, the distance between galaxy members is typically 3-5 times the galaxy diameter. I noted that even the large, naked eye, Andromeda galaxy ... our nearest neighbor ... only has an apparent diameter of 180 arcmin (that's 3 degrees). And I noted that NGC 3516, one of the cases I calculate, only has an apparent diameter of a few arcmin. It may be typical of the specific galaxies I am looking at. So with only 20 quasars per square degree of sky on average, does everyone agree I'm very conservative in assuming only half of all quasars lie near low redshift galaxies like those in each of the cases of interest? If so, then that brings us down to 413,000 quasars in the population of interest. And I suspect the number really should be smaller.

And how are those quasars distributed amongst the low redshift galaxies? In other words, are they spread out evenly over 413,000 different galaxies, or do they all lie near one galaxy? Now previously, I assumed that half the quasars are in groups of r. That would have meant that the maximum possible number of quasar/galaxy associations I would use in the following calculations would be 207,000/r. But I think that is far too conservative an assumption ... that high-r associations are much rarer than I assumed. And I think the proof of that is how few high-r cases can be specifically identified by anyone. And certainly the Arp et al. community would like to list as many as possible.

So, instead, I'm going to assume that at each r level, one third of the remaining quasars are distributed at that r level. I think this will be much more consistent with the mainstream assumption that quasars have no connection to each other or to low redshift galaxies. Thus, at r=1 (in other words, where there is only 1 quasar near a low redshift galaxy), 137,000 of the quasars will be distributed. That leaves 275,000 quasars. And after distributing the r=2 quasars, there are 184,000 left. And then one-third of those will be in r=3 associations. That means there are 61,333 quasars in r=3 associations, for a total of 20,444 possible r=3 quasar/galaxy associations in the sky. That leaves 122,000 quasars still to distribute. One third of those in r=4 associations means there are 10,166 possible r=4 cases. And if one continues this logic, one arrives at the following number of possible quasar/galaxy associations for each r of interest (the short sketch after the list reproduces this bookkeeping):

N_maxGwithr =

20444 at r=3
10166 at r=4
5433 at r=5
3018 at r=6
1724 at r=7
1000 at r=8
596 at r=9
358 at r=10
216 at r=11
132 at r=12
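A few lines reproduce that bookkeeping (a sketch; it starts from the 413,000 figure derived above, and small differences from the list are rounding):

```python
remaining = 413_000.0  # quasars near low redshift galaxies, from above
for r in range(1, 13):
    at_this_r = remaining / 3.0  # a third of what's left goes to this r level
    remaining -= at_this_r
    if r >= 3:
        print(r, round(at_this_r / r))  # associations = quasars at this r / r
```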

Do any of my opponents on this thread want to disagree with this assumption? If so, please provide a specific distribution that I should use and tell us why you think it's more reasonable. If you don't, I can only assume you think this one is reasonable, or at least that it puts the best face on things from the standpoint of the results you would like to see. :)

So we arrive at the final probability of seeing the specific values of z around each of the cases in question were we to examine all the possible quasar/galaxy associations in the sky assuming the mainstream's theory is correct:

P_zTotal = P_1G * N_maxGwithr

I believe that if this probability is small (<1), then this is a good indication that the mainstream model regarding the origin of quasar z in these specific observations (and perhaps more generally) is badly flawed. And the less the probability, the more likely it is that z for quasars are instead quantized.

But I have one last factor to add: the likelihood of seeing that observation's particular alignment of quasars relative to the minor axis. Since the alignments are independent of z under the mainstream model, this probability should be multiplicative with P_zTotal. Now that I think about it some more, I don't like the way I calculated this probability earlier.

I want to find the probability of a specific observation being encountered given the number of quasars observed to be within a 15 degree zone centered about the minor axis. I showed earlier there is a probability of only 0.083 (based on area) that any given quasar will lie in that zone, assuming quasars are positioned randomly like darts (which is the mainstream assumption). And since quasar locations are assumed to be independently located in the mainstream theory, those probabilities are multiplicative.

But if there are quasars in the observation outside the 15 degree zone, the probability of encountering that observation should be increased. In fact, for every quasar outside the zone, I believe the probability of encountering that observation should be increased by a factor of 1/.917 = 1.091, up to a maximum probability of 1. Thus, I will calculate the probability of seeing the alignment in a single observation as

P_afor1G = 0.083^(number of aligned quasars) * 1.091^(number of non-aligned quasars) ; P_afor1G <= 1 .

In calculating this probability, I will assume (I think conservatively) that any quasars near a galaxy and whose location with respect to the minor axis is unknown are non-aligned.
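In code form (a sketch; the two example calls use the aligned/non-aligned counts from the cases worked below):

```python
def p_align_1g(aligned, non_aligned):
    # 0.083 per aligned quasar, a 1/0.917 = 1.091 boost per non-aligned one,
    # capped at a probability of 1
    return min(1.0, 0.083 ** aligned * 1.091 ** non_aligned)

print(p_align_1g(5, 0))  # ~3.9e-6 (five aligned, as in the NGC 3516 case)
print(p_align_1g(4, 3))  # ~6.2e-5 (four aligned, three non-aligned, as in NGC 5985)
```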

Now again, the probability of seeing that alignment observation if we examined every possible quasar/galaxy alignment with r quasars might be a good indicator of whether there is a problem with the mainstream model for generating quasars. So I will calculate

P_aTotal = P_afor1G * N_maxGwithr .

If the probability is < 1, then that is an indication the mainstream model is wrong. In such a case, seeing multiple highly unlikely examples of quasars that are aligned with the minor axis would be an indication that there is some underlying physics producing that phenomenon which the mainstream is ignoring. And the Narlikar/Arp et al. model might describe that physics and therefore deserves a much closer look by the mainstream.

And since the alignment and z phenomena are independent according to the mainstream model,

P_Total = P_zTotal * P_aTotal

And, by the way, with regard to both P_zTotal and P_aTotal, keep in mind that Arp et al. have not examined a large fraction of the total number of possible quasar/galaxy associations with r quasars. In fact, the probability of encountering the group of observations studied here will decrease in direct proportion to the fraction of the total possible quasar/galaxy associations that has actually been studied. This even further strengthens my argument.

Now can we all agree on this approach, or not? Does anyone have any suggestions to improve it ... to make it more accurate? Would anyone like to adjust one of the parameters or equations? Speak up and don't be coy. Or I'll rightly assume you agree with all aspects of this method. :)

Now, with the above as the basis, I will demonstrate that correcting the errors in my method had no significant impact on the overall conclusion I drew from my original calculations. I will redo the full calculation for each of the observations I've previously analyzed. This should serve as a nice summary of the overall method and my current results to this point, and hopefully act as a basis for further debate on this subject.

Let's start with NGC 3516.

In this case, observed z = 0.33, 0.69, 0.93, 1.40, 2.10. The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64. Therefore, the spacings are +0.03, +0.09, -0.03, -0.01, +0.14. Doubling the spacings gives the increment width for each data point: 0.06, 0.18, 0.06, 0.02, 0.28. The n corresponding to each increment is (3/0.06), (3/0.18), (3/0.06), (3/0.02), (3/0.28) = 50, 16, 50, 150, 10. The weighting factor for each data point is 0.40, 0.83, 1.12, 1.201, 1.201.

Thus, P_1G = 5! * (1/50)^0.40 * (1/16)^0.83 * (1/50)^1.12 * (1/150)^1.201 * (1/10)^1.201 = 120 * 0.209 * 0.100 * 0.0125 * 0.0024 * 0.063 = 4.7 x 10^-6 (compared to 2 x 10^-6 without the weighting factors).

Now with N_maxGwith5 = 5433, P_zTotalNGC3516 = 4.7 x 10^-6 * 5433 = 0.026

Now to factor in the alignment probability. In this case, I haven't found anything to suggest there are other quasars besides the 5 identified by Arp et al. near this galaxy. If anyone out there can show there are additional quasars within 50 arcmin (given that the galaxy is only a few arcmin across), I will add them to the calculation as non-aligned if we don't know their location, or aligned/non-aligned if we do. Until then, it looks like

P_aTotalNGC3516 = 0.083^5 * 5433 = 0.021.

Thus, P_TotalNGC3516 = 0.026 * 0.021 = 0.00055 .

Wow! That's very small considering it assumes all the possible quasar/galaxy associations in the sky have been examined.

Now consider NGC 5985.

In this case, observed z = 0.69, 0.81, 1.90, 1.97, 2.13 according to http://articles.adsabs.harvard.edu//full/1999A&A...341L...5A/L000006.000.html . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, the spacings to the nearest Karlsson values are +0.09, -0.15, -0.06, +0.01 and +0.17. The increments are 0.18, 0.30, 0.12, 0.02, 0.34, so n equals 16, 10, 25, 150 and 8. The weighting factors are 0.83, 0.97, 1.201, 1.201, 1.201.

Thus, P_1G = 4! * (1/16)^0.83 * (1/10)^0.97 * (1/25)^1.201 * (1/150)^1.201 * (1/8)^1.201 = 24 * 0.100 * 0.107 * 0.021 * 0.0024 * 0.082 = 6.8 x 10^-7
P_zTotalNGC5985 = 6.8 x 10^-7 * 10166 = 0.007 .

Of the above 5 quasars, 4 are aligned. In addition, DRD provided a source that suggested the existence of two more quasars, and with no information about them besides their z, they will be assumed non-aligned. Using the total number of cases with 8 quasars:

P_aTotalNGC5985 = 0.083^4 * 1.091^3 * 1000 = 0.062 .

Thus, P_TotalNGC5985 = 0.007 * 0.062 = 0.00043 .

Even smaller! So now it's a little hard to believe we just got lucky with the first case.

Now consider NGC 2639.

In this case, http://www.journals.uchicago.edu/doi/abs/10.1086/421465 identifies observed quasar z = 0.305, 0.323, 0.337, 0.352, 1.304, 2.63. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, the spacings to the nearest Karlsson values are +0.005, +0.023, +0.037, +0.052, -0.106, -0.01. The increments are 0.01, 0.046, 0.074, 0.104, 0.212, 0.02, so n equals 300, 65, 40, 28, 14, 150. The weighting factors for each are 0.366, 0.388, 0.405, 0.422, 1.201, 1.201.

Thus, P_1G = 6! * (1/300)^0.366 * (1/65)^0.388 * (1/40)^0.405 * (1/28)^0.422 * (1/14)^1.201 * (1/150)^1.201 = 720 * 0.123 * 0.198 * 0.224 * 0.245 * 0.042 * 0.0024 = 9.7 x 10^-5 (compared to the unweighted probability of 1.6 x 10^-8!)

P_zTotalNGC2639 = 9.7 x 10^-5 * 3018 = 0.29 (and it may be much smaller than that if my weighting method is faulty).

Of the above 6 quasars, 5 are aligned. The paper also mentions some 3 other x-ray sources lying along the axis but I shall ignore them.

P_aTotalNGC2639 = 0.083^5 * 1.091^1 * 3018 = 0.013 .

Thus, P_TotalNGC2639 = 0.29 * 0.013 = 0.0038 .

Again, a probability that is very small considering that it assumes that all the possible quasar/galaxy associations have been examined. Now are you starting to get the picture, folks?

Now what about NGC 1068?

Recall that http://www.journals.uchicago.edu/doi/abs/10.1086/311832 and http://arxiv.org/abs/astro-ph/0111123 and http://www.sciencedirect.com/scienc...serid=10&md5=596c8badf26d1a60f6786ae0bfcae1d6 collectively list 12 quasars with z = 0.261, 0.385, 0.468, 0.63, 0.649, 0.655, 0.684, 0.726, 1.074, 1.112, 1.552 and 2.018. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, the distances to the nearest Karlsson value for each quasar are: -0.039, +0.085, +0.132, +0.03, +0.049, +0.055, +0.084, +0.126, +0.104, +0.152, +0.142, +0.058. That makes the increments 0.078, 0.170, 0.264, 0.06, 0.098, 0.11, 0.168, 0.252, 0.208, 0.304, 0.284, 0.116 and the n values 38, 17, 11, 50, 30, 27, 17, 11, 14, 9, 10, 25. The weights for each are 0.313, 0.462, 0.562, 0.76, 0.78, 0.79, 0.82, 0.87, 1.201, 1.201, 1.201, 1.201.


Thus, P_1G = 12! * (1/38)^0.313 * (1/17)^0.462 * (1/11)^0.562 * (1/50)^0.76 * (1/30)^0.78 * (1/27)^0.79 * (1/17)^0.82 * (1/11)^0.87 * (1/14)^1.201 * (1/9)^1.201 * (1/10)^1.201 * (1/25)^1.201 = 479,001,600 * 0.320 * 0.270 * 0.260 * 0.051 * 0.070 * 0.074 * 0.100 * 0.124 * 0.042 * 0.005 * 0.063 * 0.021 = 9.7 x 10^-6 (compared to the unweighted probability of 2.8 x 10^-7!)

P_zTotalNGC1068 = 9.7 x 10^-6 * 132 = 0.0012 .

Since I don't really know the alignment of these quasars relative to the minor axis,

P_aTotalNGC1068 = 1.0

Although as I pointed out previously, there are curious and improbable alignments to be found in this case:

http://www.journals.uchicago.edu/cgi-bin/resolve?ApJ54273PDF "On Quasar Distances and Lifetimes in a Local Model, M. B. Bell, 2001 ... snip ... It was shown previously from the redshifts and positions of the compact, high-redshift objects near the Seyfert galaxy NGC 1068 that they appear to have been ejected from the center of the galaxy in four similarly structured triplets."

But in any case, we currently have P_TotalNGC1068 = 0.0012 .

So yet again, an incredibly small probability given that this calculation assumes all possible quasar/galaxy associations have been examined.

Dare we draw any conclusions here, folks? Or will the mainstream community be allowed to simply sweep this under the rug?
 
David, I need to thank you. In the process of preparing a post to address your question (and show that you haven't even tried to understand my methodology), I discovered that I made a mistake in the formula I used to calculate probability.

I will describe and correct that error in my next post, where I will once again calculate the probability of observing the specific set of high red-shift quasars that have been found near several specific low-redshift galaxies, assuming the mainstream claim that z are continuously distributed over the interval z=0-3 is true.

The error I made doesn't change the conclusion I reached nor the answer I will give to your question. But it is an error which I think someone who works in the field of probability and statistics, like you, probably should have caught. From the fact that you didn't, I can only speculate that either you aren't an expert (and mind you, I'm not claiming to be one) or you didn't take the time to try to understand my methodology. :)
I am not an expert, nor have I stated that I am one, and I did say I was mystified by your using Bayes' theorem. I have tried to understand your methodology. I disagree with its application. Bayesian theory is used in very specific situations, and the population distribution of traits is not one of them.

You are the one who obviously did not take the time to read the three posts I wrote on statistics and population theory and sampling.

So what does that make you?


But that is your thing.
In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?
No, in this thread the question is:

How can your methodology show a difference in the placement/association of QSOs in relation to Arp galaxies?

But thanks for just ignoring that.
I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.
And your numbers would be exactly the same, for the distribution of QSOs or for the z-values. You do know that Bayes' theorem is no longer used in most statistical settings, like epidemiology and drug efficacy studies, don't you?

If I gave you a random placement of QSOs around galaxies and a causal placement of QSOs around galaxies, your method would give the same numbers.

It cannot distinguish them.

Why won't you answer that question?

Just as if I made a random set of z values and a causal one, you could not distinguish them.

Why not answer that?

Why is that, BAC? Why are you using Bayes' theorem in a method which is discredited and not used in comparable studies?

Hmmm?

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?
Yes: a posteriori statistics that could be affected by sample bias, small sample size, or sample error.

None of which you address.
Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)?
No, because of the sample issues, which you haven't addressed and refuse to address, even though they are the point of the thread.
Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.
And your calculations would be the same for a random set vs. a causal set.

Do you understand that, or do you just ignore it?
Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random?
No. No more so than a royal flush in stud poker would be non-random.

That is the issue to address.

Which you haven't; you keep ignoring the issue that I have pointed out.

Yes, the specific probability of any configuration is low; that does not tell you whether it is random or causal.

Why not address that? It was addressed in the three posts I made on statistics.
And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?
No it's not; a random set can only be distinguished from a causal set through the methods that you are NOT using.

Again I refer you to the first of the three posts I made; I believe it was #165.
In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values.
No, that is not true; that is like saying that you should never get a full house in stud, no-draw poker unless it is non-random.

Which is an error.
Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.
Your ignorance is showing.
The question then is what is the cause. But the mainstream hasn't gotten to that point because it simply denies that any quasar redshifts might be quantized. So first things first. Let's get to the point of you accepting that at least some redshifts appear to be quantized, then we can go searching for a physical mechanism to explain that. Fair enough, David?
You haven't demonstrated anything except that your use of Bayes' theorem is misplaced.

It would give the same values for a random and a causal set.
SO, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z.
No it is not. It would give the same values for a random and a causal set.

The specific probability is always going to have a low value, which is why you have to look at the distribution of values in samples of large sets.
If we sample a relatively small number of small areas in the sky (say a small group of the quasar/galaxy associations) where there are r quasars, and repeatedly find that the observed z are close to a few specific values of z, then it is obvious that those values of z are more likely the result of a process that produces them preferentially than of one which produces a uniform distribution of z (and a very low probability of ever seeing those cases).
No it is not.
Now if you can't understand the above explanation, David, then I can't help you further. If you publish your community forum attack on me, should I decide to respond at all (and I may not), I shall simply repeat this and the next post and thereby show that despite your supposed knowledge of probability and statistics, you missed a very serious and frankly very obvious error in my earlier calculation of probability, and that you still don't appear to understand rather simple logic for why my methodology can distinguish between a relatively uniform (random) distribution of z (like the mainstream's) and one (like that theorized by Karlsson) which would be more causally based. :D

You have just asserted that you can tell the difference, but you are wrong.

Where in population studies do they use bayes theorem to determine causality?

You do know the difference between causality and correlation don't you?

I doubt it.
 
BAC, you are basically saying that someone who gets a full house in a stud poker game can only do so via a non-random process.

Which is wrong.

Here is the probability of any given hand in five-card stud, no draw:

(1/52)^5 = 2.6 x 10^-9
So any hand dealt in poker is from a non-random process!
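If anyone wants to check the arithmetic, here's a sketch (strictly, dealing is without replacement, so the exact figure for a specific ordered hand is 1/(52*51*50*49*48), slightly larger odds than (1/52)^5, but the point is identical):

```python
import random

# Every dealt hand, royal flush or rubbish, is an equally improbable
# specific outcome of a perfectly random process.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
hand = random.sample(deck, 5)  # one ordered 5-card deal

p_specific = 1.0
for cards_left in range(52, 47, -1):  # 52 * 51 * 50 * 49 * 48 ordered deals
    p_specific /= cards_left

print(hand)
print(p_specific)  # ~3.2e-9 for ANY specific ordered hand
```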

I name you Iron Troll.
 
A quick summary of contemporary, mainstream, answer to 'what's a quasar?'

A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk, broad line region (BLR), and narrow line region (NLR). The cutoff between a (type 1) Seyfert and a quasar is arbitrary, typically M_B > -22 (where M_B is the absolute B-band magnitude).

An AGN for which our view of the accretion disk and BLR is obscured is a type 2 quasar or a type 2 Seyfert.

While the conditions under which an AGN becomes a strong radio source are not yet well understood, AGNs with strong jets do produce double-lobed radio sources.

If the dusty torus obscures our view of even the NLR, an AGN is usually visible as only an x-ray source (if there are strong jets, they may also be radio sources).

If the viewing geometry is such that we are looking (almost) directly down one of the jets, the AGN is seen as a BL Lac, blazar, or an OVV quasar. These AGNs are also (often) both x-ray and radio sources.

It is not often appreciated just how small, physically, an AGN is: the dusty torus, BLR, accretion disk, and SMBH are all contained within a region that may be only light-days across, and rarely more than a light-month or so; of course the NLR may be up to several light-years in size, and the jets and radio lobes may stretch for tens of thousands of light-years (or more).

Here is a webpage with a brief summary of the various kinds of AGN.

AGNs are one of the most active areas of research in extra-galactic astronomy and astrophysics; for example ADS lists 357 references with 'AGN' in the title for just 2007. Some of the research topics are:

* what is the AGN duty cycle? For how long does an AGN remain 'on'? how long does it stay quiescent before turning 'on' again?

* what fuels an AGN? How is mass funnelled into the accretion disk?

* how do AGNs evolve? What is the role of galaxy collisions/mergers in AGN evolution?

* what are the physical mechanisms that generate the dusty torus? That determine its size and nature?

* what proportion of AGNs are obscured? How does this proportion change with AGN evolution?

Historically, the general acceptance of quasars being at distances commensurate with their redshifts (via the Hubble relationship) came about a decade after they were discovered (i.e. in the 1970s). The unified AGN model became generally accepted about a decade later, and even contrarians such as Bell acknowledge that the various classes of observational objects are all the same 'real' class of objects (AGNs), with the dividing lines between them simply arbitrary divisions of continuous distributions. Thus the distinction that Chu et al. and Arp et al. drew in their 20th century papers (e.g. on NGC 3516 and NGC 5985), between quasars and Seyferts as potentially different classes of 'real' objects rather than arbitrary divisions of a continuous AGN distribution, is now very rarely made.

Today's better understanding of AGNs also explains rather nicely one curious thing which some Arpians are (still) fond of pointing out: obscured AGNs show up as x-ray (point) sources. As x-ray astronomy developed it became increasingly easy to determine which point sources were AGNs and which XRBs (x-ray binaries) in (local) galaxies, leading to the discovery that many 'ULX' (ultra-luminous x-ray sources) 'in' or 'near' local galaxies were, in fact, AGNs … and that the sky density of such implied that obscured AGNs outnumber unobscured ones.

One of the most exciting recent discoveries concerning AGNs is the statistical association between some ~30 EeV cosmic rays and AGNs within ~100 Mpc of us (by the Auger Collaboration) – for the first time we have the possibility of doing extragalactic astronomy with something other than photons (well, the second … a handful of neutrinos from SN 1987A were detected); this result, if confirmed, is another demonstration that AGNs are not local in the Arpian sense, and that the CMB is indeed cosmic.
 
Dancing David, just one comment on the use of Bayesian statistics in contemporary astronomy: it is used, quite widely. While you may find examples of such an approach being used incorrectly (after all, scientists are human and do make mistakes), I doubt you'd find anything much you'd take exception to.

For example, the Final HKP paper (Freedman et al. 2001) that I mentioned earlier in this thread presents three different approaches to making an estimate - traditional (frequentist), Bayesian, and Monte Carlo. For most purposes this would be overkill; given how central H0 is in extra-galactic astronomy and cosmology, that the authors took the caution of doing their sums three different ways is A Good Thing (and that the answers were consistent, An Even Better Thing!).
 
Thanks for all your information. Bayesian methods do have their uses, but I am not sure about BAC's use of statistics in general; he just demonstrated that poker hands must be non-random.

That AGN stuff is so cool.

But ignored by BAC.
 
{snip}

In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?

I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?

Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)? Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.

Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random? And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values. Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.

{snip}
.
I'm going to comment on only parts of this, in line with what I've been writing all along, namely the inputs and assumptions built into those inputs (not the calculations themselves).

First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks', I don't recall you having actually presented any physical model(s).

Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.

Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.

Fourth, the observational uncertainties are, usually, quite small - a z can be measured as 1.23x, where the digit x is uncertain by ±1 (or perhaps ±2), i.e. an uncertainty of ~0.001-0.002 in z. A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5 ... unless, of course, the physical model that generates 'Karlsson peaks' accommodates 'misses' that are (much) greater than the observational uncertainties.

Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks' of any note. Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...

Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ... unless there is a strong a priori case for choosing just the 'cases' you select to apply that analysis to (NGC 3516, NGC 5985, etc). Such a case may be due to the physical model of Karlsson peaks, or something else entirely ... and such a case may be quite legitimate. However, it seems that you have not presented any such case (other than something like 'here's what Arp chose to observe').
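This sixth point can be demonstrated with a toy simulation: generate purely random z sets, select the field that best matches the peaks *after the fact*, and its naive probability looks 'impossibly' small every single run (the tolerance and field counts below are illustrative):

```python
# Toy demo of the a posteriori problem: generate purely random z sets,
# pick the field that best matches the peaks after the fact, and its
# naive probability looks impressively small - every single run.
import numpy as np

rng = np.random.default_rng(1)
karlsson = np.array([0.061, 0.30, 0.60, 0.96, 1.41, 1.96])
tol = 0.03                      # looser tolerance, purely illustrative

n_fields, n_quasars = 100_000, 4
z = rng.uniform(0, 3, size=(n_fields, n_quasars))

# Distance of every z to its nearest Karlsson value, then count matches.
nearest = np.min(np.abs(z[..., None] - karlsson), axis=-1)
hits = (nearest <= tol).sum(axis=1)

best = hits.max()
print(f"best field matched {best}/{n_quasars} peaks by pure chance; "
      f"{(hits == best).sum()} such fields in {n_fields} random trials")
```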
 
Some quick comments on post #329.

If it's the 'mainstream' understanding of how 'quasars' are distributed, then the numbers chosen as inputs need to be replaced by estimates of AGNs, and such estimates need to take into account the (arbitrary) low-luminosity cutoff (boundary between type 1 Seyferts and quasars), the ratio of obscured to unobscured AGNs, the estimated number of as-yet-undetected blazars (etc), AGN evolution, lensing, and clustering.

Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).

In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.
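One standard way to run that check is a two-sample Kolmogorov-Smirnov test of the selected quasars' redshifts against the parent population's (a sketch with stand-in data; the real inputs would be the catalogue z values):

```python
# A standard version of this check: a two-sample Kolmogorov-Smirnov test
# of the selected quasars' redshifts against the parent population's.
# Stand-in data below; the real inputs would be the catalogue z values.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
population_z = rng.uniform(0, 3, 5000)   # stand-in for the survey redshifts
selected_z = rng.uniform(0, 3, 40)       # stand-in for the chosen subset

stat, p_value = ks_2samp(selected_z, population_z)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p would mean the subset's z distribution differs from the
# parent's - i.e. the selection was not random with respect to redshift.
```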

The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.

I think the 'low redshift galaxies' numbers (etc) are also incorrect, as in this paragraph:
Next, there is the question of how those quasars are distributed with respect to low redshift galaxies and to each other. For now, I will conservatively assume that only half of them are near low redshift galaxies. I think that's a VERY conservative assumption and would be very interested in any data that would further refine that parameter. After all, there aren't that many low redshift galaxies. In fact, I showed earlier in the thread (post #223) that perhaps no more than 1 percent of galaxies lie within z = 0.01, which equates to 1-2 galaxies per square degree of sky. I found a source indicating that most of the galaxies in near-field surveys are smaller than 30-40 arcsec in diameter ... meaning they occupy only a tiny fraction of any given square degree field (because 1 arcsec is 1/3600th of a degree). I noted a source that indicates even in galaxy groups, the distance between galaxy members is typically 3-5 times the galaxy diameter. I noted that even the large, naked-eye Andromeda galaxy ... our nearest neighbor ... only has an apparent diameter of about 180 arcmin (that's 3 degrees). And I noted that NGC 3516, one of the cases I calculate, has an apparent diameter of only a few arcmin; it may be typical of the specific galaxies I am looking at. So with only 20 quasars per square degree of sky on average, does everyone agree I'm very conservative in assuming only half of all quasars lie near low redshift galaxies like those in each of the cases of interest? If so, then that brings us down to 413,000 quasars in the population of interest. And I suspect the number really should be smaller.
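A back-of-envelope check using that paragraph's own numbers (1-2 galaxies per square degree, 30-40 arcsec diameters, taken at face value):

```python
# Back-of-envelope check using the paragraph's own numbers: what fraction
# of a square degree do 1-2 galaxies of 30-40 arcsec diameter cover?
import math

n_per_sq_deg = 2                     # upper end of the quoted density
diam_deg = 40 / 3600                 # 40 arcsec expressed in degrees

area_one = math.pi * (diam_deg / 2)**2          # disc area, square degrees
print(f"covered fraction: {n_per_sq_deg * area_one:.1e}")   # ~2e-4
```

The discs themselves cover only ~0.02 percent of the sky, so whether 'half of all quasars' lie 'near' such galaxies hinges entirely on how large a region around each galaxy counts as 'near'.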

(to be continued)
 
And in case you forgot:

2. What mechanism and forces cause galaxies to rotate per Peratt's model of galaxy rotation? What observations have been made to support it?
3. What would keep a Lerner plasmoid of 40,000 solar masses in a region 43 AU in diameter from collapsing to a black hole?
4. What makes star clusters rotate faster around a galaxy than gravity minus dark matter can provide?
5. What part of Narlikar's theory has any observable consequence, other than your gnome of intrinsic redshift?
6. How do you explain the quantities of light elements and heavy elements through PC/PU?
7. How does either Narlikar's theory or other PC/PU account for the cosmic background radiation spectrum?

And in case you forgot, here are your own words at the start of this thread:


1. Please try to stay on topic; if you bring in material it should be directly related to the topic at hand. (I know I am a great derailer!)
2. Please do not spam the thread with multiple links that are unrelated to the topic; discussions of Dark Matter, Electric Universe, Plasma Cosmology or attacks on the same should be limited and relevant to the thread.
3. Please do not use ad homs or character slurs, accuse people of reading comprehension problems, or attack spelling and grammar...

I've been trying to abide by them ... well, almost all of them. :D
 
The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks',

I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.

I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D
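For what it's worth, the standard way to make that comparison is a likelihood ratio rather than a stand-alone tail probability: score the same observed z's under both models and compare. A sketch, with a hypothetical data set and an assumed peak width:

```python
# Sketch of the standard comparison: a likelihood ratio scoring the SAME
# observed z's under a uniform model and a 'Karlsson comb' model. The
# observed z's and the peak width sigma below are assumptions for
# illustration, not values from any paper.
import numpy as np
from scipy.stats import norm

karlsson = np.array([0.061, 0.30, 0.60, 0.96, 1.41, 1.96])
sigma = 0.01                              # assumed peak width
z_obs = np.array([0.33, 0.95, 1.40])      # hypothetical observed redshifts

logL_uniform = z_obs.size * np.log(1 / 3)          # uniform density on [0, 3]

# Equal-weight Gaussian mixture centred on the Karlsson values.
dens = norm.pdf(z_obs[:, None], loc=karlsson, scale=sigma).mean(axis=1)
logL_comb = np.sum(np.log(dens))

print(f"log-likelihood ratio (comb - uniform) = {logL_comb - logL_uniform:.1f}")
# The sign tells you which model the data favour; the point is that both
# models must be scored on the same data before either is declared unlikely.
```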
 
L-C&G did, in fact, count quasars more than once,

So you think their method overcounted quasars by a factor of 4? :rolleyes:

I also think your arithmetic and/or reading of a source is wrong ... if only because SDSS DR6 alone has confirmed spectra of >100k 'quasars' (and its spectroscopic area is only some 7k square degrees).

Nothing wrong with my arithmetic or reading since that's what I used in my calculation. :)

But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists "8698" in the "Number of pairs" column (meaning galaxy-QSO pairs) on the line where the θmax column reads "1.0°".
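A toy simulation (random positions on a flat patch, not L-C&G's catalogues) shows how galaxy-QSO pair counts exceed unique-quasar counts whenever the 1-degree search circles overlap:

```python
# Toy illustration (random positions on a flat patch, not L-C&G's data):
# when 1-degree search circles around galaxies overlap, one quasar can
# contribute several galaxy-QSO pairs, so pairs > unique quasars.
import numpy as np

rng = np.random.default_rng(7)
patch = 20.0                                    # patch side, degrees
galaxies = rng.uniform(0, patch, size=(70, 2))
quasars = rng.uniform(0, patch, size=(2000, 2))

# Separations in the flat-sky approximation.
d = np.linalg.norm(quasars[:, None, :] - galaxies[None, :, :], axis=-1)
within = d < 1.0

print(f"pairs: {within.sum()}, unique quasars: {within.any(axis=1).sum()}")
```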

Again, the L-C&G paper states very clearly that the number of samples is insufficient to draw any conclusions. You seem to want to pick and choose ... accept some results and ignore others.
 
