Pi and Irrational Numbers

I wasn't referring to the formula, but to the line I quoted

The line was "Neat, huh?"

Soooo.... huh?

I'm not sure that follows.

It does. It's obvious. Would you like me to prove it?

Trivially, it's bounded by the difference between the current estimate and pi. The problem arises when the bound - which is itself indeterminate - coincides closely enough with a digit-boundary that it never delivers a definite answer. Which is to say, the pi-digit problem has merely been replaced with the error-bound-digit problem. Which is no great help.

The only way that could be a problem is if the digit you are computing (say in base 10 for ease) turns out to be 9, followed by 9, followed by 9, etc. I suppose it's true that sequences of 9's of arbitrary length do in fact occur in pi, and therefore one can find digits for which this technique would require many more than N terms to compute. But they are extremely rare, and irrelevant to the claim that the typical Nth digit requires of order N operations to compute.
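
For what it's worth, here is a small Python sketch (my own illustration, with made-up numbers rather than the actual pi series) of what that boundary case looks like: an estimate whose error bound straddles a digit boundary leaves the digit undecided until the bound tightens past the run of 9's.

Code:
# A toy illustration (base 10, made-up numbers -- not the actual pi series)
# of the digit-boundary problem: if the estimate sits near ...x999..., the
# error bound straddles a digit boundary and the digit stays undecided.

def digit_decided(estimate, error, position):
    """Return the decimal digit at `position` if the error bound pins it down, else None."""
    lo, hi = estimate - error, estimate + error
    scale = 10 ** position
    if (hi - lo) * scale >= 1:           # bound wider than one digit place
        return None
    d_lo, d_hi = int(lo * scale) % 10, int(hi * scale) % 10
    return d_lo if d_lo == d_hi else None

# Bound straddles the ...129|130... boundary: third digit not yet decided.
print(digit_decided(0.12999, 0.00002, 3))    # None
# Tighter bound in the same neighbourhood: the digit is settled as 9.
print(digit_decided(0.129993, 0.000002, 3))  # 9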

I'm not yet convinced.

There is a paper, published in a mathematical journal, which proves the statements presented on that page. That paper has not only stood the test of nearly 20 years, it has become the basis for a set of similar algorithms for computing digits of other irrational numbers.

It's a little hard to think of something more convincing - particularly when a glance at the formulas makes it obvious why it works. But I suppose if that doesn't convince you, neither will I - so I'll stop here.
 
I'm not sure that follows.

Trivially, it's bounded by the difference between the current estimate and pi. The problem arises when the bound - which is itself indeterminate - coincides closely enough with a digit-boundary that it never delivers a definite answer. Which is to say, the pi-digit problem has merely been replaced with the error-bound-digit problem. Which is no great help.

I'm not yet convinced.
The terms in the infinite sum are

16^-k * [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

Note that the numerators in the four fractions add to 0, while the denominators are of the same magnitude. This should give you a hint that the part within the square brackets is very small.

When you rewrite the part between the square brackets as a single fraction, you'll see that in the numerator the k^3 terms cancel each other out, so you get k^2 as the highest power of k, while in the denominator you'll have k^4 as the highest power (I'm too lazy, and too prone to make errors, to write the whole thing out). In effect, the bracket is O(k^-2), which is enough for the "decreases very quickly" in this context.
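
If anyone wants to see that cancellation without doing the algebra by hand, here is a quick symbolic check of my own (assuming sympy is available): it combines the bracket into a single fraction and reports the highest power of k in the numerator and denominator.

Code:
# A quick symbolic check of the cancellation claimed above (assumes sympy).
from sympy import symbols, together, fraction, expand, degree

k = symbols('k', positive=True)
bracket = 4/(8*k + 1) - 2/(8*k + 4) - 1/(8*k + 5) - 1/(8*k + 6)

num, den = fraction(together(bracket))
print(degree(expand(num), k))   # 2 -- the k^3 terms have cancelled
print(degree(expand(den), k))   # 4 -- so the bracket is O(1/k^2)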
 
The only way that could be a problem is if the digit you are computing (say in base 10 for ease) turns out to be 9, followed by 9, followed by 9, etc. I suppose it's true that sequences of 9's of arbitrary length do in fact occur in pi, and therefore one can find digits for which this technique would require many more than N terms to compute. But they are extremely rare, and irrelevant to the claim that the typical Nth digit requires of order N operations to compute.

This is, I think, the problem: I've been considering the case of any N, not of a typical N. I read the case too strongly.

In the any N case it doesn't matter that the non-typical N's are extremely rare, it's enough that they exist, there are an infinite number of them, and there's no way of predicting where they are. Not in any base.

There is a paper, published in a mathematical journal, which proves the statements presented on that page. That paper has not only stood the test of nearly 20 years, it has become the basis for a set of similar algorithms for computing digits of other irrational numbers.

It's a little hard to think of something more convincing - particularly when a glance at the formulas makes it obvious why it works. But I suppose if that doesn't convince you, neither will I - so I'll stop here.

As I say, my mistake was in the strength of the case being made.
 
The terms in the infinite sum are

16^-k * [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

Note that the numerators in the four fractions add to 0, while the denominators are of the same magnitude. This should give you a hint that the part within the square brackets is very small.

k being consistently below the line, and k going to infinity, does rather convey that.

When you rewrite the part between the square brackets as a single fraction ...

Why on earth would you do that?

The point is the rate of decay as k gets larger - does it increase, or decrease? In this case it decreases, which means it is not trivial that terms become negligible.
 
In the any N case it doesn't matter that the non-typical N's are extremely rare, it's enough that they exist, there are an infinite number of them, and there's no way of predicting where they are.

I still can't figure out what you're saying.

Imagine that we're computing pi by adding terms of the series, one by one.

Are you saying that, for some digits of pi, it's never possible to know with certainty what that digit is, because maybe it will change if enough further terms are added?

Or are you just saying that, for some digits, many more terms than usual will need to be added before we're sure what that digit is?
 
k being consistently below the line, and k going to infinity, does rather convey that.

No, I think you misunderstand. He meant that it is small, even relative to 1/k.

The point is the rate of decay as k gets larger - does it increase, or decrease? In this case it decreases, which means it is not trivial that terms become negligible.

Are you forgetting that the whole thing is anyway multiplied by 1/16^k, which decays quite rapidly all by itself?
 
k being consistently below the line, and k going to infinity, does rather convey that.
Yes, but only being of O(1/k) is not enough - the harmonic series is divergent.

Why on earth would you do that?

The point is the rate of decay as k gets larger - does it increase, or decrease? In this case it decreases, which means it is not trivial that terms become negligible.
That's precisely the point of the rewrite. The single fraction, after rewriting, has a k^2 in the numerator and a k^4 in the denominator. So the whole fraction is of O(1/k^2), which means that all those fractions can be summed and the sum is smaller than pi^2/6 times some constant.

That whole point is moot anyway, as 69dodge rightly notes:
Are you forgetting that the whole thing is anyway multiplied by 1/16^k, which decays quite rapidly all by itself?
which means that we only have to establish that all terms

[ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

are bounded by some constant, say c. Then each term

16^-k * [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

is smaller than 16^-k * c and the infinite sum for k=m..infinity is

16^-m * 16/15 * c

so we have an upper bound for the sum of each tail of the series, and thus can calculate for which m we have an accuracy of 16^-N.
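
As a concrete sketch of that bookkeeping (my own illustration; the choice c = 4 is mine, justified by each bracket being at most 4/(8k+1) <= 4), here is how one could find the m that pushes the tail below 16^-N.

Code:
# Sketch of the calculation above: with the bracket bounded by a constant c,
# the tail from k = m onwards is at most 16^-m * 16/15 * c, so pick the
# smallest m that pushes the tail below the target accuracy 16^-N.
# (The choice c = 4 is my own; each bracket is at most 4/(8k+1) <= 4.)

def terms_needed(N, c=4):
    m = N
    while 16.0**-m * 16 / 15 * c > 16.0**-N:
        m += 1
    return m

print(terms_needed(10))   # 11 -- one term past N = 10 already covers the tail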
 
I think the point was this. Using hex notation, somewhere in the expansion of pi there is ...0FFFFFFFFF...., with any arbitrary number of Fs. Somewhere else there's ...1000000000... So suppose you compute 0F as the Nth and (N+1)st digits. To distinguish that from 10, you have to compute the next digit. But if you get F, you must compute the next, etc.

So I guess it's true that there exist Ns for which the time to compute the Nth digit of pi is any arbitrary multiple of N. Of course those Ns are exponentially rare, since a sequence of m Fs has a probability of 16^{-m}. So this doesn't affect the typical time.
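
For concreteness, here is a rough sketch of the kind of digit-extraction computation being discussed (my own illustration of the standard BBP-style approach, not code posted by anyone in the thread). It pulls out a few hex digits of pi starting at an arbitrary position, keeping only fractional parts along the way.

Code:
# Rough sketch of the digit-extraction idea under discussion (a standard
# BBP-style computation; my own illustration).  Returns a few hex digits of
# pi starting at position n+1 after the point.  Working modulo 8k+j keeps
# only fractional parts, so the cost of reaching position n stays modest;
# plain floats limit this toy to short runs of digits.

def bbp_hex_digits(n, count=8):
    def partial(j):
        # fractional part of the sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        d = int(x)
        x -= d
        digits += "0123456789abcdef"[d]
    return digits

print(bbp_hex_digits(0))   # '243f6a88' -- pi = 3.243f6a88... in hex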
 
JREF in ASCII code is 74 82 69 70, and that sequence first occurs starting at the 10,969,623rd digit after the decimal in pi.
 
I think the point was this. Using hex notation, somewhere in the expansion of pi there is ...0FFFFFFFFF...., with any arbitrary number of Fs. Somewhere else there's ...1000000000... So suppose you compute 0F as the Nth and (N+1)st digits. To distinguish that from 10, you have to compute the next digit. But if you get F, you must compute the next, etc.

So I guess it's true that there exist Ns for which the time to compute the Nth digit of pi is any arbitrary multiple of N. Of course those Ns are exponentially rare, since a sequence of m Fs has a probability of 16^{-m}. So this doesn't affect the typical time.

Agreed. The algorithm is linear in time in the typical case, but the worst-case behaviour is unbounded.
 
Are you forgetting that the whole thing is anyway multiplied by 1/16^k, which decays quite rapidly all by itself?

I think that's cancelled by the fact that each digit represents 1/16 of the previous one. In the construct that sol invictus described (which is what I was thinking of) this merely pushes the uncertainty further along the sequence.

My thinking has been about exceptions, and the arbitrarily long string of hex F's seemed a good one. As ddt so excellently puts it, the worst case is unbounded. Not that there is a worst case, of course; for every arbitrarily long string there is an arbitrarily longer one.
 
Actually, that site doesn't have anything to do with pi at all. It just collects sensitive information for future identity theft because humans naturally put that kind of stuff in first.

Joking, of course. Though that would be a decent scam.
LOL, I was going to make a similar comment after I realized I was volunteering personal information on that website. I'd love to see the source code! What a cool utility.
 
