• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Open-Ended Math Questions?

If I might turn the heat up just a tad.
Earlier in the thread you mentioned the simple concept of a repeating decimal.

Appreciate that you are dealing with a simpleton here.
I know what 9 is.
I can make a stab at .9, even at .9999, but I have no idea at all what .999... is.
To me, 0.9 is "nine tenths of one" and so on. From the context of the thread, .9999... appears to be an infinite number (not infinitely large; infinitely long). I don't understand how, without actually expanding it to infinity, we can know what, if anything, an infinitely long number is equal to. We're back at Turing's halting problem.

Either .999... is one, or it is not. If it is one, then what is .999... / 2?
Cognitive dissonance FTW? Do you understand the concept of 1/3? How about 0.333...? Or how about 1.000...?

Also it has absolutely nothing to do with the halting problem. Or if it does, please explain. I'd like to learn.
 
You are not the first to accuse me of using something shifty in there.
No no, I'm not accusing you of being shifty. It's the N=-N example that I'm saying seems shifty to me (if you recall, my objection was that if I did the subtraction in that one, I'd subtract the 9 from the 90 etc., not leave the 9 hanging out to the left). Your explanation of your example is over my head.

But I still don't see why the debate doesn't end when one subtracts the value from 1 and gets 0.

Can anyone explain to me why that doesn't settle the matter?
 
The problem isn't with the algebra, but with the undefined result of subtracting one divergent sum from another. And, of course, 0.99999... converges, but the infinite quantities above don't, hence the difference.

Ding, ding, ding!

It's the difference between convergent and divergent series.
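A quick numeric sketch of that difference (Python, exact fractions; illustrative only, not part of anyone's proof):

```python
from fractions import Fraction

# Partial sums of the convergent series 9/10 + 9/100 + 9/1000 + ...
# close in on 1: after n terms the sum is exactly 1 - 10**(-n).
conv = [sum(Fraction(9, 10**k) for k in range(1, n + 1)) for n in (1, 3, 6)]
assert all(1 - s == Fraction(1, 10**n) for s, n in zip(conv, (1, 3, 6)))

# Partial sums of the divergent series 9 + 90 + 900 + ... just grow
# without bound -- there is no number they approach.
div = [sum(9 * 10**k for k in range(n)) for n in (1, 3, 6)]
print(conv)  # [Fraction(9, 10), Fraction(999, 1000), Fraction(999999, 1000000)]
print(div)   # [9, 999, 999999]
```

The convergent partial sums get as close to 1 as you like; the divergent ones get as large as you like.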
 
Am I correct in saying that your problem with infinity is that it's not physical? I don't think I understand what your viewpoint is.

Mathematicians could not care less if numbers do not exist. They leave these questions to philosophers. :p Mathematics is about asking questions of the form "given assumptions X, Y, Z, what facts follow?" - and hopefully assumptions X, Y, Z are not disprovable or contradictory.

The classic example that most people object to is letting i be the square root of -1. What the heck is that? You can't have i cows. It makes no sense...! But if you drop these qualms and start thinking, "ok, so what if i does exist?" then you find it has fairly interesting and useful properties - in the same way that negative numbers are interesting and useful, even if you can't really imagine -5 cows (your bank would certainly argue so if you take out an overdraft!).

So, fair enough, perhaps numbers don't exist. But they're still damn useful. :D
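For what it's worth, Python ships with complex numbers built in (`1j` denotes i), so you can play with the "impossible" number directly; a tiny illustration:

```python
# i squared really is -1, even though you can't have i cows:
i = 1j
assert i * i == -1

# And it's useful: multiplying by i rotates a point 90 degrees
# counterclockwise in the plane, e.g. (3, 4) -> (-4, 3).
z = 3 + 4j
assert z * i == -4 + 3j
```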

Yes. But this is a sceptic board. Your argument here applies equally to the concept of "spirit". Very useful in many situations. Explains a lot.
But is it correct? Merely because we can imagine something does not mean it exists. But I don't think we actually can imagine infinities and .999... is an infinity any way we look at it.
I'm pretty sure that 0.9 is not equal to 1 and nor is .99 or .999 or .9999
I'm willing to bet that .99N, where N is any number I can think of, is not equal to one. In order to suppose that .999... is equal to one, I must either redefine the word "equal" or I must assume the existence of a number utterly different from any other number. That's a leap of faith. Just like believing in spirit.
Why should I be sceptical in one case and not the other?


A negative value assigned to a loan is no more "real" than assigning negative charge to an electron. It's a symbolic convention, no more- a human concept.
The quantity of money is always real. What has changed is not the money, but the relationship between lender and borrower. I don't owe the bank 500 negative pounds. I owe them 500 pounds.



0.5

If you do the math, you will find that the result you would arrive at from 0.999... is 0.4999...

And of course, if 0.999... = 1, then 0.0999... = 0.1
Then you just need to add 0.4 regardless of how you have been writing it.
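The tail can be handled as a geometric series; a sketch in Python with exact fractions (my own illustration, not the poster's working):

```python
from fractions import Fraction

# 0.4999... = 0.4 + 0.0999..., and the repeating tail is the geometric
# series 9/100 + 9/1000 + ... with sum (9/100) / (1 - 1/10) = 1/10.
tail = Fraction(9, 100) / (1 - Fraction(1, 10))
assert tail == Fraction(1, 10)                    # 0.0999... = 0.1
assert Fraction(4, 10) + tail == Fraction(1, 2)   # 0.4999... = 0.5
```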

Show me your working, one step at a time, please. How do you do that math? All the way.


ETA - Greedy Algorithm - Remind me, what does FTW stand for? I dunno about cognitive dissonance; I'm perfectly content with "stupidity".

Look- either .999... is defined as having an infinite supply of nines or it is not. Which is it?
 
Last edited:
No no, I'm not accusing you of being shifty. It's the N=-N example that I'm saying seems shifty to me (if you recall, my objection was that if I did the subtraction in that one, I'd subtract the 9 from the 90 etc., not leave the 9 hanging out to the left). Your explanation of your example is over my head.

But I still don't see why the debate doesn't end when one subtracts the value from 1 and gets 0.

Can anyone explain to me why that doesn't settle the matter?
Same thing as before. You are assuming things are true without justification. Why should 1.000... - 0.999... = 0? Just because you intuit that 0.000... = 0? I think that's good enough; some people won't.
 
Why should 1.000... - 0.999... = 0? Just because you intuit that 0.000... = 0?
No. Because if the 0 on the right repeats endlessly -- which it will if I continue to perform the same operation endlessly which yields it -- then there's nothing but zeros. And nothing-but-zeros = 0.

Got nothing to do with intuition. Everything to do with what happens when you take out the paper and pencil.
 
Also it has absolutely nothing to do with the halting problem. Or if it does, please explain. I'd like to learn.

Just to annoy AS and his seventh graders even more, because I'm vicious that way...

The computability of numbers is defined in terms of having an algorithm to compute a number to a chosen precision, not exactly. For example, pi is transcendental, and there's no algorithm for calculating it exactly, but once you choose a precision, there's an algorithm guaranteed to halt in finite time that calculates pi to that precision. Same with e, the base of the natural logarithm.

All integers, rationals, and algebraic numbers are computable, and so are some transcendental (non-algebraic) numbers, such as pi and e.

Computable numbers are countable (because every algorithm can be assigned a natural number called a Gödel number) while the reals are uncountable, so in that sense, there are a lot more noncomputable transcendental numbers than there are computable numbers. Computable numbers, however, are not complete (the real field is complete by definition).
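To make "computable to a chosen precision" concrete, here's a sketch (Python, exact rational arithmetic; Machin's formula pi = 16·arctan(1/5) − 4·arctan(1/239) is standard, but the helper names are mine) of an algorithm that provably halts once the requested precision is reached:

```python
from fractions import Fraction

def arctan(x: Fraction, eps: Fraction) -> Fraction:
    """Sum the alternating series x - x^3/3 + x^5/5 - ... until the next
    term drops below eps; for 0 < x < 1 the truncation error is then
    below eps (alternating series bound), so this always halts."""
    total, power, k = Fraction(0), x, 0
    while True:
        term = power / (2 * k + 1)
        if term < eps:
            return total
        total += term if k % 2 == 0 else -term
        power *= x * x
        k += 1

def pi_digits(d: int) -> int:
    """First digits of pi, returned as the integer floor(pi * 10**d)."""
    eps = Fraction(1, 10 ** (d + 2))  # error budget per arctan call
    p = 16 * arctan(Fraction(1, 5), eps) - 4 * arctan(Fraction(1, 239), eps)
    return int(p * 10 ** d)

print(pi_digits(10))  # 31415926535
```

Each call halts as soon as the series terms shrink past the requested tolerance, which they provably do; that's exactly what makes pi computable despite being transcendental.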
 
See below. You've done the following, at least: a) invoked infinity casually, making everybody cringe, b) assumed 0.9999... exists and makes sense (i.e. doesn't contradict anything) and c) assumed it has unique value. (Obviously, these are all OK, but you have to justify b) and c) in particular.)

OK.

a) I simply wouldn't call it infinity, because it isn't. What we have is a decimal that repeats "infinitely" (the adverb, not the noun). I wouldn't use that term either. At the 7th grade level, I would say that the 9 repeats "forever." I think that's a temporal concept that incorporates a more intuitive sense of infinite or infinitely. We can grasp that there is no such thing as the day after forever. Forever isn't simply "a long time." There is no end to it. Hence, I think 7th grade students who are capable of doing pre-algebra can grasp the notion that the 9 never stops. You cannot simply go all the way out to the end and then add another 9, because there is no end. They get that -- well, the ones who are smart, but not too clever by half do, anyway.

You think that's invoking infinity casually? OK. We can agree to disagree. I think it's a handy way of avoiding causing everyone's head to explode.

b) I've "assumed" .999... exists? No, I haven't. I have defined N to be .999... That's a given. It's the main axiom in the proof. You don't have to prove axioms. ETA: Oops, now I've gone and misapplied the term "axiom." I'll stick with given.

What could it possibly contradict? Are you going to postulate that every number represented by a decimal string must eventually terminate? If so, then I can easily prove you wrong by taking the string out one digit further. I can do so indefinitely, which of course means we can make one infinitely long. Again, in this context, I prefer to say that we can keep adding digits to the end "forever." It's more intuitive, in my opinion.

c) I really don't understand what you mean here. Every real number has a unique value. That's what it means to be a real number. You aren't going to try to assert that under certain conditions, in the set of real numbers sometimes 2 = 3, are you?

[Here's a more formal definition of "real number:" In mathematics, a quantity that can be expressed as a finite or infinite decimal expansion. The counting numbers, integers, rational numbers, and irrational numbers are all real numbers. Real numbers are used in measuring continuously varying quantities (e.g., size, time), in contrast to measurements that result from counting. The word real distinguishes them from the imaginary numbers. ]


What this proof demonstrates to younger students is that there is more than one way to represent a number notationally. .999... is simply a very well disguised notation for the very same number as 1.

Numbers themselves do not change value. They simply are what they are. Pi is always Pi. 0 is always 0.

Please note that in the set of real numbers, infinity is not a number. It doesn't mean "really big." It means unbounded, without limit. It is undefined numerically.

.999... does exist" is a poor argument. So why is .999... justified and not 999...? Plus, did you prove it has unique value?

The set of real numbers is defined to include numbers which can be expressed as infinite decimal expansions. On the other hand, 999... is understood to repeat infinitely and become infinitely large; it's "infinity." As others have noted, it is divergent and has no limit. That's another way of saying it isn't defined within the set of real numbers, and it is not real.

ETA: Oops. That last paragraph is incorrect. An infinitely large number is real, but it's undefined. It's not an imaginary number.

Again, the unique value question makes no sense to me. A number's value is what it is. You can perform a function on it and the result of having done so can be different from the original number, but that's the result of having performed the function, not because the number itself underwent some kind of transformation.

Answer without using sequences and limits, please. ;)

Limits? I don't need no stinkin' limits.

(This is my point. Your proof is essentially correct, but you need to justify some of your assumptions - assumptions that a "7th grader" will not spot, and doesn't care about. The language of limits and sequences makes such justification unnecessary, because such language is hyperprecise. Also, I hope I'm not being patronising or condescending - I apologise if so. I don't mean to be. I have a certain lack of way with words...)

It's not "essentially" correct. It is correct, without qualification.

I don't think you're being patronising or condescending. I hope you don't think I am. I'm being stubborn and resolute, but not condescending.

AS
 
No. Because if the 0 on the right repeats endlessly -- which it will if I continue to perform the same operation endlessly which yields it -- then there's nothing but zeros. And nothing-but-zeros = 0.

Got nothing to do with intuition. Everything to do with what happens when you take out the paper and pencil.
Ah, but that's exactly your intuition at work. Watch this argument: 0.999... is less than 1 because 0.9 is less than 1, 0.99 is less than 1, and if I continue to perform the same operation endlessly every number I write down will be less than 1. QED

But that's false, isn't it? You find 1-0.999...=0 to be convincing because it's brought the problem to a point where you intuitively understand that the axioms are true. That point is different for different people.
 
Ah, but that's exactly your intuition at work. Watch this argument: 0.999... is less than 1 because 0.9 is less than 1, 0.99 is less than 1, and if I continue to perform the same operation endlessly every number I write down will be less than 1. QED

But that's false, isn't it? You find 1-0.999...=0 to be convincing because it's brought the problem to a point where you intuitively understand that the axioms are true. That point is different for different people.
No, I find 1-0.999...=0 to be true because if I pick up a pencil and paper and write it out like I was taught in 3rd grade I'm going to come up with zeros forever.

You're the one making an assumption. Just because 0.99 is less than 1 doesn't make 0.9999... less than one. They're different animals. I pick up my fat pencil and my Blue Horse tablet and subtract 0.99 from 1.00, I get a value greater than zero. Not so for 0.9999.... I'm not thinking in terms of axioms.
 
OK.

a) I simply wouldn't call it infinity, because it isn't. What we have is a decimal that repeats "infinitely" (the adverb, not the noun). I wouldn't use that term either. At the 7th grade level, I would say that the 9 repeats "forever." I think that's a temporal concept that incorporates a more intuitive sense of infinite or infinitely. We can grasp that there is no such thing as the day after forever. Forever isn't simply "a long time." There is no end to it. Hence, I think 7th grade students who are capable of doing pre-algebra can grasp the notion that the 9 never stops. You cannot simply go all the way out to the end and then add another 9, because there is no end. They get that -- well, the ones who are smart, but not too clever by half do, anyway.

AS- OK, but then the problem can NEVER be solved, because you have shifted the assumption into infinite time

You think that's invoking infinity casually? OK. We can agree to disagree. I think it's a handy way of avoiding causing everyone's head to explode.

b) I've "assumed" .999... exists? No, I haven't. I have defined N to be .999... That's a given. It's the main axiom in the proof. You don't have to prove axioms.

But surely they must make sense in observational terms, or is mathematics actually metaphysics?

What could it possibly contradict?
All experience.

Are you going to postulate that every number represented by a decimal string must eventually terminate? If so, then I can easily prove you wrong by taking the string out one digit further. I can do so indefinitely, which of course means we can make one infinitely long. Again, in this context, I prefer to say that we can keep adding digits to the end "forever." It's more intuitive, in my opinion.
Fine- but does this not itself tell us something is wrong? In the real world, quantities do not behave this way- and what is number if not a measure of quantity?

c) I really don't understand what you mean here. Every real number has a unique value. That's what it means to be a real number. You aren't going to try to assert that under certain conditions, in the set of real numbers sometimes 2 = 3, are you?
Then why assert .999... = 1? Either there are two unique real numbers or there are not. If they are unique they are unequal. If they are equal, how can there be two equal real numbers?

[Here's a more formal definition of "real number:" In mathematics, a quantity that can be expressed as a finite or infinite decimal expansion. The counting numbers, integers, rational numbers, and irrational numbers are all real numbers. Real numbers are used in measuring continuously varying quantities (e.g., size, time), in contrast to measurements that result from counting. The word real distinguishes them from the imaginary numbers. ]


What this proof demonstrates to younger students is that there is more than one way to represent a number notationally. .999... is simply a very well disguised notation for the very same number as 1.
Then why bother?

Numbers themselves do not change value. They simply are what they are. Pi is always Pi. 0 is always 0.

Please note that in the set of real numbers, infinity is not a number.
Then how many "9"s actually are there in .999...?

It doesn't mean "really big." It means unbounded, without limit. It is undefined numerically.
And how do we fit that into the real universe?



The set of real numbers is defined to include numbers which can be expressed as infinite decimal expansions. On the other hand, 999... is understood to repeat infinitely and become infinitely large; it's "infinity." As others have noted, it is divergent and has no limit. That's another way of saying it isn't defined within the set of real numbers, and it is not real.

This may make sense in mathematics. I don't think it does in English. If infinity is not a number, how can anything repeat infinitely?

Again, the unique value question makes no sense to me. A number's value is what it is. You can perform a function on it and the result of having done so can be different from the original number, but that's the result of having performed the function, not because the number itself underwent some kind of transformation.



Limits? I don't need no stinkin' limits.



It's not "essentially" correct. It is correct, without qualification.

I don't think you're being patronising or condescending. I hope you don't think I am. I'm being stubborn and resolute, but not condescending.

You ain't talking to me here, I'm aware. The really funny thing about threads like this to me (being, like Piggy, a word person) is that it's so obvious to you that I'm full of it and it's so obvious to me that you are.
I assume at least one of us must be wrong, but I keep worrying if we both are.
Anyway, I've just had a 26 hour day, half of it on planes and I'm off to bed. No doubt this one will still be going years hence. Evenin' all.-ss

AS
.................
 
Show me your working, one step at a time, please. How do you do that math? All the way.

ETA-Greedy Algorithm- Remind me- what does FTW stand for. I dunno about cognitive dissonance, I'm perfectly content with "stupidity".

Look- either .999... is defined as having an infinite supply of nines or it is not. Which is it?
"for the win"

.999... has an infinite supply of nines. .333... has an infinite supply of threes. .000... has an infinite supply of zeros.

Doing the math all the way would take more space than already used in the entirety of this thread, because I'd have to convince you that the natural numbers make sense, then the integers, then the rationals, then the reals. Which of those do you have an issue with?
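If it helps, each of those repeating decimals can be evaluated as a geometric series; a quick sketch (Python, exact fractions; the helper name is my own):

```python
from fractions import Fraction

def repeating(digit):
    """0.ddd... as a geometric series:
    d/10 + d/100 + d/1000 + ... = (d/10) / (1 - 1/10) = d/9."""
    return Fraction(digit, 10) / (1 - Fraction(1, 10))

assert repeating(9) == 1               # the infinite supply of nines sums to 1
assert repeating(3) == Fraction(1, 3)  # .333... = 1/3
assert repeating(0) == 0               # .000... = 0
```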

b) I've "assumed" .999... exists? No, I haven't. I have defined N to be .999... That's a given. It's the main axiom in the proof. You don't have to prove axioms.

What could it possibly contradict? Are you going to postulate that every number represented by a decimal string must eventually terminate? If so, then I can easily prove you wrong by taking the string out one digit further. I can do so indefinitely, which of course means we can make one infinitely long. Again, in this context, I prefer to say that we can keep adding digits to the end "forever." It's more intuitive, in my opinion.


It's not "essentially" correct. It is correct, without qualification.
You have assumed .999... exists. I can say "define N to be the smallest positive number" and get in big trouble. You don't have to prove axioms, but if your axioms are false or ill-defined in the bigger picture, then your proof will not be valid in the bigger picture. Your proof says "if .999... exists, then it equals 1" (actually that's still a simplification, but anyway...).

Calling infinity forever does nothing but make it more intuitive to some people and less to others. A rose by any other name?

It is correct with qualification. Evidence? Prepend "define M to be 0.888... and define M=N" to your proof, then append "therefore 0.888...=1" to the end. I have just proved that "if .888... exists and if .999... exists and if .888...=.999... then .888... equals 1". Does .888... equal 1? Of course not, because we need to check the assumptions. Same thing with your proof.
 
It is NOT a formal proof! The reason being that you're assuming that you know what it means to subtract infinite decimal expansions from each other. Mathematicians tend not to be very happy when you invoke infinity, because it can be very counterintuitive.

So, for that reason, you have to clarify exactly what you mean by 0.999..., without using infinity, to formalise the proof.

I don't see why infinity is such a problem with formal proof. It's absolutely required for proofs in real analysis.

I guess if I wanted to do an analytical proof I'd start by noting that 0.99999... = 1 - lim_{x->inf} 10^(-x).

Now do a quick epsilon-N proof to show that given any epsilon, there exists an N such that x > N implies 10^(-x) < epsilon:

Let f(x) = 10^(-x). Given any epsilon > 0, choose N = -10*log(epsilon) (N is strictly positive for epsilon < 1). Then 10^(-N) = 10^(10*log(epsilon)) = epsilon^10. For epsilon < 1, epsilon^10 < epsilon. Thus given any epsilon > 0, there exists an N such that x >= N implies 10^(-x) < epsilon. Hence f converges to 0 as x -> inf.

Since f(x) converges to 0, define h(x) = 1 - f(x). So lim_{x->inf} h(x) = 0.999... = 1 - lim_{x->inf} f(x) = 1 - 0 = 1.

Hence, 0.9999... = 1.

QED
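A quick numerical sanity check of that epsilon-N choice (Python; just spot-checking the inequality, not a proof):

```python
import math

# The proof picks N = -10*log10(eps), so 10**(-N) = eps**10,
# which is below eps whenever 0 < eps < 1.
for eps in (0.5, 1e-3, 1e-9):
    N = -10 * math.log10(eps)
    assert N > 0                    # N is strictly positive for eps < 1
    assert 10 ** (-N) < eps         # 10**(-N) = eps**10 < eps
    assert 10 ** (-(N + 5)) < eps   # and every x >= N works too
```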
 
For christ sakes guys... is there anyone here disputing that .9999... = 1?

That topic has been discussed to D E A T H this time, last time, the time before that, the time before that ... and so on for an apparently infinite number of times :)
 
What you and GA seem to misunderstand is that my proof is not trivial. It is not a bunch of magical mystical fairy dust nonsense. It is a true algebraic proof that .999... equals 1. That's all it is. It isn't expressed in terms of series, so please stop trying to redefine it in those terms.

Actually, it is you that has trivialised this proof. Arithmetically, the proof is entirely valid... if you have constructed a rigorous and consistent concept of real numbers.

You have assumed that the mathematics of real numbers has the same rules as the arithmetic of rational numbers. Do you have proof of that? N.b. "it's obvious", "it just is", "that's a stupid question" aren't proofs.

In fact a mathematically rigorous proof of the above does exist (try http://hemsidor.torget.se/users/m/mauritz/math/num/reaar.htm); in it, reals are represented by Cauchy sequences and arithmetic involving reals is defined in those terms.

Once you have defined what a real is, and how arithmetic involving reals is accomplished, you can now move on to using reals in your 'proof'.

What the hell is ...999999

It's a number with an infinite number of 9s before the decimal point. If you can have an infinite number of digits in a number, why can't I? If you say my number's nonsensical, why isn't yours? This is why your proof is incomplete, i.e. not rigorous. You must first prove that a number with infinite digits behaves in a consistent manner under arithmetical operations. Just saying so isn't a proof, mathematical or otherwise.

P.S. Just noticed the Google result 3 down from torget.se was an 'Ask Dr. Math' question from mathforum.org. He actually proposes your proof (after demonstrating that 0.999... = 1 using series), but with the caveat "[a]fter you have constructed the reals (Cauchy sequences are well suited for this case [...])" (link: http://mathforum.org/dr.math/faq/faq.0.9999.html). Boo-yah!
 

I particularly like the bit, "So in modern mathematics, the string of symbols 0.9999... = 1 is understood to mean "the infinite sum 9/10 + 9/100 + 9/1000 + ..."."

Silly mathematicians, as AS said, "It isn't expressed in terms of series, so please stop trying to redefine it in those terms."
 
I don't buy the idea of open-ended math problems. Call me a Math-Nazi, but math is not a fuzzy-wuzzy subject. When you apply math to a problem, it's not because you're trying to form an opinion. It's because you want an answer.
 
No, I find 1-0.999...=0 to be true because if I pick up a pencil and paper and write it out like I was taught in 3rd grade I'm going to come up with zeros forever.
Ok, but...

The definition of -1 is that it's the number which yields 0 when added to 1. And what do you get if you add 1 to ...9999 in the usual 3rd grade way?

:D

Actually, that's not as silly as it might seem. There's no reason we couldn't use "...9999" to represent -1, and likewise "...9998" for -2, "...9997" for -3, etc. Computers commonly use a method called "two's complement" to represent negative numbers, which is basically the same thing, just in binary instead of decimal. In fact, it makes more sense than sticking a dash in front of a number to make it negative, because then there's just a single way to add numbers regardless of their signs (namely, column by column in the usual way), instead of having to remember a separate rule, which involves subtraction, for adding together a positive number and a negative number.
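That analogy can be poked at directly; a small sketch (Python) of "...999 stands for -1" in fixed-width arithmetic, alongside binary two's complement:

```python
# Working modulo 10**k is like keeping only the last k digits of an
# infinitely long decimal. In that world, -1 really is a string of nines:
k = 6
assert (-1) % 10**k == 999999       # ...9999 plays the role of -1
assert (-2) % 10**k == 999998       # ...9998 plays the role of -2
assert (999999 + 1) % 10**k == 0    # adding 1 carries off the end: ...999 + 1 = 0

# Two's complement is the same construction in base 2 (8 bits here):
assert (-1) % 2**8 == 0b11111111
assert (-3) % 2**8 == 0b11111101
```

The single column-by-column addition rule works uniformly because negatives are just large residues, which is exactly why hardware uses this representation.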
 
I particularly like the bit, "So in modern mathematics, the string of symbols 0.9999... = 1 is understood to mean "the infinite sum 9/10 + 9/100 + 9/1000 + ..."."


It occurs to me the primary disconnect between us is not that you have slam dunked me (See your "Boo-yah!" comment above), but that you and some others here insist on addressing this from the perspective of university level mathematics. Fine.

On the other hand, I have made it as clear as I know how that I first learned the algebraic proof in the 7th grade, when I was 12, and that I have been discussing it in the context of pre-algebra students learning it.

So your comment about "In modern mathematics..." is irrelevant in the context in which I have presented the proof. I understand that mathematicians define .999... as the sum of the sequence of numbers as you expressed it. For the umpteenth time now, 7th graders don't need to know that.

Silly mathematicians, as AS said, "It isn't expressed in terms of series, so please stop trying to redefine it in those terms."

You think you've shown me to be a fool, but you're wrong. Please understand the context in which I wrote that. "It isn't expressed in terms of series..." refers to how .999... is expressed and understood within the confines of the algebraic proof. That is a true statement.

Those who keep criticising the proof as not rigorous, etc., are viewing it from the perspective of calculus or real analysis. They are right in that context. They are wrong in the context of 7th grade pre-algebra. That's why I told them to please stop trying to redefine the 7th grade understanding of .999... in real analysis terms of series and limits.

I've never argued otherwise in this thread.

AS
 
You have assumed .999... exists.

I don't understand why this should be any more problematic than assuming 1 exists.

We are talking about real numbers. Pre-algebra students have already been using decimals for some time before pre-algebra. They are familiar with the special case of multiplying a decimal by 10 and how to perform that operation.

Do we have to prove to 7th grade students that 1 exists? They already have been taught decimals, so what in the Dickens is so difficult to grasp about .9? What about .999? It is a fairly easy step to get from there to .999...forever. Why anyone at the 7th grade level would question the existence of such a well enough defined number is simply beyond me.

I can say "define N to be the smallest positive number" and get in big trouble.

Yes, you could, but then you would be firmly into real analysis and number theory territory. Pre-algebra students are not at that level.

That is a much different prospect than an infinitely repeating decimal.

Do high school students have such a hard time accepting that Pi contains an infinite string of decimals, and is thus incalculable? I don't think so, but I'm not a high school geometry teacher.

Does anyone seriously doubt that Pi exists? I've never seen any proof of any theorem in geometry that begins with "Assume Pi exists." Why then, are you imposing such a burden on the proof that .999... = 1?

You don't have to prove axioms, but if your axioms are false or ill-defined in the bigger picture, then your proof will not be valid in the bigger picture. Your proof says "if .999... exists, then it equals 1" (actually that's still a simplification, but anyway...).

I don't agree that .999... is ill defined.

Calling infinity forever does nothing but make it more intuitive to some people and less to others. A rose by any other name?

Point taken. Some people simply aren't going to grasp infinity at all. I get that. Indeed, I suppose most people think of "infinity" as a "really big number." We hear "infinity plus one" a good bit, but mathematicians recognize that phrase as nonsensical.

It is correct with qualification. Evidence? Prepend "define M to be 0.888... and define M=N" to your proof, then append "therefore 0.888...=1" to the end. I have just proved that "if .888... exists and if .999... exists and if .888...=.999... then .888... equals 1". Does .888... equal 1? Of course not, because we need to check the assumptions. Same thing with your proof.

I have no idea what you mean by that. You seem to think that the proof that .999... = 1 rests on assumptions about the existence of .999.... That's not true. .999... exists because it is 1. It's just a notational difference. Do I have to prove that 1 exists? If so, then my entire study of math was for naught, as I've never seen anyone try to prove that 1 exists. I think it's axiomatic, and if you are going to do away with such axioms, then math is meaningless.

(I think philosophers can be a real pain in the ass sometimes, and this is one of them).

AS
 
