Originally posted by Suggestologist<blockquote>Where do you get it from the numeric representational system? Without going to a different logical level?</blockquote>
Do you mean, what is the rule for deciding when two decimal expansions represent the same real number? Or do you mean, why is that the rule?
The rule can be given purely in terms of the numeric representational system (i.e., decimal expansions). The reason for the rule cannot, as it involves the relationship between real numbers and their decimal representations.
I believe I have answered both these questions already. But in response to each answer, you asked for the answer to the other question. When I wrote, in answer to "why":<blockquote>the definition of ".d<sub>1</sub>d<sub>2</sub>d<sub>3</sub>d<sub>4</sub>. . .", where the ds are decimal digits, is "the limit of the sequence .d<sub>1</sub>, .d<sub>1</sub>d<sub>2</sub>, .d<sub>1</sub>d<sub>2</sub>d<sub>3</sub>, .d<sub>1</sub>d<sub>2</sub>d<sub>3</sub>d<sub>4</sub>, . . .",</blockquote>you asked, "what":<blockquote>Please elaborate on how you turn this into the number line. How do you determine that number A is larger than number B in terms of their digital values?</blockquote>Then when I answered that, you again asked, "why":<blockquote>Where do you get this assumption about dual decimal expansions?</blockquote>You have both answers. Let's stop going around in circles.
Obviously, the digit sequence ".24999. . ." is not the same as the digit sequence ".25000. . .". However, digit sequences like these often appear in contexts in which it is either explicitly stated or implicitly understood that they represent real numbers. In such cases, we use the limit definition above to determine which real number is represented.
Although ".24999. . ." and ".25000. . ." differ as digit sequences, they represent the same real number when interpreted as decimal expansions.
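The general rule behind such dual expansions is that an expansion ending in repeating 9s equals the expansion obtained by carrying: increment the last non-9 digit and replace the tail with repeating 0s. A sketch of that normalization (the helper name `canonical` and its prefix-plus-tail encoding are my own illustration, not notation from the thread):

```python
def canonical(digits, tail):
    """Canonical form of the expansion .d1 d2 ... dk tail tail tail ...,
    where tail is 0 or 9. Returns (carry, digits): the repeating-0 twin,
    with carry = 1 only for .999... (which equals 1.000...)."""
    digits = list(digits)
    if tail == 0:
        while digits and digits[-1] == 0:   # strip redundant trailing zeros
            digits.pop()
        return 0, digits
    # tail == 9: drop the trailing 9s and carry 1 into the previous digit
    while digits and digits[-1] == 9:
        digits.pop()
    if not digits:                          # every digit was 9
        return 1, []
    digits[-1] += 1                         # the digit was < 9, so no overflow
    return 0, digits

# .24999... and .25000... normalize to the same canonical form,
# so they represent the same real number.
print(canonical([2, 4], 9) == canonical([2, 5], 0))
```

Two decimal expansions represent the same real number exactly when their canonical forms agree; this is the rule, stated purely at the level of digit sequences, whose justification is the limit definition above.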