Rocketdodger, have you read (and understood) Hofstadter's GEB?
I ask because your notion of "SRIP" essentially bears no relationship to the kind of ideas he builds up to in that book (albeit often using a lot of imagery and metaphor rather than anything more precise or concrete). I was under the impression that you and Pixy were very much "on the same page" when it comes to what SRIP is actually meant to be, and that this was something along the lines of what Hofstadter talks about in GEB, but it seems not. In contrast, dlorde does seem to be thinking along the lines of the ideas in GEB.
Naw, we are all on the same page, more or less. You are just confusing frames -- you are thinking that srip.cpp is somehow implied to be SRIP in a frame where it is not.
Going back to the full working srip.cpp implementation that I posted earlier: it actually has to be compiled and run on some physical machine to be conscious (according to you). Right?
Well, yes, but only because that is the only way for a "process" to occur.
But we could also easily translate that piece of code into any one of a number of other languages (and even BF) that don't directly support classes or even structures. I claim you could even translate it into a simple "physical system" along the same lines as Turing did, as below.
Instruct a person to follow a simple procedure:
"There is a box on the table in front of you. There is a piece of string glued at one end to the bottom of the box (on the outside). The other end of the string has a round red sticky ball that can be used to attach that end to other objects or even perhaps the same box. Go to the box. Find the end of the string attached to the bottom of the box and follow it to the end with the red sticky ball. If that red sticky ball is stuck to the same box that you started from then put a stone into the box (there will be some on the table), otherwise remove any stones from the box leaving it empty. Thanks."
Sure.
So, what you are saying is that when a person executes these instructions (or a mechanical device does essentially the same thing), the box/string/stone thingy is conscious while a stone is being placed into the box, but is not conscious when the box is emptied (because the string was attached to something else, or to nothing at all).
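For concreteness: srip.cpp itself isn't reproduced in this exchange, so the type and member names below are placeholders rather than the original code, but the box/string/stone procedure boils down to a single self-pointer comparison. A minimal C++ sketch:

```cpp
#include <iostream>

// A minimal sketch (names are placeholders, not from the original srip.cpp):
// the box is an object holding a pointer (the "string") that may or may not
// point back to the box itself, and a flag (the "stone").
struct Box {
    Box* string_end = nullptr;  // what the red sticky ball is stuck to
    bool has_stone  = false;    // whether a stone is currently in the box
};

// The person following the written procedure: walk the string, compare its
// far end with the box it started from, and place or remove the stone.
void follow_procedure(Box& box) {
    box.has_stone = (box.string_end == &box);
}

int main() {
    Box box;

    box.string_end = &box;      // ball stuck to the same box
    follow_procedure(box);
    std::cout << "stone in box: " << box.has_stone << '\n';  // prints 1

    box.string_end = nullptr;   // ball stuck to nothing
    follow_procedure(box);
    std::cout << "stone in box: " << box.has_stone << '\n';  // prints 0
}
```

The entire "computation" is the one equality test between the pointer and the address of the object holding it; everything else is bookkeeping.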
Not exactly -- this is where you are confusing frames.
Something there is exhibiting SRIP, but what frame it is in and what exactly the system is composed of are not clear. I would say that if you are looking at the current frame, the system needs to include the person or machine running the computations, but then all of a sudden the system is always referencing itself. If you look at some other frame, abstracted away from what is "doing" the computations, then SRIP of the kind I discussed becomes evident.
You might not think this is a valid answer, but consider that if we ran a full-precision simulation of you, the computer would be the thing "doing" the calculations at the lowest level, yet "you" would be fully conscious in the simulation (or at least you would act like it, in terms of SRIP). In fact, since we don't know the "cause" of the physical laws of our universe, we can't say anything "does" anything on its own in the sense you are speaking of. There could just as well be some deeper cause of particle behavior beyond what we have access to, and all we see are the results, which we merely find patterns in (and call those patterns mathematics). So it would be correct for me to say that the "universe" "causes" a car to roll, just like the person in your example "caused" the stone to be in the box, and so on, when viewed from certain frames. Since we are in the same frame as the car, however, it makes more sense to us to say that the car just "rolls" -- we are not exposed to whatever "caused" it to happen.
However, the C++ reference to a class instance or a piece of string tied to a box doesn't achieve anything in terms of creating a conscious system. The box doesn't know or care whether it has a piece of string tied to it, or whether the other end of that string is connected back to it. Ditto for a C pointer to a structure, or more generally some value in a particular memory location in your PC, as a result of compiling/translating srip.cpp into another programming language or into machine code for some architecture. It's only the additional external interpretation that you (the programmer) hold in your mind that could possibly want to call that a "self-reference", let alone SRIP.
Well, yeah -- because we are humans and we came up with the phrase SRIP. For us to call anything SRIP requires interpretation.
But don't you think, for example, that bacteria would get along just fine if humans hadn't come up with the phrase SRIP?
You are confusing a set of behaviors with the human representation of such sets of behaviors. The actual behaviors exist independently of any human's ability to categorize and recognize them.
If you still believe that srip.cpp is conscious when running, then how about "compiling" that small chunk of code to an artificial neural network for us (without any of the baggage buried in C++ that isn't essential to this claimed example of SRIP)? You sound like you know quite a lot about such things, so I'm sure that won't be hard for you. However, I'm also sure that if you do this you'll see how completely vacuous your notion is, especially if you let someone else try to interpret what the resulting ANN is "doing" when it runs, without you giving them any "hints". You seem to be confusing your own interpretation of what things mean with what they might actually "mean" if they were just left to "do their own thing".
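What such a "compilation" might look like is worth sketching, with the caveat that this is only one way to do it and every weight, threshold, and name below is an assumption, not anything posted in the thread. Supposing the essential computation in srip.cpp is the pointer comparison sketched earlier, and representing the two addresses as 4-bit identifiers, the check can be hand-wired from threshold units (per-bit XNOR gadgets feeding a final AND unit):

```cpp
#include <array>
#include <iostream>

// Heaviside step "activation" for a threshold unit.
static int step(double x) { return x >= 0.0 ? 1 : 0; }

// One bit-equality (XNOR) gadget built from three threshold units.
static int bit_equal(int a, int b) {
    int h_and = step(a + b - 1.5);          // fires only if both bits are set
    int h_or  = step(a + b - 0.5);          // fires if either bit is set
    return step(2.0 * h_and - h_or + 0.5);  // fires iff a == b
}

// The whole "network": fires iff the two 4-bit identifiers match.
static int self_reference_net(const std::array<int, 4>& string_end_id,
                              const std::array<int, 4>& own_id) {
    int matching_bits = 0;
    for (int i = 0; i < 4; ++i)
        matching_bits += bit_equal(string_end_id[i], own_id[i]);
    return step(matching_bits - 3.5);       // AND of the four equality units
}

int main() {
    std::array<int, 4> own = {1, 0, 1, 1};
    std::cout << self_reference_net({1, 0, 1, 1}, own) << '\n';  // 1: "stone goes in"
    std::cout << self_reference_net({0, 1, 1, 0}, own) << '\n';  // 0: "box stays empty"
}
```

Handed only the weights and thresholds, an outside observer sees an equality test over eight input lines; nothing in the numbers marks one set of inputs as the network's "self", which is exactly the interpretation problem being raised here.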
This is what you don't understand. The set of behaviors -- or behavior, since a set of behaviors is also a behavior -- that we call SRIP exists independently of your ability to see it. When you look at a computer running a simulation, all you see at the lowest level is transistor states flipping back and forth. That's because in our frame, the simulation is merely a bunch of transistors -- the computer. But on top of that there is a simulation frame, where the things in the simulation exist in a different sense than in the frame below. And the fact is that regardless of how the simulation frame is supported -- whether it be a computer or a guy with boxes, a string, a ball, etc. -- as long as the causality in the frame originates in the frame from the perspective of systems in the frame, behaviors can be said to exist in that frame just as they can be said to exist in our own.
This is quite obvious if you just take a second to think about it. You yourself have no direct access to how your neurons work. You don't even have direct access to your own neurons, or even to groups of them. They are in a frame below what you consider your consciousness. The frame you exist in and observe the world from is not the same frame as the one where particles exist. When we look at images from a tunneling electron microscope or from particle accelerator collisions, we are looking at a mapping of one of those lower frames onto our observational frame. The world of particles, and even of our own neurons, is and always will be something we can only gain access to indirectly. Furthermore, if you think about it, we won't ever have access to other people's neurons in the sense that we have access to our own frame. The way we, as SRIP in a certain frame, have access to others is via human communication and interaction. You can stare at someone's MRI all you want and never know exactly what is going on in there, yet when you speak with them you are instantly aware of their consciousness.
And that is the whole basis for the computational model. We don't think that individual particles have anything to do with consciousness, even though obviously a biological neural network is nothing but a whole bunch of particles. There are behaviors above that level, in the frame of the network itself, that we think are responsible. If a neuron were "implemented" by a guy with a set of boxes and a ball on a string, it wouldn't matter to the consciousness that emerged from the network. It might matter to you, observing it from the outside, but that is irrelevant.
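To put a toy example behind the substrate claim (this sketch and its names are mine, not anything posted in the thread): two "implementations" of the same threshold neuron, one doing plain arithmetic and one narrated in terms of the guy with the boxes and stones. Anything built on top of the common interface behaves identically either way, which is the sense in which the frame above can't tell what is "doing" the work below.

```cpp
#include <iostream>
#include <vector>

// A neuron viewed only through its input/output behavior.
struct Neuron {
    virtual ~Neuron() = default;
    virtual int fire(const std::vector<int>& inputs) const = 0;
};

// "Substrate" 1: plain arithmetic -- sum the inputs, compare to a threshold.
struct ArithmeticNeuron : Neuron {
    int threshold;
    explicit ArithmeticNeuron(int t) : threshold(t) {}
    int fire(const std::vector<int>& inputs) const override {
        int sum = 0;
        for (int x : inputs) sum += x;
        return sum >= threshold ? 1 : 0;
    }
};

// "Substrate" 2: the guy with boxes -- drop a stone in the box for every
// active input, then report whether the pile reaches a marked line.
struct BoxAndStoneNeuron : Neuron {
    int marked_line;
    explicit BoxAndStoneNeuron(int line) : marked_line(line) {}
    int fire(const std::vector<int>& inputs) const override {
        int stones_in_box = 0;
        for (int x : inputs)
            if (x == 1) ++stones_in_box;   // one stone per active input
        return stones_in_box >= marked_line ? 1 : 0;
    }
};

int main() {
    std::vector<int> spikes = {1, 0, 1, 1};
    ArithmeticNeuron  a(2);
    BoxAndStoneNeuron b(2);
    // Same inputs, same outputs: the frame above can't tell the substrates apart.
    std::cout << a.fire(spikes) << ' ' << b.fire(spikes) << '\n';  // prints "1 1"
}
```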
For what it is worth, I find it interesting that you question whether or not I have read GEB when clearly you have missed the entire point of the most famous passage from that book -- the conversation between the Tortoise, Achilles, and the Anteater about talking to an anthill.