sol invictus
Philosopher
- Joined
- Oct 21, 2007
- Messages
- 8,613
Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?
Those strike me as profound-sounding questions that fall apart into semantics if you look a little more closely.
If your robot is possible at all, then consciousness is nothing more than a sequence of processes (chemical or electrical, in continuous time or not - I don't think that's important) that cause the computer/human to declare "Cogito, ergo sum", behave in certain ways, pass Turing tests, etc.
If so, consciousness isn't a rigidly defined concept (have you ever actually said "Cogito, ergo sum" out loud?), any more than "identity" or "north" is. And sure, you can always find situations in some gray area where it's hard to say whether something is conscious or not, just as for "identity" and "north" (do you have the same identity if you fall asleep and wake up? have amnesia? get a total organ transplant? what's north of the North Pole? what's north of the galactic center?). But since the answer boils down to your choice of definition for the term, it's not all that interesting in the end.