
Explain consciousness to the layman.

The idea I am trying to convey is that software alone (simulation) will not result in consciousness, and that what might be necessary is hardware, perhaps in combination with software (emulation).
Of course. Software alone does nothing at all. You must run it, which needs hardware - processor(s), memory, and I/O facilities. If the use of both hardware and software makes it an emulation, that's fine; we've been talking about an emulation. If someone here thinks a computer consciousness can function without hardware, I hope they'll let us know.

...please try to maintain sight of the fact that there are no conscious computers, and whatever conjectures or science FICTION you might describe using these definitions will not negate the fact that they are no more than SPECULATIONS that are not even based on reality.
It isn't my position that there are currently conscious computers (I take Pixy's definition as a basic requirement for consciousness, not a practical definition of it). However, the speculation is based on reality. Brains are real, consciousness is real, computers are real, software is real.

"..currently the term "emulation" often means the complete imitation of a machine executing binary code".
Yes; this would involve running a software/microcode emulation layer so that different processor hardware can execute the same instruction set or native language as the emulated hardware. It uses a direct translation layer (the Adapter pattern), so it is a lower-level emulation, but it demonstrates the same abstraction as we've been discussing with the artificial consciousness - supporting the same activity with a different physical implementation.
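To make that concrete, here is a toy sketch in Python of such a translation layer. The instruction set, opcodes, and encoding are all invented for illustration; a real emulator (say, for a 6502 or x86) is the same idea scaled up enormously:

```python
# Minimal instruction-set emulation sketch: a dispatch table maps each
# "foreign" opcode to a host routine that reproduces its effect.
# Opcodes and encoding are invented for illustration.

def op_load(state, reg, value):
    state["regs"][reg] = value

def op_add(state, dst, src):
    state["regs"][dst] = (state["regs"][dst] + state["regs"][src]) & 0xFF

def op_store(state, reg, addr):
    state["mem"][addr] = state["regs"][reg]

DISPATCH = {0x01: op_load, 0x02: op_add, 0x03: op_store}

def emulate(program):
    state = {"regs": [0] * 4, "mem": [0] * 256}
    for opcode, a, b in program:          # each foreign instruction...
        DISPATCH[opcode](state, a, b)     # ...is translated to a host action
    return state

# LOAD r0, 7; LOAD r1, 5; ADD r0, r1; STORE r0 -> mem[0]
final = emulate([(0x01, 0, 7), (0x01, 1, 5), (0x02, 0, 1), (0x03, 0, 0)])
print(final["mem"][0])  # 12
```

The host hardware is doing physically different things from the emulated machine, but the same instruction stream produces the same results - which is exactly the abstraction at issue.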

I take the term in this discussion to mean a physical system that imitates the brain
That's fine - a computer is a physical system that can potentially imitate the brain.

Call it an emulator or call it a simulator, a conscious machine is the goal, and the proposal is that it is possible to do this by running software on a processor with memory and I/O facilities.

Whether it can be done in practice via a top-down abstraction-based approach like the CERA-CRANIUM cognitive architecture, or by the far more difficult bottom-up approach using multiple software instances of virtual neurons and/or instances of 'black box' neural groups, I don't know; but in principle, the bottom-up approach would do the job. However, I'm warming to a well-implemented top-down approach.
 
I think the problem here is that you might know about computers but you do not understand that you do not understand how brains work.

Even if you understand how computers and programs work perfectly well you are not in any way qualified to conclude that you can make them operate like a brain because you do not even understand how a brain works.

The problem is that you are not willing to admit that you do not understand how the brain works.

The aphorism that a little knowledge is more dangerous than no knowledge is quite apt here.

Knowing all about the design and construction of internal combustion engines (ICEs) will not carry any weight with people who know about jet engines (JEs) when I tell them that since an ICE takes in fuel and produces mechanical power, it must be straightforward to make a JE out of an ICE.

Sure, some of the principles involved in making ICEs may help me attempt to make a JE, and may give me a head start over someone who does not even know how to make ICEs... but I will still fail if I am unaware of all the problems that are UNIQUE to the construction and design of JEs. Especially if I refuse to recognize those differences, insisting that since both are used to propel objects, I could easily convert one into the other.

You see, both the JE and the ICE have the sole function of creating mechanical power out of chemical power. Now if we take this “operational definition” as the basis for claiming a JE is just like an ICE, then we fail to take into account the differences in almost every aspect of the metallurgical and mechanical construction, the physics, the thermodynamics, and the fluid mechanics of what it would take to have a working jet engine.

But I don't need to know how the brain works, at all, to know that westprog and piggy are just wrong.

If they are blatantly wrong about computers, then why even bother bringing brains into the picture yet? How can any of us who support the computational model even hope to argue rationally with people that don't even understand the basics of computing?

Saying things like computer instructions in a program have no causal relationship to each other, or that the numbers stored in computer memory aren't the same as the numbers of sheep in a field (whatever that even meant), or that aliens wouldn't be able to make sense of what is going on in our computers, is just ignorance of how electronics function.

I would agree with you that yes, it is just as bad to have no clue how the brain works. Or, to be fair, to have the same level of knowledge as them -- so since they think computers are black boxes that you type stuff into, I might say a brain is a black box that is in someone's head and I can chat with it. That would be a non-starter right there.

However, I don't think my knowledge of the brain is that inadequate, thank you very much. I don't know what you think I need to "understand" above and beyond knowing exactly how neurons function (yes, I know that synapses can be both excitatory and inhibitory; yes, I know that the propagation speed of the action potential depends on the coverage of the myelin sheath; yes, I know that neurons fire at varying frequencies rather than with varying strengths; yes, I know exactly how synaptic plasticity allows networks of neurons to learn; yes, I know that there are both chemical synapses and direct electrical contact between neurons and target cells -- is there anything else you require, sir?). Nor does it stop there: I'm familiar with brain architecture like the visual perception system and its series of function-like filters staged in V1, V2, and so on; the probably-Hopfield-network-like associative memory architecture of the hippocampus (which doesn't affect procedural memory, mind you!); how our ears turn mechanical waves into neural impulses; how the brainstem stops passing neural signals to the body during dream states (except in dogs -- that was a joke); and even tidbits like how much of your walking gait originates in your spine rather than your brain itself.
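For what it's worth, those facts about neuron function reduce to a standard textbook abstraction, the leaky integrate-and-fire model. Here's a minimal sketch in Python (the parameters are arbitrary, purely for illustration): excitatory synapses get positive weights, inhibitory ones negative, and the output is spikes over time rather than a graded strength.

```python
# Minimal leaky integrate-and-fire neuron sketch.
# Parameters are arbitrary; real neurons are far messier.

def step(v, inputs, weights, leak=0.9, threshold=1.0, v_reset=0.0):
    """One time step: leak, integrate weighted inputs, fire if over threshold.

    Excitatory synapses have positive weights, inhibitory ones negative.
    Returns (new_potential, fired).
    """
    v = v * leak + sum(w * x for w, x in zip(weights, inputs))
    if v >= threshold:
        return v_reset, True   # spike, then reset
    return v, False

v = 0.0
weights = [0.6, 0.5, -0.4]           # two excitatory, one inhibitory synapse
for t in range(10):
    v, fired = step(v, [1, 1, 1], weights)
    print(t, round(v, 3), fired)    # firing frequency, not firing "strength"
```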

What, do you think I am one of these keyboard philosophers that just troll forums spewing stuff they know nothing about? Sorry, I am not.
 
If the electrical circuit were to produce exactly the same physical effects as neurons, then possibly that would be the way to go. However, I know that no such circuits exist at present, because nerve damage is permanent and cannot be repaired. We cannot slip an electronic device into a damaged spine. That would be step one, which we should probably manage before speculating about what we could do for step fifty-six.

It's a thought experiment, designed to expose the assumptions that need to be addressed. Whether or not it's practical is beside the point.

Assume it's practical to simulate, on a computer, every neuron and interconnection in a brain, and connect this to a robot with motor and sensory apparatus.

Would it be conscious?
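In outline, the thought experiment is just the following loop, scaled up absurdly. A Python skeleton, where every name is a placeholder (a real brain has on the order of 86 billion neurons, not three):

```python
import random

# Skeleton of the thought experiment: update every simulated neuron each
# tick, with sensory input in and motor output out. Everything here is a
# stand-in for the real, vastly larger problem.

class Neuron:
    def __init__(self):
        self.v = 0.0

    def integrate(self, inputs):
        self.v = self.v * 0.9 + sum(inputs)   # leak and accumulate

    def fire(self):
        if self.v >= 1.0:
            self.v = 0.0
            return 1
        return 0

def read_sensors():                 # stand-in for the robot's sensory apparatus
    return [random.random() for _ in range(3)]

def drive_motors(spikes):           # stand-in for the robot's motor apparatus
    print("motor command:", spikes)

brain = [Neuron() for _ in range(3)]
for tick in range(5):
    stimulus = read_sensors()
    for n, s in zip(brain, stimulus):
        n.integrate([s])
    drive_motors([n.fire() for n in brain])
```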
 
If I want to find out how computers work, I can talk to somebody who builds the things.

You seem to have been arguing as if you already knew how they worked.

I know exactly how they work, and could build one with brass and steel, powered by a steam engine. I have yet to hear a convincing argument that it could not, in principle, be conscious.
 
It's a thought experiment, designed to expose the assumptions that need to be addressed. Whether or not it's practical is beside the point.

Assume it's practical to simulate, on a computer, every neuron and interconnection in a brain, and connect this to a robot with motor and sensory apparatus.

Would it be conscious?
I'd be very surprised if you got a straight answer.
 
I think the problem here is that you might know about computers but you do not understand that you do not understand how brains work.

Even if you understand how computers and programs work perfectly well you are not in any way qualified to conclude that you can make them operate like a brain because you do not even understand how a brain works.

The problem is that you are not willing to admit that you do not understand how the brain works.

The aphorism that a little knowledge is more dangerous than no knowledge is quite apt here.

Knowing all about the design and construction of internal combustion engines (ICEs) will not carry any weight with people who know about jet engines (JEs) when I tell them that since an ICE takes in fuel and produces mechanical power, it must be straightforward to make a JE out of an ICE.

Sure, some of the principles involved in making ICEs may help me attempt to make a JE, and may give me a head start over someone who does not even know how to make ICEs... but I will still fail if I am unaware of all the problems that are UNIQUE to the construction and design of JEs. Especially if I refuse to recognize those differences, insisting that since both are used to propel objects, I could easily convert one into the other.

You see, both the JE and the ICE have the sole function of creating mechanical power out of chemical power. Now if we take this “operational definition” as the basis for claiming a JE is just like an ICE, then we fail to take into account the differences in almost every aspect of the metallurgical and mechanical construction, the physics, the thermodynamics, and the fluid mechanics of what it would take to have a working jet engine.

I think that people who've been programming at a high level think they understand how computers work, because they've learned to think in terms of high level programming languages, using concepts such as object-oriented code or functional programming. It's quite easy to get the idea of objects in memory sending messages to each other - and it's probably a good thing to think like that. It enables better software to be designed. Programmers should think like people, and design software that works for people.

But, in spite of what some people think, computers aren't people, and when you delve into how they work - something necessary for those who in the early days had to work with machine language - we find that these intelligible constructs disappear. Instead we enter a much simpler world. There are no separate data structures. There's a block of computer memory, accessible by byte address. There's a processor with a particular state, which largely consists of the contents of a number of registers. The contents of these registers include the address in memory of the next instruction to be executed.

It would be theoretically possible to enter all this data manually and to have the program run from any intermediate state it reaches. Indeed, that's essentially how Microsoft's BASIC was first loaded on the Altair - a bootstrap loader toggled into memory a word at a time from the switches on the front panel, which then read the BASIC itself in from paper tape.

So the state of the system changes as it executes each instruction in turn. There are more sophisticated and complex processors available now (and some applications use graphics processors to do massively parallel processing on certain data sets).

Is the state of the system dependent on the previous instruction? Well, it depends what you mean. Any state is reached due to the previous states of the system. That's inherent in all physical systems. It's true of the computer program, the film, the DVD and even the book. Does the instruction chosen depend on the action of the previous instruction? In fact, the "instructions" don't do anything themselves. They are interpreted by the processor, and it performs an action accordingly. An instruction may change the location of the next instruction to be executed - but it typically will not, and the processor will select the next consecutive instruction. This instruction may well reference an entirely different area of memory, and be unaffected by what its predecessor did.
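A toy fetch-decode-execute loop makes this concrete: the "instructions" are inert data in memory, the processor interprets them, and by default the instruction pointer simply advances to the next consecutive instruction, with only a jump changing that. A sketch in Python, with an instruction set invented for illustration:

```python
# Toy fetch-decode-execute loop. The "instructions" are inert data; the
# processor interprets them and changes its own state accordingly.
# Instruction set invented for illustration.

mem = {0: ("SET", "a", 0),     # a = 0
       1: ("ADD", "a", 1),     # a += 1
       2: ("JLT", "a", 3, 1),  # if a < 3, jump back to address 1
       3: ("HALT",)}

regs = {"a": 0}
ip = 0                                  # instruction pointer
while mem[ip][0] != "HALT":
    instr = mem[ip]                     # fetch
    op = instr[0]                       # decode
    if op == "SET":
        regs[instr[1]] = instr[2]
        ip += 1                         # default: next consecutive instruction
    elif op == "ADD":
        regs[instr[1]] += instr[2]
        ip += 1
    elif op == "JLT":                   # only a jump changes the flow
        ip = instr[3] if regs[instr[1]] < instr[2] else ip + 1

print(regs["a"])  # 3
```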

It's certainly the case that the processor will change its state according to the values of certain elements of memory. It will be unaffected by others. This is fundamentally no different from the film projector. The film projector changes state as it projects the film. The images on the film don't affect what it does next, but other components of the mechanism do.

If we are watching the film on a DVD player, of course, then each successive frame is constructed from the previous frame according to the MPEG standard. Does this make the DVD release of a film an entirely different creation from the original version, with massive additional information processing going on? Try to find an objective way to measure that.
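The delta-frame point can be demonstrated directly. Here's a Python cartoon of inter-frame coding (the general idea behind MPEG-style prediction, not the actual standard): each frame after the first is stored only as its differences from the previous frame, and playback reconstructs it.

```python
# Cartoon of inter-frame (delta) coding: frames after the first are stored
# only as differences from their predecessor and rebuilt at playback.
# This sketches the idea of MPEG-style prediction, not the real codec.

frames = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 7, 0]]

# Encode: keyframe, then a list of (index, new_value) deltas per frame.
encoded = [frames[0]]
for prev, cur in zip(frames, frames[1:]):
    encoded.append([(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v])

# Decode: rebuild each frame by applying its deltas to the previous one.
decoded = [list(encoded[0])]
for deltas in encoded[1:]:
    frame = list(decoded[-1])
    for i, v in deltas:
        frame[i] = v
    decoded.append(frame)

assert decoded == frames
print(encoded)   # [[0, 0, 0, 0], [(1, 9)], [(2, 7)]]
```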
 
You seem to have been arguing as if you already knew how they worked.

I know exactly how they work, and could build one with brass and steel, powered by a steam engine. I have yet to hear a convincing argument that it could not, in principle, be conscious.

I could build a giant printing press out of Lego and string. I can't prove that it wouldn't be conscious.
 
It's a thought experiment, designed to expose the assumptions that need to be addressed. Whether or not it's practical is beside the point.

Assume it's practical to simulate, on a computer, every neuron and interconnection in a brain, and connect this to a robot with motor and sensory apparatus.

Would it be conscious?

Apparently not, because a simulation is imaginary... or something.

Or you watch too much Star Trek.

Or play computer games.

But if you build a computer that does what the hypothetical simulation does, it might be conscious, but that is a different concept apparently.

I think it depends on what words you use to describe it.

Or in Westprog's case, if computers have souls.

Also, a simulated tornado can't blow your house down, therefore a simulated brain can't be conscious...or something.
 
Yeah, but not like you seem to think it does.

In particular, the most basic piece of research of all -- observing dead people -- clearly shows that although dead neurons are still neurons, they just don't work correctly.

So I will leave it to you to figure out the difference between living neurons and dead neurons. Hint -- it isn't that one is a neuron and one is not, since they are both neurons.

And here's a hint for you: It ain't logic.
 
I'd be very surprised if you got a straight answer.

How can you get an answer when it's not a straight question?

"Assuming that we could do a lot of things which we can't do at the moment" - then who knows what might result.

I've spent quite enough time explaining just why a pure computational process couldn't be plugged into a robot. The people who don't want to believe that to be true won't be converted by another rehearsal of the arguments.
 
That isn't quite correct.

Yes, they do physically different things.

However, the causal structure among the transistors' behaviors is isomorphic to the causal structure among the behaviors of a person's body and objects in our world.

And that means, for instance, if one of the simulated people tripped on a simulated rock, and a real person tripped on a real rock, it would be the same kind of relationships between their behaviors. In particular, both the simulated person and real person would be caused to trip, because of the respective obstacle in their respective worlds.
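The claim can be put in code. A toy Python sketch of the asserted isomorphism (this just states the claim precisely; it settles nothing by itself): the same rule relating "obstacle in path" to "agent trips" holds in the simulated world as in the real one, whatever the substrate.

```python
# Toy sketch of the claimed causal isomorphism: the same rule relating
# "obstacle in path" to "agent trips" holds regardless of substrate.

def advance(agent_pos, obstacle_pos):
    """One step of a one-dimensional world: the agent walks right and
    trips if and only if it steps onto the obstacle."""
    agent_pos += 1
    return agent_pos, agent_pos == obstacle_pos

pos, rock = 0, 3
for _ in range(5):
    pos, tripped = advance(pos, rock)
    if tripped:
        print("tripped on the rock at position", rock)
```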

And if you dispute this, with your observer-dependent nonsense, then I have to ask you what you think the transistors running the simulation are doing if not causing behavior changes in each other. Causality isn't observer-dependent, piggy.

You put way too much stock in your behavioral isomorphisms.

You pretend that you can preserve information about a system in a different physical format and that this is the same as preserving the system itself.

It is not.

Yes, the transistors are causing behavior changes in each other, but they are definitely not tripping and falling.

If you want to build a conscious machine, you have to reproduce more than merely the relationships between the changes.
 
Well, I happen to hold the view that what a brain does to make consciousness happen is transition from state to state based on the current state, a specific set of rules, and input.

So in my view any system that is capable of transitioning from state to state based on the current state, the same set of rules, and input, is capable of being conscious just like our brains.
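That is precisely the definition of a state machine: the next state is a function of the current state and the input, under a fixed rule set. A minimal Python sketch (the states and inputs are arbitrary stand-ins, obviously not a model of a brain):

```python
# Minimal state machine: next state is a function of current state and
# input, per a fixed rule table. States and inputs are arbitrary stand-ins.

RULES = {
    ("idle", "stimulus"): "attending",
    ("attending", "stimulus"): "engaged",
    ("attending", "silence"): "idle",
    ("engaged", "silence"): "idle",
}

def transition(state, symbol):
    return RULES.get((state, symbol), state)  # unknown input: stay put

state = "idle"
for symbol in ["stimulus", "stimulus", "silence"]:
    state = transition(state, symbol)
    print(symbol, "->", state)
# stimulus -> attending; stimulus -> engaged; silence -> idle
```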

This is a bit of a yawner.

You can say this about anything real, as long as you're talking about physical computation.

When you claim, however, that you can get real consciousness not by reproducing the necessary and sufficient physical computations, but rather by substituting logical computations supported by a dissimilar physical apparatus, then you've left physics and are into metaphysics.

I hold that the actual state, when viewed in isolation, is completely irrelevant. I hold that the only important factor is the series of transitions, and why the transitions occurred. Meaning I don't even think the configuration of the system is important, other than to support the state transitions.

Now you know my full position, and why I think computers can be conscious.

If you think brains do more than transition from state to state like that, then fine. But you can't argue with my logic, only the premise, because my logic is correct.

No, your logic is not correct, because it ignores important features of the real world.

Yes, if you build a physical machine which behaves physically like the brain, then it will really do what the brain really does.

But a simulated brain cannot produce a real instance of consciousness for precisely the same reason that a simulated tornado cannot produce a real instance of wind.
 
...
It's certainly the case that the processor will change its state according to the values of certain elements of memory. It will be unaffected by others. This is fundamentally no different from the film projector. The film projector changes state as it projects the film. The images on the film don't affect what it does next, but other components of the mechanism do.

If we are watching the film on a DVD player, of course, then each successive frame is constructed from the previous frame according to the MPEG standard. Does this make the DVD release of a film an entirely different creation from the original version, with massive additional information processing going on? Try to find an objective way to measure that.

But what the projector does doesn't influence what is on the film. What the DVD player does doesn't influence the content of the DVD. They are just mechanisms for displaying recordings.

I'm not understanding why that would be relevant to the behaviour of a simulated AI.
 
When anything in nature is unobserved, it's impossible to say there's anything particular going on - there's only nature changing states... (except you don't know that unless you're observing).

But that's not what I'm saying.

Not at all.

Here's what I'm saying:

Let's consider a drawing of Winnie the Pooh.

It only has anything at all to do with a talking teddy bear when a person looks at it.

When I look at it, my brain thinks about Winnie the Pooh, and that (and that alone) is why it is a representation of a talking teddy bear.

If there's no appropriate observer, it's just graphite on paper.

The graphite on paper is real no matter what.

But Winnie the Pooh is only "there" when observed by a mind that can understand what it's intended to represent.

The same is true for the "world" of the flight simulator.

And the same would be true for a simulator running a simulation of a brain, even if it were the most detailed possible simulation.
 
I would suggest that the machine is physically designed to work like the brain - i.e. a single physical processor may well be sufficient (in principle) to work like the brain, given an adequate software architecture, memory, data, and I/O support.

You may not agree that it is physically designed to work like the brain, but that's a different argument :)

You are absolutely wrong about this, and if you'd read up on some neuroscience it would be clear why.

The brain operates as a real object in spacetime. Time matters, shape matters.

You could not get brain-like behavior from a single physical processor.

Period.
 
Whatever the similarities may be, it's fairly clear that the physical processes going on in the brain are very different from those going on in the computer.

Paul Allen agrees:

“If you start out as a programmer, as I did in high school, the brain works in a completely different fashion than computers do,” Allen said, calling the effort “fascinating” and noting that he’s been “touched by neurodegenerative diseases” — his mother has Alzheimer’s. On the call he noted that while it’s possible to teach a student — a human brain — to program a computer in a matter of years, a computer can’t learn to function like a human brain even given a lifetime of opportunity. “You can’t create an artificial intelligence,” Allen said, “unless you know how the real thing works.”

He's giving $300 million to the study of consciousness.

If only Pixy had let him know that it's already been figured out by people who understand computers. Oh, wait....

And as for the simplicity of the neuron, or the intelligence of the Internet, it turns out that your brain has more connections than every computer on earth combined.

And as for the value of the "electric choir" analogy over the computer analogy, you might find this interesting.
 
I don't think it is a coincidence that the two people responsible for 90% of the opposition's posting volume on this thread just don't understand how computers work.

Protip: Computers are not conscious.

It is more important to understand how brains work. In fact, at this point, that's the only important thing.

If you want to argue for conscious computers, you have 100% of the responsibility to make your case.

The folks on this thread are pretty quick studies.

If you have a case to make, then make it.

But the burden is entirely yours.
 
It's a thought experiment, designed to expose the assumptions that need to be addressed. Whether or not it's practical is beside the point.

Assume it's practical to simulate, on a computer, every neuron and interconnection in a brain, and connect this to a robot with motor and sensory apparatus.

Would it be conscious?

No.

I dealt with precisely this issue in a detailed post upthread.

If you missed it, I'll try to find it and link it.
 
Apparently not, because a simulation is imaginary... or something.

Or you watch too much Star Trek.

Or play computer games.

But if you build a computer that does what the hypothetical simulation does, it might be conscious, but that is a different concept apparently.

I think it depends on what words you use to describe it.

Or in Westprog's case, if computers have souls.

Also, a simulated tornado can't blow your house down, therefore a simulated brain can't be conscious...or something.

You should read more closely. Then you'd be able to fill in the gaps here.
 
Is the state of the system dependent on the previous instruction? Well, it depends what you mean.

All processed instructions change the state of the system. An instruction may move data from one memory address to another, swap registers, clear a register, change the instruction pointer, and so on. The only instruction that doesn't explicitly change the state of the system is the no-op or null instruction that 'does nothing' - but the system state still implicitly changes, as the instruction pointer advances to the next instruction after a defined number of clock cycles (a delay potentially relevant in real-time or time-dependent systems). Of course, you can define your preferred sub-categories of 'change' (such as explicit or implicit).
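A toy illustration of that explicit/implicit distinction (Python, with an invented instruction set again): a NOP's only effects are the implicit ones, namely that the instruction pointer advances and clock cycles elapse.

```python
# Even a NOP changes the system state: the instruction pointer advances
# and clock cycles elapse. Instruction set invented for illustration.

def execute(state, instr):
    op = instr[0]
    if op == "NOP":
        pass                         # no explicit state change...
    elif op == "SET":
        state["regs"][instr[1]] = instr[2]
    state["ip"] += 1                 # ...but the IP still advances
    state["cycles"] += 1             # ...and time still passes

state = {"regs": {}, "ip": 0, "cycles": 0}
for instr in [("NOP",), ("NOP",), ("SET", "a", 5)]:
    execute(state, instr)
print(state)  # {'regs': {'a': 5}, 'ip': 3, 'cycles': 3}
```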
 