• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Has consciousness been fully explained?

Status
Not open for further replies.
ETA: In addition to books by Dennett and Hofstadter, I recommend reading The Man Who Mistook His Wife for a Hat by Oliver Sacks. Reading about what happens when a brain doesn't work right can be very enlightening. There are also some YouTube videos about split-brain patients, who appear to have a separate consciousness in each half of their brain.
Yes! It's the Why Buildings Fall Down of neuroscience.

That's why engineers spend so much time studying failure modes - they illuminate the underlying workings of the system so much better than the fully functional device.
 
Galteeth said:
I've never fully grasped what Daniel Dennett is saying. He claims qualia don't exist.
Does he really claim this? I think he is saying that the philosophical concept of qualia is confusing and worthless.

Remember, the quale is a philosophical concept. It is not a scientific thing. It's possible that the philosopher's formulation of qualia (the hard problem) is simply broken. Justin Sytsma thinks so.

~~ Paul
 
As I was saying earlier, being alive seems to be, at the very least, a necessary condition for a system to support consciousness.

Again, I'm not asking about consciousness. I'm asking why we couldn't replicate the functional behavior of a single neuron. You keep answering unrelated questions.

The functional behavior of a single neuron -is- all of its biological functioning -- which includes whatever "magic" it does to produce conscious experiences. Would you count state-of-the-art neural networks of today as functional replicas of neurons? If not, what do you think would be sufficient? If so, where are the p-zombies?

I already mentioned that there are certain thermodynamic indicators of life, and I think it's these properties that make living neurons more suitable to the task than the non-living models you're suggesting.

Suitable for what? Electrical pulses go into a neuron, and pulses come out. We can already duplicate that with dead electronics.
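
The "pulses in, pulses out" picture being debated here is usually modeled with something like a leaky integrate-and-fire neuron. Here is a minimal sketch -- a textbook simplification with illustrative parameter values I've chosen, not a claim about full biological function:

```python
# Minimal leaky integrate-and-fire neuron: a standard simplification of
# "pulses in, pulses out". All parameter values are illustrative.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for a leaky integrate-and-fire neuron.

    input_current: list of input values, one per time step.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and integrates input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after firing
    return spikes

# Constant drive produces a regular spike train.
spike_times = simulate_lif([0.15] * 100)
print(spike_times)
```

With constant input drive the model fires at regular intervals. Whether that kind of input-output replica exhausts a real neuron's functional behavior is exactly what the two posters disagree about.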

If that were sufficient to produce the behaviors you seek to replicate, we wouldn't even be having this discussion. We'd already have automatons based on the kind of modeling you're suggesting that produce behaviors indistinguishable from living conscious entities. Like I mentioned earlier, you're not the first to think of such a scheme, and AI researchers have been going that route for decades. Nothing even remotely resembling the results you suggest has been forthcoming.

As PixyMisa so eloquently pointed out earlier: a cadaver won't work because it's dead. Same applies to our current computer components.

If you measure a cadaver's neuron, there are no pulses coming out, so it's obvious what the problem is.

If our understanding of neurons is as sufficient as you suggest, why can't we use it to simply reactivate them long enough to produce behaviors that resemble those of conscious individuals? Better yet, why can't we just use that understanding to create an artificial neural net that can even produce a p-zombie version of a dog? You're proposing a thought experiment of something people have been testing in practice for quite some time now. The results are less than impressive, to say the least.
 
The functional behavior of a single neuron -is- all of its biological functioning -- which includes whatever "magic" it does to produce conscious experiences.
Neurons don't produce conscious experiences. Networks of neurons do.

Would you count state-of-the-art neural networks of today as functional replicas of neurons?
No, neural networks are functional replicas of networks of neurons.

If so, where are the p-zombies?
There's no such thing.

If that were sufficient to produce the behaviors you seek to replicate, we wouldn't even be having this discussion. We'd already have automatons based on the kind of modeling you're suggesting that produce behaviors indistinguishable from living conscious entities.
Not at all. Brains are complex even if neurons are simple.

Like I mentioned earlier, you're not the first to think of such a scheme, and AI researchers have been going that route for decades. Nothing even remotely resembling the results you suggest has been forthcoming.
No-one has ever built a neural network remotely approaching the capacity of the human brain.

If our understanding of neurons is as sufficient as you suggest, why can't we use it to simply reactivate them long enough to produce behaviors that resemble those of conscious individuals?
Well, you can, if they're only recently dead. Individually. There are on the order of a hundred billion neurons in the human brain. We'll leave the wiring job to you, okay?

Better yet, why can't we just use that understanding to create an artificial neural net that can even produce a p-zombie version of a dog?
Lack of computing power. We could do a honeybee. But there's still no such thing as a P-Zombie.

You're proposing a thought experiment of something people have been testing in practice for quite some time now. The results are less than impressive, to say the least.
Sorry, but you're simply wrong. Again.
 
Why would I be kidding?

Because a single neuron -- hell, even a single skin cell -- displays a more sophisticated array of functioning and behavior than any appliances we have on the market now. Just what definition are you going by that would lead you to conclude that a toaster is 'more conscious' than a neuron?
 
Because a single neuron -- hell, even a single skin cell -- displays a more sophisticated array of functioning and behavior than any appliances we have on the market now. Just what definition are you going by that would lead you to conclude that a toaster is 'more conscious' than a neuron?

We're talking about consciousness, not "sophisticated arrays of functioning and behavior".
 
Because a single neuron -- hell, even a single skin cell -- displays a more sophisticated array of functioning and behavior than any appliances we have on the market now. Just what definition are you going by that would lead you to conclude that a toaster is 'more conscious' than a neuron?

We're talking about consciousness, not "sophisticated arrays of functioning and behavior".

Okay, then it should be pretty easy for you to answer the question: Just what definition are you going by that would lead you to conclude that a toaster is 'more conscious' than a neuron?
 
Okay, then it should be pretty easy for you to answer the question: Just what definition are you going by that would lead you to conclude that a toaster is 'more conscious' than a neuron?

Actually, I consider neither of them conscious in any interesting sense. "Toaster" was meant in a metaphorical sense, because it kept popping up in the discussion.

As for a more realistic example of a machine that is somewhat conscious, I'd nominate something like the flight control computer on board a modern aircraft. It would take a decent number of neurons to duplicate that behavior.
 
Thanks for taking the time to do that.

What I have bolded in your quote describes what somebody might learn over time through practice in dealing intelligently with a large number of people.

It's not just self-referentiality; there's quite a lot of coreferentiality involved, I think it's safe to say.

How does an algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Thanks in advance for dealing with my questions.

Well, it obviously needs to be very complex -- but that isn't necessarily as big a deal as you might think.

Because you need to understand that there is a distinction between the complexity of an instance of an algorithm and that of the "original" or "base" or "class" algorithm, whatever you want to call it.

For example, your DNA contains the algorithm(s) needed to describe you exactly as you are -- kind of. But they are only the "template" part of the algorithm, the part people talk about when they say "this algorithm was used to arrive at this state" or something like that. What is omitted in such a statement, but what is implied, is all the data that is included in a particular "run" or "instance" of the algorithm. In the case of you, the "template" of your DNA has produced an instance that has been "running" on data -- the environment of you and your cells -- for a long time.
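
The template/instance distinction here can be put in programming terms. A rough analogy of my own, with made-up names:

```python
# A programming analogy for "template vs. instance": the class
# definition (the template) is tiny and fixed, but an instance that has
# been "running" against environmental data can hold far more state.

class Organism:
    """A few lines of 'template' code."""
    def __init__(self):
        self.state = []

    def live(self, environment_event):
        # Each interaction with the environment adds to instance state.
        self.state.append(environment_event)

# Size of the template's compiled bytecode -- a handful of bytes.
template_size = len(Organism.live.__code__.co_code)

o = Organism()
for event in range(100_000):   # a long "run" against an environment
    o.live(event)

print(template_size, len(o.state))  # the instance dwarfs its template
```

The class definition never grows, but each instance's accumulated state can be arbitrarily large, depending entirely on the "environment" it was run against.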

Obviously, the latter -- the running instance -- is much more complex than your DNA. It would be impossible to store very much about your current state in your DNA -- there is just too much complexity in you as you are now. People think DNA has tons of storage, but they are wrong -- it had tons of storage in 1970, when machines had only kilobytes of core memory, but now, when you can get terabyte drives for under $50, DNA isn't the masterpiece it used to be. The trick is that evolution led to DNA that relies upon the laws of nature to do most of its heavy lifting.
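
For scale, a back-of-envelope calculation (my numbers, using the commonly cited figure of roughly 3.2 billion base pairs for the haploid human genome):

```python
# Back-of-envelope: raw information capacity of the human genome.
base_pairs = 3.2e9          # approximate haploid human genome size
bits = base_pairs * 2       # 4 possible bases -> 2 bits per base
megabytes = bits / 8 / 1e6
print(round(megabytes))     # roughly 800 MB -- well under one cheap drive
```

So the whole raw genome fits in under a gigabyte, which supports the point: the complexity of an adult organism cannot all be spelled out in the template.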

For instance, does DNA instruct most biochemicals on how to react? No, of course not. There is no need to -- as long as DNA instructs the cells to make the chemicals, and maybe get them in proximity, basic chemistry takes over and the reactions take place. DNA doesn't even instruct enzymes on how to catalyze -- it just instructs the cells how to make the enzymes and then nature takes over. If you wanted to embed *every* step of such an algorithm in the instructions of DNA it would require orders of magnitude more complexity than is available.

And does DNA instruct your brain on how to act given a certain situation? Does DNA instruct you on how to drive a car, or speak English, or even make fundamental inferences about causation (the most basic task of any brain)? No, of course not. All DNA has done is given your neurons a very primitive topographical arrangement -- that of a baby. The laws of nature, combined with the environment you grew up in, leads to everything else.

So, long story short, the same kinds of tricks can be used to specify a formal algorithm that has emergent behavior vastly more complex than the formalization itself. Yes, the formalization you are asking for would be very complex, but it might not be as complex as you envision, because all that is required is the formalization of the steps that are not implicit given the laws of nature. All the other stuff -- the steps that are implicit, which combine with the basic algorithm to generate an instance that has emergent properties and behavior -- can be omitted, and there are very many such steps.

That means you can describe an infinite set of behaviors with a finite algorithm. What ends up being infinite is the set of instances of the algorithm that occur.
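
A standard concrete illustration of this point (my example, not the poster's): an elementary cellular automaton such as Rule 110 is fully specified by eight table entries, yet its runs produce patterns rich enough that the rule is known to be Turing-complete.

```python
# Rule 110: a complete formal specification in eight table entries,
# yet the instances it generates can be arbitrarily complex.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# A tiny template, a long run: the history is far larger than the rule.
cells = [0] * 40 + [1] + [0] * 40
history = [cells]
for _ in range(20):
    cells = step(cells)
    history.append(cells)

for row in history[:5]:
    print("".join(".#"[c] for c in row))
```

The eight-entry table is the "template"; each run against a particular starting row is an "instance", and the instance's history can grow without bound while the template stays eight lines long.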
 
