The Hard Problem of Gravity

Neurons are edge-sensitive, meaning they either fire or they don't. See Neuron/all-or-none principle (WP).


Well, yes and no. The all-or-none principle applies to action potentials, the self-propagating electrochemical wave along the length of the axon. Local potentials, on the other hand, are graded changes in the cell membrane's polarization; they may be excitatory or inhibitory, and they are additive (or subtractive). It is only when the local potentials reach the threshold level that an action potential is created. (The local potentials are the result of either the transduction of external energies into neural stimuli or neurotransmitters from another neuron's presynaptic membrane; in at least the latter case, the release of neurotransmitters is not continuous, but in more-or-less discrete synaptic vesicles.)

So, parts of the system are all-or-nothing, and parts are not. And the parts that are, can fire frequently for "lots" or infrequently for "little" while still being all-or-nothing.
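To make that concrete, here is a minimal leaky integrate-and-fire sketch in Python. The model and every number in it are illustrative assumptions, not anything from the posts above: graded inputs add to (and leak out of) the membrane potential, and the only all-or-nothing event is the spike at threshold -- stronger input means more frequent, but otherwise identical, spikes.

```python
# Minimal leaky integrate-and-fire sketch (illustrative parameters only).
# Graded inputs sum and decay analog-style; the output spike is all-or-none.

THRESHOLD = 1.0   # firing threshold (arbitrary units)
LEAK = 0.9        # per-step decay of the membrane potential
RESET = 0.0       # potential after a spike

def run(inputs):
    """Return a list of 0/1 spikes for a sequence of graded inputs."""
    v = 0.0
    spikes = []
    for x in inputs:            # x may be excitatory (+) or inhibitory (-)
        v = LEAK * v + x        # local potentials are additive and graded
        if v >= THRESHOLD:      # only here does anything all-or-none happen
            spikes.append(1)
            v = RESET
        else:
            spikes.append(0)
    return spikes

# Stronger graded input -> more frequent (but identical) spikes:
weak = run([0.3] * 20)
strong = run([0.6] * 20)
print(sum(weak), "spikes from weak input;", sum(strong), "from strong")
```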

I am one of those who feels that the physical framework of the body (esp. the nervous system) plays a key role in our experiencing... but I do not see any a priori reason that another system could not simulate the functional equivalent.
 
Well, yes and no. The all-or-none principle applies to action potentials, the self-propagating electrochemical wave along the length of the axon. ...
Well, yes and no was sort of what I was saying (which is why I started with tons of disclaimers a couple of posts ago).

If the thing going over the axon to the dendrite of the next neuron is the signal, then there is no variation--except in "clock rate", which may or may not count; it really depends on what you want to call the signal. blobru was defining analog versus digital based on whether or not the inputs were continuous or discrete. I'm not quite sure it's that easy to classify.

I'm also not quite sure how to add in the hormonal effects on the networks given such considerations. Are they inputs?

Edit: Still, I didn't take into account cumulative effects... am I just making this harder than it is?
 
Last edited:
If the universe is a Turing machine (computer), [computable] information can be generalized to any medium?
You don't even need to go that far. If the brain is a computer (and it is), anything it does can be done by any other computer (of sufficient capacity).

Sounds like brain in a vat computation :boxedin: (or maybe clone in a can).
More the reverse. The brain would be simulated, but the Universe would be real.
 
"Analog": continuous input vs discrete: "digital".
Neurons are edge-sensitive, meaning they either fire or they don't. See Neuron/all-or-none principle (WP).
Well, yes and no. The all-or-none principle applies to action potentials, the self-propagating electrochemical wave along the length of the axon. ...
...blobru was defining analog versus digital based on whether or not the inputs were continuous or discrete. I'm not quite sure it's that easy to classify...

I meant neurons must accept, store and transmit in analog (which can be continuous [-valued or -sampled] or discrete) vs switches which only handle digital, discrete data. Sorry for the confusion. :blush:

yy2bggggs said:
In terms of consciousness, the difference between 'attentive', 'peripheral', and 'negligible' vision (not sure if these are the trade terms) seems one big clue to how consciousness works; that is, there seem to be different strengths of consciousness, where "consciousness" is our attending to things in our visual field (more generally, in our experiential field).
Not sure what you mean by negligible. "Attentive" and "peripheral" aren't parallel classes--you can attend to something that is in your peripheral field. You can also stare right smack dab at something and not notice it (see inattentional blindness (WP)).

Yes, great. To redo my sloppy gradation from above: concentrated[?], attentive, inattentive. In our visual field, there seem to be things we are most concentrated on (could at that instant name and/or describe); things we are at that same instant attentive to but not as intensely (see but couldn't describe without a shift in concentration); and things we are inattentive to (they are in our visual field but we don't take them in even as objects). And you're right to point out that we can concentrate on or be attentive to things in our peripheral vision, and be blind to what's right in front of our eyes (how often is one surprised to notice the transparent outline of the nose on his face).

For what it's worth, attentiveness can affect percepts--see this illusion for an example. So even when you identify these categories, note that they bleed into others.

But yeah, those are the types of things that interest me.

"Waves of EVCP" -- cool -- had never seen that. I think phenomena like this are good starting points for any attempts to define consciousness, for we certainly become conscious of the waves, once it crosses a certain threshold of attention perhaps, much like we only become conscious of pain once an illness or injury becomes severe enough perhaps or we see blood, imagine we are sick, etc. In some respects consciousness seems like an alarm system, always directed towards whatever is [subconsciously?] judged to be most threatening; with no threats, scanning around for whatever is most useful, or pleasant; finding nothing, we're bored, lose consciousness, and fall asleep.

Oops -- bit of a ramble there -- enuf rube phenomenology. :drool:

If the universe is a Turing machine (computer), [computable] information can be generalized to any medium?
You don't even need to go that far. If the brain is a computer (and it is), anything it does can be done by any other computer (of sufficient capacity).

In terms of data processing, assuming the brain is the sort of computer covered by the Church-Turing thesis, and that the thesis is correct -- sure; but even then we'd still have to establish that that suffices for consciousness (e.g., consciousness could be a side-effect of the way the data is processed; it may require the neuron as well as the data it contains).
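To make the "any other computer (of sufficient capacity)" point concrete, here's a toy sketch of one machine simulating another: a made-up three-instruction register machine hosted in Python. The instruction set is purely an illustrative assumption; the point is only that a program for one machine can run on a different substrate.

```python
# Toy machine-on-machine simulation: a 3-instruction register machine
# ("inc r", "dec r", "jnz r addr") interpreted by Python. Any computer
# of sufficient capacity can host such an interpreter.

def run(program, registers):
    pc = 0
    while pc < len(program):
        op, r, *arg = program[pc]
        if op == "inc":
            registers[r] += 1
        elif op == "dec":
            registers[r] -= 1
        elif op == "jnz" and registers[r] != 0:
            pc = arg[0]         # jump back and keep looping
            continue
        pc += 1
    return registers

# Compute a + b by draining register 1 into register 0:
print(run([("dec", 1), ("inc", 0), ("jnz", 1, 0)], {0: 3, 1: 4}))  # {0: 7, 1: 0}
```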

Sounds like brain in a vat computation :boxedin: (or maybe clone in a can).
More the reverse. The brain would be simulated, but the Universe would be real.

Ok. Simulating the brain in another medium.
 
Last edited:
In terms of data processing, assuming the brain is the sort of computer covered by the Church-Turing thesis, and that the thesis is correct -- sure; but even then we'd still have to establish that that suffices for consciousness (e.g., consciousness could be a side-effect of the way the data is processed; it may require the neuron as well as the data it contains).
I contend that this is logically incoherent and physically impossible - that consciousness is an informational process and can therefore, by definition, only arise from computational function, and that there is nothing happening but computational function in the brain that could give rise to anything remotely resembling conscious behaviour; no magical fields or souls or whatnot.

We know one by definition - consciousness is this sort of process, so it can only be formed by this sort of system. We know the other by neurobiology, neurochemistry, and neurophysics - there are no magic fields, no shouting neurons, most certainly no souls.
 
I don't think there is much reason to assume that it's limited to human brains. It's just that, since we are humans and know that we have the capacity for consciousness, they're the best start for investigation. Once we zero in on exactly what physical process in the brain constitutes conscious experience, and the 'whys' and 'hows' of it, we can extrapolate from there.

As has been pointed out by Mercutio, the brain is not a pure digital processing system. Even if it were, we don't know whether consciousness is a product of just brain function, or of the entire nervous system, or indeed of the entire body. There are no conscious brains in jars.

The assumption that we can take one part of a human being and isolate it and assume that it is the sole element responsible for consciousness is not sensible.
 
No. Brains are computers, so if brains do things, computers do them (e.g., brains do them). There are some things that brains do that silicon-based IBM PC compatibles do not do--for example, metabolize glucose. But there's nothing that brains can do that computers can't do, because brains are computers.
Not just like logic switches in a computer. Neurons are logical switches.

...not exactly. Anything a brain can do a computer can too, in practice, because brains are computers.

If this is simply stating that because a brain does computing, a computer can do what a brain does, it's merely a truism. If, however, the statement is that everything a brain does is explained by digital switching, then it's certainly not a known fact.

It might be better if we stopped using nouns like "computer" where what we mean can be ambiguous, and instead used statements like "algorithmic digital processing". That way it's clear what we mean.
 
If this is simply stating that because a brain does computing, a computer can do what a brain does, it's merely a truism. If, however, the statement is that everything a brain does is explained by digital switching, then it's certainly not a known fact.

It might be better if we stopped using nouns like "computer" where what we mean can be ambiguous, and instead used statements like "algorithmic digital processing". That way it's clear what we mean.
So what is it that you suggest the brain does that can't be done algorithmically?
 
Well, yes and no was sort of what I was saying (which is why I started with tons of disclaimers a couple of posts ago).

If the thing going over the axon over the dendrite of the next neuron is the signal, then there is no variation--except in "clock rate", which may or may not count; it really depends on what you want to call the signal. blobru was defining analog versus digital based on whether or not the inputs were continuous or discrete. I'm not quite sure it's that easy to classify.

I'm also not quite sure how to add in the hormonal effects on the networks given such considerations. Are they inputs?

Edit: Still, I didn't take into account cumulative effects... am I just making this harder than it is?

I don't think so. All we need to ask is - can we assert with absolute certainty that all mental processes are digital, rather than analogue, functions? Certainly what goes on in the brain is an analogue process, but then so is what goes on in an (electronic*) computer. What goes on in an (electronic) computer can be reduced to a digital level for our understanding, and it's accepted that such a digital level can be produced by any analogue process without necessarily losing digital precision.

So, by abstracting the digital operation of the brain and discarding the analogue level, have we lost anything important?
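One way to see why a digital level abstracted from an analogue substrate needn't lose precision is signal restoration: as long as the noise stays below half the spacing between the levels, thresholding recovers the intended bits exactly. A toy sketch (the voltage levels and noise figure are made-up assumptions):

```python
import random

# Toy signal restoration: an "analogue" wire carries nominally 0 V or 5 V,
# noise perturbs it, and a threshold recovers the intended bits exactly
# because the noise never reaches half the level spacing.

random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
noisy = [5.0 * b + random.uniform(-2.0, 2.0) for b in bits]  # analogue values
recovered = [1 if v > 2.5 else 0 for v in noisy]             # digital again

print("errors:", sum(b != r for b, r in zip(bits, recovered)))  # -> errors: 0
```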
 
Well, yes and no was sort of what I was saying (which is why I started with tons of disclaimers a couple of posts ago).

If the thing going over the axon over the dendrite of the next neuron is the signal, then there is no variation--except in "clock rate", which may or may not count; it really depends on what you want to call the signal. blobru was defining analog versus digital based on whether or not the inputs were continuous or discrete. I'm not quite sure it's that easy to classify.

I'm also not quite sure how to add in the hormonal effects on the networks given such considerations. Are they inputs?

Edit: Still, I didn't take into account cumulative effects... am I just making this harder than it is?



Yes, there are all sorts of inputs that greatly complicate nervous communication.

For instance, we have the old Dale's hypothesis -- one neuron, one neurotransmitter. While that is partially true, it is actually a lie, since many different substances can be released into the synaptic cleft by one particular neuron. Small peptides, which often accompany the small molecules that serve as transmitters, are often co-released; and they may have long-term effects on the likelihood of a particular neuron firing.

Maybe it's just me, but I think folks often have misconceptions about how neurons work. They don't, for instance, just sit around and wait for a signal from their neighbor and then fire when the neighbor releases neurotransmitter. Most central nervous system neurons are firing at a basal rate all the time; what new input does is change the basal firing rate (so the real message is not "Hey, I'm firing" but "Hey, I'm firing this fast").
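A hedged sketch of that "I'm firing this fast" idea (the rates and the Poisson-style abstraction are my assumptions, not anything stated above): the neuron always fires at some basal rate, and input merely shifts that rate up or down.

```python
import random

# Rate coding sketch: the message is a change in firing rate, not firing per se.
# Spikes are drawn as a Bernoulli process per 1 ms bin (Poisson-like).

def spike_count(rate_hz, ms=1000, rng=random.Random(1)):
    p = rate_hz / 1000.0            # probability of a spike in each 1 ms bin
    return sum(rng.random() < p for _ in range(ms))

BASAL = 20.0                        # assumed basal rate, spikes/s
print("no input:  ", spike_count(BASAL), "spikes/s")
print("excited:   ", spike_count(BASAL + 30.0), "spikes/s")
print("inhibited: ", spike_count(max(BASAL - 15.0, 0.0)), "spikes/s")
```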

I've forgotten the general estimate of how many inputs it takes for a neuron to reach threshold, but the difference between the central and peripheral nervous systems is astounding. One vesicle of acetylcholine in the peripheral nervous system is sufficient to produce a muscle action potential, but it takes at least 30 vesicles to produce an action potential in the CNS. This is complicated by space and time issues -- excitatory post-synaptic potentials generally originate on dendrites, so they have quite a distance to travel before they reach the axon hillock (where the action potential is generated), and all the inhibitory inputs generally occur on the cell body, intervening between the dendrite and axon. EPSPs 'degenerate' over distance and time, so they lose their punch and need other EPSPs to help get over threshold.

All of that can sort of be simulated by computer systems, but it gets even more complicated when you introduce modulatory elements. Broadly speaking, there are two different types of receptors -- directly activated and modulatory. The second group (think of the neurotransmitters that most people know -- dopamine, norepinephrine) changes the likelihood of the neuron reaching threshold rather than directly causing the cell to fire. Then there are all the modulatory neuropeptides that do some of the same (and some of these work directly on cell potentials while others initiate second messenger systems inside the cell to produce even longer-term changes by turning on different gene sets). But it gets even worse when you think in terms of other hormones (or growth factors), some of which have some of these same modulatory effects in the CNS. Nerve growth factor, for instance, plays a role in pain transmission in both the peripheral and central nervous systems.
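Extending the earlier integrate-and-fire toy, a modulatory input could be sketched as something that rescales the threshold (or gain) rather than adding to the membrane potential directly. This is purely an illustrative assumption about how to model it; the 0.7 factor is arbitrary.

```python
# Sketch: a modulator changes the *likelihood* of firing rather than
# driving the cell directly, modelled here as a threshold rescaling.

def run(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, 0
    for x in inputs:
        v = leak * v + x          # ordinary graded drive, as before
        if v >= threshold:
            spikes += 1
            v = 0.0
    return spikes

drive = [0.35] * 50
print("baseline:      ", run(drive), "spikes")
print("with modulator:", run(drive, threshold=0.7 * 1.0), "spikes")  # fires more
```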

Then we must account for the glia, which also modulate nerve function by, in part, acting as wells for potassium, and which seem to make repetitive firing of neurons less likely.


And when it comes to consciousness there is the entire body to consider. It's all a system -- the whole body -- not just the brain. Brains-in-vats are fine to talk about, but if anyone gets close to human-style consciousness my bet is that it's going to be in a robot and not in a desktop.


ETA:

Sorry, forgot to mention -- clock rate is very important in this enterprise, since there are many pathological states that exist because of the slowing of the signal along the axon (like multiple sclerosis), and anything that might slow the processing of information through a network (a new inhibitory input, a seizure) would also affect the outcome.
 
Last edited:
Which is apparently different from the physical theory of information, which has information being exchanged in all physical interactions.
It is a physical theory of information.

How the term information is used in information theory is subtly different from, but closely related to, how it is used in physics.

What is it about the differences between the two definitions that troubles you?
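For what the information-theoretic usage amounts to concretely, here is a quick sketch of Shannon entropy (the standard formula; Python is used only for the arithmetic): information measures reduction of uncertainty over a distribution, with no commitment about the physical carrier.

```python
from math import log2

# Shannon entropy: H(X) = -sum p(x) log2 p(x), in bits.
# "Information" here is about distributions, not about any physical medium.

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit
print(entropy([0.99, 0.01]))  # nearly certain outcome: ~0.08 bits
print(entropy([0.25] * 4))    # one of four equally likely: 2.0 bits
```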
 
It is a physical theory of information.

How the term information is used in information theory is subtly different from, but closely related to, how it is used in physics.

What is it about the differences between the two definitions that troubles you?

It doesn't trouble me that information is exchanged in all physical interactions. It leads me to think that it's going to be possible to come up with a theory of consciousness in which computation and information theory might be relevant -- but then again, they might not be.
 
There's a fairly detailed post from The Wasp just down from yours and up from this. It shows how human beings work.
No, he's talking about how organisms with neurons work. And everything he is talking about is effectively calculable.

You're not yet even talking about the HPC, so long as you're describing abilities such as coming up with jokes. These things are classified as "easy problems" by Chalmers.
 
Last edited:
There's a fairly detailed post from The Wasp just down from yours and up from this. It shows how human beings work.
Yes, I saw that.

Now, what is it that you suggest the brain does that can't be done algorithmically?

It doesn't trouble me that information is exchanged in all physical interactions. It leads me to think that it's going to be possible to come up with a theory of consciousness in which computation and information theory might be relevant -- but then again, they might not be.
You mean, like self-referential information processing?
 
No, he's talking about how organisms with neurons work. And everything he is talking about is effectively calculable.

You're not yet even talking about the HPC, so long as you're describing abilities such as coming up with jokes. These things are classified as "easy problems" by Chalmers.


Yep. I don't know how to represent that in a computer system, but you could put what I know about computers in a paper bag. A small paper bag.

Question for Westprog: Do you think that emotion/motivation states/feelings consist in an ontologically different category from other mental actions, such as calculation ability?

To me they are simply different types of problem solving. Emotions and feelings deal with different types of problems than math does, but they still are there to solve problems.
 
No, he's talking about how organisms with neurons work. And everything he is talking about is effectively calculable.

But if consciousness is produced by analogue physical actions, then it cannot be emulated digitally. Simulating the release of hormones will not emulate the action.

You're not yet even talking about the HPC, so long as you're describing abilities such as coming up with jokes. These things are classified as "easy problems" by Chalmers.

I already indicated that producing jokes was an example of selecting particular patterns from a finite set. It is an easy problem, which has not been solved. Generally, an inability to solve an easy problem indicates an inability to solve the harder problems.
 
