
Explain consciousness to the layman.

I'm not clear on what you're saying here. Of course it wouldn't be "any arbitrary computer", it would have to have software to emulate the multi-processor brain, have sufficient storage, and to run in real time be sufficiently fast. But if we eliminate the time constraint by slowing down the I/O (slowing down the universe, I suppose, which we can do if the universe is simulated), then any processor chip would be sufficient, given enough memory and a memory addressing scheme that makes it all available. Is an otherwise equivalent brain less conscious because it thinks more slowly?

You're still begging the question as to whether something that implements the computation going on in the artificial brain is doing the same thing as the artificial brain. We can't easily test this hypothesis, because we can't plug the computer into a human body, unlike the artificial brain. In the event that we can produce an artificial brain, we can see if the human "works properly".
 
O I C, so the observer has to be completely ignorant, except not ignorant of your concept of "actually physically aggregating things"?

No.

Real aggregation is "physical addition" because it occurs in systems that don't require a brain state to be part of them.

If 2 wolves join a group of 5 wolves, or if 4 sticks drift into a group of 2 sticks, then real aggregation or physical addition has taken place.

This type of activity is evolutionarily important to animals, so it's not surprising their brains developed ways of noticing it, discriminating different degrees, and responding.

One thing the brain can do is "symbolic addition" which involves changing brain states in the animal, but no change in the physical state of any other system -- which is what makes it "symbolic", "logical", or "imaginary" rather than physical or "real".

We can even "farm out" some of the work to information processors, which help us with the symbol manipulation, whether we're talking about an abacus or a TI scientific calculator.

One indisputable feature of symbolic information processors, though (we could legitimately call the whole world of matter and energy a physical information processor), is that to work they require brain states to be part of the system.

Symbolic info processors can't be created (or imagined about natural objects) without a brain state. And unless there is a brain in the system somewhere, all we're left with is an object that cannot be determined to be an information processor at all.

If you don't believe me, then just look back at the video PixyMisa posted of the marble adding machine. But watch it with the sound off and pretend you don't know what the markings mean (that would be adding info in a brain state to the system) or pretend that they're covered with various colors of paint.

Track everything the marbles do. You will not be able to determine from the behavior of the machine alone -- that is to say, in a system which contains only the machine -- what it is intended to do symbolically.

In other words, the machine would have no way of knowing it was being used as a calculator, even if it knew everything about its own body (note that the "meanings" of the symbols painted on it are not facts about the machine, but about the shape of a brain somewhere).

If it did guess that it was being used as an aggregator, it would still not agree with the actual use of the machine. It would think that every time you put three marbles in separate slots at the top, you end up with 3 marbles together in a bucket at the bottom -- from the point of view of the machine (without the brain that understands the symbols and process), any 3 marbles dropped by any route to the bottom would be 1 + 1 + 1 = 3.

Of course it might also guess that it was intended to be used exactly the way our brains understand it to be used symbolically... but it could have no more confidence in that guess than the 1 + 1 + 1 = 3 theory, or the theory that it's used to tell stories, or the theory that it's a piece of conceptual art, or a spare part for a larger machine that someone's dropping marbles through simply out of boredom.
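To make that concrete, here's a toy sketch in Python. The ripple-rocker mechanics are my own assumption about how such a marble adder might work, not a description of the actual machine; the point is that the physical trace is one fixed thing, while every symbolic "reading" of it is a separate table that lives outside the machine:

```python
# Toy model of a marble adding machine (the rocker mechanics here are
# an assumption for illustration). Physically it only does two things:
# flips rockers and lets marbles pile up in the tray at the bottom.

def run_machine(marbles_dropped, rockers=None, n_rockers=4):
    rockers = list(rockers or [0] * n_rockers)
    tray = 0
    for _ in range(marbles_dropped):
        for i in range(n_rockers):   # each marble flips rockers until one
            rockers[i] ^= 1          # catches it (a binary ripple)
            if rockers[i] == 1:
                break
        tray += 1                    # the marble aggregates at the bottom
    return rockers, tray

start = [1, 0, 1, 0]                 # an observer may read this as "5";
                                     # the machine just has two rockers tipped
rockers, tray = run_machine(3, rockers=start)

# One physical trace, several incompatible "readings" -- none of them
# facts about the machine itself:
as_binary = sum(bit << i for i, bit in enumerate(rockers))  # observer A: 5 + 3 = 8
as_count  = tray                     # the machine's own "theory": 1 + 1 + 1 = 3
print(rockers, as_binary, as_count)  # [0, 0, 0, 1]  8  3
```

Nothing in `run_machine` changes when you swap one reading for another; only the observer does.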

That's what I mean when I say that, by definition, symbolic information processors like our laptop computers require brain states as part of the system to operate. Without the brain states (which we can label as programmer, user, or observer) the minimum necessary requirements for symbolic information processing to exist are not met.

Aside from the fact that this is obviously special pleading on your part, can you explain this: when you "aggregate" rocks in your backyard, those rocks are actually "replacing" the air where they lie, so it isn't "aggregating" anything unless someone is there to tell the difference between rocks and air.

How does that factor into your argument?

Seems to me the only genuine "aggregation" would be the aggregation of fundamental particles in empty space.

I hope the above has clarified why no special pleading is necessary.

But as to the point about the rocks and the air....

If you want to claim that there is no difference between rocks and air if no one is there to notice it, then you're into solipsism, which I don't intend to argue against, I'll just have to leave you to it.

But I will say that it doesn't make much sense in terms of what's already been described.

If there were no inherent difference, then evolving beings wouldn't have noticed a difference, and would not have evolved brains that attend to that difference, and even develop ways to track it.
 
Emulated; neurons being emulated by artificial neurons.

There is all the difference in the world, though, between emulation by replication and emulation by representation.

If I create a tornado in a storm box, that's replication. (Both systems can be measured with the same tools, e.g. an anemometer, and neither system must include a brain state.)

If I create a tornado in a computer simulation, that's representation. (The physical computations of the second don't exhibit the properties which define the first, and a brain state must be part of the complete system for a full description.)

If you fail to keep the difference in mind when analyzing the second system, you are likely to grant the imaginary bits (the parts that require brain states, which is to say the "world of the simulation") the same physical reality as the non-imaginary bits (the physical computations of the simulator machine).

And if you use the term "emulation" for both processes, you are bound to make this mistake.
 
No.

Real aggregation is "physical addition" because it occurs in systems that don't require a brain state to be part of them.

If 2 wolves join a group of 5 wolves, or if 4 sticks drift into a group of 2 sticks, then real aggregation or physical addition has taken place.

I agree up to a point - but the concept of "wolf" or "stick" is something we apply to the universe, not something inherent. We classify things according to their behaviour, but is there an objective addition going on? When do the wolves join the group? At what distance? How big does a stick have to be to be a stick?

IMO, the concept of "addition" is extremely difficult to tie down in a physical sense. We can apply addition as part of a physical model, but as a physical event, it's so universal as to not be particularly useful.
 
Wait a moment (I thought I might be making a mistake putting all that stuff in a single post).

I started with a black box that replaces the visual cortex, and you appeared to agree that if it could interface with the incoming and outgoing neurons appropriately and reproduce the same outputs given the same inputs, that this could work - the patient could see and remain conscious.

Oh, sorry, looks like I deserve my own complaint. :o

You're right, I thought we had gone back to the point in the discussion about swapping the whole brain out. My mistake, sorry.

I then suggested a number of scenarios based on that, replacing more subsystems in the same way, and/or extending the scope of the original black box to encompass more of the brain function. Finally I suggested replacing the whole brain with a black box (half seriously, half in jest).

My purpose was to see, given you accepted visual cortex replacement (didn't you?), whether you feel there is a point beyond which replacing those subsystems with black boxes would 'break' consciousness, or whether you feel it is possible to have a human-like black box consciousness that doesn't necessarily function in terms of artificial neurons (this because of previous suggestions that the physical structure would need to be emulated).

These questions obviously require some knowledge and understanding of the functional architecture of the brain, and some idea or ideas of which parts of that architecture might be involved in consciousness (e.g. the frontal cortex, but not the cerebellum - which is an obvious black-box candidate). I assume you have enough knowledge of and opinions about these things, given your steadfast and authoritative statements in the thread.

In other words, I'm curious to know what extent of the brain you think it might be theoretically possible to replace in the way described, and still maintain conscious function. What do you think the constraints are, where do you feel problems might lie, etc. (given that we could produce such black boxes and connect them)? I think it's germane to the thread, but I'll understand if you don't want to tackle it.

Part of my motivation is that I think it may soon be possible to do this kind of replacement for real with very simple brains - to monitor the substantive inputs and outputs of a neural subsystem, train a learning system to reproduce the functionality, ablate the monitored tissue, and use the monitoring probes to enable the learning system to replace it. Clearly it's a long way from the pure speculation above, but it was the stimulus for it (ha-ha).
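For what it's worth, here is a minimal sketch of that monitor-train-ablate-replace loop, with a toy function standing in for the tissue and a plain linear fit standing in for the learning system. Everything here is illustrative, not a real protocol:

```python
# A minimal sketch of the monitor -> train -> "ablate" -> replace loop.
# The "tissue" is a stand-in function; the linear fit is just one
# possible choice of learning system.
import numpy as np

rng = np.random.default_rng(0)
W_true = 0.2 * rng.normal(size=(8, 4))      # hidden ground truth

def biological_subsystem(x):
    return np.tanh(x @ W_true)              # the tissue we get to probe

# 1. Monitor: record afferent and efferent activity for a while.
X = rng.normal(size=(1000, 8))
Y = biological_subsystem(X)

# 2. Train: fit a learning system to reproduce the I/O mapping.
W_fit, *_ = np.linalg.lstsq(X, np.arctanh(Y), rcond=None)

def replacement_black_box(x):
    return np.tanh(x @ W_fit)

# 3. "Ablate" the tissue; route the probes through the replacement.
test = rng.normal(size=(200, 8))
err = np.abs(biological_subsystem(test) - replacement_black_box(test)).max()
print(f"max I/O mismatch: {err:.1e}")       # tiny => I/O-equivalent so far
```

Real neural subsystems would of course need spiking, time-dependent models rather than a static map, but the logic of the replacement is the same.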

OK, now I get you.

And yeah, this really gets us to the big questions about consciousness, and how it relates to the larger brain structure.

The real tangle with a question like this is that we have really just 2 choices in how to imagine the black box replacing V.

On the one hand, we can try to imagine how we would literally wire up such a box.

On the other hand, we can simply assume that we have something set up that accepts all the neural bombardment that V takes in, and spews out all the neural bombardment V emits, from and to the right locations, and not worry about where the apparatus lives.

The difference between these two options matters less when we consider a brain with the consciousness function inoperable than when we consider one with the function up and running.

If we assume no consciousness, we can start with our simpler solution and just map inputs and outputs.

In theory, this should work... unless there are as-yet unknown factors in the electrochemical workings of the non-conscious brain that make physical arrangement in spacetime a factor -- which is possible, given what we know about power lines, for instance -- but since we don't have any reason to believe that at the moment, let's just let this conclusion stand provisionally and assume we can replace V with a black box that's I/O equivalent.

(This thing's gonna have a mind-boggling array of inputs and outputs, though.)

But because our black box isn't doing any of the physical computation that V actually performed, any brain processes which depend on that work will either fail or go haywire in some way.

If consciousness depends at all on the coherence and synchronization of its "signature" global brain waves, which is to say if they're not just noise like the heat coming out of my computer or the noise coming out of my truck, then it's quite likely our black box just created a conscious "blind spot" which will remove visual activity from consciousness.

And although we don't know yet, my bet is that the brain waves are not noise, since we have few other good candidates for mechanisms to synchronize the activity of disparate brain regions, which we know is happening in conscious experience.

You could try various configurations of black boxes for different areas of the brain, and you'd just have to see by trial and error what happened to the reports of a subject and the observation of the signature waves.

I would bet that some configurations would create impaired consciousness and others would prohibit consciousness.

(Which means that all possible attempts to simulate a conscious brain may simply fail to produce consciousness because they are logically inconsistent, like the moat in Junction Pass which doesn't work because they switched from letters to pumpkins, which also can't work because the moat won't run.)

But even that model is simplistic, considering the role of bundling, as in the McGurk effect, in which the visual image of a speaker's lips determines which of two ambiguous sounds we hear (say, B and F), even if we know in advance what the sound will actually be. That means the visual and auditory streams have been merged upstream somehow, in a way that consciousness doesn't have the tools to untangle.

This is why we are able to see a car rolling along the road, and not an overwhelming barrage of shapes, colors, sounds, and motions (which is what's really hitting our heads).

If a black box removes a physical component necessary for bundling across disparate regions, the effects could be quite unpredictable and very weird.
 
No.

Real aggregation is "physical addition" because it occurs in systems that don't require a brain state to be part of them.

If 2 wolves join a group of 5 wolves, or if 4 sticks drift into a group of 2 sticks, then real aggregation or physical addition has taken place.
Your distinction is arbitrary, and I don't see the point of it. Why should I say that 2 wolves joining a pack of 5 wolves performs a "physical addition", and that this is the sort of thing worth recognizing as a type in itself, different from all other sorts of addition? How come it's 2 wolves joining a group of 5 instead of 7 total wolves joining forces -- in which case it's not an operation at all? Or maybe the 5 that were a group were all stragglers at first, and then formed the group of 5... in which case, what physically happened was that 7 individual wolves joined together into a group, and we simply caught this mid-process -- making this a physical multiplication?
 
I would like to try another approach to this that hopefully doesn't spiral out of control. Let's partake in a thought exercise, taking baby steps, and see where people start to diverge.

So everyone here is in some sort of a space, which they can perceive only to a finite distance (meaning even if they are outdoors, there is some limit to the distance at which they can see stuff happening), looking at a computer and thinking about what is on the screen.

Does anyone disagree?

So far, so good.

Furthermore, in this situation, everything that exists is composed of fundamental particles arranged and behaving in various ways. There is no requirement that we humans be aware of all the types of particles, or that we understand them fully; only that everything in the situation must be composed of "something", and we can call the smallest units we can observe "fundamental particles", mainly because that is what physicists term them.

Does anyone disagree?

Still good.

Furthermore, the behavior of the particles can be described with mathematics. It is possible that new behaviors are discovered, and current mathematics can't describe the newly discovered behaviors, but when that happens we just supplement and improve our mathematics, so that in the end we can always describe all the behavior of all the particles we can observe.

I understand that things like quantum uncertainty are present, but those types of things can be counted under behaviors that aren't necessarily fully observable at all times, and anyway I think they are probably immaterial to the discussion -- we are concerned with the deterministic behaviors of particles here.

Does anyone disagree?

I'm with you, brother.

So to sum up the first baby step is to come to an agreement that we and the spaces we inhabit are composed of particles the behavior of which can be described mathematically. Whether the particles "follow" mathematics, or whether mathematics is just the "description" of the particles, or anything in between, isn't necessarily relevant.

If nobody disagrees we can move on to the next baby step in the thought exercise.

Cool.
 
Your distinction is arbitrary, and I don't see the point of it. Why should I say that 2 wolves joining a pack of 5 wolves performs a "physical addition", and that this is the sort of thing worth recognizing as a type in itself, different from all other sorts of addition? How come it's 2 wolves joining a group of 5 instead of 7 total wolves joining forces -- in which case it's not an operation at all? Or maybe the 5 that were a group were all stragglers at first, and then formed the group of 5... in which case, what physically happened was that 7 individual wolves joined together into a group, and we simply caught this mid-process -- making this a physical multiplication?

We are also adding all the hairs on the wolves, or multiplying the number of wolves by four to get the number of feet. I can't really see this as an actual physical process.
 
We are also adding all the hairs on the wolves, or multiplying the number of wolves by four to get the number of feet. I can't really see this as an actual physical process.
You two are on opposite ends of this. While it may have no preferred description, it is a physical process, and the number of wolves that result is indeed really 7. And yes, we can also discuss other aspects of the system, but that's not a discussion of the entities we label wolves.

Given that "wolf" is what we call it, and that it's an entity we learned to recognize (as opposed to invented), 7 is the only correct answer for how many there are.
 
I think that confusion stems from a refusal to accept that for many functions emulation is the same as simulation.

Case in point, if you have 4 neurons in series, connected to other neurons at both ends, and you replace all 4 with artificial emulators, the overall network should function the same.

Now replace the 4 with a single emulator that internally functions by simulating each of the 4 original neurons. If an emulator works by simulating anything, those simulations are also emulations.
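Something like this toy sketch, say (Python, with "neuron" reduced to a bare threshold unit purely for illustration; all the classes are invented):

```python
# Toy sketch of the two replacements described above.

class Neuron:
    """Bare threshold-and-fire stand-in for a neuron."""
    def __init__(self, threshold=0.5, weight=1.0):
        self.threshold, self.weight = threshold, weight
    def step(self, signal):
        return self.weight * signal if signal > self.threshold else 0.0

class ChainEmulator:
    """One box that replaces 4 neurons in series by simulating each of
    them internally -- the internal sims are themselves emulations."""
    def __init__(self, n=4):
        self.internal = [Neuron() for _ in range(n)]
    def step(self, signal):
        for sim in self.internal:
            signal = sim.step(signal)
        return signal

chain = [Neuron() for _ in range(4)]   # original: 4 neurons in series
box = ChainEmulator(4)                 # replacement: 1 box, 4 internal sims

for s in (0.2, 0.6, 1.0):
    out_chain = s
    for n in chain:
        out_chain = n.step(out_chain)
    assert out_chain == box.step(s)    # the rest of the network can't tell
```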

I think even piggy would agree that this will also preserve the properties of the network.

Yet when we extrapolate, there is some arbitrary point where people start to think the internal simulations cease to also be emulations.

It's funny, from our point of view, there's a similar point in your approach where the imaginary suddenly becomes real. :)

So let me tackle that.

And yeah, I do agree that in some cases there's no difference between simulation and emulation. In fact, I can describe those cases, I think: it's when no real work needs to be done by the physical system during that portion of the chain (except what might be coincidentally done by the original part and the replacement, such as output of random heat) and when the implementation of the hardware doesn't hinder the functioning of other subsystems.

And the necessary fallout from this is that simulations cannot also be emulations -- or in my lingo, representations cannot also be replicas -- at any point where the system does any real work that the replacement can't also perform.

That's why a representation (including a computer simulation) of a kidney can never replace the entire kidney -- it can't do the physical work. You have to have an actual dialysis machine for that.

That's why Pinocchio can never be a boy as long as he's made out of wood -- wood can't do the physical work that a human body does.

Or we can imagine Major Tom in a space suit, out repairing a space probe.

Let's say he can either use his jet pack to float back to his ship, or he can teleport like in Star Trek.

We'll ignore the problematic bits and just stipulate that he can be Wonkavisioned through space with the relative position and type of his particles preserved in the transfer of photons or whatever, and reassembled from that configuration into a corresponding collection of massive particles, a man in a spacesuit.

Now let's imagine that on this day Tom spots a space squid between him and the mother ship. A small one, but big enough to break something important, so he decides to kill it on his way back.

If David back on the ship teleports him, he can't kill the space squid.

But why not? The teleportation is real, the inputs match the outputs... but it's like I said, there is some real work to be done at that point in the chain which cannot be performed by the physical choice we've made for Tom's transportation (whatever it is) unless it kills anything in its beam, which it probably doesn't because then it would be too dangerous to use.

We could even program the transporter to rearrange the information about Tom's physical state along the way, run it through the same transformations the physical Tom would go through if he had killed the squid.

He would emerge with the memory of using his jet pack to return, and killing the squid along the way. Then he'd look out the portal and see the damn thing still there!

Ok, so getting back to the brain example, if we get down to the level of one neuron, it's likely (but not certain) that there's a medium available which we can use as a replica. And if there's only one action that's important (e.g. a neuron fired or it didn't), then it's hard to describe that as not also being a digital simulation.
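A minimal leaky integrate-and-fire sketch of that "one action that matters" point (the model and its parameters are illustrative only): whatever the continuous internal dynamics, the only event the next neuron sees per time step is fired or didn't-fire.

```python
# Leaky integrate-and-fire toy: continuous internal state, binary output.
def lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # continuous internal dynamics...
        if v >= threshold:
            spikes.append(1)      # ...but a binary external event
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.4, 0.4, 0.4, 0.0, 1.2]))   # -> [0, 0, 1, 0, 1]
```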

But if we try to zoom out to the level of the brain, specifically a conscious brain, then to believe it could be replaced by a machine designed to run computer sims, we'd have to assume that no physical work that is ever important to the functioning of the brain fails to also be produced, coincidentally, by such a machine.

That's a tall order.

And it becomes an insurmountable one when we consider that consciousness is a phenomenon observable in spacetime, which makes it objectively real -- my body as a physical object is generating a conscious experience right now which is locatable in both time and space (i.e., it ain't happening tomorrow in Paris) -- and the laws of physics demand that all real phenomena require some sort of work.

Therefore we can conclude that a simulation/representation of a brain, rather than a replica of a brain, cannot generate a real instance of conscious awareness -- as happens, for instance, when a baby begins to exhibit that behavior -- because the physical work of the machine is too different from the physical work of a brain to make it happen.

So yes, the point of separation you're talking about is very real.
 
That was my point. What you were saying was in no way a response to what I said, which was a very simple and clear statement about the nature of computation. It was phrased as if it was a response though. Such replies make communication difficult.

We are having communication difficulties, but it's both ways.

What I said was a response... I won't get into why because it doesn't matter....

Where were we? In all the confusion I lost count.
 
I mean a computer implementation that runs a computation supposedly equivalent to what is running in the brain - Turing-equivalent, technically. Such a computation wouldn't be running on artificial neurons, and it wouldn't be "plug-compatible" with the brain.
Why not? (to both).

The disagreement is not about the artificial brain - I think that most people would agree with that as likely workable, in concept anyway. It's with the idea that the functionality of said artificial brain could be expressed, without loss of functionality, as a computation, and that said computation could be implemented on any computing hardware.
But artificial neurons would work by computation; how else?
 
... It's quite obvious that any arbitrary computer can't be just stuffed into the brain cavity. Therefore it cannot have "exactly the same functionality".
Why should its functionality depend on where it can be stuffed?

In any case, we could conceive of an IO interface in the brain cavity that is wirelessly connected to the artificial brain residing elsewhere. How would that make the functionality different?
 
If I wasn't learning things from all the posts people are making, showing you how utterly wrong you are on all this stuff, I would have stopped reading this thread long ago.

That is how frustrating your empty arguments have become piggy.

Take this latest one for instance. Any computer we program must be "built" first by assembling the hardware, and the "programming" is done by changing physical properties of the hardware. We also know the brain typically develops over time more by changing synapse strength than by actually growing new neuron connections -- which is much closer to how we "program" computers than to how we "build" them -- though the brain must be "built" before it can be "programmed" anyway. Given all that, I don't know wtf you are talking about when you try to distinguish between "building" something and "programming" something. According to *any* formal definition you could come up with, "programming" is merely fine-grained "building" -- there is zero qualitative difference between the two.

For someone who is trying to bark up the tree of equivocating physical processes when no "observer" is present, you sure do pull a lot of arbitrary and unexplained distinctions out of you-know-where.

I'm glad you brought this up, because I wasn't aware there was any confusion on this point. However, I can see why my too-narrow use of the word "built" may have caused it.

Brains are built and computers are built. That's true.

Which means that you are 100% correct that programming is "fine-grained building". What the programmer is doing at the keyboard is changing the structure of the machine it's connected to, so that some of its parts behave differently when electricity runs through them than they did before.

(Fortunately, the programmer doesn't have to know what these physical changes are, or even be aware of them, to perform the operation remotely.)

But it is fine-grained building of a particular type. For instance, building a skyscraper isn't "programming it" in the sense we mean when we say computers are programmed.

Programming the computer involves manipulating it so that it moves in ways which are intended to mimic, in some useful form, a set of ideas in the head of the programmer, perhaps associated with some real-world system or some imaginary system... or perhaps randomly if anyone desires to be reminded of a random number or color or sound at any time in the future.

Computers can be made in many configurations, out of many types of materials, each with its own physical properties, some faster and easier to manage than others.

So the question is this: Can that kind of "building" alone allow us to take the material we're working in and turn it into a conscious object? Or will that end up being like programming a skyscraper into existence... the wrong kind of building?

Well, to answer that, we have to ask "What sort of building is necessary to make consciousness happen?"

The short answer to that, of course, is: We don't know.

There simply are no satisfactory answers right now.

That situation by itself means first that we cannot confirm that a machine which is not conscious can be made conscious by programming alone, without building some other structures designed for the purpose, not of supporting symbolic logic, but of enabling the phenomenon of conscious awareness.

But digging further, the brain is obviously doing some sort of real work to make the phenomenon happen. That's beyond doubt, and nobody in biology questions it that I know of.

If no work were involved, the phenomenon could not occur. Because there are no observable phenomena that have no real causes.

Are the signature waves noise, or are they doing some of the necessary work?

Nobody knows right now, but they're currently our best lead for a component of the work that has to happen somehow, the synchronization of patterns in disparate parts of the brain.

In any case, since we know that consciousness is the product of some sort of physical work of the brain, we're not going to get a human body, or a robot body either, conscious unless we've got some structures that are designed to do whatever minimum work turns out to be necessary.

This is what eliminates the possibility of a pure-programming solution.

It simply means that if the machine wasn't already equipped to perform this behavior to make the phenomenon occur in spacetime, you can't do the kind of fine-grained building called programming (which is intended to make the computer mimic some other system symbolically, regardless of the actual physical actions of the computer, which could be any number of possibilities) and expect to make it perform that behavior.

It's like saying you can take a machine without a CD player and make it play CDs by programming alone.

Can't be done, because programming doesn't get you the real work you need.

Programming and a laser and the other stuff, then yeah, you can perform the task and get a real phenomenon going, air molecules bouncing around.

Programming and the right hardware -- which will be more than just the hardware required to run the logic, because the laws of physics demand it -- will get you a real event of conscious awareness.

If you want to say, no, the brain or machine representing things to itself can cause conscious awareness, then you're in trouble.

There are several reasons why, but one fatal reason is that the brain represents things to itself via feedback in many known ways which are not involved in conscious awareness, although some are. So this requires us to follow up by asking "What makes the difference between feedback loops involved in conscious experience, and those that aren't?"

That's kinda where we are now.
 
I'm confused. The way I read this, you're claiming that you can derive the fact that consciousness cannot be programmed, but must be built, from the fact that you cannot see how a simulation would become conscious.

Is this your claim?

Not quite. A computer simulation cannot be conscious if consciousness requires any physical work on the part of the brain which is not also done by the computer.

Given what we know, we can safely conclude this is the case. (If it weren't, then it's difficult to see how consciousness could be postponed until after binding, and so forth.)

And if consciousness is a result, even in part, of the brute electrophysical activity of the brain, then building it would not constitute "programming" for the same reason that installing a traffic sensor under an intersection isn't considered programming.

I mean, you could call it that, but only if our universe is the computer.

Which is perfectly valid, but rather trivial.
 
I agree up to a point - but the concept of "wolf" or "stick" is something we apply to the universe, not something inherent. We classify things according to their behaviour, but is there an objective addition going on? When do the wolves join the group? At what distance? How big does a stick have to be to be a stick?

IMO, the concept of "addition" is extremely difficult to tie down in a physical sense. We can apply addition as part of a physical model, but as a physical event, it's so universal as to not be particularly useful.

Actually, I'd rather not get off into that because, interesting as it is, and informative as it may be about certain things, it's going to get us off topic.

Suffice it to say that things in the universe do aggregate and de-aggregate, especially from the point of view of a critter with a field of vision who is taking in a terrestrial landscape, and that this real behavior is what our brains have learned to, in some ways, represent symbolically.
 
Your distinction is arbitrary, and I don't see the point of it. Why should I say that 2 wolves joining a pack of 5 wolves performs a "physical addition", and that this is the sort of thing worth recognizing as a type in itself, different from all other sorts of addition? How come it's 2 wolves joining a group of 5 instead of 7 total wolves joining forces -- in which case it's not an operation at all? Or maybe the 5 that were a group were all stragglers at first, and then formed the group of 5... in which case, what physically happened was that 7 individual wolves joined together into a group, and we simply caught this mid-process -- making this a physical multiplication?

Well, you can move off into philosophical musings about it, but that's not really necessary in order to do what needs to be done here.

In fact, I don't see how it will help.

The point is, things do aggregate and deaggregate in the universe, moving into and out of clusters, and our brains evolved to notice this.

Arguing about where groups begin and end is like the argument over the border between a language and a dialect, or between atmosphere and outer space... a pointless exercise that doesn't help us use these terms in the contexts in which they're significant.

Aggregation in systems that don't need to include brain states is physical addition -- whether that's raindrops filling up a footprint to make a small puddle, or predators closing ranks to make a more formidable fighting force.

So if a recipe says to add a pinch of salt to the batter, it's not asking you to do anything symbolic, but physical.

Now let's compare that to what PixyMisa's marble machine does.

Physically, it simply aggregates however many marbles you put on top into a single pile at the bottom.

So any configuration of 3 marbles at the top will be physically aggregated as a group of 3 marbles at the bottom.

That's physical addition of the separate marbles into a group (philosophical debates over the borders of groups notwithstanding).

But if you bring a brain into the system which understands the symbols painted on it and the rules for its operation, then (and only then) do you have symbolic addition, in which the observing brain may be reminded of the number 26 or 52 or 8 rather than 3 when the 3 marbles make their run from the top down into the bowl.

That is the difference, and it's real and it's clear, regardless of the philosophical woolgathering we could do about where groups begin and end.
 
We are also adding all the hairs on the wolves, or multiplying the number of wolves by four to get the number of feet. I can't really see this as an actual physical process.

Hairs that are always together on a wolf aren't aggregating. They are aggregated.

Ditto the feet.

However, if you look at a wolf and it has 4 feet and you glance away and back and it has 3, your brain is going to notice, and cause you to stare at the wolf intently.

That's because unexpected deaggregation of an animal's feet is important.
 
But artificial neurons would work by computation; how else?

An artificial neuron could only work by physical computation.

If it doesn't, it has no means of communicating with the neurons it connects, which operate exclusively by physical computation.

I mean, you could include a symbolic computation in the mix, but why would you?

If you can replace

Neuron -> Neuron -> Neuron

with

Neuron -> Replica -> Neuron

Why bother with

Neuron -> Modulator -> Simulation -> Demodulator -> Neuron?
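In toy form (Python, with every function invented for illustration): when the replica already speaks the neurons' physical medium, the modulator/demodulator pair is pure overhead.

```python
# Sketch of the two replacement chains above (all names invented).

def replica(spike):                   # physical replacement: same medium
    return spike                      # in, same medium out

def modulator(spike):                 # physical signal -> symbolic message
    return {"amplitude": spike}

def simulation(msg):                  # symbolic computation on the message
    return {"amplitude": msg["amplitude"]}

def demodulator(msg):                 # symbolic message -> physical signal
    return msg["amplitude"]

for s in (0.0, 0.7, 1.0):
    assert replica(s) == demodulator(simulation(modulator(s)))
```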
 