
Explain consciousness to the layman.

Piggy said:
Yeah, the voltage changes are as real as the neural firings.

But the similarity between these changes is only significant if you know that one is supposed to symbolize the other.

That's so important, it bears repeating:

The similarity between these changes is only significant if you know that one is supposed to symbolize the other.
Not so fast, turbo.

First, you need to explain why the neural firings themselves are significant. Because if they aren't then any similarity with voltage changes in the hardware won't be significant either.

Why are those neural firings significant, piggy?

I never said that any neural firings were significant.

I said that the similarity between the two systems is only significant if you know that one is supposed to symbolize the other -- and not the other way around, nor that they each symbolize something else, nor that the similarity is accidental.

If you don't know that, or have some way of figuring it out, then you can only remark that the systems are similar.
 
I know you're just using this as a metaphor, but we're not talking about pickup trucks. In particular, we're not talking about a capability along the lines of "hauling a half-ton load". Anything along those lines in the realm of consciousness is an intention, and we really don't care, regarding consciousness per se, whether it initiates an intention to lift a physical cup or a simulated one. Whatever the initiator of intent does is the same in either case.

If I just put on VR goggles, and enter into a sophisticated simulation, I myself can lift a simulated cup. I do that with an intention of lifting up a cup--I happen to know it's simulated in this case, but I don't think I have to learn how to pick up a virtual cup like I would have to learn how to juggle--I'm pretty sure whatever goes on when I just lift a real cup is the same thing that goes on when I lift a virtual cup.

This is not true when we're talking about trucks hauling freight. We can't just hook up the truck to a truck simulator, and have its current freight-hauling capability simply apply to virtual freight.

So you're at the very least comparing apples to oranges. As much as you want to simply apply your truck metaphor, there is a very good argument for why that metaphor simply fails and isn't apt.

Note that no actual virtual people were invoked in the discussion of this point. All hypothetical stunts were hypothetically performed by a real actor (using existing technologies, even!)

It's quite possible for a truck to operate in a virtual environment, just like a person. I've been involved with testing motor vehicles in such environments, which are essential for objectively testing their performance - rolling roads, wind tunnels, etc. This works pretty much the same way as a human being interacting with a virtual environment.
 
There's no such thing as a symbolic computation that is not a physical computation.

I know.

But this is also true:

1. Not all physical computations are symbolic ones.

2. The physical computations themselves bear no evidence of being imagined as symbolic computations.

3. The use of any physical computation as a symbolic computation requires physical changes in the state of at least one brain.

Since we can't determine the symbolic computations from the information processor alone, and changes in brain states are involved in any symbolic computation (if only to decide that such a thing exists in the first place), we must conclude that symbolic computation is imaginary.
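To make point 2 concrete, here's a toy sketch (just an illustration with made-up bit patterns and scheme names, not anyone's actual hardware): the very same sequence of physical states reads as an addition under one symbol assignment and as nothing in particular under another. The states themselves don't vote for either reading.

```python
# Toy illustration: the same "physical" state changes, read under two
# different but equally consistent symbolic schemes. The bit patterns
# stand in for voltage levels in a register.

states = [0b0011, 0b0101, 0b1000]  # three successive physical states

def scheme_a(bits):
    """Scheme A: read the pattern as an unsigned binary number."""
    return bits

def scheme_b(bits):
    """Scheme B: treat low voltage as '1' and high as '0' (active-low)."""
    return (~bits) & 0b1111

print([scheme_a(s) for s in states])  # Scheme A sees [3, 5, 8] -- "3 + 5 = 8"
print([scheme_b(s) for s in states])  # Scheme B sees [12, 10, 7] -- no sum at all

# Both readings are internally consistent; the choice between them lives in
# whoever assigned the symbols, not in the states themselves.
```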


Which means that if you do not distinguish between the real (physical) and symbolic (imaginary) computations when thinking about what's going on during information processing, in which one is overlaid on the other, you'll come to confuse the imaginary with the real.

And that's where the notion of the "world of the simulation" comes from... granting the symbolic and physical computations equal status as real, when in fact only one is real, and the other imaginary.

Maintain that distinction, and the world of the simulation shifts immediately and permanently to the imagination (brain states).
 
Yeah, actually, adding processors can add functionality, if you keep in mind that we're talking about the physical functioning of a physical object. (Which is to say, physical or real computation, the kind stars and oceans and brains do.)

In that case, you can't... not for philosophical reasons, but because of the laws of physics... reduce it to a series of actions in sequence.

Take my truck, for example....

The real physical systems have to be shaped the way they are, and made of the things they are -- or real functional equivalents -- for that object to do what it does.

It cannot be made to happen very slowly, one step at a time.

And this is one of the fundamental errors at the root of the false "you can program consciousness" claim.

The brain is not symbolic, it's real. Its computations aren't logical, they're physical. Consciousness is a real phenomenon in spacetime.

And you can't substitute symbolic computations for real ones.

If you could, you could make anything happen by imagining it.

I have no idea what you are responding to here. We were talking about replicating a one-processor-per-neuron brain with a single-processor brain. You implied that the functionality of the first could not be replicated by the second. You were wrong.
 
I'm not talking about the interpretation of the programmer but of the entities inside the simulation.

You can mean one of two things by "the simulation".

Either you can mean the behavior of the machine, or you can mean the behavior of an observer's brain.

After all, these two things alone constitute the simulation, and both are necessary. (Without a brain to supply the symbolic associations, it's just how an object is acting.)

The "entities" inside the machine are patterns of behavior of computer parts, which aren't essentially different from how those parts behave when the computer is doing anything else, so there's nothing special about them.

And they aren't "inside" anything but the computer box.

It's in the behavior of the brain that we encounter what's been called "the world of the simulation". As the observer watches the behavior of the machine, which bounces light and sound off his head -- because that's what it was designed to do -- his brain state changes.

Maybe he used to imagine a building in a field. Now he imagines a pile of busted lumber in a field, which makes him think about a tornado.

The association between tornadoes and fields and buildings -- none of which, you'll note, we actually have here -- and the way the computer bounced light and sound off his head exists as a set of ideas in his head.

So that is literally where "inside the simulation" is... inside an observer's head.
 
That statement is clear, simple, and wrong in every possible way.


I'm glad you posted that.

I was going to use an example in which a person does the physical addition, and the machine does the "counting", but I didn't know how to explain a very simple machine in which the aggregation is done symbolically, too.

Now, remove the symbols from the machine and we'll have the situation I was discussing.

As it is, this contraption requires brain states in order to be an information processor.

And this will always be the case (by definition) unless we're discussing purely physical computations / information.

Without the symbols to trigger the brain states, it's just marbles rolling around a board.
 
In one of those coincidences that make you wonder if there's some small-world phenomenon at play, today's SMBC comic is eerily relevant to our interests here.

The punch line appears to be literally true.

The universe has all the necessaries to be called a program.

We've got stuff changing states in ways that can be described by rules.

That's pretty much all you need.

Our bodies are subroutines of the physical computations of the universe.

Our brains as well, of course.
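In the loose sense I mean, a "program" is nothing more than a state plus an update rule. A toy sketch (the rule here is a made-up cellular automaton, purely for illustration, not a claim about physics):

```python
# Purely illustrative: "stuff changing states in ways describable by rules".
# The state is a row of cells; the rule is elementary cellular automaton 110.

def step(cells):
    """Apply rule 110 to every cell, using its left/right neighbours."""
    rule = 110
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 15 + [1]  # arbitrary initial condition
for _ in range(8):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```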
 
You can mean one of two things by "the simulation".

Either you can mean the behavior of the machine, or you can mean the behavior of an observer's brain.

Observer? Why would that matter?

After all, these two things alone constitute the simulation, and both are necessary.

That's odd. I was under the impression that the simulation was running when I wasn't looking. How are both "necessary", unless you think interpretation has anything to do with the action?

The "entities" inside the machine are patterns of behavior of computer parts, which aren't essentially different from how those parts behave when the computer is doing anything else, so there's nothing special about them.

But what you're failing to see is that the "entities" inside the universe are ALSO patterns of behaviour. Conscious ones, and it doesn't matter if someone's in a lab, looking at the results of our universe. _WE_ interpret our own lives, thank you very much.

And they aren't "inside" anything but the computer box.

Irrelevant.
 
I have no idea what you are responding to here. We were talking about replicating a one-processor-per-neuron brain with a single-processor brain. You implied that the functionality of the first could not be replicated by the second. You were wrong.

Yes, I thought that was what we were talking about. It sometimes seems like there's both a real thread and a virtual thread (perhaps a simulated thread :p) going on...
 
I have no idea what you are responding to here. We were talking about replicating a one-processor-per-neuron brain with a single-processor brain. You implied that the functionality of the first could not be replicated by the second. You were wrong.

It was specifically stated (unless I misunderstand) that individual neurons would be replaced by artificial devices which replicated the physical function of the neurons. This would then eventually form a network which would produce some form of computation. This computation could then be performed on a single-processor computer with the same result.
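For what it's worth, that serialisation step is uncontroversial as a matter of computation. A toy sketch (made-up weights and a hypothetical threshold rule, not a model of real neurons): update every unit from a snapshot of the previous step, and a single serial loop produces exactly the state the fully parallel, one-processor-per-neuron version would.

```python
# Hedged sketch: a synchronous parallel update reproduced serially.
# The snapshot of the previous step is what every "neuron" would have
# seen simultaneously in the one-processor-per-neuron version.

def parallel_equivalent_step(state, weights, threshold=0.5):
    """Serially compute the synchronous update of a toy threshold network."""
    snapshot = list(state)  # freeze the previous step before updating anything
    return [
        1 if sum(w * s for w, s in zip(row, snapshot)) > threshold else 0
        for row in weights
    ]

# Toy example: three "neurons", made-up weights.
weights = [[0.0, 0.6, 0.6],
           [0.6, 0.0, 0.6],
           [0.6, 0.6, 0.0]]
state = [1, 0, 0]
for _ in range(3):
    state = parallel_equivalent_step(state, weights)
    print(state)
```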

It's important to note that the equivalence between the brain and the artificial neurons, and the equivalence between the artificial neurons and the computer, are different equivalences dealing with different issues; the relationship is not transitive and does not imply an equivalence between the brain and the computer program.
 
That question "How does an interpreter work?" is the really interesting one.
Yes, and you're not addressing it.
But we can't get to it unless we clear the deck of the kinds of mistakes that lead folks to think that a person who is being simulated on a computer could become conscious.
Sure, I can't wait.
In a way, it's true that the simulated person can't become conscious because simulations involve (require) observers.

But that's one of those "I was late because my daughter thought the sprinkler was a duck" sort of statements... perhaps true, but not fleshed out enough.
But you have a bootstrapping issue here. If a simulated person can't become conscious because simulations require an observer, then neither can a brain become conscious because a brain requires an observer. After all, the ultimate simulation is the real thing. And to just start with our knowledge that a brain does have an observer is jumping the gun--even begging the question.
The simulated person cannot become conscious because it is imaginary... that's another way of saying it, but just as unclear.
But this is where what people are really trying to tell you comes into play.

You say that simulations are imaginary. But why, if simulations are so imaginary, do we bother to actually build them? The answer is obvious. It's because we need them to actually exist in order to do something.

If you want an imaginary simulation, have Donald Duck build it. If I build it, it's going to be real. I'd really like to reserve the concept of "imaginary simulation" to the thing that Donald Duck builds in a cartoon, because if I'm the boss, and I pay good money to someone to run a simulation for me, they had better be able to put their hands on some physical device that is actually performing a bit of work. I want to have some way of firing them when they simply "imagine" that the simulation exists.

But it gets worse...
I think the key question to ask here is this: When we're talking about the man in the simulation, what all do we have to have in the room, so to speak?

Well, first, we need the simulator. We're assuming it's a computer here, which is fine. We need that thing. Let's say it's plugged in and idling.
No no no. You're trying to teach me a lesson of consigning all simulators to the world of Donald Duck. But you're coming up with all of your own tests.

I think I'll make a test of my own. My concern is that whatever rules you're coming up with have a bootstrapping issue. So to test this out, I want to make my simulation be an actual interpreter.

That's not a big problem. I need someone to simulate... I volunteer. I'm of sound mind and body, and I hereby attest to my own conscious state. Furthermore, being simulated requires no effort on my part--which is perfect for me!

But now I need something to simulate me. A computer is too easy--we can have debates indefinitely because some people have a major issue seeing a computer as being a real agency, while for others it's no problem. I have a much better idea.

I'm going to pick another physical system to simulate me. In fact, I'm going to choose as the simulator--westprog's brain. Oh, but don't worry, I'm not simply going to leave it at that--if I don't actually do any work building the simulation, then we can hardly call it one. So here's what I'll do. I'll set up the simulator by asking westprog to pretend he is me for the next few moments.

Now, you play with your simulation of a conscious entity, and I'll be running along with my own.
Now, funny thing about this idling... when you think about it, you could pick any collection of parts in that machine -- atoms or molecules or bigger pieces, whatever -- and assign any number of symbolic values to the kinds of changes they're making.
Sure. I'll give westprog a slinky and have him wait in a chair while I plan out my assignment of entities within my physical system that is simulating me to the entities in the simulated system. For giggles, I just had an interesting idea. While running my simulation, I'm going to run it in a simulated world. In fact, I won't even try too hard to make it realistic... I always wondered what it would be like if I could actually visit candy island. So I'm going to write a Candy Island simulator. To attach it to my simulation, I'll use virtual goggles and a headset.
And if you did that, you could then describe any number of possible worlds, and track their development over time by looking at what the machine was doing.
That's true. To start with, whenever westprog says anything in English, while pretending to be me, I'm going to map that to the equivalent English. We'll call that run "A". Maybe I'll run the simulation again, in run "B", and XOR the letters of each word uttered in English together, map that to ASCII text, and presume that my simulation is saying that.
In other words, you could simply decide, in your own mind, that these sets of changes represented logical values for each type of change, and in each case you'd come up with a possible world.
Right. In run "A", westprog says, "I can't believe it! You can even eat the trees here!" ...or something similar. I know because I always wanted to say that when I visited Candy Island.

In run "B", westprog's just spouting out gibberish, most of which is not only not pronounceable, but messes up my terminal.
So what does it mean to say that the world changes?

What do we need "in the room" for that to happen?
Well in my case, the simulated me is running around in a fictional virtual world where he is on an island made entirely of candy.
The computer can sit there and change state all day, but just as no human could examine the computer and determine the state of the world you've decided to imagine (except as one of a practically infinite number of possibilities) the computer itself could not do so, even if it somehow literally "knew" everything about itself.
And therefore, westprog cannot experience Candy Island. According to the rule.

But now I'm getting suspicious. I know westprog is conscious, and would even be conscious while pretending to be me. Even when he is immersed in a fictional world.

So something about your rule is bothering me.
The computer alone in a room does not, and cannot, constitute the world you imagine.

Why? Well, because you imagined it! It's a state of your brain. It only changes when you look at the machine and subsequently change your mind about what's going on in that world.
That's correct. I am the one that is imagining that westprog is me in Candy Island! It's all how I interpret the results when I look at where westprog is!

Only, something is wrong here. westprog is conscious the entire time, when I'm looking at the results or not. In fact, I'm sort of jealous--he's actually experiencing the wonderful virtual immersed world of Candy Island. I'm just peeking in from time to time trying to infer how much I would enjoy it if I were that simulated entity...

Yeah, yeah, I know. That's a mistake. I'm anthropomorphizing westprog.
This fact -- and it is a fact -- is extremely important.

In order for the state of the "world of the simulation" to change -- which is to say, the world which the simulation is intended to represent -- a brain needs to change states.
Well, certainly this particular one is true. westprog's brain state changes. Nevertheless, something is a bit odd about your application here. I can't help but think that this intended representation is supposed to be the one in my brain.

Well, let's try scenario B. Oh, yeah. Gibberish. That simulation's not working. I'm going to be nice to westprog, though, and not suggest dismantling it to debug it.
There is not enough information in the computer alone to make that happen.
Well, in my case, I think there's plenty of information in my simulation to make this happen. In fact, there's information to spare, and a lot of that information is probably doing other important things.
And this fact does not change when we somehow make the computer change in certain ways to custom fit what we imagine.
But it changes when I put something I know is an interpreter in the same situation. Maybe I should invoke special pleading when I do that.
So you see, we don't -- at this point -- have to ask about how those changes in brain state come about. And that, after all, is the same as asking "How does an interpreter work?"
My simulator's interpreter works just fine, thank you. But per all of the rules we went through, I am supposed to presume it doesn't work.

That's the problem. Now, you never bothered to consider the problem--if you note, my entire question to you was: if your rules were true, then how come we are interpreters? Hopefully you'll see what I'm getting at, since in my case, I added an example that actually was an interpreter, and all of your rules still suggested that I treat it as if it wasn't one.

Your rules are garbage. They conclude that the machine is not an interpreter. And I'm sure that's the conclusion you wanted to get. The problem is, they conclude that even if I put an interpreter in. So I don't trust that those rules actually even work.
I would love to move on to the question of how the interpreter works.
You'd better, before you start defending silly claims such as that westprog is not conscious.
But the explanation -- or, rather, the partial and still unsatisfactory explanation we have -- isn't going to make sense to anyone who still accepts that the world of the simulation is anything but imaginary.
Well, westprog was only pretending to be me, sure. But he's a real person. (Well, technically, in this case, he was imaginary, because it's a hypothetical experiment, but I hope you see that I was actually trying to raise that problem, which you managed to completely brush off with a long post.)
And the key to making the picture come into focus isn't so much focusing on the need for the brain state (or interpreter or observer)... the key is to be extremely rigorous in making clear distinctions at all times between the physical computations of the real world, on the one hand, and the symbolic computations we imagine them to equate to when we build and use information processors.
I want you to note something here though. I never believed the simulated version of me was actually me. But the simulated version of me here was actually westprog. And he was conscious. And he is real.
If you avoid conflating those two things, you won't make the errors that lead to conscious people inside of simulations who are experiencing the world which the simulation is intended to represent.
It depends. If their simulation is being run in a Donald Duck cartoon, you have a point. If not, see your problem case above.
 
I'm sorry, I assumed saying that would be stating the obvious. Take it as read that the black box can interface appropriately with the neurons providing its inputs and the neurons receiving its outputs.

Yes, but you've failed to consider in detail this question: Can the object you propose to use here actually do this?

That is, can your computer running your simulation of the brain simultaneously produce those outputs from the inputs and make the body it’s in (whether it’s mechanical or biological) conscious?

To answer this question, let’s first consider what it means to propose that our black-box is a computer.

As an analogy, we might imagine a country whose primary export is pumpkins, and to get from the farmland to the seaports, everyone has to cross the mountains at a town called Junction Pass.

It’s a very old town with mazes of complex streets of varying sizes, and byzantine customs that force people from different farms or who are shipping to different destinations to trade various numbers of pumpkins with each other at different kinds of intersections and so forth. You never know what you’ll end up coming out with, depending on who you meet along the way.

So that’s like our brain, with impulses coming in from sensory neurons and going out to motor neurons, with complex interactions determining how things come out.

Ok, suppose the folks in Junction Pass got tired of all the traffic and decided they’d go into the courier business.

They dig a moat around the town and hire an engineer named Escher to make it flow in a circle.

So now, when a farmer arrives at Junction Pass along any road, he dumps his pumpkins in the moat and hands a letter to a courier, saying how many pumpkins he’s got, where he’s coming from, and where he’s going, then gets on one of the ferries and rides around the city.

The couriers then walk through the city, writing down all the transfers of pumpkins on their letters, depending on who else they bump into along the way.

Once they reach the other side of town, they return the letter to its owner, he reads the new number, gathers that many pumpkins from the moat, and goes on his way.

What they’ve done is to substitute symbolic work for the physical work.

So if we want to do this for the brain, what we have to do is figure out all the rules of encounter for the impulses, convert the real neural impulses to the equivalent of letters (something else that’s real but isn’t the original thing, but can be translated out of it and into it, in a way), run them through the same transformations, then convert them back into impulses.

We can do that with a simulator that converts neural impulses to electromagnetic impulses with the help of a little hardware, manipulates them in a way equivalent to how the neural impulses would be manipulated using some other hardware, then reconverts them to electrochemical impulses with a final bit of hardware.
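Schematically, the sim amounts to this kind of pipeline (every name and rule below is hypothetical, purely to show the shape of it, not a claim about real neural hardware):

```python
# Hedged schematic of the sim being described: real impulses are encoded into
# stand-in signals, transformed by the same rules the real impulses would have
# followed, then decoded back into impulses. The "rule" is made up.

def encode(impulse):
    """Stands in for the hardware that turns a neural impulse into a signal."""
    return {"spike": 1, "rest": 0}[impulse]

def transform(signal):
    """Stands in for manipulating the signal the way the impulses would interact."""
    return 1 - signal  # made-up rule: invert

def decode(signal):
    """Stands in for the hardware that turns the signal back into an impulse."""
    return {1: "spike", 0: "rest"}[signal]

inputs = ["spike", "rest", "spike"]
outputs = [decode(transform(encode(i))) for i in inputs]
print(outputs)  # the "letters" get delivered; whether any pumpkin-work got done is the question
```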

Installing that hardware setup would be installing a sim.

It’s not a replica brain, because, well, there ain’t no pumpkins. If we had a replica brain, it would be doing all the physical work that a brain does. But since we’ve substituted a sim at this juncture – between the sensory input and the motor output (what your body senses and how it responds) – that work isn’t getting done.

Instead, what’s getting done is a very different kind of work, the kind of work that keeps simulator machines running.

Which creates a problem.

Because consciousness is caused by work, real work, that gets done by a brain at this stage.

Going back to Junction Pass, their courier scheme can get the pumpkins delivered, but if they make the swap and carry letters instead of pumpkins, anything that had to be done with an actual pumpkin in the city can’t get done.

Suppose, for example, that Escher didn’t know the town’s plans for the courier business, and his moat-moving machine is powered by pumpkins.

In his design, a certain portion of the farmers are routed near the pump, where they dump their crop into a chute, and the energy of the pumpkins falling through the chute powers the paddlewheel that makes the moat flow.

Well, now there are no pumpkins, just letters that can be redeemed for pumpkins, but that’s no good.

And that’s what we’ve done to our poor body donor.

Before we swapped the biological brain for the artificial one, we observed the body acting as it always had with regard to consciousness. Which we expected, since we know that consciousness is a biological function of the brain. We’d observe the signature waves syncing up as the body woke up, losing coherence as it fell asleep, syncing up in different ways during dreaming.

None of that’s happening anymore.

At the point where it’s supposed to be happening, very different sorts of physical things are going on.

Is there a conscious brain lurking somewhere in the electromagnetic patterns that we’ve used to work out the logic and figure out what sort of impulses to physically produce as output of the sim machine?

Only if you believe there are pumpkins somewhere in those letters.
 
No, as previously discussed.

No, you're not saying that the brain "hosted by a computer" is being simulated by it?

Ok, then you tell me what in the world that means?

How can a computer "host" an actual object? Can your computer host my truck? If it can't then it can't host my foot, or my brain.
 
I have no idea what you are responding to here. We were talking about replicating a one-processor-per-neuron brain with a single-processor brain. You implied that the functionality of the first could not be replicated by the second. You were wrong.

No, that conclusion actually isn't implicit in what I was saying.

Of course that's the case with information processing.

But I hope you're not saying that we can do something equivalent with physical systems and still have them operate the same way they do now.

Are you?

How are you going to do something like that with a star, for instance, and still have it be a star?

If you don't understand the point of my asking such a question, then again, you're not making the necessary distinctions between real and symbolic computations when considering the system.
 
It's a very good point. I suspect we'll only know for sure whether it will be a problem by trying to emulate brains. I also suspect the by the time we can emulate the simplest mammalian brain, we'll understand enough to know whether it's likely to be an insurmountable problem or not.


Exactly..... and I am glad you said emulate not simulate.
 
To put it another way, let's use the analogy of a physical map. Suppose I had a map of roads in France. This means we can have maps of locations. Furthermore, maps are actual things--they are made of ink and paper. Or, at least, the ones we know about.

But we can also share maps. You can give me your map of Jamaica. It's still a map, and it's still a thing. But now you want to build an argument based on the fact that we can draw and share maps of Oceana, Candy Island, and Mario World. Sure, we can do that--but they are still maps.

But now you're trying to argue that we cannot make maps on blackboards. Well, maybe that's right. Blackboards are not paper and ink. But I'm having difficulty following your argument, because I don't see the inherent limitation. Well, you don't know how the map of France gets to actually be one of France, but you know that pen and paper maps exist, and somehow get to be of France--perhaps being a map of a thing requires really complicated stuff. But, you surmise, if we just draw a line on the blackboard, that could be anything. That line could be a road, or the side of a hospital. And no matter how many lines we draw, they would simply be dusty lines that could be anything, and we know our maps of France involve nice thin lines embedded in sheets of paper, and also somehow wind up being of France.

I object that your argument against blackboards being maps doesn't hold. Maps are just things, and we know they exist, and get to be about real places. All you need to do in theory is look at pen and paper maps and figure out how they can be maps of France, then see if that requires ink, or if drawing lines suffices. Furthermore, best as I can tell, your argument precludes even the possibility that pen and paper maps can be of France, because even if it does require ink--though I'm not sure why it would--the ink lines themselves could be roads or the side of hospitals. I have no idea why you think a line gets to be of France solely because that line is drawn in ink, and I don't buy the argument that we have to make the blackboard map an ink map to work simply because the ink map, which we know does work, uses ink.

But you object that just because we can put picture of a blackboard with some lines on it into one of our maps doesn't mean what's drawn on the actual blackboard will ever correspond to a particular place. And, sure, that's fair; the one doesn't automatically imply the other. But I still don't see why you think a blackboard cannot be a map of France.

But then you go on to say that since we can draw a map of Candy Island, it's equally likely that a blackboard can be a map of France as it is that Candy Island exists.

And to that, I say, bollocks. The fact that both are things we put in maps does not entail they are equally likely. The argument is fundamentally broken here.



:clap:

This is the most exquisite epitome of a Straw man Fallacy I have ever seen.

Well done sir.... I bookmarked this post for future reference to what a straw man fallacious argument should look like.


In case you are not sure where you have set up the straw man so adroitly, have a look at all the highlighted bits.

I salute your straw man construction dexterity and tip my straw hat to you sir. :th:


And to that, I say, bollocks.


Exactly..... Well said..... that is exactly what your post is..... in fact I see your bollocks and add a dash of balderdash and loads of hogwash accompanied with a surfeit of twaddle to perfectly describe the bovine fecal matter you so nimbly concocted.





Nominated.



I was thinking about nominating it too for the most hilarious failure in logic…. But the guideline for what should be nominated did not include that category.
 
That's irrelevant to my point. I'm sure you really want to make a point to correct something I never said, but all I need to establish is that at least one kind of physical lump of matter does this thing.


No…. Have you heard of the hasty generalization logical fallacy?



Nevertheless, you're also begging the question. How do you know that only one kind of physical lump of matter does this thing, unless you're particularly loose with your definition of "kind"? We're supposedly talking about physical reality. That's a pretty big place. We're talking many quintillions of cubic light years of just observable space--and all that's at issue is the existence of a second type of entity doing something that we know for a fact at least one type of entity within this space already does.


Can you show me another? What may or may not exist in the wide expanse of the universe is immaterial to the argument at hand in any case. We are arguing about whether a silicon chip can become a conscious entity.....not whether there are aliens and gods.


I'm perfectly willing to say "at least one type of entity" in this kind of space exists which is conscious. I'm even willing to allow some skepticism for moving up to the claim of "at least two types of entities probably are conscious"--that is, I agree, a huge step.

But a claim that only one type of entity is conscious because you only know of one? I think you're going to have to back that one up. Huh? If I demonstrate that at least one entity does a thing, isn't that sufficient to demonstrate that an entity can do it?

You're shifting the burden (and furthermore, establishing an argument from personal incredulity). Try this. If you can show that a brain is required for consciousness, then maybe I'll entertain your theory that a silicon chip cannot produce it.
Huh?

I don't agree that you have a proper epistemic inference. Show me your work.

I vehemently disagree with the rule: "I only know of one type of thing that does X. Therefore, nothing can do X unless it is that type of thing." If this were a valid epistemic rule, we would not have such ubiquitous concepts as "proof of concept".



The above is nothing but an argument akin to RELIGIOUS arguments.

We were discussing what we KNOW HERE AND NOW….. not what might be lurking in some corner or GAP of the universe.

Are you a theist? By your above theistic style argumentation we cannot dismiss Fairy Godmothers nor Thor and his hammer, since they could be out there too.

We were discussing consciousness as we know it here on earth….. not the possible SPIRITS and other constructs that you and theists can concoct.
 
So let me get this straight.

Your argument is that since "imagining a conscious silicon chip running a simulation" is an idea, and "a Superman who leaps higher than skyscrapers" is also an idea, then an actual conscious silicon chip must be just as likely as an actual Superman leaping higher than skyscrapers.

Is that representative of what you're arguing? If so, I leave it up to the reader whether or not he agrees that this argument is sound.


No… my argument is that an imagined conscious silicon chip running a simulation is just that…. An Imaginative construct.

Whether it might come to be true or not is not proven by the strength of the imaginative process nor by the fact that you can imagine it.

If you think that by virtue of being able to imagine the device it makes it more possible, then consider how possible a Superman is by that same virtue.

Don’t forget that the post was in reply to your gobbledygook in this post about imagined stuff being real stuff since it is occurring in a real brain.

Just because you can imagine something does not strengthen the likelihood of its existence nor does it weaken it.

The likelihood of something is dependent on the laws of physics and material reality, not on what fictive process can be realized in someone's head.

Sure.... an inventor can imagine something and then proceed to construct it. But if his/her imagination violates the reality of material physics then no matter how imaginative or inventive or adroit the person is s/he is not going to realize the imagined thing because it violates the laws of physics.

So if a silicon chip becoming conscious violates the laws of physics then no matter how hard we try it will not happen.

If on the other hand it does not violate the laws of physics then it may be possible.

Therein lies the crux of the argument at hand.

Is a sentient silicon chip running a simulated program a violation of the laws of physics?
:confused:

I contend that it MIGHT be so..... I think that unless the device trying to emulate the brain actually carries out a similar physical interaction as the brain it most likely will not achieve consciousness.

My HUNCH is based on the fact that AS FAR AS WE KNOW HERE AND NOW (and not in imagined possible other realms and fiction) there is SO FAR no other physical process that has produced consciousness other than CEREBRAL CORTEXES and not even all of them at that.

So unless you think like theists and wish to advocate the UNKNOWN as proof to negate what is known then that is your prerogative..... but it is not a logical one.
 