
Has consciousness been fully explained?

rocketdodger said:
Something I don't understand about this: how to reduce to a computational description what it is (normatively) about any particular computational description - its truth and relevancy - that makes it useful and distinguishable from noise?

How does one formalize the making of such judgements?

This isn't a big deal; you just can't reduce everything to a computational description at the same time because, obviously, there is no way to then include the reduced description in the reduced description.

In other words, you can't see the back of your own head if you are also looking at everything else.


My original wording was confusing. I fixed it.
 
Something I don't understand about this: how to reduce to a computational description what it is (normatively) about that computational description - its truth and relevancy - that makes it useful and distinguishable from noise?

How does one formalize the making of such judgements?



Are you asking 'how do we build meaning from the ground up?'
 
My original wording was confusing. I fixed it.

The same thing applies.

All of this becomes very clear if you learn about A.I., Frank. If you are interested in this issue, and know a little computer science, you should pick up a copy of the book "Artificial Intelligence: A Modern Approach" and read through it. Or you can go to the Wikipedia artificial intelligence portal, but personally I find it much less accessible than the book.

At a fundamental level the primary task of an A.I. programmer is to find ways for an agent to describe the environment state it cares about. For trivial A.I. this can be just a bunch of static data. But as the complexity of an agent's desired behavior increases, it becomes less and less viable to take a brute force approach and try to account for all possibilities ahead of time. You need to think up ways to allow the agent to learn about not only things in the world but also relationships between things in the world.
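To make that concrete, here's a minimal sketch of what I mean -- every name in it is made up for illustration, and it's a toy, not a real agent architecture. Instead of hard-coding static data about the world, the agent accumulates things and the relationships between them as it observes:

```python
# Toy sketch (hypothetical names throughout): the agent builds up a
# description of its environment instead of having it all hard-coded.

class Agent:
    def __init__(self):
        self.things = set()         # things the agent knows exist
        self.relations = set()      # (subject, relation, object) triples

    def observe(self, subject, relation, obj):
        """Record an observed relationship between two things in the world."""
        self.things.update([subject, obj])
        self.relations.add((subject, relation, obj))

    def related(self, subject, relation):
        """Query everything the agent has learned about a subject."""
        return {o for (s, r, o) in self.relations
                if s == subject and r == relation}

agent = Agent()
agent.observe("key", "opens", "door")
agent.observe("door", "leads_to", "hallway")
print(agent.related("key", "opens"))   # {'door'}
```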

And at this point things really start to get interesting, and an educated observer might start to see parallels between the patterns of information processing and the way humans think.

For instance there are entire branches of A.I. dedicated to the data structures and algorithms of logical inference -- reasoning. Did you think reasoning was something only people do? Very wrong -- we have known how to program machines to do it for decades. The key is finding ways to reduce logic -- the relationships between things in the real world -- to the simplest representations and steps possible, things that a computer can deal with.

http://en.wikipedia.org/wiki/Automated_reasoning
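To give a flavour of how mechanical this can be, here is a toy forward-chaining reasoner -- one of the simplest styles of automated inference. The facts and rules are invented for the example; real systems use far richer representations:

```python
# Toy forward chaining (facts and rules invented for illustration):
# "reasoning" here is nothing more than repeatedly applying rules to a
# set of known facts until nothing new can be derived.

facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:                       # loop until a fixed point is reached
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die'}
```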

And that is the kind of stuff that answers your question -- how can a formal language itself be formalized, or rather, how can the very concept of language itself be formalized?

You can get a high level understanding of this and many other topics that are both fascinating and directly relevant to this discussion if you take a little time to read up on A.I.

Although intuitively it should be clear that you can formalize the idea of language using language because you can do it in English: it isn't hard to describe to someone what language is, is it? Linguists do it all the time.
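Here's a toy version of that point in code (the grammar is invented for the example): the notation that describes the language is itself just more formal structure, which a program can read and use:

```python
# Toy grammar (invented for illustration): a formal description of a
# tiny language, written in another formal language (Python data).
import random

grammar = {
    "SENTENCE": [["NOUN", "VERB"]],
    "NOUN":     [["robots"], ["people"]],
    "VERB":     [["reason"], ["learn"]],
}

def generate(symbol):
    """Expand a grammar symbol into words, choosing productions at random."""
    if symbol not in grammar:        # terminal: a literal word
        return symbol
    production = random.choice(grammar[symbol])
    return " ".join(generate(s) for s in production)

print(generate("SENTENCE"))   # e.g. "robots reason"
```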
 
No, it would not, because it is not designed to.

Running a simulation of a power plant is not the same as running a power plant.

I'll let westprog review the details of that fact for you.

It might be that a simulation of a power plant would be easily converted to run a power plant. In general, it won't be, and if it can be converted easily, then it was probably intended for conversion in the first place.
 
We can easily make interfaces that let electronic devices transfer signals along nerves.

Signals - yes. Signals that allow us to functionally replace nerves - no. Nerve replacement is still not possible. Some of the biologists/doctors posting here might have a better idea as to how close we are to doing it, but as of now, nerve damage is permanent.
 
Signals - yes. Signals that allow us to functionally replace nerves - no. Nerve replacement is still not possible. Some of the biologists/doctors posting here might have a better idea as to how close we are to doing it, but as of now, nerve damage is permanent.

We can't replace individual nerve cells yet, true, but this is hardly a technological impossibility in the future. Or is that what you are saying?
 
Now if you'll take the next step and agree that, without the 'telling', it's as meaningless as a landslide. Machines don't recognize or care about the meaning of design or not-design. If a fault occurs, the computer won't be a bit bothered if the output says 2+2=5.
There are computer systems that check their results by various means, and can automatically correct the result if such an error occurs (for example, aerospace control systems are often implemented as three independent subsystems, using different algorithms, that vote on the result in case of disagreement).
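As a rough sketch of that voting idea (the 'channels' below are trivial stand-ins, not real control code): each result is computed independently, and a single faulty channel is simply outvoted:

```python
# Toy 2-of-3 voter (channels are stand-ins for independent subsystems):
# a majority result wins; a lone faulty channel is detected and ignored.
from collections import Counter

def vote(results):
    """Return the majority result, or raise if no majority exists."""
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority -- all channels disagree")
    return winner

channel_a = 2 + 2        # healthy channel
channel_b = 2 + 2        # healthy channel
channel_c = 5            # faulty channel: "2 + 2 = 5"

print(vote([channel_a, channel_b, channel_c]))   # 4 -- the fault is outvoted
```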

Whether such a system is 'bothered' or 'cares' is an anthropomorphic judgement; a discrepancy is detected and corrected. You could say that the system cares because it checks for discrepancies, and that when a discrepancy is detected it must be bothered, because it corrects the error. Alternatively, you could argue that it is the designer who really cares, because the designer implemented the system in such a way (and I could argue that the designer is part of the system).

You could also argue the same about human behaviour - it is natural selection that really cares, because it implemented us in such a way that we take 'caring' actions. That we are capable of conceptualising these behaviours with such abstractions doesn't change the behaviours, only the way we think about them.
 
We can't replace individual nerve cells yet, true, but this is hardly a technological impossibility in the future. Or is that what you are saying?

No, I'm just pointing out that it hasn't been done, so any claims as to the way it will be done are close to guesswork. Certainly it isn't a matter of plugging a conductor into a circuit. I assume that it's possible, and since there are very strong incentives to make it work, I assume that it's very difficult and complicated.
 
No, I'm just pointing out that it hasn't been done, so any claims as to the way it will be done are close to guesswork. Certainly it isn't a matter of plugging a conductor into a circuit. I assume that it's possible, and since there are very strong incentives to make it work, I assume that it's very difficult and complicated.
Technology has already been implemented to connect electronic light sensors to retinal nerve cells to restore some vision (admittedly crude at present). There has also been plenty of research into direct microchip-to-nerve interfaces, and I understand that, in vitro, some of these interfaces are now quite effective and persistent. Sorry, no references.
 
You could also argue the same about human behaviour - it is natural selection that really cares, because it implemented us in such a way that we take 'caring' actions. That we are capable of conceptualising these behaviours with such abstractions doesn't change the behaviours, only the way we think about them.

This is a point I have brought up over and over.

The computationalist position doesn't anthropomorphize machines.

The anti-computationalist position anthropomorphizes humans.
 
We can all hope he understands something better than he understands what 'anthropomorphize' means.
 
This is a point I have brought up over and over.

The computationalist position doesn't anthropomorphize machines.

The anti-computationalist position anthropomorphizes humans.


Pretty sure you mean 'deifies' humans, or something like that.

Though thanks for the chuckle :D (of the sort I provide on a regular basis). ;)
 
What computationalists want to pretend is that living things, including humans, are just machines.
 
The same thing applies.
If you are interested in this issue, and know a little computer science, you should pick up a copy of the book "Artificial Intelligence: A Modern Approach" and read through it.

How much is 'a little' computer science? I know fairly well how computers work, but I have only a vague knowledge of programming.
 
Are you suggesting that computationalists are wrong, and that life (living things) are not machines ...

Why yes, I am. In my worldview, intent is just part of the fabric of space-time; no "magic" required. Intentful life is the point of complexity where it becomes obvious.
 
Why yes, I am.
Well if you're really saying that a living thing is not... 'An intricate natural system or organism', I don't think this discussion is likely to make useful progress.

In my worldview, intent is just part of the fabric of space-time;
At the risk of nagging, can you explain what you mean by that?

no "magic" required.

Perish the thought.

Intentful life is the point of complexity where it becomes obvious.
Sorry, that parses in several different ways, can you rephrase it?
 
It's just the alternative to the ontological choice 'body', or 'physical', or 'material', or 'whatever you choose to name the ontology that has no intent'.
 
It's just the alternative to the ontological choice 'body', or 'physical', or 'material', or 'whatever you choose to name the ontology that has no intent'.
Are you saying the fabric of spacetime is purposeful and has a goal or goals?

If so, how do you know this, and do you know what this goal (or goals) is/are?
 
