Has consciousness been fully explained?

A calculator or computer does something real in the real world. It follows the algorithm that we call addition if we tell it to do so, even if we are not there to watch the result. It displays those numbers on a screen or a printout -- real things happening in the real world. We can describe what it does symbolically, but its actions are quite real.

No no no. That is not what it does in the real world. Take a good look at it, you'll see.

It's only in our imaginations that it does such things. Just like the abacus, what it does is to obey the rules of physics.

Yes, lighting up the display is something it does in the real world.

Adding things together is not. That depends entirely on your imagination.

On the other hand, a machine that packs cans in crates actually does aggregate things in the real world. It takes items and groups them in aggregates of, say, 12 or 24.

A calculator does no such thing. We have simply set it up to do physical tasks that have nothing to do with aggregating or separating and such, in a way that is designed to trigger our imaginations so that we can envision such things when they don't really happen.
 
No, it is not in our imagination that it does this. It is the program, intentionally following an algorithm, that determines which logic gates open and close -- real actions controlled from the top-down with a real output. Manipulating symbolic numbers, where those symbols are given meaning by the person who programmed the machine, is still adding regardless of whether or not the output is viewed.

A dropped abacus does not intentionally follow an algorithm, so its output must be viewed by someone to constitute addition.
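
To make the gate-level picture concrete, here is a minimal half-adder sketch in Python (purely illustrative, not any particular calculator's circuitry). The gates do nothing but flip bits; whether that bit-flipping counts as addition in itself, or only under our interpretation, is exactly what is in dispute above.

```python
# A half adder: two logic gates whose physical analogue is what a
# calculator actually does. The "addition" reading is the symbolic
# interpretation layered on top of the gate behavior.

def xor_gate(a: int, b: int) -> int:
    return a ^ b  # sum bit

def and_gate(a: int, b: int) -> int:
    return a & b  # carry bit

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two single bits."""
    return xor_gate(a, b), and_gate(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```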
 
I'd go a step further and say that everything we perceive a device, or any other object, to do is a subjective interpretation we attribute to it. The "physical" and "symbolic" activity that a machine carries out are just two possible interpretive layers that an individual(s) can give to it.

What we call the "real world" is just a collective set of raw data that our individual units of consciousness can read and give subjective meaning to.
 
Are you asking 'how do we build meaning from the ground up?'


Don't think so; I'm just trying to point out that description depends on what is normative: is something true, is it false, is it relevant or not, and so on. Being able to pursue goals depends on this.

Humans achieve this through a practice of some sort. Over time. I don't see how an ability to meaningfully judge can be incorporated into an algorithm.
 
This isn't a big deal; you just can't reduce everything to a computational description at the same time because, obviously, there is no way to then include the reduced description in the reduced description.

In other words, you can't see the back of your own head if you are also looking at everything else.


I think you may have illustrated the problem. Humans achieve what they do over time through a practice, not all at once through a formalism.

All of this becomes very clear if you learn about A.I., Frank. If you are interested in this issue, and know a little computer science, you should pick up a copy of the book "Artificial Intelligence: A Modern Approach" and read through it. Or you can go to the Wikipedia artificial intelligence portal, but personally I find it much less accessible than that book.

At a fundamental level the primary task of an A.I. programmer is to find ways for an agent to describe the environment state it cares about. For trivial A.I. this can be just a bunch of static data. But as the complexity of an agent's desired behavior increases, it becomes less and less viable to take a brute force approach and try to account for all possibilities ahead of time. You need to think up ways to allow the agent to learn about not only things in the world but also relationships between things in the world.

And at this point things really start to get interesting, and an educated observer might start to see parallels between the patterns of information processing and the way humans think.

For instance there are entire branches of A.I. dedicated to the data structures and algorithms of logical inference -- reasoning. Did you think reasoning was something only people do? Very wrong -- we have known how to program machines to do it for decades. The key is finding ways to reduce logic -- the relationships between things in the real world -- to the simplest representations and steps possible, things that a computer can deal with.

http://en.wikipedia.org/wiki/Automated_reasoning

And that is the kind of stuff that answers your question -- how can a formal language itself be formalized, or rather, how can the very concept of language itself be formalized?
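
As a toy illustration of the mechanical reasoning being described, here is a minimal forward-chaining sketch over propositional Horn clauses (an invented example, far simpler than the systems the linked article covers):

```python
# Minimal forward chaining over propositional Horn clauses:
# each rule is (premises, conclusion); the fact set grows until
# no rule can add anything new (a fixpoint).

rules = [
    ({"rains", "outside"}, "wet"),
    ({"wet"}, "cold"),
]
facts = {"rains", "outside"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # the rule "fires"
            changed = True

print(sorted(facts))  # ['cold', 'outside', 'rains', 'wet']
```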


My question is how is making normative choices formalizable?

You can get a high level understanding of this and many other topics that are both fascinating and directly relevant to this discussion if you take a little time to read up on A.I.

Although intuitively it should be clear that you can formalize the idea of language using language because you can do it in English: it isn't hard to describe to someone what language is, is it? Linguists do it all the time.


Linguists study formal systems of rules followed by fluent language speakers, and how meaning is inferred from words and context.

How do you formalize knowing what linguistic practice will tell you: when a person is making a meaningful utterance or mere noise?

Without that you can't know what to include in the grammar structure.
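
For reference, the "formal systems of rules" linguists write down look roughly like this toy context-free grammar (a made-up fragment, not a serious model of English). Note that it only generates or recognizes well-formed strings; nothing in it decides whether an utterance was meaningful rather than noise, which is the question being pressed here.

```python
import random

# A toy context-free grammar: rewrite rules of the sort linguists
# formalize. Uppercase symbols are rewritten; anything not in the
# grammar is treated as a terminal word.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["linguist"], ["grammar"]],
    "V":  [["studies"], ["describes"]],
}

def generate(symbol: str) -> str:
    if symbol not in grammar:          # terminal word
        return symbol
    production = random.choice(grammar[symbol])
    return " ".join(generate(s) for s in production)

print(generate("S"))  # e.g. "the linguist studies the grammar"
```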
 
Was wondering where you were with your thoughts. :)
 

There are some truths that cannot be formalized - Gödel
The hand waving the hand waving the hand - Escher
All the theory without the notes does not make it sound - Bach
 
Hey, we're all friends here right?

No one is getting angry or upset about stuff, I hope.

Well, I'm still talking to you. I haven't found you to be gratuitously abusive or over-sensitive. I hope I'm keeping to the same standard.
 
I think you may have illustrated the problem. Humans achieve what they do over time through a practice, not all at once through a formalism.

Computers don't need to be taught every detail. Give them a general idea of a goal and they can teach themselves. Admittedly the technique is still young, but it's done some amazing stuff.

Look up genetic algorithms.
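
For readers who don't want to look it up, here is a bare-bones genetic algorithm sketch (the target string, population size, and mutation rate are all invented for illustration). Only the goal, expressed as a fitness function, is specified; the solution is evolved rather than programmed step by step.

```python
import random

# Bare-bones genetic algorithm: evolve bit strings toward a target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(ind):
    """Number of bits matching the target."""
    return sum(a == b for a, b in zip(ind, TARGET))

def crossover(p1, p2):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.05):
    """Flip each bit with small probability."""
    return [b ^ 1 if random.random() < rate else b for b in ind]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                               # target reached
    parents = pop[:10]                      # keep the fittest half
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(10)]

print(gen, pop[0], fitness(pop[0]))
```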

How do you formalize knowing what linguistic practice will tell you: when a person is making a meaningful utterance or mere noise?

Man, I sometimes can't tell. To be fair my circuitry for processing sounds is apparently a bit impaired (had it tested once when I was getting tested for ADHD).
 
No, it is not in our imagination that it does this. It is the program, intentionally following an algorithm, that determines which logic gates open and close -- real actions controlled from the top-down with a real output. Manipulating symbolic numbers, where those symbols are given meaning by the person who programmed the machine, is still adding regardless of whether or not the output is viewed.

A dropped abacus does not intentionally follow an algorithm, so its output must be viewed by someone to constitute addition.

There has to be a person involved. But I don't see how the intentions of the person imbue the calculator with intentionality or meaning.
 
We do have research that has transmitted information along the optic nerve, actually. I could look that up if you want (I was thinking about the spine when responding to you, which I guess is a lot harder, so I forgot that like an idiot). They are working on artificial eyes for the blind, and I've looked into it a few times since my mother is blind (sadly her optic nerve is basically dead due to how she became blind, so she'll have to wait until the back-of-the-head concept gets worked out). We also have sensors that pick up signals from nerves, used to activate artificial arms.

We definitely have excellent starting points. Refining the technology is basically the only difficulty. Part of it is that nerves are pretty darn small. I expect 20 or 40 years from now blindness will mostly be a thing of the past. Possibly sooner, but the people with non-functioning optic nerves pose extra difficulties.

Does it actually replace the optic nerve, or transmit information along it? Different things.

I'm certainly not claiming that nerve/neuron replacement is impossible - just that it's quite difficult.
 
Was wondering where you were with your thoughts. :)

Hehe.

The past year has been reeeally busy and interesting. Between school, work, my mom going thru chemo, and attempts to maintain a social life thru it all I haven't had nearly as much time or interest in playing on the forums. Now the semester is starting to wind down and I figured I'd give my thoughts a stretch again :D
 
There are some truths that cannot be formalized- Godel
The hand waving the hand waving the hand - Escher
All the theory without the notes does not make it sound - Bach

I'd promised Pixy last year that I'd read a copy of GEB but never had the time to. Then, outta the blue, some girl I've been going out with decided to give it to me as a b-day gift a few weeks ago. Now I'm diggin into it to see the wutz wut. Gonna see what I can take from it ;)
 
Well, I was partially serious.

I don't think it is valid to assume that just because some other human is similar to you they probably experience the same conscious states as you.

People who think that don't understand the nature of neural networks. It just doesn't work that way.

I don't assume my consciousness is anything like yours, at all. Yeah we both see in color, and might have similar basic top level sensory perception networks, but everything else in our brain is 100% unique to us. The topography of our networks might be similar, but only like fingerprints are similar.

If I hooked up your hearing to my brain, the result is likely to be incomprehensible noise. If I hooked up your memories, it would be nonsense. If I tried to see with your visual cortex, it would be mayhem.

So I say people anthropomorphize humans because they make assumptions about how similar the experience of another human is to them.

Okay. You mean universalize experience based on shared human nature, so to speak. I don't think that's a bad assumption, as long as formal similarities in 'output' [experience] don't obscure the infinite set of neural pathways which may have led there. Similarities define a species; differences define the individual.

It's an interesting thought-experiment. I agree it would initially be noisy. But 'noise' itself is informative ("there's something there -- what?"). So the nervous system, assuming it didn't go comatose from shock, might begin to talk back randomly until it decoded the unfamiliar signals, like being reborn... reconceived even? Stuff of sci-fi. :alien002:

As an example, think about language. French people think in French. They think in French. That is crazy to me -- they don't just replace the nouns and verbs with French equivalents, their entire sentence structure is different and their thoughts are consequently structured in a different order. And French isn't even that different from English. Think about the Asian languages! It is likely that the conscious experience of a Japanese man is very different than mine. Similar in some ways, yes, but not entirely the same either.

It isn't that computationalists assume too much. We assume only what is valid. Everyone else takes it too far in the other direction.

That's certainly the key to learning another language: semantics emerge from syntax. One reason these consciousness debates drag on and on could be that translating the semantics of one view into the syntax of the other creates a lot of noise, and random reinterpretation. Stuff of philosophy. :o
 
OK, I take back the gibe on your understanding of anthropomorphism. :o

I hope you are wrong about our differences outweighing our similarities, although I'd agree each individual's neural configurations are wildly dissimilar.

If the differences are as spectacular as you suggest, aren't the challenges of setting up a conscious AI even more problematic?

A biologic brain in a cyborg body seems to me a more promising approach.

The relevant differences between my brain and yours are orders of magnitude larger than the differences between, say, a Nintendo Wii and an Xbox 360 -- and those have very different hardware configurations, processors, software, etc.

That is the nature of neural networks, especially biological ones. Everything your brain has learned, your entire life, is represented as differences between synapse excitability. Add to that the fact that the mere existence of synapses varies wildly between people, and you end up with two brains that might look the same even under a coarse microscope behaving very differently when neural impulses flow through them.
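
A crude way to see the point, as a sketch (a single toy neuron, with all the weights invented): identical wiring, different synaptic weights, noticeably different responses to the same stimulus.

```python
import math

# Two toy "brains" with identical topology (2 inputs -> 1 neuron)
# but different synaptic weights. Same stimulus, different response:
# the learned weights, not the wiring diagram, carry the behavior.

def neuron(weights, bias, inputs):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-activation))   # sigmoid firing rate

stimulus = [0.5, -1.0]
brain_a = neuron([2.0, -1.5], 0.1, stimulus)   # one life's "learning"
brain_b = neuron([-0.7, 3.0], -0.4, stimulus)  # another's

print(f"brain A fires at {brain_a:.3f}, brain B at {brain_b:.3f}")
```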
 
On the contrary, you're the one making the unsupported equivalence between self-reference and consciousness.

I've asked you on several occasions to support that idea, and you've never done it.

You think that I know nothing about consciousness because I disagree with your unsupported assertions. Meanwhile, you are content to be generally ignorant of research on the only thing in the universe which we can be sure actually accomplishes the task -- the brain.

No, I think you know nothing about consciousness because all you ever do -- ever -- is ask a question, respond to any answers with the sentiment that they are inadequate or invalid, and repeat the question.

Over

and

over

and

over.
 