
The Hard Problem of Gravity

And it is just an assumption.


Perhaps but why re-use a label that already has an established meaning in the field?



As I said, you can of course re-define what HPC means, but you are going to find yourself involved in more than a few misunderstandings in a thread like this, all of which could be avoided by not trying to redefine the term HPC.

Do you really doubt that you're conscious, Darat? :rolleyes: Put away the devil's advocate hat and answer honestly now...
 
AkuManiMani said:
When I say "awareness" I'm referring to the state of being aware. The whole issue of the HPC [which I will henceforth call the EMA] is the attempt to define just what exactly it means to be 'aware'. Just what exactly is this state, and why should it be at all?

Which means you have constructed a kind of conceptual map. And that means HPC-EMA is demanding a whole collection of them, one on top of the other. In effect, by piling them on top of one another, you don't get to the semantic endpoint; it's just an ongoing movement of syntax.

I think that the study of consciousness, like every other intellectual pursuit, has no ultimate end-point. In a sense, this is a good thing, because there will always be more for future generations to investigate and learn. The funny thing about studying consciousness in particular is that it's like putting a mirror up to a mirror -- which is probably why it will remain a 'hard problem' in some regard no matter how much we learn about it :boggled:

Anyways, I'm not claiming THE answer to the EMA; what I'm proposing is a possible next step in understanding consciousness :)

AkuManiMani said:
I'd say that 'awareness' isn't so much the map but the particular medium the map is on/in. It's my position that the state of being aware isn't just a process but should be considered a class of thing IAOI. A specific experience would be a kind of disturbance or wrinkle of awareness.

For the first sentence, see above. The two latter sentences are assertions which you can surely make, but is there any good reason to postulate them in the first place? What is the basis for the postulation?

What's the basis?

Almost two decades of reading, sleeping, and reflecting on a wide range of scientific works [I've been fascinated by science and philosophy since I was little :D] related directly and indirectly to this subject. It's a general intuition that's been growing on me since high school. Needless to say, my basis for it is rather broad, and it would probably take an entire book to properly lay it out. Obviously, I'm not going to do that here, but I did summarize a bit of it in this post.

One of the reasons [which I don't explicitly mention in the link above] is that the 'consciousness as field' [CaF] postulate seems the best way to deal with the binding problem.

Another reason is that it puts forward a more specific physical basis for investigating consciousness, rather than the vague "well, it's got something to do with neurotransmitters, and firing neurons, and such".

Also, the CaF dovetails quite nicely with another hypothesis that mainly addresses how memories may be stored across the brain. It's called the holonomic brain theory.

Those, among other reasons, are my basis for strongly suspecting CaF.

AkuManiMani said:
In short, I'm saying that there must be a physics to awareness and qualitative experience that we simply haven't developed yet.

Yes, I understand your position, but what I don't understand is the 'necessity' of it. It could also be that you're simply drowning in more conceptualizations, with no end in sight.

How would that be different from anything else in science? Is that necessarily a bad thing? I'm just proposing what could be the next of a very long line of theories addressing the scientific and philosophical questions of consciousness.



AkuManiMani said:
Ironically, that's what I'm charging some of the posters here with. They are equating representations of conscious experience with conscious experience IAOI.

Well, substitute "money" for "awareness" or "consciousness" and you understand where I'm coming from. In a way it's like the banana problem: whatever you might throw at me as an explanation, you haven't explained "banana-ness". For surely it is different from "apple-ness"; even though we could agree on the formal taxonomy of both being fruits, ultimately that's just an agreed-upon convention. :p

What seems increasingly apparent to me is that no real truth is to be found (as in an ultimate explanation). What is needed is a useful explanatory system that's internally coherent; that's all we're going to have. That seems to me to be where both Pixy and RD are coming from.

Well, to put this in light of the OP, if everyone just remained content with Newton's definition of gravity we would have never learned about General Relativity. All of our theories are inherently incomplete and tentative. This isn't necessarily a bad thing. It only becomes problematic when we become so trapped by them we blind ourselves to gaining an even better understanding of the world.

I'm just trying to help brainstorm and inspire folks to look at this from a different perspective and possibly discover something new :)
 
I'm curious about this one.

How is a video game a computer?
The game software itself isn't - unless you're playing The Incredible Machine or Conway's Life or something. Video game hardware is, of course. Which is what I was referring to - the original context was a list of hardware.
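The Conway's Life example makes the point concretely: the game's rules are themselves a computational substrate (Life is known to be Turing-complete). A minimal sketch of one generation, assuming live cells are stored as a set of (x, y) tuples (my own illustrative representation):

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Life generation."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard rules: a cell is live next tick if it has exactly 3 live
    # neighbours, or has 2 live neighbours and is currently alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2 -- the rules do all the work.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

Nothing in the host machine "knows" about gliders or oscillators; the patterns that compute live entirely at the level of the rules.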
 
AkuManiMani said:
A computer doesn't know to start a scan at a particular time any more than a wind-up clock knows to cuckoo at a certain time. It's deterministically set by a mechanical architecture designed by conscious entities to produce specific outcomes.

There is a difference between a mechanical chain of events and a conscious initiation of an event. This, among other things, leads me to assume that there must be a basic physical principle behind consciousness rather than just a specific functional architecture.

If you want to make this argument you're going to have to demonstrate that minds you consider conscious operate in a non-deterministic manner beyond that of all other objects and entities (eta: that means no randomness or QM, as those apply to computers and cuckoo-clocks too.) Otherwise you have no basis to differentiate as you are trying to do.


I'd go further and say that such non-deterministic behavior is one feature which distinguishes the behavior of life in general from that of mechanical constructs like clocks. I suspect the reason for this is that organisms scale up QM-level effects to the macroscopic scale.

Organisms are emergent entities that organize themselves upward from the quantum scale. Their gross form and functions depend on their quantum-level interactions and organization. This is why predicting the exact physical behavior of a single organism [i.e. where it will be standing at any given time, or what it will decide to do next] cannot be done using macroscale physical laws like Newton's laws of motion. The best you can do with deterministic laws is predict things like how fast the critter will fall or how much energy would be required to move it a certain distance. Predicting things like where it will run, how fast, or how far can only be expressed in non-deterministic terms.

Constructs like clocks, metronomes, cars, etc. are downward impositions of organization on inanimate, macroscopic chunks of matter. Their behaviors will inevitably conform to the deterministic-like rules of the macro-scale. Their basic functions are fully describable on the macroscale, and so lend themselves to prediction using deterministic theories.

Good luck.

None required, but thanks anyway :p
 
Uhm... those aren't even hard to understand, much less demonstrative of things "beyond the limitations of human logic". They're no more beyond human logic than NOT gates with their output hooked to their input are beyond the laws of physics.

In fact, they're very simple systems. You simply make a statement about itself, and make it affirm its negation. It's no more surprising than if you asked me to pick noughts or crosses, and told me what I was going to pick, and I never pick it. I simply make my choice dependent on yours such that you can never be right. The only difference is that you use the "magical" (but not really all that shocking) powers of self-reference to make the statement itself pick the opposite result from its assigned truth value.
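Both moves -- the "pick the opposite" game and the NOT gate wired to its own output -- are trivial to mechanize. A toy sketch (the function name is mine, purely for illustration):

```python
def contrarian(your_prediction: bool) -> bool:
    """Return the opposite of whatever was predicted, so the
    prediction is wrong by construction."""
    return not your_prediction

# Whatever truth value you assign, the "statement" disagrees:
assert contrarian(True) is False
assert contrarian(False) is True

# The NOT gate with output fed back to input is the same move in
# hardware terms: iterated negation oscillates and never settles.
state = True
states = []
for _ in range(4):
    state = not state
    states.append(state)
assert states == [False, True, False, True]
```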

Slightly confusing to some, but not even close to beyond human reasoning. I don't have much of an issue with Epimenides, and I'm pretty sure it's too mundane to qualify me for a free cup of coffee, much less some Nobel prize.

I now return you to your thread.

Thank you for responding bgggs, I consider you a very intelligent forum member and always learn from what you say.

That being said, I disagree here. Your response is pretty much exactly how drkitten responded a year or two ago when I brought up the Epimenides paradox, and I disagreed with him too.

I claim that you -- and all other humans -- do not understand such statements. I think we can parse them, and think about them, but I don't think we ever reach the same understanding we have of other statements. Our mental algorithm never halts on such an input string, so to speak.

Now if I remember correctly, drkitten suggested that unlike a Gödel sentence, the Epimenides statement isn't well-formed. But I disagree -- or at least, I can't for the life of me see why it isn't well-formed (given the fairly informal rules of human language, as opposed to a real formal system).

And we can say the same kinds of things about well-formed human language sentences that we can about the formulae of formal languages. In particular, they must be derived from axioms and they can lead to other formulae via the application of rules of inference.

Except in the case of ... well, you know all this already. So I guess what I would like you to tell me is why the Epimenides statement is not a Gödel sentence of human language and hence human thought.
 
Do you really doubt that you're conscious, Darat? :rolleyes: Put away the devil's advocate hat and answer honestly now...

Do you really not understand the difference between knowing you are conscious and knowing it seems like you are conscious?
 
Do you really doubt that you're conscious, Darat? :rolleyes: Put away the devil's advocate hat and answer honestly now...
Just as we readily admit that we cannot know that another person is conscious, we must admit that we cannot know if we are conscious by their definition. Same exact uncertainty. Unless you wish to say that consciousness is preponderately a set of public behaviors, then you must be as uncertain of your own consciousness as anyone else's.

So, if you are certain you are conscious, it follows that you define consciousness as I do (and, if I am not mistaken, as Darat does). Turns out that qualia are irrelevant to that definition, and that we p-zombies are every bit as conscious as those hypothetical entities known as real people.
 
Do you really not understand the difference between knowing you are conscious and knowing it seems like you are conscious?

Nothing can "seem like" anything unless you are conscious. Without consciousness -- There...Is...No...'Seem'.

Just as we readily admit that we cannot know that another person is conscious, we must admit that we cannot know if we are conscious by their definition. Same exact uncertainty. Unless you wish to say that consciousness is preponderately a set of public behaviors, then you must be as uncertain of your own consciousness as anyone else's.

So, if you are certain you are conscious, it follows that you define consciousness as I do (and, if I am not mistaken, as Darat does). Turns out that qualia are irrelevant to that definition, and that we p-zombies are every bit as conscious as those hypothetical entities known as real people.

Darat wasn't questioning the certainty of determining consciousness in others. He was questioning the possibility of being able to be certain of consciousness in one's self.

The first proposition is an epistemologically reasonable doubt; the second is sheer nonsense.
 
The game software itself isn't - unless you're playing The Incredible Machine or Conway's Life or something. Video game hardware is, of course. Which is what I was referring to - the original context was a list of hardware.

The software of today is a computer in and of itself as well. Almost all useful software involves so much abstraction that there is computation occurring at multiple levels the hardware doesn't even know about.

If you were a CPU, would you be able to figure out what is going on in and between complex data structures -- many too big to even fit inside your entire set of caches -- just by crunching 64 bits at a time? Of course not. The CPU is clueless, just like neurons are clueless. Software is where the real action is. Kind of reminds me of... consciousness?
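The levels point can be made concrete with a toy interpreter: the host only ever executes small, local steps, while the program being interpreted exists at a level the host has no concept of (the mini instruction set here is made up for illustration):

```python
# A toy stack-machine interpreter. The host machine only ever sees
# the small steps of the Python below; the *program* being run is
# a computation at a level the hardware knows nothing about.
def run(program):
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown op: {op}")
    return stack.pop()

# Computes (2 + 3) * 4 at the "software" level.
prog = [("push", 2), ("push", 3), ("add", None),
        ("push", 4), ("mul", None)]
assert run(prog) == 20
```

One could nest this indefinitely -- an interpreter running an interpreter -- which is exactly the Turing-machine-in-a-game situation mentioned elsewhere in the thread.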
 
Just as we readily admit that we cannot know that another person is conscious, we must admit that we cannot know if we are conscious by their definition. Same exact uncertainty. Unless you wish to say that consciousness is preponderately a set of public behaviors, then you must be as uncertain of your own consciousness as anyone else's.

So, if you are certain you are conscious, it follows that you define consciousness as I do (and, if I am not mistaken, as Darat does). Turns out that qualia are irrelevant to that definition, and that we p-zombies are every bit as conscious as those hypothetical entities known as real people.

That was an excellent post Mercutio. I finally understand what you are getting at with this whole public behavior thing. Thank you very much for genuinely teaching me something today!
 
I think someone built a Turing Machine in Little Big Planet. :)
Exactly!

And I seem to remember people building an entire game within Second Life.

Of course, these are just the most obvious examples. The truth is, almost all useful software is well above the hardware's head. In fact, I would say all software, but I am not looking to argue.
 
Except in the case of ... well, you know all this already. So I guess what I would like you to tell me is why the Epimenides statement is not a Gödel sentence of human language and hence human thought.

I was going to explain it the way I learned it -- that it isn't a real paradox -- but wiki has a clear answer, so I'll just copy that:

For a better understanding of the liar paradox, it is useful to write it down in a more formal way. If "this statement is false" is denoted by A and its truth value is being sought, it is necessary to find a condition that restricts the choice of possible truth values of A. Because A is self-referential it is possible to give the condition by an equation.
If some statement, B, is assumed to be false, one writes B = false. The statement (C) that the statement B is false would be written as C = "B = false". Now, the liar paradox can be expressed as the statement A, that A is false:
A = "A = false"
This is an equation from which the truth value of A = "this statement is false" could hopefully be obtained. In the boolean domain "A = false" is equivalent to not A, and therefore the equation is not solvable. This is the motivation for the reinterpretation of A. The simplest logical approach to making the equation solvable is the dialetheistic approach, in which case the solution is A being both "true" and "false". Other resolutions mostly involve some modification of the equation; e.g. A. N. Prior claims that the equation should be A = "A = false and A = true", and therefore A is false.
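The unsolvability claim above is easy to check mechanically (my own sketch, not part of the quoted article): in the two-element boolean domain, the equation A = not A has no fixed point, and iterating it as an update rule just oscillates.

```python
# Brute-force check that A = (A == False), i.e. A = not A,
# has no solution in the boolean domain.
solutions = [a for a in (True, False) if a == (not a)]
assert solutions == []  # the liar equation has no classical truth value

# Treating the equation as an update rule never converges either --
# it oscillates forever, the "never halts" behavior in miniature.
a = True
trace = []
for _ in range(4):
    a = not a
    trace.append(a)
assert trace == [False, True, False, True]
```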
 
The software of today is a computer in and of itself as well. Almost all useful software involves so much abstraction that there is computation occuring at multiple levels the hardware doesn't even know about.

I agree.

If you were a CPU, would you be able to figure out what is going on in and between complex data structures -- many too big to even fit inside your entire set of caches -- just by crunching 64 bits at a time? Of course not. The CPU is clueless, just like neurons are clueless. Software is where the real action is. Kind of reminds me of... consciousness?

Close, but not quite.

You wanted me to suggest to you specific approaches to the problem, right? I've posted some of them here, here, and here.

If you've objections to any particular point please state them :)
 
But that's just begging the question. Why do you think there is a seem?

Well, being a non-aku-zombie, I find that things have the quality of seeming particular ways to me. Sadly, I would not expect an aku-zombie, such as yourself, to comprehend such things :rolleyes:
 
Just as we readily admit that we cannot know that another person is conscious, we must admit that we cannot know if we are conscious by their definition.

1. That doesn't follow at all. Just because I am unsure about others' subjective experiences (or lack thereof) does not mean I am unsure about my own subjective experiences. Anyone who has ever stubbed their toe knows what I mean. It is not just that you injured your foot; stubbing your toe hurts, it feels bad, and any theory of consciousness had better take that into account.

2. Who's the radical skeptic around here? Do you really walk around doubting that you (and other people) are conscious? It's one thing to doubt that physical matter exists; it's something else entirely to deny conscious experience. Do you honestly think it's possible you're a zombie?

Same exact uncertainty. Unless you wish to say that consciousness is preponderately a set of public behaviors, then you must be as uncertain of your own consciousness as anyone else's.

I must? To deny my own consciousness is to assert it -- I cannot understand the denial of consciousness without being conscious to begin with.

So, if you are certain you are conscious, it follows that you define consciousness as I do (and, if I am not mistaken, as Darat does).

Why does it follow? How do you define consciousness? Define "sad" for me. We all feel sad sometimes, and if someone tells us they are sad we know what they mean. So what does it mean to "feel sad"?

Turns out that qualia are irrelevant to that definition, and that we p-zombies are every bit as conscious as those hypothetical entities known as real people.

If qualia are irrelevant, you won't mind slamming your fingers in the door, right? It doesn't really hurt; that's an archaic term dependent on qualia. You can redefine things all you want to try to remove subjective experience, but at the end of the day, you're going to take the novocaine shot at the dentist. Not because you have some abstract notion of blocking mental processes telling you a hole is being drilled in your tooth, but because it feels bad.
 
