• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Poll about realism

What is your position on realism?

  • Direct Realist

    Votes: 25 58.1%
  • Indirect or Representational Realist

    Votes: 10 23.3%
  • Non-Realist

    Votes: 2 4.7%
  • Don't know / none of the above

    Votes: 6 14.0%

  • Total voters
    43
JustGeoff and ZD,

I suggest the philosophies of Dr. Seuss.

What do you two think of the idea that P2 is invalid, since both the observer and the brain are in fact looking at the same thing - energy being manipulated by the program - from different perspectives?

Or the fact that direct realism holds, because neither are gaining any information about the program that the other can't learn through like experiences? (I had to study this one - Wikipedia got me all confused on the meanings of direct/indirect realism.)

Or the fact that the machine isn't capable of controlling the environment any more than the brain is, otherwise P4 would be invalid?
 
Or the fact that the machine isn't capable of controlling the environment any more than the brain is, otherwise P4 would be invalid?
The brain does help to regulate the environment within the body, however. Otherwise if it didn't, there would be no use for it.
 
Well at least we know where your bias is coming from. Though I gotta be honest... took me looking up Sussex to find out much about it. Never heard of it before now. I guess 'defining the times' doesn't extend much outside of philosophy.

Sussex isn't known for its philosophy department. We have one big name, the person who prompted me to start this thread, and that is all. But it was and still is a centre of excellence for AI and computer science, and for that reason it became one of the universities at which cognitive science emerged in the first place. Unless you happen to be interested in the history of cognitive science, there's no reason you should have heard of Sussex University. We are only ranked 9th in the UK.

Anyway Sussex definitely isn't a forerunner in A.I. research now is it?

Yes, that is exactly what it is.

http://en.wikipedia.org/wiki/University_of_Sussex

The university is also noted for its work in molecular sciences...and for its work in computing and cognitive science, particularly Artificial Intelligence and human-computer interaction.

In addition to the seven current schools, the university houses several centres of excellence including...the Centre for Computational Neuroscience and Robotics....

http://www.cogs.susx.ac.uk/ccnr/

:)
 
Hello cpolk

What do you two think of the idea that P2 is invalid, since both the observer and the brain are in fact looking at the same thing - energy being manipulated by the program - from different perspectives?

P2: No X is actually present.

The "different perspective" is all that is required for P" to be true, IMO. You are equating a real physical stick with a pattern of electric charge in a RAM chip. The stick the BIV percieves is no more real than Sherlock Holmes is to you and me. So I think P2 is quite hard to challenge for a direct realist.

Or the fact that direct realism holds, because neither are gaining any information about the program that the other can't learn through like experiences? (I had to study this one - Wikipedia got me all confused on the meanings of direct/indirect realism.)

Where did you get this from? Is it a defence that somebody else has used?

Or the fact that the machine isn't capable of controlling the environment any more than the brain is, otherwise P4 would be invalid?

I don't understand this either. The computer has full control over the environment in so much as what would appear like random quantum events to us would be predetermined in the case of the BIV by the computer. This is part of my own objection to the argument. In terms of QM, the objects perceived by the BIV and by the normal subject are very different. For us, many different metaphysical interpretations of QM are possible. For the BIV, none are possible, because there is no wave/particle duality and no "collapse of the wavefunction".

My own position, which I haven't really explained up till now, is based on the claim that during veridical perception the laws of quantum mechanics are also required at the act of observation, and it is these laws that ensure that perception is direct. Those laws do not act in the case of the BIV. The BIV can inhabit a world where the laws of physics could be altered at the whim of the computer. We do not. In other words, P4 is invalid precisely because the computer can feed the BIV signals which are not being properly "orchestrated" by the laws of QM by objects in the external world. The term "orchestrated" is lifted from the Penrose-Hameroff "orch-OR" model of consciousness, something like which is deemed necessary if you accept Penrose's arguments about Gödel's theorem and human cognition. I have no idea whether this argument flies, but I just handed in an essay to Prof. Smith. I am sure there is a major hole in it. I'm not expecting a 1st. :)

For a detailed summary of A.D.Smith's defence of direct realism, which I do not pretend to understand, go here:

http://www.people.fas.harvard.edu/~ssiegel/papers/DRPC.htm
 
The problem with your position is that it ignores the fact that QM/physical laws still hold at the junction of sensory input and brain. Further, the brain is still detecting something real - even if what that thing is appears to be something different.

This concept is only marginally different from the idea of putting a person in a room full of holograms (and not allowing them to touch the holograms). The person is still seeing whatever is there - so what is there is real. It's just not what he thinks it is.

Same thing if we bypass the organs of sense and input signals directly. The brain is still interpreting signals directly - and as such developing ideas about what is 'real' in response to those signals. It would be a different reality but a reality nonetheless. Yes the computer could alter physical laws at any time; all that means is that in this reality physical laws are not immutable. But remember again - this is only the illusion of physical laws.

As to Sussex... well forgive me for being a snob :D but 'philosophy of mind' is a poor avenue for A.I. research. I was thinking of universities where things actually get accomplished in A.I... apparently with Sussex ruled by nay-sayers claiming some sort of vague philosophical high ground it'll never be one of those. Oh I'm sure it's a fine school but philosophical ponderings should never trump actual research.
 
P2: No X is actually present.

But it is actually present. We're seeing it in the form of sequenced programming. It contains the exact same information, and is therefore the exact same thing. The fact that we're perceiving it from outside the program and the BIV is experiencing it from inside the program makes no more difference than whether I am holding a real stick blindfolded and you are looking at a real stick without touching it. We are both experiencing the same stick, and neither of us learns anything that the other can't through like interaction; therefore, the same goes with the program that contains all of the information of the stick.

The "different perspective" is all that is required for P" to be true, IMO. You are equating a real physical stick with a pattern of electric charge in a RAM chip. The stick the BIV percieves is no more real than Sherlock Holmes is to you and me. So I think P2 is quite hard to challenge for a direct realist.

No, I am equating all of the information contained in a real physical stick, with the section of programming that contains all of the information of a real physical stick. Where "PS" equals all of the possible information of a real stick:

Brain's PS=Program's PS
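The equation can be illustrated with a toy sketch (my own illustration, not from the thread): if a "stick" is identified with the set of facts fully describing it, then two physically different encodings of those facts carry identical information. The `stick_facts` dictionary and its field names are invented for the example.

```python
import json

# Hypothetical facts standing in for "all of the possible information of a
# real stick" (names and values invented for illustration).
stick_facts = {"length_cm": 30.0, "mass_g": 12.5, "material": "oak"}

# Encoding 1: an in-memory structure (the stick as the brain's world has it).
brain_ps = dict(stick_facts)

# Encoding 2: a serialized byte string (the stick as the program stores it).
program_ps = json.dumps(stick_facts, sort_keys=True).encode("utf-8")

# The substrates differ (a RAM structure vs. a byte string), but decoding
# the program's encoding recovers exactly the same information.
recovered = json.loads(program_ps.decode("utf-8"))
print("Brain's PS == Program's PS:", recovered == brain_ps)
```

On this picture the two representations differ only in perspective and substrate, which is the equivalence the post is asserting; whether informational identity entails being "the exact same thing" is precisely what gets disputed later in the thread.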

Where did you get this from? Is it a defence that somebody else has used?

After doing a lot of reading, I found that by "external world", direct realism refers to the entire body, not just the brain alone. Since the BIV is not a whole human, and the "body" and sensory organs are simulated by the program, the external world is also the universe created by the program.

Since the program of the stick (which contains all of the possible information of the stick) is represented as a physical object in that external world, direct realism still holds.


I don't understand this either. The computer has full control over the environment in so much as what would appear like random quantum events to us would be predetermined in the case of the BIV by the computer.

This can't be, though. In order for P4 to be true, the world has to be perfectly simulated. This means that the BIV cannot break the stick, come back later, and find the stick whole again; this is not a natural representation of a real stick. So, by accepting P4, we are accepting:

1. The program must contain all aspects of a real physical stick.

2. In order to contain all aspects of a real physical stick, the program of the stick must be made up of programs that represent each atom, and each atom must therefore be made up of programs that contain information at the sub-atomic level.

3. The machine must mimic the exact physical laws that hold those subatomic particles together.

If any of this isn't true, then P4 would be invalidated, because the stick in the brain's world is not acting as a stick in the observer's world would.

Therefore, in order for P4 to be true, once the program starts, the universe cannot be changed in any manner, since to do so would not naturally follow the laws of physics. The machine then has no more control over its own universe than the brain does, or else P4 is false.
 
It is a view of the mind which claims that thinking really consists of the brain shuffling around symbols in some sort of universal "mind language". If you are going to model thought computationally then you need such a device.
This, while correct on the surface, is a flawed argument. There is no universal mind language, because the symbolic transfer is a learned event. The brain grows and programs itself through learning. So on the surface, a computational model that seeks to replicate a brain through a priori programming will fail. A computational model based upon reverberating computation that is generated through exposure to stimuli has a very high chance of success.
The problem is that real brains do not need any such device.
That is what a brain is: an associative filter involving a trillion connections and some organically driven programming that is learned.
The sort of "research" that LOT inspires is therefore worthless.
The a priori basis is flawed; the computational model is not. I assume that you have heard of robotic insects that use fuzzy logic to learn to stand?
What one purports to learn about human cognition by using this technique is hopelessly intertwined with the false assumption of computation. For example, last year in my "cognitive modelling" class I had to produce a language-based model of the cognitive process required to solve a simple puzzle - in this case a sort of colour-based jigsaw puzzle. Being an ex-programmer I found this child's play and got 85% for the project. But the final paragraph, for which I was NOT marked down (even though my tutor was a complete computationalist dinosaur), is as follows:
And that is an a priori model. Did you write a program that learns to solve the puzzle, or did you use a priori programming?
 
Did you write a program that learns to solve the puzzle, or did you use a priori programming?

I used "a priori programming", if that's what you want to call it. Writing a program that could learned on its own to solve the puzzle would have been a complete nightmare, or downright impossible.
 
cpolk

No, I am equating all of the information contained in a real physical stick, with the section of programming that contains all of the information of a real physical stick. Where "PS" equals all of the possible information of a real stick:

Sorry, don't have time for a reply to all of your post right now, but this bit is interesting. That reduces all reality to information, leading to something that looks like a dual-aspect theory of information - a sort of neutral monism. Is this what you are suggesting?
 
I do not understand why direct realism and indirect realism are mutually exclusive.

~~ Paul

If indirect realism is true then what it means is that it is only possible to perceive real physical objects indirectly. You can never perceive them directly. If we can ever perceive objects directly then, even if we can also perceive them indirectly, direct realism is true and indirect realism is false. They are mutually exclusive by definition.
 
Geoff said:
If indirect realism is true then what it means is that it is only possible to perceive real physical objects indirectly. You can never perceive them directly. If we can ever perceive objects directly then, even if we can also perceive them indirectly, direct realism is true and indirect realism is false. They are mutually exclusive by definition.
According to your definitions:
Direct Realism:

The objects of perception during veridical experiences (i.e. not hallucinations or lucid dreams) are real physical objects which exist totally independently of mind(s).

Indirect/representational Realism:

We are not directly aware of physical objects, but we are indirectly aware of them. We are directly aware of sense-data/qualia/mental-impressions....
There is no reason why objects can't be "real physical objects" that exist independently of mind and be such that we can only experience them indirectly. In fact, I'm not even sure what it would mean to experience an object directly, unless I was that object.

Perhaps this is more a question of how the brain perceives external objects.

~~ Paul
 
According to your definitions:

There is no reason why objects can't be "real physical objects" that exist independently of mind and be such that we can only experience them indirectly.

Absolutely. But that position is called indirect realism. Indirect realism does not deny that there are "real physical objects". It just denies we perceive them directly.

In fact, I'm not even sure what it would mean to experience an object directly, unless I was that object.

That would be the experience of being an object. I feel similarly about indirect realism. I'm not sure what it is supposed to mean.

Perhaps this is more a question of how the brain perceives external objects.

No, that is a question for neuroscience alone; this is well and truly metaphysics. Indeed, "perceives" may not even be applicable, in this context, to what brains do. It is about subjective experience, and if you start talking about "brains perceiving things" then the p-zombie argument enters the frame and you end up arguing about whether there has been an assumption of physicalism. Part of what makes this debate interesting is that it sidesteps the arguments about physicalism. As you know, I am fairly certain I am no physicalist - but I am not even sure what my position on realism is. I defended direct realism in an essay, but I'm not sure I even believe it is true. There is no easy mapping between (direct realism / physicalism), (indirect realism / dualism) and (non-realism / idealism). Neither is there a straight mapping onto naturalism.
 
Hi cpolk

But it is actually present. We're seeing it in the form of sequenced programming.

The sequenced programming is all there is! There is no stick. :D

It contains the exact same information, and is therefore the exact same thing.

No, that doesn't follow. I can sympathise with a view like this because I am a neutral monist who believes that what we call the physical world exists in the form of information which manifests both as mind and matter - i.e. that both mental things and physical things are reducible to identical sets of information. However, I don't think you can use this argument to say that a physical stick is no different to a virtual stick in a machine. Even if you are a neutral monist, the information you reduce the mental and physical things to is ontologically different to the information in the computer, because the information in the computer is encoded in the physical world and the information that mind/matter reduces to isn't.

Basically, if "things as they really are" is information then I believe this has to be non-realism. We do not directly perceive information. We percieve objects. Which is why I am worried that I am not actually a direct realist.

Originally Posted by JustGeoff :
I don't understand this either. The computer has full control over the environment in so much as what would appear like random quantum events to us would be predetermined in the case of the BIV by the computer.

This can't be, though. In order for P4 to be true, the world has to be perfectly simulated. This means that the BIV cannot break the stick, come back later, and find the stick whole again; this is not a natural representation of a real stick. So, by accepting P4, we are accepting:

P4 still stands. We never witness the laws of QM in action. All we witness is the object after the collapse of the wavefunction. So even if the laws of QM are not operating in the BIV's world, the BIV knows no different. He lives in the same "billiard-ball" sort of reality that we do.

1. The program must contain all aspects of a real physical stick.

2. In order to contain all aspects of a real physical stick, the program of the stick must be made up of programs that represent each atom, and each atom must therefore be made up of programs that contain information at the sub-atomic level.

3. The machine must mimic the exact physical laws that hold those subatomic particles together.

If any of this isn't true, then P4 would be invalidated, because the stick in the brain's world is not acting as a stick in the observer's world would.

Therefore, in order for P4 to be true, once the program starts, the universe cannot be changed in any manner, since to do so would not naturally follow the laws of physics.

That only applies to classical physics and a fully deterministic universe (hard determinism, fatalism). If the laws of classical physics were to change, the BIV would be able to tell. But you do not have to change any of the laws of physics for the BIV's world to be manipulated by the computer, provided the computer always makes sure that what happens is one of the potential quantum outcomes. When we observe a photon, the laws of QM seem to imply it was potentially in many places at the same time - and only became fixed in one place when we observed it. For the BIV, the computer can preselect which outcome is going to occur before the BIV perceives the outcome.
 
Geoff said:
Absolutely. But that position is called indirect realism. Indirect realism does not deny that there are "real physical objects". It just denies we perceive them directly.
Aha.

No, that is a question for neuroscience alone; this is well and truly metaphysics.
Not according to Wikipedia:

http://en.wikipedia.org/wiki/Direct_realism

which immediately jumps into the neurophysiological aspects of the question.

Sounds like this hinges on the exact definition of indirect. Are you sure it's not meaningless wordplay?

~~ Paul
 
For about the seventh time, we are talking about brains in vats, NOT people who are drunk or have taken LSD. :(

The argument gets even more complicated if you start bringing in non-visual forms of perception. Also, this reply sounds like it is more about the argument from illusion than the argument from hallucination.

In humdrum talk we are all direct realists. Although this version of direct realism is generally referred to as "naive realism".

Do you see with your eyes? Don't you need a brain too? :D

Hi JG,

Thanks for the reply, and apologies for returning to the fray so late. Had to see a man about a pink elephant. :)

Let me say first that the thought experiment helps itself to a pretty large assumption: that we can make sense of talk about brains in vats hallucinating. The lack of detail here makes it easy to overlook obvious questions. For example, how do we get an account of what is happening to the BIV? Did the BIV start its existence disembodied? If it did, how do we get information about what has happened to it at any time? Do we know that what we are doing to it produces anything that could be called an experience? Can something disembodied and therefore lacking sense organs be said to hallucinate? Or, if it didn't start off disembodied, do we pop it back into its body so that it (now "he" again) can tell us stories about what happened to it while it was disembodied? And if we do, won't we treat its stories with the same smiling indulgence as we extend to, say, stories of dreams in which the dreamer says he witnessed his own death or saw a square circle? That is, won't we treat the stories the revenant tells not as bizarre or false reports but as pseudoreports?

The difficulty I have here is that a BIV not only isn't a person, it is so utterly unlike a person that the notion of its hallucinating is hard to understand. And that difficulty will not be made to disappear by an assumption or stipulation built into the thought experiment: assumptions have to be intelligible when they are translated into humdrum talk. Perhaps you can manage that. In that case I'll reconsider.

As to your point about hallucination and illusion, you are right to draw the distinction, but I don't think it will help. We can see that the argument is meant to drive us to sense data: just look at the conclusion! And the sense datum man tells much the same story about hallucination as about illusion: in one case the data (sort of, but not quite) go with what's there, but in the other there's nothing there at all. In fact, the sense datum man is going to tell much the same story even when we can see what's staring us in the face.

Let me go back to P1 and P3, where the notions of "being immediately aware of" and "being aware of" come in. The first point to make is that "be aware of", like "see", is a term of success: we do not say that we are aware of what isn't there or isn't so. (Therefore if it is used of, say, what we hallucinate, it's being used in a strange way that needs explaining.) The second is that we do not say that we are "immediately aware of" much at all. As far as I can see, it is used as a philosophical, not a humdrum, term. So it's pretty hard to follow without definition. But I am guessing that immediate awareness is meant to be a species of awareness: so what we aren't aware of at all, we can't be immediately aware of. The man who hallucinates purple snakes isn't immediately aware of them because they aren't there to be aware of. And that is where, I suspect, sense data come in: to provide an answer to the question "What is he immediately aware of?" But why should there be an answer to this question (as opposed to the question "What is he hallucinating?" or "What does the poor devil think he is seeing?")?

As for the point about direct realism and naive realism, I'm not convinced that those of us who speak humdrum for preference like philosophical isms of any sort.

And you are quite right about needing brains as well as eyes to see. The difference is that "You need a brain to see" is true, but "You need eyes to see" is a truism :)

Now, must see a man about a purple snake.

Regards
 
Hi JG,

how do we get an account of what is happening to the BIV?

We send in Agent Geoff.

Did the BIV start its existence disembodied?

Not quite sure why this matters. Sounds more plausible if it didn't.

If it did, how do we get information about what has happened to it at any time?

It can usually only get information through its fake world. But we can send in an Agent Geoff if we need to ask it what it is experiencing.

Do we know that what we are doing to it produces anything that could be called an experience?

We could theoretically know it has an identical brain state. If you believe that a specific brain state is always associated with a specific experience then not only can you infer it is experiencing something, but you can infer that what it is experiencing is identical to being normally aware. If you believe one brain state can have two different phenomenological "results" then you are a disjunctivist. :)

Can something disembodied and therefore lacking sense organs be said to hallucinate?

It can in the context of this argument, because the argument is historically known as "the argument from hallucination" even though it is now often accepted that the only valid case is a brain in a vat. The reason for this is that "hallucination", taken to mean something other than a BIV, is either the result of "something going wrong in the brain", meaning a different brain state, or "something being misperceived in the world", making it the argument from illusion.

Or, if it didn't start off disembodied, do we pop it back into its body so that it ( now "he" again) can tell us stories about what happened to it while it was disembodied?

We could do. Or we could send in Agent Geoff. :)

And if we do, won't we treat its stories with the same smiling indulgence as we extend to, say, stories of dreams in which the dreamer says he witnessed his own death or saw a square circle?

Why should we do that?

That is, won't we treat the stories the revenant tells not as bizarre or false reports but as pseudoreports?

Why should the BIV's account of its own experiences be disqualified as evidence?

The difficulty I have here is that a BIV not only isn't a person, it is so utterly unlike a person that the notion of its hallucinating is hard to understand. And that difficulty will not be made to disappear by an assumption or stipulation built into the thought experiment: assumptions have to be intelligible when they are translated into humdrum talk. Perhaps you can manage that. In that case I'll reconsider.

I don't see the problem. Unless the problem is the technical use of the word "hallucination" which I explained above.
 
No, that is a question for neuroscience alone; this is well and truly metaphysics.
So, what you are asking is "Ignoring everything that we actually know, do we experience things directly, indirectly, or something else?"

The answer to that question is "mu", as hammegk pointed out upthread.
 
