Materialism and Logic, mutually exclusive?

Sorry, I'm not following you.
Ok, I’ll try explaining it again. Let’s look at where this started.

Wouldn't the default position actually be not-knowing: MAY BE/MAY NOT BE possible?
For ease of reading I’ll break this up.

1.) Wouldn't the default position actually be not-knowing: may be possible?
2.) Wouldn't the default position actually be not-knowing: may not be possible?
3.) Wouldn't the default position actually be not-knowing: may be impossible?
4.) Wouldn't the default position actually be not-knowing: may not be impossible?

There is nothing wrong with 1, and ironically enough, due to our language, 1 and 3 mean exactly the same thing. May be possible or may be impossible, with may being the keyword, means that it can turn out to be either possible or impossible; it is not absolute.

In 2 we have a problem. May not be possible means it must be impossible; the not turns the statement into an absolute negative. This amounts to “not knowing proves it must be impossible”, which is an argument from ignorance. Not knowing does not logically prove impossibility.

In 4 we have a problem similar to 2. May not be impossible means it must be possible. This amounts to “not knowing proves it must be possible”. This is a proof-of-the-negative fallacy.

2 and 4 are exactly equivalent, and both are wrong. You can’t use the “not”. Not knowing can only show that it may be possible or may be impossible; it can’t show that it may not be possible or may not be impossible, as each of these constitutes a fallacy.
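One way to make that scope point precise is standard modal notation (my own gloss; the posts themselves use only plain English). Writing $\Diamond P$ for “P may be the case” and $\Box P$ for “P must be the case”, the four readings come out as:

$$
\begin{aligned}
\text{1. may be possible:} &\quad \Diamond P \\
\text{3. may be impossible:} &\quad \Diamond \neg P \\
\text{2. may not be possible:} &\quad \neg\Diamond P \equiv \Box\neg P \quad \text{(an absolute negative)} \\
\text{4. may not be impossible:} &\quad \neg\Diamond\neg P \equiv \Box P \quad \text{(an absolute positive)}
\end{aligned}
$$

Readings 1 and 3 can both hold at once, since each merely denies certainty; readings 2 and 4 each assert a certainty, which is exactly what not-knowing cannot deliver.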
 
Gotcha.

Prove that it is impossible for machines to ever have the ability to spontaneously visualize, independently problem solve, invent.
Can't, of course. I thought we were trying to be empirical here.
 
We are, which is why I think your response to RandFan’s question, while perhaps being technically correct for the actual question asked, did not address the point he was attempting to make. His question was worded poorly. ST, on the other hand, appears to be arguing that he can prove that statement.
 
Thanks for explaining these things to me. I have no formal background in any of this, though I find it fascinating.

Looking forward to the next installment.
 
stillthinkin said:
chriswl said:
stillthinkin said:
If machines can do logic, but can't make mistakes - what would we mean by a "bug"?
A failure of a specific, physical machine to correctly perform in accordance with its specification. The machine just does what it does, what it has to do according to the laws of physics. But it's not doing what we (wrongly) expected it to do. The mistake is ours.
If we make a mistake while making a machine, but the mistake remains ours... then how is it that when we don't make a mistake, the behaviour of the mechanism becomes its own? If the mistake is ours and not the machine's, then how is the logic we put into the machine an activity of the machine? Machines do what they have to do, according to the laws of physics - they don't make logical inferences any more than they make mistakes.
Just as "the mistake is ours", as you say, so is the logic ours. Any attempt to say that the machine does logic, but can't make mistakes, is special pleading. Even if the machine makes the same mistake a million times, because of one line of wrong code - the mistake is that of the programmer, and it is only one mistake.
chriswl, you have responded to the first half of post 409, but not to this second half. Anything to say?
 
No human will ever know the answer to that

Unless we can demonstrate what it is which causes the feeling of consciousness. But I see your point. We (as in, you and other members of the forum) have bashed that topic to death, so I understand where you stand.

I can't even 'prove' you think, although I agree it's 'possible'...

Well, that's ok then. Move along. :D
 
Fine. I will after my exam today.

These findings are discussed in support of a state theory of hypnosis in which the basic changes in phenomenal experience produced by hypnotic induction reflect, at least in part, the modulation of activity within brain areas critically involved in the regulation of consciousness.

Emphasis mine. Source.

Overall, results demonstrate that individuals with moderate-to-severe [Traumatic Brain Injury] exhibit [Working Memory] deficits that are associated with dysfunction within a distributed network of brain regions that support verbally mediated [Working Memory].

Emphasis mine. Square brackets expand abbreviations from the original quote that would be unclear in this context. Source.

We propose a conceptual model of the system for visual guidance of hand action including parietal hand manipulation neurons.

Source.

...(1) there is a clear tendency to consider consciousness as a scientific object; (2) consistent subjective and objective descriptions of consciousness are possible; an intentional-modeling structure accounts for its main features; (3) from the evolutionary biology standpoint, conscious cognitive activities, as based on models of the self, the world and the alter-ego, have a functional value; (4) the material basis of consciousness can be clarified without recourse to new properties of the matter or to quantum physics. Current neurobiology, based on classical macrophysics, appears able to handle the problem. In this scope, the neurobiology of sleep-wakefulness and attention, and neuropsychology, have already achieved substantial advances.

Source.

Nonetheless, by combining cognitive and neurobiological methods, it is possible to approach consciousness, to describe its cognitive nature, its behavioural correlates, its possible evolutionary origin and functional role; last but not least, it is possible to investigate its neuroanatomical and neurophysiological underpinnings.

Source.

We suggest that our moral frames of mind emerge from our primate prosocial capacities, transfigured and valenced by our symbolic languages, cultures, and religions.

Source.

This text presents developmental neurobiological findings and insights, and developmental psychological and psychoanalytic studies of infants.

Not peer reviewed, but still interesting. Source.

In this review, I will discuss the brain sites in which several addictive drugs trigger their habit-forming actions and the anatomical circuitry connecting these sites.

Unfortunately only available through Google cache. A search of the journal would yield the article, however. Source.

These data and concepts suggest that the biochemical and anatomical substrates underlying the affective disorders evolve over time as a function of recurrences, as does pharmacological responsivity. This formulation highlights the critical importance of early intervention in the illness in order to prevent malignant transformation to rapid cycling, spontaneous episodes, and refractoriness to drug treatment.

Regarding our understanding of brain biology, not directly related to neurobiology. Source.

I could carry on, but I'm bored now.
 
And once we understand the brain, as we eventually will, we will come to understand how to model the exact same processes in machines of our own devising.

This argument is ages old, though, and every time someone sets a benchmark for what a machine can never do, science comes along and makes a machine that does.

They used to say a machine would never walk bipedally. We have robots that walk, jump, and climb stairs now. They used to say machines were slow and inefficient at basic calculations and would never replace human thinking. We've shown that to be oh-so-wrong. They used to say machines couldn't play chess, or tie shoes, or drive cars, and on and on and on...

So when some ignoramus says machines will never be able to 'think, visualize, problem solve, invent...', the only proper answer would be, simply, to say, "We'll see." Or, of course, you can ask why - what is it about all of the above which prevents mechanical replication and modelling? As far as we know, there is NO reason - other than complexity (apologies to ST, but that's the way it is) - that we can't make a mechanistic device which operates exactly like a human brain and would, therefore, do all the wonderful, terrible, and useless things a human brain does. Just like there's absolutely no reason we can't make a perfect computer simulation of the city of New York, down to the last piece of trash and the last dirty manhole cover - other than complexity.

The only - ONLY - way to prove monism wrong is to show solid, valid evidence that dualism is true, which would involve demonstrating, without doubt, the existence of both physical and non-physical 'stuff'.

As for whether we're talking materialism or idealism, I'm not sure anyone will ever come up with a good way to tell them apart, and determine the truth-value of either.
 
chriswl, you have responded to the first half of post 409, but not to this second half. Anything to say?
I don't really understand what point you were making.

You have redefined "logical inference" so that it doesn't mean merely mechanically producing an answer from some inputs according to some pre-specified logical function. For you, logical inference is the rather mysterious process by which these logical functions are created in the first place.
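(For concreteness, a "logical function" in that first, purely mechanical sense might be nothing more than this; the example is my own, just to pin down the term.)

```python
# A pre-specified logical function: a fixed mapping from inputs to an
# output, applied by rote with no understanding required.

def implies(p, q):
    return (not p) or q   # the pre-specified rule for material implication

print(implies(True, False))   # False, by rote application of the rule
print(implies(False, True))   # True, same rote application
```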

That's not to say that these logical functions cannot sometimes themselves be produced by fairly dumb, mechanical processes. A programmer who is simply translating a specification into code may be acting in this mechanical fashion. He may make errors but these are exactly the same kind of errors (that you are claiming are not real errors) that a computer would make.

But somewhere, if we trace back this chain of mechanical rule-following, we will arrive at one of your moments of "logical inference", a moment where we have some kind of direct intuition of truth.

So I would now ask you, what do you mean by mistakes or errors? By what standards can what you call "logical inference" ever be judged to be wrong?
 
So when some ignoramus says machines will never be able to 'think, visualize, problem solve, invent...', the only proper answer would be, simply, to say, "We'll see." Or, of course, you can ask why - what is it about all of the above which prevents mechanical replication and modelling? As far as we know, there is NO reason - other than complexity (apologies to ST, but that's the way it is) - that we can't make a mechanistic device which operates exactly like a human brain and would, therefore, do all the wonderful, terrible, and useless things a human brain does. Just like there's absolutely no reason we can't make a perfect computer simulation of the city of New York, down to the last piece of trash and the last dirty manhole cover - other than complexity.
Just like there's absolutely no reason I can't tap dance the 1812 Overture on that giant marimba suspended from twenty-three hot-air balloons in my back yard - other than not having any of that stuff, yet.
 
Yeah, that’s correct. Well, it may be physically impossible to produce every single note of the 1812 Overture by tapping your feet alone, but you could probably get close enough that people could tell what you’re playing. In any case, the statement doesn’t mean anything; it is rhetorical and doesn’t advance or support any argument.
 
Then he asked the wrong question (for his purposes), originally. But his was the question I answered, and appropriately.
That's all well and good but it doesn't change my point.

In this act of correcting himself, then, RandFan informed me I had not demonstrated that machines will not ever do what humans do with my example of Tesla's visualization and invention of his alternating-current motor.
You miss the point. Sadly you did not read my post or respond to it. You've latched on to this little point of yours that you feel is significant when it is not.

RandFan's original question had nothing to do with a demonstration of how machines will not ever do what humans can. I never would have answered a question concerning the possibility or impossibility of machines someday doing what humans do. My example of Tesla was not meant to answer the question of whether machines will someday do what humans do. Yet my quite appropriate answer to the simple question of what it is that humans do that machines don't was not accepted, as I was told it didn't demonstrate how machines will never do what humans do. As though it were meant to.
Not relevant to the discussion. I worded my question poorly and have corrected it. You are now just trying to take rhetorical advantage of it.

This is the context in which I asked RandFan if he now wanted me to explain why no machine will ever do what humans do (a knock-off of Tesla's performance). For the first time I was actually addressing the question of the possibility/impossibility of future machinery doing what humans do. A question I found completely meaningless.
That's nice. Not relevant but whatever. I dealt with the question. You chose not to respond.

I responded to a question I found completely meaningless by asking the (logical) question: shouldn't the burden of proof be on your proposed idea (that it is not impossible that a machine will someday do what humans do)?
I am no longer making this claim so no. It's been dealt with. That you choose to ignore that fact is your problem.
 
Please just read my answer to I less than three logic above, RandFan.
:mad: Had you responded in a sincere way we could have possibly resolved this. Your response to I < 3 does not deal with the substance of my post. I can only say that I'm rather disappointed.
 
I answered the question. I would use the phrase, and I would know I was using it anthropomorphically.
Would it convey sufficient information for another to understand the problem? What non-anthropomorphic term could you replace it with?

I see: "the car needs gas in order to keep running." Can you tell me why your car needs to run? Perhaps it needs to go somewhere?
No, it just needs gas to run to fulfill whatever purpose we have for it. And why do humans "need" things? We are just biological machines that have been programmed for survival. If I programmed a vehicle to deliver materials from point A to point B, then to fulfill its purpose it would need to get energy. How are human needs different? We experience pain when we need energy and we experience pleasure when we get it.
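A minimal sketch of that analogy (the class and numbers below are my own invention, purely illustrative): a machine whose programmed purpose gives it something that functions like a need.

```python
# A hypothetical delivery machine with a programmed "need" for energy.
# Nothing here is from the thread; it only illustrates the analogy above.

class DeliveryBot:
    def __init__(self, battery=100):
        self.battery = battery            # percent charge

    def needs_energy(self):
        return self.battery < 30          # the machine's "hunger" signal

    def deliver(self):
        if self.needs_energy():
            self.recharge()               # satisfy the need first
        self.battery -= 30                # a trip from A to B costs energy
        return "package delivered"

    def recharge(self):
        self.battery = 100                # satisfying the need

bot = DeliveryBot(battery=40)
print(bot.deliver())   # delivers; battery drops to 10, so a "need" arises
print(bot.deliver())   # recharges first, then delivers again
```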

Your argument is fallacious. It simply states that there are these human qualities that machines are incapable of experiencing and therefore we are not machines. It's begging the question.
 
Taffer et al., we all acknowledge that the nervous system is a necessary condition for consciousness, sentience, human living, etc. This was discussed back in post 58. The doubt is around whether the nervous system, as a purely material thing, is sufficient to explain consciousness - or in the case of this thread, logic.
 
Two very different questions with two very different answers. Can the material world explain consciousness - maybe. Can it explain logic - most certainly.
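For the logic half, a quick sketch of why "most certainly" (my own illustration, with Python standing in for transistors): all of Boolean logic can be recovered from a single physically realizable switching element, the NAND gate.

```python
# Build the classical connectives from NAND alone. A NAND gate is a
# mundane material object (a handful of transistors), yet composing
# copies of it yields every truth-functional operation of logic.

def nand(a, b):
    return not (a and b)

def negation(a):         # NOT from NAND
    return nand(a, a)

def conjunction(a, b):   # AND from NAND
    return negation(nand(a, b))

def disjunction(a, b):   # OR from NAND
    return nand(negation(a), negation(b))

# Exhaustive truth-table check over both truth values.
for a in (False, True):
    for b in (False, True):
        assert conjunction(a, b) == (a and b)
        assert disjunction(a, b) == (a or b)
print("AND, OR, NOT all recovered from NAND alone")
```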
 
Agreed, and of course the first question is a god-of-the-gaps type argument. That we can't explain consciousness in material terms at this moment doesn't mean that we won't at some point be able to explain it. PB and Stillthinkin are trying to find meaning in our gap of knowledge. Hint: It's not there.
 
That we can't explain consciousness in material terms at this moment doesn't mean that we won't at some point be able to explain it.
Alternatively, and just as correctly:

That we can't explain consciousness in ~material terms at this moment doesn't mean that we won't at some point be able to explain it.

:)
 
