Materialism and Logic, mutually exclusive?

And here I thought Ms. Swinson & I had been discreet. :p


LMAO!

I never thought I'd bust out laughing with Hammy...

BTW - and this sickens me a bit, but oh well - I'm finding myself more and more in agreement with Hammy. Now if he'd do more than 'post and run'...

:D
 
This was supposed to be a reply to the Ms. Swinson comment (which I had assumed was an Orlanda reference). Damn, I gotta learn to pay attention.

In any event,

That we can't explain consciousness in ~material terms at this moment doesn't mean that we won't at some point be able to explain it.
Yes, but that is not in question. Everyone concedes this point. Let's just not draw any conclusions from either position.
 
Taffer et al., we all acknowledge that the nervous system is a necessary condition for consciousness, sentience, human living, etc. This was discussed back in post 58. The doubt is around whether the nervous system, as a purely material thing, is sufficient to explain consciousness -- or in the case of this thread, logic.

If you read any of the sources I posted, or if you do even a little bit of research in neurobiology, you will see that we are already beginning to have an understanding of the purely material explanation of consciousness.
 
stillthinkin said:
chriswl, you have responded to the first half of post 409, but not to this second half. Anything to say?
I don't really understand what point you were making.

You have redefined "logical inference" so that it doesn't mean merely mechanically producing an answer from some inputs according to some pre-specified logical function. For you logical inference is the rather mysterious process by which these logical functions are created in the first place.

That's not to say that these logical functions cannot sometimes themselves be produced by fairly dumb, mechanical processes. A programmer who is simply translating a specification into code may be acting in this mechanical fashion. He may make errors but these are exactly the same kind of errors (that you are claiming are not real errors) that a computer would make.

But somewhere, if we trace back this chain of mechanical rule-following, we will arrive at one of your moments of "logical inference", a moment where we have some kind of direct intuition of truth.

So I would now ask you, what do you mean by mistakes or errors? By what standards can what you call "logical inference" ever be judged to be wrong?
Thanks chriswl, I think I see what you mean. I have been presuming that people here were more familiar with what logical inference is taken to be within the circle of formally trained logicians.

I dare say I have not "redefined logical inference" at all... it never meant mechanically producing an answer from some inputs. Some people on this forum seem to believe that that is all logical inference is. A mechanism can produce any outputs from any inputs we choose; it will never be logical but always only mechanical.

By logical inference I mean to refer to the correct forms of argument; I could agree with you that logical inference can involve a "moment where we have some kind of direct intuition of truth"... if by that we mean seeing (so to speak) that if the premises of an argument are true, then the conclusion must be true. The conclusion follows from the premises, and this is quite different from causality.

We all make mistakes and are familiar with the experience. Good examples of logical mistakes or errors are the fallacies (see the index).

There is no situation where (correct) logical inference can be wrong. If the premises are false, then the conclusion may be false. But if the premises are true then (correct) logical inference cannot yield a falsehood from those premises. I place the word "correct" in parentheses because logical inference is understood to be correct; otherwise we do not have logical inference but a fallacy.

The point in post 409 is that we cannot at the same time attribute correct logical inference to a machine, but then attribute "mistakes" in logic, due to a bug, to the designer of the machine.
 
Thanks chriswl, I think I see what you mean. I have been presuming that people here were more familiar with what logical inference is taken to be within the circle of formally trained logicians.

I dare say I have not "redefined logical inference" at all... it never meant mechanically producing an answer from some inputs. Some people on this forum seem to believe that that is all logical inference is. A mechanism can produce any outputs from any inputs we choose; it will never be logical but always only mechanical.

By logical inference I mean to refer to the correct forms of argument; I could agree with you that logical inference can involve a "moment where we have some kind of direct intuition of truth"... if by that we mean seeing (so to speak) that if the premises of an argument are true, then the conclusion must be true. The conclusion follows from the premises, and this is quite different from causality.

We all make mistakes and are familiar with the experience. Good examples of logical mistakes or errors are the fallacies (see the index).

There is no situation where (correct) logical inference can be wrong. If the premises are false, then the conclusion may be false. But if the premises are true then (correct) logical inference cannot yield a falsehood from those premises. I place the word "correct" in parentheses because logical inference is understood to be correct; otherwise we do not have logical inference but a fallacy.

The point in post 409 is that we cannot at the same time attribute correct logical inference to a machine, but then attribute "mistakes" in logic, due to a bug, to the designer of the machine.

That is not quite correct, stillthinkin. A piece of logic (i.e. an argument) is deductively valid if, and only if, it is impossible for the premises to be true and the conclusion false. Inductive reasoning is an entirely different matter. It is not quite accurate to say that "if the premises are true then (correct) logical inference cannot yield a falsehood". There is a slight, but significant, difference between "cannot yield" and "if and only if it is impossible". The former gives "cannot yield" as a property of 'correct' logical deduction (the technical term is valid), while the latter makes it a requirement for the validity of an argument.

Also remember that when dealing with validity, the actual truth values (i.e. whether the premises or conclusions are true or not) have absolutely no bearing on the validity of an argument. For example, "All unicorns are invisible and pink. That is a unicorn. Therefore it is invisible and pink." is a valid argument, while "The sky is blue. Grass is green. Therefore unicorns don't exist" most clearly is not.
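To put the "impossible for the premises to be true and the conclusion false" formulation in concrete terms, here is a minimal sketch in Python (purely hypothetical, my own illustration, not anything posted earlier in this thread) of a brute-force truth-table test for validity. It simply tries every assignment of truth values and looks for a counterexample.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Brute-force truth-table test: an argument is valid iff no assignment
    of truth values makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises all true, conclusion false
    return True

# The unicorn argument: all unicorns are invisible and pink; that is a unicorn;
# therefore it is invisible and pink.  (Valid, whatever the facts happen to be.)
premises = [lambda e: (not e["is_unicorn"]) or e["is_invisible_and_pink"],
            lambda e: e["is_unicorn"]]
conclusion = lambda e: e["is_invisible_and_pink"]
print(is_valid(premises, conclusion, ["is_unicorn", "is_invisible_and_pink"]))  # True

# "The sky is blue. Grass is green. Therefore unicorns don't exist."  (Invalid.)
premises = [lambda e: e["sky_blue"], lambda e: e["grass_green"]]
conclusion = lambda e: not e["unicorns_exist"]
print(is_valid(premises, conclusion,
               ["sky_blue", "grass_green", "unicorns_exist"]))  # False
```

The sketch only makes the earlier point mechanical: validity is a matter of form, so the check never needs to know whether unicorns actually exist.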
 
I dare say I have not "redefined logical inference" at all... it never meant mechanically producing an answer from some inputs. Some people on this forum seem to believe that that is all logical inference is. A mechanism can produce any outputs from any inputs we choose; it will never be logical but always only mechanical.
You are begging the question. You start by defining mechanical as something that humans don't do and therefore what machines do is not the same as what humans do.

Your reasoning is circular.

By logical inference I mean to refer to the correct forms of argument; I could agree with you that logical inference can involve a "moment where we have some kind of direct intuition of truth"... if by that we mean seeing (so to speak) that if the premises of an argument are true, then the conclusion must be true. The conclusion follows from the premises, and this is quite different from causality.
And we can build truth machines to verify if a conclusion follows from the premises.

The point in post *409 is that we cannot at the same time attribute correct logical inference to a machine, but then attribute "mistakes" in logic, due to a bug, to the designer of the machine.
I have not done that. Because of the inherent nature of magnetic media, data files become corrupt. The error, or what we perceive as error, is not due to a mistake by the designer but to the limitations of the hardware and variables that we cannot precisely predict. Likewise, human error, or what we perceive as error, is the result of limitations of our brains and complex variables that we cannot precisely predict.

Stillthinkin,

You have yet to demonstrate that humans do something that machines don't when it comes to decision making and mistakes. You are committing a number of fallacies including a god of the gaps type of argument and you are begging the question.

You cannot advance your argument unless and until you establish that humans are fundamentally different in the way they process data and employ logic to make decisions.

You have only asserted that humans make mistakes and machines don't, without establishing this proposition. It would go a long way if you could focus on this question.

RandFan

*409 is a computer error. Just a coincidence but I thought that was ironic.

Your Web server thinks that the request submitted by the client (e.g. your Web browser or our CheckUpDown robot) can not be completed because it conflicts with some rule already established. For example, you may get a 409 error if you try to upload a file to your Web server which is older than the one already there - resulting in a version control conflict.
I concede that the description contains anthropomorphic language.
 
I dare say I have not "redefined logical inference" at all... it never meant mechanically producing an answer from some inputs. Some people on this forum seem to believe that that is all logical inference is.
I was using the word "mechanical" metaphorically. To proceed mechanically is to proceed in a determined, strictly causal way. Like a machine. In this sense logical inference is mechanical. The conclusions follow inevitably from the premises. My point is that anything that is purely "mechanical" in this metaphorical sense can be made actually mechanical, because it is not doing anything that a physical mechanism cannot also do.

A mechanism can produce any outputs from any inputs we choose; it will never be logical but always only mechanical.
You are just stating your conclusion here. You haven't shown this.

By logical inference I mean to refer to the correct forms of argument; I could agree with you that logical inference can involve a "moment where we have some kind of direct intuition of truth"... if by that we mean seeing (so to speak) that if the premises of an argument are true, then the conclusion must be true. The conclusion follows from the premises, and this is quite different from causality.
It is clear that a piece of computer code or digital hardware contains the correct forms of logical argument, so it should still qualify as logic. Brains and computers are causal devices that can implement and embody logic.
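As an illustration of what I mean by a causal device embodying logic, here is a small hypothetical Python sketch (my own, offered only as an example): a forward-chaining rule applier that does nothing beyond mechanical lookups on a set of facts, yet every step it takes is an application of modus ponens.

```python
def forward_chain(facts, rules):
    """Mechanically apply 'if all antecedents hold, then consequent' rules
    until nothing new can be inferred. Pure causal rule-following, yet each
    step is a correct application of modus ponens."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)  # the consequent is inferred
                changed = True
    return facts

rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]  # all men are mortal
print(forward_chain({"socrates_is_a_man"}, rules))
# {'socrates_is_a_man', 'socrates_is_mortal'}
```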

There is no situation where (correct) logical inference can be wrong. If the premises are false, then the conclusion may be false. But if the premises are true then (correct) logical inference cannot yield a falsehood from those premises. I place the word "correct" in parentheses because logical inference is understood to be correct; otherwise we do not have logical inference but a fallacy.
Agreed.

The point in post 409 is that we cannot at the same time attribute correct logical inference to a machine, but then attribute "mistakes" in logic, due to a bug, to the designer of the machine.
Of course we can. In this case the designer made incorrect logical inferences (not really logic) in writing the program and the machine made only correct inferences in carrying it out.

Don't forget that the programmer and computer are not carrying out the same logical inferences. The programmer is using his powers of logic to turn a specification into a machine-executable program, according to a set of rules he learnt when he learnt how to be a programmer. Let's imagine the program that is being written here is a compiler. When this is run the computer uses its logical abilities to turn high level language code into machine code, using rules that were programmed into it by the human programmer.

It is clear that both the human and the computer are doing the same sort of task (which involves doing logical inferences) but they are not literally making the same logical inferences. This means that the human can commit logical errors and the computer then correctly execute the resulting code without itself being said to be making errors. Just as there could have been a mistake in the original specification handed to the programmer but he could have correctly implemented the incorrect specification without himself being said to be in error.
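A toy version of that division of labour, again just a hypothetical Python sketch for illustration: the "programmer" supplies the translation rules, the "machine" applies them faithfully, and a mistake in the rules remains the rule-writer's error even though the machine executes them flawlessly.

```python
# Translation rules written by the (fallible) programmer.
RULES = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a + b,  # programmer's slip: this should be a - b
}

def machine(a, op, b):
    """The 'machine': look up the rule for op and apply it, nothing more."""
    return RULES[op](a, b)

print(machine(5, "+", 2))  # 7 -- correct rule, correctly executed
print(machine(5, "-", 2))  # 7 -- wrong rule, still correctly executed
```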
 
Taffer said:
stillthinkin said:
Taffer et al., we all acknowledge that the nervous system is a necessary condition for consciousness, sentience, human living, etc. This was discussed back in post 58. The doubt is around whether the nervous system, as a purely material thing, is sufficient to explain consciousness -- or in the case of this thread, logic.
If you read any of the sources I posted, or if you do even a little bit of research in neurobiology, you will see that we are already beginning to have an understanding of the purely material explanation of consciousness.
Taffer, I have tried reading the sources you posted references to. It seems to me, prima facie, that all you have done is search for neurobiological abstracts containing the word "consciousness", and then posted some of the contents of the abstract. Perhaps you are not aware that the sources whose abstracts you have cited are not actually available to the public without charge. The one I was most interested in, which begins "there is a clear tendency", would cost me $30 to acquire. I registered for free at some other sites, and still could not obtain anything more than the abstract.

This makes me question whether you have read these articles yourself. If they actually are available to you, perhaps I can get a copy from you, at least of the one that I mention an interest in?

As for a "purely material explanation of consciousness", this thread is actually about materialism and logic. I would greatly appreciate a description of the following:

1. a materialist account of what a proposition is
2. a materialist account of what a modus tollens argument is

By "materialist account", I mean of course one that is in accord with the claim that "everything that exists is material". So, either a proposition and a modus tollens argument are composed of matter, or they do not exist.
 
Taffer and RandFan - your posts 508 and 509 are responses to my post 507, which was directed quite explicitly to chriswl. My time is very limited. I do not intend to respond.
 
Taffer and RandFan - your posts 508 and 509 are responses to my post 507, which was directed quite explicitly to chriswl. My time is very limited. I do not intend to respond.
That's fine with me. I've already promised not to be impatient and then I was impatient. :boggled:

I appreciate your contributions. I realize the difficulty you have. I can't speak for Taffer, but no worries on my part.

RandFan

P.S. I reserve the right to respond to the posts that you make in response to others.
 
I was using the word "mechanical" metaphorically. To proceed mechanically is to proceed in a determined, strictly causal way. Like a machine. In this sense logical inference is mechanical. The conclusions follow inevitably from the premises. My point is that anything that is purely "mechanical" in this metaphorical sense can be made actually mechanical, because it is not doing anything that a physical mechanism cannot also do.
Do you mean that we can do logic habitually, without really concentrating, and that this is what you call "mechanical"? So, for example, because we can argue "if x then y... not y... therefore not x" quite habitually, without really thinking too hard, it can therefore be done by a mechanism? Or perhaps you mean that it can be mechanical because I have removed the propositions themselves and replaced them with the letters "x" and "y"? You use the word "inevitability" for both mechanical and logical results, but that does not mean that mechanisms and logic are the same thing.

chriswl said:
It is clear that a piece of computer code or digital hardware contains the correct forms of logical argument, so it should still qualify as logic. Brains and computers are causal devices that can implement and embody logic.
It is not clear that a machine - whether a lever or a computer - employs logic whatsoever. Causality among material things is not the source of logic, nor may logic be reduced to causality -- that at least is my contention. In your view it would seem that brains and computers may causally implement illogic just as well as logic; in fact, we humans seem to do illogic much more efficiently than logic. If causality is the source of logic, then logic is no better than illogic.


stillthinkin said:
The point in post 409 is that we cannot at the same time attribute correct logical inference to a machine, but then attribute "mistakes" in logic, due to a bug, to the designer of the machine.
Of course we can. In this case the designer made incorrect logical inferences (not really logic) in writing the program and the machine made only correct inferences in carrying it out.
Again, if the designer does correct logic and therefore the machine does logic... but the designer makes a mistake and the computer does not make the mistake, how is that not special pleading? The machine does what it is designed to do, whether logical or illogical.

Don't forget that the programmer and computer are not carrying out the same logical inferences. The programmer is using his powers of logic to turn a specification into a machine-executable program, according to a set of rules he learnt when he learnt how to be a programmer. Let's imagine the program that is being written here is a compiler. When this is run the computer uses its logical abilities to turn high level language code into machine code, using rules that were programmed into it by the human programmer.

It is clear that both the human and the computer are doing the same sort of task (which involves doing logical inferences) but they are not literally making the same logical inferences. This means that the human can commit logical errors and the computer then correctly execute the resulting code without itself being said to be making errors. Just as there could have been a mistake in the original specification handed to the programmer but he could have correctly implemented the incorrect specification without himself being said to be in error.
The process by which a human being designs a machine to accomplish a task, and what that machine actually does, are definitely connected... obviously. But the human being and the machine are not doing the same thing at all.

There was a good discussion between piggy and elliotfc on the "free will redux" thread about this, regarding Babelfish software, starting here. I agree with elliotfc. I would say that every decision regarding translation was made by the programmers of the software, and that the resulting text in the destination language is due to the decisions that they made, not to the alleged decisions of the machine.

I liked your work on that thread as well, regarding emergent properties.
 
It is not clear that a machine - whether a lever or a computer - employs logic whatsoever. Causality among material things is not the source of logic, nor may logic be reduced to causality -- that at least is my contention.
And one that you simply assert.

In your view it would seem that brains and computers may causally implement illogic just as well as logic; in fact, we humans seem to do illogic much more efficiently than logic. If causality is the source of logic, then logic is no better than illogic.
For rhetorical purposes let me ask, what is the basis to assert that humans "do illogic"? You claim that humans do things like "illogic" that machines don't without any basis other than to assert that it is true.

The process by which a human being designs a machine to accomplish a task, and what that machine actually does, are definitely connected... obviously. But the human being and the machine are not doing the same thing at all.
Again, you are simply asserting a difference without justifying the difference. It seems that you are merely appealing to our intuition that there is a difference. I see no reason to accept your contention without justification.

There was a good discussion between piggy and elliotfc on the "free will redux" thread about this, regarding Babelfish software, starting here. I agree with elliotfc. I would say that every decision regarding translation was made by the programmers of the software, and that the resulting text in the destination language is due to the decisions that they made, not to the alleged decisions of the machine.
But what about programs that evolve? You know, programs that evolve by learning strategies like the famous chess programs?

I would expect that the rebuttal would be that yes, these programs evolve, but only in a narrow way. A chess program that evolves game strategy isn't going to evolve to play checkers, and this is a fair critique. But we are only at the beginning of designing such programs. Is there any reason to suppose that such evolution won't eventually lead to a program that can learn any game without being taught the game first? A computer that can learn to solve problems the designer never envisioned for the software?

To answer those two questions in the negative, you would first have to figure out what exactly differentiates humans from machines that is not material in nature, and to date you can't do that.

When I came to understand that one fact, I realized that skepticism of materialism is simply god of the gaps. Arguing from ignorance. To counter PB's rebuttal: I'm not arguing that we will, only that materialism is all we have, and we know that degradation of the brain causes degradation in the ability to perform logic. To argue that there is something other than materialism to explain consciousness is to argue without any foundation but ignorance.
 
They used to say a machine will never walk bipedally. We have robots that walk, jump, and climb stairs now. They used to say machines were slow and inefficient at basic calculations and would never replace human thinking. We've shown that to be o-so-wrong. They used to say machines couldn't play chess, or tie shoes, or drive cars, and on and on and on...

They used to say The Clapper could never be used in tandem with a second Clapper for turning on multiple devices: only one to a box. We may someday have boxes with two, even three Clappers. They used to say The Clapper couldn't properly distinguish between an actual human clap and the clap of common venereal disease caused by the bacterium neisseria gonorrhoeae. We've shown that to be o-so-wrong. New Clapper detection lights will glow only when an actual human clap is detected. These Clapper detection lights will help you determine the proper speed and loudness of your claps that are necessary to activate any subsequent Clapper. They used to say The Clapper couldn't help us to play chess, or tie shoes, or drive cars, and on and on and on. It can't, but the enclosed detailed instructions may help us understand how to model these exact same processes in a Better Clapper that may possibly one day exist. If it does. :)
 
Taffer, I have tried reading the sources you posted references to. It seems to me, prima facie, that all you have done is search for neurobiological abstracts containing the word "consciousness", and then posted some of the contents of the abstract. Perhaps you are not aware that the sources whose abstracts you have cited are not actually available to the public without charge. The one I was most interested in, which begins "there is a clear tendency", would cost me $30 to acquire. I registered for free at some other sites, and still could not obtain anything more than the abstract.
This makes me question whether you have read these articles yourself. If they actually are available to you, perhaps I can get a copy from you, at least of the one that I mention an interest in?

Firstly, the point of an abstract is to inform the reader of the goals, method, and results of a paper without having to read the rest of the paper. Anything stated in an abstract will be backed up in the paper. It is not always necessary to read the entire paper to understand its results.
Secondly, it would be quite the breach of copyright law if I were to give you an academic journal article that I have access to. Despite this, the one you were interested in is one of the articles I was unable to access (others I could). However, as I said above, an abstract presents the key findings of a paper.

As for a "purely material explanation of consciousness", this thread is actually about materialism and logic. I would greatly appreciate a description of the following:

1. a materialist account of what a proposition is

Please. A proposition is a human construct, my friend. We invented it as an essential step in formal logic. Formal logic, however, is modelled on the way the universe appears. How does a materialist's definition of a proposition differ from anyone else's?

2. a materialist account of what a modus tollens argument is

By "materialist account", I mean of course one that is in accord with the claim that "everything that exists is material". So, either a proposition and a modus tollens argument are composed of matter, or they do not exist.

Again, I completely fail to see your point. Modus Tollens, or 'denying the consequent', is just a name given to a particular form of logically valid argument. Logic, as I have already said, is based on a consistent universe. Why would you think otherwise?

When you say that they must be composed of matter, this is both a deliberate misrepresentation of materialism and a straw man argument. Both logical terms were invented by humans, based on consistent observations of our universe. How is this not materialistic? Unless you believe in no empty propositions, which is both silly and blatantly false, then why would you even ask this? It is clear, to anyone with any history in philosophical metaphysics, that materialism does not claim that, for example, 'red' exists as a material entity. It claims that everything in the universe has a material cause. Thus 'red' exists in the objects which cause us to see red.

If you honestly misrepresent materialism in this way, then I suggest you start another thread so we can clearly explain it to you.
 
Stillthinkin, it is clear to me that you are also misrepresenting 'logic'. Logic is not some magical property which exists only in humans. Nor is it some form of spiritual being which a god has put inside of us. Logic is, quite simply, a human construct. It is a word, and a set of rules, used to describe a consistently observable universe, based upon observational rules of that universe. It employs causality, which is observable in our universe. It employs exclusion, which is observable in our universe. Each of the atomic propositions can be reduced to its observable 'cause' (i.e. that which inspired us to construct these laws). The reason why there is no reason machines could not do 'logic'? Because they already do. A logic gate is just that. There is no special property of human logic that makes it different from the universe. Logic is a mechanical representation of a causal universe.

Example. You are trying to logically deduce the cause of a phenomenon. If you are employing formal logic, then even current machines could return the same value. Why is this no different? Because thought does not enter into it. Logic, by its very nature, does not require thought.
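To put a picture on "a logic gate is just that", here is a small hypothetical Python sketch (my own illustration, nothing more): a handful of gates built from NAND, composed causally, reproduce the modus tollens pattern on every possible input, with no thought entering into it.

```python
def NAND(a, b): return not (a and b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def IMPLIES(a, b): return OR(NOT(a), b)   # p -> q

# Check modus tollens mechanically: whenever the gate network says the
# premises (p -> q, not q) hold, it also says the conclusion (not p) holds.
for p in (True, False):
    for q in (True, False):
        premises_hold = AND(IMPLIES(p, q), NOT(q))
        assert (not premises_hold) or NOT(p)
print("modus tollens holds on every input the gates can receive")
```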

I ask you a question, then. Can machines perform mathematics?
 
