Merged Cognitive Theory, ongoing progress

Why would they? I never said they did. ...
Wrong: If I had a grant to do experimental research then credit wouldn't be much of an issue.

I am interested to see what the status (dictionary definition = "the situation at a particular time during a process") of the ongoing progress is, as in the title of the thread "Cognitive Theory, ongoing progress" :jaw-dropp! So far all I see is that you have an idea just in your head.

If it is obvious that who is president does not affect the description or publishing of science, then why do you think Trump being president means that you have concerns about describing or publishing your science?
 
barehl,
Is this your overarching theory, or is there more to it?:

Hypothesis: human cognition and intelligence evolved over time from non-cognitive and lower intelligence organisms. To avoid lots of random, lucky jumps in brain structure, elements that are used at each level must be present in previous generations. And more than likely these elements will still be found in modern organisms that have lower cognition/intelligence.

So, evolution?

What is your goal for the research? To explain how/why consciousness arises? I don't think many people, at least not serious researchers, dispute that it has evolved over time. That doesn't really solve the hard problem of consciousness, though. Or is your goal more applied?
 
Is intelligence an advantageous trait?
Sometimes, depending on the species and its environment. But intelligence also carries costs, for instance energy costs, which may or may not be made up for by its advantages.

If it is then why isn't everything intelligent?
See above.

Why were there no dinosaur civilizations?
Because intelligence, of the type that could lead to the development of civilizations, had not yet evolved in dinosaurs. This is in some sense related to what you said earlier about the evolution of intelligence. That is, evolution proceeds in fits and bursts as particular species adapt to particular environments and old adaptations are repurposed or built upon in new ways.

You couldn't have the evolution of intelligence until you had some sort of brain, you couldn't have the evolution of some sort of brain until there was some sort of nervous system, etc.


These are the types of questions that concern me.
Some of them are quite interesting questions.


That's basically what Darwin said. That isn't a significant point.

Is this:
Hypothesis: human cognition and intelligence evolved over time from non-cognitive and lower intelligence organisms. To avoid lots of random, lucky jumps in brain structure, elements that are used at each level must be present in previous generations. And more than likely these elements will still be found in modern organisms that have lower cognition/intelligence.
More significant? Does it differ particularly from "what Darwin said"? Maybe I'm misunderstanding, but I thought you were presenting this hypothesis as your own original thought and the evidence that is consistent with it as the evidence that is consistent with your theory. If not, I'm mistaken, but in that case you still haven't got around to answering theprestige's question.
 
Cognitive Theory: Ongoing Progress, part 2

Lately, I've been investigating ways of breaking causality in Turing derivative devices (like computers). I remembered that years ago I wrote a non-recursive solution to Towers of Hanoi that wasn't sensitive to the initial state. In other words, it could begin with any legal positioning of the disks and solve in the least number of moves. So, it occurred to me that if the program didn't rely on the previous state for a solution then this would imply a causal break. And it would seem that randomness in the environment would do the same thing. This wasn't a new idea. I was well aware that Hofstadter said the same thing almost forty years ago.
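barehl's original program isn't shown, but the idea of a solver that works from any legal position can be sketched (in Python, as an illustration only) with a breadth-first search over board states, which by construction finds a least-moves solution from wherever it starts:

```python
from collections import deque

def solve_hanoi(state, n, target=2):
    """Find a least-moves solution from ANY legal Tower of Hanoi
    position by breadth-first search over the state graph.
    `state` maps each disk (0 = smallest) to its peg (0, 1, or 2)."""
    start = tuple(state[d] for d in range(n))
    goal = tuple([target] * n)
    parent = {start: None}          # child -> (parent_state, move)
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            moves = []
            while parent[cur] is not None:
                cur, move = parent[cur]
                moves.append(move)
            return moves[::-1]
        # The movable disk on each peg is the smallest disk there.
        tops = {}
        for disk, peg in enumerate(cur):
            tops.setdefault(peg, disk)
        for peg, disk in tops.items():
            for dest in range(3):
                # Legal move: destination is empty or its top disk is larger.
                if dest != peg and tops.get(dest, n) > disk:
                    nxt = cur[:disk] + (dest,) + cur[disk + 1:]
                    if nxt not in parent:
                        parent[nxt] = (cur, (disk, peg, dest))
                        queue.append(nxt)
    return None

# From the standard start, 3 disks need the classic 2**3 - 1 = 7 moves.
moves = solve_hanoi({0: 0, 1: 0, 2: 0}, 3)
```

Note this is exhaustive search, not the clever disk-parity method; it is only meant to show that "solve optimally from any legal state" is a well-defined, purely deterministic computation.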

The next thing to consider was whether this appears in living organisms. The example that came to mind was stereotypical behavior of zoo animals. For example, bears will pace for hours at a time. In the documentaries I'd seen on this the treatment was to make the bear's environment more random. So, it looks like we have confirmation of degraded behavior when environmental entropy is lacking, and an improvement of behavior when environmental entropy is restored. To be honest, the idea that something as intelligent as a bear is so dependent on environmental entropy was quite surprising to me.

This could mean that animals have been more dependent on environmental entropy than I suspected. This would also suggest that animal consciousness would be more constrained. However, humans don't seem to have this severe reliance on environmental entropy. Why? The next question is whether this is seen in great apes, which are closer in brain structure to humans. Since I'm not very knowledgeable about this, I need more information from people who are.

How Abnormal Is the Behaviour of Captive, Zoo-Living Chimpanzees?

Our overall finding was that abnormal behaviour was present in all sampled individuals across six independent groups of zoo-living chimpanzees, despite the differences between these groups in size, composition, housing, etc. We found substantial variation between individuals in the frequency and duration of abnormal behaviour, but all individuals engaged in at least some abnormal behaviour, and variation across individuals could not be explained by sex, age, rearing history or background (defined as prior housing conditions).
So, this wouldn't rule it out. However, it doesn't have information about treatment. In the one experiment that I can recall that specifically sought to make the environment of zoo chimpanzees less predictable, the new elements only seemed to be noticed by juvenile chimps. Mature chimps didn't seem to pay any attention.

Stereotypic Behavior in Nonhuman Primates as a Model for the Human Condition

Autism is a neurodevelopmental disorder whose central features include impaired social interaction and communication as well as stereotyped patterns of behavior
This is of perhaps more interest to me because my nephew has Asperger Syndrome and he does have stereotypical behavior.

in a study of 210 residents of a facility for individuals with intellectual disability, 60.9% were reported to exhibit stereotypies
So, it appears that this behavior in humans is seen with some type of brain disorder. This would suggest that there is a distinct difference between, say, chimp and human cognition. But then we have treatment:

In rhesus macaques, stereotypies have been associated with environmental restriction such as single housing; those housed singly exhibited more repetitive locomotion, stereotypy, and self-directed behavior than did monkeys housed in social groups. Similarly, chimpanzees removed from their social group and placed in individual cages also showed an increase in stereotyped behaviors

In older animals, additional environmental enhancements such as foraging opportunities can also promote an improvement in behavior. For example, the provisioning of straw, food puzzles, and forage materials reduced abnormal behavior in chimpanzees and rhesus macaques; as foraging increased, abnormal behaviors, including stereotypies, decreased.
Now that is quite interesting to me since the chimpanzees are responding to entropy in a way similar to bears. And, this would explain why the older chimps didn't pay attention in the experiment that I was familiar with. And then:

As with nonhuman primates, environmental enrichment also reduced stereotyped behavior in both children and adults with autism and intellectual disabilities

when institutionalized intellectually disabled adults were presented with pictures to look at or objects to manipulate, allowing for an alternate activity, they showed reduced levels of stereotypies.

Similarly, when 13 autistic children were provided with multiple sensorimotor stimuli, including olfactory enrichment, music enrichment, and exposure to different textures and toys, they showed a significant reduction in autism severity scores in comparison with the control group that did not receive the enrichment, and there was also a significant increase in the number of parents reporting an improvement in autism symptoms
This does seem to refute Hofstadter's suggestion that environmental entropy alone could be enough for human level cognition. And, it points to a direction for more investigation. One more piece of the consciousness puzzle.
 
Threads merged. Please do not start a new thread when there is an existing and active thread on the same subject.
Replying to this modbox in thread will be off topic  Posted By: Agatha
 
Lately, I've been investigating ways of breaking causality in Turing derivative devices (like computers). ....
Followed by not even a definition of "causality in Turing derivative devices"!
The usual definitions of causality do not include sensitivity to the initial state. You may be thinking of chaos theory or maybe emergent behavior.

A derail into the abnormal psychology of zoo animals.
 
environmental entropy
I think you have a number of very different concepts confused at very deep levels. The randomness that goes into environmental enrichment has nothing to do with entropy. If, as I suspect, you're using that as a bridge into information theory and/or determinism, you're barking up the wrong ivory tower of cards.
 
Seriously? Sensory and social deprivation are considered one of the more inhumane tortures you can visit on humans.

What bearing this has on cognition I think is illuminated a bit by your tortured comparison of an example with bears and chimpanzees. Chimpanzees are social animals, bears are not. The social bonds can provide chimpanzees with a fair amount of enrichment, not so with bears. Social interaction is necessary for the well being of chimpanzees, but not so with bears. Both animals are intelligent problem solvers, but both require different things.
 
Followed by not even a definition of "causality in Turing derivative devices"!
A Turing Machine is an abstract construct used in computational theory. A real, working version of this is necessarily finite. However, instead of being called a Finite Turing Machine it goes by the less intuitive name of Linear Bounded Automaton. Any working computer we have today should be an LBA. Some assume that brains are also equivalent to LBAs, but this hasn't been proven. So, one of the questions is whether a brain could be described as a Probabilistic Linear Bounded Automaton. This requires a source of random information. However, pseudo-random algorithms aren't actually random, and a table of random numbers isn't random if it is used more than once. The only way to ensure that a table wasn't reused would be to make it infinite in length, and then we would be back to a Turing Machine. Hofstadter suggested that the environment had enough random information that questions like infinite tables of random numbers or pseudo-random algorithms could be avoided. In terms of causality, any deterministic computational device is completely predictable: any future state is known based on the current state and the inputs. The term 'causality' isn't usually applied in computational theory, but it is routinely used in discussions of free will. For example:

The conflict between intuitively felt freedom and natural law arises when either causal closure or physical determinism (nomological determinism) is asserted.

If I'm investigating consciousness then I'm not sure how I can avoid using terms that are commonly used for that topic, particularly when it is overlapping. The term 'stress' used in psychology was borrowed from engineering. The psychological meaning is quite different from the engineering meaning.
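As an aside, the predictability claim above (any future state is known from the current state and the inputs) is easy to demonstrate; here is a minimal, hypothetical sketch using Python's seeded PRNG:

```python
import random

def run_machine(seed, steps=5):
    """A toy deterministic 'device' driven by a pseudo-random source.
    Because the generator is itself deterministic, every future state
    follows from the current state (the seed) and the inputs: rerunning
    with the same seed reproduces the whole trace exactly."""
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(steps):
        state = (state + rng.randrange(10)) % 100
        trace.append(state)
    return trace

# Same seed, same future: the device is completely predictable.
assert run_machine(42) == run_machine(42)
```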

A derail into the abnormal psychology of zoo animals.

I'm not quite sure how I could be derailing my own thread by talking about my own theory in a thread about my own theory. This is the same topic I've been working on for the last four years. This is cognitive theory. This is how I'm trying to explain how human consciousness works and evolved. I'm sorry if this isn't your preferred direction of research. You are of course perfectly free to do your own research.
 
Seriously? Sensory and social deprivation are considered one of the more inhumane tortures you can visit on humans.
That's true, but I didn't mention sensory deprivation and as far as I'm aware none of the experiments involved sensory deprivation. Social deprivation was mentioned as were the negative effects.

What bearing this has on cognition I think is illuminated a bit by your tortured comparison of an example with bears and chimpanzees. Chimpanzees are social animals, bears are not.
I don't recall making a social comparison between bears and chimpanzees or bears and humans. Social aspects would be a different topic.
 
I think you have a number of very different concepts confused at very deep levels.
Feel free to post a list of what concepts I have confused. I'm not sure how many you believe this would be so just start with the first ten or so.

The randomness that goes into environmental enrichment has nothing to do with entropy.

Wikipedia:

In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources (variance in fan noise or HDD), either pre-existing ones such as mouse movements or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.

If, as I suspect, you're using that as a bridge into information theory and/or determinism, you're barking up the wrong ivory tower of cards.
You get an A on your creative use of mixed metaphors. You didn't score as well on the rest.
 
Is this your overarching theory, or is there more to it?

What is your goal for the research? To explain how/why consciousness arises? I don't think many people, at least not serious researchers, dispute that it has evolved over time. That doesn't really solve the hard problem of consciousness, though. Or is your goal more applied?
The goal is a detailed model of how the human brain processes information which results in intelligence, creativity, learning, and understanding. This model would apply to the evolutionary development of the brain and to creating a machine equivalent. And, with a completed model we should be able to answer questions about consciousness and free will.
 
Seriously, think about this probability thing logically. Let's say you have a computational device that depends on random information to function, as you are proposing. If we feed it pseudorandom numbers, or a repeating loop of real random numbers, etc., it fails. If we feed it true non-repeating random numbers, it works.

Firstly, of course, if we use a repeating sequence of true random numbers and the tested runtime is less than the length of the repeat sequence, it will work unless it somehow knows that we will repeat the numbers.

So we get to the repeat. If the computing device now fails, it means that it must have internally stored the entire input sequence of random numbers somehow, otherwise it would not be able to "detect" that a repeat occurred. Of course, the longer the device runs, the more memory it needs so that it will fail if a repeat occurs.

Then there is the issue that repeats do occur normally in true random numbers. If they didn't, the numbers wouldn't be random. The longer the device is running, the longer the sequence of repeats it will see. If it sees a repeat of 400 digits, does it fail? Or does it know that these are true random number repeats and not us cheating by copying 400 bits rather than taking them from a true random number generator?

Then we get to the halting problem. Computationally, it is not possible to build a device that produces true random numbers. You have recognized this and determined that this is a limitation on computation, and to get around it, you will have to feed your device true random numbers. Let's say you build this device and, yes, it works, but only when you feed it true random numbers; otherwise it fails.

I feed the device a large number as input and test whether it fails. If it fails, I did not feed it a true random number, so I increment the number and run it again. I do this until it succeeds. We now have a computation device that can produce true random numbers. If the number produced by this method is not a true random number, then your device did not need true random numbers to function. If it is a true random number, then it's possible to produce true random numbers by computation alone and you don't need an external true random number source.
 
I'm sorry for overlooking your post.

Sometimes, depending on the species and its environment. But intelligence also carries costs, for instance energy costs, which may or may not be made up for by its advantages.
Yes, the classical model of evolutionary brain development covers things like size, weight, and energy requirements, but this model has been lacking. For example, why wouldn't animals larger than humans be intelligent since presumably the energy and size requirements would be negligible? Why aren't elephants and whales smarter than humans? This question is also difficult to answer using an emergent hypothesis.

You couldn't have the evolution of intelligence until you had some sort of brain, you couldn't have the evolution of some sort of brain until there was some sort of nervous system, etc.
You have reptiles during the Carboniferous, about 300 million years ago, which are ancestral to modern reptiles, dinosaurs, birds, and mammals. Why did only a subgroup of mammals become intelligent? Obviously not because of time or body size.

Maybe I'm misunderstanding, but I thought you were presenting this hypothesis as your own original thought
No. I was just giving my reasons for how I was conducting the research. You can approach human intelligence from the point of view of neurology, computational science, philosophy, etc. I didn't make progress until I looked at it from the point of view of evolutionary theory.

If not, I'm mistaken, but in that case you still haven't got around to answering theprestige's question.
I answered that in some detail here:
http://www.internationalskeptics.com/forums/showthread.php?postid=12084765#post12084765
 
Let's say you have a computational device that depends on random information to function, as you are proposing.
I didn't come up with the idea of probability based computation.

Probabilistic Turing machine

If we feed it pseudorandom numbers, or a repeating loop of real random numbers, etc., it fails. If we feed it true non-repeating random numbers, it works.
I'm not sure what you are talking about. A subroutine or algorithm that takes a number as an argument will run exactly the same whether that number is fixed, random, or pseudo-random. This is elementary computer science. You know this as well as I do.

Firstly, of course, if we use a repeating sequence of true random numbers and the tested runtime is less than the length of the repeat sequence, it will work unless it somehow knows that we will repeat the numbers.

So we get to the repeat. If the computing device now fails, it means that it must have internally stored the entire input sequence of random numbers somehow, otherwise it would not be able to "detect" that a repeat occurred. Of course, the longer the device runs, the more memory it needs so that it will fail if a repeat occurs.
Yes, as I've already said. I'm not sure why you thought there was some disagreement on this point.

Other points of non-disagreement would be the Monte Carlo, Atlantic City, and Las Vegas algorithms which seem to work just fine with pseudo-random numbers.
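For illustration, here is a minimal Monte Carlo example (my own sketch, not from the thread) showing such an algorithm working fine with a pseudo-random source:

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi: sample points in the unit square and
    count the fraction landing inside the quarter circle.  A Monte Carlo
    algorithm only needs statistically uniform inputs, which a decent
    pseudo-random generator supplies; true randomness isn't required."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

estimate = estimate_pi(100_000)   # lands close to 3.14159
```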
 
Feel free to post a list of what concepts I have confused. I'm not sure how many you believe this would be so just start with the first ten or so.
You could start with the one I gave you.

barehl said:
Wikipedia:

In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources (variance in fan noise or HDD), either pre-existing ones such as mouse movements or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.
That's informational entropy, yes. It has nothing to do with environmental enrichment.

Environmental Entropy: a mathematical model of enrichment and play should be your first paper. Render down all the lovey-dovey verbiage about environmental interaction and show that there are underlying principles which really can be mapped onto an informational space. THEN you can build on that.
 
I'm not sure what you are talking about. A subroutine or algorithm that takes a number as an argument will run exactly the same whether that number is fixed, random, or pseudo-random. This is elementary computer science. You know this as well as I do.

I'm referring to the input stream of random numbers you are feeding your computational device in order to make probabilistic decisions.

Other points of non-disagreement would be the Monte Carlo, Atlantic City, and Las Vegas algorithms which seem to work just fine with pseudo-random numbers.

Oh, if a pseudo-random number generator of sufficient quality is fine, then your probabilistic Turing machine just reduces to a normal one that incorporates a PRNG of sufficient quality.
 
A Turing Machine is an abstract construct used in computational theory. A real, working version of this is necessarily finite. However, instead of being called a Finite Turing Machine it goes by the less intuitive name of Linear Bounded Automaton. Any working computer we have today should be an LBA.
The first of those sentences is true, but what follows is balderdash.

Turing machines start out finite and remain finite at every finite stage of computation. They are potentially infinite in that there is no bound on the size of their tape (memory). To build a real, working Turing machine, you build a machine with a finite tape that is dynamically extensible: Whenever the TM gets to the end of the tape, you add more tape, pretty much as you would add another memory device to your computer system or would replace one of your system's memory devices by a similar device with more capacity, copying the replaced device's contents onto the new one as you do so.
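The dynamically extensible tape described above is straightforward to sketch in code; here is a toy illustration (assumed names, not from any post) using a dictionary that allocates cells on demand:

```python
from collections import defaultdict

def run_tm(program, tape_input, blank='_', start='A', halt='H'):
    """A Turing machine whose tape is a dictionary, so cells are
    allocated on demand -- the 'add more tape whenever the head reaches
    the end' construction in code.  `program` maps
    (state, symbol) -> (new_state, symbol_to_write, head_move)."""
    tape = defaultdict(lambda: blank, enumerate(tape_input))
    state, head = start, 0
    while state != halt:
        state, tape[head], move = program[(state, tape[head])]
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape))

# Toy program: invert a binary string, halting at the first blank.
flipper = {
    ('A', '0'): ('A', '1', 'R'),
    ('A', '1'): ('A', '0', 'R'),
    ('A', '_'): ('H', '_', 'R'),
}
result = run_tm(flipper, '0110')   # '1001' plus a trailing blank
```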

barehl tells us a Linear Bounded Automaton is finite, but the only bound on the size of a Linear Bounded Automaton's tape is derived from the size of its input. To build a real, working Linear Bounded Automaton, you have to build a machine with an arbitrarily long finite tape. You'd do that the same way you'd build a Turing Machine, but it's slightly simpler because you only have to add the additional tape (memory) once, when you are given a concrete input and can compute the amount of memory needed from the input's size.

It is silly to say, as barehl did, that today's working computers are Linear Bounded Automata rather than Turing machines. You can add new or larger memory devices to consumer-grade computers by plugging in a USB thumb drive. It is also possible to build computers that allow other kinds of memory devices to be added dynamically without shutting down the computer. The ability to do that is necessary if you're building a Linear Bounded Automaton, and once you've done it you've done all of the engineering necessary to construct a working Turing machine.

In the real world, the real reason real computers aren't equivalent to Turing machines is that their ability to address memory devices, even large hard drives, is typically limited by a fixed maximum number of bits used to identify the location/cell/sector/word/whatever you want to access on a memory device, and by the fixed number of bits used to identify the particular memory device you want to access. Both of those technical limitations could be overcome quite easily, but it's cheaper and faster to use a fixed number of bits that's believed to exceed the number of memory devices and the device capacities that will actually be used during the anticipated lifetime of the computer system.

A cognitive theory that's based on fundamental misconceptions about Turing machines and Linear Bounded Automata is unlikely to add to our knowledge of cognition or intelligence.
 
This could mean that animals have been more dependent on environmental entropy than I suspected. This would also suggest that animal consciousness would be more constrained. However, humans don't seem to have this severe reliance on environmental entropy. Why? The next question is whether this is seen in great apes, which are closer in brain structure to humans. Since I'm not very knowledgeable about this, I need more information from people who are.

You draw the oddest conclusions from your examples.

There is a huge difference between a complex environment and a random environment. Meeting the challenge of understanding a complex environment has evolutionary advantages. But there is no point in trying to understand a truly random environment.

On the other hand there are algorithms, such as simulated annealing, that rely on randomness to help solve a complex problem. This has relevance to problem solving using neural networks.
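For concreteness, a bare-bones simulated annealing sketch (illustrative only; the function and parameters are arbitrary):

```python
import math
import random

def anneal(cost, start, neighbor, steps=20_000, t0=5.0, seed=1):
    """Bare-bones simulated annealing.  Randomness is doing real work
    here: early on, a worse candidate is sometimes accepted (with
    probability exp(-delta / t)), which lets the search climb out of
    local minima that a purely greedy descent would be trapped in."""
    rng = random.Random(seed)
    x = best = start
    for step in range(1, steps + 1):
        t = t0 / step                       # simple cooling schedule
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

# A bumpy 1-D landscape with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
best = anneal(f, start=5.0, neighbor=lambda x, r: x + r.gauss(0, 0.5))
```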

I believe these to be two separate and distinct issues. Or perhaps I am totally missing the point.
 
What is the textbook definition of "causality in Turing derivative devices"

A Turing Machine is an abstract construct used in computational theory....
You need to assume that posters in a thread about AI know the basics of AI, e.g. what a Turing machine is :eye-poppi!

And actually reply to a post. I will make it clearer:
What is the textbook definition of "causality in Turing derivative devices".
The definition of causality in any Turing machine might be that the state of the machine + its input causes a change in state. That may include a symbol on the tape that means "generate a random number and go to the state with that number". However I suspect that would be more a "subprogram" on the tape, e.g. a set of symbols that make up a random number generator, etc.
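That definition can be made concrete with a toy sketch (my own illustration): a probabilistic transition picks among several possible next states, and with a single possibility per (state, symbol) pair it reduces to the ordinary deterministic case:

```python
import random

def pstep(state, symbol, delta, rng):
    """One step of a probabilistic Turing machine.  The transition
    relation `delta` maps (state, symbol) to a LIST of possible
    (new_state, symbol_to_write, head_move) outcomes, and one is chosen
    at random.  With exactly one outcome per key, this collapses to an
    ordinary deterministic step: state + input fully determines the
    next state."""
    return rng.choice(delta[(state, symbol)])

# A coin-flip transition: reading a blank in 'q0', either write 0 and
# move left, or write 1 and move right.
delta = {('q0', '_'): [('q1', '0', 'L'), ('q1', '1', 'R')]}
outcome = pstep('q0', '_', delta, random.Random(7))
```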

List sources other than your imagination that relate "causality in Turing derivative devices" to living organisms, e.g. zoo animals. Otherwise we just have what looks like a fantasy that abnormal behaviors (e.g. repetition) in zoo animals are related to a vague or even nonexistent definition.
 
The first of those sentences is true, but what follows is balderdash.

Turing machines start out finite and remain finite at every finite stage of computation. They are potentially infinite in that there is no bound on the size of their tape (memory). To build a real, working Turing machine, you build a machine with a finite tape that is dynamically extensible: Whenever the TM gets to the end of the tape, you add more tape, pretty much as you would add another memory device to your computer system or would replace one of your system's memory devices by a similar device with more capacity, copying the replaced device's contents onto the new one as you do so.

There is a limited amount of matter and energy in the universe that is within our cosmic horizon. Nothing you build in this universe is potentially infinite. Everything you can build can be reduced to a very large finite state machine.
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.

Please don't compare other members to ProgrammingGodJordan.
 
Hierarchical topographic maps. The binding problem was only a problem in the context of the hypothetical computer architecture doing the binding, which had to pick between global concepts devoid of spatial context, or localized information with no unifying architecture. Turns out nested topologies can translate between the two just fine.

If you're familiar with deep learning, it operates on a similar principle.

Let's check this.

The neural binding problem(s) published in Cognitive Neurodynamics.

Abstract:
The famous Neural Binding Problem (NBP) comprises at least four distinct problems with different computational and neural requirements. This review discusses the current state of work on General Coordination, Visual Feature-Binding, Variable Binding, and the Subjective Unity of Perception. There is significant continuing progress, partially masked by confusing the different versions of the NBP.​
Introduction:
In Science, something is called “a problem” when there is no plausible model for its substrate.​
Here's Beelz' reference:

The brain’s organizing principle is topographic feature maps (Kaas 1997) and in the visual system these maps are primarily spatial (Lennie 1998).​
This is the important point about visual feature binding:

Another salient fact is that the visual system can perform some complex recognition rapidly enough to preclude anything but a strict feed-forward computation. There are now detailed computational models (Serre et al. 2007) that learn to solve difficult vision tasks and are consistent with much that is known about the hierarchical nature of the human visual system. The ventral (“what”) pathway contains neurons of increasing stimulus complexity and concomitantly larger receptive fields and the models do as well.​
I agree. Neural networks have made a lot of progress in this area, at least for very, very specific applications. So, it's solved, right? No.

Fortunately, quite a lot is known about Visual Feature-Binding, the simplest form of the NBP.​
We've made progress on two of these:

Suggesting plausible neural networks for General Considerations on Coordination and for Visual Feature-Binding is no longer considered a “problem” in the sense of a mystery.​
But not the other two:

Neural realization of variable binding is completely unsolved

Today there is no system or even any theory of a system that can understand language the way humans do.

We will now address the deepest and most interesting variant of the NBP, the phenomenal unity of perception. There are intractable problems in all branches of science; for Neuroscience a major one is the mystery of subjective personal experience.

What we do know is that there is no place in the brain where there could be a direct neural encoding of the illusory detailed scene (Kaas and Collins 2003). That is, enough is known about the structure and function of the visual system to rule out any detailed neural representation that embodies the subjective experience. So, this version of the NBP really is a scientific mystery at this time.​
Experience is still a mystery.
 
Again, that isn't what I said and you know it. Taking a quote out of the context of the paragraph which explains what I was saying in great detail doesn't help your claim.

If it is obvious that who is president does not affect the description or publishing of science then why do you think
I have not made a claim that Trump would block or interfere with publication. Why do you keep pretending that I did? I've already explained what my concerns were. The fact that you keep ignoring what I said and then making up new things to attribute to me doesn't help you at all in a discussion with me. Or are you doing this as a performance for other people here?

Why don't you try sticking to ideas that are actually mine?
 
I'm referring to the input stream of random numbers you are feeding your computational device in order to make probabilistic decisions.
I said some time ago that I don't believe that consciousness can be explained using only computational theory. Can I prove that yet? No. But that is the direction I'm working in. I may find out that computational theory is adequate after all. Again, all I can tell you is that I was unable to make progress using only computational theory. We'll see.

Oh, if a pseudo-random number generator of sufficient quality is fine, then your probabilistic Turing machine just reduces to a normal one that incorporates a PRNG of sufficient quality.
Well, yes, no disagreement there.
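That reduction can be sketched in a few lines. This is a minimal illustration (names and the coin-flip transition are mine, using Python's `random.Random` as the stand-in PRNG): once the PRNG and its seed are folded into the transition function, the "probabilistic" machine produces the same trace every run.

```python
import random

def run_machine(seed: int, steps: int) -> list:
    """Drive 'probabilistic' branch choices from a seeded PRNG stream."""
    rng = random.Random(seed)
    return ["left" if rng.random() < 0.5 else "right" for _ in range(steps)]

# Same seed, same trace: the machine is deterministic once the PRNG
# (and its seed) are treated as part of the transition function.
trace_a = run_machine(seed=42, steps=10)
trace_b = run_machine(seed=42, steps=10)
assert trace_a == trace_b
```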
 
It is silly to say, as barehl did, that today's working computers are Linear Bounded Automata rather than Turing machines.

In the real world, the real reason real computers aren't equivalent to Turing machines is that their ability to address memory devices, even large hard drives, is typically limited by a fixed maximum number of bits used to identify the location/cell/sector/word/whatever you want to access on a memory device, and by the fixed number of bits used to identify the particular memory device you want to access.
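The fixed-address-width point is easy to make concrete. A rough sketch (the function name is illustrative): with w address bits a machine can name at most 2**w distinct locations, no matter how much storage is attached, whereas a Turing machine's tape is unbounded.

```python
# With a fixed address width of w bits, at most 2**w memory locations
# can ever be distinguished -- the hard cap the quote describes.
def max_addressable(address_bits: int) -> int:
    return 2 ** address_bits

assert max_addressable(32) == 4 * 1024**3    # 32-bit: 4 GiB of addresses
assert max_addressable(64) == 16 * 1024**6   # 64-bit: 16 EiB of addresses
```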

Well, that clears it up. Apparently if I say it then it's silly but if someone else like Clinger says the same thing then it's not silly. Thank you.
 
There is a huge difference between a complex environment and a random environment. Meeting the challenge of understanding a complex environment has evolutionary advantages. But there is no point in trying to understand a truly random environment.

I'm not sure what a truly random environment would be. Wouldn't you need to have variable or changing laws of physics for that? If you are a frog, can you predict when an insect that could make a good meal might happen by? I don't see how you could. That would seem to be a random event.
 
There is a limited amount of matter and energy in the universe that is within our cosmic horizon. Nothing you build in this universe is potentially infinite. Everything you can build can be reduced to a very large finite state machine.

That is true for any specific problem. But it isn't really true for general problems. You would end up needing to have collections of finite state machines and then additional FSMs to decide which one to use. You quickly run into intractability, where the size and complexity of the FSM grow much faster than the complexity of the problem. So, as far as I can tell, theoretically true but not practical.
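The blow-up being described can be illustrated with a standard example (my choice of example, not the poster's): an FSM that must remember the last n input bits needs one state per possible n-bit history, so the problem description grows linearly in n while the machine grows exponentially.

```python
# One state per possible n-bit history: 2**n states to remember a
# sliding window of n bits, even though the problem statement itself
# only grows linearly with n.
def window_fsm_states(n: int) -> int:
    return 2 ** n

growth = {n: window_fsm_states(n) for n in (8, 16, 32)}
# growth == {8: 256, 16: 65536, 32: 4294967296}
```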
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.
You can talk to me directly. I don't resort to feelings and intuition and mysterious forces to explain things. You seem to think that I rely on vagueness or some kind of semantic arguments. Vagueness is what I'm trying to get rid of and terms are only useful if they can be robustly defined. If you know of some evidence that refutes my ideas or if I find evidence elsewhere then I'll have to modify or abandon my ideas. That's what science is.
 
Just for fun I remapped this to my scenario where people think I am gay.

I've been treated differently since I was six years old. I was treated differently all through grade school, high school, and even in college. I've gotten this from family, friends, co-workers, employers, acquaintances, and mental health professionals. This has gone on for a number of decades. So, how many possibilities are there?

1.) We have a case of mass delusion where people who come into contact with me mistakenly think I'm gay when I'm actually not. Since this involves people who have never met each other, it would have to include some kind of telepathy.
2.) At the age of six I could not help fooling people into thinking I was gay. And apparently got good enough at it to fool people who actually were smart and knowledgeable.
3.) The conclusions by others about me have been consistent because they were based on observations.

Thing is, I am not gay. Effeminate? Perhaps. Overly mannered in the way I carry myself? Sure. Attracted to men? Nope. Just ain't there.

ok, back on topic.
 
Update:

The latest thing I've been working on is pronoun reversal in people with autism. I found this reference particularly good since the writer has Asperger's.

https://musingsofanaspie.com/2014/03/14/pronoun-reversal-and-confusion/


Before someone else tells me that my research doesn't relate to my own research (that still makes me laugh), this involves self-perception and language usage.

When you say you're working on pronoun reversal, what form does your work take?
 
Just for fun I remapped this to my scenario where people think I am gay.

You changed what I said. The original post was:

2.) At the age of six I learned to fool people into thinking I was smart. And apparently got good enough at it to fool people who actually were smart and knowledgeable.

So if you were going to substitute then you would need to change both:

2.) At the age of six I learned to fool people into thinking I was gay. And apparently got good enough at it to fool people who actually were gay and knowledgeable.

I'm not sure how that would work. Children I grew up with were not sexually expressive until at least age 10. For example, I used to hold hands with my girlfriend in kindergarten but I didn't have any concept of sexuality then.
 
When you say you're working on pronoun reversal, what form does your work take?

You find out what information is available that describes this phenomenon. Then you see how this fits into cognitive theory. I think a complete theory should be able to cover topics like this. Items like this could either support or falsify a given theory. How would you fit this into something like Integrated Information Theory?
 
You changed what I said. The original post was:

2.) At the age of six I learned to fool people into thinking I was smart. And apparently got good enough at it to fool people who actually were smart and knowledgeable.

So if you were going to substitute then you would need to change both:

2.) At the age of six I learned to fool people into thinking I was gay. And apparently got good enough at it to fool people who actually were gay and knowledgeable.

I'm not sure how that would work. Children I grew up with were not sexually expressive until at least age 10. For example, I used to hold hands with my girlfriend in kindergarten but I didn't have any concept of sexuality then.
Whoosh!
 