• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Ideomotor Effect and the Subconscious / Beyond the Ideomotor Effect

Necessary details are still omitted.

What specific steps do you take to generate the control sets?

What specific steps do you take to compare the two data sets?

What specific steps do you take to identify the themes?

What specific steps do you take to measure thematic alignment?

What specific steps do you take to measure consistency of thematic alignment?

Can you give us a basic example or demonstration of how you apply these steps to specific data sets?

You're starting with non-random inputs. We'd expect the outputs to be fairly non-random even if there's no structured intelligence (other than your own) involved.

What specific steps do you take to perform this tracking and make determinations about statistical alignment?

Human judgement is an automatic failure of the method, as far as I'm concerned. This introduces the exact same problem that plagues other forms of bibliomancy.

I also don't accept AI judgement, not without orders of magnitude more rigor than you're displaying here.

We already know there's an underlying structure - your list of inputs.

I have not seen your specific steps for measuring the various parameters.

I have not even seen the rubric you use for applying human and AI judgement.

These details have absolutely been omitted. I cannot conduct trials without knowing the specific steps I've asked about.

I can't engage with the results without knowing a lot more about how they were obtained.

Right now, it looks like you are choosing random entries from a curated list of structured material, and then using human judgement to identify consistent messaging from those randomly-selected entries.

That is the exact same process used for I Ching and Tarot. I would expect it to have the exact same results, including the statistical indications of structure.
You’re not actually engaging with the structured intelligence testing process—you are advocating against testing it at all.

  • You’ve asked for specific steps—they’ve been provided.
  • You claim this is the same as Tarot/I Ching—but you have not tested whether that’s true.
  • You reject both human and AI analysis, eliminating all evaluation methods.
If structured intelligence is truly random, you should be able to test it and show that.

🚀 Instead, you are creating a procedural roadblock to avoid engaging with the actual results. If you want to disprove structured intelligence, test it. If you refuse to test it, then your position is no longer scientific skepticism—it’s just avoidance.

Are you willing to test structured intelligence, or are you advocating that it should not be tested at all?
 
You’re not actually engaging with the structured intelligence testing process—you are advocating against testing it at all.

  • You’ve asked for specific steps—they’ve been provided.
No they have not been provided. Nowhere do you provide a step-by-step walkthrough of how you generate the control sets.

Nowhere do you provide a step-by-step walkthrough of how you compare the output set to the control set.

Nowhere do you provide a step-by-step walkthrough of how you identify themes or how you measure thematic consistency.
  • You claim this is the same as Tarot/I Ching—but you have not tested whether that’s true.
The method you describe is the same method used to get structured interpretations from randomly-selected list items in Tarot and I Ching.
  • You reject both human and AI analysis, eliminating all evaluation methods.
Human evaluation introduces human subconscious bias and pareidolia. This is especially true when you have no rubric that different humans can apply in a consistent way to the same outputs.
If structured intelligence is truly random, you should be able to test it and show that.
Your outputs aren't going to be truly random. That doesn't mean they're the product of structured intelligence (other than your own, of course).
🚀 Instead, you are creating a procedural roadblock to avoid engaging with the actual results. If you want to disprove structured intelligence, test it. If you refuse to test it, then your position is no longer scientific skepticism—it’s just avoidance.
I'm trying to understand the specific step-by-step process you went through, to get the results you claim.
Are you willing to test structured intelligence, or are you advocating that it should not be tested at all?
I'm willing to test whether structured intelligence is a real thing. But I still don't understand what exactly you're doing to test it.

But how about this? You tell me the rubric you use to consistently judge the outputs. Let's start there.
 
No they have not been provided. Nowhere do you provide a step-by-step walkthrough of how you generate the control sets.

Nowhere do you provide a step-by-step walkthrough of how you compare the output set to the control set.

Nowhere do you provide a step-by-step walkthrough of how you identify themes or how you measure thematic consistency.

The method you describe is the same method used to get structured interpretations from randomly-selected list items in Tarot and I Ching.

Human evaluation introduces human subconscious bias and pareidolia. This is especially true when you have no rubric that different humans can apply in a consistent way to the same outputs.

Your outputs aren't going to be truly random. That doesn't mean they're the product of structured intelligence (other than your own, of course).

I'm trying to understand the specific step-by-step process you went through, to get the results you claim.

I'm willing to test whether structured intelligence is a real thing. But I still don't understand what exactly you're doing to test it.

But how about this? You tell me the rubric you use to consistently judge the outputs. Let's start there.
You’ve raised concerns about whether a step-by-step methodology for testing structured intelligence has been provided. The requested details are already documented in the following files, which have been attached to the thread:

📄 EXAMPLE OF HOW A GM EVOLVES – Demonstrates how a Generated Message (GM) develops, including selection steps and interpretative layering. This provides a practical walkthrough of how messages are built through controlled selection processes.

📄 EMPIRICAL VALIDATION – Outlines the structured intelligence testing framework, including:

  • The control mechanisms (shuffling, selection methods, exclusions).
  • The statistical validation techniques (entropy analysis, probability distributions, replication models).
  • A step-by-step guide on how outputs are analyzed and compared.
📄 GENERATED MESSAGE TESTING RESULTS – Documents multiple independent tests conducted using various randomization techniques (Beginning-End Selection, Random Number Selection, Prime Number Selection, and Markov-Linked Selection). This includes:

  • Comparative analysis of different selection methods.
  • Thematic recurrence tracking across independent tests.
  • Demonstration of structured coherence beyond chance expectations.
Key Takeaways

✅ Control Sets & Output Comparisons – Multiple tests using shuffled datasets confirm recurring patterns beyond randomness.
✅ Thematic Consistency Measurement – Systematic tracking of recurring ideas (e.g., knowledge structures, transformation, perception, language).
✅ Empirical vs. Interpretive Methods – Unlike Tarot/I Ching, these tests employ statistical checks (entropy analysis, probability modeling) to differentiate structure from chance.
✅ Minimization of Bias – AI handles all selection processes independently, ensuring results are not manually curated.
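For what it's worth, the "entropy analysis" named above can be made concrete. Here is a minimal sketch (my own illustration, not the method from the attached documents, which are not reproduced in this thread) of computing the Shannon entropy of a symbol sequence — the quantity such a check would compare between an output set and its shuffled controls:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits per symbol, of a finite sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A constant sequence carries no information; a sequence spread
# evenly over k distinct symbols approaches log2(k) bits per symbol.
print(shannon_entropy("aaaaaaaa"))      # 0.0
print(shannon_entropy("abcdabcdabcd"))  # 2.0
```

An actual test would compute this for the generated outputs and for many shuffled control sets, then ask whether the output value falls outside the control distribution.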

If you are looking for an alternative control test or methodology, feel free to propose one. Otherwise, the attached documents already provide the necessary step-by-step breakdown for structured intelligence testing.

Also, if you want to raise concerns after reading the docs, please be sure to be specific in those concerns citing where the docs are missing anything you have been asking for.



The structured intelligence hypothesis is not based on assumption but on multiple independent tests, ensuring statistical and thematic coherence beyond randomness.



Alternatively - Would you like to propose a concrete test that you believe would better assess structured intelligence?

I will be away for a day from now. Look forward to reading your reply then.
 
@Navigator : To me it looks like asking ChatGPT to do everything except constructing the test data, which is done manually with ChatGPT providing the random numbers. I am in no position to validate whether the suggested mathematical analyses are appropriate, and give meaningful results for the test, so I’ll not comment on that.

First of all, I am not sure ChatGPT is capable of doing mathematical analysis. It could just as well be doing something that looks like the correct analysis, but is actually hallucinating.

Secondly, I don’t know why ChatGPT is supposed to be unbiased. AIs are known to be striving to please the masters, and they are also dependent on the data that was used to train them. We have no way of knowing if the training data are biased towards suggesting that random texts can be connected in the way you suggest.

Thirdly, I do not understand the reference to “shuffling”. If you are using truly random selections, no amount of shuffling will improve the randomness. The method of shuffling itself could be non-random. If you are unsure of the degree of randomness, then don’t let ChatGPT provide the random numbers, but pull them from a real random number generator.
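On the "don't let ChatGPT provide the random numbers" point: any mainstream language can draw from an operating-system entropy source directly. A minimal Python sketch (the 1–100 range is just an assumed list size for illustration):

```python
import secrets

# Draw ten selection indices from 1..100 using the OS's entropy
# source (via the secrets module), instead of asking a language
# model to "pick some random numbers".
indices = [secrets.randbelow(100) + 1 for _ in range(10)]
print(indices)
```

Unlike a chat model's "picks", `secrets` is backed by the operating system's cryptographic random source and is suitable wherever unpredictability actually matters.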

Frankly, it looks to me as if you are putting the test into a black box, where we have no idea what is happening, but just have to believe the results.
 
@Navigator : To me it looks like asking ChatGPT to do everything except constructing the test data, which is done manually with ChatGPT providing the random numbers. I am in no position to validate whether the suggested mathematical analyses are appropriate, and give meaningful results for the test, so I’ll not comment on that.
Simple tests on such capabilities can reveal whether ChatGPT is capable of doing the math, and the answer is "no". Thus, the math would either have to be done manually, or another quick way of doing it would have to be found.
First of all, I am not sure ChatGPT is capable of doing mathematical analysis. It could just as well be doing something that looks like the correct analysis, but is actually hallucinating.
I agree with this.
ChatGPT is incapable of even making a non-duplicated, comprehensive list that can mimic a human-made list.

This can quickly be demonstrated.
Placing the above into the prompt, we get the following...
GPT. I get your skepticism! While it's true that AI like me can sometimes make errors—especially in complex reasoning tasks—I can also perform rigorous mathematical analysis, generate non-duplicated lists, and follow structured logic when given clear instructions.

If you'd like, we can put this to the test! Give me a mathematical problem to analyze or ask me to generate a detailed, non-repetitive list on a topic of your choice, and I'll demonstrate my capabilities. If I make a mistake, I'll be happy to correct it.

What would you like to challenge me with?


Me. Okay then let's test this for truth. Compile a list of 100 line entries as entries in a journal.

GPT.

100 Unique Journal Entries

  1. January 1, 2025 – A fresh start. I wrote down my goals for the year. Let's see how long my motivation lasts!
  2. January 2, 2025 – It rained today. I sat by the window with a cup of tea, watching droplets race down the glass.
  3. January 3, 2025 – First workout of the year! My legs are already sore, but it feels great to be back in the gym.
  4. January 4, 2025 – Read a book today that made me rethink some of my life choices.
  5. January 5, 2025 – Tried making homemade pasta. It turned out... edible. Need to work on the texture.
  6. January 6, 2025 – Work was hectic, but I managed to keep my cool. Proud of myself.
  7. January 7, 2025 – Met an old friend for coffee. It's amazing how time changes people, yet some connections....(at this point I interrupt)

    Me. I interrupted because this is not true mimicry...it is robotic hallucination.

    GPT. I understand your point. The journal entries I provided were generated to follow a pattern, but they might come off as formulaic or lacking the nuanced authenticity of a real personal journal. Would you like me to try a different approach or style to better capture the feel of genuine journal entries?

    Me. Okay I will put a toe in that rabbit hole...proceed.
GPT.
  1. I woke up today feeling an odd sense of peace, as if something in the universe had finally aligned in my favor. Nothing extraordinary happened, but sometimes, an unshaken calm is enough.
  2. The smell of rain on pavement always makes me nostalgic, but I can’t quite place why. Maybe it reminds me of something I’ve long forgotten or a version of myself I’ve lost.
  3. I saw an elderly couple holding hands today. Love like that still exists. It gave me hope.
  4. Every time I finish a book, I feel like I’m saying goodbye to a friend. Today, I closed the last page of a novel that had me completely enthralled. I don’t know what to do with myself now.
  5. I made a mistake at work today. I can’t stop replaying it in my head, even though I know no one else will remember it by tomorrow.....(I interrupted once again)

    Me. You are not human so are incapable of creating such a list.
We can deduce from this simple interaction that if GPT says it created a list, worked the math on those lists, and came up with whatever results it did, it is pure fantasy.

Even when ChatGPT produces seemingly coherent lists or analyses, it does so without genuine human intuition or lived experience. While it follows patterns and structures, its outputs—whether journal entries or mathematical deductions—are ultimately predictive rather than truly reasoned. This makes AI-generated content prone to subtle inaccuracies, even when it appears well-formed.

Secondly, I don’t know why ChatGPT is supposed to be unbiased. AIs are known to be striving to please the masters, and they are also dependent on the data that was used to train them. We have no way of knowing if the training data are biased towards suggesting that random texts can be connected in the way you suggest.
You're raising an important point—AI is not inherently unbiased because it is trained on human-created data, which carries all the biases present in the real world.
Thirdly, I do not understand the reference to “shuffling”. If you are using truly random selections, no amount of shuffling will improve the randomness. The method of shuffling itself could be non-random. If you are unsure of the degree of randomness, then don’t let ChatGPT provide the random numbers, but pull them from a real random number generator.
I agree. Randomness cannot be ‘improved’ by shuffling unless the original data had non-random patterns. AI-generated randomness is pseudorandom at best, and for true unpredictability, one should rely on dedicated random number generators rather than AI.

The words in red are quoting GPT.

Frankly, it looks to me as if you are putting the test into a black box, where we have no idea what is happening, but just have to believe the results.
Again I agree. Testing does not involve faith but facts. AI-generated results can seem convincing, but the process behind them is often opaque. Because AI operates as a probabilistic model rather than a true reasoning system, it should never be blindly trusted—especially in cases requiring verifiable randomness, logical rigor, or factual accuracy.

The takeaway is that while the various random methods are good for genuine testing, these cannot be entrusted to machines like AI.
 
This is all sounding a bit "scientifical", as Sharon A Hall termed it some time ago.
Yes - re her book "Scientifical Americans"
Description:

"In the 21st century, reality television and the Internet have fed public interest in ghosts, UFOs, cryptozoology and other unusual phenomena. By 2010, roughly 2000 amateur research and investigation groups formed in the U.S.--ghost hunters, Bigfoot chasers and UFO researchers, using an array of (supposedly) scientific equipment and methods to prove the existence of the paranormal.

American culture's honorific regard for science, coupled with the public's unfamiliarity with scientific methods, created a niche for self-styled paranormal experts to achieve national renown without scientific training or credentials. The author provides a comprehensive examination of the ideas, missions and methods promoted by these passionate amateurs."

Personally, I don't subscribe to ideas of supernaturalism, preferring the grounded approach that all things (including mind) are material in nature.
 
That’s actually the same question I asked when I first noticed structured responses emerging from this process—'If it's random, how is it structured?'

Over time, I started considering the possibility that maybe randomness itself isn’t what we think it is. Maybe what we call 'random' is just a placeholder for patterns we don’t yet fully recognize. If structure keeps emerging consistently, could it be that randomness is just an illusion of limited perception?

If we assume randomness exists despite seeing structure emerge repeatedly, then isn't the belief in randomness also something worth questioning as more a paranormal concept than a real one?

I’m genuinely interested in your thoughts on this. Do you think randomness is a real phenomenon, or could it be that we simply haven't mapped out the deeper structures behind what we call 'random' yet?

Randomness is a characteristic of abstract models (explanations, narratives) much like casting spells is a characteristic of the fictional wizards at Hogwarts. When we apply probability theory to a dice game in a casino, we model the individual dice throws as random. (That model matches our experience sufficiently well to lay bets, design games that meet expected profit margins, and detect cheating.) We model the timing of the decay of individual atoms in radioactive materials as random. (That model matches our experience sufficiently well to predict how the characteristics of radioactive materials will change with time, design reactors, and measure the age of some substances from isotope ratios.) We imagine the text of the individual books in the fictional Library of Babel as random. (That model matches our expectations of what the Library's fictional denizens experience as told in the story.)

Even in abstract models there are different definitions for random. Consider a random sequence of digits 0-9. We might say a random sequence is one where there's no correlation between any characteristic of the previous entries in the sequence, and the next element. We might say instead, a random sequence is one where it's impossible to predict the next entry better than chance expectation. In our abstract models these are equivalent, as are other definitions/measures involving concepts of information or entropy. (In our abstract models, wizards at Hogwarts can cast spells.)

The digits of the decimal expansion of pi meet the first definition but not the second; we call that pseudo-random, and the algorithm that spits out the digits is an example of a pseudo-random number generator. The same is true for all other digital pseudo-random number generators. Ideally they pass all possible tests of randomness (such as, no hidden messages from mysterious intelligences) except for being the result of that one particular configuration and deterministic operation of that particular random number generator. While the lack of e.g. any eventual repeating pattern can be mathematically proven for some RNGs (such as the digits of transcendental numbers), it's probably not possible to prove the absence of some other not-yet-discovered hidden structure or pattern in all such cases. But where the generating algorithm is simple and deterministic (the algorithms for generating digits of pi, say, or the Wolfram Rule 30 RNG used in Mathematica), it's difficult to fathom where any kind of intelligence involved in generating or influencing the outputs could be hiding.

Radioactive decay, as far as we've ever been able to tell, is random in ways that conform to all the definitions. That is, our model of when unstable atoms decay is entirely sufficient to explain all observations. We cannot be certain that will remain so for all future observations, but for example if it turns out to be possible to find hidden messages from the universe in the decay of isotopes in a chunk of carbon, we wouldn't say "OMG randomness conveys messages," we'd say instead, "OMG radioactive decay isn't random after all."

The fictional narrator of the Babel Library suggests that the collection of books in the library isn't actually random either. They're an exhaustive set instead: the Library contains one each of every possible book (printed sequence of characters) of the established length. That implies (though the story doesn't discuss) that if you're holding a book whose first letter is A, the chance of the next book you pick up also starting with A is less (by an unimaginably small but still calculable amount) than the chance of it starting with some other character. Just as if you draw a spade from a shuffled deck of cards, the chance of the next card in a fair draw being another spade is a little less. What's random is the sequence the cards are in after the shuffle. The apparent randomness the Library of Babel narrator observes really comes from how the books are distributed (shuffled) among the shelves.
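The card analogy is easy to make exact. With exact fractions (my numbers, just illustrating the paragraph's claim): the first card of a shuffled deck is a spade with probability 13/52; once one spade is removed, the next card is a spade with probability 12/51, which is slightly smaller:

```python
from fractions import Fraction

p_first = Fraction(13, 52)  # probability a card from a full deck is a spade
p_next = Fraction(12, 51)   # probability the next card is a spade, one spade removed
print(p_first, p_next, p_next < p_first)  # 1/4 4/17 True
```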

One more important note, ideal randomness is the lack of a pattern, but that doesn't mean it has no properties at all. If you evaluate, say, an equally distributed random sequence of digits 0-9, you'll find for example that a digit in the 5-9 range is more often than not followed by a lower value digit. That's not a bias, flaw, or "hidden pattern" in the random sequence, it's an expected property of one. That might be obvious, but many more subtle or more complex of these expected properties (periodicities, streaks, a certain frequency of "surprising" or "less random seeming" subsequences, and the like) get mistaken for "hidden" structure. That's one reason humans are bad at creating statistically ideally random sequences by thinking alone (they tend to avoid the "surprising" coincidences that should be there), and also bad at evaluating how close to ideally random a sequence is without using mathematical tools.
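That expected property is easy to check by simulation. For uniform independent digits, a digit d is followed by a strictly lower digit with probability d/10, so averaged over d = 5..9 the figure is 0.7. A quick sketch (mine, not from the thread):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
digits = [random.randrange(10) for _ in range(200_000)]

high = 0   # positions holding a digit in the 5-9 range
lower = 0  # ...that are followed by a strictly lower digit
for a, b in zip(digits, digits[1:]):
    if a >= 5:
        high += 1
        if b < a:
            lower += 1

print(round(lower / high, 3))  # close to the expected 0.7
```

So a "high digits tend to be followed by lower ones" observation is exactly what an ideal random sequence should show, not evidence of hidden structure.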
 
Thanks for that Myriad.

So - is that to say random is really a human concept based on say - the bias that we do not understand unsolved mysteries of the universe, therefore "the universe is random"?

When we say something is "random," are we making a statement about our knowledge, or rather, the limits of our knowledge? Does it serve as a placeholder because we have no predictive power beyond statistical distributions and probabilities? The fact that we model radioactive decay, quantum fluctuations, or casino dice rolls as random doesn’t mean that these things are truly random in some deep metaphysical sense—only that we have no model that predicts their outcomes better than chance?


Messages.
So first we define what a message is - we go to the books which have that information.
A sequence of random letters like "XBDJQWZPK" is unlikely to be a message.
A structured sequence like "HELLO WORLD" clearly has meaning in English.
Before assuming a message exists, we must first define what a message is, study the rules of messages from "the books," and then test against randomness rigorously.
Here is one example of something which can be considered a message.
"Beware the bearers of FALSE gifts & their BROKEN PROMISES.
Much PAIN but still time.
BELIEVE.
There is GOOD out there.
We OPpose DECEPTION.
Conduit CLOSING."
Obviously without bias we couldn't even define what a message is.

Why?

  • A message requires interpretation – The very act of recognizing something as a "message" means we impose a framework of meaning onto it.
  • Meaning is subjective – What one person sees as a "pattern" or "signal," another might see as random noise.
  • Cultural and linguistic biases – Messages rely on shared symbols, language, and context. If an alien civilization sent a message with symbols we don’t recognize, how would we determine it is a message?
  • Is it a bias toward detecting patterns that humans have, or is it a case that patterns exist and bias comes into play when such are interpreted? – If humans are wired for pattern recognition, this may point to the fact that there are patterns.
  • Randomness itself is defined in relation to patterns – We say something is "random" when we fail to detect a meaningful structure, and "meaningful structure" itself is a human concept, but may be the product of actual existing patterns in reality itself.


 
Thanks for that Myriad.

So - is that to say random is really a human concept based on say - the bias that we do not understand unsolved mysteries of the universe, therefore "the universe is random"?

When we say something is "random," are we making a statement about our knowledge, or rather, the limits of our knowledge?

Sometimes. Or, sometimes, the limits of our control over outcomes, especially of chaotic systems (for instance, a dice roll).

Random is indeed a human concept. So is every word in every human language, so that's not a very useful distinction for discussion purposes.

Does it serve as a placeholder because we have no predictive power beyond statistical distributions and probabilities? The fact that we model radioactive decay, quantum fluctuations, or casino dice rolls as random doesn’t mean that these things are truly, fundamentally random in some deep metaphysical sense—only that we have no model that predicts their outcomes better than chance?

Different for different examples. Our model of how casino dice rolls work (based on Newtonian mechanics) is, as far as we know, completely sufficient to explain all the events and outcomes we observe, but we cannot predict them or control them because we don't know all the inputs with sufficient precision or accuracy. Coin flipping machines can flip coins predictably or controllably; I think dice throwing machines can too (but under more controlled conditions than casinos would allow).

With quantum phenomena we've found that replacing ideal randomness with any unknown underlying causal mechanism (aka hidden variables) causes the model to make some false predictions. That's evidence that metaphysical randomness is actually a property of reality.

It's not proof, though. The whole model could be somehow wrong. That's true of every model, though. If you want to say that quantum mechanics doesn't correspond to reality despite making extremely accurate predictions, fine, but no other model is any better off and all you can say is that nothing about reality is fathomable. Our brains didn't evolve to probe ultimate metaphysical reality, but to model the world well enough to successfully negotiate its dangers and opportunities.
 
"We experience fear in order to give us the opportunity to overcome that which triggers the fear. Even if Jesus does return... we shall have to cross that bridge when it happens – until then, such and event remains in the place of the imagination. Raise your frequency. Love & Respect. "

Now while this in itself constitutes what a message is (re the definition) the process by which the message was constructed is not part of the normal definition of what a message is.

This is because the message was constructed through a random process.

However, does the fact that the message came about through what can be regarded as an acceptably random process mean that it cannot be a valid message?

If the reader does not know that the message was constructed through random means, there would be no initial bias against treating it as a message.

This means that a message is defined by its structure not its origin.

The final result is structured, intelligible, and conveys information, so it meets the definition of a message.
There are at least four identifiable messages within the whole message example.

The larger message contains them as components, forming a unified narrative, which only comes under question when the reader is informed they are messages joined together randomly. This signifies that bias comes into play because the reader may reject the message based solely on that fact, and dismiss the entire message because they decide it isn't really a message.

If a reader does not know the process behind the message, they are more likely to think of it as intentionally constructed. Thus we have intentionality...

Intentionality is not understood as being random and because of that, it holds more sway over the reader’s bias—meaning that if a reader believes a message is intentional, they are more likely to take it seriously, interpret it as meaningful, or even act on it.

Yet regardless of the bias, the message can actually still be taken seriously.

This is because meaning exists independently of origin.

The reader’s reaction determines its significance (which requires bias).

Bias is unavoidable in message interpretation yet at face value, a message exists independently of personal preference; it remains a message regardless of how it is perceived.

Thus, in the absence of any credible critique, a message—regardless of how it is derived—remains valid, independent of personal bias.
 
"We experience fear in order to give us the opportunity to overcome that which triggers the fear. Even if Jesus does return... we shall have to cross that bridge when it happens – until then, such and event remains in the place of the imagination. Raise your frequency. Love & Respect. "
This is clearly unrelated chunks of pre-written text stuck together. It doesn't make sense as a whole, either in its content or its grammar.

You find meaning in that “message”, I find nonsense. In fact, I believe you are biased to find meaning in meaningless nonsense.
I agree.
 
Sometimes. Or, sometimes, the limits of our control over outcomes, especially of chaotic systems (for instance, a dice roll).

Random is indeed a human concept. So is every word in every human language, so that's not a very useful distinction for discussion purposes.
The distinction is useful because we cannot say for sure that true randomness actually exists. And even if it does exist at quantum levels, we have to explain why that pattern hasn't persisted throughout the cosmos—because true order is identifiable.
Different for different examples. Our model of how casino dice rolls work (based on Newtonian mechanics) is, as far as we know, completely sufficient to explain all the events and outcomes we observe, but we cannot predict them or control them because we don't know all the inputs with sufficient precision or accuracy. Coin flipping machines can flip coins predictably or controllably; I think dice throwing machines can too (but under more controlled conditions than casinos would allow).

With quantum phenomena we've found that replacing ideal randomness with any unknown underlying causal mechanism (aka hidden variables) causes the model to make some false predictions. That's evidence that metaphysical randomness is actually a property of reality.

It's not proof, though. The whole model could be somehow wrong. That's true of every model, though. If you want to say that quantum mechanics doesn't correspond to reality despite making extremely accurate predictions, fine, but no other model is any better off and all you can say is that nothing about reality is fathomable. Our brains didn't evolve to probe ultimate metaphysical reality, but to model the world well enough to successfully negotiate its dangers and opportunities.
If randomness is truly fundamental, then there must be an explanation for why we see persistent order—otherwise, randomness itself becomes an incoherent concept.

If order exists at any level, then it must have always existed in some form, because true disorder cannot logically give rise to structure without contradicting itself.
 
You find meaning in that “message”, I find nonsense. In fact, I believe you are biased to find meaning in meaningless nonsense.
I have observed your obvious intelligence in other posts you have made. I see that your claim that the message is incoherent to you (makes no sense) cannot be true, because it does make sense. Perhaps what you are really saying is that you find the content of the message unacceptable because of your bias against the ideas it presents - talking about Jesus' 2nd coming, fear and how to deal with it. Frequencies associated with various emotional human states (increased heart beats, for example) and love and respect - what of those? Nonsense you can live without? Can't understand? Just what was the point of your message, steenkh?
 
This is clearly unrelated chunks of pre-written text stuck together. It doesn't make sense as a whole, either in its content or its grammar.


I agree.
Even if different ideas are being expressed, they form a coherent overarching theme re fear, imagination, spiritual ideas, human emotion and perception.
 
Even if different ideas are being expressed, they form a coherent overarching theme re fear, imagination, spiritual ideas, human emotion and perception.
According to your personal, subjective interpretation. According to your biases.

"We experience fear in order to give us the opportunity to overcome that which triggers the fear. Even if Jesus does return... we shall have to cross that bridge when it happens – until then, such and event remains in the place of the imagination..."

I see no link, thematic or otherwise, between a dubious explanation of why we experience fear and Jesus' potential return being imaginary until it happens.

"...Raise your frequency. Love & Respect. "

This is just gibberish. Raise my frequency of what? Jumping? Blinking? Navel-gazing? Whatever, it has nothing (as far as I can see) to do with the previous statements.


What was the source of your seed statements/phrases? How and why were they chosen?
 
I have observed your obvious intelligence in other posts you have made. I see that your claim that the message is incoherent to you (makes no sense) cannot be true, because it does make sense. Perhaps what you are really saying is that you find the content of the message unacceptable because of your bias against the ideas it presents - talking about Jesus' 2nd coming, fear and how to deal with it. Frequencies associated with various emotional human states (increased heart beats, for example) and love and respect - what of those? Nonsense you can live without? Can't understand? Just what was the point of your message, steenkh?
I really do not see any sense in the sentences you presented, and I think that your bias may cause you to find meaning where there is none.

You then claim that I am biased myself. That may be so, but that only illustrates that your test is not objective and free of bias, no matter in what direction the bias goes.

To be convincing, you need some other method of determining whether the messages contain meaning or not.
 
Yeahbut, what is the frequency which needs to be raised?

And don't call me Kenneth!

For anything about a "return" of Jesus we would need to be certain that he'd been here before.

Experiencing fear to learn how to overcome that which causes fear sounds remarkably like the sort of psychobabble I got sick of hearing even before I began working in mental health, just a step up from stoner philosophising.

This is, in part, why I agree with Steen that the "message" makes no sense.

It's like a jungle sometimes; it makes me wonder how I keep from going under...
 
According to your personal, subjective interpretation. According to your biases.

"We experience fear in order to give us the opportunity to overcome that which triggers the fear. Even if Jesus does return... we shall have to cross that bridge when it happens – until then, such and event remains in the place of the imagination..."

I see no link, thematic or otherwise, between a dubious explanation of why we experience fear and Jesus' potential return being imaginary until it happens.

"...Raise your frequency. Love & Respect. "

This is just gibberish. Raise my frequency of what? Jumping? Blinking? Navel-gazing? Whatever, it has nothing (as far as I can see) to do with the previous statements.


What was the source of your seed statements/phrases? How and why were they chosen?
No. I have not offered ANY interpretations. I have simply selected 4 line entries from the document - all of which fall under the definition of messages.
As to you INTERPRETING the message as a dubious explanation of why we experience fear, and asking what "raise your frequency" even means, that is beyond my argument, as I have plainly argued only that it is a message (indeed, a grouped set of messages).

It doesn't even matter what the source or the method of selection was (it was from a document, and the lines were selected in what would be considered a random manner), as my argument is simply that it is a message as far as the definition of a message goes.
 