
Computer software that knows what you are thinking

We're missing the worst part - if they couple this discovery with parallel advancements in transcranial magnetic stimulation, not only will Big Brother be able to read our thoughts, but to insert them as well!

Time to move to our bunker in the hills I reckon.

What was that movie where they built a device to record and play back thoughts and experiences - one of the first uses was, you guessed it, porn!
 
As DanishDynamite says, this is nothing new at all. I remember reading about this in New Scientist several years ago, and a quick look at the references in the paper shows that they are just repeating similar experiments that have been done for over a decade. Sure, it might lead to more interesting things in the future, but at the moment there is nothing to get worked up over any more than there was 10 years ago.
 
EHLO,

I'm pretty sure that statement has some sarcasm in there...


Cuddles,

Still, I think it does open the door to very dangerous possibilities.

To put it simply, I believe in science, I also believe humans should be entitled to certain civil liberties.
 
Still, I think it does open the door to very dangerous possibilities.

Just about any field of science could lead to dangerous possibilities. Many of them already have. But there is no point worrying about it until you can actually show that they do lead to dangerous possibilities. Even then, the worry should be confined to working out what the possibilities actually are, what the dangers might be, and preferably ways to avoid them.

Investigating the structure of the atom led to some dangerous possibilities. Discovering DNA led to some dangerous possibilities. Fire and the wheel led to some dangerous possibilities. People really need to keep some sense of perspective. Certainly there are many things which require some thought about future problems and ethics, but crying about Big Brother every time there's an article about the brain is just silly.
 
I suppose you're right. But some scientific discoveries pose a greater RISK of danger than others. The ability to determine what a person is thinking would violate every last ounce of privacy they have.

When a person foresees a risk of great danger, is it wrong for that person to speak up in advance, to warn people of the risk so that it doesn't happen?

Regarding ethics and ramifications -- the people who did this study seemed to have no concern for the effects their technology could have, at least not the negative ones. To my knowledge, when Julian Haynes ran an fMRI decoding test on the medial PFC to determine individuals' intent, at least he opened up a serious discussion of neuroethics and wished to evaluate the ramifications that such technology can have.


INRM
 
I suppose you're right. But some scientific discoveries pose a greater RISK of danger than others. The ability to determine what a person is thinking would violate every last ounce of privacy they have.

You're still overstating the case. An fMRI cannot tell "what a person is thinking" - it can measure and display physiological reactions to stimuli. There is evidence that, with a willing participant who is not trying to trick the machine, these physiological reactions can be used to identify the stimulus.

To argue that an fMRI can be used to "determine what a person is thinking" is tantamount to saying that when I determine that my friend is angry because he is frowning, I am reading my friend's mind.
 
I think looking at your friend frowning is quite a different matter than using an fMRI to determine what they are looking at or thinking about...


INRM
 
I think looking at your friend frowning is quite a different matter than using an fMRI to determine what they are looking at or thinking about...


INRM

The difference is only in scale. In each case, we are examining physiological reactions to determine the stimulus causing them. Perhaps a better example would be currently-used lie detector tests, which operate on a different principle than an fMRI but nevertheless measure physical reactions and compare them against a baseline to determine mental stimuli.
 
The fact still remains that using a lie detector (to passively determine what the reaction to a thought is) and using fMRI (to determine not what that reaction is, but rather what the thought was that caused it) are not the same thing, not even in the same ballpark. In fact, there is no existing technology to compare this to.
 
EHLO,

I'm pretty sure that statement has some sarcasm in there...

Yes, indeed it did.

I understand your concerns but as others have mentioned, it's only by the largest leap of imagination that this kind of technology poses the threats that you foresee.
 
The fact still remains that using a lie detector (to passively determine what the reaction to a thought is) and using fMRI (to determine not what that reaction is, but rather what the thought was that caused it)

That is not what an fMRI does. An fMRI does not read your thoughts. From Wikipedia:
Functional magnetic resonance imaging (fMRI) measures the haemodynamic response related to neural activity in the brain...Hemoglobin is diamagnetic when oxygenated but paramagnetic when deoxygenated. The magnetic resonance (MR) signal of blood is therefore slightly different depending on the level of oxygenation. These differential signals can be detected using an appropriate MR pulse sequence as blood-oxygen-level dependent (BOLD) contrast....In general, changes in BOLD signal are well correlated with changes in blood flow....The BOLD signal is only an indirect measure of neural activity, and is therefore susceptible to influence by non-neural changes in the body.

In layman's terms, the fMRI studies blood flow in the brain. Recent studies are showing that blood flow to certain regions corresponds to certain stimuli.

For lie detector tests, the machine measures breathing rate, pulse, blood pressure, and perspiration [HSW]. These physiological responses have been correlated to certain stimuli.

The physiological responses studied by an fMRI are just better predictors of emotion and cognition than the physiological responses measured by the lie detector. Remember, on Mythbusters it was easier to fool an fMRI than it was to fool the lie detector (although their methodology was suspect).
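For anyone curious what "using physiological reactions to identify the stimulus" looks like computationally, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the stimuli, the activation patterns, the noise level - none of it comes from the actual study): each scan is treated as a vector of voxel activations, and a new scan is labelled with whichever stimulus has the closest average training pattern (nearest-centroid classification).

```python
import numpy as np

# Toy sketch, not real fMRI analysis: decode a stimulus category from a
# pattern of simulated "voxel" activations by nearest-centroid matching.

rng = np.random.default_rng(0)

# Hypothetical "true" activation patterns for three stimuli (made up).
true_patterns = {
    "face":  np.array([1.0, 0.2, 0.1]),
    "house": np.array([0.1, 1.0, 0.3]),
    "chair": np.array([0.2, 0.3, 1.0]),
}

# Simulated training data: 20 noisy scans per stimulus.
train = {label: p + rng.normal(0.0, 0.1, size=(20, 3))
         for label, p in true_patterns.items()}

# Centroid (mean pattern) per stimulus.
centroids = {label: scans.mean(axis=0) for label, scans in train.items()}

def decode(scan):
    """Return the stimulus whose mean training pattern is closest to this scan."""
    return min(centroids, key=lambda label: np.linalg.norm(scan - centroids[label]))

# A new noisy scan that was actually evoked by "chair".
new_scan = true_patterns["chair"] + rng.normal(0.0, 0.1, size=3)
print(decode(new_scan))
```

With a cooperative subject the noise is small relative to the separation between patterns, which is why this kind of classification can work at all; an uncooperative subject is, in effect, adding adversarial noise.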
 
SThompson,

Still, would you say that over time they would become increasingly accurate, to the point that their accuracy would approach 100%?


INRM
 
SThompson,

Still, would you say that over time they would become increasingly accurate, to the point that their accuracy would approach 100%?


INRM

No - at some level, any such technology would be affected by quantum mechanics.
 
This kind of technology is old hat. Nothing new there as I read it.

*Yawn*

Good for you sir.

It was the first time I read the article so I thought it was interesting to bring to the forum.

I await the next time you regale us with your disinterest.
 
As DanishDynamite says, this is nothing new at all. I remember reading about this in New Scientist several years ago, and a quick look at the references in the paper shows that they are just repeating similar experiments that have been done for over a decade. Sure, it might lead to more interesting things in the future, but at the moment there is nothing to get worked up over any more than there was 10 years ago.

The hoopla in this article is that they purposely ignored the data from the visual cortex, to see if they could do it with the patterns in other areas of the brain, such as the cognitive areas.
They were looking to see whether the software could identify which object the subject was looking at using the data generated by what the brain was thinking about that object, not the data generated by the visual cortex.

Basically, they were playing around with the algorithms.

Maybe it's old hat, but I thought the implications were much more interesting.
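To make "ignoring the visual cortex" concrete, here is a toy Python sketch (hypothetical voxel patterns and labels invented for illustration - this is not the paper's actual algorithm): a boolean mask drops the visual-cortex voxels before patterns are compared, so classification has to rely on the remaining areas alone.

```python
import numpy as np

# Hypothetical mean activation patterns over 10 voxels for two objects
# (numbers invented for illustration only).
patterns = {
    "chair": np.array([1, 1, 1, 1, 1, 0, 0, 1, 0, 1], dtype=float),
    "shoe":  np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0], dtype=float),
}

# Pretend the first four voxels lie in the visual cortex.
visual_cortex = np.zeros(10, dtype=bool)
visual_cortex[:4] = True

def decode(scan, mask):
    """Label the scan using only the voxels where mask is True."""
    return min(patterns,
               key=lambda label: np.linalg.norm(scan[mask] - patterns[label][mask]))

# A noisy scan evoked by a chair; decoding still works with the
# visual-cortex voxels excluded (mask = ~visual_cortex).
rng = np.random.default_rng(1)
scan = patterns["chair"] + rng.normal(0.0, 0.2, size=10)
print(decode(scan, ~visual_cortex))
```

Note that in this toy setup the two objects happen to produce identical visual-cortex responses, so the non-visual voxels are the only ones carrying information - which is the interesting point of masking them in the first place.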
 
I suppose you're right. But some scientific discoveries pose a greater RISK of danger than others. The ability to determine what a person is thinking would violate every last ounce of privacy they have.

How exactly would it violate your privacy? Bear in mind they have to strap you down in an MRI machine. In what way does dragging you out of your home, forcing you into interrogation, determining that you're thinking of a chair and imprisoning you for life differ from dragging you out of your home, forcing you into interrogation, deciding that you're a terrorist/communist/rebel/liberal/spy/whatever and imprisoning you for life? People with power can do pretty much anything they like to you already, and in many countries they do. How is knowing that you're thinking of a chair going to make any difference?

When a person foresees a risk of great danger, is it wrong for that person to speak up in advance, to warn people of the risk so that it doesn't happen?

It depends on whether the great danger they foresee is based on any evidence. People foresee great danger in mobile phones. They see great danger in genetic modification. They see great danger in vaccines. They see great danger in anything with the word "nano" in it. Should they speak up? No. Should they speak up if they ever actually find any evidence of danger? Of course. You are simply assuming that no-one involved has thought of the implications. Perhaps you should consider that they know a lot more about it than you do, and have concluded that there is no danger, because lying down in an MRI scanner and thinking of a chair doesn't actually present a danger to anyone.
 
Well, I assume the technology would evolve to the point that it could more accurately predict what you're looking at, and eventually even to the point of being able to more or less gauge what you're thinking about. There was an fMRI program used on the medial PFC which enabled them to gauge intent.

And it is possible to gauge blood flow without using an fMRI, even from a distance, theoretically. And *THAT* is what worries me.

I mean, phones used to have those old dials on them, then they went to touch-tone, then their cords became longer, then wireless phones came out. Sure, they were as huge as a loaf of bread, but then they got down to the size of regular phones.

Then they got smaller and smaller and smaller, and even gained the ability to go online, text people, and all sorts of other things. Now we have Razrs and even sliver phones that are tiny and thin.

Maybe it's not the best example, but what I'm getting at is that things evolve well beyond their initial intention. Maybe I'm jumping to conclusions, or maybe I have better foresight than most. But it has me a bit worried.


INRM
 
<some far future paranoid stuff>...


I mean, phones used to have those old dials on them, then they went to touch-tone, then their cords became longer, then wireless phones came out. Sure, they were as huge as a loaf of bread, but then they got down to the size of regular phones.

Then they got smaller and smaller and smaller, and even gained the ability to go online, text people, and all sorts of other things. Now we have Razrs and even sliver phones that are tiny and thin.

Maybe it's not the best example, but what I'm getting at is that things evolve well beyond their initial intention. Maybe I'm jumping to conclusions, or maybe I have better foresight than most. But it has me a bit worried.


INRM

Nope, they've not evolved beyond their original intention. They are still used to facilitate communication across distance. They've just got smaller, and there's been some technical/functional convergence with other older technologies and functions, e.g. sending a letter or taking a picture.

Even if your nightmare scenario came about, what have you got to hide? If they could read my mind then after filtering through the copious amounts of sex and naked ladies, they'd be left with something that matches what I happily say out loud (mind you that goes for most of the sex thoughts too).
 
