
More monkey physics...

Badly Shaved Monkey
Can someone explain a phenomenon I have observed whereby screwing your eyelids half-shut can sometimes make a blurry image sharper?

(The irritating thing is that BSM intended to post this a few days ago after having that experience yet again, but BSM's poor substitute for a life intruded and he didn't get a chance, and he can't remember the exact circumstances required for the trick to work, so he's hoping others can recognise the experience without him having to explain in more detail. I think I may have been trying to read a signpost through a rain-spattered car window.)

I have several possible mechanisms in mind. I don't think squinting squashes the eyeball itself and alters its shape so that it focuses differently. Do the gaps in your eyelashes act like a load of pinhole lenses? Though I'm not sure how that would help. I've also wondered whether it's a perceptual trick: the image is not really better, but by screwing up your eyes and reducing the overall brightness you lower your visual system's 'expectations' of image quality, so the actual image seems sharper by reference to a lowered expectation.

And while we're on the subject of signal-to-noise ratios: can anyone remind me of the background to a technique for picking up really weak signals, normally undetectable, by adding white noise to the signal? I think the purpose is to raise the amplitude of the signal above the threshold of the detector. I remember reading about it in Scientific American a few years ago, but can now remember no more details.
 
Can someone explain a phenomenon I have observed whereby screwing your eyelids half-shut can sometimes make a blurry image sharper?

For the same reason that a pin-hole camera is always in focus.

Lack of focus comes from the lens not being exactly the right shape, so two different rays of light from the same object don't hit the same spot on the retina. If all the light comes through a tiny, tiny aperture, then only one ray of light can get in, so it will always hit the same spot as itself.

Basically, the smaller the aperture, the longer the range of distances at which a lens will focus correctly, because most of the blurriness -- but also most of the light -- comes from the outside bits of the lens.
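
For the numerically inclined, here's a minimal thin-lens sketch of that geometry in Python. The focal length, distances and pupil sizes are invented, eye-ish numbers for illustration, not measurements of a real eye.

```python
# Thin-lens blur-circle sketch: how aperture size affects sharpness.
# All numbers are illustrative, not measurements of a real eye.

def image_distance(focal_length, object_distance):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def blur_circle(aperture, focal_length, focused_at, object_at):
    """Diameter of the blur spot on the retina for an object that is
    not at the focused distance (similar-triangles geometry)."""
    retina = image_distance(focal_length, focused_at)  # where the retina sits
    sharp = image_distance(focal_length, object_at)    # where this object focuses
    return aperture * abs(retina - sharp) / sharp

f = 0.017       # ~17 mm focal length, roughly eye-like
focused = 2.0   # eye focused at 2 m
target = 0.5    # object at 0.5 m, i.e. well out of focus

for pupil in (0.006, 0.003, 0.001):   # 6 mm, 3 mm, 1 mm apertures
    c = blur_circle(pupil, f, focused, target)
    print(f"aperture {pupil * 1000:.0f} mm -> blur circle {c * 1e6:.0f} um")

# The blur diameter scales linearly with the aperture: squeeze the
# aperture toward a pinhole and everything looks sharp.
```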
 
Another possibility is that if the image isn't focusing on your retina correctly, you've basically got a bunch of copies of the image falling on different parts of your retina. Perhaps squinting cuts down on the number of overlapping copies, so it looks sharper.

As for your other question, I believe that you're talking about dithering. It's mostly a technique for digital devices: basically, the idea is that if you have a constant signal, every rounding error is going to go the same way each time. Dithering introduces a random element into the rounding error. The closer the signal is to one of the rounded values, the greater the probability that it will be rounded to that value, and that gives you some information about the signal.
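
To make that concrete, here's a toy NumPy sketch; the signal level and the uniform dither range are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant analog level that sits between two quantizer steps.
true_level = 3.3
samples = np.full(10_000, true_level)

# Plain rounding: every sample commits the same error, so averaging
# many samples tells you nothing new.
plain = np.round(samples)
print(plain.mean())     # 3.0 -- the 0.3 is lost

# Dithered rounding: add noise *before* quantizing. Each sample now
# rounds up with probability ~0.3 and down with probability ~0.7, so
# the sub-step information survives in the average.
dithered = np.round(samples + rng.uniform(-0.5, 0.5, samples.shape))
print(dithered.mean())  # ~3.3
```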
 
As for your other question, I believe that you're talking about dithering. It's mostly a technique for digital devices: basically, the idea is that if you have a constant signal, every rounding error is going to go the same way each time. Dithering introduces a random element into the rounding error. The closer the signal is to one of the rounded values, the greater the probability that it will be rounded to that value, and that gives you some information about the signal.

I can see what you're talking about, but I'm sure it was about limits of detection and rendering subthreshold signals detectable.
 
For the same reason that a pin-hole camera is always in focus.

Lack of focus comes from the lens not being exactly the right shape, so two different rays of light from the same object don't hit the same spot on the retina. If all the light comes through a tiny, tiny aperture, then only one ray of light can get in, so it will always hit the same spot as itself.

Basically, the smaller the aperture, the longer the range of distances at which a lens will focus correctly, because most of the blurriness -- but also most of the light -- comes from the outside bits of the lens.

A few years ago, people sold glasses that were made out of a thin metal sheet with lots of holes. One would think that having lots of holes would defeat the purpose of having a pinhole, but they actually worked pretty well. I think they were taken off the market over safety concerns, because they limited the amount of light so much.
 
I think what you're after is "stochastic resonance." It's similar to dithering in that it consists in adding noise, but it's done with analog signals, and for a different purpose.

Hooray! I think that's it.

Using that to search the Sci Am site, I find an article entitled "The Benefits of Background Noise"

and I think that's what I was remembering.
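
For anyone who wants to play with it, here's a toy Python sketch of the effect as I understand it, assuming the simplest possible set-up: a sub-threshold sine wave, added Gaussian noise, and a hard threshold detector. All the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A sub-threshold signal is invisible to a hard threshold detector on
# its own, but a moderate dose of noise lets it poke through
# preferentially near its peaks.
t = np.linspace(0, 20, 20_000)
signal = 0.8 * np.sin(2 * np.pi * t)   # peak 0.8, below the threshold
threshold = 1.0

for noise_amp in (0.0, 0.1, 0.4, 2.0):
    noisy = signal + noise_amp * rng.standard_normal(t.shape)
    fired = (noisy > threshold).astype(float)  # 1 whenever the detector fires
    # Correlation of the detector output with the hidden signal.
    score = np.corrcoef(fired, signal)[0, 1] if fired.std() > 0 else 0.0
    print(f"noise {noise_amp:.1f} -> correlation {score:.2f}")

# With zero noise the detector never fires; with huge noise it fires
# at random. In between there is a sweet spot where the firing pattern
# tracks the hidden signal -- that's the resonance.
```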

But, here's a thing. When I looked up stochastic resonance on Google, I found this page

http://neurodyn.umsl.edu/sr/

I don't know how stable the link is because the page says it is under construction.

Look at the image labelled "noise=120", then look at it with screwed-up eyelids, and I challenge you not to find it a more readily interpretable image with your eyes screwed up.

Now, this can't relate to pinhole cameras and the focussing properties of small apertures, because this image is not out of focus, it is too grainy. If you vary how well focussed an image of the picture is, you will get more or less sharp images of the individual white dots that make it up. If you screw up your eyes you actually blur out the white dots but make the whole image easier to interpret. It's as if the blurring makes it easier for you to perceive the whole image, whereas a well-focussed rendition means you see the dots and not the whole: a kind of not-seeing-the-wood-for-the-trees effect.



Right. That's pretty much the situation I was envisaging in my OP.

Clearly I don't know what I'm doing when I try to attach an image. The original is about 4x6 cm, and the effect I'm describing is much better with the original. Could someone else attach it and tell me how they did it, please?
 

Attachment: Grainy Image.JPG (26.7 KB)
A few years ago, people sold glasses that were made out of a thin metal sheet with lots of holes. One would think that having lots of holes would defeat the purpose of having a pinhole, but they actually worked pretty well. I think they were taken off the market over safety concerns, because they limited the amount of light so much.
That would be the "Bates method," by coincidence recently covered in this thread. They still seem to be available: http://www.naturalvision.com.sg/vte.asp
 
The pinhole lens effect is one reason.

Pattern recognition is another, and is of course the explanation for grainy images being improved by squinting at them. Our brain always tries to extract patterns from an image, and in a noisy image the system is flooded by possible patterns in the noise. This tends to drown out the intended image. By intentionally blurring the picture, you reduce the number of possible patterns, and it is easier to ignore the noise.

Hans
 
Is this only true with "familiar" images, such as a face?
Or can blurring the image also clarify an unfamiliar image?

I.e., is this actually a neural/memory effect, not an optical one at all?
 
The explanation I have seen (from a vision researcher in a grad seminar on vision) has to do with feature detectors (sorta) and Fourier analysis of visual images, and takes place at the level of the visual cortex.

Basically, the experiments on this involve saturating particular feature detectors by exposing the visual field to a grid (depending on which wavelengths you wish to saturate, the grid can be coarser or finer).

By saturating that particular feature detector and thereby fatiguing it, one can subtract much of it from the perception of a visual stimulus. A complex visual stimulus may light up a whole bunch of different sine-wave feature detectors, in different orientations from horizontal. The Fourier synthesis (not quite technically a Fourier synthesis, but that is the right idea) of these sine signals gives rise to our perception of the complex picture.

Ok, still there?

Squinting effectively filters out one end (I can't remember whether it is the higher or lower frequencies) of the spectrum of gradients, and essentially cuts out a whole lot of static and a little bit of picture. The resultant percept is of a much clearer picture. (To answer Soapy Sam, it can work on unfamiliar pictures, or even on pictures of pseudorandom "noise". I have seen pics that look like grey blobby things, but which contain a very legible message if one merely squints while reading it. Or wears the appropriate grid filter lens.)
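
Here's a crude computational analogue of that filtering story, assuming squinting acts roughly like a low-pass filter (a simple box blur here); the stripe pattern and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# A coarse pattern drowned in pixel-level noise becomes easier to
# recover once the high spatial frequencies are averaged away.
n = 128
x = np.tile(np.arange(n), (n, 1))                          # column index per pixel
pattern = (np.sin(2 * np.pi * x / 64) > 0).astype(float)   # coarse stripes
noisy = pattern + 1.5 * rng.standard_normal((n, n))        # heavy fine-grained noise

def box_blur(img, k):
    """Crude low-pass filter: average over (2k+1) x (2k+1) neighbourhoods."""
    out = np.zeros_like(img)
    for dx in range(-k, k + 1):
        for dy in range(-k, k + 1):
            out += np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    return out / (2 * k + 1) ** 2

for k in (0, 2, 6):
    blurred = box_blur(noisy, k)
    # Correlation with the true pattern as a crude "legibility" score.
    score = np.corrcoef(blurred.ravel(), pattern.ravel())[0, 1]
    print(f"blur radius {k} -> correlation with pattern {score:.2f}")

# Blurring throws away the fine detail (mostly noise here) while
# keeping the coarse structure, so the correlation rises.
```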

More here: http://www.psychol.ucl.ac.uk/alan.johnston/Psychophysics.html

BTW: Isn't the real research in Sensation & Perception soooo much cooler than stupid arguments about the nature of qualia?
 
For another example, try this: http://www.portalmix.com/eyetest.htm

I think that what's happening here is that each letter is made up of white regions separated by black lines. At lower resolutions, the lines become less distinct, and the white regions blur into each other.
 
Lack of focus comes from the lens not being exactly the right shape, so two different rays of light from the same object don't hit the same spot on the retina.
I wouldn't say that the lens isn't the right shape, it's more that it's simply focused at the wrong distance. Making the aperture smaller lengthens the "depth of field" that photographers talk about: more stuff will be in focus. A large lens/aperture gathers more light, but the focus is more critical.

Now that I'm well over 40, I've seen another consequence of this. I have trouble focusing up close (less than half a meter) now, but it helps if the situation is well lit. Then my iris closes down my pupil, making the aperture smaller and bringing things more into focus.
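
To put rough numbers on that, here's the same thin-lens arithmetic as in the earlier sketch, with illustrative values for an eye that can focus no closer than about a metre.

```python
# Rough numbers for the bright-light effect; all values illustrative.

def image_distance(f, d):
    """Thin-lens equation 1/f = 1/d + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / f - 1.0 / d)

f = 0.017                        # ~17 mm, roughly eye-like
retina = image_distance(f, 1.0)  # retina sits where a 1 m object is sharp
page = image_distance(f, 0.4)    # where text at 0.4 m actually focuses

for pupil in (0.006, 0.002):     # dim light ~6 mm, bright light ~2 mm
    blur = pupil * abs(retina - page) / page
    print(f"pupil {pupil * 1000:.0f} mm -> blur {blur * 1e6:.0f} um")

# A bright room stops the pupil down to ~2 mm, cutting the blur to a
# third of its dim-light size -- the same pinhole effect as squinting.
```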
 
I wouldn't say that the lens isn't the right shape, it's more that it's simply focused at the wrong distance. Making the aperture smaller lengthens the "depth of field" that photographers talk about: more stuff will be in focus. A large lens/aperture gathers more light, but the focus is more critical.

Now that I'm well over 40, I've seen another consequence of this. I have trouble focusing up close (less than half a meter) now, but it helps if the situation is well lit. Then my iris closes down my pupil, making the aperture smaller and bringing things more into focus.

Yes, but the lens of the eye focuses by changing its shape, rather than by moving forward and back like a camera lens, so when it is out of focus it is, in fact, the wrong shape.
 
But, here's a thing. When i looked up stochastic resonance on Google, I found this page

http://neurodyn.umsl.edu/sr/

I don't know how stable the link is because the page says it is under construction.
That link is about 7 or 8 years old.

I know this, because neurodyn.umsl.edu is the homepage for the Center for Neurodynamics at UMSL, which is the group where I did my PhD work. The head of that group, Frank Moss, was my PhD advisor. He is also one of the foremost experts in the field of stochastic resonance.

The man in the picture, Enrico Simonotto, was a postdoc in the Center back when I was doing my PhD there (from '96 to 2001). He left shortly before I finished.


Dr. Stupid
 
