The Impossibility of Invisibility

The scale of the image generating element required within these "pixels" is much smaller than the wavelength of light

Why do you say so, that the image generating element is or would need to be much smaller than the wavelength of light?
 
Here are two ways to look at it.

Watch a movie (like I'm watching now).
I can see a person up close on the screen, a tree 20 yards away, and mountains that look 50 miles away. If I look at the up-close person, are the views behind them blurred? Nope, not when I look at them. It's an infinity shot.

The way this would work (it's not perfected yet) is to make the image appear as if you were looking through a window. A window simply shows what is behind it. The objects themselves don't need manipulating; if you can't see them clearly, it's your vision that needs to refocus.

With this, it would be like looking through a window. You would focus your sight on what you choose. It wouldn't take a set snapshot, but rather keep a continuous view of everything behind it, like presenting a clear window to the viewer from either side.

If a projector can give you a lifelike look, as if the image were merely a window to what was behind it, then the viewer would focus to view the part of the image they wanted. The image doesn't change, just the viewer's focus.


Cheers,
DrZ

Yes, I understand infinity shots. The problem is that the results of an infinity shot never occur in nature: the human eye does not work that way. An observant person would look at the mountains and wonder why the nearest part of the image is still in focus. It is not a matter of a person being unable to focus on the mountains and then immediately focus on the person or object near the camera; it is a matter of a person focusing on the mountains and noticing that the subject matter near the camera is still in focus. As I was trying to say in post #31, if I look through the branches of a bush very close to me at a tree very far from me, I can see that the bush's branches are out of focus. Similarly, when I focus on the bush, I can see that the far tree is out of focus. A projected image (no matter how fine the resolution is) cannot reproduce those conditions.

Not everyone would notice it, but observant people and people trained to look for it can easily see the difference between the actual view and a view created by an invisibility device projecting an image of great depth.
 
The focus effect is nothing more than the divergence of the light rays that strike either side of the lens in your eye. The rays from a close object are diverging faster than the rays from a far object. If the object is at infinity, the rays are parallel. To a limited extent, an invisibility cloak could reproduce this effect and therefore produce a proper 3D image. For this to happen, each camera/projector "pixel" would have to be significantly smaller than the lens in your eye (roughly the size of the pixels on your monitor's screen), and each of these "pixels" would be projecting the full 180º image as would be seen from that point. The scale of the image generating element required within these "pixels" is much smaller than the wavelength of light, so this cannot be built using current technologies.
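To put rough numbers on that divergence (a back-of-the-envelope sketch in Python; nothing here comes from the thread itself): the vergence of light from an object, in diopters, is just the reciprocal of its distance, and the eye accommodates by matching it.

```python
# Sketch: ray vergence (in diopters) encodes object distance.
# Vergence V = 1/d for an object d meters away; V -> 0 at infinity.

def vergence_diopters(distance_m: float) -> float:
    """Vergence of light arriving from an object at the given distance."""
    return 1.0 / distance_m

for d in [0.5, 2.0, 10.0, 100.0]:
    print(f"object at {d:6.1f} m -> vergence {vergence_diopters(d):.3f} D")

# A flat screen at, say, 2 m presents a constant 0.5 D for everything
# in the picture; a real scene presents a different vergence for each
# depth, which is what the eye's focus can detect.
```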

On the larger scale of making more distant objects (say, 100 meters away) invisible, where everything already appears to be close to or at infinity, the focusing issue is not a problem. The diffraction limit of the human eye for visible light is about 5 cm at that distance, so we wouldn't need to make the "pixels" any smaller than that, and could probably get by with much larger.
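As a sanity check on that 5 cm figure, here's the standard Rayleigh-criterion arithmetic (a sketch; the pupil diameters are assumed values):

```python
# Rayleigh criterion: theta ~ 1.22 * lambda / D for pupil diameter D.
# The resolvable feature size at range R is about R * theta.
wavelength = 550e-9   # green light, in meters
range_m = 100.0

for pupil_mm in [2.0, 4.0, 8.0]:
    theta = 1.22 * wavelength / (pupil_mm * 1e-3)   # radians
    spot_cm = range_m * theta * 100.0
    print(f"pupil {pupil_mm:.0f} mm -> ~{spot_cm:.1f} cm resolvable at 100 m")

# A 2 mm daylight pupil gives roughly 3-4 cm, the same order as the
# ~5 cm figure above, so centimeter-scale cloak "pixels" would do.
```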

I agree.


ETA:

Ladewig said:
A projected image (no matter how fine the resolution is) cannot reproduce those conditions.

Yes, Dan O., you have described the two exceptions to my assertion.
 
A projected image (no matter how fine the resolution is) cannot reproduce those conditions.
What I called "dome projectors" are not projecting "one" overall image on the surface of the invisibility cloak for your eyes to focus on. This is something you haven't grasped yet. They are neither typical front- nor rear-projection screens. The so-called "image" seen on the surface of the invisibility cloak would differ depending precisely on your viewpoint, so that it shows exactly what's behind the object as seen from that viewpoint. Hence the surface doesn't exhibit just one image projected on it at any given time. There are simultaneously thousands of different "images" viewable on that cloak at any given time, depending on where you are looking at it from.

Think of the appropriately named magic pictures when thinking about each "dome projector" on this invisibility cloak we are talking about.

The image you would see on the invisibility cloak would change as you move your viewpoint with respect to the covered object, always sending toward you exactly the light that was received on the other side of the object with respect to your viewpoint. The cloak (or the tiny projectors, for that matter) won't need to know where you are, because each "dome" is projecting the right light at all angles at all times, so as to render the covered object perfectly invisible from all angles.
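To make that view-dependence concrete, here is a toy sketch (the sphere geometry and the background function are invented purely for illustration): for a viewer looking at a given surface point along a given direction, the cloak must emit the background radiance that the continued ray would carry out the far side of the object.

```python
import numpy as np

# Toy model: the cloaked object is a sphere of radius R at the origin.
R = 1.0

def exit_point(p, d):
    """Where the ray p + t*d (t > 0) exits the sphere |x| = R,
    for a point p already on the sphere and a unit direction d."""
    b = 2.0 * np.dot(p, d)
    c = np.dot(p, p) - R * R          # ~0 because p lies on the sphere
    t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0
    return p + t * d

def background(direction):
    """Stand-in scene: radiance depends on direction only, like a
    distant environment map."""
    return 0.5 * (direction[2] + 1.0)  # brighter looking "up"

p = np.array([0.0, 0.0, -R])           # one point on the cloak surface
for d in [np.array([0.0, 0.0, 1.0]),   # viewer looking straight on
          np.array([0.0, 0.6, 0.8])]:  # viewer off to one side
    q = exit_point(p, d)
    print(f"view dir {d} -> exit {np.round(q, 2)} -> emit {background(d):.2f}")

# Two different viewing directions at the SAME surface point demand two
# different emitted values -- thousands of "images" on the cloak at once.
```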
 
What I called "dome projectors" are not projecting "one" overall image on the surface of the invisibility cloak for your eyes to focus on. This is something you haven't grasped yet. They are neither typical front- nor rear-projection screens. The so-called "image" seen on the surface of the invisibility cloak would differ depending precisely on your viewpoint, so that it shows exactly what's behind the object as seen from that viewpoint. Hence the surface doesn't exhibit just one image projected on it at any given time. There are simultaneously thousands of different "images" viewable on that cloak at any given time, depending on where you are looking at it from.

Think of the appropriately named magic pictures when thinking about each "dome projector" on this invisibility cloak we are talking about.

The image you would see on the invisibility cloak would change as you move your viewpoint with respect to the covered object, always sending toward you exactly the light that was received on the other side of the object with respect to your viewpoint. The cloak (or the tiny projectors, for that matter) won't need to know where you are, because each "dome" is projecting the right light at all angles at all times, so as to render the covered object perfectly invisible from all angles.


Again, I am willing to accept everything that you said. None of that has anything to do with my objection. In fact I'll go so far as to say that the problem I am describing would occur even if the invisibility device and the observer were both perfectly still and that the observer had one eye closed.
 
Again, I am willing to accept everything that you said. None of that has anything to do with my objection. In fact I'll go so far as to say that the problem I am describing would occur even if the invisibility device and the observer were both perfectly still and that the observer had one eye closed.

Aha, let's imagine that setting then, but let's first agree on something.

Imagine the observer is at that location, there is no invisible object blocking his view, and there are some near and far objects in the background, on which he can focus his vision appropriately. He can do this without moving his eyes, and without those objects moving. He does his focusing just with his eyes, given the rays of light coming from those objects in front of him.

Now, assuming you agree with that: we put the invisible object in between. Given that we are assuming it's a perfectly invisible object, all rays of light emanating from any point of it towards the eye of the observer are exactly the same rays of light that were hitting the observer's eye from those same points before, when the invisible object wasn't there.

If the rays of light are exactly the same, coming from exactly the same points of space and going towards the same point in space (namely the observing eye at the same position) how come focusing wasn't an issue before, and is an issue now?
 
Aha, let's imagine that setting then, but let's first agree on something.

Imagine the observer is at that location, there is no invisible object blocking his view, and there are some near and far objects in the background, on which he can focus his vision appropriately. He can do this without moving his eyes, and without those objects moving. He does his focusing just with his eyes, given the rays of light coming from those objects in front of him.

Now, assuming you agree with that: we put the invisible object in between. Given that we are assuming it's a perfectly invisible object, all rays of light emanating from any point of it towards the eye of the observer are exactly the same rays of light that were hitting the observer's eye from those same points before, when the invisible object wasn't there.

If the rays of light are exactly the same, coming from exactly the same points of space and going towards the same point in space (namely the observing eye at the same position) how come focusing wasn't an issue before, and is an issue now?

I am convinced that I cannot explain the matter clearly enough for others to understand. I will let Dan O. answer your question in post #41.
 
Why do you say so, that the image generating element is or would need to be much smaller than the wavelength of light?

If a cloaked object passed between you and your computer screen, the "pixels" of that cloak would need to be smaller than the pixels of your screen, otherwise the text on your screen would be blurred. Each one of these "pixels" in the cloak is a little projector that has to project the entire scene behind it, so every pixel on your computer screen (and more) has to be generated inside each individual "pixel" of the cloak. If you have 1024 pixels across a 0.32-meter screen, the density is 3200 pixels per meter, so the "pixels" of the cloak must be smaller than about 0.3 millimeters. The 1024+ image generating elements across each such "pixel" then get less than 300 nanometers apiece, which is the wavelength of ultraviolet light.
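Spelling that arithmetic out (the same numbers as above, in a runnable sketch; the screen dimensions are just the assumed example):

```python
# Back-of-envelope from the post: a 1024-pixel-wide screen 0.32 m across.
screen_width_m = 0.32
screen_pixels = 1024

pixel_pitch = screen_width_m / screen_pixels   # pitch of one screen pixel
print(f"screen pixel pitch: {pixel_pitch * 1e3:.2f} mm")   # ~0.31 mm

# Each cloak "pixel" must be no bigger than that, and must itself hold
# at least 1024 image generating elements across its width:
element_size = pixel_pitch / screen_pixels
print(f"element size: {element_size * 1e9:.0f} nm")        # ~305 nm

# ~305 nm sits below the ~380-750 nm band of visible light, i.e. each
# element would be smaller than the wavelengths it has to emit.
```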
 
If a cloaked object passed between you and your computer screen, the "pixels" of that cloak would need to be smaller than the pixels of your screen, otherwise the text on your screen would be blurred. Each one of these "pixels" in the cloak is a little projector that has to project the entire scene behind it, so every pixel on your computer screen (and more) has to be generated inside each individual "pixel" of the cloak. If you have 1024 pixels across a 0.32-meter screen, the density is 3200 pixels per meter, so the "pixels" of the cloak must be smaller than about 0.3 millimeters. The 1024+ image generating elements across each such "pixel" then get less than 300 nanometers apiece, which is the wavelength of ultraviolet light.

I see why you said that.

Now, I have two comments on it:

1) It has nothing to do with the focusing concern that Ladewig has. So that concern remains in a similar pending status with you, Ladewig.


2) Dan O., you are assuming the image generating element has to be pixel-based, which is limiting, of course; but we are talking about science fiction here. No need to limit our light-generating elements to specific technologies we already know of.


Fiber optics have, in theory, infinite bandwidth within each fiber; it just happens that our technology can't yet take full advantage of that.


For the dome cameras, I'm thinking of a tiny 180° wide-angle fisheye lens on top of a certain (non-existing) light-sensitive device, maybe a variety of one of those "BECs" mentioned in the link I posted earlier. Imagine, for now, one of those being exactly 300 or 400 nm in diameter; that's the smallest we could get for our dome cameras and dome projectors (in principle, given our current knowledge of things, but keep reading) to process visible light.

Now imagine one such device with its fisheye lens on top catching absolutely all visible light hitting the device from the whole hemi-space in front of the base of the dome. Now we just need the "easy" task of building a device to reverse that: a small device which could project light in all directions selectively. Your idea seems to be a pixel panel under a fisheye lens, a pixel-plane flashlight with a fisheye lens on top, sort of. But that's probably not the way to go. The projectors could be something entirely different that we don't know of yet. Let's have more fun thinking outside the box a little more.

Maybe these BECs mentioned in the earlier link not only could detect light, maybe we eventually discover we can "poll" these BECs to let us know exactly what light hit the BEC at exactly which angle. And we might eventually discover we can poll the device to give us that information simultaneously for as many angles as we want.

That would give us the input we need from each of the dome cameras. Now each of those pieces of information, the rays of light hitting a dome camera at all the different angles we want to know about, could be separated and reassembled at will. We could reassemble the information selectively to "build" the right output for the appropriate BEC (our dome projectors), send it there, and that BEC would somehow output the assembled fountain of light into the 180° space in front of it. Perfectly, just what we needed.
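A toy sketch of that reassembly step (the angular "polling" and the geometry lookup are entirely hypothetical, since the hardware is imaginary): every sample a camera captures at some entry point and angle gets routed to the projector where that ray would exit the object, to be re-emitted along the same direction.

```python
from collections import defaultdict

def route_samples(camera_samples, find_exit):
    """camera_samples: list of (entry_point, direction, radiance).
    find_exit(entry_point, direction) -> projector id on the far
    surface (a hypothetical geometry lookup). Returns per-projector
    frames mapping emission direction -> radiance."""
    frames = defaultdict(dict)
    for entry, direction, radiance in camera_samples:
        projector = find_exit(entry, direction)
        frames[projector][direction] = radiance
    return frames

# Tiny fake geometry: a slab with "front" and "back" faces; a ray
# entering the front exits the back at the same (x, y) coordinates.
def slab_exit(entry, direction):
    face, x, y = entry
    return ("back" if face == "front" else "front", x, y)

samples = [(("front", 0, 0), "+z", 0.8),
           (("front", 1, 0), "+z", 0.2)]
print(dict(route_samples(samples, slab_exit)))
# {('back', 0, 0): {'+z': 0.8}, ('back', 1, 0): {'+z': 0.2}}
```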

Once again, we are in the realm of science fiction after all, aren't we? Don't think I'm onto something :P

In fact, I'm thinking that even 300-400 nm shouldn't be a lower bound for the cameras, because arguably there could be smaller elements that, working together, could collectively sense exactly what wavelength hit them at a specific angle. Similarly, we could think of the projector elements being smaller than 300 nm, yet producing the appropriate light at the appropriate wavelengths collectively, not individually. And in that case, something like pixel/BEC-based panels could come back to our drawing table ;) But a pixel-based panel rather different from what we know of.

In conclusion, visible light's range of wavelengths shouldn't be a problem, given our completely unrestricted sci-fi scope. So pixel-based projection should NOT be the technology of choice, nor a technological limitation, for the light-gathering or light-generating elements that would let us achieve invisibility.

Ok and I am stopping my wildly running thoughts for now. :)
 
I am convinced that I cannot explain the matter clearly enough for others to understand. I will let Dan O. answer your question in post #41.

Ladewig, I am pretty certain that the focusing issue you are concerned about is actually not an issue at all, given the way things would work according to what we have described so far; this was true even before my previous post, which has little or nothing to do with the focusing aspect of optics.
 
Interesting thread, and I'm about to derail it as it reminds me of some thoughts I had regarding the ill fated "virtual reality" goggles.

Not that I ever used any, but I seem to remember the problem with the VR goggles, other than the limited resolution of the time, was that the eyes got tired focusing on such a near plane, and it made some people feel nauseous too, due to the eyes focusing on one plane while the brain saw a different image.

I wondered, rather than just project a flat image, whether it was possible to reconstruct the entire light wave front much like a hologram does? That way the eye could focus on anything it wanted and I suspect it would provide pretty amazing immersion.

I have no idea if this is possible given current technology or whether it requires some of the ideas bandied around here. Heck, it would be awesome just to see a realtime computer generated hologram!
 
I wondered, rather than just project a flat image, whether it was possible to reconstruct the entire light wave front much like a hologram does?
Aha!!! Someone mentions holograms!!

Turns out, the dome cameras we've been talking about catch exactly the same kind of information recorded at each point on the surface of a hologram. Voilà!!

In a regular photograph, each "pixel" on the photo negative captures the intensity of just one very narrow beam of light, as directed by a particular lens (or a small hole, in an old-fashioned camera obscura) in front of the film. The same thing happens in a digital camera, whether a still camera or a video camera: each pixel, just one very narrow beam of light.

In contrast, each "pixel" on the surface of a hologram records pretty much everything it "sees" from any direction. That's pretty much what our dome cameras need to do. Only, our cameras will have to do it continuously over time (plus they'll have to do some ultra-fast, tricky information processing and sharing with thousands or millions of other tiny devices as well).

Here's from Wikipedia's Hologram entry:

Each point on the hologram's surface is affected by light waves reflected from all points in the scene, rather than from just one point. It's as if, during recording, each point on the hologram's surface were an eye that could record everything it sees in any direction. After the hologram has been recorded, looking at a point in that hologram is like looking "through" one of those eyes.


We could say, in fact, that each tiny dome camera on our invisibility cloak works (functionally speaking) like an ultrafast rewritable hologram. That's just a small part of what our invisibility cloak cameras need to do, though.

And analogous to the cameras, the dome projectors behave somewhat like holograms, in this case like already-recorded holograms. Only the projectors behave like ultra-rapidly changing holograms that actually emit light! And the information "recorded" temporarily on those projectors comes not from a scene the projectors had in front of them at recording time, but from appropriately aggregating the different angular information about the light gathered by all the hologram-like tiny cameras on the surface of the object covered by the invisibility cloak.
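Functionally, then, each element is something like this purely illustrative sketch: a surface point that stores radiance per direction ("record" is the camera role, "emit" the projector role) and gets rewritten every frame with data aggregated from the cameras.

```python
class HoloElement:
    """A "rewritable hologram" element: unlike an ordinary pixel,
    which holds one value for all directions, it stores a whole
    direction -> radiance function for the current frame."""

    def __init__(self):
        self.radiance = {}

    def record(self, direction, value):
        """Camera role: store what arrives from each direction."""
        self.radiance[direction] = value

    def emit(self, direction):
        """Projector role: play back radiance toward a direction."""
        return self.radiance.get(direction, 0.0)

    def rewrite(self, new_frame):
        """The ultrafast 'rewritable' part: replace the whole
        directional pattern for the next frame."""
        self.radiance = dict(new_frame)

elem = HoloElement()
elem.record("30deg", 0.7)
elem.record("45deg", 0.1)
print(elem.emit("30deg"), elem.emit("45deg"))   # 0.7 0.1
elem.rewrite({"30deg": 0.2})                    # next frame arrives
print(elem.emit("30deg"))                       # 0.2
```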

How about that, holograms help explain the devices in this invisibility cloak quite well :) Thanks Lister!

PS. By the way Lister, the VR goggles you mention are also called HMDs (Head Mounted Displays). Look them up on Google and Wikipedia, you'll find plenty of models and info on them.
 
So is it safe to say that the Hologram factor might add a little bit more credibility to the possibility of an Invisible Device somewhere in the future?
 
So is it safe to say that the Hologram factor might add a little bit more credibility to the possibility of an Invisible Device somewhere in the future?


Well, in spite of my apparent enthusiasm while we're in sci-fi land, there are some seriously daunting challenges besides the ones already mentioned.

One problem is still optics related. Nothing about focus though.

I've stated several times that these holographic projectors (so much better to call them holographic instead of just "dome projectors") would project the appropriate light perfectly in all directions, reproducing the light that was coming from anywhere behind the object towards the eye of the observer at any viewpoint; hence the object is rendered invisible.

Even if we had the technology to recreate that tiny beam of light perfectly from one side of the object (from the cameras) to exactly the other side of the object (the exact projector in the line of sight), there is still an issue with light.

The surface of the invisibility cloak facing us would also be receiving light from our side. So light is hitting this side of the invisibility cloak too. Unless that light is perfectly absorbed by the invisibility cloak, it will be reflected to some extent, and it will pollute the beams coming from the tiny holographic projectors on the cloak. Because of that, there needs to be truly perfect light absorption by the surface of the cloak. Truly perfect light absorption! On a surface that is full of tiny devices, some of them cameras, some of them projectors. Hmm....

We won't have perfect invisibility without perfect light absorption on the cloak. Light can't always be adjusted in an additive manner to achieve some other light (or rather, some other darkness). Let me explain. If the invisible object is in front of pitch-black velvet, and you hit the object with a hugely powerful beam of light from the top, and some of that light is reflected towards us from the surface of the invisibility cloak, there's nothing we can do with the light from the projectors to darken that "other" light and make the end result truly black. That light should have been absorbed by the cloak. There's no way around it. Otherwise there's no perfect invisibility.
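The additive-light point takes only a line of arithmetic to see (a trivial sketch with made-up numbers): projectors can add light but never remove it, so any stray reflection sets a hard floor on how dark the cloak can appear.

```python
target = 0.0     # black velvet behind the object: zero radiance
glare = 100.0    # powerful beam from the top, arbitrary units

for reflectance in [0.10, 0.01, 0.0]:
    stray = reflectance * glare   # light the cloak surface reflects at us
    darkest = stray + 0.0         # best case: projectors add nothing
    print(f"reflectance {reflectance:>4}: darkest possible = {darkest:5.1f}"
          f" (target {target})")

# Only perfect absorption (reflectance exactly 0) can reach the target.
```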

But we are on sci-fi grounds, so let's assume perfect light absorption, even from a surface full of two types of devices, some of them holographic light gatherers, some of them holographic light emitters.

Now, one other problem, as mentioned before: dust. Even with perfect adjustment and compensation of light on this side of the invisibility cloak, dust will pollute any attempt at invisibility. The surface will need to repel any small particles for the device to work.

But we can assume some electrostatic type of force field that will repel any tiny particles, right?

Now, one other major problem: computational and communication complexity. Even for cloaks with small areas there would be an enormous number of devices, a huge amount of information to process, extremely high data-sharing requirements, and extremely high refresh rates required. Move something fast enough behind the object and, if the refresh rate of the network is not fast enough, the invisibility will be breached.
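To get a feel for the scale, here is a very rough data-rate estimate, with every figure an assumption chosen purely for illustration:

```python
area_m2 = 1.0             # one square meter of cloak
element_pitch = 0.3e-3    # 0.3 mm "pixels", as estimated earlier
elements = area_m2 / element_pitch ** 2
directions = 1024 * 1024  # angular samples per element
bits_per_sample = 24      # color radiance per direction
refresh_hz = 60           # a bare-minimum video refresh rate

bits_per_s = elements * directions * bits_per_sample * refresh_hz
print(f"elements: {elements:.2e}")
print(f"aggregate: {bits_per_s:.2e} bit/s (~{bits_per_s / 8e12:.0f} TB/s)")
# On these assumptions: ~1e7 elements and thousands of terabytes per
# second, before any of the "tricky information processing" is counted.
```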

But we can assume speed-of-light communication and refresh rates and holographic processing speeds, right?

Ok, these three assumptions are already huge ones. And we had made quite a few huge ones already before.


But I think some form of quasi-invisibility could be achieved some day. If not for critical stealth applications, then for entertainment or simply fashion applications.
 
You want a rewritable hologram? We already have them.

Using transparent materials that exhibit nonlinear optical properties, you can set up an interference structure within the material, using a reference laser, that simultaneously reverses the light wave within the material. The effect is that any light that enters the device can be amplified and returned to the source in perfect focus, even if it was distorted in its travels by the atmosphere or cheap optics (as long as the distortions don't change in the short interval between the arriving and departing waves). This concept of a Phase Conjugate Mirror and Time-Reversed Light has been around for nearly 25 years. http://en.wikipedia.org/wiki/Nonlinear_optics#Phase_conjugation
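Here's a minimal numerical illustration of why phase conjugation undoes distortion (a sketch of the principle only, not a model of any real device): represent the field as complex amplitudes, apply a random phase screen, conjugate, and pass back through the same screen; the phases cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean beam with flat phase, sampled at 8 points across its width.
clean = np.ones(8, dtype=complex)

# A distorting medium multiplies the field pointwise by exp(i*phi).
phi = rng.uniform(-np.pi, np.pi, size=8)
screen = np.exp(1j * phi)

distorted = clean * screen        # beam after one pass through the medium
conjugated = np.conj(distorted)   # the phase-conjugate mirror's output
returned = conjugated * screen    # same medium on the way back

# exp(-i*phi) * exp(i*phi) = 1, so the original flat wavefront returns.
print(np.allclose(returned, clean))   # True
```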
 
