
The Turin Shroud: The Image of Edessa created in c. 300-400 AD?

It reminds me of the 9/11 truthers who can identify metals from photographs. :rolleyes:

OK, so the site's into wind-up-merchant mode. So be it. One last word from me: if one has developed a hypothesis that predicts there should be left-over debris, maybe micro-particulate, then the next step is to go looking for it. No, it's not a case of seeing things that aren't there, but of locating the things that are there, aided by a handy, readily-available contrast-enhancing tool, and enquiring into their real nature. Sub-micron particles? McCrone claimed to have seen them under his microscope and said he didn't need to do chemical tests. They were birefringent, so ipso facto were iron oxide. Others have seen tiny particles and described them as mould spores, Jerusalem limestone, etc., etc.

I have a Model 10 (flour-imprinting) that accounts for the image colour, superficiality, negative image, 3D response to ImageJ, bleachability, thermal stability, water-resistance, microscopic properties, etc. I'm entitled to a fair hearing, am I not, instead of being insulted and written off here as an incompetent? My cv is not that of an incompetent, having close on 500 citations, last time I looked, for my single-author 1986 resistant starch/dietary fibre paper.

I'm clearly wasting my time here. :mad:
 
OK, so the site's into wind-up-merchant mode. So be it. One last word from me: if one has developed a hypothesis that predicts there should be left-over debris, maybe micro-particulate, then the next step is to go looking for it...

You are twiddling sliders in a cosmetic app without knowing what you are doing.

How many colours may a pixel have? Do you know?

What is happening in this area of your image? Can you explain the square "tiles"? Or the entirely blank area circled in black? Why does that region have square edges?
 

Attachments

  • fibre-400x-mag-from-porter-feb-10-2002-as-is-v-zeke.png (18.2 KB)
OK, so the site's into wind-up-merchant mode. So be it. One last word from me: if one has developed a hypothesis that predicts there should be left-over debris, maybe micro-particulate, then the next step is to go looking for it...
How about the squares circled in green? What are they from? Do you know?
 

Attachments

  • fibre-400x-mag-from-porter-feb-10-2002-as-is-v-zeke2.png (18.6 KB)
I displayed an image this morning, before vs. after Zeke. Look at the difference, especially inside the blue circles, and tell me and others in this thread what is "incompetent" about deploying that contrast tool.

Is this how you think real image analysts work? What's incompetent is not having the faintest idea how you got from "before" to "after," not especially caring, and demanding that the results have evidentiary value. You're working in a science rife with false positives and you're not even interested in controlling for them in the customary ways. That's what's incompetent. Well, that's a start for what's incompetent.

I say the tool has practical utility, giving particulate material superior definition against background. As such, your term "incompetent" is totally uncalled for.

What makes it uncalled for? Your authoritarian edict to that effect? I've asked you numerous questions designed to scrutinize your understanding of the field and your understanding and use of the algorithm. All I get in response is indignant waffling and finger-wagging to the effect that no one but you understands scientific inquiry. The message is clear: your word is simply not to be questioned. That's the antithesis of science.

For all your undoubted cleverness, you clearly do not have the first clue about the scientific modus operand...

So you keep saying. Unfortunately the best you can muster for the modus operandi in this particular science is that it must work exactly like the one other science you say you know about.

You validated the "contrast tool" empirically for your purpose? Okay, show how you empirically derived the gamma transformation function for the Zeke filter.

Zeke is to assist with plain old microscopy (>x100 mag) or even 'macroscopy' (<x100 mag), merely an extension of normal eyesight.

Or so you believe, not understanding how spectrum affects the behavior of digital image manipulation intended to work on luminance alone. Tsk, tsk. There's so much of that field you could be taught.
 
OK, so the site's into wind-up-merchant mode.

No, we're asking appropriate questions about your method. You refuse to answer, and simply demand that it must be sound because you say so. That may be what passes for science among Shroud hobbyists, but not here or in the real world.

My cv is not that of an incompetent, having close on 500 citations last time I looked...

Unless any of them are in the field of digital image processing and analysis, you have no business claiming competence in that field. Do you think so little of other scientific fields that you simply assume it takes comparatively little effort to become expert in them? Or do you subscribe to the misguided notion that if you've learned one science you've learned them all? As I said, I've read a bit of the attempts at serious science regarding the Shroud, and they all seem to lack rigor in the image processing part. You have the opportunity to rise above that, but you don't seem to have the interest.

The facts in evidence point to the conclusion that you cannot demonstrate necessary proficiency in the field of digital image processing and analysis, which you're using to develop your theory. You've been charitably given several chances to provide that demonstration, but you simply bristle at being questioned at all.

I'm clearly wasting my time here. :mad:

You're clearly wasting your time, full stop. Do you think the digital image processing and analysis fields -- professional or academic -- will go easier on you than we have? Maybe you're accustomed to adulation and praise in the little walled garden of Shroud devotees, but if that's what you are seeking here then I'm afraid you'll continue to be disappointed until you're willing to humbly submit to a serious examination of your claims.
 
OK, so the site's into wind-up-merchant mode. So be it. One last word from me: if one has developed a hypothesis that predicts there should be left-over debris, maybe micro-particulate, then the next step is to go looking for it. No, it's not a case of seeing things that aren't there, but of locating the things that are there, aided by a handy readily-available contrast-enhancing tool, and enquiring into their real nature.


You have shown no basis for your assertion that your image analysis discloses the presence of particulate matter. What is the size range of these purported particles, and how does that compare with the resolution of the images? I'll bet you don't know.


Sub-micron particles? McCrone claimed to have seen them under his microscope...


He did see them. There are photomicrographs of them right there in the paper you can't be bothered to read. You know, the paper I offered to send you.


...and said he didn't need to do chemical tests. They were birefringent, so ipso facto were iron oxide.


Please provide a citation for this. (Hint: you can't). If you'd read that paper, or my earlier post, you'd know he performed SEM-EDX elemental analysis, X-ray diffraction, refractive index and microchemical tests on numerous particles.

Others have seen tiny particles and described them as mould spores, Jerusalem limestone, etc., etc.


So what? McCrone might have seen such things too. But the particles he analyzed were present in the tape lift samples from the image areas of the shroud and absent in samples from the non-image areas. Making those particles--wait for it--the image chromophores.

I have a Model 10 (flour-imprinting) that accounts for the image colour, superficiality, negative image, 3D response to ImageJ, bleachability, thermal stability, water-resistance, microscopic properties, etc. I'm entitled to a fair hearing, am I not...

I'm clearly wasting my time here. :mad:

Yup, this probably isn't the best place to seek uncritical adulation.
 
You have shown no basis for your assertion that your image analysis discloses the presence of particulate matter. What is the size range of these purported particles, and how does that compare with the resolution of the images? I'll bet you don't know.
Yay. Scale. Yet another of the basics that gets tossed right out.

He did see them. There are photomicrographs of them right there in the paper you can't be bothered to read. You know, the paper I offered to send you.
Wait? You think he cares? In any way?

Please provide a citation for this. (Hint: you can't).
Not happening.

If you'd read that paper, or my earlier post, you'd know he performed SEM-EDX elemental analysis, X-ray diffraction, refractive index and microchemical tests on numerous particles.
Not happening either. Borking about with sliders is so much more fulfilling.

So what? McCrone might have seen such things too. But the particles he analyzed were present in the tape lift samples from the image areas of the shroud and absent in samples from the non-image areas. Making those particles--wait for it--the image chromophores.
Gosh. What if you are right? Would it not be far easier to ignore you? Why yes it would.

Apparently, anyone with experience must be dismissed. And we are not talking professionals here. Anyone who ever had the remotest experience must perforce be wrong. Photoshop is wrong, Gimp is wrong, MS Paint is wrong. People who do this for a living? They are definitely wrong.

One could go along with this to some extent, and even argue it, but the argument is that Zeke is a forensic tool. That claim is beyond ******** up.

Yup, this probably isn't the best place to seek uncritical adulation.
Dunno. Zeke is supposed to be the ne plus ultra of forensic diagnostics. I await with bated breath the next Microsoft app. I assume it will diagnose all of our former lives.

ETA: Sorry Ferd. There was a hat load of sarcasm in there. Not for you, I was just a bit annoyed.
 
Yay. Scale. Yet another of the basics that gets tossed right out.


With respect to yourself and the other image analysis pros here, I think if the resolution isn't sufficient the rest of the image analysis questions are moot, at least in the context of particles.

Wait? You think he cares? In any way?


I think ignoring the paper is essential if he's to preserve his belief in his experiments.

Not happening.

Not happening either. Borking about with sliders is so much more fulfilling.

Gosh. What if you are right? Would it not be far easier to ignore you? Why yes it would.


I have criticisms of the paper, though they are in no way fatal to McCrone's conclusions. Neither is chanting "diimide" with no reference.

Apparently, anyone with experience must be dismissed. And we are not talking professionals here. Anyone who ever had the remotest experience must perforce be wrong. Photoshop is wrong, Gimp is wrong, MS Paint is wrong. People who do this for a living? They are definitely wrong.


I earned my living for a while using a microscope and microanalytical methods to solve design and manufacturing problems. And, full disclosure, part of my training was at the McCrone Institute in Chicago. I don't call myself an expert but it can be difficult to remain civil when someone apparently ignorant of the subject pontificates.


ETA: Sorry Ferd. There was a hat load of sarcasm in there. Not for you, I was just a bit annoyed.


Not at all; the direction of your comments was clear.
 
So if folk here pushing for a full explanation of an image contrast tool at the pixel level were to find they were getting a bit short- or long-sighted, what do they do? Go to an optician, wait while a suitable bit of shaped glass corrects the problem, then harangue the optician until given a full explanation at the micro-anatomic level of precisely what's happening to rays of light at the cornea, the vitreous humour, the retina, the optic nerve, the visual cortex? Oh come now. Get real.

What matters is being able to walk out of that optician's with the prospect of sharper vision - being able to see things more clearly, not worrying about 'artefacts' that might accompany sharper vision when there are no scientific or even empirical grounds for invoking them.

Precisely the same principle applies to a fuzzy centuries-old image, one where the nebulous quality is almost certainly a result of a unique method of manufacture with no known precedents - namely powder imprinting if my Model 10 is correct as I SUSPECT it to be (no one's being force-fed what is still a hypothesis).

The Zeke filter/contrast tool is just that - a tool - in the same way that a machine tool that shapes glass for spectacle lenses is just a tool.

Would the IT and photoediting narcissists here kindly get real, and start addressing the given problem - what happened to create the Shroud image. Kindly stop substituting your self-centred barrage of irrelevant questions for the real one.
 
So if folk here pushing for a full explanation of an image contrast tool at the pixel level were to find they were getting a bit short- or long-sighted, what do they do? Go to an optician, wait while a suitable bit of shaped glass corrects the problem, then harangue the optician until given a full explanation at the micro-anatomic level of precisely what's happening to rays of light at the cornea, the vitreous humour, the retina, the optic nerve, the visual cortex? Oh come now. Get real.

What matters is being able to walk out of that optician's with the prospect of sharper vision - being able to see things more clearly, not worrying about 'artefacts' that might accompany sharper vision when there are no scientific or even empirical grounds for invoking them.


A useful analogy, if and when you understand that you're not the customer, you're the optician. You're the guy who needs to have sufficient understanding of your tools and the science to be sure the product you deliver does what it's supposed to.

Precisely the same principle applies to a fuzzy centuries-old image, one where the nebulous quality is almost certainly a result of a unique method of manufacture with no known precedents - namely powder imprinting if my Model 10 is correct as I SUSPECT it to be (no one's being force-fed what is still a hypothesis).

The Zeke filter/contrast tool is just that - a tool - in the same way that a machine tool that shapes glass for spectacle lenses is just a tool.

Would the IT and photoediting narcissists here kindly get real, and start addressing the given problem - what happened to create the Shroud image. Kindly stop substituting your self-centred barrage of irrelevant questions for the real one.


Despite your claim of objectivity, I think you're so wed to your idea you're going to continue to reject or dodge all critical questions. But maybe if you tell us what you think the real question is we can find a way forward.
 
So if folk here pushing for a full explanation of an image contrast tool at the pixel level were to find they were getting a bit short- or long-sighted, what do they do? Go to an optician, wait while a suitable bit of shaped glass corrects the problem... Oh come now. Get real.

What would you think if the optician examined your eyes with a pick-axe and prescribed you a nice pair of lederhosen?
 
Yesterday I posted a before and after image showing how the 'Zeke' filter in Windows 10 acts as a contrast tool, that it merely accentuates micro-particulate matter that is faintly visible pre-Zeke, and that it does not create any obvious new artefacts. As I say, a contrast tool, and a handy one too for Shroud studies, being well-adapted to the range of hues on the Shroud body image.

So why the raucous laughter, or the internet forum equivalent thereof? Why the assumption that I am an innocent abroad in matters to do with photoediting, notably the mode of action of contrast controls? How many people here have been following my postings - currently in excess of 400 - these last 5 years and more?

As indicated earlier, I had a brief conversation in August 2015 with Mario Latendresse PhD, creator of the Shroud Scope tool. Here's the relevant part of that posting in full:



Posting: Aug 28, 2015

Update: 1 month on (30 September 2015):

Why does the allegedly straw-coloured TS image look so mauve and low-contrast in Shroud Scope, with scarcely any difference between body image and blood? I have emailed Mario Latendresse, and put it to him that the “Durante 2002” image supplied to him may have been photoedited to decrease contrast – with tell-tale reduction in the percentage of red/green (corresponding to yellow in additive, non-pigment colour mixing) combined with an increase in the percentage of blue. This hard-to-fathom shift in colour balance is readily modelled when one takes, say, other TS images, notably those labelled as “Durante” and/or “post 2002 conservation” and intentionally decreases contrast to obtain that somewhat unsatisfactory default “Shroud Scope” look that cries out for extra contrast. * (See below for a copy of my latest email to Mario, October 4th 2015, setting out my case in detail for giving his Shroud Scope post 2002 conservation Durante-derived image some additional contrast).

Update: October 4th. Here’s a copy of an email sent earlier today to Mario Latendresse (see above):

Hello again Mario. Sorry to be so long in getting back. I’ve been busy looking at precisely what happens when one takes a typical ‘sepia-toned’ photograph of the TS, or rather its central zone, avoiding those 1532 burned regions, reducing contrast in my MS Office Picture Manager by degrees until it looks more like your Shroud Scope, and recording the RGB total and composition at each stage, using ImageJ (provided in the 3D Plugins/Analyze menu). Pure white is of course a max of 255,255,255 and pure black 0,0,0, giving totals of 765 and 0 respectively if one were working in grayscale (which of course we’re not, choosing to stay with colours, artefacts an’ all, with a view to identifying precisely how the artefacts arise).

See attached my diagram in paint for the two simultaneous changes that take place when one lowers the contrast (though I doubt that any of this will be new to you, while for me it’s been an interesting pattern-finding exercise).

[Attached image: pie-chart.png]

Caption:

Highly schematic representation of what decreasing contrast does in terms of total RGB value (max 255,255,255) and % composition. Reducing contrast reduces total (R+G+B) and produces a strictly unit-for-unit shift from (R+G) to blue, which is equivalent to yellow to blue in additive colour mixing.

(Note: the above colours are not pure RGB 255 values, needless to say, being straight from the MS Paint palette).

First, the total (R+G+B) reduces when one moves the Contrast control from the mid-range zero down through negative values towards -100. Distinct blueing of any white areas in the image becomes apparent when one goes past -30 (I deliberately introduced a small solid white circle to my TS image in order to monitor that effect, though not enough to affect the average RGB except marginally).

The reduction in total RGB was related to the contrast setting by the formula:

% reduction = |contrast setting| × 0.3 (accurate to within 1% or less).

So on a Contrast setting of -40, there was a 12% reduction in total (R+G+B); on -60 it was 18%, etc.

As for the shift in colour balance, that too proved easy to quantify, simply by eye-balling the numbers and spotting the pattern. With progressive decrease in contrast, the sum of (R+G) as a percentage of total (R+G+B) became smaller, while B became larger. In fact, the two were simply related: if (R+G) started at x% of (R+G+B) and fell to y% of the new total, i.e. an absolute change of (x−y) percentage points, then the B component's share increased by the same (x−y) percentage points.

No doubt there are sound theoretical reasons for these patterns that you will understand, and possibly provide a link, but as I say, I’ve been content to see it simply as a hands-on pattern-finding exercise.

Conclusion: at least when using my photoediting software to decrease contrast, it's impossible to avoid (a) a reduction in total (R+G+B) and (b) a shift away from (R+G), i.e. yellow, towards blue.

That does not seem unreasonable, even if unavoidable. When increasing contrast in gray scale, the aim is to make the brighter regions brighter still, i.e. shift from gray to white, and the darker regions shift from gray towards black. So the opposite applies to a reduction in contrast – making the darker gray regions a lighter gray, making the paler gray regions a darker gray, thus reducing contrast. To achieve a similar effect with coloured images, yellow stands in for white (requiring a mix of R + G) while blue substitutes for black.

The test for the soundness of that conclusion from this novice is to predict what happens when one increases contrast in a TS image. One expects it to become progressively brighter, more yellow, less blue, and that is indeed what one sees.

Take home message: looking at your default Shroud Scope image alongside the more straw/sepia-toned Durante images available elsewhere online, I strongly suspect that the image you were given was originally sepia-toned too, and that the supplier deliberately reduced the contrast. Would you agree or disagree with that conclusion, Mario?

If you agree, would it not be better to increase the contrast as default, rather than merely supply an option for increasing or decreasing contrast? Granted, there might be a problem in deciding what was the “correct” default value. Maybe there isn’t one, at least scientifically, given that arbitrary decisions need to be made when converting a faint 4m x 1m image to a compact one on a computer screen. However, I maintain that it's reasonable to expect better differentiation between blood and body image than one sees on Shroud Scope, where most of the differences one sees in the other images have been largely lost (a shame in my view, making for a less interesting image).
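
For anyone who wants to reproduce the pattern-finding exercise described in the email above, here is a minimal sketch in Python. It assumes a simple linear contrast operator pivoting about mid-grey (the actual MS Office Picture Manager algorithm is unpublished, so the exact coefficients will differ), but it reproduces the qualitative pattern reported: lowering contrast reduces total (R+G+B) for a sepia-toned pixel and shifts the percentage balance from (R+G) towards blue.

Code:
def adjust_contrast(rgb, setting):
    """Hypothetical linear contrast operator: setting in [-100, 100],
    0 = no change; every channel pivots about mid-grey (128)."""
    k = 1.0 + setting / 100.0
    return tuple(max(0.0, min(255.0, 128 + k * (c - 128))) for c in rgb)

def shares(rgb):
    total = sum(rgb)
    rg = 100.0 * (rgb[0] + rgb[1]) / total
    return total, rg, 100.0 - rg

sepia = (200, 170, 120)  # an illustrative straw/sepia pixel
for setting in (0, -30, -60):
    total, rg, b = shares(adjust_contrast(sepia, setting))
    print("contrast %+4d: total RGB = %5.1f, (R+G) = %4.1f%%, B = %4.1f%%"
          % (setting, total, rg, b))

On this toy model the total falls and the blue share rises as contrast is lowered, just as the email describes; the exact 0.3 coefficient, though, depends on the starting pixel values and on whatever proprietary curve the real software applies.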

(Update: helpful and constructive reply received to my email – 9th October – see below)

Email reply just in (still October 8th) from Mario L – see earlier re the contrast level in Shroud Scope (my italics):

Hi Colin,

You have seriously taken on this project.

The Durante photo can certainly be modified to enhance contrast on a computer screen.

My only little quibble is on your use of “deliberately reduced the contrast” as if some negative intention were implied, but I might be reading too much in that statement. It was probably deliberate but for a specific reason, such as to increase visual acuity when printed, but I have no idea if this is the case.

A probable project would be to enhance the contrast of various areas of the photo with different parameters. Enhancing contrast over the whole photo is not ideal for viewing, because in doing so, the visual acuity is reduced in some areas.

By the way, I prefer to use raw tools such as “convert” from ImageMagick, because you can explicitly and systematically control the transformations. You can also automate the process used and document it so that others can easily reproduce it, even by using other software based on such transformations. I have never used ImageJ.

I will wait until I get the new implementation of Shroud Scope to reconsider what to do about the Durante photos. I might change its default contrast.

Best,

Mario

(end email)

I shall now take a break from this site. The attempts to debase arguments via reductio ad absurdum and to browbeat perceived opponents into submission are frankly tedious and time-wasting. As someone who's reached his 70s, I have better things to do with my remaining time than squander it here.
 
Yesterday I posted a before and after image showing how the 'Zeke' filter in Windows 10 acts as a contrast tool

The Zeke filter is a cosmetic filter, part of a package of filters designed to mimic the popular functions of image-sharing social networks such as Snapchat and Instagram. There are probably a hundred or more such filters available for free in these sorts of programs, and all of them will transform the overall image luminance, either locally (e.g., by deconvolution) or globally. Pointing out that it affects relative luminance does not make it a "contrast tool."

that it merely accentuates micro-particulate matter that is faintly visible pre-Zeke, that it does not create any obvious new artefacts.

You haven't provided any evidence to show that such a filter reliably alters luminance such as to enable detection of particles in photographs. You have shown no evidence of testing to determine the propensity of the algorithm to create false positive results. You have steadfastly resisted questions aimed at rather obvious artifacts. From this we can conclude that you have neither applied suitable empirical controls nor intend to.

As I say, a contrast tool, and a handy one too for Shroud studies, being well-adapted to the range of hues on the Shroud body image.

You provide no rational basis for this statement. It is more likely you chose it because the product that provides it was convenient and inexpensive to obtain. And from among all the easily-obtainable algorithms, this one produced results that were the most informally consistent with the conclusion you desired to draw -- that additional evidence supports your baked-flour theory for the Shroud. Your correspondent even recommends a more general tool which, unlike Zeke, at least has a discoverable algorithm.

So why the raucous laughter, or the internet forum equivalent thereof?

Because you seem quite unaware of how transparently you're avoiding any semblance of real science and trying to establish your claim on bluster alone.

That does not seem unreasonable, even if unavoidable.

There are techniques in image processing for working solely in a luminance channel, even if the original has been first encoded in RGB. But here's a hint: it's not achieved by using toy image manipulation programs and just wiggling the sliders. Sadly, among amateur Shroud researchers, I have seen so very much of this ignorant waffling about, trying to reinvent wheels that were invented long ago -- and generally failing to get them even close to round. If you intend to pursue this as serious science, please consult someone who's qualified rather than trying to reinvent it yourself.

I shall now take a break from this site. The attempts to debase arguments via reductio ad absurdum and to browbeat perceived opponents into submission are frankly tedious and time-wasting.

Look around you. You're the one doing the browbeating. People are trying to ask you serious and pertinent questions about your work, and you simply want none of it. You are the one trying to establish the strength of your findings on nothing more than allusions to some illustrious work you say you did in unrelated sciences, and constant insinuations that no one but you can possibly understand how to do rigorous science.

This site will not give you undeserved credit. It will not give you unearned adulation. If that is what you seek, then retreat back into the walled garden because that is the only way you'll get it.
 
So if folk here pushing for a full explanation of an image contrast tool at the pixel level were to find they were getting a bit short- or long-sighted, what do they do? Go to an optician, wait while a suitable bit of shaped glass corrects the problem, then harangue the optician until given a full explanation at the micro-anatomic level of precisely what's happening...

The optician is required to demonstrate competence in his profession before offering his services to the public. That allows him to expect a degree of trust from his patients. You are unwilling or unable to demonstrate competence in the field of digital image analysis, or to submit to a test of proficiency in the tools and techniques. Therefore you have no right to demand implicit trust from your readers. When you are willing to meet the same standards as your hypothetical optician, you can demand the same trust.

Further, there are objective measurements of the optician's success. Standard tests of visual acuity can determine, in an objective and repeatable manner, that the practice of his art and science has resulted in a measurable improvement in the patient's vision. You are unable to provide any such validated control. While you purport to have controls, you do not demonstrate how your ad hoc controls actually make the method reliable. You cannot tie them to the behavior of the underlying model, so they are not objective.

What matters is being able to walk out that opticians with the prospect of sharper vision - being able to see things more clearly...

Which, ultimately, is within the ken of a lay observer. You're trying to paper over the problems in the method you've developed by sidestepping the parts of it that would not be within the ken of the lay observer.

not worrying about 'artefacts' that might accompany sharper vision when there are no scientific or even empirical grounds for invoking them.

If the treatment provided by an optician causes you to see things that are not there, then that would not be a positive indicator of his success. And even in the case where such artifacts were unavoidable, would the optician rationally suggest to his patient that those artifacts are indeed real objects? Or would he carefully counsel the patient that certain well-understood artifacts are an unfortunate side-effect that he should train himself to ignore?

Precisely the same principle applies to a fuzzy centuries-old image...

No, it does not when you are comparing yourself to the optician in this analogy, and when you wrongly suggest that the results require only lay interpretation. It's obvious what you're trying to do, because so much pseudo-science before you has done it. You're trying to deny the need for expertise you don't have, not because it's objectively unnecessary but because you objectively don't have it. You want to be respected as a scientist without having bothered to learn the science. It doesn't work that way.

The Zeke filter/contrast tool is just that - a tool - in the same way that a machine tool that shapes glass for spectacle lenses is just a tool.

No, in a wholly different way. The process of assessing optical aberration in the human eye is a well-developed science, practiced by people who must demonstrate proficiency in the relevant arts and sciences before they are allowed to ask people to rely upon their conclusions. The production of corrective lenses is also a well-understood, well-developed process undertaken daily using well-tested, well-standardized, and generally unremarkable techniques. Each optician does not develop his own lens-grinding technique using ill-suited tools and then browbeat the patient who does not accept his untested results as nothing short of brilliant.

Your attempt to style a toy filter as a precision scientific tool is akin to suggesting that an optician grind lenses using a carpenter's wood rasp. Or maybe walking into an optician's office to see him "fixing" his keratotomy apparatus with a sledgehammer.

Would the IT and photoediting narcissists here kindly get real, and start addressing the given problem - what happened to create the Shroud image. Kindly stop substituting your self-centred barrage of irrelevant questions for the real one.

It's clear you have little if any respect for practitioners of sciences that aren't your own. Too bad. True and useful knowledge foreign to you exists, and other people will inevitably become better masters of it. Calling them names will not make your subject-matter ignorance go away. If your ego is bruised by other people having useful knowledge that you do not possess, then you are ill-equipped to practice science outside a walled garden of chosen admirers.

Of course we are interested in what happened to create the Shroud image. It's been a repeated topic of conversation for several years here, and several centuries in the world at large. However, this forum attracts parties that are also interested in rational skepticism, which means a methodical, unflinching examination of claims according to the relevant evidence. The "real" question of how the Shroud image formed, when answered with some particular candidate hypothesis, will inexorably generate dozens of subordinate questions designed to test how well that answer fits. Skeptics properly reject answers that don't fit.

And questions to test fit are most certainly not irrelevant. You want to argue that the fit of your hypothesis is helped by manipulating digital images of the Shroud in a way that, according to you, reveals the presence of particulates your hypothesis predicts would be there. No matter how much you want those uncomfortable questions to go away, the rigor with which you performed that manipulation is entirely relevant to how well you can argue your hypothesis fits.
 
With respect to yourself and the other image analysis pros here, I think if the resolution isn't sufficient the rest of the image analysis questions are moot, at least in the context of particles.

It's clear from the before-after image pair that the Zeke filter, among other things, attempts a deconvolution. Support-constrained deconvolution is a helpful technique in some cases for recovering an original signal that has been spatially dispersed, if we are able to know or bound certain quantities pertaining to the dispersion. Primitive implementations of simple deconvolution kernels appear in most packages similar to the one in which we find Zeke. For example, in Instagram both the Sharpen and Structure filters employ a simple deconvolution kernel with only one variable parameter. These are not intended as data recovery mechanisms, but simply as cosmetic enhancements. Their customary use in those contexts -- and in the Zeke context -- is to improve the visual appearance of blurred images by steepening certain transitions in the signal. In both cases the presumption is that the signal we see is a convolution of the original signal and a blurring function. But when neither the blurring function nor the original signal is known, an attempt at deconvolution cannot be guaranteed to reproduce the original signal, nor to avoid generating signals that were not previously present. For cosmetic image enhancement it generally does not matter whether artifacts are introduced.
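
To make that concrete, here is a minimal 1-D sketch (illustrative only; Zeke's actual algorithm is undisclosed) of the generic one-parameter cosmetic sharpen: output = input + amount × Laplacian. Note the values it manufactures on either side of the edge, darker and brighter than anything in the input.

Code:
import numpy as np

def sharpen(row, amount):
    """Generic one-parameter cosmetic sharpen: add a scaled discrete
    Laplacian. Steepens transitions; recovers no lost data."""
    padded = np.pad(row, 1, mode="edge")
    laplacian = 2 * padded[1:-1] - padded[:-2] - padded[2:]
    return np.clip(row + amount * laplacian, 0, 255)

blurred_edge = np.array([50, 50, 50, 80, 130, 170, 200, 200, 200], float)
print(sharpen(blurred_edge, 1.0))
# -> [ 50.  50.  20.  60. 140. 180. 230. 200. 200.]
# The 20 and 230 are overshoots -- tones that never existed in the input,
# introduced purely by the filter to make the edge "pop".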

For signal recovery purposes -- revealing detail that may be present but difficult to see -- it is vital to have full control not only over the deconvolution kernel but also over the basis of support, which has three degrees of freedom in this case. Not even Adobe Photoshop provides this level of control over its deconvolution constraints. We must generally turn to specially built image analysis tools to achieve a professional degree of control. The reason this control is important is that the only way to avoid producing deconvolution artifacts is to properly constrain, shape, and orient the support.

Spatial resolution -- limited on one hand by the sensor pitch and on the other by optical dispersion and scatter (including those caused by contamination of the optical assembly) -- affects the degree to which we can bound effective supports for deconvolution. Commonly we rely upon mathematical lens models to estimate the degree of optical dispersion, and technical knowledge of the sensor to determine the effects of spatial quantization. But the dispersion can sometimes be usefully bounded and parameterized based on information gleaned from the photograph itself.

One of the big problems we run into with spatial resolution appears after the transformation and decomposition to frequency space. If the quantization deprives us of a good picture of the signal, we may be unsure which frequency bands the primary signal forms lie in. This leads to errors in amplification. Normally band-pass filters must be applied to eliminate apparent high-frequency variation that arises anomalously from insufficient sampling and quantization.

This is made more problematic by the relative cleanliness of the signal. These techniques are most successful in contexts such as astronomy, where the signal has little noise. A star seen against a dark background provides a robust basis for deriving the necessary deconvolution parameters. The Shroud photos, in contrast, are noisy -- not just in sampling noise, but in the fact that the desired "details" are alleged to sit against the cloth background, which is itself noise for the purposes of identifying material deposited on the surface. Transformation into frequency space then provides ambiguous and inconclusive clustering of principal frequencies.

A particular artifact called "ringing," which commonly occurs in blunt attempts at deconvolution, appears more likely to be the cause of meccanoman's alleged particles. If the deconvolution support is too broad and/or improperly oriented, transformed sideband signals appear which, when applied close together, themselves convolve to form periodic peaks or troughs in the signal. These peaks can then interact with background noise -- above which the desired signal has not risen sufficiently -- to create "details" that aren't really there. Naturally when professionals do this, there are mathematical techniques to detect such behavior. It is not always apparent optically, but because of how deconvolution works it can be detected in the intermediate stages of the process. The error is usually that the support is too wide along one degree of freedom and must be reduced to eliminate disruptive side bands.

There is no evidence that meccanoman appropriately formulated his deconvolution kernel or parameterized its support according to appropriate methods. In fact, the controls of the Zeke tool simply do not allow such a thing. He is left with a canned answer which has apparently displayed expected artifacts in a known pattern. His "lines of dots" are connected by subtle transverse signal peaks. This is a classic "ringing" pattern as it forms around a linear (but not perfectly straight) feature when deconvolution breaks down. (The term "ringing" refers to the artifact around a single kernel. When the kernel is applied continuously to a linear feature, the rings convolve in a way that creates lines spaced by a certain period. When it is applied to an irregular contour, patterns emerge that may appear as splotches or dots.)
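
A hedged 1-D sketch of that failure mode (illustrative only -- this is not Zeke's pipeline): blur a narrow feature, then deconvolve it with a deliberately over-wide blur estimate, i.e. the too-broad support described above. The restored profile oscillates above and below the baseline around the feature; in 2-D, such sidelobes around an irregular linear contour can read as rows of dots.

Code:
import numpy as np

n = 64
signal = np.zeros(n)
signal[30:34] = 100.0            # a narrow "feature", e.g. a fibre

def gaussian(width):
    """Unit-sum Gaussian blur kernel, centred at index 0 for circular FFT."""
    x = np.arange(n) - n // 2
    g = np.exp(-0.5 * (x / width) ** 2)
    return np.fft.ifftshift(g / g.sum())

# Blur with the "true" width, then add a little sensor noise:
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(gaussian(2.0))))
blurred += np.random.default_rng(0).normal(0.0, 0.3, n)

# Deconvolve with a MISMATCHED estimate (4.0 instead of 2.0), lightly
# regularised so the division stays finite:
H = np.fft.fft(gaussian(4.0))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + 1e-3)))

print(np.round(restored[22:42], 1))
# The profile swings negative and positive on both sides of the feature:
# periodic sidelobes ("rings") manufactured by the bad deconvolution.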

Moving on, what meccanoman naively alludes to as a "contrast tool" would be better described as a gamma transformation kernel. And it appears Zeke is attempting this as well, although it's difficult to determine without proper controls over the algorithm parameters. Gamma transformation assumes the need to correct the entire luminance signal without regard to spatial distribution. This is why gamma transformation generally happens only in luminance space (and why I asked meccanoman whether he was in signal/image space or wavelength/luminance space). For photographic purposes it happens in a luminance space derived algebraically from luminance signals in three discrete bands of visible wavelengths. For scientific purposes, we sample many more bands, such that the algebraic derivation has fewer degrees of freedom. In each case it's bounded by the quantization of absorption response -- the brightness resolution, not the spatial resolution.

We understand from physiology and photometry that relative (perceived) luminance for this purpose is actually a function of wavelength. Scientific imaging provides enough luminance samples across enough discrete wavelength bands to accurately represent a luminance function for proper image analysis. Photographic imaging uses only three wavelength bands and maintains enough luminance fidelity to preserve an image that's acceptable for human vision and may support limited image analysis. But for tools such as Zeke it's usually a priority to produce approximate results quickly rather than correct results rigorously, so often a "flat" or otherwise naive method is used to derive a single approximation of luminance irrespective of wavelength. Meccanoman has noted that attempting what he thinks is a gamma transformation has improper side effects in the chrominance channels. To a real image processing analyst, this is a red flag not to trust the transformed luminance channel.
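
A small sketch of why that matters, assuming Rec. 709 luma weights (what weighting, if any, a toy filter actually uses is unknown): applying a gamma curve to each RGB channel independently shifts the colour balance of a sepia pixel -- exactly the chrominance side effect noted -- whereas applying it to a properly derived luminance channel and rescaling the channels preserves the balance.

Code:
def luma(rgb):
    """Relative luminance, Rec. 709 weights."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def gamma_per_channel(rgb, gamma):
    """Naive: bend each channel separately -- hue balance shifts."""
    return tuple(255.0 * (c / 255.0) ** gamma for c in rgb)

def gamma_on_luminance(rgb, gamma):
    """Bend the luminance only, then rescale RGB -- balance preserved."""
    y = luma(rgb)
    scale = (255.0 * (y / 255.0) ** gamma) / y if y else 0.0
    return tuple(min(255.0, c * scale) for c in rgb)

sepia = (200.0, 170.0, 120.0)    # an illustrative straw-coloured pixel
for fn in (gamma_per_channel, gamma_on_luminance):
    out = fn(sepia, 1.8)
    print("%-18s RGB = %s, B share = %.1f%%"
          % (fn.__name__, tuple(round(c) for c in out),
             100.0 * out[2] / sum(out)))
# gamma_per_channel drops the blue share from 24.5% to about 18.6%;
# gamma_on_luminance leaves it at 24.5%.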

There is a direct effect on the overall image in that relative luminance may be improperly transformed and thus create visually significant illusions. There is a combined effect between this and the aforementioned convolution in that when used together the effect is generally for gamma transformation to amplify noise which then rises to frequency-space significance and is improperly interpreted as a salient portion of the original signal. A variant of this technique is, in fact, how some kinds of noise are introduced into an image for aesthetic effect.
 
It's clear from the before-after image pair that the Zeke filter, among other things, attempts a deconvolution. Support-constrained deconvolution is a helpful technique in some cases for recovering an original signal that has been spatially dispersed...

Impressive, mind-blowingly so. But if it's all the same to you, I'll stick with the evidence of my own eyes (visual scrutiny, backed up with that thing called life experience and, er, what's it called now, that additional highly-evolved accessory endowing human intelligence and rational judgment as distinct from the output from a passive mindless electronic scanner that requires the scientist to unscramble the unscrambleable). Ah, yes, I've suddenly remembered. It's the mind, the human mind. Delivers instant snap judgements, to be sure. But it keeps the show on the road, instead of careering off into a ditch, correction, semantic swamp.
 
Impressive, mind-blowingly so. But if it's all the same to you, I'll stick with the evidence of my own eyes...[blah blah gobbledy-gook blah blah dismissive twaddle]

It's nothing to me. You'll be the one making a fool of yourself, especially when your would-be fans find out you were given a thorough analysis of your method and could offer up only a folksy brush-off in response.
 
It's clear from the before-after image pair that the Zeke filter, among other things, attempts a deconvolution. Support-constrained deconvolution is a helpful technique in some cases for recovering an original signal that has been spatially dispersed, if we are able to know or bound certain quantities pertaining to the dispersion. Primitive implementations of simple deconvolution kernels appear in most packages similar to the one in which we find Zeke. For example in Instagram, both the Sharpen and Structure filters employ a simple deconvolution kernel with only one variable parameter. These are not intended as data recovery mechanisms, but simply as cosmetic enhancements. Their customary use in those contexts -- and in the Zeke context -- is to improve the visual appearance of blurred images by increasing the frequency with which certain transitions in the signal occur. In both cases the presumption is that the signal we see is a convolution of the original signal and a blurring function. But knowing neither the blurring function nor the original signal, not every attempt at deconvolution can produce the original signal or avoid generating signals that were not previously present. It generally does not matter for cosmetic image enhancement purposes whether artifacts are introduced.

For signal recovery purposes -- revealing detail that may be present but difficult to see -- it is vital to have full control not only over the deconvolution kernel but also of the basis of support, which has three degrees of freedom in this case. Not even Adobe Photoshop provides this level of control in their deconvolution constraints. We must generally turn to specially built image analysis tools to achieve a professional degree of control. The reason this control is important is that the only way to avoid producing deconvolution artifacts is to properly constrain, shape, and orient the support.

Spatial resolution as affected in one direction by the sensor pitch and in the other direction by optical dispersion and scatter (including those caused by contamination of the optical assembly) affects the degree to which we can bound effective supports for deconvolution. Commonly we rely upon mathematical lens models to estimate the degree of optical dispersion, and technical knowledge of the sensor to determine the effects of spatial quantization. But it can sometimes be usefully bounded and parameterized based on information gleaned from the photograph itself.

One of the big problems we come up with in spatial resolution is after the transformation and decomposition to frequency space. If the quantization deprives us of a good picture of the signal, we may be unsure in which frequency bands the primary signal forms lie in. This leads to errors in amplification. Normally band-pass filters must be applied to eliminate apparent high-frequency variation that arises anomalously from insufficient sampling and quantization.

This is made more problematic by how noisy the signal is. These techniques are most successful in contexts such as astronomy, where the signal has little noise. A star seen against a dark background provides a robust basis for deriving the necessary deconvolution parameters. In contrast, the Shroud photos are noisy -- not just in sampling noise, but also in the fact that the desired "details" are alleged to sit against the cloth background, which is noise for the purposes of identifying material deposited on the surface. Transformation into frequency space then provides ambiguous and inconclusive clustering of principal frequencies.

A particular artifact called "ringing," which commonly occurs in blunt attempts at deconvolution, appears more likely to be the cause of meccanoman's alleged particles. If the deconvolution support is too broad and/or improperly oriented, transformed sideband signals appear which, when the kernel is applied at closely spaced positions, themselves convolve to form periodic peaks or troughs in the signal. These peaks can then interact with background noise -- above which the desired signal has not risen sufficiently -- to create "details" that aren't really there. Naturally, when professionals do this, there are mathematical techniques to detect such behavior. It is not always apparent optically, but because of how deconvolution works it can be detected in the intermediate stages of the process. The error is usually that the support is too wide along one degree of freedom and must be reduced to eliminate disruptive sidebands.
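Ringing is easy to reproduce in a toy setting. The sketch below applies a deliberately blunt frequency-domain boost to a clean step edge standing in for a fibre boundary; the cutoff and gain are arbitrary choices made purely to exhibit the artifact, and imply nothing about Zeke's internals:

[code]
import numpy as np

# A clean step edge, standing in for the boundary of a fibre.
signal = np.zeros(256)
signal[128:] = 1.0

# Blunt "deconvolution": amplify everything above a brick-wall cutoff,
# with no attention paid to the support.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size)
spectrum[freqs > 0.05] *= 4.0
restored = np.fft.irfft(spectrum, n=signal.size)

# `restored` now oscillates on both sides of the step: periodic peaks and
# troughs that were never in the original. Against a noisy background,
# oscillations like these can read as "lines of dots".
print(np.round(restored[120:137], 2))
[/code]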

There is no evidence that meccanoman appropriately formulated his deconvolution kernel or parameterized its support according to appropriate methods. In fact, the controls of the Zeke tool simply do not allow such a thing. He is left with a canned answer which has apparently displayed expected artifacts in a known pattern. His "lines of dots" are connected by subtle transverse signal peaks. This is a classic "ringing" pattern as it forms around a linear (but not perfectly straight) feature when deconvolution breaks down. (The term "ringing" refers to the artifact around a single kernel. When the kernel is applied continuously along a linear feature, the rings convolve in a way that creates lines spaced at a certain period. When it is applied to an irregular contour, patterns emerge that may appear as splotches or dots.)

Moving on, what meccanoman naively alludes to as a "contrast tool" would be better described as a gamma transformation kernel. And it appears Zeke is attempting this as well, although it's difficult to determine without proper controls over the algorithm parameters. Gamma transformation assumes the need to correct the entire luminance signal without regard to spatial distribution. This is why gamma transformation generally happens only in luminance space (and why I asked meccanoman whether he was in signal/image space or wavelength/luminance space). For photographic purposes it happens in a luminance space derived algebraically from luminance signals in three discrete bands of visible wavelengths. For scientific purposes, we sample many more bands, such that the algebraic derivation has fewer degrees of freedom. In each case it's bounded by the quantization of absorption response -- the brightness resolution, not the spatial resolution.
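The operation itself is trivially simple, which is part of the problem: applied pointwise to a luminance channel it looks like the sketch below, and nothing in it knows or cares where in the image a pixel sits:

[code]
import numpy as np

def gamma_transform(luminance, gamma):
    """Pointwise gamma correction of a luminance channel scaled to [0, 1].

    Every pixel is remapped by the same curve regardless of its spatial
    context, which is why this belongs in luminance space only.
    """
    return np.clip(luminance, 0.0, 1.0) ** gamma
[/code]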

We understand from physiology and photometry that relative (perceived) luminance for this purpose is actually a function of wavelength. Scientific imaging provides enough luminance samples across enough discrete wavelength bands to accurately represent a luminance function for proper image analysis. Photographic imaging uses only three wavelength bands and maintains enough luminance fidelity to preserve an image that's acceptable for human vision and may support limited image analysis. But for tools such as Zeke it's usually a priority to produce approximate results quickly rather than correct results rigorously, so often a "flat" or otherwise naive method is used to derive a single approximation for luminance irrespective of wavelength. Meccanoman has noted that attempting what he thinks is a gamma transformation has improper side effects in the chrominance channels. To a real image processing analyst, this is a red flag not to trust the transformed luminance channel.
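To show what's at stake, compare a standard weighted derivation of relative luminance against the "flat" shortcut. The Rec. 709 weights below are one standard photographic choice; a naive tool may effectively be using the second function:

[code]
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # standard luminance weights

def luminance_weighted(rgb):
    """Relative luminance from linear RGB, weighted by wavelength band."""
    return rgb @ REC709

def luminance_flat(rgb):
    """The naive shortcut: all three bands treated equally. Cheap, but
    errors leak between luminance and chrominance -- the red flag above."""
    return rgb.mean(axis=-1)
[/code]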

There is a direct effect on the overall image in that relative luminance may be improperly transformed and thus create visually significant illusions. There is a combined effect between this and the aforementioned deconvolution: when used together, gamma transformation tends to amplify noise, which then rises to frequency-space significance and is improperly interpreted as a salient portion of the original signal. A variant of this technique is, in fact, how some kinds of noise are introduced into an image for aesthetic effect.


Thanks for your generously detailed reply, Jay. You have me at a loss; it's like trying to get a sip of water from a firehose. I'll need to look up a lot of terms and concepts before I can begin to apprehend your reply.


My comment was meant simply, maybe too much so. My understanding of meccanoman's claim is that by using his filter he can detect the presence of particles in the images. I think we all agree that there is no burden of matter on the shroud such that the fibers are obscured. So I think if particles are present (spoiler: they are), they are of the same order of size or smaller than the linen fiber width, roughly 10-20 micrometers. Now my assumption, which I should have checked with a BOTE calculation before making it :o, is that each pixel in the macro images corresponds to dimensions several orders of magnitude larger, i.e. millimeter(s), than the particles he concluded were there. He definitely sees something new when he uses the filter; all I'm saying is that even if it's real he has no evidence to identify it as "particles". If he's seeing something real it could just as well be the result of staining or dyeing.
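For what it's worth, here's that BOTE spelled out. The particle size comes from the fibre-width figure above; the pixel scale is my assumption for a macro photograph of the cloth, not a measured value:

[code]
# Back-of-the-envelope check of the scale mismatch.
particle_um = 15          # mid-range particle size, micrometers (from above)
pixel_um = 1000           # ~1 mm per pixel: assumed macro-image scale
print(particle_um / pixel_um)   # 0.015 -- a particle spans ~1/67 of a pixel
[/code]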

ETA: FWIW the examples abaddon posted convinced me the filter introduced artifacts
 
From the general thrust of comments we're seeing on this site re my use of the Zeke tool, I can only suppose that folk here are not aware of a severe practical constraint that faces anyone embarking on Shroud image research. The form of the "Man" is only visible if one stands back a metre or two: the distinction between image/non-image disappears from view if one gets closer. So from the very beginning, image studies have required some means of increasing contrast and with it image definition. In the early days of silver salt photography, e.g. Enrie in the early 1930s, the remedy was to select particular emulsions known to provide high contrast (and nobody was lectured for doing so through not knowing the whys and wherefores of one emulsion working better than another - it was sufficient to know in broad terms what a contrast adjustment was doing, i.e. polarizing midtone hues towards one or other end of the B/W or colour spectrum, going to whichever of the two ends was closer, so to speak).
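In broad numerical terms, that midtone polarization amounts to a sigmoid tone curve like the sketch below; the function and its strength parameter are purely illustrative, not how any particular emulsion or tool actually behaves:

[code]
import numpy as np

def contrast_curve(v, strength=5.0):
    """Sigmoid tone curve on values in [0, 1] (illustrative only).

    Midtones are pushed toward whichever end (0 or 1) they are already
    closer to; `strength` controls how hard they are pushed.
    """
    s = 1.0 / (1.0 + np.exp(-strength * (v - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))   # value of s at v = 0
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))  # value of s at v = 1
    return (s - lo) / (hi - lo)                 # rescale so 0 -> 0, 1 -> 1
[/code]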

So why all the catcalls and derision when I stumble upon a particular contrast tool which empirically does something that is of huge current interest to this Shroud investigator? It greatly increases the discrimination between small particulate matter and background, maybe through being especially sensitive and discriminating towards hues in the colour range of interest (yellow to brown), and making useful changes - and fairly modest ones at that - to contrast. Think of it as a niche product - seemingly custom-made for sindonology, but in fact an accidental discovery.

Yesterday I discovered a x400 photomicrograph of a Shroud image fibre that had both obvious and less obvious particulate matter which I suspect could be a leftover from a medieval era image-imprinting process. I applied the Zeke so-called filter (which I now prefer to call a contrast tool) and my suspicions were confirmed.

[qimg]https://shroudofturinwithoutallthehype.files.wordpress.com/2017/04/fibre-400x-mag-from-porter-feb-10-2002-as-is-v-zeke.png[/qimg]

There is indeed particulate material, some highly ordered (lines of dots along image fibres), some less so, that becomes clearly visible after Zeke, but is faintly visible before (in other words, Zeke is not generating new morphological artefacts, merely emphasizing what is already there through effects on image/background contrast).

I might at some point start to analyze precisely how and why Zeke scores over other contrast tools, but have to say I don't view it as a priority. Hundreds of organic chemists make routine everyday use of NMR (nuclear magnetic resonance spectroscopy), which assists in discriminating between the numerous hydrogen atoms in a molecule of interest, without caring or needing to know how and why the technique works. It's sufficient for it to show that one hydrogen atom is in a different micro-environment from another, enabling one to arrive at meaningful classifications that assist with predicting real world behaviour.

Chemists have to be empiricists, or nothing would ever get done or achieved. I humbly suggest the same applies to Shroud image analysts, especially those interested in knowing how it was produced (chemical? thermal? thermochemical? radiation? etc etc).

Yesterday I proposed on my site that all Shroud images, past, present and future, should be looked at with the Zeke tool with a view to unmasking micro-particulate matter, if present. Understanding the basis of Zeke's practical utility can come later.
TL;DR because your samples look the bloody same to me. One would think an all-powerful being would make the very slightest effort to be convincing. He knows we're stupid. He could be smarter.
 
