This may be the biggest mystery of all?

A reader, who is a computer graphics expert, writes:

Has anyone ever attempted to create a graph of measured distance between points on a three-dimensional human form (or just a head) and an abstract surface or hypothetical plane, using gray-scale tones or luma to represent the data? In other words, has anyone produced a heightmap? See http://en.wikipedia.org/wiki/Heightmap

I contend that such a set of measurements would produce a fuzzy, ghost-like representation of the body form and not the detailed and realistic image we see on the shroud. This would be true for both linear and curved gray-scales. The image on the shroud does not represent collimated body-to-cloth distance. The claim that it does is pure fiction. I suggest that the VP-8 Image Analyzer was not used correctly, thus leading to a lot of misunderstanding.

What is interesting is that if you apply two-dimensional (xy) Gaussian filters to a digital shroud of Turin image, you create a heightmap equivalent. That gives a good plot. Given that it is mathematically impossible to go the other way, just as you cannot find the dividend and the divisor from the quotient, it would be impossible to produce the actual image on the shroud from the data derived. This may be the biggest mystery of all.
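The reader's Gaussian-filter point can be sketched in a few lines of numpy. This is a minimal illustration, with an invented smooth "bump" image and invented noise levels standing in for the weave and image texture: smoothing yields a clean heightmap-equivalent, but the operation is many-to-one, so the original image cannot be reconstructed from the result, just as the dividend and divisor cannot be recovered from the quotient.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable 2-D Gaussian filter, numpy only (standing in for a
    full image-processing library's blur)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

# Two "scans" of the same underlying shape that differ only in fine detail
rng = np.random.default_rng(0)
y, x = np.mgrid[-1:1:128j, -1:1:128j]
base = np.exp(-(x**2 + y**2) * 8)               # a smooth "nose" bump
scan_a = base + rng.normal(0, 0.05, base.shape)
scan_b = base + rng.normal(0, 0.05, base.shape)

# Blurring produces a clean heightmap-equivalent that plots well ...
height_a = gaussian_blur(scan_a, sigma=3)
height_b = gaussian_blur(scan_b, sigma=3)

# ... but very different fine detail blurs to nearly the same heightmap,
# so the mapping cannot be inverted to recover the original image.
print(np.abs(scan_a - scan_b).max(), np.abs(height_a - height_b).max())
```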

That is hard to understand. However, what follows is an edited reposting from February that may help.


What is right or wrong with the material from page 9 of The Shroud: A Critical Summary of Observations, Data and Hypotheses, by Robert W. Siefker and Daniel S. Spicer, which states in Table I, Item 3.0:

The luminance distribution of both front and back images can be correlated to the clearances between the three-dimensional surface of the body and a covering cloth. This is why many state that the Shroud is a 3D image. . . .

The variation in the image density has been analyzed mathematically to render a high resolution 3-dimensional body image.

While a photograph can be either a positive or a negative, there is no correlation in a photograph between the density of the imprint and the distance to the object. Uniquely, the image on the shroud appears denser in the areas where the vertical distance to the body from the cloth surface would logically be shorter. This allows the use of a simple mathematical function to recover the 3-dimensional information about the body. The 3-D characteristics present on the Shroud cannot be recovered with any normal reflected light photograph or painting.

We are being forced to think of this only in terms of a cloth covering a body. While this may be the case, this is an assumption and not an image characteristic. It should be avoided.

A better way to describe this is to use accepted terminology from the world of three-dimensional graphics. The image is a height-field or height-map.

[Images: smoke ring as height field; smoke ring plotted as crater.] With a VP-8 Image Analyzer or newer computer software (POV-Ray, ImageJ, etc.), the gray scale values at many xy points in the height-field to the left are plotted as elevation or terrain.

The software uses several variables including an altitude scale, a viewing angle and a virtual light source to enable us to visualize the shape.
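What such software does under the hood can be sketched with a simple Lambertian (cosine) shading model; the toy height-field, light direction, and altitude scale below are all invented for illustration:

```python
import numpy as np

# A toy height-field: gray values treated as elevation (one smooth hill)
y, x = np.mgrid[-1:1:64j, -1:1:64j]
heightfield = np.exp(-(x**2 + y**2) * 6)

def shade(hf, light=(1.0, 1.0, 1.0), z_scale=20.0):
    """Render a height-field as shaded relief.

    z_scale plays the role of the altitude-scale knob; light is the
    direction of the virtual light source.  Returns brightness in 0..1.
    """
    gy, gx = np.gradient(hf * z_scale)                 # surface slopes
    normals = np.dstack([-gx, -gy, np.ones_like(hf)])  # surface normals
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)              # Lambert: n . l

relief = shade(heightfield)
```

Changing `light` or `z_scale` changes the rendering, which is why the same height-field can be visualized in many different ways.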

[Images: face as height field; face plotted.] The same software with the same viewing angles and artificial lighting produces the apparent elevation in the face. This is true for the entire body of the man imaged on the Shroud of Turin.

It is important to note, as Siefker and Spicer state, that a normal photograph or a painting is a representation of reflected light as detected by a camera or perceived from an artist's viewing position.

There is no useful relationship between the gray scale values in a normal painting or photograph and spatial distance as found in height-fields.

Virtual reality and gaming software regularly uses similar height-field images (above left) to produce realistic landscapes. NASA uses them to generate 3D surface representations of the moon and planets. Those height-fields are created by radar and lasers. Google Earth software makes 3D renderings of our planet the same way. NOAA produces 3D images of hurricanes from radar data represented in height-fields. Height-fields are regularly used in new-generation 3D ultrasound sonograms.

Note: Height-field is a convenient term. Gray scale values found in such a dataset are applicable for both vertical and horizontal plots.

Here is an image I prepared using ImageJ. See: Do Your Own VP8-Like 3D Images of the Shroud of Turin

24 thoughts on “This may be the biggest mystery of all?”

  1. Hey, ho, fools rush in where angels fear to tread…

    I am not a computer graphics expert, but I disagree. I have a once-trendy office toy, which consists of a horizontal mesh about 15cm square, pierced with thousands of vertical steel rods about 8cm long. At rest, the rods all hang at the same level, but when the grid is lowered over an object, the steel rods adjust to the height of the point they touch, and an exact representation of the object appears, modelled in steel, 8cm above the object. If I knew what it was called I'd reference a picture of it; they used to be very common. Anyway, I can't believe there isn't a digital computerised version of this, probably using a laser beam instead of steel rods. The height data thus acquired could be imaged as a 3-D representation, as in ImageJ, or the numbers could be converted into a grey-scale, and a "shroud-image" produced. I think you would get quite a good version, and not a "fuzzy, ghost-like representation."

    Next, I also take issue with: "There is no correlation in a photograph between the density of the imprint and the distance to the object." In theory, of course, this is true, and there are a number of clever artistic installations that bear this out. A series of flat black curtains at different distances from a camera ought to, and often do, produce the illusion of a single plane, and great surprise in an audience when a man walking between them seems just to appear and disappear in and out of nowhere. However, in practice there often is some correlation between intensity and distance, and faces, especially, bear this out. Consider a white sphere, lit from directly in front. Does it appear in a photo as a disc? No. As the surface of the sphere slopes away from the frontal plane, the angle it makes with that plane reflects less and less light back to the camera, and in the photo the sphere is darker at the edges than it is in the middle. Much the same happens with faces, essentially pale spheres lit from directly in front. However, in detail, it would be truer to think of four or five such spheres squashed together, a big one for the forehead, a small one between two slightly larger ones for the nose and cheeks, and another for the chin. Each sphere appears equally bright where its surface is tangential to the plane in front of the face, which is why in the shroud, and in many photographs, the forehead, nose, cheeks and chin all appear more or less equally bright, and more or less equally high on an ImageJ representation. The more angled the surface of the 'sphere', the sides of the nose say, or the slopes of the eye sockets, the more different they are in intensity from the 'flat' frontal surfaces, and the greater the appearance of distance.

    In this way, by coincidence more than anything else, the distance/intensity colouration of the shroud image, which is probably real, resembles the distance/intensity colouration of a photograph.

    I’m not sure what the parameters of this observation are. Photos of people, as I have demonstrated on this blog, do show this correlation, while the full moon appears quite flat. Is this due to our inability to resolve contrast at such high illumination, given that the background is so dark? I don’t know.
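Hugh's sphere argument can be made concrete with a few lines of Python. For an ideally diffuse (Lambertian) sphere lit and viewed from directly in front (the Lambertian assumption is an idealization, not something the comment claims), brightness at image radius r is cos θ = √(1 − r²), which happens to equal the sphere's depth toward the viewer at that radius, which is exactly the "coincidence" described:

```python
import math

def sphere_brightness(r):
    """Brightness of a Lambertian sphere, frontally lit and viewed,
    at image radius r (0 = centre of the disc, 1 = the limb)."""
    return math.sqrt(max(0.0, 1.0 - r * r))

# The depth of the sphere's surface toward the viewer at radius r is
# also sqrt(1 - r^2), so in this idealized case brightness literally
# equals height.
for r in (0.0, 0.5, 0.9, 1.0):
    print(f"r = {r:.1f}  brightness = height = {sphere_brightness(r):.3f}")
```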

    1. Hi Hugh, if I may throw in my two cents:

      “There is no correlation in a photograph between the density of the imprint and the distance to the object.” In theory, of course this is true

      In theory it is both true and false, depending on conditions and purpose; remember, the VP-8 was designed to process photographs.

      However, in detail, it would be truer to think of four or five such spheres squashed together, a big one for the forehead, a small one between two slightly larger ones for the nose and cheeks, and another for the chin. Each sphere appears equally bright where its surface is tangential to the plane in front of the face, which is why in the shroud, and in many photographs, the forehead, nose, cheeks and chin all appear more or less equally bright, and more or less equally high on an ImageJ representation. The more angled the surface of the 'sphere', the sides of the nose say, or the slopes of the eye sockets, the more different they are in intensity from the 'flat' frontal surfaces, and the greater the appearance of distance.

      But remember also about:

      * shadowed areas on the face.
      * secondary reflections.
      * variations in albedo, even for the white-painted statue that Nicholas Allen used.
      * light directionality and stability.

      In this way, by coincidence more than anything else, the distance/intensity colouration of the shroud image, which is probably real, resembles the distance/intensity colouration of a photograph.

      In case of the Shroud, the correlation is definitely not by coincidence.

      I’m not sure what the parameters of this observation are. Photos of people, as I have demonstrated on this blog, do show this correlation,

      Very poor correlation, compared to the Shroud.

      while the full moon appears quite flat. Is this due to our inability to resolve contrast at such high illumination, given that the background is so dark? I don’t know.

      No, it's a much more complex matter. Generally we can adopt a simple model in which reflected light splits into two components:

      * light reflected according to the law of reflection (the angle of incidence equals the angle of reflection) (specular reflection)

      * light scattered in all directions (diffuse reflection).

      The overwhelming majority is the second component. Thus the surface of the moon seen at any angle looks similar, but not identical. In fact, the luminosity of the full moon is more than twice that of the half moon.

      Diffuse reflection is the mechanism by which the VP-8 works in the case of the planets. But to our eyes, the relative differences in distance between the parts of the moon (center or edges) are too small to produce any visible effect. And the variations in albedo due to the moon's topography (dark lunar maria, bright uplands) are of much larger magnitude.
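The full-moon claim above can be checked numerically under the simple diffuse model just described. For a purely Lambertian sphere, integrating image brightness over the disc gives a full-phase to half-phase ratio of about π, indeed more than twice (the grid resolution below is arbitrary; the real moon's regolith is not Lambertian, and its actual ratio is larger still):

```python
import numpy as np

def moon_flux(phase_deg, n=400):
    """Total image-plane brightness of a Lambertian sphere at a given
    phase angle (viewer along +z, light `phase_deg` away from the viewer)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    disc = r2 <= 1.0
    z = np.sqrt(np.where(disc, 1.0 - r2, 0.0))   # sphere surface toward viewer
    a = np.radians(phase_deg)
    lx, lz = np.sin(a), np.cos(a)                # light direction in xz-plane
    shade = np.clip(x * lx + z * lz, 0.0, None)  # Lambert: n . l, clamped at 0
    return float((shade * disc).sum())

ratio = moon_flux(0) / moon_flux(90)
print(f"full/half brightness ratio for a Lambertian sphere: {ratio:.2f}")
```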

  2. It's all a bit over my head here, but is it possible to deduce from the Shroud image (using our modern understanding of light and image formation) whether the "light" was shining on the human form, versus emanating from the human form itself, when it was made?

  3. computer graphics expert :
    I suggest that the VP-8 Image Analyzer was not used correctly thus leading to a lot of misunderstanding.

    Obviously, the shroud is not a simple heightmap with a direct, linear relationship between contrast and body-cloth clearance.
    To claim that it is would indeed be pure fiction.

  4. “Has anyone ever attempted to create a graph of measured distance between points on a three-dimensional human form (or just a head) and an abstract surface or hypothetical plane using gray-scale tones or luma to represent the data? In other words has anyone produced a heightmap?”

    Yes and no.

    This was done by Jackson et al. in "Three dimensional characteristics of the Shroud Image," Proceedings of the International Conference on Cybernetics and Society, October 1982 (16 pages).

    They first used the “Small Sample Correlation Technique”:
    ” Our procedure for measuring the degree of correlation between image shading and cloth-body distance involved first measuring the transmittance of a black and white transparency of the face taken of the Shroud in 1978 by a microdensitometer. We chose to sample 13 image locations: tip of nose, edges of nose, cheek, eyes, eye sockets, bridge of nose, lips, mustache, and forehead (…)
    Next we measured cloth-body distance by draping a linen model of the Shroud, hand-woven so as to correspond with the herringbone weave and thickness of the Shroud, over a bearded volunteer subject. Side photographs were made with the cloth in place and then immediately after being removed. By superimposing these photographs and using contour gauges (…) we determined cloth-body distances (…)
    We then plotted these data of transmittance and cloth-body distance and determined a linear regression line (…)
    The measured coefficient of determination, r2, was 0.60 for the 13 data points, at the 95% confidence level, which implies that the actual coefficient of determination lies between 0.20 and 0.83. Though the range is quite large owing to the small number of data points available, some observations can nevertheless be made. First the null hypothesis [no correlation] is excluded, indicating that some correlation of image shading with cloth-body distance is present (…)

    The reliability in the measurements of [the coefficient of determination] could be increased if more data points were sampled. We can estimate the numbers, n, of datapoints required (..).
    We calculate the number of data points to be 1700. (..) This value of 1700 is prohibitive by the manual sampling technique discussed above, but may be possible via some automated sampling algorithms.”

    “Relief Image Technique
    Although we have not as yet developed an adequate large number sampling algorithm, we have studied the Shroud image with another technique that allows visual estimation of how well image shading correlates with distance.”: the VP8.

    Then, in this paper, the authors described in detail, step by step and with many experiments how and why the VP8 can be used to estimate the reliability of the “cloth-body distance hypothesis”.
    They concluded:
    ” These results demonstrate that image shading on the Shroud correlates with distance between 2 surfaces, one of which can be interpreted as a body shape and the other as a cloth draping over that body surface. Logically, this does not prove that a cloth was draped over a body shape when the Shroud image was formed because other hypotheses not involving a cloth covered body shape might conceivably account for such an effect. (..)

    "Thus we may refer to the Shroud image as having a "three dimensional characteristic" which means, simply, that image shading can be self-consistently interpreted as being correlated with the distance between a body and an enveloping cloth."

    And finally they tested many image formation hypotheses using this "technique":
    ” (..) A correct hypothesis of image formation must be able to produce an image structure capable of a “three dimensional interpretation”, for in doing so the shading distribution of the shroud image would be duplicated”.

    This paper is THE fundamental study of the “3-D interpretation” of the TS image.
    Everybody who wants to discuss seriously must read it carefully.
    I had to read it carefully at least 3 times to understand the logic of the reasoning.
    For me, the reasoning seems to be irrefutable.
    Is it ?

    You can ask me for this paper.
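As a rough cross-check of the quoted interval, the Fisher z-transform (my assumption here; the paper does not state which method it used) applied to r² = 0.60 with n = 13 gives an interval of similar width:

```python
import math

# r^2 = 0.60 from n = 13 sampled points, as quoted from Jackson et al.
r2, n = 0.60, 13

# Fisher z-transform of r, with standard error 1/sqrt(n - 3)
z = math.atanh(math.sqrt(r2))
half = 1.96 / math.sqrt(n - 3)          # 95% half-width on the z scale

# transform the CI back to r and square to get bounds on r^2
lo = math.tanh(z - half) ** 2
hi = math.tanh(z + half) ** 2
print(f"approximate 95% CI for r^2: [{lo:.2f}, {hi:.2f}]")
```

The interval comes out slightly wider than the paper's 0.20 to 0.83, but it makes the same point: 13 points are far too few to pin the correlation down, hence the paper's estimate that about 1700 would be needed.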

    1. The correlation is not high, meaning it is not simple, direct, and linear. But the correlation is irrefutable, for sure.

  5. Archive from the Guild.

    A new procedure utilized in Phase III was based on experiments
    conducted on a VP-8 Image Analyzer, a microdensity and image color
    enhancer maintained by the Electrical Engineering Department
    A negative photographic technique in image enhancement was begun
    early in Phase III and continued to be used effectively throughout the
    research period. Although details of the technique are covered on
    pages 30-40 in this report, the principle of the technique involved
    the production of negative photographic prints from color and black and
    white positive transparencies. Originally designed as an expedient
    photo printing process by this investigator in 1971, the negative print
    technique was applied to the ERTS imagery for image enhancement. With
    the negative print technique applied to band 5, cultural landscapes in
    terms of urban and suburban areas, shopping centers, highway construction
    sites, strip mines, and agricultural areas were enhanced in dark tones
    against a light background. On band 7 negative prints, water and
    topographic surfaces were enhanced in light tones and light shaded
    enhancements on dark surfaces displayed as tone reversals in the
    negative prints (see Figure 4).
    The mapping of photomorphic regions and examples of landscape change
    occupied the majority of the Phase III activities.

    Can we consider this as a possible working solution to landuse
    mapping? I believe so but with reserved optimism. First, consider the
    ways in which we analyze imagery. The human interpreter basically looks
    at tonal variations in terms of light or dark signatures to interpret
    black and white ERTS imagery. Tonal variations reflect densities which
    can be measured, sorted, and displayed by the VP-8 system. Thus, the
    VP-8 distinguishes between light, gray, or dark toned areas and displays mappable areas to which we assign landuse interpretation. The VP-8 can only reconstitute and sort density levels for us. We must interpret those levels in terms of landuse categories. The interpretation can only be as good as the interpreter. His subjective and objective knowledge of the area enters a bias which the VP-8 cannot override.

    The interpreter or VP-8 machine operator selects the color assignments for different densities and also determines, in part, the spatial distribution of density levels. He can combine density levels into a common color unit and thus obliterate detailed information. Conversely, the interpreter can display up to 8 density color slices and combinations with highly detailed definition.

    It is this kind of subjectivity in the choice of density combinations which alarms some investigators. No single interpreter, interpretation system, or for that matter data gathering device, or sensor can possibly meet the needs of every investigation. The VP-8 must be used with caution with the understanding that it produces color coded densities which the interpreter must choose to display.

    Giorgio

    I'd like to add that the negative print technique allows less deviation between operators because you're basically producing a high-contrast image with limited gray scales. Another limitation can be the aperture of the microdensitometer used to gather the sampling. In the program above, a 25-micron aperture was used; 10 microns is the minimum for stochastic reproduction. However, based on the negative print technique, that probably would not be conducive to the VP-8 system.
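The 8-level "density slicing" described in the archive amounts to quantizing gray values into bins and assigning each bin a display color; a minimal sketch (the random values below are a stand-in for measured film densities):

```python
import numpy as np

rng = np.random.default_rng(1)
densities = rng.random((16, 16))    # stand-in for measured film densities, 0..1

levels = 8
# assign every pixel to one of 8 density slices (0..7), as the VP-8
# operator would before choosing a color for each slice
slices = np.minimum((densities * levels).astype(int), levels - 1)

# merging slices into a common color unit "obliterates detailed
# information", exactly as the archive warns: 8 levels collapse to 4
merged = slices // 2
print(np.unique(slices), np.unique(merged))
```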

  6. The Jackson paper is a good study of the problem, and although I think it extrapolates a little too much from the actual measurements, it clearly explains how it does so. However, it is a shame that the two crucial photographs (that of a man lying down taken from the side, and that of the same man covered by a sheet) are not included in the paper. One must wonder why. From photos elsewhere on the internet, it appears that the volunteer lay completely flat, with no bent legs or pillow to lift his shoulders, and his feet sticking up. Is that correct?
    I believe that Mario Latendresse is working on something similar, but may be able, with the power of modern computing, to work with more than a single line of actual data, and a variety of body poses, cloth drapings, and radiation directions.

  7. Giorgio :
    Archive from the Guild.
    The interpretation can only be as good as the interpreter. His subjective and objective knowledge of the area enters a bias which the VP-8 cannot override.

    Meaning, how does the interpreter understand the halftone effect? Reaction completion? Autocatalytic reaction? Blurring of the image due to the molecular weight of the diffusing molecule? That kind of subjective hypothesis?

    1. Interpretation for mapping gray levels to 8 slices that, in this case, define streets, water, vegetation and so on. The VP-8 measures a density, that is all. How you want to interpret it, that becomes your hypothesis. I for one do not understand how a reproduction photograph records the molecular weight of a diffusing molecule, an autocatalytic reaction and so on in a way that can be used for quantitative analysis. Can you explain?

      1. Let's say you're looking at an experiment on linen with different levels of completion of the Maillard reaction, EDTA applied chronologically.
        The grey level will then refer to the degree of reaction completion, not to a heightmap.

      2. A good analogy for the meaning of interpretation: let's suppose you have a 16-bit file and you're printing on an 8-bit printer. That means 8 bits of information will be discarded. How would you make the decision about which information to discard? You don't; you rely on the printer's profile to do that for you. The negative photographic technique employed in this case discarded much of the information so interpretation won't vary between operators. IMHO it was a good idea.

      3. I didn’t think of this interpretation.
        You're talking of a selection bias in the data.
        I was talking of how one could interpret the data according to one's own subjective hypothesis (e.g. how the STURP team determined cloth-body distance).
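The 16-bit-to-8-bit printing analogy a few comments up is easy to make concrete: keeping only the high byte is irreversible, because values that differ only in the low byte collapse together (the sample values below are arbitrary):

```python
import numpy as np

img16 = np.arange(256, dtype=np.uint16) * 257   # a 16-bit gray ramp, 0..65535
img8 = (img16 >> 8).astype(np.uint8)            # keep only the high byte

# two 16-bit values differing only in the low byte become identical,
# so the discarded tonal information cannot be restored afterwards
a, b = np.uint16(51200), np.uint16(51455)
print(int(a >> 8), int(b >> 8))
```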

  8. “I suggest that the VP-8 Image Analyzer was not used correctly thus leading to a lot of misunderstanding.”

    Does this statement mean what it sounds like it means: that the production engineer who put the VP8 together, and who taught Jackson & Jumper to use it, did not use it correctly? Does it further imply that the brightness map on the Shroud does not correlate to height and depth?

  9. If that is the case, the gray levels caused by the Maillard reaction, or by some type of acceleration from an alkali, were in all probability produced in a controlled environment. That is, unless there are studies showing that the cellular structures of flax cell walls, or of any such natural organic material, are stable enough to record gray tones precise enough to measure the reaction. And if you find that, please explain how the cloth was draped over the corpse so as to maintain the perfect correlation of gray tones made by the reaction.

    anoxie :
    Let´s say you’re looking to an experiment on a linen with different level of completion of maillard reaction, EDTA applied chronogically.
    Grey level will refer to degree of reaction completion, not to a heightmap.

    1. And if you find that, please explain how the cloth was draped over the corpse to maintain the perfect correlation of gray tones made by the reaction.

      I would put it a different way: the question is whether, in the position in which the corpse was laid with the cloth draped over it, the perfect correlation of gray tones made by the reaction can be maintained (IMHO probably not).

      See this article by Fanti & Marinelli:

      See: FANTI4A.PDF

      1. Draped or staged? That is the question! Shakespeare, I believe. Then again, I stimulated my brain with Mad Magazine growing up, so it might have been Alfred E. Neuman.

    2. It was a suggestion to test a different parameter. Concerning the shroud, different degrees of completion may result from different initial concentrations.

      Maybe in the end, after years of slow ongoing reaction, it's binary: reaction completed or not. But, given the Maillard reaction hypothesis, one has to link the halftone effect to reaction completion / concentration of molecules.

      Has anyone tried the VP-8 on Rogers' experiment with a rope? Where does his color gradient come from: concentration? Reaction completion? A similar halftone effect?

  10. Personal bias we’re all guilty of I imagine.

    anoxie :
    I didn’t think of this interpretation.
    You're talking of a selection bias in the data.
    I was talking of how one could interpret the data according to one's own subjective hypothesis (e.g. how the STURP team determined cloth-body distance)

Comments are closed.