
Only the Shadow knows

December 12, 2014

The apparent 3D in the picture to the right was created by the arc of the sun.
The green color was photoshopped in for illustration purposes only.

A reader points out:

[Colin Berry wrote], “Takeaway message: there is no encoded 3D information in a photograph of the TS. There is merely a 2D image that has patterns of light or dark that can be computer-processed to give APPARENT relief. But if the original image was not created by photography, but by some other imaging mechanism, there is always the possibility that one will be fooled into thinking that a darker-than-average feature on the image represents high relief. It ain’t necessarily so. Different imaging models create different interpretations.”

He is right. Colin has shown one way that apparent relief can be generated that has nothing to do with shape. Dan, do you remember Nathan Wilson and his Shadow Shroud? He showed another way to create a 3D image that had nothing to do with shape.

How could I not remember Nathan? We debated briefly on an ABC Radio program. He got the better of me that day and made me look foolish. Nice guy, though. And yes, you are right, he did demonstrate another way to generate apparent relief. Apparent is the essential word, however.

Personally, I don’t think what we see on the Turin Shroud is only apparent relief.  I think it is real relief. I think the grayscale values (my preferred term for brightness, darkness, intensity, luminosity, shade, density, etc.) represent that. Regrettably, the only evidence I see for this is that it seems so. “It seems so” is one step lower on the evidence quality ladder than “I think I see.”  Moreover, I have no reason to think that the grayscale values of the shroud images represent distance between the outer surface of the body and facial hair and a burial shroud above and below it. My gut says it is so. Occam’s razor is a great temptation. But in the end there is really no evidence for the scene we imagine.  Could it not just as easily be distance from a hypothetical plane that intersects the body from head to toe, so to speak?  Or relative distance to some point in space? Those notions are harder to imagine because we can’t imagine an action-at-a-distance scenario for them. We have imagined a scenario to go with body to shroud distance. Well, sort of, maybe.
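To make the point concrete, here is a toy sketch in Python. The three mappings below are invented for illustration and drawn from no Shroud study; they simply show that several geometric readings are all monotone in the same grayscale value:

```python
import math

def z_cloth(i):
    """Reading 1: darkness = body-to-cloth nearness (the usual assumption)."""
    return 1.0 - i            # darker -> closer -> higher relief

def z_plane(i):
    """Reading 2: darkness = distance from a plane through the body."""
    return 3.0 * i            # same data, different scale and origin

def z_point(i):
    """Reading 3: darkness = distance from a fixed point in space."""
    return math.sqrt(i)       # yet another monotone mapping

# All three are monotone in the grayscale value i, so a 3D-rendering
# program, which sees only the grayscale map, cannot tell them apart:
# each yields plausible-looking relief.
samples = [0.2, 0.5, 0.9]
relief_1 = [z_cloth(i) for i in samples]
```

Any monotone remapping of brightness produces equally “plausible” relief, which is why the rendered scene alone cannot decide between body-to-cloth distance and the alternatives.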

Speaking of imagined scenarios:  “John Jackson’s ‘Fall-through’ hypothesis for image formation belongs to the class of hypotheses that invokes the action of photon radiation” (newest Critical Summary 2.1 out of Colorado, page 73). Are we to measure brightness in terms of distance, or measure it in terms of the time that the cloth spends falling through a mechanically transparent body?  And then there is Frank Tipler’s hypothesis of dematerialization by electroweak quantum tunneling, in which a “proton plus electron goes to neutrino plus antineutrino.”  Is this a measure of anything that we can comprehend? Tipler, in his book The Physics of Christianity, implies it is distance by telling us that at the time of dematerialization of the body, the cloth is perfectly flat.

And of course we can’t imagine inexplicable non-process miracles; the touch of a hand or finger, a few spoken words, or a seemingly unrelated, unusual activity all serve as examples. In the New Testament Jesus changes water into wine, a woman is healed when she touches Jesus’ garment, Jesus feeds a multitude with five loaves of bread and two fish. Where is the process in these? What imagined actions are taking place?  More recently, there is the apparition of Mary to Juan Diego; Mary had arranged flowers in Juan Diego’s cloak, and when he later opened his cloak in the presence of a bishop the flowers fell out, leaving behind an image now known as Our Lady of Guadalupe. Perhaps the image on the shroud is the result of an inexplicable non-process miracle like that.

Or let’s just suppose Colin is right and the image we see was formed by a hot template. Could the resulting variations in grayscale values have been caused by varying temperatures in a large piece of metal?

In 2005, Nathan Wilson experimented with a method he devised for creating an image similar to the shroud. This isn’t to suggest that this was how the image was formed. But it does show that so-called brightness (grayscale) maps can be generated by other methods that are not formed by action at a distance and that do not contain real distance information. Wilson writes:

The image on the Shroud is dark on a light background. Previous theories had all attempted to explain how linen could be darkened without the use of chemicals, stains, or paints. Wilson wondered if it would be possible to lighten the already dark linen, leaving only a dark image behind. The simplest means of lightening linen, available to all men throughout time, is to bleach it with sunlight. Wilson believed that if an image of a man were painted on glass with a light shade of paint, placed over darker linen, and left beneath the sun, a dark image would be left on a light background. More importantly, he believed a dark and light inversion would take place, creating a photonegative. Wherever light paint had been used, the linen would be shaded from the sun and left dark and unbleached. Wherever the darker shade of linen had been left exposed, the sun would bleach the cloth light. In addition, it was also believed that because the sun would be exposing the linen from approximately one hundred and eighty degrees, a crude three dimensional image would be created.
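Wilson’s predicted tonal inversion reduces to simple arithmetic. A toy sketch (ours, not Wilson’s), with made-up brightness numbers:

```python
# Light paint on glass shades the linen; exposed linen bleaches lighter.
def bleached_brightness(paint_opacity, linen_start=0.25, bleach_gain=0.5):
    """Brightness after exposure: shaded areas stay dark, exposed lighten.
    paint_opacity: 0.0 = bare glass, 1.0 = fully opaque paint.
    linen_start and bleach_gain are invented illustrative constants."""
    return linen_start + bleach_gain * (1.0 - paint_opacity)

# Where the paint was lightest, the linen ends up brightest: the tonal
# inversion (photonegative) Wilson predicted.
result = [bleached_brightness(p) for p in (0.0, 0.5, 1.0)]
# result -> [0.75, 0.5, 0.25]
```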

How 3D-ish are the images?  Colin’s, as well?  Those of the shroud?


Wilson’s Results


Oil paint on glass, produced by David Beauchamp in roughly forty-five minutes while watching stand-up comedy. This painting was the most successful and was used to produce three different images on linen.

The first linen image created by Beauchamp’s window, exposed for ten days generally parallel to the sun’s path. The linen bears a negative image, dark on light (left), which becomes positive, light on dark (right), in a true photonegative.

The second linen image created by Beauchamp’s window, exposed for fifteen days generally perpendicular to the sun’s path. The lines are much harder than those in the first image.

The third and final image created by Beauchamp’s window, exposed for approximately one hundred and forty hours beneath a sunlamp. The stationary light source created a flat, scattered image.

Beauchamp’s parallel shroud (right), and the Turin Shroud (left) both topographically rendered.

The Turin Shroud rendered three-dimensionally. Shabby chic.

The Beauchamp parallel shroud rendered three-dimensionally. Shabbier chic.

  1. anoxie
    December 12, 2014 at 11:25 am

    “Colin has shown one way that apparent relief can be generated that has nothing to do with shape.”

    And footprints in the sand have nothing to do with shape either? The contrast of a scorch has all to do with shape, temperature, pressure, and the nature of the hot template.

    Scorching is a contact mechanism, and has many properties different from the Shroud image.

But that’s not even close to the real problem of the Shroud image, namely the nature of the at-a-distance mechanism.

  2. December 12, 2014 at 12:47 pm

    Might it be the conversion of a simple shade chart with no 3D history to apparent 3D topography that so impressed “reader”, not model scorches in the first instance? In other words, 3D rendering is not unique to the TS, it’s not even unique to the TS and model scorches. It’s simply the way that software programs like ImageJ interpret a 2D ‘darkness’ map – any 2D darkness map.
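That claim is easy to sketch in a few lines of Python; this is an illustrative stand-in for what programs like ImageJ do with a darkness map, not their actual code:

```python
# A pixel's value simply becomes a height -- nothing more.
def darkness_to_height(row, z_scale=1.0, invert=True):
    """row: brightness values in [0, 1]. Returns heights for a relief plot.
    invert=True makes dark pixels stand proud, as for the TS image."""
    return [z_scale * ((1.0 - v) if invert else v) for v in row]

# A plain shade chart with no 3D history still yields staircase "relief":
chart_row = [0.0, 0.25, 0.5, 0.75, 1.0]
relief = darkness_to_height(chart_row, z_scale=10.0)
# relief -> [10.0, 7.5, 5.0, 2.5, 0.0]
```

The software never asks where the darkness came from; any 2D darkness map is rendered the same way.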

The next target for this slow but systematic model-builder is to scrutinize every single detail of the TS image, before and after 3D-rendering, to gain clues as to the nature of the imprinting process. In fact, that’s been happening for some months now. But there’s a major uncertainty that is holding up progress. A contact scorch off a template can be obtained in one of two ways. The first is ‘tactile’, where the linen is moulded with fingers to the contours of the template (as with Luigi Garlaschelli’s powder or paste ‘frottage’ using a live volunteer, or what I called LOTTO scorching – Linen On Top, Then Overlay). The other is entirely non-tactile, where the inanimate template is pressed forcibly down onto linen with a suitable underlay, e.g. several layers of damp cloth (LUWU configuration: Linen Underneath, With Underlay). There’s a subtle difference between the quality of imprints obtained by the two methods, tactile moulding allowing one to capture greater detail, especially where there are abrupt changes of relief and/or angularities (not that there are many of those on the TS to serve as ‘acid test’ areas), albeit with greater risk of lateral distortion if one moulds “too far” round the sides. Interestingly, there is little risk of lateral distortion using LUWU, as mentioned recently in showing comparisons of scorch versus (inferior) paint imprints.

One working hypothesis is that tactile and non-tactile occurred simultaneously, i.e. in a non-authenticity medieval hoax model, tactile moulding was used for the frontal side, non-tactile for the dorsal. It could have been inspired by and/or modelled on the artwork one sees at the top of this posting (Descent from the Cross straight onto J of A’s up-and-over linen), or what I call the pseudo-sweat imprint model (pseudo if scorch technology was used to simulate a yellowed centuries-old sweat imprint).

There’s some corroborating evidence for that model, especially where the chin/neck is concerned, maybe the feet too. But there’s only so much an amateur can do. What’s needed are model studies in which real bodies and effigies thereof are actively scanned electronically with a mobile tracking photocell on guide rails, with and without over- or underlay of linen, to see whether or not the TS image can be modelled purely by means of a contact-only mechanism. I suspect that to be the case, though it’s scarcely more than a hunch at the present time.

  3. December 12, 2014 at 1:34 pm

    Guys don’t get fooled!

    The Beauchamp/Wilson shadow FAILS 3D TEST!

    TS on the left, Beauchamp/Wilson shadow on the right.

The distortions due to different illumination are obvious. This is because the original painting on glass is imperfect, as it is extremely hard to accurately render the perfect 3D effect (intensity vs. body-cloth distance) via painting.

  4. Dan
    December 12, 2014 at 1:54 pm

    I agree. Now what about Colin’s scorched images?

      • December 12, 2014 at 4:10 pm

I disagree. What are your criteria for “good” and “bad” ImageJ images that the Beauchamp and Colin Berry images fail? It seems to me to have a lot to do with how you calibrate the Max, Min, and z scales.

        • December 12, 2014 at 4:31 pm

Criteria? Resemblance to the original mold, in the case of the crucifix, or to the human face, in the case of copies of the Shroud face.

I have seen many attempts to copy the Shroud, or (usually) just the face of it. But none of them was good enough to fool a kid into thinking he was looking at the relief of a real person, as in the case of the Shroud. That is the “criterion”, and what we actually understand by the 3D effect. Anyone can make a crude imitation of a face, or even a rough imitation of the 3D relief of a face. But to make a convincing relief, that’s a much greater challenge, met by no one so far. Because, like I said, every child would recognize that those are not faces or reliefs of a real person, just rough and primitive artificial imitations.

          Hugh, look at this picture:

          This is a car.

          Look next at this picture:

          http://tinyurl.com/nm23mzc

          Is that a car also? It has wheels, it has a body, it has lights, just like cars have. But is that really a car?

          This is the difference between the Shroud and the attempts to reproduce it.

        • December 12, 2014 at 5:26 pm

          Yes I see that, but it’s very subjective – I think this, you think that. I thought there was something about your colour spectrum adjustments that helped objectify your point.

        • PHPL
          December 12, 2014 at 10:32 pm

          Hi Hugh,
I see that you have added an “n” to your family name. What happened? Did you have trouble logging in?

  5. December 12, 2014 at 3:11 pm

And BTW, here is Giulio Fanti’s analysis of Nathan Wilson’s theory, from the 2005 Dallas conference:

    http://www.dii.unipd.it/-giulio.fanti/research/Sindone/PresWILSON.pdf

  6. December 12, 2014 at 4:39 pm

    This would seem as good a place and time as any to stress that this blogger no longer has all his eggs in the one “scorch” basket.

    Dan’s choice of (cropped) picture at the top could not have come at a better time. Take a look at the whole picture:

    That’s exactly the one I’ve been looking for to illustrate what I now call the “pseudo-sweat imprint” hypothesis. Note the absence of “scorch”. Thanks Dan (and Stephen Jones for finding it in 2012).

Are you reading this Charles? Take a close look at that TS imprint. Does that look painted to you? (I can give you the wiki details if you wish re attribution – one of 2 possible artists – and their period).

    Thermal imprinting aka contact scorching is just one possible means of achieving a negative image that centuries later would be shown to have 3D properties. It might be better to describe the imprinting method as almost certainly 3D body or template-based, without being too specific as to whether the imprinting mechanism was thermal, chemical or thermochemical. If it weren’t for the radiocarbon dating, one could say the proposed imaging mechanism was compatible with either authenticity or medieval forgery, provided the first was a contact model, as per “sweat imprint”.

    • December 12, 2014 at 4:52 pm

This is a depiction where you can see the Crown of Thorns, the long hair at the back and the gap between the elbows and the body on the dorsal image, all now vanished. This makes my point that what we have now is a diminished image of what there once was.
      The Crown of Thorns in place AFTER the deposition is datable to the fourteenth century.
      The gap between the elbows and the body is an important feature as if you try the pose yourself, and I do this with my audiences, you cannot also cross the hands in front. This suggests two completely independent images and not a cloth wrapped simultaneously over the same body.
The loincloth suggests that this is a post-1578 image, as we have no examples of a loincloth placed on the Shroud before this date.
      We have many other depictions by different artists that confirm these features.
      So lots to work with. Thanks, Colin.

      • December 13, 2014 at 2:52 am

Not for the first time, you seem to have overlooked something, Charles. Look closely at the painting (lower, not upper half) and you should see that Jesus is no longer wearing a crown of thorns. Look a bit closer, and you will see that it is lying on the ground, left foreground. So what you see on the Shroud imprint (yes, clearly an imprint, NOT a painting) is not, and cannot be, a crown of thorns, since that would conflict with the logic and self-consistency of the artist’s visual ‘narrative’. Yes, there are dark regions on the head, both frontal and dorsal sides, but they do not extend outside the line of the hair, or if they do (frontal side) only marginally so, and are open to alternative interpretation. I would suggest that what the artist was showing there was not the crown, but the copious blood that flowed from the wounds, exactly as still visible on today’s Shroud image, with blood stains in the hair region, but NO crown of thorns, or the slightest hint that there ever was a crown of thorns.

There are one or two inconsistencies with the picture, notably the paucity of blood on the corpse (artistic sensibilities?) and the presence of a complete upper torso image on the TS, despite the post-1532 burn holes that obliterated the shoulders. However, the latter should have been the cue not to place too much emphasis on the position of arms etc., given the artistic licence (measured and restrained) on display. Artistic licence probably accounts also for the artist’s decision to show a loincloth on the corpse, and thus on the TS imprint too, though interestingly visible as such on the dorsal side only, while failing to imprint on the frontal side (explicable in an artist’s-eye view as imprinting better when assisted by body weight) – imprinting, note, NOT applying paint from a palette.

        It cannot be said too often, Charles: you are missing the point entirely. What the picture shows without any shadow of doubt is that the TS image was interpreted by that artist as a passive IMPRINT – not a painting, and with blood – from nails, lance in side, flagrum, crown of thorns.

So what produced that relatively homogeneous, monochrome, negative body image/imprint? Bodily perspiration aka sweat must surely be the most obvious explanation, and, more importantly, what the artist either assumed and/or wished to portray by linking the TS with the immediate aftermath of removal from the cross (before arrival at tomb and final preparations for interment).

Am I the only one to be curious to know Dan’s reasons for cropping that picture? If I had to choose just one picture from thousands to illustrate 16th/17th century perceptions as to the likely origin of the TS double image, if only to counter Charles Freeman’s dud hypothesis, and maybe talk up my own real v pseudo-sweat imprint heresy, it would be that one (but necessarily COMPLETE and UNCENSORED!).

How many words does that make now, obnoxie?

    • December 12, 2014 at 5:08 pm

      Here’s the caption that accompanied the picture on Stephen Jones’s site, October 2012:

      “Descent from the Cross with the Holy Shroud,” by Giovanni Battista della Rovere (c. 1575-c. 1640) or Giulio Clovio (1498–1578): Wikipedia. This aquatint print accurately depicts from the information on the Shroud of Turin how Jesus’ body was laid on the bottom half of the Shroud and then the top half was taken over His head and overlapped at His feet. See above the front and back, head to head, image on the Shroud held by angels, with the anachronistic burn marks from a fire in 1532.”

      http://theshroudofturin.blogspot.co.uk/2012_10_01_archive.html

  7. December 12, 2014 at 5:33 pm

    Yes I see that, but it’s very subjective – I think this, you think that. I thought there was something about your colour spectrum adjustments that helped objectify your point.

    Hugh, color images are used only for more objective quantification of data -that’s why I recommend using them (instead of just viewing reliefs under various angles, which may be visually misleading).

The principle behind them is the same as behind the color scale on these astrophysical charts:

    (The caption is “Polarization of X-ray and gamma-ray Emission” , probably in significance values)

  8. anoxie
    December 12, 2014 at 6:01 pm

    CB: You’ve lost your credit (and all your eggs) defending your scorch hypothesis. Dwelling on another “pseudo-sweat imprint without being too specific”, for hundreds of posts and thousands of comments, will be endless and fruitless. Game is over.

  9. December 13, 2014 at 5:07 am

Should anyone here still be in any doubt as to the power of modern 3D-rendering programs to generate apparent 3D from entirely 2D imprints OR even centuries-old artwork, here’s what happens when one takes the TS image from that painting above, gives it some additional contrast, and then uploads it into ImageJ. (Charles: please note that the man’s imprint on the shroud has exactly the same monochrome colour as the 1532 burn marks – hardly what one would expect of an artistic depiction of the TS from the late 16th/early 17th century if the latter had been the work of an earlier artist).

  10. Stan Walker MD
    December 13, 2014 at 8:39 am

    Colin, Thanks for pointing this out. The painting is a facsimile of the shroud. Of course it will have some 3D qualities. You are inadvertently supporting the authenticity argument.

    • December 14, 2014 at 6:28 am

      “Inadvertently” you say Stan Walker MD?

      If you think that, then you’ve clearly lost the plot, and/or have not been following its twists and turns these last few months.

This painting above is, as eagle-eyed Hugh has confirmed, by one Giovanni Battista Della Rovere (1560-1621)

      https://it.wikipedia.org/wiki/Fiammenghini

      or, contradictory dates elsewhere, see:

      http://www.artic.edu/aic/collections/artwork/artist/Rovere,+Giovan+Battista+della

      It fits EITHER a pro-authenticity OR an anti-authenticity narrative.

      It all depends whether that TS image is an imprint of the body in question undergoing deposition from cross onto Joseph of Arimathea’s linen (lower half of picture)

      OR:

      a pseudo-sweat imprint created by a medieval forger to invoke the above scene.

      It’s the serendipity thing – to log on to this site as I did yesterday to find 16th/17th century paintings displaying the origin of the present TS as a blood AND (probably) sweat imprint too onto Joseph’s linen, whether a real historical event OR, as likely as not, a fanciful one (maybe a post-mortem echo of the legendary imprinting of the Veronica face-only image?). Christmas for me has come early this year, especially as Dan withheld the crucial top half of the first of the current crop of ‘Deposition’ pictures initially. (why Dan?).

      Am I not correct in thinking from the above picture that the double body image is being captured and imprinted BEFORE final preparations for interment, indeed even before arrival at the tomb, and that it is incorrect therefore to refer to Joseph’s bolt of linen as a burial shroud if it was intended merely as a temporary expedient, i.e. to assist with deposition from cross and transport to tomb? Sorry to have to repeat myself (see earlier ‘body bag’ posting).

      The scene in the above painting was in my considered opinion the chosen narrative that inspired, 350 or so years earlier, a highly elaborate but ultimately successful medieval HOAX.

      However that would not preclude pro-authenticity 1st century narratives, if one’s prepared, as so many here are, to dismiss out of hand the radiocarbon dating (which incidentally this blogger provisionally accepts, but would like to see confirmed, and indeed is appalled was never immediately extended beyond the corner chosen for initial testing).

      As I say, there was nothing “inadvertent” in what I wrote. I try not to do inadvertent, despite 2014 being the year I reached three score years and ten.

  11. December 13, 2014 at 9:01 am

    It looks good when

  12. December 13, 2014 at 9:05 am

    Sorry, mysteriously cut off too soon!

    Anyway, it looks good when you use the “original colours” option, but then, most things do, regardless of the quality of the 3D rendering. Looking at it using “greyscale” or one of OK’s spectrum options gives a fairer assessment, I think.

  13. Dan
    December 13, 2014 at 9:35 am

    But this may be a more accurate version of the painting:

  14. December 13, 2014 at 4:09 pm

    No tricks I assure you – just my usual routine with ImageJ. The 3D-enhanced image above was posted after cropping off the ImageJ settings. Here’s the same image with those settings. Click to enlarge.

Colin, you used 16-pixel smoothing for a 306×72 pixel image????!!!! YOU BASTARD!!! Do you even know what you are doing????!!!

    This:

    One big smeared stain!

    • December 13, 2014 at 4:23 pm

Kindly cease the name calling, OK, if you wish me to engage with you.

      For now I would point you to my studies from March 2012 – nearly 3 years ago – in which I normalized ImageJ settings on scorch imprints from 3D templates to get the best match with my templates.

      http://strawshredder.wordpress.com/2012/03/18/there-is-something-rather-special-and-unusual-about-this-image-of-the-man-on-the-shroud/

      Those same settings produced a very satisfactory result when applied to the TS.

      • December 13, 2014 at 4:27 pm

        Showing that you actually don’t understand anything.

        Doesn’t it ring a gaussian bell to you?

        • December 13, 2014 at 4:33 pm

          Give it a rest OK. Your ‘expertise’ with ImageJ is entirely subjective. Try normalizing imprints against their real-life subjects if you want to match my objectivity.

  15. December 13, 2014 at 4:49 pm

    Give it a rest OK. Your ‘expertise’ with ImageJ is entirely subjective. Try normalizing imprints against their real-life subjects if you want to match my objectivity.

    No Colin, your ‘objectivity’ means ignorance in this case. Like unfortunately most of other guys playing 3D in ImageJ not understanding what actually they are doing.

    I asked: doesn’t it ring a gaussian bell to you?

    So now a few illustrations for dummies:

    http://imageshack.com/i/f0QsE3Rhp

    Will make it 3D:

    http://imageshack.com/i/exImwrpkj

    Now go smoothing:

    http://imageshack.com/i/p1MGppSlj

Remember, as you use extensive smoothing you will always get a bell shape in the end. As the shape of the cloth draping the body was also similar, the effect looks more realistic, but actually it is MISLEADING.

    Remember this lesson once for a lifetime.
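The bell-shape effect is easy to reproduce in a few lines of Python, using a pure-Python gaussian blur as a toy stand-in for ImageJ’s smoothing (the 72-sample line and 8-sample feature are invented numbers echoing the 72-pixel-wide image discussed above):

```python
import math

def gaussian_blur_1d(signal, sigma):
    # Convolve with a normalized, truncated (±2 sigma) gaussian kernel.
    radius = int(2 * sigma)
    kernel = [math.exp(-x * x / (2.0 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

# A flat-topped 8-sample feature in a 72-sample line:
line = [0.0] * 72
for i in range(32, 40):
    line[i] = 1.0

mild = gaussian_blur_1d(line, sigma=1.0)    # plateau survives
heavy = gaussian_blur_1d(line, sigma=16.0)  # plateau gone: one broad, low bell
```

With a 16-sample sigma the flat top vanishes entirely; what remains is the shape of the kernel, not of the feature.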

    • December 13, 2014 at 4:50 pm

      • December 13, 2014 at 5:20 pm

        What planet are you on tonight, OK? Why are you showing us those ridiculous results with that black square? Why are you using those crazy settings that yield those results? What point are you trying to make? Is there any point at all, or are you simply trying to bamboozle us with your playtime antics?

To those on this site who continue to set store by commonsense, let me ask you a question. If you put a black square into a 3D-rendering program, what commonsense result might you expect to obtain?

OK, you’ve thought long enough and yes, you were right. You expect to get a black tile.

Well, I’ve just this minute put OK’s black square into ImageJ with MY sensible settings (see below), and guess what? I get a black tile.

        Sure, there’s a tiny bit of distortion at the corners, which may or may not disappear with a little tweaking.

        Here are the settings.

        Grid 512, Smoothing 16.0, Perspective 0, Lighting 0, Both boxes (xy and invert) left blank, Scale 2.02, z scale 0.15, Max 100%, Min 0%.

Note the sizeable number that have NOT been altered from ImageJ’s default values (and as I said earlier, I’ve been using ImageJ for close on 3 years, guided initially by my ‘normalization’ experiments to ensure that my choice of settings is not entirely subjective; not so with OK and his claimed ‘expertise’, which should fool no one).
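For what it’s worth, the black-tile result is just what convolution predicts: smoothing a constant region with a normalized kernel returns the constant, except near the borders. A one-dimensional sketch (ours, not an actual ImageJ run):

```python
import math

def gaussian_blur_1d(signal, sigma):
    # Convolve with a normalized, truncated (±2 sigma) gaussian kernel.
    radius = int(2 * sigma)
    kernel = [math.exp(-x * x / (2.0 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

flat = [1.0] * 100                     # one row of the "black tile"
smoothed = gaussian_blur_1d(flat, sigma=16.0)

# Interior samples are untouched; only the ends sag. The "tiny bit of
# distortion at the corners" is the kernel running off the edge of the data.
interior = smoothed[40:60]
edge = smoothed[0]
```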

        • December 13, 2014 at 5:24 pm

          Correction: I had to tick the Invert box to get the image to come out of the page. Leaving it unticked gives sunken relief.

        • December 13, 2014 at 5:46 pm

          Colin:

          What point are you trying to make? Is there any point at all, or are you simply trying to bamboozle us with your playtime antics?

          That you are doing everything WRONG all the time, and don’t understand the principles that rule this play.

          Here are the settings.

          Grid 512, Smoothing 16.0, Perspective 0, Lighting 0, Both boxes (xy and invert) left blank, Scale 2.02, z scale 0.15, Max 100%, Min 0%.

          Note the sizeable number that have NOT been altered from ImageJ’s default values (and as I said earlier, I’ve been using ImageJ for close on 3 years, guided initially by my ‘normalization’ experiments to ensure that my choice of settings is not entirely subjective

Maybe one choice of settings is satisfactory for some old chap who uses the one solution that worked fine for him his whole lifetime, but in these times, if you cannot adapt to the situation, you are finished.

          Or in other words: THERE IS NO ONE PERFECT SETTING of ImageJ values that can be used in every situation!

All depends on the picture you use: its greyscale limits, resolution, properties, etc. You need to adjust them every time to obtain good results, and, what’s more, undistorted results.

For example, in the case of the Battista image, you used smoothing with a 16-pixel-radius gaussian, while the image was only 72 pixels wide. What did you expect to obtain? You got a bell-shaped torso and hands, but actually this only reflects the shape of the convolving kernel, nothing more! If you turn the heat map on (where the true data are), all you can see is a smeared, meaningless stain!

Colin, do you know the rules of how this machine works? The smoothing is nothing more than applying a gaussian blur. Read about it yourself: http://en.wikipedia.org/wiki/Gaussian_blur

It is actually nothing more than convolving the image with a 2-D gaussian (I hope you still remember what convolution is). Look at the wiki to remind yourself what it is, paying attention especially to the animation:

          http://en.wikipedia.org/wiki/Convolution

        • December 13, 2014 at 5:58 pm

          ImageJ is best viewed as an empirical research tool, OK. It is not for you to go imposing your so-called rules, least of all on someone who has performed and published normalization studies that compare 3D-renderings with their parent templates.

          Goodnight to you.

  16. December 13, 2014 at 5:59 pm

OK, you must understand that expletives like that, in capital letters and accompanied by excessive punctuation marks, have little impact on scientists, particularly non-authenticists. However, they act as a powerful disincentive for the authenticist cause, as they suggest that your views are wholly subjective and have little basis in actual observation. A passionate faith in the Shroud is an excellent thing (although one wonders what its subject would make of such invective) but unless it is controlled, it loses any scientific credibility. Is it any wonder that authenticists find themselves fighting a rearguard battle when one of their more authoritative spokesmen can do no better than that?

    This is a pity, because you might have a valid point. However, as you have no idea what image Colin was using, your very precise “16-pixel smoothing for 306×72 pixel image” is mere guesswork.

    When I try to explore ImageJ I try to begin with the biggest image I can. I select the brighter of the della Rovere images above (I find a good one at http://4.bp.blogspot.com/_k-bPNZiL84A/SWTn3oN7FjI/AAAAAAAAATM/vwIRl6vP8lM/s1600-h/gbdrov.jpg) and enlarge it on my screen until the length of the Shroud almost fills the screen (I use a 13″ laptop). Taking a screengrab of that gives me an image of 1136 x 242 pixels. Feeding that into ImageJ and selecting the “Interactive 3D plot” option, I get a good 3D image with the following settings:
    Gridsize: 512, Smoothing: 4.0, Scale: 3.0, z-Scale: 0.1, Max: 100%, Min: 0%.

    I agree that this image may be an artifact of the software, especially as the painting seems to be completely flat and without shading. However, if I carry out exactly the same procedure with the Shroud (using the image at http://www.world-mysteries.com/sar_2wiki5.jpg), with a screengrab of 1168 x 292 pixels, and exactly the same settings, I get a remarkably similar picture. In neither picture did I make any adjustment to the colour, brightness, contrast or anything else.

    The truth is that it would obviously be best if the Smoothing was zero, but the “noise” over the Shroud makes it almost unrecognisable in that case.

    So is there any validity to your claim? Well, yes. I have also examined my two screen grabs with a Smoothing of zero, in which case the elevation above the background appears in both cases as a series of needlepoints, but this time I used the Spectrum LUT option, made the z-scale 0.5, and adjusted the Max and Min values in an attempt to discern different heights of the needlepoints. Almost all the della Rovere needlepoints appear the same height, while the needlepoints of the shroud image do indeed show a variety of heights, lower at the edges of the image and higher in the middle.

    Both the VP-8 and ImageJ seem to have a smoothing function, which inevitably gives a slope even to sharp edges. Even at zero smoothing, the needle points in ImageJ appear as pyramids rather than rectilinear blocks. What we don’t know is how much smoothing has been applied to the VP-8 image; the observed curve of the legs and arms might be as much due to smoothing as to genuine image intensity. But then again, it may not.
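    The point that smoothing invents slopes is easy to demonstrate. Below is a minimal Python sketch (scipy’s Gaussian filter is only a stand-in for whatever kernel the VP-8 or ImageJ actually applies): a hard-edged block acquires a gentle ramp that was never in the data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A perfectly sharp-edged "block": 0 outside, 100 inside.
img = np.zeros((60, 60))
img[20:40, 20:40] = 100.0

smoothed = gaussian_filter(img, sigma=4.0)

# Before smoothing, the edge jumps 0 -> 100 between adjacent pixels.
# Afterwards the transition is spread over many pixels: a slope that
# looks like genuine relief but is purely an artifact of the kernel.
row = smoothed[30]  # a cross-section through the middle of the block
ramp_pixels = int(np.sum((row > 5) & (row < 95)))
```

    Any observed curvature on the legs and arms could, in the same way, be partly the kernel rather than the image.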

    • December 13, 2014 at 6:39 pm

      Hugh:

      This is a pity, because you might have a valid point. However, as you have no idea what image Colin was using, your very precise “16-pixel smoothing for 306×72 pixel image” is mere guesswork.

      This is not guesswork! Those are exactly the parameters provided by Colin! I reconstructed Colin’s 3D, and I know that he is doing things completely wrong. Actually, not for the first time. As I observe this blog (and visit his own from time to time) I know that this is the constant way he does this (and not only he). Most people playing with ImageJ simply don’t understand what they are doing, and they set a high smoothing, which looks effective but is actually deceiving!

      That’s why I insist on using a color map.

      I agree that this image may be an artifact of the software, especially as the painting seems to be completely flat and without shading. However, if I carry out exactly the same procedure with the Shroud (using the image at http://www.world-mysteries.com/sar_2wiki5.jpg), with a screengrab of 1168 x 292 pixels, and exactly the same settings, I get a remarkably similar picture.

      Even with that picture, you cannot get much:

      It has 786×150 pixels, about twice as many as Colin’s. But that is still a minimum.

      Here you have 3D with 16-pixel smoothing:

      As you can see, it’s useless.

      Here is with 2-pixel smooth:

      Now it’s much better. But the image is of course flat, I agree. That’s what one should have expected.

      The truth is that it would obviously be best if the Smoothing was zero, but the “noise” over the Shroud makes it almost unrecognisable in that case.

      No, Hugh, zero smoothing is almost NEVER the best option. And actually, as the Shroud image is a halftone, smoothing is necessary; otherwise, had we a high enough resolution image and no smoothing, we would see no 3D effect at all, precisely because of the halftone effect. In a strict sense the Shroud image is flat, or rather a needle forest.

      What we should do is to use moderate smoothing to eliminate impurities, extreme pixels and so on.
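      The halftone argument can be made concrete with a toy model. The Bernoulli dithering below is only a crude stand-in for the fibre-level half-tone effect; all the numbers are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# A smooth left-to-right gradient, rendered as a "halftone": every
# pixel is either fully dark (0) or fully light (1), with the local
# probability of a light pixel set by the underlying gradient.
gradient = np.tile(np.linspace(0.1, 0.9, 200), (100, 1))
halftone = (rng.random(gradient.shape) < gradient).astype(float)

# At zero smoothing the surface is a "needle forest" of 0s and 1s,
# so there is no graded relief to see. Moderate smoothing (local
# averaging) recovers the underlying gradient as relief.
recovered = gaussian_filter(halftone, sigma=5.0)
left = recovered[:, :50].mean()    # dark side stays low
right = recovered[:, -50:].mean()  # bright side rises
```

      This is why some smoothing is unavoidable for a halftone image, while too much buries the information under the kernel.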

      I think I must at last make this presentation about the 3D properties of the Shroud. People must learn how to properly use those toys. Unfortunately I don’t have much free time now.

  17. December 13, 2014 at 8:32 pm

    I only follow some of this, OK. My images, as I said, were 1136 x 242 and 1168 x 292 pixels respectively. Obviously, the images themselves contain a certain amount of smoothing, as the alleged half-tone effect of the shroud has a “pixel-width” of a single fibre diameter, and anyway the shadows between the fibres would completely obliterate the image if a truly representative image were submitted to ImageJ. Although zero smoothing is not a good option for 3D visualisation, it is the only way of removing the artificial curvature to which you so objected in your black cube demonstration above, which is why I used it to explore the true variation in height of the image. By using moderate smoothing, as I said, both Shroud and della Rovere produce remarkably similar images, both quite “life-like”.

    • December 14, 2014 at 2:59 am

      Good morning Hugh.

      I don’t know about you, but I’m starting to tire of being beaten up by OK on this smoothing issue, as if it were some huge lapse in understanding of the “rules of the game”. (I also consider his standard recourse to thermal LUT mode instead of natural colour to be, as often as not, an unhelpful distraction, but that’s in passing.) As I’ve said before, there are no rules. The mere fact that a 2D square can be turned into a shallow tile or a skyscraper should be sufficient to tell folk there are no rules, only judgements as to when to stop the digital re-processing.

      My rule of thumb, based on comparisons between 3D-rendered images and scorch imprints AND their original templates, extended recently to comparisons between entirely 2D images (like that black square above, or my colour-coded/BW relief maps), is this: go for minimalist processing, and make no claims that the result is real relief, merely apparent relief.

      “Correct” smoothing settings? Well, I’ll say this for OK. He has introduced a handy term: “needle forest”. If one has no smoothing, then on some images (not all) there is a needle forest, due to height expansion being visible at the level of individual pixels. One can then carefully increase the smoothing by degrees so as to get rid of the needles, but go no further. That has been the way I use smoothing – empirically. Theoretical basis? Never mind theory. There is no theory – just common sense. The smoothing control is needed to bring digitized information, strings of 0s and 1s, into our everyday smoothed-off analogue world. It’s merely an extension of what we did in the pre-digital era with our experimental data points. If we had just a few, we would display them as a bar chart. If there were scores of readings, then we felt justified in drawing a curve linking up the mid-points of the bars, i.e. smoothing the data, converting it from discrete to continuous.

      Here, for what it’s worth, is a suggestion to all those, OK especially, who make regular use of ImageJ. Adopt as one’s internal standard a 2D graphic that has no 3D history. Always show what one’s current settings do to that 2D control. Is the APPARENT 3D-rendered result sensible? Is it minimalist? My 3D renderings are always minimalist these days, for the simple reason that I keep my settings as close as possible to ImageJ’s defaults, and when in doubt I ALWAYS run a check against some kind of internal standard.

      OK’s suggestion that I am a slave to a particular set of parameters is simply not true. It is total misrepresentation, especially with that allusion to my age. ImageJ stores one’s most recent settings and applies them initially when one returns days or weeks later with a new image. I inspect the result at those old settings, note their values, and then proceed to experiment with new settings as if using ImageJ for the very first time. It’s second nature for someone with a lifetime’s experience in scientific research to operate in this fashion, and to keep a record of all settings on the off-chance that someone will ask how one obtained one’s results, as TH did earlier, to which I responded with the relevant numbers.

      As I said earlier, ImageJ is best viewed as an empirical (“suck it and see”) research tool. There is no need to portray it, as OK does, as some kind of secret garden open only to those who’ve passed his initiation tests. It’s there for everyone to use – cautiously, and with plentiful screen shots that capture one’s control settings for all to see or request.

  18. December 14, 2014 at 6:18 am

    In short, 16-pixel smoothing on a 1000-pixel-wide picture is safe, but on a 72-pixel-wide one it is not.

    The point is not to remove the influence of the convolving Gaussian kernel entirely, only not to make it a dominant feature of the 3D reconstruction - which happens if you use relatively high smoothing settings compared to the image size. A smoothing scale of about 1/100 of the original image size is usually good, but 1/10 and higher is not.
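    That rule of thumb is simple enough to write down. The function and the exact thresholds below are illustrative only, not anything built into ImageJ.

```python
def smoothing_verdict(smoothing_px, image_width_px):
    """Judge a smoothing setting against the rule of thumb:
    roughly 1/100 of the image width is usually safe; 1/10 or
    more lets the kernel dominate the reconstruction."""
    ratio = smoothing_px / image_width_px
    if ratio <= 0.02:          # within the ~1/100 ballpark
        return "safe"
    if ratio < 0.1:            # grey zone: inspect the result
        return "caution"
    return "kernel-dominated"  # the 3D shows the kernel, not the image

print(smoothing_verdict(16, 1000))  # -> safe (1.6% of the width)
print(smoothing_verdict(16, 72))    # -> kernel-dominated (22% of the width)
```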

    And I still think it is best visualised using thermal LUT mode. Load it, and play with the smoothing. When you see your image reduced to a smeared stain, you know the smoothing setting is too high.

  19. Thibault HEIMBURGER
    December 14, 2014 at 3:09 pm

    https://app.box.com/s/uxg1xps16i4ithba4bk9

    Now I better understand the problem. Thanks to you, OK and Colin.

    Look at the original “Rovere 1625 couleur” picture.
    You can see that the color density is almost the same in all parts of the body image.
    Nevertheless, using ImageJ, you can obtain a 3D rendering.

    The same is true for the negative image of the original image in grey-scale.

    What does it mean?
    It means that, using ImageJ, you can obtain a pseudo-3D from any kind of image.

    Why?
    How is it that, using ImageJ, one can obtain a 3D rendering from a monochrome painting with sharp contours?

    I think I have the answer and that OK is right.

    • anoxie
      December 14, 2014 at 3:37 pm

      I think we’ve already talked about misusing imageJ, 3D and smoothing:
      https://shroudstory.com/2014/06/05/everymans-vp8/

    • December 14, 2014 at 3:46 pm

      “How is it that, using ImageJ, one can obtain a 3D rendering from a monochrome painting with sharp contours?

      I think I have the answer and that OK is right.”

      I hate to mention it TH, but I showed many moons ago that one could do a crude monochrome charcoal sketch of the TS, and obtain a TS-like 3D response in ImageJ.

      http://shroudofturinwithoutallthehype.wordpress.com/2012/05/02/a-scientists-eye-view-of-how-the-iconic-turin-shroud-image-could-have-come-into-being-a-happy-accident-of-thermographic-and-photographic-inversion/

      Whether it’s as good or not as the TS, which has had days or weeks of intense image enhancement work – not minutes – is hardly the issue. It’s the principle that counts.

      As I say, I worked on a sketch, a drawing, not a painting, but again it’s the principle that counts. We used to be told with monotonous frequency that the TS image was uniquely responsive to 3D-rendering programs, that photographs did not respond, that paintings did not respond, that imprints did not respond.

      Yesterday I showed that a photograph of an early 17th century painting depicting the TS imprint shows a lively response to ImageJ, and what’s more using minimalist changes from ImageJ default settings.

      So I’m intrigued to know what’s up your sleeve right now, such that you and OK are right. Right about what? Does that make me wrong? If so, please don’t keep me in suspense. Kindly tell me this side of Christmas where it is I’m supposed to have gone wrong.

      • Thibault HEIMBURGER
        December 15, 2014 at 5:10 pm

        Colin,

        Sorry, but I can’t answer now (too much professional work).
        More later at the end of the week.

      • anoxie
        December 16, 2014 at 1:19 am

        I think i have the answer: it is mickey mouse science.

        I’ve recently browsed your blog, in 2009 you talked about a big crunch: it is mickey mouse science.

        People got a Nobel Prize for proving the universe is expanding and testing their theories.

        Should i dive deeper into your blog?

  20. December 14, 2014 at 5:22 pm

    Have a look at this. http://i.imgur.com/WI7Obvl.png. It illustrates a considerably more accurate version of a 2D image forming a 3D face than the Shroud. It is simply not true that the Shroud is a “good” subject for this kind of interpretation, looking, as it does, so much more like a bas-relief than a realistic full-depth model of a head when imaged in ImageJ. Comments welcome!

    • December 14, 2014 at 5:42 pm

      That’s a truly beautiful rendering, if somewhat minimalist Hugh. Be on your guard, however. Judging by a previous comment a short while ago, you’ll be told that the 3D effect is simply due to sharp contours, and is thus totally irrelevant to the TS.

      In fact, anyone with access to MS Paint and ImageJ can knock that one straight on the head. Paint has an airbrush tool, which one can use to create a fuzzy contour map as a model for the TS. Enter that into ImageJ and it behaves exactly like any other mapping of variable image density.

      Here’s what I saw a few minutes ago, shown here with two levels of smoothing, zero and something higher and more sensible. One has to expect some flak for cavalier use of that image-manipulating tool!

    • December 14, 2014 at 6:48 pm

      Hugh, Colin.

      Obviously, with modern software, some knowledge of the Shroud’s properties, and a clear idea of what one wants to obtain, making an artificial computer image of the face with 3D rendering is easy - but you have all those technical aids, which allow you to “cheat”.

      It is not that the Shroud has some miraculous property that allows only itself to be 3D-enhancable. No. Modern printed replicas of the Shroud also have this property inherited from the original (see below):

      It is possible in theory to obtain 3D relief of a face or body via ordinary painting, or photography. I never denied this. Yet in practice it is extremely hard, so no satisfactory results have been obtained so far (the other thing is that there was no purpose for that). The problem gets more complicated if you want to obtain a subtle, low-contrast image with 3D and negative properties. Simply, as you have only your eyes at your disposal (like the medievals), this task is virtually beyond human capabilities.

      The problem only complicates further when you use some tricks, making bas-relief impressions, experimenting with scorches, acids or other chemicals (and so on), thus in fact introducing more parameters, with the result that the space for making errors only increases.

      The principle is very simple, but implementation very hard.

      More tomorrow.

    • December 14, 2014 at 11:20 pm

      Hugh, Colin,

      1) Smoothing the image is cheating with the settings, because it definitely helps create a smooth variation of the colour, which is exactly what the supposed artist would need to do to get the 3D effect we see on the Shroud. No modification should be done to the original image before attempting to generate a 3D interpretation.

      Obviously, we can generate computer images that respond well to 3D effect, but that’s irrelevant regarding the uniqueness of the Shroud.

      For example, on the production of the 3D anaglyph on Shroud Scope (http://goo.gl/Bn3osf) there is no smoothing transformation applied. The most basic operations were done to generate it and they are fully described so that they can be applied again to any photograph.

      2) What is relevant - the real challenge - is finding a 4th-16th century painting or drawing that responds to as much as what we can see on the Shroud, in particular the details seen in the face, using the same technique (no smoothing, direct use of the image).

      3) Perhaps reading what Yves Delage wrote more than 100 years ago about the image of the Shroud (see the last posting at http://www.sindonology.org) could shed some light on what can readily be perceived on the Shroud, for which no other equivalent 4th-16th century paintings can be found.

  21. December 14, 2014 at 6:42 pm

    How are you getting images onto this blog? I can upload mine to imgur, but I can’t be sure what formula to put in a comment to get the pictures here directly.

    • December 14, 2014 at 6:48 pm

      Just a direct link to images.

    • Dan
      December 15, 2014 at 2:48 am

      On a line with no other text simply copy and paste a complete link to an image. (e.g. htt…….jpg). You won’t see the result until the comment is published.

  22. Kelly Kearse
    December 14, 2014 at 7:48 pm

    HF wrote:
    “Have a look at this. http://i.imgur.com/WI7Obvl.png. It illustrates a considerably more accurate version of a 2D image forming a 3D face than the Shroud. It is simply not true that the Shroud is a “good” subject for this kind of interpretation, looking, as it does, so much more like a bas-relief than a realistic full-depth model of a head when imaged in ImageJ. Comments welcome!”

    Comments:

    I think it should primarily be considered that however the Shroud was created, the end objective was not to create a type of (accurate, detailed) 3-D projection when analyzed by modern technology. It’s easy to work backwards and find conditions that render more appealing images, because the end result is the goal.

    I visited Kevin Moran one summer (who has a VP-8 in his garage) and brought along a few photos to try it out. I tried to choose those that had prominent contrast, my top pick was the CD booklet/cover of the first Beatles album, Meet the Beatles, featuring black & white head shots. 3-D features were apparent (John was the best, the other three, in my opinion were mediocre-don’t know if they could be identified without knowing who they were), but it wasn’t great-some distortion in each of the four was evident. The Shroud photos we placed looked identical to those in Heller’s book, collectively more detail was maintained throughout & more 3-D effect was obvious.

    The caption under the “control” photo of William Ercoline in Heller’s book states regarding “gross distortion of all features and the two-dimensional quality” of other photos, that “the only exception is the Shroud”. Overhyped? I suppose it may be a matter of opinion-clearly, you can see 3-D effects of some sort in various artworks (bas-relief, painting) using this technique. I took it to mean that the degree of detail is what makes it “unique”, not that it is an “absolute”, one of a kind in possessing any aspect of this characteristic. I think most would agree if it isn’t a “unique” property, it’s certainly an interesting one.

    Difficult to imagine an artist, at least a painter, taking this into consideration, or even its occurring via coincidence, particularly to such detail-of course, it may have only resulted after the paint had (relatively uniformly) flaked off, the true appearance of the original being down the rabbit hole as it were; my own opinion is that it points more strongly to the cloth covering a three-dimensional form (whatever one thinks this is). Using software to convert light/dark flat circles into conical towers is impressive-it certainly isn’t apparent just looking at the flat shape, but I think even a skilled artist working a priori and with hand held equipment would have screwed it up somewhere, at some point-in the face, along the length of the body and some distortion/misalignment would be apparent.

  23. December 15, 2014 at 3:07 am

    I’ve now said all I wish to say on this thread. If in spite of all the evidence to the contrary, folk still wish to maintain that the TS image has unique 3D properties, then let them. There’s no law against it, and this site is clearly the best place to show that “uniqueness” flag, given the rallying cry one still finds in the blog margin (“But no one has created images that match the chemistry, peculiar superficiality and profoundly mysterious three-dimensional information content of the images on the Shroud.”)

    I now have other more important matters to attend to, like clearing the backlog of DVD viewing of last year’s Christmas presents, “Breaking Bad” especially (good, innit?) before new ones arrive.

    • Dan
      December 15, 2014 at 4:09 am

      Best TV series ever made. My wife and I binge-watched the whole thing.

      • December 15, 2014 at 4:42 am

        Agreed Dan. I’m watching 3 episodes per day, and am amazed at the way they’ve kept the storyline going, with a myriad of interacting subplots. It helps that we’re both of us attuned to the genre from Shroudology, needless to say.

  24. December 15, 2014 at 8:38 am

    Thanks Mario and Kelly.

    My image was, of course quite crude, having only a school lab, a projector and a pupil to work with, so the smoothing was useful to create the realistic profile. The ‘contour lines’ are perhaps 5mm apart. Had they been 0.5mm apart, the face would have been much more recognisable and smoothing would have been unnecessary. However, what I was showing was that it is possible to create a 2D brightness/elevation image which, when converted with appropriate software, actually looks like a head. The Shroud image cannot be made to do this, as it stands; it is far too flat.

    There are possible ways round this, as Mario has experimented with. If, for example, the Shroud 3D image were printed out by a 3D printer in rubber, and the flexible sheet so printed were then curved (both longitudinally and latitudinally) over a solid oval template, then the resulting shape might resemble a head a little more. I imagine this is similar to what Soons and Downing were attempting digitally. There would be distortion of a different kind, but the overall shape would be a bit more realistic. This would be explained by authenticists in terms of the image being formed while the cloth was draped over the body, and not, as per Jackson and Piczek, horizontal. However, then many of the distance measurements would be wrong, and the head would have far too small a diameter to be a realistic image of a real person.

    You are of course perfectly correct that it is easy to do all this with hindsight, and that even if it were a medieval creation, it is unlikely that the artist was thinking in terms of 2D-3D image visualisation (or, for that matter, the power of the negative image), but it is sometimes assumed that the “incredible accuracy” of the 3D possibility means that it would have been “impossible” for an artist to produce such an image. In fact, as we see, there is no “incredible accuracy” on that score, and so an artist cannot be ruled out on those grounds alone. There are, I agree, no contemporary parallels, but if a 13th/14th century artist did envisage a sheet covering a body and did create an image based on how he thought such an image may have come about, the rather crude and unrealistic 3D effect we actually see is just what he might have come up with.

    If, on the other hand, the shroud was created by some form of emanation from a dead body, then so far (and Mario may well come up with an acceptable solution some day) the 3D image cannot be reconciled with any brightness/distance correlation yet proposed, without being considerably subjectively distorted to fit the preconception. I’m afraid I find Kelly’s last comment ironic. “I think even a skilled artist working a priori and with hand held equipment would have screwed it up somewhere, at some point-in the face, along the length of the body and some distortion/misalignment would be apparent.” To me, he did screw up, he did fail to appreciate a true brightness/distance correlation, and there is considerable distortion and misalignment, both in the face and along the length of the body.

    • Mario Latendresse
      December 16, 2014 at 1:35 am

      Hugh, “To me, he did screw up, he did fail to appreciate a true brightness/distance correlation, and there is considerable distortion and misalignment, both in the face and along the length of the body.”

      Could you identify some of these places where brightness/distance correlation is not correct according to you? Considerable misalignment, distortion. Where? How did you conclude they are misalignment and distortion? I am not saying there is none, but I am curious to know which ones you located and how you came to that conclusion. Thanks.

      • December 16, 2014 at 8:19 am

        Thanks, Mario.

        If I were an artist trying to represent a true body lying under a shroud, in terms of a brightness/distance correlation, I would be aiming to create something which (even though the technology to do so would not be invented for another 600 years!) would result in something like the top image if it were magically translated into a sculpture. If I ended up with something like the bottom image, I would realise I had not appreciated the true brightness/distance correlation, as the whole thing looks far too flat, more like a bas-relief than a full sculpture.

        I have great respect for your paper “The Turin Shroud Was Not Flattened”, which I think illustrates that human body measurements are not badly distorted if a sheet is assumed to be draped more or less as flat as it could be, but left to settle under gravity. However, you do not address the brightness/distance correlation in that paper, by which all points of contact would be equally bright, and the sides of the cheeks (for example) would appear as high as the bridge of the nose. I think you were thinking of undertaking such a study, but I have not heard how it progressed.

        Given the wonders of digital imaging, it should be possible to create a digital body (rather as Ray Downing has done), drape, wrap or lay a digital Shroud over it (which appears in several animations), and then calculate, convert into a coloured dot and plot, the distance between the body and the sheet in hundreds of places. Would the image so obtained resemble the Shroud image?
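        A one-dimensional toy version of that calculation fits in a few lines of Python. Every shape and the brightness law below are invented assumptions, purely to show the mechanics, not any proposed image-formation model.

```python
import numpy as np

# A "body" profile and a "cloth" settling over it, in cross-section.
x = np.linspace(-1.0, 1.0, 200)
body = np.maximum(0.0, 0.5 * np.cos(np.pi * x / 2))  # a rounded profile
cloth = np.maximum(body, 0.15)  # the cloth cannot sag below some level

# Cloth-body distance, converted to brightness by an assumed
# exponential fall-off (one of many proposed correlations).
distance = cloth - body
brightness = np.exp(-distance / 0.1)

# Every contact point (cloth touching body) comes out maximally
# bright, regardless of how high the underlying feature is.
```

        Extending this to 2D, with a physically simulated drape instead of a one-line `np.maximum`, is precisely the hard part.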

        • Mario Latendresse
          December 17, 2014 at 12:06 am

          Hugh, thanks for the images. I think I understand what you mean.

          The Shroud image appears to have been substantially smoothed, but that is a peripheral point.

          Interestingly, if an artist had produced the first image, he/she would have made a major blunder. The second image appears to be the correct one, and here is the reason.

          The cloth is loosely draping the body. So, the cloth falls on each side of the head getting closer to the cheeks and the hair (top and sides). They should therefore appear closer, and they do on the second image. The way the cloth is draping the body must be taken into account, which the first image does not do.

          So, I would say that, at first approximation without going through a lot of precise measurements, the Shroud image appears as the real correct one.

          You wrote: “I have a great respect for your own paper “The Turin Shroud Was Not Flattened” which I think illustrates that human body measurements are not badly distorted if a sheet is assumed to be draped, more or less as flat as it could be, but left to settle under gravity, but you do not address the brightness/distance correlation in that paper, by which all points of contact would be equally bright, and the sides of the cheeks (for example) would appear as high as the bridge of the nose. I think you were thinking of undertaking such a study, but have not heard how it progressed.”

          Thanks for the compliment about my paper.

          This is correct: I do not address the brightness/distance correlation in a precise manner because it requires more advanced techniques than those presented in that paper. I could calculate the horizontal flattening of what was underneath the cloth with the technique used in that paper, but not the vertical cloth/body distance in a very precise way. Yes, I am planning to undertake such a study, measuring this distance given some real cloth draped over a body, and also simulating image formation using different diffusion assumptions.

          A body and cloth could be computer-simulated; this is one clear approach. Cloth simulation is very tricky, and only recently have simulation techniques improved to the point of physical realism. The advantage of cloth simulation is the possibility of a large number of trials and variations of cloth draping, observing the resulting image each time. Using a real body and cloth should also be done, to compare with the results of the simulations. I have done some 3D scanning. This work is in progress. Lots of code needs to be written to make this happen.

  25. Kelly Kearse
    December 15, 2014 at 10:11 am

    HF wrote:
    “I’m afraid I find Kelly’s last comment ironic. “I think even a skilled artist working a priori and with hand held equipment would have screwed it up somewhere, at some point-in the face, along the length of the body and some distortion/misalignment would be apparent.” To me, he did screw up, he did fail to appreciate a true brightness/distance correlation, and there is considerable distortion and misalignment, both in the face and along the length of the body.”

    Problem is, how do you know what’s considerable? It’s extremely subjective in nature. Moreover, by any of the proposed authentic mechanisms one wishes to believe in (Maillard, energy/light, other), do you really know that “considerable distortion & misalignment” would not exist in any of these cases? What would those look like? How does one know?

  26. anoxie
    December 15, 2014 at 10:30 am

    Hugh Farey:
    “the 3D image cannot be reconciled with any brightness/distance correlation yet proposed, without being considerably subjectively distorted to fit the preconception.”

    This is not correct.

    But this is a critical point, the body/3D template is missing, we have to work backward to re-model its shape.

    Let’s think of a lost bronze statue you’d like to rebuild, all you have left are old black and white photographs (uncontrolled light, orientation).

    What’s the method? What are the assumptions?

    And what’s the difference with the Shroud image?

    • December 15, 2014 at 12:09 pm

      You may be both right, Kelly and anoxie. That’s why I said that “so far” nothing has really fitted the bill. Somebody may come up with something, and I’d put my money on Mario, if I had any!

      • anoxie
        December 15, 2014 at 12:38 pm

        It was a question, but actually, Mario’s answers are welcome as well.

  27. December 16, 2014 at 3:53 am

    Big Crunch ? (see obnoxie above)

    Oh dear, we are scraping the barrel if some musings on my site 5 years ago as to what might cause a singularity to explode are now to be the acid test of the site’s scientific soundness.

    Maybe my understanding of cosmology was overly influenced by those earlier Dr.Who TV series (with expanding and contracting Universe in the opening and closing credits respectively). ;-)

    Maybe it now needs a little updating to take account of dark energy, the cosmological constant etc etc.

    However, where science is concerned, some might consider it unscientific for obnoxie to imagine that permanent expansion is a proven fact. It’s not. See this passage from wiki: “Big Crunch” (my italics).

    “Recent experimental evidence (namely the observation of distant supernovae as standard candles, and the well-resolved mapping of the cosmic microwave background) has led to speculation that the expansion of the universe is not being slowed down by gravity but rather accelerating. However, since the nature of the dark energy that is postulated to drive the acceleration is unknown, it is still possible (though not observationally supported as of today) that it might eventually reverse sign and cause a collapse.”

    • December 16, 2014 at 4:37 am

      PS: Have just this minute re-read the posting in question:

      http://colinb-sciencebuzz.blogspot.fr/2009/10/can-entropy-decrease-in-big-crunch.html

      There’s no need to retract a single word: the newer thinking re a continually-expanding Universe was acknowledged right at the beginning.

      In any case, the question addressed was not so much about the inevitability or otherwise of a Big Crunch. It was to do with the entropy problem that arises if one embraces the idea of contraction, as distinct from expansion, given it’s only expansion with its capacity for constant energy dissipation that intuitively obeys the Second Law of Thermodynamics (increasing entropy).

    • December 16, 2014 at 7:43 am

      Colin:

      http://arxiv.org/abs/1303.5062

      http://arxiv.org/abs/1303.5076

      So far, Einstein and FLRW rule; several alternative theories are lying down. The Universe is accelerating and flat.

      The other thing is that the probability of obtaining a flat Universe is exactly ZERO.

      So small deviations in new observation, and still Big Crunch may happen.

      • anoxie
        December 16, 2014 at 12:26 pm

        O.K.
        “So small deviations in new observation, and still Big Crunch may happen.”

        No, the margin is large enough, no big crunch ahead.

        Colin, the problem is when you write:
        “Will the Universe go on expanding for ever? If you believe in Dark Matter and Dark Energy, then the answer is probably yes. But so far, neither of those hypothetical entities has yet been detected. ”

        “Believers” are scientists, scientists who obtained the consensus and got their theory confirmed with extraordinary precision.

        I don’t care about your quote from wiki; even if 99% of scientists accept the current theory, you’ll always find an outcast claiming a big crunch is possible “though not observationally supported”.

        “Though not observationally supported,” anything may happen!!!

        I think that’s all your scorch theory is about… “not observationally supported”.

        • December 16, 2014 at 1:24 pm

          anoxie, cosmology is a field that is, say, 50% science and 50% philosophy, so our philosophical assumptions influence our conclusions. A similar issue arises with the Shroud.

          So far Einstein’s general relativity and the FLRW model are holding up well against rivals -but is that the ultimate description of the Universe as a whole? Are our assumptions correct? Remember that we don’t know the true nature of Dark Matter and Dark Energy -those are simply parameters of the model ‘taken out of whole cloth’. With certain values adopted they fit observations well, but what they are in reality, Heaven (or Hell) only knows.

          See also this:
          http://mnras.oxfordjournals.org/content/397/1/431.full.pdf+html

          So far it seems our Universe is flat. But this is a borderline situation, like hitting exactly zero out of the whole set of real numbers. A slight deviation on the minus side, and we have the open, hyperbolic geometry of an infinite Universe. A slight deviation on the plus side, and we have a closed, finite, spherical Universe with a Big Crunch.

        • anoxie
          December 16, 2014 at 2:45 pm

          O.K.
          “So small deviations in new observation, and still Big Crunch may happen.”

          No.

          Quote from your link: “Constraints on the total energy density of the Universe, OmegaTtot, have improved spectacularly in the last two decades”.

          Planck has improved these constraints, no big crunch ahead, this is precision cosmology.

          No borderline situation.

  28. December 16, 2014 at 3:00 pm

    anoxie wrote: “No. Planck has improved these constraints, no big crunch ahead, this is precision cosmology. No borderline situation.”

    anoxie, this is something different. It seems you don’t understand the FLRW models (no problem with that).

    In short there are three classes of FLRW metrics (see http://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric#Hyperspherical_coordinates )

    1) k > 0 (which can always be rescaled to k = +1): a finite, closed Universe, with a Big Crunch
    2) k < 0 (which can always be rescaled to k = -1): an infinite, open Universe, no Big Crunch
    3) k = 0 (EXACTLY ZERO): a flat, infinite Universe, no Big Crunch.

    So far measurements indicate that k = 0, with a lower and lower margin of error. But unless k is EXACTLY zero, both scenarios 1) and 2) remain possible. If k were exactly 0, that would be an extreme coincidence (with, as I said, exactly ZERO probability of random occurrence), indicating that the Universe is tuned by some Higher Power, however defined (God, or some higher laws of physics, whichever you prefer).

    Planck, WMAP and other measurements can improve the constraints on our cosmological parameters, assuming that our cosmological model is correct (so far it passes all the tests successfully) -but they cannot assure us with 100% certainty that the predictions of those models are correct. Especially as there is the possibility of some unknown physics, impossible to test experimentally on Earth, coming into action.
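    The three-way classification above can be written out mechanically. A minimal sketch in Python (the function name and the tolerance are my own; it encodes only the matter-dominated FLRW picture described in this comment, ignoring dark energy, which changes the fate of a closed universe):

```python
def flrw_class(omega_tot: float, eps: float = 1e-6) -> str:
    """Classify FLRW geometry from the total density parameter Omega_tot.

    Omega_k = 1 - Omega_tot sets the curvature sign:
      Omega_tot > 1 -> k = +1: closed, finite universe (Big Crunch in the
                       matter-only picture discussed above)
      Omega_tot < 1 -> k = -1: open, infinite universe, no Big Crunch
      Omega_tot = 1 -> k = 0:  flat, infinite universe, no Big Crunch
    """
    omega_k = 1.0 - omega_tot
    if abs(omega_k) < eps:   # within tolerance of exactly flat
        return "flat (k = 0): no Big Crunch"
    if omega_k < 0:
        return "closed (k = +1): Big Crunch"
    return "open (k = -1): no Big Crunch"

# Measurements can only ever pin Omega_tot inside an error bar around 1,
# so both non-flat outcomes stay observationally possible:
print(flrw_class(1.01))  # closed
print(flrw_class(0.99))  # open
print(flrw_class(1.0))   # flat
```

    The tolerance `eps` stands in for the observational error bar: no matter how small it gets, values on either side of 1 remain inside it, which is exactly the "borderline" point being argued here.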

    • anoxie
      December 16, 2014 at 4:12 pm

      “3) with k=0 (EXACTLY ZERO) that is flat, infinite Universe, no Big Crunch.”
      O.K., this is boring; I perfectly understand the FLRW models and don’t need wiki quotes.

      “Perhaps the most remarkable possibility is that a vanishing or negative local curvature (ΩK ≡ 1 − Ωtot ≥ 0) does not necessarily mean that our Universe is infinite. Indeed we can still be living in a universe of finite volume due to the global topological multi-connectivity of space, even if described by the flat or hyperbolic FRW solutions.”

      http://planck.caltech.edu/pub/2013results/Planck_2013_results_26.pdf

      Planck, Hubble (and other experiments) have paved the way to precision astronomy, universe is accelerating, no big crunch ahead.

      • December 16, 2014 at 4:25 pm

        Perhaps the most remarkable possibility is that a vanishing or negative local curvature (ΩK ≡ 1 − Ωtot ≥ 0) does not necessarily mean that our Universe is infinite.

        Perhaps, or perhaps there is still eventuality of Big Crunch. We simply don’t know. And we can end this off-topic discussion at this point.

        • anoxie
          December 16, 2014 at 4:42 pm

          Eventuality of Big Crunch “not observationally supported”.

          Build a model, fit the data, and we’ll re open this topic.
