The news about AI heated up this past week. Even Michael Smerconish devoted a Saturday morning segment to it on CNN, "Software engineer who warned about AI: 'It's a runaway train'" (don't bother). I was reading an article suggesting that ChatGPT would be very careful with its answer if your question hinted in any way that you knew the answer.
So, I asked, "Was the VP-8 Image Analyzer an impetus for STURP?"
ChatGPT answered:
No, the VP8 image analyzer was not the impetus for the STRUP examination of the Shroud of Turin. The STRUP examination was a scientific investigation of the shroud that took place in 1978, and the VP8 image analyzer was used as part of that investigation to analyze photographs of the shroud. The VP8 analyzer was developed by Dr. John Jackson, a physicist who was part of the investigation team, and it was used to create a 3D image of the shroud’s surface. The image was then analyzed to try to determine the characteristics of the body that had been wrapped in the shroud. However, the STRUP examination was prompted by a desire to study the shroud using modern scientific methods, and to try to determine its origins and authenticity. The examination included a variety of tests and analyses, including radiocarbon dating, blood analysis, and pollen analysis.
An article in The Verge puts it this way:
What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it.
Vincent, J. (2023, February 17). Introducing the AI Mirror Test, which very smart people keep failing. The Verge. https://www.theverge.com/2023/2/17/22289673/ai-mirror-test-artificial-intelligence-chatbot-consciousness
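To make the "autocomplete" point concrete, here is a toy sketch of the underlying statistical idea. This is not how ChatGPT actually works (real chatbots are large neural networks trained on billions of documents, not lookup tables), and the tiny corpus and the `autocomplete` function below are invented purely for illustration:

```python
# A toy "autocomplete": a bigram model over a tiny, made-up corpus.
# Illustrative only; real chatbots are neural networks, not count tables.
import random
from collections import Counter, defaultdict

# A stand-in for the "motley aggregate" of scraped text (invented here).
corpus = (
    "the shroud image is a mystery . the shroud image was photographed . "
    "the image was analyzed with the vp-8 analyzer . the analyzer showed "
    "three dimensional properties . the shroud was examined in 1978 ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(start, length=12, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(autocomplete("the"))
```

The output sounds fluent because it mirrors the statistics of whatever text it was fed, but it has no notion of which statements are true, which is exactly why the answer about the VP-8 above reads so confidently while getting the facts wrong.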
Notice, and this is really true, that blogs, Twitter and TikTok rants, and long-winded conspiracy theories about the KGB hacking the carbon-14 tests of the Shroud get equal treatment with papers in ethical peer-reviewed journals (and with pay-to-publish pseudo-journals). In fact, this posting probably reinforces the errors.
Here is what ChatGPT had to say for itself:
As an AI language model, ChatGPT was trained on a large corpus of text data that includes a variety of sources such as books, websites, scientific articles, and other types of documents. It is designed to analyze and learn from patterns in the text it encounters, regardless of the source or format.
While peer-reviewed journals are typically considered more credible and reliable sources of information in the scientific community, ChatGPT’s training data is likely to include a wide range of sources to ensure it can handle diverse types of information and writing styles. This means that ChatGPT is not biased towards or against any particular type of source, including peer-reviewed journals, but rather considers the information contained in the text it encounters.
They say that ChatGPT is woven from the fabric of everything we have published. Algorithmic fabrication is more like it.