The reliability of AI as a fact-checking tool is questionable, as demonstrated by a recent incident involving the author's book dedication. The author, who regularly searches for their own name online, discovered that an AI called Grok had attributed to their book a dedication they never wrote: it was addressed to characters from the movie Frozen, and the author does not have multiple children. The author then asked other AI systems, including Google, Copilot, ChatGPT, and Claude, whether they could report the dedication accurately. Only Claude, the AI from Anthropic, avoided giving a wrong answer, admitting instead that it had no reliable information on the question.
The author ran the same test on other book dedications, including a book by Richard Kadrey and Daniel H. Wilson. In each case, the AI systems gave incorrect answers, either misattributing the dedication or fabricating one entirely. This deepens the concern about relying on AI for fact-checking, even for established facts that are easy to verify.
The author concludes that AI should not be used as a search engine or a source of factual information, since it so frequently gets things wrong. Users should either double-check every fact an AI provides or cut out the middleman and skip the AI altogether. The author dedicates the essay to those who take these lessons to heart and do not trust AI to tell them things.