Posts: 0 · Comments: 470 · Joined: 2 yr. ago

  • I just asked ChatGPT-4 that exact question, copied and pasted, and here is its response:

    There was no president of the United States in 3500 BC. The United States of America did not exist as a country until 1776, when the Declaration of Independence was signed.¹ The first president of the United States was George Washington, who took office in 1789.² Before that, the territory that is now the United States was inhabited by various Native American tribes and colonized by European powers such as Spain, France, and Great Britain.³
    ¹: Declaration of Independence
    ²: List of presidents of the United States - Wikipedia
    ³: History of the United States - Wikipedia

    Source: Conversation with Bing, 2/27/2024
    (1) List of presidents of the United States - Wikipedia. https://en.wikipedia.org/wiki/List_of_presidents_of_the_United_States.
    (2) List of presidents of the United States - Simple English Wikipedia, the .... https://simple.wikipedia.org/wiki/List_of_presidents_of_the_United_States.
    (3) President of the United States - Wikipedia. https://en.wikipedia.org/wiki/President_of_the_United_States.
    (4) List of presidents of the United States - Wikipedia. https://en.wikipedia.org/wiki/List_of_presidents_of_the_United_States.
    (5) List of presidents of the United States - Wikipedia. https://en.wikipedia.org/wiki/List_of_presidents_of_the_United_States.
    (6) List of U.S. Presidents in Chronological Order - HistoryNet. https://www.historynet.com/us-presidents/.
    (7) Getty Images. https://www.gettyimages.com/detail/photo/seal-of-the-president-of-the-united-states-royalty-free-image/1084903580.

  • That's a fun story, but it isn't applicable to the topic here. That could very easily be verified as true or false by a secondary system. In fact, you can just ask Wolfram Alpha. Ask it what the odds are that any two people share the same birthday. I just asked it that exact question, and it replied 1/365.

    EDIT

    In fact, I just asked ChatGPT-4 that exact same question, and it also replied 1/365.
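
    For what it's worth, the 1/365 figure is easy to sanity check by hand; this little snippet assumes 365 equally likely birthdays and ignores leap years:

        # Probability that two specific people share a birthday,
        # assuming 365 equally likely birthdays (leap years ignored).
        p = 365 / (365 * 365)   # the first birthday is free; the second must match it
        print(p)                # 0.00273972... which is exactly 1/365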

  • There are already multiple LLMs in existence that are essentially completely different from one another. In fact, this is one of the major problems with LLMs: even a small change to a model can radically alter the output it returns on huge numbers of seemingly unrelated topics.

    As for your other point, I never said bouncing their answers back and forth for verification was trivial, but it's definitely doable.
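
    Just as a rough sketch of what I mean by bouncing answers back and forth (ask_model_a and ask_model_b are hypothetical stand-ins for calls to two unrelated LLM APIs):

        # Sketch: model A answers, model B checks it; on disagreement,
        # model A is asked to reconsider. Purely illustrative.
        def bounce_verify(question, ask_model_a, ask_model_b, max_rounds=3):
            answer = ask_model_a(question)
            for _ in range(max_rounds):
                verdict = ask_model_b(
                    f"Question: {question}\nProposed answer: {answer}\n"
                    "Is this answer correct? Reply yes or no."
                )
                if verdict.strip().lower().startswith("yes"):
                    return answer          # the two models agree
                answer = ask_model_a(
                    f"{question}\nA second reviewer rejected the answer "
                    f"'{answer}'. Please try again."
                )
            return None                    # no agreement within max_rounds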

  • That's not a problem at all. I already use prompts that allow the LLM to say it doesn't know an answer, and it does take that option when it's unable to find a correct answer. For instance, I often phrase questions like this: "Is it known whether or not red is a color in the rainbow?" For questions where it doesn't know the answer, it will now tell you it doesn't know.

    And to your other point, the systems may not be capable of discerning their own hallucinations, but a totally separate LLM will be able to do so pretty easily.
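
    For reference, the kind of phrasing I mean for the "don't know" option looks something like this (the exact wording is just an illustration):

        # Hypothetical prompt wrapper that offers an explicit "I don't know" option.
        def abstaining_prompt(question):
            return (
                f"Is it known whether or not {question}? "
                "If you are not confident, reply exactly: I don't know."
            )

        print(abstaining_prompt("red is a color in the rainbow"))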

  • No, I've used LLMs to do exactly this, and it works. You prompt one with a statement and ask, "Is this true, yes or no?" It will reply with a yes or a no, and it's almost always correct. Run this verification through multiple different LLMs, and it would eliminate close to 100% of hallucinations.

    EDIT

    I just tested it multiple times in ChatGPT-4, and it got every true/false answer correct.
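
    The whole check fits in a few lines; here ask_fns is a hypothetical list of callables, each wrapping a different LLM, and a statement is kept only if a majority answers yes:

        # Sketch: run the same yes/no check through several independent models
        # and take a majority vote. Illustrative only.
        def is_probably_true(statement, ask_fns):
            prompt = f"Statement: {statement}\nIs this true, yes or no?"
            votes = [ask(prompt).strip().lower().startswith("yes") for ask in ask_fns]
            return sum(votes) > len(votes) / 2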

  • I strongly doubt that hallucination is a limitation on final output. It may be an inevitable part of the process, but it's almost certainly a surmountable problem.

    Just off the top of my head, I can imagine using two separate LLMs for a final output: the first one generates an initial output, and the second one verifies whether what it says is accurate. The chance of two totally independent LLMs having the same hallucination is probably very low, and you can add as many additional separate LLMs for re-verification as you like. The chance of a hallucination making it through multiple LLM verifications probably gets close to zero.

    While this would greatly multiply the resources required, it's just a simple example showing that hallucinations are not inevitable in final output.
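
    To put rough numbers on that: if each verifier independently let a hallucination through, say, 5% of the time (an assumed, illustrative figure, and real models are never fully independent), the chance of one surviving several checks shrinks fast:

        # Assumed 5% per-check miss rate; k independent checks all missing
        # is roughly 0.05 ** k.
        p_miss = 0.05
        for k in range(1, 5):
            print(k, p_miss ** k)   # 0.05, 0.0025, 0.000125, 6.25e-06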

  • Capitalizing all letters only makes sense if you're quoting an actual news article headline, like from the New York Times or wherever. But capitalizing the first letter of every word makes a sentence stilted and hard to read, so it should be avoided whenever possible.