welp · Posts: 0 · Comments: 299 · Joined: 2 yr. ago

  • I remember back in the day there was this automated downloader program. The file-host links had a limit of one download at a time, and you had to solve a captcha to start each download.

    So the downloader had a built-in "solve others' captchas" system, where you could build up credit.

    When you had, say, 20 links to download, you spent a few minutes solving other people's captchas to earn credit, and then the program would use that crowdsourced credit to solve yours as they popped up.

  • That's like saying "car crash" is just a fancy word for "accident", or that "cat" is just a fancy term for "animal".

    Hallucination is a technical term for this type of AI failure, and it's inherent to how this kind of model works at its core.

    And now I'll let you get back to your hating.

  • It's less about the calculations and more about memory bandwidth. To generate a token, you need to read through all of the model weights, and that's usually many, many gigabytes. So the time it takes to stream the model through memory is usually longer than the compute time. GPUs have gigabytes of VRAM that's many times faster than a CPU's RAM, which is the main reason they're faster for LLMs.

    Most TPUs don't have much onboard memory, especially the cheap ones.
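The bandwidth argument above can be sketched as a back-of-envelope estimate: if every generated token requires one full pass over the weights, decode speed is capped at roughly bandwidth divided by model size. The model size and bandwidth figures below are illustrative assumptions, not measurements of any specific hardware.

```python
def max_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed, assuming each token needs one
    full read of the model weights from memory."""
    return bandwidth_gb_s / model_size_gb

# Illustrative assumption: a 7B-parameter model with 8-bit weights is ~7 GB.
model_gb = 7.0

# Illustrative assumption: dual-channel desktop RAM at ~50 GB/s
# vs. a mid-range GPU's VRAM at ~500 GB/s.
cpu_ceiling = max_tokens_per_second(model_gb, 50.0)
gpu_ceiling = max_tokens_per_second(model_gb, 500.0)

print(f"CPU-RAM-bound ceiling: ~{cpu_ceiling:.0f} tokens/s")
print(f"GPU-VRAM-bound ceiling: ~{gpu_ceiling:.0f} tokens/s")
```

With these rough numbers the CPU tops out around 7 tokens/s while the GPU's ceiling is ten times higher, purely from the bandwidth ratio and regardless of raw compute.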