Posts: 0 · Comments: 114 · Joined: 2 yr. ago

  • While yes, that's an accurate quip, it actually does highlight a deeper issue in the industry. If everyone passes your scam test, they don't need to buy your scam test.

    Additionally, scam emails aren't 50/50 yes/no pass/fail. It's more a combination of red flags to gauge how risky the email is to click on links, reply to, download attachments from, etcetera.

    Currently the scam-testing industry has no way to rate an individual's ability other than how many scam emails they did or didn't click on. That is a false metric. It incites scam testers to trick people in order to justify their value to the customer.

  • It's been about five years now. UNESCO knows it's endangered, but the government keeps debating their data and privately sourcing bs data that pretends everything is okay.

    It's probably too late now. The reef is below the stable level it would need to recover from any further impact. And there is definitely more impact coming from the climate crisis.

  • If it can't see numbers, then it isn't as smart as your $5 calculator or the majority of the human race. If you can convince it that it's wrong, it's even less intelligent than that.

    It barely passes as a language model and only passes as a conversational model. Having citations doesn't mean it understands citations. Having incorrect citations quite simply proves that it doesn't understand what a citation is meant for. It does not understand the concept.

    ChatGPT is pre-trained on a number of datasets, all of them sampled pre-2021. Nothing after that date exists for ChatGPT. That isn't intelligence. It doesn't possess the intelligence to understand the nature of its own training data. And if you really don't think the data it was trained on came from the internet, please show us a source.

    It's continually entertaining how you keep pointing out the substantial limitations of a language model and yet insist it shows more intelligence than an average brain that has none of those limitations and achieves more accurate results every minute of every day. And then claiming it understands concepts when that concept itself is not part of its architecture is really astounding. I can almost identify the exact neuron that's misfiring in your brain.

  • While it's humorous how personally you are taking critiques of ChatGPT, it is unfortunate that you are also demonstrating a fundamental lack of basic understanding of how ChatGPT works. Because of that, you have inflated what you believe ChatGPT is doing.

    Even when it gets basic maths wrong repeatedly. I can tell it 2+2=5 and it will agree with me, conversationally, since it has no concept of what 2+2=5 means.

    Even though it has no memory of previous conversations, you believe it somehow retains understanding of concepts it discusses.

    Even though it searches the internet for the knowledge to answer questions, it still cites sources that don't exist or don't support its claims, clearly demonstrating a fundamental failure to understand the concept it is discussing, or even the concept of citing sources.

    Even though it was literally trained by humans telling it which three of the five answers it gave to every calibration question were the most correct conversational responses, you still believe it actually possesses intelligence above any human, who can have a conversation without making any of these mistakes.

    I clearly rate ChatGPT's "intelligence" as remarkably low, even non-existent. I must also concede that in this situation it is smarter than at least one human I am aware of.

  • New knowledge is simply creativity, which AIs distinctly do not have. The shoelace and the Rorschach test are variations of the same point. ChatGPT regurgitates info from the internet and uses confirmation bias to present it conversationally. ChatGPT cannot understand the concept that a shoe has a lace that should be tied. It can only answer a question about that by using pre-published information related to tying shoelaces. As for the Rorschach test, even with a visual component, ChatGPT is by its nature incapable of interpreting the data itself. That is quite simply not what the engine does.

    Understand what ChatGPT actually does; do not project your idea of what an AI could do onto its single, occasionally accurate trick.

  • Can it tie a shoelace? No. If you gave it manipulators and a shoe, would it tie the laces? No. Can it do a Rorschach test? No. Can it create a new idea? No.

    It can barely pretend to talk reasonably about these things because it is only designed to talk reasonably about anything. That is not intelligence.

  • Keep in mind that ChatGPT is a language model. It's designed specifically to simulate sounding like a human. It does that... okay. It doesn't understand the information or concepts it is using; it just sounds like it does. It can't reliably do basic maths, and it doesn't try or need to. It just needs to talk about it in a believably conversational way.

    The brain does far more than process information. And ChatGPT doesn't even really do that.