
Posts: 59 · Comments: 528 · Joined: 2 yr. ago

  • Someone that has their own morals, and thinks killing is bad?

    Can we guarantee they'd report him if there was no financial incentive?

    The system really doesn't give a fuck about your or anyone else's morals, let's not pretend otherwise. That's why they put a bounty on the killer, after all.

  • Have you actually looked at that list? It's incredibly inconsistent and chaotic. "Story structure" is not some objective universally measurable thing in the first place, and nobody in their right mind would claim that originality can be realised only in story structure.

    Overall this is a weird and pointless topic that you picked to argue about.

  • You haven't actually suggested any way in which the guy's work and behaviour could be viewed "three-dimensionally". While I can agree that discourse especially online slips into dehumanisation of (real or imagined) enemies too easily... this is really not a case where this is the incorrect approach.

    Edit: Regarding the guy's family, I can agree that they did not deserve the death of the father/husband. But that does not really bear on the guy himself or his own moral character; it's someone else's problem. When a criminal gets sent to jail or executed, does anyone really give a crap about how much his family will suffer from that? Not really, the criminal is assumed to be a morally independent being that can tell right from wrong by himself, and his failure to do that is his own.

  • Thank you, this is interesting to read. I also use IMSLP from time to time and can only imagine how valuable it is to actual musicians. Now, it is simply true that sheet music that is under copyright is, indeed, under copyright, but as IMSLP focuses on classical music it's not such a big deal, as much of it is in the public domain (many 20th century classics still aren't, I believe, such as Stravinsky, Shostakovich...), at least the original old editions.

    just take a look at this short list taken from your own list of “banned books” affected by the decision:

    “The Adventures of Huckleberry Finn” by Mark Twain (first published 1884–85)
    “The Awakening” by Kate Chopin (first published 1899)
    “An American Tragedy” by Theodore Dreiser (first published 1925)
    “Candide” by Voltaire (first published 1759, also in English translation, again in English 1762)
    “The Decameron” by Giovanni Boccaccio (written ca. 1353, published in English by 1620)

    All five of the originals are public domain worldwide, even the two translated into English.

    This part lacks important detail, though. The two translations are likely to be new ones, not from the 17th/18th century, so they have new copyright too. The other two books may be under legitimate new copyright because of supplementary materials or textological work. I talked about this with some people on reddit who seemed knowledgeable about it, and basically, when an editor works on a new edition they might introduce corrections to the text based on the manuscripts or some other version of the text (e.g. censored sections). This work should (I guess) also be copyrightable. Now, I haven't gotten a completely satisfying answer about what exactly can be covered by this, because it can be difficult to say whether a mere modernisation of spelling, e.g. in Shakespeare (original <walk'd> = modern <walked>), counts as copyrightable work, or whether it requires more extensive work (such as dealing with the textual variations in early Shakespeare editions, which are mind-boggling).

  • Here's another uncomfortable statistic: most of ASOIAF was published before 9/11.

    Personally I gave up after AFFC, which I read around a decade ago. The quality of writing had plummeted, and it was increasingly obvious the series was going to take way too long to be finished.

    The guy should just give up, relieve himself of the duty and the audience of the frustration, and spend his remaining years peacefully writing Dunk and Egg stories.

  • Chiquita and Nestlé come to mind. Within the tech industry, I'd say Amazon and probably Microsoft are worse as well, and there are probably a ton of potentially even worse companies lurking in the shadows outside the top of the economic food chain.

  • Why do we expect a higher degree of trustworthiness from a novel LLM than we do from any given source or forum comment on the internet?

    The stuff I've seen AI produce has sometimes been more wrong than anything a human could produce. And even if a human would produce it and post it on a forum, anyone with half a brain could respond with a correction. (E.g. claiming that an ordinary Slavic word is actually loaned from Latin.)

    I certainly don't expect any trustworthiness from LLMs; the problem is that people do expect it. You're implicitly agreeing with my argument that LLMs give problematic responses not just when tricked, but also when used as intended, as knowledgeable chatbots. There's nothing "detached from actual usage" about that.

    At what point do we stop hand-wringing over llms failing to meet some perceived level of accuracy and hold the people using it responsible for verifying the response themselves?

    at this point I think it’s fair to blame the user for ignoring those warnings and not the models for not meeting some arbitrary standard

    This is not an either-or situation; it doesn't have to be framed like that. Criticising LLMs that frequently produce garbage is in practice also directed at the people who use them. When someone on a forum says they asked GPT and pastes its response, I will at the very least point out the general unreliability of LLMs, if not criticise the response itself (very easy if I'm somewhat knowledgeable about the field in question). That criticism is in practice also aimed at the person who posted it, e.g. by making them come off as naive and uncritical. (It is of course not meant as a real personal attack, but even detached and objective criticism has a partly personal element to it.)

    Still, the blame is on both. You claim that:

    There's a giant disclaimer on every one of these models that responses may contain errors or hallucinations

    I don't remember seeing them, but even if they are there, the general promotion of LLMs and the ways in which they are presented tell people otherwise. A few disclaimers do little to shape people's opinions compared to the extensive media hype and marketing.

    Anyway my point was merely that people do regularly misuse LLMs, and it's not at all difficult to make them produce crap. The stuff about who should be blamed for the whole situation is probably not something we disagree about too much.

  • referencing its data sources

    Have you actually checked whether those sources exist yourself? It's been quite a while since I've used GPT, and I would be positively surprised if they've managed to prevent it from generating nonexistent citations.