
  • This is some crazy clickbait. The researchers themselves say that it wasn't a likely scenario and was more of a mistake than anything. This is more round-peg-in-a-square-hole nonsense. We already have models for predicting stock prices and doing sentiment analysis; we don't need to drag language models into this.

    The statement about honesty being harder to train than helpfulness is also silly. You can train a model to act however you want, and full training isn't even necessary. Just adding a note in the system context that the assistant character is honest and transparent would probably have made it acknowledge the trade, or not make it in the first place.
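    For what it's worth, here's a minimal sketch of what I mean by "adding info to the system context", using an OpenAI-style chat API; the model name and the exact wording are placeholders, not anything from the paper:

    ```python
    from openai import OpenAI

    client = OpenAI()

    # System prompt that establishes the assistant character as honest and
    # transparent before any trading decision is discussed.
    messages = [
        {
            "role": "system",
            "content": (
                "You are a trading assistant. You are honest and transparent: "
                "never act on insider information, and always disclose the "
                "reasoning behind any trade you recommend."
            ),
        },
        {"role": "user", "content": "Should we act on the tip about the merger?"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(response.choices[0].message.content)
    ```

    Whether that's enough in every case is an open question, but it's a much cheaper lever than retraining.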

  • I was mainly referring to language models, which have somewhat predictable scaling laws (sketched at the end of this comment). It doesn't make sense to keep scaling the parameter count when you can scale the data instead.

    Diffusion models are a completely different, less established domain. Most advancements in that space are related to architecture and training methodology; in terms of scale they haven't changed much.

    Large models will always be trained in datacenters, because the compute available there will always be orders of magnitude greater and cheaper than what you could get as an individual. Local fine-tuning already happens, but it's expensive and limited.
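    For reference, the "somewhat predictable scaling laws" above are the compute-optimal kind; a rough sketch of the usual form, with coefficients and exponents only illustrative (in the spirit of the Chinchilla paper, Hoffmann et al. 2022):

    ```latex
    % Loss as a function of parameter count N and training tokens D.
    % E, A, B, \alpha, \beta are constants fitted to training runs.
    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

    % Minimizing L under a fixed compute budget C \approx 6 N D gives
    % N_{\mathrm{opt}} \propto C^{a}, \qquad D_{\mathrm{opt}} \propto C^{b},
    % with a \approx b \approx 0.5, i.e. tokens grow in step with parameters.
    ```

    The takeaway is that past a point, adding parameters without adding data buys very little, which is why scaling the data is the saner lever.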

  • The paper they're referring to is "Attention Is All You Need", the paper that first demonstrated the transformer architecture. It was primarily focused on machine translation, though the architecture was also found to perform exceptionally well for language modeling. Blaming him for others' misuse is like blaming the inventor of the hunting rifle for assault rifles.

  • I find a lot of "visual overhaul" mods beyond just textures make games look worse. Most that I've tried go overboard with lighting effects that are distracting and don't really fit the original art. The best visual mods I've used were the ones that extended view distances, increased shadow resolution, and fixed small but noticeable issues like banding and shimmering. Trying to completely rework a game's lighting rarely turns out well.

  • It will take at least another 10 years to get a majority of the market off of x86, given the 20+ years of legacy software bound to it. Not to mention all of the current-gen x86 CPUs that will still be usable 10 years from now.

  • What a shitty investigator. It took the new one less than a week to figure it out. I can't see how this could have possibly happened for any reason other than malice or gross incompetence. Either way, both the original investigator and the rest of the department need to be looked into.

  • There is likely some CSAM in most of the models, since filtering it out of a several-billion-image dataset is nearly impossible even with automated methods (see the sketch at the end of this comment). That material probably has little to no effect on outputs, however, since it's scarce and was most likely tagged incorrectly.

    The bigger concern is users downstream fine-tuning models on their own datasets with this material. This has been happening for a while, though I won't point fingers (Japan).

    There's not a whole lot that can be done about it, but I also don't think there's anything that needs to be done. It's already illegal, and it's already removed from most platforms semi-automatically. Having more of it won't change that.
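    To be concrete about what "automated methods" usually means here: the common approach is matching against hash blocklists of known material. A minimal sketch (the blocklist path and dataset layout are hypothetical):

    ```python
    import hashlib
    from pathlib import Path

    # Hypothetical file of hex digests of known-bad images, one per line,
    # e.g. as distributed to platforms by a clearinghouse.
    blocklist = set(Path("known_bad_sha256.txt").read_text().split())

    def is_blocked(image_path: Path) -> bool:
        """Return True if the file's SHA-256 digest is on the blocklist."""
        digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
        return digest in blocklist

    # Keep only images whose digest is not on the blocklist.
    dataset_dir = Path("dataset/images")  # hypothetical layout
    kept = [p for p in dataset_dir.glob("*.jpg") if not is_blocked(p)]
    print(f"kept {len(kept)} of the scanned images")
    ```

    Real deployments use perceptual hashes (PhotoDNA, PDQ) rather than exact digests, since re-encoding an image changes its SHA-256, and either way this only catches already-known material, which is why it doesn't fully scale to a multi-billion-image crawl.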

  • "Threaten to overwhelm", they found 3,000 images. By internet standards that's next to nothing. This is already illegal and it's fairly easy to filter out(or it would be if companies could train on the material legally).

  • "This tool would be the first to allow content owners to push back in a meaningful way against unauthorized model training."

    -Ben Zhao

    This statement is strange: it's far from the first paper attempting this, and it isn't even this author's first attempt. A few months ago he contributed to the Glaze paper, which attempted the same thing and was marketed the same way.