
  • Not sure what other people were claiming, but normally the point being made is that it's not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model's weights to be memorized.
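    A quick back-of-the-envelope check makes the scale gap concrete. The numbers below are illustrative assumptions on my part (a 7B-parameter model stored in 16-bit floats, a ~10-trillion-token corpus at ~4 bytes of text per token), not figures from any specific model:

    ```python
    # Capacity sketch: weights vs. training data, with assumed sizes.
    params = 7e9
    model_bytes = params * 2                 # fp16 -> ~14 GB of weights

    tokens = 10e12
    corpus_bytes = tokens * 4                # -> ~40 TB of training text

    ratio = corpus_bytes / model_bytes
    print(f"corpus is roughly {ratio:.0f}x larger than the weights")
    ```

    Even with generous assumptions, the corpus is thousands of times larger than the weights, so wholesale memorization of the dataset isn't possible, even though individual works can be.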

  • Entirely fair! I think FreeCAD is still fine for hobbyists like myself though. It does take quite a bit of getting used to (I came from Fusion360 and Inventor first) since it operates somewhat differently, but it's good that we have at least one option.

    Hopefully it'll see more development and become substantially more viable in the future.

  • This is a result of the topological naming problem, which FreeCAD currently doesn't handle well at all. There's been a lot of work on this front, though: you can use realthunder's fork, which should be much better in this regard. Alternatively, you can avoid creating features directly on top of other features, and instead create datum planes and reference those exclusively.

  • The big thing you get with Framework laptops is super simple repairability. That means service manuals, parts availability, and easy access to components like the battery, RAM, and SSD. Customizable ports are also a nice feature, and you can even upgrade the motherboard later down the line instead of buying a whole new laptop.

  • I haven't read the article myself, but it's worth noting that in CS as a whole, and especially in ML/CV/NLP, selective conferences are generally seen as the gold standard for publication compared to journals. The top conferences include NeurIPS, ICLR, and ICML for ML broadly, CVPR for CV, and EMNLP for NLP.

    It looks like the journal in question is a physical sciences journal as well, though I haven't looked much into it.

  • I'm curious what field you're in. I'm in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don't approve of using LLMs to write accuracy-critical sections (such as background or results) beyond rewording.

    I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that's still somewhat of a gray area in the US AFAIK.

  • Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you only have inference access to a detector of some kind, but not its weights and architecture, you can't backpropagate through it, and therefore can't compute the gradients needed to update your generator's weights.

    That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
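    The backprop point above can be shown with a toy 1-D example. Everything here is made up for the sketch: a one-weight "discriminator" D(x) = sigmoid(w·x + b) and a linear "generator" G(z) = a·z + c. The generator gradient is computed by hand via the chain rule, and the step that requires the discriminator's weight w is marked:

    ```python
    import math

    def sigmoid(t):
        return 1.0 / (1.0 + math.exp(-t))

    # Discriminator D(x) = sigmoid(w*x + b): weights known -> differentiable.
    w, b = 1.5, -0.5

    # Generator G(z) = a*z + c, with a single latent sample z.
    a, c = 0.2, 0.0
    z = 0.7

    # Forward pass.
    x = a * z + c            # generated sample
    d = sigmoid(w * x + b)   # discriminator's score for the fake sample

    # Non-saturating generator loss: L = -log(D(G(z))).
    # Backpropagate by hand through L -> d -> x -> (a, c).
    dL_dd = -1.0 / d
    dd_dx = d * (1.0 - d) * w    # this step needs w, the discriminator weight
    dL_dx = dL_dd * dd_dx
    grad_a = dL_dx * z           # dx/da = z
    grad_c = dL_dx               # dx/dc = 1

    # With only black-box inference on D we could still read off d, but
    # dd_dx (and hence grad_a, grad_c) would be out of reach.
    ```

    This is why a black-box detector can't serve as a discriminator for training: you can score samples, but you can't flow gradients back into the generator.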

  • It's unfortunately super clear from their Steam charts. When they had creator events and whatnot, the player count spiked, but other than that they only have about 1000 players active and I seriously doubt many people spend money on the game since it's already rather F2P friendly.

    It's a shame; the game was a lot of fun, and I still play it with friends.