There's a game menu with a diagram of a military hierarchy of named enemies, and their strengths/vulnerabilities. When you find the named enemies in the game and interact with them in some way (iirc it's basically limited to winning/losing a fight or mind controlling them), it affects their traits and their place in the tree, and you'll get a short cutscene where they say stuff referencing your past interactions.
tbf the widely used nomenclature for them is "open weights", specifically to draw that distinction. There are genuinely open source models, where the training data and everything else is also documented, there just aren't as many of them.
To me the disadvantage would be that the library likely does many more things than what you need it for, so there is far more code, and you probably can't realistically read and understand it all before incorporating it. That leads to, among other issues, the main thing that irritates me about libraries: if something in it turns out to be broken, you're stuck with a much bigger debugging problem, because you first have to figure out how someone else's code is structured.
Although I guess that doesn't apply as much to implementations of common algorithms like OP's, since the library is probably solid. I would consider favoring LLM code over most anything off npm, though.
Compatibility problems caused by third parties only targeting Windows are still Linux issues for the end user if they surface when that user runs Linux. It isn't fair, but that's the practical reality.
I don't think it's actually such a bad argument, because to reject it you basically have to say that style should fall under copyright protection, at least conditionally, which is absurd and has obvious dystopian implications. That isn't what copyright was meant for. People want AI banned or inhibited for separate reasons and hope the copyright argument is a path to that, but even if it succeeded it wouldn't actually change much, except to make the other large corporations that own most copyrights into stakeholders of AI systems. That's not really a better circumstance.
As a commemoration of the anime Sword Art Online, Luckey created a VR headset art piece that kills its human user in real life when the user dies digitally in the video game, by means of several explosive charges affixed above the screen.
Luckey blogged, "The idea of tying your real life to your virtual avatar has always fascinated me—you instantly raise the stakes to the maximum level and force people to fundamentally rethink how they interact with the virtual world and the players inside it."[77] Luckey additionally described it as "just a piece of office art, a thought-provoking reminder of unexplored avenues in game design". He also mentioned that while it is "the first non-fiction example of a VR device that can actually kill the user, it won’t be the last."[75]
... and his job is making autonomous weapon systems. I wonder what the future built by Jigsaw types like this is gonna look like
LLMs have a tendency to come up with bullshit excuses to avoid tricky requests, and are also trained on corpospeak moral hand-wringing; this kind of thing is sometimes the result.
This is confusing to me, as I've never seen a toilet stall that is just a regular room rather than a cubicle-divider arrangement.