I recently agonized over this decision a bit and went with a Bambu. When it comes down to it, the Prusa printers are just really hard to justify given that you are paying more money for fewer features even if you assemble it yourself.
I agree with what others have said that the reliability and longevity of Bambu printers are a concern, but frankly, if I’m still into printing in a number of years and Bambu starts to really enshittify, I’ll build a Voron or get something even better that hasn’t come out yet.
I’m not sure you could go to most hospitals and get an MRI just because. Diagnostic tests still carry risks, especially MRIs given how strong the magnetic field is and that you can’t easily turn them off.
It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that, so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.
In practice, though, it remains to be seen how courts would interpret this, and I expect that unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.
I got one in a niceish area for that. All you have to do is buy a small foreclosure and then spend literal years renovating while you live somewhere else and run up a bunch of high interest credit card debt paying for those renovations. 🥲
I don’t understand why I would want a bunch of USB-C ports. On a phone, where there obviously isn’t space for a full-sized port, sure, but I find that fiddling with the one USB-C port on the back of my desktop is a pain in the ass, and the port really struggles to keep a good connection when attached to a stiff or heavy cable.
I mean, I would definitely consider it in poor taste if a woman started making tone-deaf jokes about male suicide rates. You get a lot more leeway when making fun of a group you are a part of. Combine that with the general assumption that everyone on the internet is male until proven otherwise, and yeah, in this kind of forum it’s much more acceptable to make jokes at the expense of men than women.
There’s also a bit of a disparity in the examples you gave. The idea that men die earlier because they take medical advice from Joe Rogan is obviously not sincere. The overwhelming majority of men have never listened to Joe Rogan, and aside from a few high-profile examples, I don’t think he actually gives that much medical advice. Though it would be harmful if people genuinely believed this was true, it doesn’t seem likely that anyone would.
On the other hand, the idea that women are worse workers due to emotions or their periods or whatever is something a lot of people genuinely believe. In some circles those statements wouldn’t be considered jokes but rather serious opinions. Repeating those things, even if you don’t personally believe them, reinforces the ideas and is clearly harmful to women.
A similarly offensive “joke” at men’s expense would be something like “men die earlier because they’re too stupid to see a doctor”. This would be a bad joke because it takes something that is basically true (men don’t see doctors as frequently) and ties it to a real and harmful stereotype (that men are dumber than women).
I do this a lot (not in public) because it’s way too easy to hang up with your ear, and holding the phone up fatigues my arm after like half an hour. It’s fine as long as you don’t shout into the phone.
I don’t think the idea is to protect specific images; it’s to create enough of these poisoned images that training your model on random free images you pull off the internet becomes risky.
Not really. If you read the paper, what they’re doing is creating an image that looks like a dog and is labeled as a dog, but is very close to the model’s version of a cat in feature space. This means manual review of the training set won’t help.
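For what it’s worth, here’s a rough sketch of the feature-space idea. This is my own toy illustration, not the paper’s actual optimization: perturb a dog photo so that a pretrained feature extractor embeds it near a cat photo, while keeping the pixel changes small enough that it still looks like a dog to a human. The file names, backbone, and hyperparameters are all placeholders I picked for the example.

    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Pretrained backbone with the classifier head removed, used as a feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval().to(device)

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def load(path):
        return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    dog = load("dog.jpg")   # keeps its "dog" label in the poisoned dataset
    cat = load("cat.jpg")   # anchor whose features the poison imitates

    with torch.no_grad():
        target_feat = backbone(cat)

    delta = torch.zeros_like(dog, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)
    eps = 8 / 255           # per-pixel budget so the image still looks like a dog

    for step in range(300):
        poisoned = (dog + delta).clamp(0, 1)
        loss = F.mse_loss(backbone(poisoned), target_feat)  # pull features toward "cat"
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation visually subtle

    poisoned = (dog + delta).clamp(0, 1).detach()
    # `poisoned` still looks like (and is labeled as) a dog, but sits near the
    # cat region of this extractor's feature space.

Since the poisoned file still looks like an ordinary dog picture and carries an ordinary dog label, a human skimming the training set has nothing to flag.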
I’m skeptical that an LLM could answer questions as effectively just from documentation. A big part of the value in Stack Overflow and similar sites is that the answers come from people who have experience with a given technology and some understanding of its pain points. Oftentimes you can ask the wrong question and still get a useful answer, because the context is enough for others to figure out what you might be confused by.
I’m not sure an LLM could do the same just given the docs, but it would be interesting to see how close it could get.
For a horrifying take on this, check out this short story by qntm:
https://qntm.org/mmacevedo