
Posts: 0 · Comments: 741 · Joined: 2 yr. ago

  • Here’s probably all the info you could ever need:

    https://redcanary.com/blog/threat-intelligence/raspberry-robin/

    Next, you need to get your systems scanned and cleaned. Malwarebytes is likely enough, but I always recommend Bitdefender. Their efficacy rates are consistently fantastic, and they’ve been leading the industry for several years now. Download the AV on a clean system, put it on a clean flash drive, and install it that way.

    Last, you’re gonna need to reset your passwords. Yes, I know that’s toxic af. But this is the reality, and why we always need to be veeeery careful with what we do. This worm communicates with a C2 server, which means it can update itself (making detection hard), and it also means that at some point it may have been spying on your activity (and it likely was, if it isn’t still).

    This stuff happens, don’t beat yourself up too much. Live and learn

  • Look man, I’m an adult; you may talk to me like one.

    I used the term consumer when discussing things from a business sense, i.e., we’re talking about big businesses and implementations of technology. It’s also in part due to the environment I live in.

    You’ve also dodged my whole counterpoint to bring up a new point you could argue.

    I think we’re done with this convo, tbh. You’re moving goalposts and trying to muddy the water.

  • I can see where you're coming from; however, I disagree with the premise that "the reality is that the control of AI is in the hands of the mega corps". AI research has not been done solely by huge corps, but by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc.). Many of these are from FAANG companies, but they are still, nevertheless, open source and usable by anyone; I've used several to make my own AI models.

    Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining datasets large enough, with enough orthogonal 'features', to ensure its efficacy is appropriate. Namely, this means that tasks like image generation, editing, and recognition (huge for the medical sector, including finding cancers and other problems), document creation (to your credit), speech recognition and translation (huge for the differently-abled community and globe-trotters alike), and education (I read from huge public research datasets, public-domain books and novels, etc.) are still definitely feasible for consumer-grade usage and operation. There are also some really neat usages like federated TensorFlow and distributed TensorFlow, which allow for (perhaps obviously) distributed computation, opening the door for stronger models run by anyone who will serve them.

    I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag, in my opinion; the best (not only) option forward is to bolster consumer-grade implementations, encouraging things like self-hosting and local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.
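    The federated idea mentioned above can be sketched in a few lines of plain Python. To be clear, this is a toy with made-up weights, not how TensorFlow Federated actually works under the hood; the names and numbers are hypothetical, and real systems add local training rounds, node sampling, and secure aggregation:

    ```python
    # Toy federated-averaging sketch: several "nodes" each hold their own
    # locally trained model weights (here, just lists of floats), and a
    # coordinator averages them element-wise into one shared model.
    # All weights below are made up for illustration.

    def federated_average(node_weights):
        """Element-wise average of per-node weight vectors."""
        n = len(node_weights)
        return [sum(ws) / n for ws in zip(*node_weights)]

    # Three hypothetical nodes, each with its own local weights:
    nodes = [
        [1.0, 2.0, 3.0],
        [3.0, 4.0, 5.0],
        [5.0, 6.0, 7.0],
    ]

    global_weights = federated_average(nodes)
    print(global_weights)  # [3.0, 4.0, 5.0]
    ```

    The point is that no single node ever has to hand over its raw data, only its weights, which is exactly why this style of distributed computation is interesting for consumer-run models.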

  • You’ll see more and more to a certain extent too, as it becomes more normal and, namely, safe to be trans.

    Don’t let anyone convince you anyone’s “becoming” trans. Always have been, always will be.

  • I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But as soon as people argue that it's only a tool that can harm, that's where I'm drawing the line. That's, in my opinion, when govts/ruling class/capitalists/etc. start to put in BS "safeguards" to prevent the public from making use of the new power/tech.

    I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it's something I'm trying to work on, so I appreciate your cool-headed response here. I took the "you clearly don't know what Luddites are" as an insult to what I do or don't know. I was specifically trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn't mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in ruling-minority/capitalist practices.

    Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

  • I'm not sure I agree with your example; it's more like giving the owners of the donation the ability to choose WHO they are donating to. That means choosing not to donate to companies that might take your food donation and sell it as damaged goods, for example. I wouldn't want my donation to be used that way. That's how I see it, anyway.

  • A SO alternative cannot exist if a user who posted an answer owns it. That defeats the purpose of sharing your knowledge and answering questions, as it would mean the person asking the question cannot use your answer.

    Couldn't these owners dictate how their creations are used? If you don't own it, you don't even get a say.

  • So does that mean anyone is allowed to use said content for whatever purposes they'd like? That'd include AI stuff too, I think? Interesting twist there; hadn't thought about it like this yet. Essentially, posters would be agreeing to share that data/info publicly. No different than someone learning how to code from looking at examples made by their professors or someone else doing the teaching/talking, I suppose. Hmm.

  • idk if you can call image generation derived from colored static based on preexisting statistically common knowledge/examples "planning" per se xD

    Humans have come up with plenty worse, this is just more of the same at worst imo haha

  • Well, I suppose in that case, protesting via removal is fine IMO. I think the constructive next step would be to create a site where you, the user, own what you post. Does Reddit claim ownership over posts? I wonder what Lemmy's "policies" are, and if this would be good grounds (here) to start building something better than what SO was doing.

  • They had an impact because people allowed themselves to take the fear-mongering seriously.

    It’s regressive, and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying that logic like “it could hurt people” as rationale to never use it is just “won’t someone think of the children” BS.

    You don’t ban all the new swords; you learn how they’re made, how they strike, what kinds of wounds they create, and you address that problem. Sweeping it under the rug / putting things back in their box is not an option.

  • You’ll notice I used the lowercase L, which implies I’m referring to the term as it’s commonly used today. (edit: this isn’t an excuse to ruin the definition or history of what the Luddites were trying to do; this was wrong of me)

    Further, explain to me how this is different from what the Luddites stood for, since you obviously know so much more and I’m so off base with this comment.

    edit: exactly. Just downvote and don’t actually make any sort of claim. Muddy that water! edit 2: shut up, angsty past me.