Posts: 2 · Comments: 157 · Joined: 2 yr. ago

  • You’re complaining about ads that are being pushed on you by an algorithm from ad companies. I’m telling you there are no ad companies. Seems relevant.

    I never typed the word "ad". I specifically said posts of top brands. How can this be achieved in ActivityPub? Easy. When you are using, let's say, Threads, their proprietary system treats ads and normal posts differently. However, anything their system pushes onto the federated network (ActivityPub) is disguised as a normal post. A post that has millions of engagements will be visible in "All".

    Nothing prevents them from doing that right now.

    You act like you're new to the internet. What prevents them is that their audience is not here and companies are not paying them for that. However, companies will pay them to promote their products on Threads. With its current numbers, Threads has the potential to dominate the "All" page.
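    The point above, that a promoted post becomes indistinguishable from an organic one once federated, can be sketched with minimal ActivityPub `Create` activities. The field names follow the ActivityStreams vocabulary; the actors and content are hypothetical examples, not real Threads data:

    ```python
    # Two ActivityPub "Create" activities as Python dicts. Whether the
    # origin server sold the first post as an ad is not expressed anywhere
    # in the ActivityStreams vocabulary, so a receiving instance sees the
    # same schema either way.
    promoted_post = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://threads.example/users/brand",  # hypothetical actor
        "object": {
            "type": "Note",
            "content": "Check out our new product!",
            "to": ["https://www.w3.org/ns/activitystreams#Public"],
        },
    }

    organic_post = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://threads.example/users/alice",  # hypothetical actor
        "object": {
            "type": "Note",
            "content": "Had a nice walk today.",
            "to": ["https://www.w3.org/ns/activitystreams#Public"],
        },
    }

    # Both activities carry exactly the same fields; nothing marks the
    # first one as paid promotion.
    assert set(promoted_post) == set(organic_post)
    assert set(promoted_post["object"]) == set(organic_post["object"])
    ```

    Any "sponsored" flag would live only in the origin server's private database, which is exactly why a remote instance's "All" feed cannot filter it out.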

  • that's irrelevant. Nothing prevents influencers from promoting their products as they do on all the popular platforms. You're thinking of ads only as something coming from an ad server, but this is not necessarily the case.

  • I agree with the first part. However, this is from 2012, and in the meantime Linus himself admitted that he was not proud of behaving like that, took real measures, and sought help in order to improve himself.

  • this doesn't work. The AI still needs to know what CP is in order to create CP for negative use. So you first need to feed it CP. A recent example of how OpenAI was labelling "bad text":

    The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

    source: https://time.com/6247678/openai-chatgpt-kenya-workers/
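    The pipeline the quoted article describes, label toxic examples, train a detector, then check model output before it reaches the user, can be sketched roughly like this. OpenAI's actual detector is not public, so `is_toxic` here is a toy stand-in for a trained classifier:

    ```python
    # Hedged sketch of an output-side toxicity filter, assuming a trained
    # binary classifier exists. The keyword check below is only a stand-in
    # for a model trained on labeled examples like those described in the
    # Time article.

    def is_toxic(text: str) -> bool:
        # Stand-in for a classifier trained on labeled toxic snippets.
        banned = {"violence", "hate speech"}  # toy list, not a real model
        return any(term in text.lower() for term in banned)

    def filter_reply(model_output: str, fallback: str = "[filtered]") -> str:
        # Check the model's reply before it ever reaches the user, and
        # replace it if the detector flags it.
        return fallback if is_toxic(model_output) else model_output

    print(filter_reply("Here is a soup recipe."))          # passes through
    print(filter_reply("a message promoting violence"))    # replaced
    ```

    The same detector can also be run over training corpora to scrub toxic text before the next model is trained, which is the second use the article mentions, and it is why the detector itself had to be trained on the very material it is meant to catch.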

  • Are there any predators smart enough to strategize like this?

    it is the predators that build such passages. Have you ever seen any construction company building them? Even in the first photo, where one is under construction, there isn't a single human worker in sight

  • yes, you're very correct on that. I failed to write it in a way that leaves room for exceptions. I wanted to write something like "of the people who migrate to mostly atheistic (or at least less religious) countries, those who continue being fanatics about their religion do so by choice". Because they are now in a place where, if they want to get rid of that culture, it is easier to do so.

    Sure, there are people who migrate because they want to escape the oppression they experience in their home countries and decide to follow a completely different lifestyle, but they are not the majority. They surely exist, though.

  • Memes @lemmy.ml

    This is literally the internet nowadays without an adblocker

    Linux @lemmy.ml

    How can I find the reason my PC crashes?