Posts 0 · Comments 293 · Joined 1 yr. ago

  • sorry about the late response, i've got a lot of stuff going on currently, and when i checked earlier it seemed like you'd already gotten useful replies here anyway.

    we currently have a rule in place that blocks traffic with too high of a threat score. this rule was implemented before i joined, so i'll have to check with the team about the original reason for it and whether we want to relax it.

    at the very least the error message should be improved if we can do that; i think it's just returning a static message currently. the sketch below shows the direction i mean.
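
    purely as an illustration (not our actual Cloudflare setup; the threshold, score source and message wording are assumptions), the idea is to replace the static block page with a response that tells people what happened and what they can do:

    ```python
    # Illustrative sketch only: the actual block happens in a Cloudflare rule, and
    # the threshold and message wording here are assumptions, not our configuration.
    THREAT_SCORE_THRESHOLD = 50  # assumed value


    def check_request(threat_score: int, client_ip: str) -> tuple[int, str]:
        """Return (status_code, body) for an incoming request."""
        if threat_score <= THREAT_SCORE_THRESHOLD:
            return 200, "ok"
        # instead of a static "blocked" page, explain the block and the next step
        body = (
            "Your request was blocked because your network has a high threat score. "
            "If you think this is a mistake, contact the instance admins and include "
            f"your IP address ({client_ip}) and the time of the request."
        )
        return 403, body
    ```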

  • the list is not public, no. various instances have automods in place already, with varying aggressiveness.

    the community bans are applied automatically when an instance ban is issued; they've been reverted along with the instance unban.

  • please don't link to domains used by spammers

  • have you ever seen this since we updated to Lemmy 0.19.9+?

  • I truly appreciate the offer, but my concern isn't about money; it's about this taking away even more of my personal time. If Lemmy.World were my regular day job, rather than something I do on top of my actual day job, I could file this during regular working hours, since that would be time spent at work anyway.

    I've already been spending countless hours recently on other stuff that isn't directly tied to Lemmy.World but came up "around" it, including sending abuse reports to various instances about CSAM that federated to them a long time ago. That means time spent identifying such material, finding suitable abuse report mechanisms, and providing instructions for how to deal with it. Afterwards I have to review whether the content has been removed or needs further escalation, such as one case today where I filed a police report because neither the instance itself nor its hosting provider handles abuse reports at all.

    As mentioned before, there also seem to have been two different people involved in sending these messages. For the original person, most of the information is/was publicly available and has already been collected by various people, who would be in a much better position to report this content to law enforcement.
    The person sending the gore images did in fact use a Lemmy.World account in one case, and we have more information about that account than what is publicly available or available to users on other instances, so this is the only case where we'd be in a privileged position to report. However, that report would almost certainly not help any harassment investigation, as this copycat probably has no ties to the original harasser.

    If we had significantly more donations towards our foundation we'd also be able to pay someone to deal with things like this, but right now our monthly donations only just cover the hosting costs.

  • you could easily report this to the police yourself then. i don't really have anything more than what is publicly available, with the exception being one of the gore spam accounts.

    I'm not saying you have to, but given that various people have already collected a lot of information related to that stuff, they would be much better suited to actually reporting this to police somewhere.

  • Depends on the type of feature request.

    For most feature requests the project issue trackers are likely the best place.

    Alternative UIs and apps have their own issue trackers as well.

    If it's something that's just about configuration, or something we have built ourselves, !support@lemmy.world can be a good place for that.

    Feel free to reply here though and I can tell you where it's best placed.

  • we currently have our own solution for sending emails with a custom text explaining why people were rejected and what they can do next. if rejection reasons get added to lemmy, we'll have to review whether the built-in solution would be capable of adequately replacing this functionality.

    our current solution rejects applications and then deletes the user from the database to ensure that they can sign up again right away if they want; denied applications otherwise only get deleted after a week or so, and an appeal process would require support tickets and a lot more of our time to address them.

    our application process is fully automatic and just depends on certain words being provided and the email not being disposable. the sketch below shows roughly what that kind of check looks like.
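
    a rough sketch of such a fully automatic review (Python; the keyword list, domain list and helper functions are illustrative assumptions, not our actual implementation):

    ```python
    # Sketch of an automated application review as described above; the keywords,
    # domains and helpers are placeholders, not the actual Lemmy.World setup.
    REQUIRED_KEYWORDS = {"rules", "communities"}  # assumed required words
    DISPOSABLE_DOMAINS = {"mailinator.com", "dropmail.example"}  # assumed list


    def review_application(answer: str, email: str) -> tuple[bool, str]:
        """Return (accepted, reason); fully automatic, no human in the loop."""
        text = answer.lower()
        if not any(word in text for word in REQUIRED_KEYWORDS):
            return False, "your answer didn't include the information we ask for"
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in DISPOSABLE_DOMAINS:
            return False, "disposable email addresses are not accepted"
        return True, ""


    def send_rejection_email(email: str, reason: str) -> None:
        # stand-in for the custom email explaining the rejection and next steps
        print(f"to {email}: your application was rejected: {reason}")


    def delete_user_row(user_id: int) -> None:
        # stand-in for removing the account so the person can sign up again right away
        print(f"deleted user {user_id} from the database")


    def handle_application(user_id: int, answer: str, email: str) -> None:
        accepted, reason = review_application(answer, email)
        if not accepted:
            send_rejection_email(email, reason)
            delete_user_row(user_id)
    ```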

  • The link is up there in the post for you to use.

  • The screenshot in my previous comment is directly from their abuse form at https://abuse.cloudflare.com/csam. Your email is specifically about their proactive scanner, not about abuse reports that are submitted.

    They also explicitly state on their website that they forward any received CSAM reports to NCMEC:

    Abuse reports filed under the CSAM category are treated as the highest priority for our Trust & Safety team and moved to the front of the abuse response queue. Whenever we receive such a report, generally within minutes regardless of time of day or day of the week, we forward the report to NCMEC, as well as to the hosting provider and/or website operator, along with some additional information to help them locate the content quickly.

  • they do, they just don't require you to be registered with them anymore for their csam scanner:

  • potentially identifying information, such as addresses, must be removed. images of the person must either be heavily pixelated or entirely cut out.

  • the content i've seen gave me more of an impression of being captures of a live stream, but that's just guessing

  • likely unrelated, but I already forwarded the PM reports we received on LW to .ee admins a few hours ago. probably just a "normal" pm spammer.

  • unless you operate the instance that is being used to send this material, you can generally only work with the content that is being posted or sent in PMs. almost all identifying information is stripped when it leaves the local instance to be federated to other instances. even if there were a group of instances collaborating on e.g. a shared blocklist, abusers would just switch to other instances that aren't part of the blocking network.

    there's a reason why it's not recommended to run a lemmy instance with open signups if you don't have additional anti-spam measures and a decently active admin team. smaller instances tend to have fewer prevention measures in place, which puts a burden on everyone else in the fediverse who ends up on the receiving end of such content. unfortunately this is not an easy problem to solve without giving up (open) federation.

  • I'm sorry, sometimes it's hard to tell whether people actually mean it. I can totally see people commenting that and being serious.

  • mentally ill people can have plenty of time on their hands to invest this much effort in harassing others. people claiming that this can't be harassment are effectively supporting the harassment, as that shifts further blame onto the likely victim. obviously this is just speculation, as we don't know the full truth.

  • the problem with this spam, and with federated platforms generally, is that you can only really try detecting it based on the content. the accounts tend to get created on another instance and the messages then federate over to you, which means you won't see a lot of the identifying information you'd see for a local user, such as their IP address. the sketch below illustrates what that content-based approach boils down to.
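
    to make that concrete, content-based detection boils down to something like this (a toy sketch; the patterns are invented examples and a real automod is considerably more involved):

    ```python
    import re

    # Toy sketch of content-based PM filtering; the patterns are invented examples.
    # For remote accounts, the message content plus the sending actor/instance is
    # essentially all the receiving instance gets to work with (no IP address etc.).
    SPAM_PATTERNS = [
        re.compile(r"https?://\S*\.(?:ru|top)\b", re.IGNORECASE),  # assumed example
        re.compile(r"free\s+crypto", re.IGNORECASE),               # assumed example
    ]


    def looks_like_spam(message: str) -> bool:
        return any(pattern.search(message) for pattern in SPAM_PATTERNS)


    def handle_incoming_pm(sender: str, message: str) -> str:
        """sender is a federated actor like 'user@other.instance'; no IP is available."""
        if looks_like_spam(message):
            return f"flag {sender} for admin review and hide the PM"
        return "deliver"
    ```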