This is addressed in the upcoming Lemmy release 0.19.4, where contents will no longer be included in API responses; until then it's up to clients to actually hide them. Content is kept for a few days to allow you to undo a deletion, but you can also edit your content before deleting it to remove it. There is also a scheduled task, running once a week I believe, that replaces the contents of deleted comments with something like PERMANENTLY DELETED.
Regardless, as Lemmy is a public platform, you should be aware that people on linked platforms may be storing this information anyway and may not respect the edits/deletions at all.
Is this consistently happening with the same posts?
Do you see comments when you try the same post again later?
You mentioned this happens with both the default front end and also with Alexandrite, does it happen with the same posts on both of them?
Can you reproduce this in a private browser window?
If you can, would you mind sharing a post that this is happening with so we can take a look?
It would also be useful to see the associated error message.
there's an email address for our ticket system in the sidebar of this community.
it's either that or trying your luck by sending a private message to a random admin. tickets are often a better option.
we've switched from using multiple federation sending containers (which are supposed to split receiving instances across workers) to just using a single one.
over the last few days we've already been spending a bunch of time on a solution on our end that will allow us to selectively reject federated activities from kbin, e.g. allowing comments and posts while rejecting votes, which seem to be the main issue right now. we're still seeing some stability issues with this, though.
we're planning to unban the affected users from the communities once we have this stabilized, as we currently have to pick between:

1. defederate from kbin.social (and other kbin instances when they are affected)
2. reject all inbound activities from affected instances
3. temporarily ban affected users in the communities associated with the issue
4. drop all activities with certain characteristics, such as votes, when coming from a specific instance
5. drop all activities with certain characteristics, such as votes, when coming from a specific instance and exceeding a rate limit
options 1-3 are all possible with existing tools; 4 and 5 require a custom implementation on our side. as option 3 has the least overall impact of those, we decided to go with it for now, and it seems to be working out rather well so far, except for the experience of the individually affected users.
implementing option 4 has been our primary focus, but it takes time to ensure it works as expected, as we're essentially building this from scratch. option 5 may be implemented afterwards if we want to spend additional time on it.
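for illustration, option 4 essentially amounts to a per-instance filter on incoming activities. here's a minimal sketch of the idea in Rust; the names (`should_reject`, `ActivityKind`, `InboundActivity`) are ours for the example, not Lemmy's actual types, and the real implementation has to deal with full ActivityPub objects rather than a simple enum:

```rust
// Illustrative sketch only: drop certain activity kinds when they come
// from a specific instance (option 4). Not Lemmy's actual code.

#[derive(Debug, PartialEq)]
enum ActivityKind {
    Vote,
    Comment,
    Post,
}

struct InboundActivity {
    kind: ActivityKind,
    source_instance: String,
}

/// Returns true if the activity should be dropped before processing.
/// `rules` is a list of (instance, activity kind) pairs to reject.
fn should_reject(activity: &InboundActivity, rules: &[(&str, ActivityKind)]) -> bool {
    rules
        .iter()
        .any(|(instance, kind)| activity.source_instance == *instance && activity.kind == *kind)
}

fn main() {
    // reject votes from kbin.social, but let comments and posts through
    let rules = [("kbin.social", ActivityKind::Vote)];

    let vote = InboundActivity {
        kind: ActivityKind::Vote,
        source_instance: "kbin.social".to_string(),
    };
    let comment = InboundActivity {
        kind: ActivityKind::Comment,
        source_instance: "kbin.social".to_string(),
    };

    assert!(should_reject(&vote, &rules));
    assert!(!should_reject(&comment, &rules));
    println!("votes from kbin.social dropped, comments allowed");
}
```

option 5 would extend this with a per-instance rate limiter in front of the same check, so activities are only dropped once a threshold is exceeded.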
maybe I misunderstood your comment; I read your Texas AG example as asking for information about users. did you mean the Texas AG asking for the removal of comments where people state that they're trans?
as I'm very tired right now, I only want to comment on one of the arguments/questions you brought up.
you're asking for the difference between taking down content and providing information about users.
it's actually very simple. sharing non-public data is a very different story from removing access to otherwise public information, whether it originally comes from Lemmy.World or elsewhere.
when we take down content, even if it's more than legally strictly necessary, the harm of such a takedown is at most someone no longer being able to consume other content or interact with a community. there is no irreversible harm done to anyone. if we decided to reinstate the community, then everyone would still be able to do the same thing they were able to do in the beginning. the only thing people may be missing out on would be some time and convenience.
if we were asked to provide information, as in your Texas AG example, this would neither be reversible nor have a low impact on people's lives. in my opinion, these two cases, despite both having a legal context, couldn't be much further from each other.
We do question the validity of claims, but when it comes to takedowns of copyright related content, we simply do not have the resources to throw money at lawyers to evaluate this in detail. We can apply common sense to determine whether something appears to be a reasonable request, but we can't pay a lawyer to evaluate every single one. We also can't afford to go to court over every case, even if we were to win, because those processes take large amounts of personal time and carry a risk of significant penalties.
Legal advocates on Lemmy, or any other platform for that matter, are not a substitute for legal counsel.
Moving the instance behind Tor and hoping to never get identified?
As long as you're operating a service on the internet, you'll be bound by laws in one place or another. The only way around that is trying to avoid being identified and thereby evade prosecution, which is not a legal defense.
Lemmy.World is legally primarily bound by the countries listed here.
If we get a request, of course we will evaluate that request.
When it comes to taking down content, such as copyright infringing content, we may err on the side of caution to reduce the legal risk we're exposing ourselves to.
There is a very simple explanation for this specific case: nobody on hackertalks.com is subscribed to !goodoffmychest@lemmy.world!
Most community related activities on Lemmy will only be sent to instances that have at least one subscriber for the community.
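the delivery behavior above can be sketched roughly like this (a simplified model in Rust; the map layout and function name are ours for illustration, not Lemmy's internals):

```rust
use std::collections::{HashMap, HashSet};

// Simplified model: each community maps to the set of instances that have
// at least one subscriber there. An activity for a community is only
// delivered to those instances; everyone else never receives it.
fn delivery_targets(
    subscribers: &HashMap<String, HashSet<String>>, // community -> subscriber instances
    community: &str,
) -> Vec<String> {
    subscribers
        .get(community)
        .map(|instances| instances.iter().cloned().collect())
        .unwrap_or_default()
}

fn main() {
    let mut subs: HashMap<String, HashSet<String>> = HashMap::new();
    subs.entry("goodoffmychest@lemmy.world".to_string())
        .or_default()
        .insert("lemmy.ml".to_string());

    // lemmy.ml has a subscriber, so it gets the activity; an instance with
    // no subscribers to the community (like hackertalks.com here) gets nothing.
    assert_eq!(
        delivery_targets(&subs, "goodoffmychest@lemmy.world"),
        vec!["lemmy.ml".to_string()]
    );
    assert!(delivery_targets(&subs, "someothercommunity@lemmy.world").is_empty());
    println!("activities only delivered to instances with subscribers");
}
```

so the moment one hackertalks.com user subscribes to the community, that instance starts receiving new activities for it.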