Congratulations, you made more AI slop, and the problem is still unsolved 🤣
Current AI solves 0% of genuinely difficult programming problems. 0%. It's good at producing the lowest common denominator, and protocol design sits around the 99th percentile of difficulty here. You're not going to be developing anything remotely close to a new, scalable, secure, federated protocol with it.
Never mind the interoperability, client libraries, etc., or the proofs and protocol documentation, which exist before the actual code.
In that case, it's a solution to a problem Lemmy will soon have.
Which is bots.
Lemmy isn't flooded with bots and astroturfing because it's essentially too small to matter. The audience is something like < 0.001% of reddit's.
Once it grows, the problem comes here as well, and we have no answers for it.
It's a shitty situation for the internet as a whole, and the only solution is verifying humans. And corporations CANNOT be trusted with that kind of access/power.
Two years ago I said that the core problem with federated services was their abysmal scalability.
I essentially got ridiculed.
And here we are, with incredibly predictable scaling problems.
If we refuse to acknowledge problems till they become critical, we will never grow past a blip on the corner of the internet. Protocol development is HARD and expensive.
You can't really host your own AWS. You can self-host various amalgamations of services that imitate some of its features, but you can't self-host your own AWS by any stretch of the imagination.
And if you're thinking of something like LocalStack, that's not what it's for, and it has huge gaps that make it unfit for live deployment (it is, after all, meant for test and local environments).
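For reference, this is roughly the job LocalStack is built for. A minimal sketch, assuming its default edge port (4566) and the boto3 SDK; the bucket name and dummy credentials here are made up:

```python
import boto3  # AWS SDK, pointed at the local emulator instead of real AWS

# All requests go to the LocalStack edge endpoint on localhost,
# with throwaway credentials -- nothing here touches a real AWS account.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack's default edge port
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Typical integration-test flow: create a throwaway bucket, round-trip an object.
s3.create_bucket(Bucket="my-test-bucket")
s3.put_object(Bucket="my-test-bucket", Key="hello.txt", Body=b"hello")
print(s3.get_object(Bucket="my-test-bucket", Key="hello.txt")["Body"].read())
```

Spin it up on a laptop or in CI, throw test workloads at it, tear it down. That's what it's for, not serving production traffic.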
I mean, at this point you're just being intentionally obtuse, no? You are correct, of course, that volatile memory, considered from a system point of view, would be pretty asinine to try to store.
However, we're not really looking at this from a system's view, are we? Clearly you ignored all the other examples I provided just to latch onto the memory argument. There are many other ways that this data could be stored in a transient fashion.
Of course data is persisted somewhere, in a transient fashion, for the purpose of computation, especially when using event-based or asynchronous architectures.
And then it's promptly deleted or otherwise garbage collected in some manner (either actively or passively, usually passively). It could be in transitory memory, or it could be on high-speed SSDs during any number of steps.
It's also extremely common for data to be stored at the caching layer without violating requirements that the data not be retained, since those caches are transient. And that's before we even get to the reduced-rate "bulk" asynchronous APIs, which use idle, cheap compute to do work in a non-guaranteed amount of time and which require some level of storage until the data can be processed.
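To put the "transient" part in concrete terms, here's a toy sketch of the pattern (the names and TTL are made up, not any specific vendor's implementation): data sits in a cache only as long as it's useful, then gets evicted either passively on access or actively by a sweep.

```python
import time

class TransientCache:
    """Toy illustration: entries live only long enough to be used, then get evicted."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, bytes]] = {}

    def put(self, key: str, value: bytes) -> None:
        # Data is "persisted" only alongside an expiry timestamp.
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str) -> bytes | None:
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            # Passive eviction: expired data is dropped the moment it's touched.
            del self._store[key]
            return None
        return value

    def sweep(self) -> None:
        # Active eviction: e.g. a periodic background job purging expired entries.
        now = time.monotonic()
        for key in [k for k, (exp, _) in self._store.items() if exp <= now]:
            del self._store[key]
```

Nothing about that kind of setup implies an archive exists somewhere; it implies the opposite.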
A court order forcing them to start storing this data is a problem. It doesn't mean they already had it stored in an archival format somewhere; it means they now have to store it somewhere for long-term retention.
And the reason it's sad is that most of the individual engineers on proprietary projects care deeply about the project itself and have the same goals as they do with open source software, which is just to make something that's useful and do cool shit.
Yep, the business itself can force them not to take care of problems, or force them to go in directions that are counter to their core motivations.
Please see: https://github.com/jellyfin/jellyfin/issues/5415
Someone doesn't necessarily have to brute force a login if they know about pre-existing vulnerabilities that may be exploited in unexpected ways.