This is basically the same thing the big platforms do. You're just offloading the decision of what to see to a neural network and hoping it decides correctly. I'm not sure what a solution would be, but I wouldn't put my eggs in the LLM/AI basket. Not without a lot more detail from the models on why they made a decision.
I pronounce gif like "zyhfe" to annoy the jif and gif pronouncers equally. I also advocate for the initial array index being 0.5, to be equally annoying to programmers and mathematicians alike.
As a recently-former HPC/supercomputer dork: NFS scales really well. All this talk of encryption etc. is weird; you normally do that at the link layer if you're worried about security between systems. That, plus v4 to reduce some of the metadata chattiness, and you're good to go. I've tried scaling Ceph and S3 for latency on 100/200G links, and NFS is by far the easiest of them to scale. For a homelab? NFS and call it a day. All the clustered file systems will make you do a lot more work than just throwing "hard" into your NFS mount options and letting clients block I/O while you reboot, which for home is probably easiest.
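For reference, a minimal sketch of what that looks like in /etc/fstab (the server name "nas" and export path are made up; adjust to your setup):

    # hypothetical server and export path
    # nfsvers=4.2 gets you the v4 metadata wins; "hard" (the Linux
    # default) makes clients block and retry I/O instead of erroring
    # out while the server reboots
    nas:/export/data  /mnt/data  nfs  nfsvers=4.2,hard,noatime  0  0

With "hard" set, a server reboot just looks like a long pause to clients rather than a stream of I/O errors, which is exactly the behavior you want at home.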
Your continent is the perpetual exception to the rule. At least in North America there aren't a ton of spiders that pose a huge threat beyond the 8-legged trauma people have. Most of our spiders are lil jumpy boys. And web ones, but those are pretty obvious. The ones I'm not overly keen on are the daddy long legs. Legs for days, but they just seem like sea spiders on land.