Posts 0 · Comments 32 · Joined 10 mo. ago

  • 130ms is perceivable but still quite small, and you’d only pay it once per domain (per TTL), since the answer is cached after the first lookup. If you care enough to use it intentionally, I wouldn’t worry about it; you’ll rarely notice the difference.

    There are a few other services with a similar ethos that you may want to check out as alternatives; Quad9 is the one I remember off the top of my head. (Quick sketch of the cache effect below.)
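
    If you want to see the once-per-TTL behavior yourself, here’s a minimal sketch that times a cold lookup against a warm one. It assumes some caching resolver (systemd-resolved, dnsmasq, your router, or the DNS service itself) sits between you and the authoritative server; the hostname is just an example.

    ```python
    # Minimal sketch: time a cold DNS lookup vs. a warm (cached) one.
    # Assumes a caching resolver is somewhere in the path; otherwise
    # both lookups pay the full round trip.
    import socket
    import time

    def lookup_ms(host: str) -> float:
        start = time.perf_counter()
        socket.getaddrinfo(host, 443)
        return (time.perf_counter() - start) * 1000

    host = "en.wikipedia.org"
    print(f"cold: {lookup_ms(host):6.1f} ms")  # pays the full round trip
    print(f"warm: {lookup_ms(host):6.1f} ms")  # served from cache until the TTL expires
    ```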

  • Some things do charge different amounts, though. YouTube Premium, for example, is more expensive if you subscribe on iOS, but maybe that’s just because it’s Google.

    They also could have just not let anyone subscribe through the iOS app. Lots of things do that.

  • Japan already passed a law that explicitly allows training on copyrighted material. And many other countries just wouldn’t care. So if it becomes a real problem the companies will just move.

    I think they need to figure out a middle ground where we can extract value from the for-profit AI companies without actually restricting competition.

  • I don’t think they’re wrong in saying that if they aren’t allowed to train on copyrighted works then they will fall behind. Maybe I missed it in the article, but Japan for example has exactly that law (using copyrighted works to train generative AI is allowed).

    Personally I think we need to give them somewhat of an out by letting them do it but then taxing the fuck out of the resulting product. “You can use copyrighted works for training, but then 50% of your profits are taxed.” Basically a recognition that the sum of all copyrighted works is a societal good and not just an individual copyright holder’s.

    https://jackson.dev/post/generative-ai-and-copyright/

  • I can’t help, just chiming in to say that I’ve also had that experience with Immich. It’s the one service I’ve used that has somehow managed to break itself multiple times like this.

    No idea how it happens; I don’t do anything weird with the setup and it just breaks. I’d heard that feedback from other people too but didn’t believe it until it happened to me. It’s been a few months, so maybe I’ll try again; I’m just not too happy about importing hundreds of gigs of photos multiple times.

    So yea just… you’re not alone, good luck.

  • Because most people do not understand what this technology is, and attribute far too much control over the generated text to the creators. If Copilot generates the text “Trans people don’t exist”, and Microsoft doesn’t immediately address it, a huge portion of people will understand that to mean “Microsoft doesn’t think trans people exist”.

    Insert whatever other politically incorrect or harmful statement you prefer.

    Those sorts of problems aren’t easily fixable without manual blocks. You can train the models with a “value” system so they censor themselves, but that will still be imperfect, and they can still generate politically incorrect text.

    IIRC some providers offer two separate endpoints: one gives raw, unfiltered access to the model, and the other filters and censors the output. Copilot, as a heavily branded end-user product, obviously needs to be filtered. (Rough sketch of that layering below.)
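
    As a rough illustration of the two-endpoint split, a “filtered” endpoint is essentially a moderation layer wrapped around raw model access. Everything here is hypothetical: the function names aren’t any real provider’s API, and the blocklist stands in for a trained moderation classifier.

    ```python
    # Hypothetical sketch: a filtered endpoint layered over raw model access.
    # None of these names are a real API; the blocklist is a stand-in for a
    # real moderation model.
    REFUSAL = "Sorry, I can't help with that."
    BLOCKLIST = {"some banned phrase"}  # placeholder for a trained classifier

    def generate_raw(prompt: str) -> str:
        """Stand-in for a direct, unfiltered model call."""
        return f"model output for: {prompt}"

    def flags_harmful(text: str) -> bool:
        """Stand-in moderation check; real systems use a classifier, not a list."""
        lowered = text.lower()
        return any(phrase in lowered for phrase in BLOCKLIST)

    def generate_filtered(prompt: str) -> str:
        """The branded-product endpoint: screen both the prompt and the output."""
        if flags_harmful(prompt):
            return REFUSAL
        completion = generate_raw(prompt)
        return REFUSAL if flags_harmful(completion) else completion
    ```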

  • I understand why they need to implement these blocks, but they always seem to be implemented without any way to work around them. I hit a similar breakage using Cody (another AI assistant), which made a couple of my repositories unusable with it. https://jackson.dev/post/cody-hates-reset/

  • This showed up on HN recently. Several people who’ve written web crawlers pointed out that this won’t come close to working except on terribly written crawlers. Most crawlers cap the number of pages crawled per domain based on the domain’s popularity, so they’ll index all of Wikipedia but definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content. (Sketch of that budgeting below.)
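
    The per-domain budget those commenters describe is only a few lines in practice. A rough sketch, assuming some external popularity ranking is available; the rank thresholds and budget numbers are invented for illustration:

    ```python
    # Rough sketch of per-domain crawl budgets scaled by domain popularity.
    # The thresholds and budgets are made up for illustration.
    from collections import defaultdict
    from urllib.parse import urlparse

    def page_budget(domain_rank: int | None) -> int:
        """More popular domains (lower rank) earn a bigger crawl budget."""
        if domain_rank is None:          # unranked site: crawl a token amount
            return 100
        if domain_rank <= 10_000:        # Wikipedia-tier: crawl essentially all of it
            return 10_000_000
        if domain_rank <= 1_000_000:
            return 50_000
        return 1_000

    pages_crawled: defaultdict[str, int] = defaultdict(int)

    def should_crawl(url: str, domain_rank: int | None) -> bool:
        """Skip once a domain's budget is spent, so a million-page trap is ignored."""
        domain = urlparse(url).netloc
        if pages_crawled[domain] >= page_budget(domain_rank):
            return False
        pages_crawled[domain] += 1
        return True
    ```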

  • I am up to speed on this little drama, but it’s still unclear to me what they’re suing over.

    Yea, Honey effectively took over affiliate links. And yes, they were obviously shady (I never used it, because I did not know how they made money). But I don’t quite understand how other people trying to make money from affiliate links have a real claim against them.

    Or is this just a case of the influencers realizing they have the moral high ground and the public’s ear, and wanting a payout?

  • On the feature side: according to Mastodon’s recent 4.3 release post, development is only 4 full-time employees on a budget of under $500k annually. That is basically nothing in the realm of social media companies.

    Improving Mastodon’s features requires money and resources, but Mastodon’s users are unwilling to pay for instances and unwilling to fund development. Hell, the .world folks host a bunch of instances for collectively hundreds of thousands of users, and they take in about $1k a month in donations. I’m surprised that even covers hosting costs.

    So… it’s no wonder it isn’t going to be as polished as other social media in the ways that would reduce attrition.

  • Meh, just run several associated services and keep the same username on all of them. Nothing is interoperable, stop trying to force it. And a rogue app with bad user data handling practices is still going to leak your data, even if you store your copy of the data securely.

    My fediverse accounts are always "patrick@<service>.bestiver.se". I currently am only running Mastodon/Lemmy and a few supporting services (e.g. a link manager - https://bestiver.se/@patrick), but I'm adding more as I get to them. Pixelfed, Peertube, Loops(?), Piefed...

    Adopting this ActivityPods thing looks like it will require each Fediverse project to make what I'd guess are fairly significant changes to their user data handling, and none of those projects are properly funded for this. In fact what this actually seems to be doing is asking every other Fedi app to build on top of their user data API.

    I applaud the attempt at building a new standard in the Fediverse, but I doubt it's going to happen.

  • It’s definitely instance-dependent. I run the servers for my instance at the Hetzner data center closest to me (west coast USA) to cut latency, and I over-size/over-engineer them for better perf.

    My instance is open for registration too, if anybody reading here would find that useful.