Posts 10 · Comments 242 · Joined 2 yr. ago

  • Is that only between an instance of Discourse and the rest of the fediverse? Does it also apply to different Discourse instances talking to each other?

    Essentially, I’m wondering whether I can use one account to comment across different forums (my rough mental model is sketched below).
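
    For reference, here is a minimal sketch of how I picture ActivityPub letting a single actor (account) address activities to another instance’s inbox. The URLs, category, and topic below are made up, and the HTTP Signature step is omitted:

    ```python
    import json

    # Sketch only: one actor on forum A addresses a Create/Note activity to
    # forum B's inbox, which is the mechanism that would let a single account
    # comment across instances. All URLs below are hypothetical.
    actor = "https://forum-a.example/users/alice"        # the one account
    remote_inbox = "https://forum-b.example/inbox"       # another instance

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "to": ["https://forum-b.example/c/general"],     # hypothetical category
        "object": {
            "type": "Note",
            "attributedTo": actor,
            "content": "Commenting on forum B with my forum A account",
            "inReplyTo": "https://forum-b.example/t/some-topic/123",  # hypothetical topic
        },
    }

    # Forum A's server would sign this (HTTP Signatures) and deliver it, e.g.:
    # requests.post(remote_inbox, json=activity, headers=signed_headers)
    print(json.dumps(activity, indent=2))
    ```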

  • Thanks for the explanation! I wonder whether it is possible, or rather scalable, to let users pick their own parameters, or even define their own functions. Is this calculated and cached on the server side or the user side? (A rough sketch of what I mean follows below.)
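
    To make the scalability question concrete, here is a hypothetical sketch (not how any real server does it) of a ranking function with one tunable parameter, cached per parameter set on the server; fully user-defined functions would break that caching and probably have to run client-side:

    ```python
    import time
    from functools import lru_cache

    # Hypothetical "hot rank" with one tunable parameter (gravity). Exposing a
    # few parameters keeps server-side caching feasible; arbitrary user-defined
    # functions would not.
    def hot_rank(score: int, created_at: float, gravity: float = 1.8) -> float:
        """Older posts need more votes to stay on top (HN-style decay)."""
        age_hours = (time.time() - created_at) / 3600
        return score / (age_hours + 2) ** gravity

    @lru_cache(maxsize=1024)
    def ranked_feed(gravity: float) -> tuple:
        # A real server would query its database here; the cache is keyed on
        # the parameter value, so users with identical choices share one result.
        posts = [("post-a", 120, time.time() - 3600),
                 ("post-b", 40, time.time() - 600)]
        return tuple(sorted(posts, key=lambda p: hot_rank(p[1], p[2], gravity),
                            reverse=True))

    print(ranked_feed(1.8))
    ```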

  • I agree with you on both points. I fixed the wording in my comment from AI to generative tech, mostly because I honestly don’t fully have a good grasp of what exactly can be considered intelligence.

    But your second point, I think, is more important, at least to me. We can have debates on what AI/AGI or whatever is, but the thing that matters right now, and in the years (even months) to come, is that we as humans have multiple needs.

    We need to work, and some of our work requires generating something (code, art, blueprints, writing) that may be replaceable by these techs really soon. Such work takes years, even decades, of training and experience, especially domain knowledge that is invaluable for things like necessary human interaction, communication, bias detection and resolution, ... Yet within a couple of years, if all of that effort gets replaced by a bot (one that might have more unintended consequences but cuts costs), instead of being augmented/assisted, many of us would struggle to make a living, while the companies that build these tools profit and benefit from that.

  • I believe that with humans, the limits of our capacity to know, create, and learn, and the limited contexts in which we apply such knowledge and skills, may actually be better for creativity and relatability - knowing everything may not always be optimal, especially when it comes to subjective experience. Plus, such limitations may also protect creators from certain copyright claims: one idea can come from many independent creators and can be implemented in ways that are somewhat similar or vastly different. And usually, we as humans develop a sense of work ethics and attribute the inspirations for our work. There are others who steal ideas without attribution as well, but that’s where laws come in to settle it.

    On the side of tech companies using their work for training, AI gen tech is learning at a vastly different scale, slurping up their work without attributing them. If we’re talking about the mechanism of creativity, AI gen tech seems to be given a huge advantage already. Plus, artists/creators learn and create their work, usually within some context, sometimes with meaning. Excluding commercial works, I’m not entirely sure the products AI gen tech creates carry such specificity. Maybe they do, with some interpretation?

    Anyway, I think the larger debate here is about compensation and attribution. How is it fair for big companies with a lot of money to take creators’ work without paying or attributing them (or doing so only minimally), while those companies then use these technologies to make more money?

    EDIT: replace AI with gen(erative) tech

  • While a small, kinda innocuous example, this seems to showcase how trust can start to erode with these technologies in an implicit, inconspicuous way.

    Along the same lines, when art students submit their portfolios to schools/competitions, some may use generative tech and some may not. Would the admissions office reject them because they have doubts about the tools used to generate the work? Would they be transparent about such decisions? Anyone have thoughts/insights on this?

    The other way around (using AI to judge a submission/applicant) is also currently complicated and controversial, at least with new legislation in New York on transparency and accountability when companies use AI for hiring/screening applications (https://www.technologyreview.com/2023/07/10/1076013/new-york-ai-hiring-law/).

  • Privacy Guides recommends Raivo OTP; see https://www.privacyguides.org/en/multi-factor-authentication/

    Raivo OTP is a native, lightweight and secure time-based (TOTP) & counter-based (HOTP) password client for iOS. Raivo OTP offers optional iCloud backup & sync. Raivo OTP is also available for macOS in the form of a status bar application, however the Mac app does not work independently of the iOS app.

    Its GitHub repo is at https://github.com/raivo-otp (a minimal sketch of how the time-based codes work is below, for the curious).
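
    For anyone curious what "time-based (TOTP)" means under the hood, here is a minimal stdlib-only sketch of RFC 6238 (the secret below is a throwaway example; apps like Raivo add secure storage, sync, and HOTP counters on top of this):

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """RFC 6238: HMAC-SHA1 over the current 30-second time counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time() // period)   # HOTP uses an explicit counter instead
        msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F             # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Example base32 secret, the kind a provisioning QR code would contain.
    print(totp("JBSWY3DPEHPK3PXP"))
    ```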

  • Can you elaborate on how the Nostr protocol seems better than ActivityPub? Do you mean it in terms of privacy?

    I don’t know much about the underlying backend of Nostr, just that when I tried the platform, it was, like you said, full of BTC stuff without any meaningful content or interaction. (A rough sketch of its basic event format is below.)
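
    From the little I have read of NIP-01, the core of Nostr is just signed JSON events passed around by relays, which is where comparisons with ActivityPub usually start. A minimal sketch of building an event and its id (the pubkey and content are placeholders, and the Schnorr signature is omitted because it needs a secp256k1 library):

    ```python
    import hashlib
    import json
    import time

    # NIP-01: the event id is the sha256 of the canonical JSON array
    # [0, pubkey, created_at, kind, tags, content] with no extra whitespace.
    pubkey = "ab" * 32                  # placeholder 32-byte hex public key
    created_at = int(time.time())
    kind = 1                            # kind 1 = short text note
    tags = []
    content = "hello nostr"

    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

    event = {
        "id": event_id,
        "pubkey": pubkey,
        "created_at": created_at,
        "kind": kind,
        "tags": tags,
        "content": content,
        # "sig": 64-byte Schnorr signature over event_id (omitted here)
    }
    print(json.dumps(event, indent=2))
    ```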