
Posts
2
Comments
529
Joined
2 yr. ago

  • When I worked at Pixar long ago, an intern had a cron job intended to clean up his nightly build, and it ended up deleting everything on the network share for everyone!

    Fortunately there were back-ups and it was fine, but that day was really hilariously annoying while they tracked down things disappearing.
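    The classic way this happens is a cleanup script joining a base path with an empty or bad directory name and wiping the root instead. The comment doesn't describe Pixar's actual script, so this is just a hedged sketch of the failure mode and a guard against it, with hypothetical paths:

```python
import shutil
from pathlib import Path

# Hypothetical network-share layout for illustration only.
BUILD_ROOT = Path("/net/share/nightly_builds")

def clean_nightly(build_name: str) -> None:
    """Delete one nightly build directory, refusing anything outside BUILD_ROOT.

    Without this guard, an empty build_name (e.g. an unset variable in the
    cron job) would resolve to BUILD_ROOT itself and delete every build.
    """
    target = (BUILD_ROOT / build_name).resolve()
    if target == BUILD_ROOT or BUILD_ROOT not in target.parents:
        raise ValueError(f"refusing to delete {target}: outside {BUILD_ROOT}")
    shutil.rmtree(target)
```

    The guard rejects both the empty-name case and `..`-style escapes, which is exactly the kind of check that would have kept that cron job from taking out the whole share.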

  • It depends: some M-series devices are iOS and iPadOS devices, which would have this hardware issue but don't allow true background processing, so I don't believe it's possible to exploit them the way described.

    On Mac, if they have enough access to your device to set this up, they likely have other, easier-to-manage ways to get what they want than going through this exploit.

    But if they had your device and uninterrupted access for two hours then yes.

    Someone who understands it all more than I do could chime in, but that's my understanding based on a couple of articles and discussions elsewhere.

  • This requires local access and, presently, an hour or two of uninterrupted processing time on the same CPU as the encryption algorithm.

    So if you're like me, using an M-chip based device, you don't currently have to worry about this, and may never have to.

    On the other hand, the thing you have to worry about has not been patched out of nearly any algorithm:

    https://xkcd.com/538/

  • I know Apple Bad, but if you look at the things that the Justice Dept complains about, they're almost all either not true, no longer true, or Apple has already announced plans to address them.

    It's kind of... strange.

    At least the EU's regulations apply to something that is true.

  • Honestly, my dream Lemmy client would combine posts in my Home and All feeds based solely on the links in the posts, regardless of community or instance, and then provide UX to surface the rest of the information if I choose to click into it.

    Lemmy is designed around a concept that almost requires but definitely invites spamming links. Assuming you have good intentions and want to reach a wider federated audience, you would post your link to a few instances at once.
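    The grouping idea above can be sketched simply: normalize each post's link and bucket posts that share one. This isn't any real Lemmy client API; the post dicts and the (deliberately crude) URL normalization are assumptions for illustration:

```python
from collections import defaultdict
from urllib.parse import urlsplit

def canonical(url: str) -> str:
    """Normalize a link so the same story posted to many communities collapses.

    Simplified on purpose: lowercases everything and drops scheme, query,
    and trailing slash. A real client would want something more careful.
    """
    parts = urlsplit(url.lower())
    return f"{parts.netloc}{parts.path.rstrip('/')}"

def group_by_link(posts):
    """Group post dicts by their shared link; each group renders as one feed entry."""
    groups = defaultdict(list)
    for post in posts:
        groups[canonical(post["url"])].append(post)
    return groups
```

    A feed built this way would show one entry per story, with the per-community and per-instance posts listed underneath it on click.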

  • I read the article and its linked sources in a few cases. How else would I have been able to directly address them?

    When Australian scientists tested the accuracy of popular mushroom ID apps last year after a spike in poisonings, they found the most precise one correctly identified dangerous mushrooms 44 percent of the time.

    Notice this paragraph, which links to https://pubmed.ncbi.nlm.nih.gov/36794335/

    The abstract for that study names the following apps:

    Picture Mushroom (Next Vision Limited©), Mushroom Identificator (Pierre Semedard©), and iNaturalist (iNaturalist, California Academy of Sciences©)

    None of them use LLMs, and all of them predate the issue the article is talking about. I checked all of their pages on the iOS App Store, at least, before commenting: they're all 4+ years old and none use LLMs.

    Amusingly enough, the Public Citizen article linked earlier in OP's article calls out iNaturalist as something they've been working with to positively improve the experience of identifying mushrooms:

    https://www.citizen.org/article/mushroom-risk-ai-app-misinformation/

    The Fungal Diversity Survey, a project devoted to correcting the many gaps in understanding regarding fungal biodiversity, partners with iNaturalist to document and verify mushroom observation

    But ultimately there were no apps ACTUALLY TESTED that use OpenAI or LLMs for their identification.

  • While I would not advocate anyone taking up amateur mycology under any circumstances, let alone with only an app or a book to guide them, it's important to note that this article is biased and makes false or misleading claims.

    The main issue is that it talks about "AI" meaning LLM-based algorithms, but it cites a study of mushroom-identification apps in which all of the apps predate, and do not use, LLMs in their identification process.

    Countering misinformation with misinformation isn't generally the best option, in my opinion, so I just wanted to point that out.