
MH
Posts: 0 · Comments: 354 · Joined: 2 yr. ago

  • It does?

    Pixel Camera (previously known as Google Camera) can take full advantage of the available cameras and image-processing hardware, just as it can on the stock OS, and does not require GSF or sandboxed Google Play on GrapheneOS. Direct TPU and GXP access by Google apps, including Pixel Camera, is controlled by a toggle added by GrapheneOS and doesn't provide them with any additional access to data; the toggle exists for attack surface reduction. Every app can use the TPU and GXP via standard APIs, including the Android Neural Networks API and Camera2 API, regardless.

    The TPU and GXP are what enable apps to do on-device AI with whatever model they choose to bring.

  • Who knows. We do know that all of the Pixel photo features work, assuming you install the Pixel photo app and grant it NPU access.

    The exciting bit is that we know you can deny internet access and all the picture AI stuff still works.

  • Although I do see that the bot has a very slight right-wing bias, I like it. It prevents the normalization of literal propaganda outlets as news sources.

    I have a suggestion that might be a good compromise.

    The bot would only comment on posts from less factual news sources or from the extreme ends of the spectrum.

    On a post from the AP the bot would just not comment.

    On a post from Alex Jones or RT the bot would post a warning.

    That way there is less “spam”, but people are made aware when misinformation or propaganda is being pushed.

    Also, with such a system, smaller biases in the bot's ratings matter less.

  • It’s a waste of everyone’s time for sure. It’s just good business sense to make your customers happy though.

    As for typing speed, perhaps ya lol. You could be faster. But I think the best approach here is using high-quality, locally run LLMs that don't produce slop. For me, I can count on one hand how many times I've had to correct things in the past month. It's a matter of understanding how LLMs work and fine-tuning. (Emphasis on the fine-tuning.)

  • My main workstation runs Linux and I use llama.cpp. I use it with Mistral's latest large model, but I have used others in the past.

    I appreciate your thoughts here. Lemmy, I think, has a generally indiscriminate anti-LLM bias.

  • The LLM responses are more verbose, but not by a crazy amount. It's mostly polite social padding that some people appreciate.

    As for time, totally. It's faster to write "can't go to meeting, suggest rescheduling it for Thursday" and proofread the result than to write a full boomer-style letter.

  • I can understand that. I don't actually use ChatGPT, to be fair. I use a locally run open-source LLM. That said, I do think it's important to fine-tune any LLM you use to match your writing style. Otherwise you end up with generic ChatGPT-style writing.

    I would argue that not fine-tuning an LLM to match tone and style counts as either misuse or hobbyist use.

  • Because in my experience some business clients feel offended or upset when you aren't formal with them. American businesses seem to care less, I've noticed, but outside the USA (particularly in Germany) formality serves better. Also, the LLM uses the thread history to add context, stuff like "I know we agreed on meeting on Tuesday at last meeting but unfortunately I can't do that…" That stuff matters to clients.

    I don’t offload because I don’t remember. I offload because it saves me time. Of course I read what is written before I send it out.

  • I think it might be because AI (aka LLMs) is genuinely useful when used properly.

    I use AI all the time to write emails. I give the LLM the email thread along with instructions like "I can't make it Tuesday, ask if they can do Wednesday at 2pm."

    The AI will write out an email that’s polite and relevant in context. Totally worth it.

    I think the problem is people/companies trying to shove LLMs where they don’t make sense.
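The email-drafting workflow described in this comment can be sketched against a local llama.cpp server's OpenAI-compatible endpoint. Everything below is illustrative: the URL, prompt wording, function names, and temperature are assumptions, not the commenter's actual setup.

```python
import json
import urllib.request


def build_email_prompt(thread: str, instruction: str) -> list[dict]:
    """Assemble chat messages: the email thread as context plus a short instruction."""
    system = (
        "You draft polite, concise business emails. "
        "Match the tone of the existing thread."
    )
    user = f"Email thread so far:\n{thread}\n\nInstruction: {instruction}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def draft_email(thread: str, instruction: str,
                url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Send the prompt to a locally running llama.cpp server (hypothetical endpoint)."""
    payload = json.dumps({
        "messages": build_email_prompt(thread, instruction),
        "temperature": 0.3,  # keep drafts predictable; always proofread before sending
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Build the prompt locally; calling draft_email requires a running server.
    msgs = build_email_prompt(
        "Hi, can we meet Tuesday at 10?",
        "I can't make it Tuesday, ask if they can do Wednesday at 2pm.",
    )
    print(msgs[1]["content"])
```

The key point the comment makes survives in this shape: the thread supplies the context, the instruction is a one-liner, and the human proofreads before sending.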

  • I am going to ignore the weird race stuff. I don’t agree with it but don’t want to spend the energy.

    I will speak about this:

    I just suppose that the risk of alienating men and them getting more violent may outweigh the immediate benefit of increased plane safety, eventually turning against women themselves. But to prove or disprove that point, I'd love to see more numbers

    This again dehumanizes women and removes agency.

    You are saying that women are the tools used to prevent male violence. By treating women as a means to reduce violence, without considering the women themselves as people, you are dehumanizing them and removing their agency.

    Women are people just as men are people. Women are not the tools to reduce male violence.

    You also say giving women the choice to sit with women is radical. Women having the choice to protect themselves is not radical. It is a basis for a moral society.

    You shouldn’t need studies to prove how effective or not using women as tools to reduce male violence is.

    Women are not tools.