
  • I had asked for the same thing a while back but didn't really get much. The roundabout method I've found is to finetune FOSS LLMs on the data you want them to represent (largely text), and then do some prompt engineering to get them to say something you like.

    However, I haven't been able to find a test that can reliably show that a piece of text doesn't depend on specific weights. Cue the attacks on GPT-4 that deanonymise data it was trained on. You might also want to read the literature on DPT and shadowing techniques for red-teaming LLMs and LLM-generated text.

    Cheers
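
    To make the finetuning route above concrete, here's a minimal sketch of one early step: chunking a raw text corpus into fixed-length training samples for a causal LM. The whitespace "tokenizer" and the block size are stand-ins; a real pipeline would use the model's own subword tokenizer (e.g. from Hugging Face transformers).

    ```python
    # Hypothetical sketch: turn a raw corpus into uniform-length
    # token blocks, the shape most causal-LM training loops expect.

    def chunk_corpus(text: str, block_size: int = 8) -> list[list[str]]:
        """Split a corpus into fixed-length token blocks for training."""
        tokens = text.split()  # stand-in for a real subword tokenizer
        blocks = [
            tokens[i:i + block_size]
            for i in range(0, len(tokens), block_size)
        ]
        # Drop the trailing partial block, as most training loops do,
        # so every sample has a uniform length.
        return [b for b in blocks if len(b) == block_size]

    corpus = "the quick brown fox jumps over the lazy dog " * 4
    samples = chunk_corpus(corpus, block_size=8)
    print(len(samples), len(samples[0]))  # → 4 8
    ```

    From there, the samples would be tokenized to IDs and fed to a trainer; the prompt-engineering pass happens after the finetuned weights are in place.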

  • If you're talking about k8s or similar, the initial time investment is heavy. After that, though, it's not very hard to get containers running with HA, better network segmentation, and compatibility across runtimes. Containers are a lot more portable too, and allow granular levels of isolation and security.

    Also, I personally think SELinux is somewhat hard to do well.
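
    For a sense of what those HA and isolation knobs look like in practice, here's a minimal Deployment manifest sketch; all names and the image are placeholders:

    ```yaml
    # Hypothetical sketch: replicas gives HA, securityContext tightens
    # per-container isolation. Names and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3                  # run 3 copies for high availability
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: app
              image: example/app:latest
              securityContext:
                runAsNonRoot: true               # granular isolation
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
    ```

    Network segmentation would be layered on separately, e.g. with NetworkPolicy objects scoped to the same labels.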