
  • I agree with reveal.js; I recommend trying it with org-mode in Emacs with the plugin (plus you also get Beamer for free).

    Alternatively, it also works with Jupyter.

    This is what I use for every presentation I need to give.

  • Basically, ROCm and CUDA allow one to do math on the GPU. Most linear algebra operations (e.g. LLMs, NNs, and ML generally) can be parallelized over a GPU, which is much more performant than a CPU.

    To perform calculations on the GPU, one needs some sort of interface from their programming language of choice. NVIDIA has CUDA, which is in C++ with bindings for Python (PyTorch, TensorFlow, etc.) and Julia (Flux, etc.).

    ROCm is AMD’s solution; its bindings are young and not widely implemented.

    My advice: play around with Flux ROCm and PyTorch ROCm just to get an idea (see the sketch below). Suffice it to say, when I started doing RL and LLMs more seriously I gave up my Colab and sold my AMD cards to fund a 3060.
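
    A minimal sketch of what “doing math on the GPU” looks like from Python; ROCm builds of PyTorch reuse the torch.cuda API, so the same code targets an AMD or NVIDIA card depending on which build is installed:

    ```python
    import torch

    # Use the GPU if one is visible (CUDA or ROCm build), otherwise fall back to CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A toy linear-algebra workload: batched matrix multiplication.
    a = torch.randn(64, 512, 512, device=device)
    b = torch.randn(64, 512, 512, device=device)
    c = a @ b  # parallelized across the GPU's cores when a GPU is present

    print(device, c.shape)
    ```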

  • Re your update.

    My framework has been great, I’ve had no issues with it and I’m quite happy. Make sure to go with the matte screen though.

    That said, I think I was happier with my ThinkPad, but I have no good scientific reason for that; I suspect the nipple and keyboard are a big part of it.

  • Go with EndeavourOS. It won’t “just work”, but it will be the best compromise between confusing abstraction and low level frustrations.

    Fedora is good, but it abstracts a little too much away; that’s great when you understand how software works, but it’s very confusing when you’re new to Linux and programming.

    Arch is good, but you won’t be able to hit the ground running; you’d have to sacrifice a weekend to learn it.

    Go:

    1. [Optional] Fedora
    2. Endeavour
    3. Arch
    4. Learning
    • GhostBSD
    • Void
    • Gentoo

    Tinker with those in that order, and after about 6 months you’ll start to feel at home.

  • Worth mentioning that one can use bubblewrap directly over a chroot to get similar behaviour as well.

    It’s often simpler to use distrobox, but being able to rsync a chroot between devices can be very convenient.

  • I wish there were more variety.

    You basically have BSD and Linux, and in the Linux space {glibc/musl, systemd/OpenRC/runit, PKGBUILD/ebuild/deb/rpm}, which seems like a lot, but it’s the really niche stuff that’s fun to pull apart and play with.

  • These comments often indicate a lack of understanding about AI.

    ML algorithms have been in use for nearly 50 years. They’ve certainly become much more common since about 2012, particularly with the development of CUDA; it’s not just some new trend or buzzword.

    Rather, what we’re starting to see are the fruits of our labour. There are so many really hard problems that just cannot be solved with deductive reasoning.

  • Oh no you need a 3060 at least :(

    Requires CUDA. They’re essentially large mathematical equations that compute the probability of the next word.

    The equations are derived by trying different combinations of values until one works well (this is the “learning” in machine learning). The trick is changing the numbers in a way that gets better each time (see e.g. gradient descent, and the sketch below).
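
    A toy sketch of that idea (gradient descent on a single number, not an actual language model): start with a guess, measure how wrong it is, and repeatedly nudge it in the direction that reduces the error:

    ```python
    import torch

    # Toy problem: find w so that w * x matches y. The data is built with w = 3,
    # so the "learning" should drive our guess towards 3.
    x = torch.tensor([1.0, 2.0, 3.0, 4.0])
    y = 3.0 * x

    w = torch.tensor(0.0, requires_grad=True)  # start with a bad guess
    lr = 0.01                                  # learning rate: how big each nudge is

    for _ in range(200):
        loss = ((w * x - y) ** 2).mean()  # how wrong the current guess is
        loss.backward()                   # gradient: which direction reduces the loss
        with torch.no_grad():
            w -= lr * w.grad              # change the number so it "gets better"
            w.grad.zero_()

    print(w.item())  # ends up close to 3.0
    ```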

  • Many are close!

    In terms of usability though, they are better.

    For example, ask GPT-4 for an example of cross-site scripting in Flask and you’ll have an ethics discussion. Grab an uncensored model off Hugging Face and you’re off to the races (see the sketch below).
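
    A rough sketch of that second path, assuming the transformers library; the model id here is a placeholder for whichever uncensored model you pick off Hugging Face:

    ```python
    from transformers import pipeline

    # "someuser/some-uncensored-llm" is a hypothetical repo id; substitute any
    # uncensored chat/instruct model published on Hugging Face.
    generate = pipeline(
        "text-generation",
        model="someuser/some-uncensored-llm",
        device_map="auto",  # place the model on a GPU if one is available
    )

    prompt = "Show a minimal example of reflected cross-site scripting in a Flask view."
    print(generate(prompt, max_new_tokens=200)[0]["generated_text"])
    ```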