
  • Tesla's share price is up 40% in just one month. You can't short while it's rising, unless you want to lose shitloads of money. It's also halfway to a full recovery, which should happen within another month.

    What's happening is that everyone who was panic selling lost their money, while those who were buying from the panicking lemmings have already gotten rich. 40% returns in one month is just fucking bonkers!

  • They are extremely useful for software development. My personal choice is a locally running Qwen3, used through the AI Assistant in JetBrains IDEs (in offline mode). Here is what Qwen3 is really good at:

    • Writing unit tests. The result isn't necessarily perfect, but it handles test setup and descriptions really well, and those two take the most time. Fixing some broken asserts takes a minute or two.
    • Writing good commit messages based on the actual code changes. It's good practice to make atomic commits while working on a task, and coming up with a commit message every 10-30 minutes gets depressing after a while.
    • Generating boilerplate code. You should definitely use templates and code generators where you can, but that's not always possible. Well, Qwen is always there to help!
    • Inline documentation. It usually generates decent XDoc comments based on your function/method code. That's a really helpful starting point for library developers.
    • Autocomplete on steroids: it can complete not just the next "word" but the whole line, or even multiple lines of code, based on your existing code base. It's especially helpful when doing data transformations.
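    As a concrete illustration of the unit-test point (a hypothetical example, not from the comment): given a small function like the one below, an assistant will typically draft a test whose setup and name are fine, and whose asserted values may need a minute of manual fixing.

```python
# Function under test (hypothetical example).
def group_totals(rows):
    """Sum the 'amount' field per 'category' key."""
    totals = {}
    for row in rows:
        totals[row["category"]] = totals.get(row["category"], 0) + row["amount"]
    return totals

# The kind of test an LLM assistant might draft: the setup and the
# descriptive name are usually right; the asserts are what you review.
def test_group_totals_sums_amounts_per_category():
    rows = [
        {"category": "food", "amount": 10},
        {"category": "food", "amount": 5},
        {"category": "rent", "amount": 100},
    ]
    assert group_totals(rows) == {"food": 15, "rent": 100}
```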

    What it is not good at:

    • Doing the programming for you. If you ask an LLM to create code from scratch, it's no different from copy-pasting random bullshit from Stack Overflow.
    • Working on slow machines - a good LLM requires at least a high-end desktop GPU like an RTX 5080/5090. If you don't have such a GPU, you'll have to rely on a cloud-based solution, which can cost a lot and raises serious questions about privacy, security and compliance.

    An LLM is just another tool in your arsenal, like IDEs, CI/CD, test runners, etc., and you need to learn how to use all of these tools effectively. LLMs are really great at detecting patterns: feed them some code and ask them to do something new with it based on the patterns inside, and you'll get great results. Ask for random shit, and you'll get random shit.

  • It does respect robots.txt, but that doesn't mean it won't index the content hidden behind robots.txt. That file is context-dependent. Here's an example.

    Site X links to sitemap.html on its front page, and that page is blocked in X's robots.txt. When Google's crawler visits site X, it first loads robots.txt, follows its instructions, and skips sitemap.html.

    Now there's site Y, which also links to sitemap.html on X. In this context the active robots.txt is Y's, and it doesn't block anything on X (and it cannot), so now the crawler has the green light to pick up sitemap.html.

    This behaviour is intentional.
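    The key point is that each robots.txt only speaks for the site that serves it. A minimal sketch of that scoping with Python's stdlib urllib.robotparser (the hostnames and rules are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt served by site X: it blocks its own sitemap.
x_rules = RobotFileParser()
x_rules.parse([
    "User-agent: *",
    "Disallow: /sitemap.html",
])

# Checked against X's own rules, the URL is off-limits.
x_rules.can_fetch("Googlebot", "https://x.example/sitemap.html")  # False

# Hypothetical robots.txt served by site Y: it allows everything,
# and it has no authority over X's paths anyway.
y_rules = RobotFileParser()
y_rules.parse([
    "User-agent: *",
    "Allow: /",
])

# Y's rules say nothing about X, so they don't block the link to X.
y_rules.can_fetch("Googlebot", "https://x.example/sitemap.html")  # True
```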

  • What kind of code are you writing that your CPU goes to sleep? If you follow good practices like TDD, atomic commits, etc., and your code base is larger than hello world, your PC will be running at its peak quite a lot.

    Example: linting on every commit + TDD. You'll be making loads of commits every day; linting a decent code base will definitely push your CPU to 100% for a few seconds, and running the tests, even with caches, will push it to 100% for a few minutes. Add compilation for running the app on top of that - some apps take hours to compile.

    In general, text editing is a small part of the developer workflow. Only junior devs spend a lot of time typing stuff.
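    The lint-and-test-on-every-commit workload described above is usually wired up as a git pre-commit hook. A minimal Python sketch (the tool names are placeholders, not from the comment; substitute your project's own):

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-commit script: run the linter, then the
# test suite, and abort the commit if either step fails.
import subprocess
import sys


def run(cmd):
    """Run a command and return its exit code."""
    print("running:", " ".join(cmd))
    return subprocess.run(cmd).returncode


def precommit():
    # Each step can peg the CPU: seconds for linting a decent code
    # base, minutes for a full test run even with caches.
    for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
        code = run(cmd)
        if code != 0:
            sys.exit(code)  # a non-zero exit makes git abort the commit


# A real hook script would end with: precommit()
```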