
Posts 1 · Comments 473 · Joined 2 yr. ago

  • Definitely SQLite. Easily accessible from Python, very fast, universally supported, no complicated setup, and everything is stored in a single file.

    It even has a number of good GUI frontends. There's really no reason to look any further for a project like this.
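A minimal sketch of how little setup that takes (the file and table names here are made up for illustration):

```python
import sqlite3

# One file on disk, no server, no configuration.
con = sqlite3.connect("results.db")
con.execute("CREATE TABLE IF NOT EXISTS measurements (t REAL, value REAL)")
con.executemany("INSERT INTO measurements VALUES (?, ?)",
                [(0.0, 1.5), (0.1, 1.7), (0.2, 1.6)])
con.commit()

# Query it back with plain SQL.
rows = con.execute("SELECT value FROM measurements WHERE t >= 0.1").fetchall()
print(rows)  # [(1.7,), (1.6,)]
con.close()
```

The same results.db file opens directly in the GUI frontends, too.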

  • Honestly I think the complaints about the job market are overblown. If you are good then there will always be a job for you somewhere.

    If you've already tried programming and you enjoy it then it is a really great career. Crazy money (especially in the US) for low effort and low responsibility.

    Just be aware that CS is usually a lot more theoretical than most programming. You'll be learning about things like Hoare logic and category theory. Tons of stuff you only really need in the real world if you're doing formal verification or compiler design.

    Still, I kind of wish I did have that theoretical background now that I'm doing formal verification and compiler design! (I did a mechanical engineering degree.)

    Also you don't need a CS degree to get a programming job. I did a survey of colleagues once to see what degree they had and while CS was the most common, fewer than half had one. Most had some kind of technical degree (maths, physics, etc.), but some had done humanities and one guy (who was very good!) didn't have a degree at all.

    I wouldn't worry about the market. Maybe take a look at the syllabus for places you might apply to, e.g. here's the one for Cambridge. Also I guess an important question is what's the alternative? What would you do otherwise?

  • I've definitely seen "this is more correct, but all the other code does it like this so can you change it?"

    I can't say I entirely disagree with it either. Usually "more correct" doesn't mean "the existing code doesn't work at all", and keeping things consistent makes it easier to fix all of the code later, because you're only fixing one style instead of two (or more).

  • No, they're inherently optional in Git. There's no way to "check in" a git hook. You have to put a note in your README:

    Clone the repo and then please run pre-commit install! Oh and whatever you do don't git commit --no-verify!

    You definitely need to actually check the lints in CI. It's very easy though, just add pre-commit run -a to your CI script.
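For example, a CI step along these lines (GitHub Actions syntax; the step name is made up):

```yaml
- name: Run pre-commit on all files
  run: |
    pip install pre-commit
    pre-commit run --all-files   # same as -a; fails the build if any hook fails
```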

  • So... to store encrypted data that only the user can decrypt you don't need any fancy zero knowledge algorithms. Just have the user keep the encryption key.

    For authentication you could use one of these algorithms. OPAQUE seems to be popular. I'm not an expert but it seems like it has several neat zero-knowledge style properties.

    But probably forget about implementing it without a strong background in cryptography.

  • To check that people ran the pre-commit linters.

    "Committing itself won’t be possible"

    That's not how pre-commit hooks work. They're entirely optional and opt-in. You need CI to also run them to ensure people don't forget.

  • Normally when I merge a PR I put the long PR message (if there is one) into the merge commit (again, if there is one), rather than the shitty "Merge PR from patch1" that people seem to use.

    You can actually change the behaviour on GitHub to be sane: https://blog.mergify.com/how-to-change-the-default-commit-message-on-github/amp/

    If I'm not keeping the branch (usually PRs are not big enough to make preserving multiple commits useful) then I squash & merge which gives you the chance to edit the commit message and copy details from the PR message in.
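A rough local equivalent of GitHub's squash & merge, as a sketch (the throwaway repo path and the branch name patch1 are made up for the demo):

```shell
rm -rf /tmp/squash-demo && mkdir /tmp/squash-demo && cd /tmp/squash-demo
git init -q
git config user.email you@example.com && git config user.name You
echo base > file.txt && git add file.txt && git commit -qm "initial commit"

git switch -qc patch1                 # feature branch with two messy commits
echo one >> file.txt && git commit -qam "wip"
echo two >> file.txt && git commit -qam "more wip"

git switch -q -                       # back to the default branch
git merge --squash patch1 >/dev/null  # stage the whole branch as one change
git commit -qm "Add feature X

Paste the long PR description here instead of 'Merge PR from patch1'."
git log --oneline                     # two commits, with a useful message on top
```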

  • Based on my experience of AI coding I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON.

    As soon as you get anywhere that isn't all over the internet it starts to bullshit.

    But if you're working in a domain it's decent at, why not? I've found that in those cases fixing the AI's mistakes can be faster than writing the code myself. Often it's actually useful for helping me decide how I want to write code, because the AI does something dumb and I go "no, I obviously don't want it like that"...
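For reference, the sort of task it handles well: a tiny CSV-to-JSON converter (the download URL in the comment is a placeholder):

```python
import csv
import io
import json
from urllib.request import urlopen

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text into a JSON array of row objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# For a real download (placeholder URL):
#   csv_text = urlopen("https://example.com/data.csv").read().decode("utf-8")
csv_text = "name,score\nada,10\nbob,7\n"
print(csv_to_json(csv_text))
```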

  • TL;DR: Intellisense works best if you write bottom-up (true) and it means you have to remember less stuff (also true), therefore it makes you write worse code (very doubtful).

    "So I don’t think IntelliSense is helping us become better programmers. The real objective is for us to become faster programmers, which also means that it’s cheapening our labor."

    This doesn't make any sense though.

    1. People don't have unlimited time. Writing high quality code takes time and wasting it remembering or typing stuff that Intellisense can take care of means I have less time for refactoring etc. Also one of the really useful things about Intellisense is that it enables better refactoring tools!
    2. It doesn't make you dumber to use tool assistance. It just means you get less practice in the thing the tool helps you with. Does that matter? Of course not! Does it matter that I can't remember how to do long division because I always use a calculator? Absolutely not. Similarly, it doesn't matter that I can't remember off the top of my head whether a language uses starts_with, HasPrefix, or startswith, when Intellisense can easily tell me.
    3. You don't have to use the Intellisense suggestions. Just press escape. It's very easy.
    4. It's very well known that making something easier to do increases demand for it.

  • In my experience, taking an inefficient format and copping out by saying "we can just compress it" is always rubbish. Compression tends to be slow, rules out sparse reads, is awkward to deal with remotely, and you generally end up with the inefficient decompressed data in the end anyway, whether in temporarily decompressed files or in memory.

    I worked in a company where they went against my recommendation not to use JSON for a memory profiler output. We ended up with 10 GB JSON files, even compressed they were super annoying.

    We switched to SQLite in the end which was far superior.
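A sketch of why SQLite worked so much better for that kind of data (the schema here is invented for illustration): with an index, a query touches only the rows you ask for, instead of decompressing and parsing the whole file.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in practice this is a file, e.g. profile.db
con.execute("CREATE TABLE alloc (ts REAL, size INTEGER, site TEXT)")
con.executemany(
    "INSERT INTO alloc VALUES (?, ?, ?)",
    [(i * 0.001, i % 4096, f"frame_{i % 100}") for i in range(100_000)],
)
con.execute("CREATE INDEX idx_site ON alloc(site)")

# Sparse read: only the index and the matching rows are touched.
n, total = con.execute(
    "SELECT COUNT(*), SUM(size) FROM alloc WHERE site = 'frame_7'"
).fetchone()
print(n)  # 1000
```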

  • If I'm understanding you correctly: create a branch to mark where you are (git branch tmp), then abort the rebase. Switch to tmp to get the history like you wanted, then switch back. Finally, do a git rebase -i again, but immediately git reset --hard tmp. Now you have the resolved commits you want, and can delete any you don't want to redo with git rebase --edit-todo.

    Maybe.