Posts: 0 · Comments: 455 · Joined: 2 yr. ago

  • I'd say it's definitely worth it. I don't actually use nixos itself, but I do use nix a lot. I have everything I need for work in a home manager configuration, so I can literally just install nix and load up my config and have all programs and configuration of said programs installed and ready to go (on any UNIX system). I started doing this since changing jobs means a new machine, and I got really tired of all of the inconsistencies between machines when bringing over my dotfiles, and having to install a bunch of packages I use every time I changed jobs.

    I do want to make the switch from Arch to nixos on my personal machine eventually too, but I hardly spend any time on computers outside of work these days, unfortunately. But the great thing is that my home manager configuration can pretty easily slide right into a nixos configuration, which is what many people do.

  • ...

    Jump
  • Haskell. It's a fantastic language for writing your usual run of the mill DB-backed web APIs and can do a lot of things that other languages simply can't (obviously not in terms of computation, but in terms of what's possible with the type system).

    I've been writing it professionally for a while and am very happy with it. Would be nice if the job market for it was a bit broader. You can definitely get jobs doing it, you just don't have quite as broad of a pool to choose from.

  • Umm, queueing is standard practice particularly when a task is performance intensive and needs limited resources.

    Basically any programming language using any kind of asynchronous runtime is using queues in their scheduler, as well.

  • I'm not familiar with any special LLVM instructions for Haskell. Regardless, LLVM is not actually a commonly used backend for Haskell (even though it's available), since it's not great for optimizing the kind of code that Haskell produces. Generally, Haskell is compiled down to native code directly.

    Haskell has a completely different execution model to imperative languages. In Haskell, almost everything is heap allocated, though there may be some limited use of stack allocation as an optimization where it's safe. GHC has a number of aggressive optimizations it can do (that is, optimizations that are safe in Haskell thanks to purity that are unsafe in other languages) to make this quite efficient in practice. In particular, GHC can aggressively inline a lot more code than compilers for imperative languages can, which very often can eliminate the indirection associated with function calls entirely. https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/generated-code goes into a lot more depth about the execution model if you're interested.

    As for languages other than Haskell without such an execution model (especially imperative languages), it's true that there can be the overhead you describe, which is why the vast majority of them use iterators to achieve the effect, which avoids the overhead. Rust (which has mapping/filtering, etc. as a pervasive part of its ecosystem) does this, for example, even though it's a systems programming language with a great deal of focus on performance.

    As for the advantage, it's really about expressiveness and clarity of code, in addition to eliminating the bugs so often resulting from mutation.

  • Pushing to master in general is disabled by policy on the forge itself at every place I've worked. That's pretty standard practice. There's no good reason to leave the ability to push to master on.

    There's no reason to avoid force pushing a rebased version of your local feature branch to the remote version of your feature branch, since no one else should be touching that branch. I literally do this at least once a day, sometimes more. It's a good practice that empowers you to craft a high-quality set of commits before merging into master. Doing this avoids the countless garbage fix typo commits (and spurious merge commits) that you'd have otherwise, making both reviews easier and giving you a higher-quality, more useful history after merge.
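    A minimal sketch of that daily loop, using a local bare repository to stand in for the remote forge (all paths and branch names here are illustrative):

```shell
set -e
tmp=$(mktemp -d)

# A bare repository standing in for the remote forge
git init -q --bare -b master "$tmp/origin.git"

git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master  # name the branch master regardless of local defaults
git config user.email dev@example.com
git config user.name Dev

echo base > base.txt
git add base.txt && git commit -qm "initial commit"
git push -q origin master

# A feature branch, pushed once already
git switch -q -c feature
echo feature > feature.txt
git add feature.txt && git commit -qm "add feature"
git push -q -u origin feature

# Meanwhile, master moves ahead
git switch -q master
echo more >> base.txt
git commit -qam "upstream change"
git switch -q feature

# Rebase onto the new master, then force-push the feature branch.
git rebase -q master
git push -q --force-with-lease origin feature
```

`--force-with-lease` is the safer form of `--force`: it refuses the push if the remote branch has moved since you last fetched, instead of silently discarding whatever is there.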

  • Pretty much everything that can act as a git remote (GitHub, GitLab, etc.) records the activity on a branch and makes it easy to see what the commit SHA was before a force push.

    But it's a pretty moot point since no one that argues in favor of rebasing is suggesting you use it on shared branches. That's not what it's for. It's for your own feature branches as you work, in which case there is indeed very little risk of any kind of loss.

  • No, there are no fast-forwards with rebasing. A rebase will take the diff of each commit on your feature branch that has diverged from master and apply each in turn, creating a new commit for each one. The end result is that you have a linear history as though you had branched from master and made your commits just now.

    If you had branched like this:

    A -> B -> C (master)
     \
      -> D (feature)

    It would look like this after merging master into your feature branch:

    A -> B -> C (master) -> E (feature)
     \                     /
      -> D ---------------

    And it would look like this if you instead rebased your feature branch onto master:

    A -> B -> C (master) -> D' (feature)

    This is why it's called a "rebase": the current state of master becomes the starting point or "base" for all of your subsequent commits. Assuming no conflicts, the diff between A and D is the same as the diff between C and D'.
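    The whole sequence can be replayed in a throwaway repository (commit messages A through D mirror the diagrams; everything else here is illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev

echo A > file && git add file && git commit -qm "A"

git switch -q -c feature            # branch off at A
echo D > d.txt && git add d.txt && git commit -qm "D"

git switch -q master                # master moves ahead: B, C
echo B >> file && git commit -qam "B"
echo C >> file && git commit -qam "C"

# Rebase feature onto master: D is replayed on top of C as D'
git switch -q feature
git rebase -q master

git log --oneline                   # linear: D' on top of C, B, A
```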

  • Huh, you know what, maybe I'll give something like that a try. In the past I've tried doing one worktree per branch, but it was a pretty big hassle since I'd have to copy over a bunch of files every time (stuff sitting in the directory but not version-controlled). Yeah it can be automated, but it didn't seem worth it. But a persistent set of work trees that I can use to parallelize when needed sounds pretty good.
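    A sketch of that setup with git worktree (paths and branch names illustrative): each worktree is a separate checkout sharing one object store, though note that untracked files remain per-worktree.

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev
echo hello > file && git add file && git commit -qm "initial"

# A second, persistent working tree on its own branch, sharing the
# same object store -- no extra clone needed, and the checkout is
# instant. Untracked files still have to be copied in per-worktree.
git worktree add -q -b experiment ../experiment-tree

git worktree list
```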

  • That's exactly the same thing. A branch is nothing more than a commit that you've given a name to. Whether that name is your original branch's name or a new branch's name is irrelevant. The commit would be the same either way.

    A junior can't actually do any real damage or cause any lasting issue. Even if they force push "over" previous work (which, again, just points their branch at a new commit that doesn't include the previous work), that work is not lost, and it's trivial to point their branch back at the good commit they had previously. It's also a good learning opportunity. The only time you can actually lose work is if you throw away uncommitted changes, and force pushing is completely irrelevant to that.
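    A sketch of that recovery (names illustrative): even after the branch pointer moves past it, the old commit stays in the object database and the reflog.

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev

echo base > file && git add file && git commit -qm "base"
echo good >> file && git commit -qam "good work"
good=$(git rev-parse HEAD)

# Simulate "force pushing over" the work: the branch pointer moves
# to a commit that no longer includes "good work"
git reset -q --hard HEAD~1

# Nothing is lost -- the reflog still records the old tip
git reflog | head -n 3

# Point the branch back at the good commit
git reset -q --hard "$good"
```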

  • In case you're not familiar, https://en.m.wikipedia.org/wiki/Grok.

    It's somewhat common slang in hacker culture, which of course Elon is shitting all over as usual. It's especially ironic since the word roughly means "deep or profound understanding", which their AI has anything but.

  • Uh, it's definitely a bad idea to be concurrently developing on the same branch, for a lot of reasons, large org or not. That's widely considered a bad practice and a recipe for trouble. My org isn't that huge; our team has 9 developers, myself included, working on our repo. We still do MRs because that's the industry-standard best practice and it sidesteps a lot of issues.

    Like, how do you even do reviews? Patch files?

  • Ah gotcha.

    "…like to rebase after fetching and before pushing. IMO that's the most sensible way to use it even in teams that generally prefer merge."

    What do you mean? Like not pushing at all until you're making the MR? Because if the branch has ever been pushed before and you rebase, you're gonna need to force push the branch to update it.

    Personally I'm constantly rebasing (like many times a day) because I maintain a clean commit history as I develop (small changes to things I did previously get commits and are added to the relevant commit as a fixup during interactive rebasing). I also generally keep a draft MR up with my most recent work (pushing at end of day) so that I can have colleagues take a look at any point if I want to validate anything about the direction I'm taking before continuing further (and so CI can produce various artifacts for me).
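    That fixup workflow, sketched in a throwaway repo (commit messages illustrative). Setting GIT_SEQUENCE_EDITOR=true accepts the generated todo list as-is, so the rebase runs non-interactively here:

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev

echo base > file && git add file && git commit -qm "base"

echo 'feature v1' > feature.txt
git add feature.txt && git commit -qm "add feature"

# A small correction to earlier work, recorded as a fixup of
# the commit whose message matches "add feature"
echo 'feature v2' > feature.txt
git commit -qa --fixup ":/add feature"

# Autosquash folds the fixup into "add feature"
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2

git log --oneline   # base, add feature -- no fixup commit left
```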

    "It's also not obvious to beginners since pull is defaulted to fetch+merge."

    Yeah, pull should definitely be --ff-only by default and it's very unfortunate it isn't. Merging on pull is kind of insane behavior that no one actually wants.
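    Until that default changes, you can opt in per repository (or with --global for everywhere):

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"

# Make `git pull` refuse to create merge commits: it fast-forwards
# when possible and errors out otherwise, instead of silently merging
git config pull.ff only

git config pull.ff   # prints "only"
```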

  • I was replying to the other comment, not yours. Though there's not really a way of using rebasing without force pushing unless it's a no-op.

    Rebasing is really not a big deal. It's not actually hard to go back to where you were, especially if you're using git rebase --interactive. For whatever reason people don't seem to get that commits aren't actually ever lost and it's not that hard to point HEAD back to some previous commit.

  • Yeah it is something people should take time to learn. I do think its "dangers" are pretty overstated, though, especially if you always do git rebase --interactive, since if anything goes wrong, you can easily get out with git rebase --abort.

    In general there's a pretty weird fear that you can fuck up git to the point at which you can't recover. Basically the only time that's really actually true is if you somehow lose uncommitted work in your working tree. But if you've actually committed everything (and you should always commit everything before trying any destructive operations), you can pretty much always get back to where you were. Commits are never actually lost.
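    A quick sketch of that safety net (names illustrative): force a conflict, then abort back to exactly where you started.

```shell
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev

echo base > file && git add file && git commit -qm "base"
git switch -q -c feature
echo feature-change > file && git commit -qam "feature"
git switch -q master
echo master-change > file && git commit -qam "master"

git switch -q feature
before=$(git rev-parse HEAD)

# Both branches edited the same line, so this rebase hits a conflict...
git rebase master || true

# ...and --abort puts the branch back exactly where it was
git rebase --abort
```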

  • Or, you know, on your own feature branch to clean up your own commits. It's much, much better than constantly littering your branch's history with useless merge commits from upstream, and it lets you craft a high-quality, logical commit history.

  • This is a really bad take and fundamentally misunderstands rebasing.

    First off, developers should never be committing to the same branch. Each developer maintains their own branch. Work that needs to be tested together before merging to master belongs on a dedicated integration branch that each developer merges their respective feature branches into. This is pretty standard stuff.

    You don't use rebasing on shared branches, and no one arguing for rebasing is suggesting you do that. The only exception might be perhaps a dedicated release manager preparing a release or a merge of a long-running shared branch. But that is the kind of thing that's communicated and coordinated.

    Rebasing is for a single developer working on a feature branch to produce a clean history of their own changes. Rebasing in this fashion doesn't touch any commits other than the author's. The purpose is to craft a high quality history that walks a reader through a proposed sequence of logical, coherent changes.

    Contrary to your claim, a clean history is incredibly valuable. There are many tools in git that benefit significantly from clean, well-organized commits: git bisect, git cherry-pick... pretty much any command that plucks commits from history for some reason. Even commands like git log -L or git blame are far more useful when the commit referenced is not some giant amalgamation of changes from all over the place.

    When working on a feature branch, if you're merging upstream into your branch, you're littering your history with pointless, noisy commits and making your MR harder to review, in addition to making your project's history harder to understand and navigate.

  • 2 things:

    1. You don't pull rebased work pretty much ever. Rebasing is for feature branches by a single author to craft a high quality history, generally. It's much, much better than littering your branch with merge commits from upstream.
    2. If for some reason you do need to pull rebased changes, you simply do git pull --rebase. Works without issue.
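    A sketch of point 2 (paths and branch names illustrative): one clone rewrites history and force-pushes, and the other picks it up cleanly with git pull --rebase.

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare -b feature "$tmp/origin.git"

# First clone pushes some work
git clone -q "$tmp/origin.git" "$tmp/a"
cd "$tmp/a"
git symbolic-ref HEAD refs/heads/feature  # pin the branch name
git config user.email a@example.com && git config user.name A
echo one > f && git add f && git commit -qm "one"
echo two >> f && git commit -qam "two"
git push -q origin feature

# Second clone now has that history
git clone -q "$tmp/origin.git" "$tmp/b"
git -C "$tmp/b" config user.email b@example.com
git -C "$tmp/b" config user.name B

# The first clone rewrites "two" and force-pushes
git commit -q --amend -m "two (reworded)"
git push -q --force origin feature

# The second clone picks up the rewritten history; the old "two"
# is patch-identical to the rewritten one, so rebase skips it
git -C "$tmp/b" pull -q --rebase origin feature
```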