
  • There are other reasons to use it. A major one is doing a “code review” of changes before committing, or even deciding to drop a chunk of code from a commit entirely (like a debug statement that is no longer necessary).

    I’m all about frequent commits (and right-sized commits), but the functionality can still be beneficial even in those scenarios.

    I also don’t care if I have a broken commit. This turns up very quickly, and there is zero expectation that feature branches are always in a working/stable state. The expectation is that pending work gets off the local machine on a regular interval.

  • They basically aren’t?

    If you’re doing one-off hobbyist stuff, maybe.

    But literally anything in a professional setting should be in text that can be committed and searched in a source code repository. If you can’t commit it to git, it didn’t happen.

  • You can stage individual chunks of a file.

    Useful if you have a large set of changes you want to make separate commits for. I also just find that it’s a good way to do a review of each chunk before committing changes blindly.

    Give it a shot some time; worst case, you stage some stuff that you don’t want to commit, and it’s non-destructive.
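
    For anyone who hasn’t tried it, here’s a rough sketch of what the interactive prompt looks like (the file and diff are made up for illustration):

    ```
    $ git add -p
    diff --git a/example.py b/example.py
    ...
    Stage this hunk [y,n,q,a,d,s,e,?]? n
    ```

    y stages the hunk, n skips it, s splits it into smaller pieces, e lets you edit it by hand, and q quits.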

  • I use git on the CLI exclusively. I almost never rebase, but otherwise get by with about 5-10 commands. One that will totally change your experience is git add -p.

    I also have my diff/mergetool configured to use Kaleidoscope, but still do everything else in the CLI.

  • I think you’ll have better luck with podcasts. Technical books tend to have long tracts of code that would be excruciating to listen to.

    You might enjoy:

    • CoRecursive
    • Software Unscripted
    • This Developer’s Life
  • We used JIRA effectively at my last job. The things that made it work for us:

    • stop adding shitloads of required fields: title, description, branch, priority (defaulted), status (defaulted), type (bug/feature). We might have had some others, but that’s all I remember being required.
    • stop writing shitty descriptions: spend more time writing something that your co-worker can use. Respect their time enough to include the detail they need to actually use the ticket. Be available to answer questions when they are assigned a ticket you wrote.
    • you don’t need extremely granular statuses: the functional role of the assignee is enough to determine what “state” it’s in. Trying to codify a unidirectional flow of tickets is maddening and overly complicated. Work is messy; it flows back and forth. You do not need a “rejected by qa” status; just leave it open and reassign to the developer with a comment. Managers find out when individuals are submitting half-assed work on a regular basis; you don’t need JIRA for that (unless you need metrics to fire them… different problem).

    I agree with the premise of the article: JIRA is a communication tool, not a management tool.

  • I use dev containers on Mac. It’s not just about launching services that you need to test your code; it’s about specifying the entire build toolchain to get a deterministic dev environment in an isolated way.

    You don’t need to manage the docker containers at all; vscode handles their lifecycle.

    You can specify different extensions/configurations per project, so if you bounce between several languages, you’re only using the extensions/configs for a given project.

    It also allows for a mostly seamless debugger experience with the browser when you launch a process.

    The nice thing is that it sits off to the side: you can use your docker-compose as you normally would, but if you want to provide a full working dev environment for contributors, basically all they need is docker and vscode installed and they can get started.

    The devcontainer spec is based on open standards, so it will probably end up in other editors, because it solves a huge problem for teams. The only thing I think will come close is Nix, but it’s limited in scope in some important ways for this use case.
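
    As a rough sketch (the name, service, and extension list here are placeholders, not from a real project), a compose-based devcontainer.json looks something like this:

    ```jsonc
    {
      // Reuse the compose file you already have; vscode attaches to this service.
      "name": "my-project",
      "dockerComposeFile": "docker-compose.yml",
      "service": "app",
      "workspaceFolder": "/workspace",
      // Per-project editor setup, so extensions only load for this project.
      "customizations": {
        "vscode": {
          "extensions": ["dbaeumer.vscode-eslint"]
        }
      },
      "forwardPorts": [3000]
    }
    ```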

  • I have no idea what you mean when you say you found an error in the design where it says INSERT instead of SELECT.

    If you are relying on a design doc with SQL in it, that’s a massive waste of time.

    How many tables are in the schema? Have you reviewed them? Are there naming conventions being followed, or is everything inconsistently named? Are there specific cases where tables are not normalized properly, so you can ask pointed questions about why they are that way? If the person who designed the schema is making “trivial” mistakes, there’s no reason to assume that the stuff that doesn’t make sense to you was done intentionally.

    I guess what I’m saying is: you need to do some due diligence, survey the schema, and write down some specific questions. That may lead to writing a UML or other doc to identify errors, but it doesn’t sound like you’ve done that yet.
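
    A quick way to start that survey, assuming the database exposes the standard information_schema views (the WHERE clause filters out Postgres system schemas; adjust for your database):

    ```sql
    -- List every table and column so you can spot naming inconsistencies
    -- and candidates for normalization questions.
    SELECT table_name, column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY table_name, ordinal_position;
    ```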

  • You said you were in the role of the “front end dev”. I presumed a structure of an API (usually implemented by a “backend dev”), and a UI (usually implemented by a “front end dev”).

    My advice still stands:

    You need to clarify the interface where your respective responsibilities are handed off.

    If you are implementing the API, you can still produce the same document and then you need to get the other people that need to use it to verify that they can build what they’re doing from that. This means they will need to map the data from the API into the UI elements they need to provide. It also means that someone will need to see how that data will be sourced from the database, and identify anything that is not available in the database.

  • I understand the predicament you are in, but I don’t know that you’ll get very far with your current approach.

    The actual artifact you need from your collaboration is the list of API endpoints and the structure of the request and response payloads for each one. This doesn’t need to be UML; it can literally be a text doc.

    By asking for something highly structured that the other people may not know how to make, or do not have the tools to make, you’re putting them in a position where they have to acknowledge that they don’t know how to do something, or causing them to do work that they don’t value. Once you’ve had success on a team and developed trust/respect, you can push for adding process/tooling, but if you’re new to a team, you really have to work with what you’ve got.

    Either of you could write a simple doc outlining the API and then collaborate on that (it’s also much easier to actually comment/collaborate on than a diagram, anyway).
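
    Something as simple as this is usually enough to collaborate on (the endpoint and fields here are hypothetical):

    ```
    GET /api/orders/{id}

    Response 200:
    {
      "id": "string",
      "status": "pending | shipped | delivered",
      "items": [{ "sku": "string", "quantity": "number" }]
    }

    Response 404:
    { "error": "order not found" }
    ```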

  • I haven’t made a UML diagram in years. Or an ER diagram, for that matter.

    Getting a schema dump and/or generating a diagram from an existing system would be useful; it won’t be UML, but it can convey similar information. At a certain point, keeping an updated UML diagram is extra work that is almost guaranteed to go out of date instantly.

    It really depends on context, but if I’m just talking about estimating something, it’s usually rounding a decimal to a whole number, or, if it’s already a whole number, rounding it to the closest value divisible by 5 or 10.

    Other than that, it’s basically just about reducing significant figures to make rough estimates easier. For example, 487 × 2.1 becomes roughly 500 × 2, call it 1000.

  • The problem is that when you are alerted about trivial/non-actionable stuff, it contributes to “alert fatigue” and you just start ignoring all of the alerts.

    As far as I’m aware, there’s also no way to triage an alert from an install other than to upgrade the offending package, which means you can’t really discriminate on the basis of “acceptable risk”.

  • There is no incentive to add the friction of gas or PoW for these types of systems.

    The parties involved can have a shared log and private keys for signing entries. Party A provides a thing and Party B signs an entry that says they were provided with the thing. Party A can wait for that signed entry before releasing the goods, etc. The problem with using a blockchain to track physical stuff is that handoffs are not instantaneous, so there’s always lag between the real state of the world and what the log says. In practice, this may be a few seconds, and a human might wait for confirmation before physically granting access to a recipient.

    To put it another way, the party that is signing is not incentivized to forge that they have received an object from someone else, as that is effectively the fulfillment of the obligation. They’re only going to sign an entry if they get the object.
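
    A minimal sketch of that shared-log idea, using Node’s built-in crypto and made-up party/field names:

    ```typescript
    import { generateKeyPairSync, sign, verify } from "node:crypto";

    // Each party holds its own key pair; public keys are shared up front.
    const { publicKey: partyBPublic, privateKey: partyBPrivate } =
      generateKeyPairSync("ed25519");

    // Party B signs an entry acknowledging receipt of a specific item.
    const entry = JSON.stringify({
      item: "pallet-1234", // hypothetical identifier
      receivedBy: "party-b",
      receivedAt: new Date().toISOString(),
    });
    const signature = sign(null, Buffer.from(entry), partyBPrivate);

    // Party A (or anyone with access to the shared log) can verify the
    // acknowledgement before releasing the goods.
    const ok = verify(null, Buffer.from(entry), partyBPublic, signature);
    console.log(ok); // true
    ```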

  • Sorry, I didn't mean to reference the detail member; I meant "extension members" as defined in the RFC.

    In the RFC, they are outlined as top-level elements. In the version I proposed, these are bundled up inside of an optional context member. This can make serialization and deserialization a little easier to implement in languages that support generics, without the need to subclass for the common elements. The RFC specifically defines "extension members" as optional. The key difference is that in what I was describing, they'd be bundled into one object, rather than being siblings at the top level of the response.

    It also side-steps any future top-level reserved keyword collisions by keeping "user-defined" members in a separate box.
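
    To illustrate (this is the shape I was proposing, not what the RFC defines): the standard members stay fixed, and anything user-defined hangs off one generic member, so a client can deserialize every problem response with a single generic type:

    ```typescript
    // Standard problem-details members, plus one bundled "context" member
    // for user-defined extension members (my proposal, not the RFC's layout).
    interface ProblemDetails<TContext = unknown> {
      type: string;
      title: string;
      status: number;
      detail?: string;
      instance?: string;
      context?: TContext; // extension members live here instead of at the top level
    }

    // Example: a rate-limit problem with its own context shape.
    interface RateLimitContext {
      retryAfterSeconds: number;
    }

    const throttled: ProblemDetails<RateLimitContext> = {
      type: "https://example.com/probs/rate-limited",
      title: "Too Many Requests",
      status: 429,
      context: { retryAfterSeconds: 30 },
    };
    ```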

    You seem to be laboring under the notion that this spec produces something that can be entirely negotiated by generic clients, but I don't see that at all. Even for "trivial" examples (multiple validation errors, or rate-limit throttling), clients would need to implement specialized handlers, which is only vaguely touched upon by the need to have a "problem registry".

    And, like it or not, considering how easy or messy it is for a downstream client to consume a result is actually an important part of API design. I don't see how the browser, javascript, and the Fetch API behavior aren't relevant considerations when we're talking about extending HTTP with JSON responses.

    Did you author this RFC? I don't exactly understand why you seem to be taking the criticism personally.