
  • monopoly: the exclusive possession or control of the supply of or trade in a commodity or service.

    GitHub is not a monopoly: it has competition. If you're upset about its market share, switch to GitLab, Bitbucket, or host your own instance. If you're upset about people not being aware of the other options, be an advocate and spread awareness of the alternatives.

  • I absolutely detest leetcode-style interview questions. I am good at solving problems and writing modular code. I am not good at writing search algorithms. Any guesses which one of those is more relevant to my job? 99% of development does not involve writing low-level algorithms, because, guess what, someone else already did that for you. It's called a library!

  • As far as I'm concerned the advantage is I can have three windows (or three editor views) tiled horizontally and each one is the perfect width. A half-width window (half of 1080p/16:9) is too narrow and a full-width window wastes space, but a 2/3-width (of 1080p) window is about perfect. If I tried to do that with two regular monitors, the middle window would be split across the bezel.

    *When I say 1080p, I really mean the aspect ratio. My monitor is effectively a double-width 1440p monitor, but with the display scaling I use, the space is effectively 1080p.

  • I just share one window at a time. I put the meeting on one half and the window I want to share on the other, which makes it 16:9 and works perfectly for what I need to share.

  • I wonder how relevant this is to Go (which is what I work in these days), at least for simple data retrieval services. I can see how transforming code to a functional style could improve clarity, but Go pretty much completely eliminates the need to worry about threads. I can write IO-bound code and be confident that Go will shuffle my goroutines between existing threads and create new OS threads as the existing ones are blocked by syscalls (there's a quick sketch of what I mean below). Though I suppose to achieve high performance I may need to start thinking about that more carefully.

    On the other hand, the other major component of the system I'm working on is responsible for executing business logic. It's probably too late to adopt a reactive programming approach, but it does seem like a more interesting problem than reactive programming for a data retrieval service.
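
    The kind of IO-bound code I mean, as a toy sketch (the URLs and fetchOne are made up for illustration):

    ```go
    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    // fetchOne stands in for any blocking IO call.
    func fetchOne(url string) (int, error) {
        resp, err := http.Get(url)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    func main() {
        urls := []string{"https://example.com/a", "https://example.com/b"}
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            // Each goroutine blocks on the network; the runtime parks it and
            // shuffles it between OS threads as needed. No thread management here.
            go func(u string) {
                defer wg.Done()
                status, err := fetchOne(u)
                fmt.Println(u, status, err)
            }(u)
        }
        wg.Wait()
    }
    ```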

    User-provided content (a post using custom emojis) caused havoc when processed (doesn’t matter if on the server or on the client). This is a lack of sanitization of user-provided data.

    100%. Always act as though user provided content is malicious.
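
    Lemmy isn't written in Go, but the principle is the same in any stack. For instance, Go's html/template escapes interpolated values by default, so hostile content renders inert (toy example, not Lemmy's actual code):

    ```go
    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        // The template engine escapes the interpolated value, so user content
        // can't smuggle markup or script into the rendered page.
        tmpl := template.Must(template.New("post").Parse("<div>{{.}}</div>"))
        userContent := `<img src=x onerror="alert(1)">` // hypothetical malicious input
        tmpl.Execute(os.Stdout, userContent)
        // Prints: <div>&lt;img src=x onerror=&#34;alert(1)&#34;&gt;</div>
    }
    ```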

    JavaScript (TypeScript) has access to cookies (and thus the JWT). This should be handled by the web browser, not JS.

    Uh... what? JavaScript is a client-side language (unless you're using Node.js, which Lemmy is not). Which means JavaScript runs in the browser. And JavaScript having access to cookies is just a basic part of how web browsers work. Lemmy can't do anything to prevent that.

    How did the attacker get those JWTs? JavaScript sent them to him? The web browser sent them to him when requesting resources from his server? This is a lack of site isolation; one web page should not have access to other domains, requesting data from them or sending data to them.

    Again, Lemmy can't do anything about that. Once a vulnerability lets an attacker inject arbitrary JS into the site, there's nothing Lemmy can do to prevent that JS from making requests.

    Then, if they want to administer something, they should log in using a separate username + password on a separate log-in form and be shown a completely different web page

    On the backend you'd still have a single system, which kind of defeats the purpose. Unless you're proposing a completely independent backend? Because that would be a massive PITA to build and would drastically increase the system's complexity and reduce maintainability.

  • Be my guest 😊

  • Maybe I'm misunderstanding what "dependency injection" means. When I hear "dependency injection" I think of a DI framework such as Unity (the .NET container, not the game engine), so I thought "using DI" meant using one of those frameworks.

  • grumble. I dabbled in Scala a few years back and I am really grumpy every time I remember that Go doesn't have sum types, pattern matching, or the closed union of types you can build with a sealed abstract class in Scala. I loved that last one and used the heck out of it. I would love to have a compiler-enforced guarantee that a set of types was closed and could not be extended.

  • One of the reasons I love Go is that it makes it very easy to collect profiles and locate hot spots.

    The part that seems weird to me is that these articles are presented as if this is a tool that all developers should have in their tool belt, but in 10 years of professional development I have never been in a situation where that kind of optimization would be applicable. Most optimizations I've done come down to: I wrote it quickly and 'lazy' the first time, but it turned out to be a hot spot, so now I need to put in the time to write it better. And most of the remaining cases are solved by avoiding doing work more than once. I can't recall a single time when a micro-optimization would have helped, except in college when I was working with microcontrollers.

  • I thought it might be helpful for optimizing cryptographic code, but it hadn't occurred to me that it would prevent side-channel leaks.
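
    A toy example of the branch-free comparison idea, using Go's standard library (illustrative, not from any real codebase):

    ```go
    package main

    import (
        "crypto/subtle"
        "fmt"
    )

    // checkToken compares a supplied token against the expected one without an
    // early exit, so the running time doesn't leak how many leading bytes matched.
    func checkToken(supplied, expected []byte) bool {
        return subtle.ConstantTimeCompare(supplied, expected) == 1
    }

    func main() {
        fmt.Println(checkToken([]byte("secret"), []byte("secret"))) // true
        fmt.Println(checkToken([]byte("guess!"), []byte("secret"))) // false
    }
    ```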

  • If you're writing data processing code, there are real advantages to avoiding branches, and it's especially helpful for SIMD/vectorization, such as with AVX instructions or code for a GPU (e.g. shaders). My question is not about whether it's helpful - it definitely is in the right circumstances - but about how often those circumstances occur.

  • Do you recall what the presentation was called? I built a pipelined packet processing system (for debugging packets sent over an RF channel) which sounds like a fairly representative example of what you're talking about, but it's not obvious to me how to naturally extend that to other types of projects.

  • One of my problems is that I've gotten so practiced at reading code that my standards for "this is readable, it doesn't need much commenting" are much lower than those of the other developers I work with. I've had to recalibrate from "Will I be able to understand this six months from now?" to "Will I need to explain this in the review?"

  • Makes sense. The only GPU programming I've ever done was a few simple shaders for a toy project.

  • Senior engineers have learned through hard-won experience that writing code is the ultimate diminishing return.

    I feel seen. That entire section is absolute gold.

  • I am all aboard the code readability train. The more readable code is, the more understandable and therefore debuggable and maintainable it is. I will absolutely advocate for any change that increases readability unless it hurts performance in a way that actually matters. I generally try to avoid nesting ifs and loops since deeply nested expressions tend to be awful to debug.

    This article has had a significant influence on my programming style since I read it (many years ago). Specifically this part (sketched in Go below the quote):

    Don't indent and indent and indent for the main flow of the method. This is huge. Most people learn the exact opposite way from what's really proper — they test for a correct condition, and if it's true, they continue with the real code inside the "if".

    What you should really do is write "if" statements that check for improper conditions, and if you find them, bail. This cleans your code immensely, in two important ways: (a) the main, normal execution path is all at the top level, so if the programmer is just trying to get a feel for the routine, all she needs to read is the top level statements, instead of trying to trace through indention levels figuring out what the "normal" case is, and (b) it puts the "bail" code right next to the correctness check, which is good because the "bail" code is usually very short and belongs with the correctness check.

    When you plan out a method in your head, you're thinking, "I should do blank, and if blank fails I bail, but if not I go on to do foo, and if foo fails I should bail, but if not I should do bar, and if that fails I should bail, otherwise I succeed," but the way most people write it is, "I should do blank, and if that's good I should do foo, and if that's good I should do bar, but if blank was bad I should bail, and if foo was bad I should bail, and if bar was bad I should bail, otherwise I succeed." You've spread your thinking out: why are we mentioning blank again after we went on to foo and bar? We're SO DONE with blank. It's SO two statements ago.
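
    In Go terms, the difference looks roughly like this (Request is a made-up type, just to show the shape):

    ```go
    package main

    import "errors"

    // Request is a stand-in type for illustration.
    type Request struct{ valid bool }

    func (r *Request) Valid() bool { return r.valid }
    func (r *Request) Save() error { return nil }

    // processNested: the happy path drifts right, and each "bail" ends up
    // far from the check that triggered it.
    func processNested(req *Request) error {
        if req != nil {
            if req.Valid() {
                return req.Save()
            } else {
                return errors.New("invalid request")
            }
        } else {
            return errors.New("nil request")
        }
    }

    // process: check for improper conditions and bail immediately; the main,
    // normal execution path reads straight down at the top level.
    func process(req *Request) error {
        if req == nil {
            return errors.New("nil request")
        }
        if !req.Valid() {
            return errors.New("invalid request")
        }
        return req.Save()
    }

    func main() {
        _ = processNested(&Request{valid: true})
        _ = process(&Request{valid: true})
    }
    ```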

  • Code readability is often way more important

    This. 100% this. The only thing more important than readability is whether it actually works. If you can't read it, you can't maintain it. The only exception is throwaway scripts I'm only going to use a few times. My problem is that what I find readable and what the other developers find readable are not the same.

    I’d say, being able to identify bottlenecks is what really matters, because it’s what will eventually lead you to the hot loop you’ll want to optimize.

    I love Go. I can modify a program to activate the built-in profiler, or throw the code in a benchmark function and use the toolchain to profile it, then have it render a flame graph that shows me exactly where the CPU is spending its time and/or what calls are allocating (sketch below). It makes it so easy (most of the time) to identify bottlenecks.
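
    For anyone who hasn't tried it, the whole workflow is roughly this (lives in a _test.go file; hotPath and BenchmarkHotPath are made-up names):

    ```go
    package mypkg

    import "testing"

    // hotPath stands in for the code you suspect is the bottleneck.
    func hotPath() {
        s := 0
        for i := 0; i < 1000; i++ {
            s += i
        }
        _ = s
    }

    func BenchmarkHotPath(b *testing.B) {
        for i := 0; i < b.N; i++ {
            hotPath()
        }
    }

    // Then:
    //   go test -bench=BenchmarkHotPath -cpuprofile=cpu.out
    //   go tool pprof -http=:8080 cpu.out
    // and pick Flame Graph from the pprof web UI's View menu.
    ```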

  • If you want your code to run on the GPU, the complete viability of your code depends on it.

    Because of the performance improvements from vectorization, and the fact that GPUs are particularly well suited to that? Or are GPUs particularly bad at branches?

    it is only one of the many micro-optimization techniques you can do to take a few nanoseconds from an inner loop.

    How often do a few nanoseconds in the inner loop matter?

    The thing to keep in mind is that there is no such thing as “average developer”. Computing is way too diverse for it.

    Looking at all the software out there, the vast majority of it is games, apps, and websites. Applications where performance is critical, such as control systems, operating systems, databases, numerical analysis, etc., are relatively rare by comparison. So, statistically speaking, the majority of developers must be working on the apps side (which is what I mean by an "average developer"). In my experience working on apps there are exceedingly few times where micro-optimizations matter (as in things like assembly and/or branchless programming, as opposed to macro-optimizations such as avoiding unnecessary looping/nesting/etc.; see the sketch below).

    Edit: I can imagine it might matter a lot more for games, such as in shaders or physics calculations. I've never worked on a game so my knowledge of that kind of work is rather lacking.
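
    For concreteness, this is the sort of branchless micro-optimization I have in mind (a classic trick; note the overflow caveat):

    ```go
    package main

    import "fmt"

    // minBranch is the obvious version; the compiler may emit a conditional jump.
    func minBranch(a, b int32) int32 {
        if a < b {
            return a
        }
        return b
    }

    // minBranchless computes the same result with arithmetic only.
    // d >> 31 is all ones when a < b (d negative) and all zeros otherwise,
    // so the mask selects either d or 0 to add to b.
    // Caveat: assumes a-b doesn't overflow int32.
    func minBranchless(a, b int32) int32 {
        d := a - b
        return b + (d & (d >> 31))
    }

    func main() {
        fmt.Println(minBranch(3, 7), minBranchless(3, 7)) // 3 3
        fmt.Println(minBranch(9, 2), minBranchless(9, 2)) // 2 2
    }
    ```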

  • What does Go’s simplicity have to do with dependency injection?

    In my experience, following Go's philosophy of simple solutions eliminates the need for complex solutions such as dependency injection.

    How do you unit test your extremely complex projects if your business logic carries the additional responsibility of creating objects?

    I write modular code that accepts interfaces so I can test the components I want to test (sketch at the end of this comment). The vast majority of object creation happens at initialization time, not in the business logic. For the projects I've worked on, that would be true with or without DI - I don't see how that's relevant.

    perhaps your “extremely complex projects” wouldn’t be so extremely complex if you practiced dependency injection?

    When the CTO says, "Make it distributed and sharded," I do what I'm told, but that is an intrinsically complex problem. The complexity is in the overall behavior of the system. If you zoom in to the individual execution units, the business logic is relatively simple. But the behavior of the system as a whole is rather complex, and DI isn't going to change that.

    Edit: I was interpreting "using DI" to mean using a DI framework such as Unity, and I would be happy to never need one of those frameworks ever again.
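
    To be concrete about "accepts interfaces" (all made-up names, but this is the shape, and no framework is involved):

    ```go
    package main

    import "fmt"

    // Store is the dependency the business logic needs, expressed as an interface.
    type Store interface {
        Get(id string) (string, error)
    }

    // Service receives its dependencies at construction time.
    type Service struct {
        store Store
    }

    func NewService(s Store) *Service { return &Service{store: s} }

    func (s *Service) Describe(id string) (string, error) {
        v, err := s.store.Get(id)
        if err != nil {
            return "", err
        }
        return "item: " + v, nil
    }

    // fakeStore is a hand-written test double; no DI container required.
    type fakeStore struct{ data map[string]string }

    func (f fakeStore) Get(id string) (string, error) { return f.data[id], nil }

    func main() {
        // All the wiring happens here, at initialization time.
        svc := NewService(fakeStore{data: map[string]string{"42": "widget"}})
        out, _ := svc.Describe("42")
        fmt.Println(out) // item: widget
    }
    ```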