
  • The article is not talking about async processing. It’s talking about the process scheduler and thread blocking.

    No, not really.

    The article doesn't even cover process scheduling at all. The whole point of the article, which is immediately obvious to anyone who has ever worked on a GUI, is about what code runs in event handlers and how doing too much in them has a noticeable detrimental impact on user experience (i.e., it blocks the main thread).

    It's also obvious to anyone who ever worked on a GUI that you free the main thread of these problems by refactoring the application to run some or all code in a problematic handler asynchronously.
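
    To make that concrete, here's a rough sketch of the kind of refactoring I mean, in Python/Tkinter only because it's short (the button, the polling interval, and slow_work are made up for illustration): the click handler stays tiny, the slow work moves to a background thread, and the UI thread only ever polls for the result.

    ```python
    import queue
    import threading
    import time
    import tkinter as tk

    results = queue.Queue()

    def slow_work():
        """Hypothetical stand-in for whatever the handler grew into (network, disk, ...)."""
        time.sleep(3)
        return "report ready"

    def on_click():
        # Runs on the UI thread: keep it cheap, hand the work off and schedule a poll.
        threading.Thread(target=lambda: results.put(slow_work()), daemon=True).start()
        root.after(100, poll_result)

    def poll_result():
        # Checks for the result periodically without ever blocking the UI thread.
        try:
            label.config(text=results.get_nowait())
        except queue.Empty:
            root.after(100, poll_result)

    root = tk.Tk()
    label = tk.Label(root, text="idle")
    label.pack()
    tk.Button(root, text="Generate report", command=on_click).pack()
    root.mainloop()
    ```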

  • The problem with the article is that it’s confusing hard realtime and low latency requirements. Most UIs do not require hard realtime, even soft realtime is a nice to have and users will tolerate some latency.

    I don't think that's a valid take from the article.

    The whole point of the article is that if a handler from a GUI application runs for too long then the application will noticeably block and degrade the user experience.

    The real-time mindset is critical for staying aware of this failure mode: handlers should have a time budget (compute, waiting for IO, etc.), beyond which the user experience degrades.

    The whole point is that GUI applications, just like real-time applications, must be designed with these execution budgets in mind, and once the budgets are no longer met the application needs to be redesigned to avoid these issues.

  • Interesting viewpoint, but I think the applications aren’t at fault: The operating system should ensure that the user has control of the computer at all times.

    The whole point is that the OS does ensure the user has control of the computer, at least as far as a time-sharing system goes. The problem is that the user (or the software they run) often runs code on the main thread that blocks it.

    The real-time mentality towards constraints on how much can be executed by a handler is critical to avoid these issues, and it should drive the decision on whether to keep running a handler on the main thread or get it to trigger an async call.

  • This is a ridiculous definition of “real-time”. To accomplish this you’d need to subvert the kernal’s scheduler (...)

    You missed the whole point of the article.

    It makes no sense to read the article and arrive at the conclusion that "I need to subvert the kernel's scheduler". The whole point of the real-time analogy is that handlers have a hard constraint on the time budget allocated to execute each one. If your handler is within budget then it's perfectly reasonable to run it on the UI thread. If your handler exceeds the budget then the user experience starts to suffer, and you need to rework your implementation to run stuff async.

    Keep in mind that each mouse click/hover/move/sneeze triggers a handler in GUI applications. Clicking a button can trigger small, instant changes like updating the UI, or it can kick off an expensive operation. Some handlers start off doing small UI updates but end up accumulating more and more work that ultimately starts to become noticeable.

  • Maybe I’m dumb because I’m a backend dev, but if we can’t offload these tasks to Async tasks and we need to block the main thread, why can’t we just put up a loading screen?

    That's not the problem. These tasks can be offloaded to async. The underlying issue, and the reason why I think this is an outstanding article, is that running code on the UI thread straight from handlers is easy and more often than not goes perfectly unnoticed. Only when the execution time of those handlers grows do these blocking calls become an issue.

    There's a gray area between "obviously we need to make these calls async" and "obviously we can run this on the main thread", and that's where the real-time mental model and techniques pay off (there's a rough sketch of what I mean at the end of this comment).

    “Don’t turn off the application we are saving” games have been doing this for a decade and you can’t convince me that your enterprise application is heavier than a AAA game.

    You're missing the whole point.

    The point is that running handlers on the main thread leads to far simpler code and, depending on the use case, is adopted in scenarios where the approach works well for 99.9% of conceivable use cases. But then the software starts to be modified and gain features, and some of those code paths start to do more things and take longer to run. When this happens, the 99.9% starts to shrink and some main-thread blockages become more and more noticeable.

    The article does a very good job of laying out the mental model that needs to be in place to keep this slippery slope from becoming a problem.
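
    To make the budget idea concrete, here's a rough sketch of the kind of instrumentation it points towards (the decorator and the 16 ms figure are my own assumptions, not the article's): wrap handlers, and let the measurements tell you which ones have outgrown the main thread.

    ```python
    import functools
    import logging
    import time

    # Assumed budget: roughly one frame at 60 Hz, a common target for UI handlers.
    HANDLER_BUDGET_SECONDS = 0.016

    def within_budget(handler):
        """Warn whenever a UI handler blows past its time budget."""
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                if elapsed > HANDLER_BUDGET_SECONDS:
                    logging.warning(
                        "%s took %.1f ms, over the %.0f ms budget: consider going async",
                        handler.__name__, elapsed * 1000, HANDLER_BUDGET_SECONDS * 1000)
        return wrapper

    @within_budget
    def on_save_clicked():
        ...  # whatever the handler currently does on the main thread
    ```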

  • Now you mention it, maybe people with a better interview/offer rate are also doing a better job on not wasting time with positions they aren’t a great fit?

    Yes, that's indeed a key factor. However, I should stress that some of these adverts simply do not have a position to fill. Recruiters post these ads, they go through candidates, sometimes they even line up some interviews, but ultimately they do not have a job to fill at all. In my experience this is the norm with staffing agencies.

    If you're applying to positions posted by staffing and recruiting agencies, I believe you should set your expectations accordingly: expect nothing to come of them and, even though you should still do your best when applying, take a fire-and-forget approach to them.

  • I think my interview/offer ratio is somewhere below 1%.

    Keep your spirits up, and be mindful that there are tons of job adverts out there that don't actually have a real position to fill, and are only used by recruiters and consulting companies to harvest CVs and meet their internal quotas. 1% sounds about right.

  • My workplace has the opposite problem.

    I don't see that as a problem. The job description of an engineer includes dealing with new problems and onboarding onto new things. So you never wrote a parser and now you have to. That's ok, just go ahead and start from the ground up.

    What I perceive as a major problem is the utter disconnect between what companies test for, and what companies actually do.

    It makes no sense at all to evaluate candidates on obscure trivia questions no one will ever care about or use, let alone reject an applicant because they mixed up O(n log n) with O(log n). It matters more if you know a good, healthy answer to tabs vs spaces.

    I was once part of a hiring loop where we assessed a candidate, and one fellow assessor wanted to outright reject him because he failed to answer one of his questions on data structures. Everyone in the meeting voted in favour of the hire, except that one guy. When we asked him to reconsider his position, he threw a tantrum because he felt it was a matter of principle that we not hire a candidate who didn't know the trivia. The hiring manager asked whether that info was actually important and, if he felt it was, whether it could be looked up online in a matter of minutes, but the assessor tried to argue that that was beside the point.

    Data structures and algorithms trivia feels like ladder pulling.

  • your 2 decades of experience mean much more than memorizing algorithms, you know how to produce real value

    That's all fine and dandy, but the HR recruiter who can't tell git from grunt needs to check boxes in the skills-assessment section, and if you don't ace coding challenges you are as good as dead to them.

  • A few years ago I was in a hiring loop where four interviewers grilled me on a number of subjects, including algorithms and data structures. They asked me all sorts of trivia questions on the asymptotic complexity of this and that algorithm, how to implement this and that, how to traverse stuff, etc. As luck would have it, I was hired. I spent a few years working for that company and not a single time did I ever implement a data structure or write any sort of iterator. Not once.

    I did spend months writing stuff in an internal wiki.

    I can't help but feel that this bullshit leetcode data-structure and computational-complexity trivia is just a convoluted form of ladder-pulling.

  • Is it just me, or is this a nightmare implementation in terms of software maintenance and operations? Each state transition requires a database trip, state machine transitions are determined at runtime with no simple way to reproduce them locally, and if the state machine database goes down the system simply cannot work.

    What exactly is the selling point of this approach?

  • If GitHub changes terms of use to pay for basic stuff, or starts breaking compatibility or adding egregious bugs, I would start looking for alternatives.

    A while ago I had all my personal projects on GitLab. I was a GitLab fanboy and advocated it everywhere, to the point that I convinced the project manager at a previous job to migrate the team's projects to it and pay for GitLab Ultimate. Without going into details, that goodwill ended the moment I stumbled upon a regression introduced by GitLab which affected my personal projects, and their customer support essentially said the issue was a won't-fix for me but was fixed for premium customers. I simply unblocked myself by moving all my projects to GitHub, disabling GitLab CI/CD, shutting down my GitLab runners, and onboarding onto a mix of GitHub Actions and CircleCI. I could have stuck with GitLab, but why bother?

    I would do the same to GitHub if I experienced anything remotely similar.

  • This was the first time I saw someone refer to Python's type hints as a performance tool. Up until now, I had only seen type hints referenced as a way to help static code analysis tools verify that objects and invocations comply with contracts.

    I guess that having additional info at hand about how some calls are expected to be made is helpful for driving optimization steps, but PEP 484 is clear in stating that its goal is to help type checkers, and that code generation using type hints might be limited to some contexts.

    This sounds like yet another example supporting the old law of interfaces, where all it takes for an interface to be abused is for it to exist.
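
    For contrast, this is the use case PEP 484 is actually about. A minimal sketch (the function is made up): the hints change nothing at runtime, they just let a checker like mypy reject a bad call before it ships.

    ```python
    from decimal import Decimal

    def apply_discount(price: Decimal, percent: float) -> Decimal:
        # The hints are contract information for the reader and the type checker;
        # they do not change how this function executes.
        return price * Decimal(1 - percent / 100)

    ok = apply_discount(Decimal("19.99"), 15)
    # bad = apply_discount("19.99", 15)   # mypy flags this: argument 1 has type "str", expected "Decimal"
    ```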

  • Microservices are not just about scaling and performance but it is a core advantage. To say they have “nothing” to do with it is outright false.

    They have nothing to do with performance. You can improve performance with vertical scaling, which nowadays has a very high ceiling.

    It's not a coincidence that startups are advised against going with microservices until they grow considerably. The growth in question is organizational, not traffic.

    Microservices are about modular design and decoupling units of code from each other.

    Yes, but you're failing to understand that the bottleneck that's fixed by peeling off microservices is the human one faced by project managers. In fact, being forced to pay the microservices tax can and often does add performance penalties.

    The problem with this approach is that switching from vertical to horizontal is extremely hard if you didn’t plan for it from the start.

    I think you're missing the point that, more often than not, you ain't going to need it.

    In the rare cases you do, microservices is not a magic wand that fixes problems. The system requires far more architectural changes that go well beyond getting a process to run somewhere else.

  • Using linting to prevent coupling between modules can give you some of the benefits of micro services without going all in.

    My point was that modularizing an application and decoupling components does not, by any means, give any of the benefits of microservices.

    The benefits of microservices are organizational and operational independence. Where do you see coupling between components playing a role in either of those traits?
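
    On the linting point, here's a rough sketch of the kind of boundary check I assume was meant (the package names and the rule are hypothetical): a small script you can run in CI that fails the build when one package imports another package's internals. It buys you decoupling, not independent deployment, ownership, or scaling.

    ```python
    import ast
    import pathlib
    import sys

    # Hypothetical rule: code under myapp/billing must not import from myapp.orders.
    FORBIDDEN = {"myapp.billing": "myapp.orders"}

    def imported_modules(path: pathlib.Path):
        """Yield the dotted names imported by a Python source file."""
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                yield from (alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                yield node.module

    violations = []
    for source_pkg, banned_pkg in FORBIDDEN.items():
        pkg_dir = pathlib.Path(source_pkg.replace(".", "/"))
        for py_file in pkg_dir.rglob("*.py"):
            for mod in imported_modules(py_file):
                if mod == banned_pkg or mod.startswith(banned_pkg + "."):
                    violations.append(f"{py_file}: imports {mod}")

    if violations:
        print("\n".join(violations))
        sys.exit(1)
    ```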

  • Microservices are great if you have enough traffic that you can get an efficiency gain by independently scaling all those services. But if you aren’t deploying onto thousands of servers just to handle traffic volume, you probably don’t need 'em.

    I don't think that's a valid take. Microservices have nothing to do with scaling or performance, at least for 99% of the cases out there. Microservices are a project- and team-management strategy. It's a way to peel specific areas of responsibility out of a large project, put together a team dedicated to that area, and allow it to fully own and be accountable for the whole development life cycle, especially operations.

    Being able to horizontally scale a service is far lower in the priority queue, and is only required once you exhaust the ability to scale vertically.