Posts: 59 · Comments: 168 · Joined: 2 yr. ago

  • Thanks! So much for my reading skills/attention span 😂

  • Which Debian version is it based on?

  • Meta (lemm.ee) @lemm.ee

    Would you be interested in opting-in to lemmy-meter?

    Blahaj Lemmy Meta @lemmy.blahaj.zone

    Would you be interested in opting-in to lemmy-meter?

  • RE Go: Others have already mentioned the right way, though I'd personally prefer ~/opt/go over what was suggested.


    RE Perl: To instruct Perl to install to another directory, for example to ~/opt/perl5, put the following lines somewhere in your bash init files.

    export PERL5LIB="$HOME/opt/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
    export PERL_LOCAL_LIB_ROOT="$HOME/opt/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"
    export PERL_MB_OPT="--install_base \"$HOME/opt/perl5\""
    export PERL_MM_OPT="INSTALL_BASE=$HOME/opt/perl5"
    export PATH="$HOME/opt/perl5/bin${PATH:+:${PATH}}"

    Though you need to re-install the Perl packages you had previously installed.
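    As a side note on the `${VAR:+...}` pattern used in those exports: the `:$VAR` suffix is emitted only when the variable is already set and non-empty, which avoids leaving a dangling `:` in the path. A minimal sketch (variable names made up for illustration):

```shell
# The ${VAR:+...} expansion emits the ":$VAR" suffix only when VAR is
# set and non-empty, so an unset variable doesn't leave a trailing colon.
unset DEMO_PATH
NEW="/home/user/opt/perl5/bin${DEMO_PATH:+:${DEMO_PATH}}"
echo "$NEW"   # /home/user/opt/perl5/bin

DEMO_PATH="/usr/bin"
NEW="/home/user/opt/perl5/bin${DEMO_PATH:+:${DEMO_PATH}}"
echo "$NEW"   # /home/user/opt/perl5/bin:/usr/bin
```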

  • lemmy.ml meta @lemmy.ml

    Would you be interested in opting-in to lemmy-meter?

  • First off, I was ready to close the tab at the slightest suggestion of using Velocity as a metric. That didn't happen 🙂


    I like the idea that metrics should be contained and sustainable. Though I don't agree w/ the suggested metrics.

    In general, it seems they are all designed around the process and not the product. In particular, there's no mention of the "value unlocked" in each sprint: it's an important one for an Agile team as it holds Product accountable for understanding what the $$$ value of the team's effort is.

    The suggested set, to my mind, is formed around the idea of a feature-factory line and its efficiency (assuming that's even measurable). It leaves out the "meaning" of what the team achieves w/ that efficiency.

    My 2 cents.


    Good read nonetheless 👍 Got me thinking about this intriguing topic after a few years.

  • This is fantastic! 👏

    I use Perl one-liners for record and text processing a lot, and this will definitely be something I keep coming back to - I've already learned a trick from "Context Matching" (9) 🙂

  • That sounds like a great starting point!

    🗣Thinking out loud here...

    Say, if a crate implements the AutomatedContentFlagger interface, it would show up on the admin page as an "Automated Filter" and the admin could dis/enable it on demand. That way we can have more filters than just CSAM using the same interface.

  • I couldn't agree more 😂

    Except that, what the author uses is pretty much standard in the Go ecosystem, which is, yes, a shame.

    To my knowledge, the only framework which does it quite seamlessly is Spring Boot which, w/ sane and well thought out defaults, gets the tracing done w/o the programmer writing a single line of code to do tracing-related tasks.

    That said, even Spring's solution is pretty heavy-weight compared to what comes OOTB w/ BEAM.

  • I've got to admit that your points about the presentation skills of the author are all correct! Perhaps the reason I was able to relate to the material and ignore those flaws is that it's a topic I've been actively struggling w/ for the past few years 😅

    That said, I'm still happy that this wasn't a YouTube video or we'd be having this conversation in the comments section (if ever!) 😂


    To your point and @krnpnk@feddit.de's RE embedded systems:

    That's absolutely true that such a mindset is probably not going to work in an embedded environment. The author, w/o explicitly mentioning it anywhere, is talking about distributed systems where you've got plenty of resources, stable network connectivity, and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.

    That's indeed an expensive setup, esp for embedded software.


    The narrow scope and the stylistic problem aside, I believe the author's view is correct, if a bit radical.
    One of the major pain points of troubleshooting distributed systems is sifting through the logs produced by different services and teams, each w/ a different take on what the important bits of information in a log message are.

    It gets extremely hairy when you've got a non-linear lifeline for a request (ie branches of execution). And it's even worse when you need to keep your logs free of any type of information which could potentially identify a customer.

    The article and the conversation here got me thinking that maybe a combo of tracing and structured logging can help simplify investigations.

  • Thanks for sharing your insights.


    Thinking out loud here...

    In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline:

    • In the case of a linear (single branch) timeline you can always "query" by a request ID and order by the timestamps and that's pretty much what tracing will do too.
    • Things, however, get complicated when you've a timeline w/ multiple branches.
      For example, consider the following relatively simple diagram.
      Reconstructing the causality and the join/fork relations between the execution nodes is almost impossible using traditional logs, whereas a tracing solution will turn this into a nice visual w/ all the spans and sub-spans.
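    To make the single-branch case above concrete, here's a rough sketch (the log format and request ID are made up) of "querying" traditional flat logs by request ID and ordering by timestamp:

```shell
# Hypothetical flat logs: "<ISO-8601 timestamp> <request-id> <message>".
# Filtering by request ID and sorting by timestamp reconstructs a
# single-branch timeline, much like a trace view would.
printf '%s\n' \
  '2023-10-05T10:00:02Z req-42 calling downstream service' \
  '2023-10-05T10:00:01Z req-42 request received' \
  '2023-10-05T10:00:03Z req-99 unrelated request' \
  '2023-10-05T10:00:04Z req-42 response sent' > app.log

# ISO-8601 timestamps sort chronologically under plain lexical sort.
grep 'req-42' app.log | sort
```

    This falls apart as soon as the request forks into parallel branches, which is exactly the multi-branch case above.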

    That said, logs do shine when things go wrong; when you start your investigation by using a stacktrace in the logs as a clue. That (stacktrace) is something that I'm not sure a tracing solution will be able to provide.


    they should complement each other

    Yes! You nailed it 💯

    Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analyse the mutations.

  • I'm not sure how this got cross-posted! I most certainly didn't do it 🤷‍♂️

  • General Programming Discussion @lemmy.ml

    Tracing: structured logging, but better in every way

    Technology @lemmy.ml

    Kamal v1.0.0 - Deploy web apps anywhere

  • That was my case until I discovered that GNU tar has got a pretty decent online manual - it's way better written than the manpage. I rarely forget the options nowadays even though I don't use tar that frequently.

  • This is quite intriguing. But DHH has left so many details out (at least in that post) as pointed out by @breadsmasher@lemmy.world - it makes it difficult to relate to.

    On the other hand, like DHH said, one's mileage may vary: it's, in many ways, a case-by-case analysis that companies should do.

    I know many businesses shrink the Ops team and hire less experienced Ops people to save $$$, only to forward those saved $$$ to cloud providers. I can only assume DHH's team is comprised of a bunch of experienced, well-paid Ops people who can pull such feats off.

    Nonetheless, looking forward to, hopefully, a follow up post that lays out some more details. Pray share if you come across it 🙏

  • I think I understand where RMS was coming from RE "recursive variables". As I wrote in my blog:

    Recursive variables are quite powerful as they introduce a pinch of imperative programming into the otherwise totally declarative nature of a Makefile.

    They extend the capabilities of Make quite substantially. But like any other powerful tool, one needs to use them sparingly and responsibly, or end up w/ a complex and hard-to-debug Makefile.

    In my experience, most of the time I can avoid using recursive variables and instead lay out the rules and prerequisites in a way that achieves the same result. However, occasionally I have to resort to them, and I'm thankful that RMS didn't win and they exist in GNU Make today 😅 IMO purist solutions have a tendency to turn out impractical.
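    To illustrate the distinction for anyone following along: a simple (`:=`) variable is expanded once, at definition time, while a recursive (`=`) variable is re-expanded at every reference - which is exactly the pinch of imperative behaviour mentioned above. A purely illustrative sketch (variable names made up):

```make
# Recursive (=): the right-hand side is re-expanded on every reference,
# so GREETING picks up the later change to WHO.
WHO = world
GREETING = hello, $(WHO)

# Simple (:=): expanded once, right here, so SNAPSHOT keeps "world".
SNAPSHOT := hello, $(WHO)

WHO = Make

all:
	@echo '$(GREETING)'   # hello, Make
	@echo '$(SNAPSHOT)'   # hello, world
```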

  • Uh, I'm not sure I understand what you mean.

  • TBH I use whatever build tool is the better fit for the job, be it Gradle, SBT or Rebar.

    But for some (presumably subjective) reason, I like GNU Make quite a lot. And whenever I get the chance I use it - esp since it's somehow ubiquitous nowadays w/ all the Linux containers/VMs everywhere and Homebrew on Mac machines.

  • Linux @lemmy.ml

    Variables in GNU Make: Simple and Recursive

    General Programming Discussion @lemmy.ml

    Variables in GNU Make: Simple and Recursive

  • I see.

    So what do you think would help w/ this particular challenge? What kinds of tools/facilities would help counter that?


    Off the top of my head, do you think

    • The sign up process should be more rigorous?
    • The first couple of posts/comments by new users should be verified by the mods?
    • Mods should be notified of posts/comments w/ poor score?

    cc @PrettyFlyForAFatGuy@lemmy.ml

  • I'm nitpicking, but can you properly quote your code?

  • I'll just quote my comment on a similar post from earlier 😅

    A bit too long for my brain but nonetheless it is written in plain English, conveys the message very clearly and is definitely a very good read on the topic. Thanks for sharing.

  • Open Source @lemmy.ml

    RIP Thien-Thi Nguyen (ttn) 😑

    Programmer Humor @lemmy.ml

    Vagrant Public Networks

    General Programming Discussion @lemmy.ml

    Determine if given lists intersect

    Linux @lemmy.ml

    Using Make and cURL to measure Lemmy's performance

    General Programming Discussion @lemmy.ml

    Using Make and cURL to measure Lemmy's performance

    General Programming Discussion @lemmy.ml

    github-license-observer: A Firefox add-on to annotate GitHub repos w/ information about their license

    Firefox @lemmy.ml

    github-license-observer: A Firefox add-on to annotate GitHub repos w/ information about their license

    Technology @lemmy.ml

    LinkedIn's new content strategy

    lemmy.ml meta @lemmy.ml

    Recent momentary outages

    Linux @lemmy.ml

    Gnome Online Accounts (Google)

    Linux @lemmy.ml

    #.mk - A Matrix room dedicated to Make

    General Programming Discussion @lemmy.ml

    #.mk - A Matrix room dedicated to Make

    General Programming Discussion @lemmy.ml

    Quickly benchmark commands using Perl