
Posts: 65 · Comments: 357 · Joined: 2 yr. ago

  • …it’ll still be 35 users/month

    I'm not sure you're aware of how irrelevant this is. It could mean as little as a single user opening the community page daily, or 30 different users accidentally landing on the community page from the main page just because an article showed up in their feed.

    To frame the absurdity of this argument: I moderate !nodejs@programming.dev , which also registered around 30 users/month over the past month, and that community is also dead.

  • No, you’re lying by using a different definition of “dead”.

    Now you're being silly and acting defensively. I don't need to do anything for the !dotnetmaui@programming.dev group to be dead or remain dead, as it was expected to be. Anyone can take a look at it and see that if they filter out your personal inorganic traffic, which is already of dubious relevance, nothing remains.

    You can stay up all night arguing otherwise, but it is what it is.

    It's ok if you feel that it's your personal mission to generate traffic for a particular channel on a lemmy instance. Just don't try to pretend it's something that's relevant for anyone beyond yourself.

  • If you want to use a fourth language, first you need approval to train every single employee in that language.

    I've worked at a company where each and every engineer was free to pick whatever they felt was the best tool for the job.

    It was an utter mess of unmaintainable code, and everyone wasted time just trying to keep projects from dying of bitrot.

    Training people is not a problem. You also do not have to train everyone to create a single project in a particular framework or programming language. What you do have to factor into your analysis is the time wasted managing multiple frameworks/runtimes/deployments/development environments, and the lack of progress in your team's skill set if everyone turns into a one-man silo.

  • None of the communities you have mentioned are dead. In fact according to the January stats, all the communities you mentioned have above average usage.

    I'm sorry, you're trying to blatantly lie with statistics.

    "Above average" means nothing if the majority of communities is already dead. You're just arguing that some communities are more dead, which is pointless.

    You're also lying about what traffic is being posted to !dotnetmaui@programming.dev. Every post from the last two weeks comes from a single user account: https://programming.dev/u/SmartmanApps .

    To make this even more pathetic, the bulk of the posts going into !dotnetmaui@programming.dev were made by your account after I pointed out that the community was dead, and had been dead on arrival.

    You're not refuting the point: you're proving the point that the community is dead.

  • I mod the MAUI Community, which was created shortly before Xmas, and I made some announcements then (like on Mastodon, Daily Dew Drop, etc.) and some people joined then.

    I'm late to the game, but I should point out that the MAUI community is a textbook example of how communities should definitely not be created, and it was clear from the start that it was born dead.

    The C# community barely gets a single post per week. The .NET MAUI community is even more of a niche, and in spite of all the non-organic posts it's already dead.

    Even though you were fully aware of this, and it was repeatedly pointed out to you that a niche of a niche won't take off, you ignored the feedback and went ahead with creating the community anyway. Which is, of course, dead.

    Lemmy in general and programming.dev in particular already have groups with traction. I hope that moving forward the group creation process is based on peeling specialized topics from existing communities. Otherwise the MAUI fiasco will repeat itself and we'll end up with an even longer tail of dead communities vulnerable to spam and takeovers by bad actors.

  • See, you’re not claiming that processes are important, you’re claiming that your process is important and

    No, I'm claiming that processes are important.

    It's important that stumbling upon a tangentially related bug, or even a linting issue, does not block your work, force you to fork it, or force you to work around it. It's important that you can just post a small commit, continue with what you were doing, and only deal with it at the very end.

    It's also important that you can work on your feature branch as you please, iterate on tests and fixes as you see fit, and leave cleanup commits to the very end so that your PR contributes a clean commit history instead of reflecting your iterations.

    It's important that you can do whatever work you feel is needed without having to constrain and adapt it around a requirement that every change you push must already be in a squeaky-clean state, with no iterations.

    It's important that you can work on tasks as well as cleanup commits, and not be forced to push them all in a single PR because you are incapable of editing your local commit history.
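
    To make this concrete, here's a minimal sketch of that flow with stock git (the branch, hashes, and commit messages are made up); one way to park cleanups for the very end is fixup commits plus an autosquash rebase:

        # mid-task: a tangential fix, committed and immediately forgotten about
        git commit -m "fix: handle empty config file"

        # later: a cleanup aimed at an earlier commit on this branch
        git commit --fixup=<sha-of-that-commit>

        # at the very end, tidy the whole branch in one pass
        git rebase -i --autosquash origin/main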

    It's not about my workflow. It's about the happy path of a very mundane experience as a professional software developer, especially in a team that relies on a repository's commit history to audit changes and pinpoint regressions.

    This is stuff anyone who works in a professional team can tell you right away. Yet you talk about it as if it were a completely alien concept. Why is that? Is everyone around you wrong, and your limited, superficial experience dictates the norm?

    Yet you talk about autism.

  • Oh, okay. I’ve never encountered a situation where I needed that bug fixed for the task but it shouldn’t be fixed as part of the task;

    So you never stumbled upon bugs while doing work. That's ok, but others do. Those who stumble upon bugs see the value of being able to sort out local commits with little to no effort.

    Also, some teams do care about building their work on atomic commits, because they understand the problems caused by mixing unrelated work into the same PR, especially when auditing changes to track where a regression was introduced. You might feel it's ok to post a PR that does multiple things, like bumping a package version, linting unrelated code, fixing an issue, and touching up comments in an unrelated package, but others know those are four separate changes and should be pushed as four separate PRs.
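
    Peeling one of those changes out into its own PR is also cheap. A rough sketch, with made-up branch and commit names:

        # put the version bump on its own branch, cut straight from main
        git switch -c chore/bump-package origin/main
        git cherry-pick <sha-of-bump-commit>
        git push -u origin chore/bump-package
        # then drop that commit from the feature branch during its final interactive rebase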

    if they’re touching the same functionality like that I really don’t see the need for two PRs.

    That's ok, not everyone works with QA teams. Once you grow to a scale where you have people whose job is to verify that a bug is fixed by following specific end-to-end tests, and to detect where a regression was introduced, you'll understand the value of first landing tests that verify the bug is fixed and only afterwards changing the user-facing behavior. For those with free-for-all commits, where "fixes bug" and "update" show up multiple times in their commit history, paying attention to how a commit history is put together is hardly a concern.

  • And those who don’t immediately insult

    Pointing out that someone claims not to care about processes, when process is the critical aspect of any professional work, is hardly what I'd call an insult.

    Just go ahead and say you don't use a tool and thus don't feel the need to learn it. Claiming that a tool's basic functionality is "a solution in search of a problem" is as good as announcing your obliviousness, and that you're discussing stuff you hardly know anything about.

  • That sounds like a solution in desperate need for a problem.

    It's ok if you've never done any professional software development work. Those who do go through these workflows on a daily basis. Some people don't even understand why version control systems are used or useful, and that is perfectly ok. Those who do this work have to understand how to use their tools, and those who don't can go about their lives without ever bothering with this stuff.

    See, I don't think you understood the example. The commits build upon each other (bugs are fixed while you work on the task, and to work on your task you need those bugs fixed), and reordering commits not only takes no time at all, it's also the very last thing you do, and you only have to do it once.

    It's ok if you don't feel the need to change your personal workflow.

    Nevertheless I'm not sure you understood the example, so I'm not sure you fully grasp the differences.

    The whole point of my example was to point out the fact that, thanks to interactive rebase, you do not even need to switch branches to work on multiple unrelated PRs. You can just keep going by doing small commits to your local feature branch and keep doing what you're doing. In the end all you need to do is simply reorder, squash, and even drop commits to put together multiple PRs from commits that are built upon each other.

    Simple, and straight to the point.

  • If you need anything more complex than cherrypick, you already screwed up big time.

    I think this is a clueless comment. You can use Git to greatly improve your development workflow if you dare to go beyond the naive pull/commit/push routine.

    Take interactive rebase, for example. It lets you do very powerful things such as changing the order of commits in a local branch and merging/squashing contiguous commits. This unlocks workflows such as peeling bugfix and cleanup commits off your local feature branch without ever switching branches. I'm talking about doing something like:

    a) you're working on your task;
    b) you notice a bug that needs fixing;
    c) you fix the bug and commit your fix;
    d) you continue working on your task;
    e) you notice a typo in your bugfix code, so you post a fixup commit;
    f) you post a few more commits to finish your task;
    g) you notice your bugfix commit didn't have the right formatting, so you post a couple of commits to lint the code in both your bugfix and your task.

    When you feel you're done, you use interactive rebase to put everything together.

    a) you reorder your commits to move the bugfix commit to the top of your local branch, followed by the typo fixup commit and the linter commit;
    b) you mark both the typo and linter commits as fixups so they get squashed into the bugfix commit;
    c) you post a PR containing the single, clean bugfix commit;
    d) you finally post a PR for your task.

    Notice that thanks to git's interactive rebase you did not have to break out of your workflow or do any sort of context switch to push multiple PRs. You just worked on the things you had to work on, and at the very end you reorganized the commit history in your local branch to push your work.
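
    For concreteness, the edited rebase todo for the scenario above could look roughly like this (hashes, branch names, and messages are made up):

        # todo list for: git rebase -i origin/main
        pick  5d6e7f8 fix: null check in config parser
        fixup 2c3d4e5 fix typo in bugfix
        fixup 3e4f5a6 lint bugfix and task code
        pick  1a2b3c4 task: start feature work
        pick  9a8b7c6 task: wire feature into the UI
        pick  6f7a8b9 task: finish feature work

        # afterwards, point a branch at the squashed bugfix commit and open its PR
        git branch fix/null-check <sha-of-squashed-fix>
        git push -u origin fix/null-check

    The two fixup lines get folded into the bugfix commit, leaving one clean commit to open the bugfix PR from, with the task commits sitting untouched on top of it.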

    Is this what you call "screwed up big time"?

  • You are making an extreme assumption, and it also sounds like you’ve misread what I wrote. The “attempts” I’m talking about are studies (formal and informal) to measure the root causes of bugs, not the C or C++ projects themselves.

    I think you're talking past the point I've made.

    The point I've made is that the bulk of these attempts don't even consider onboarding basic static analysis tools onto the projects. Do you agree?

    If you read the rest of the studies you've quoted, you'd notice that some of them quite literally report the results of onboarding a single analysis tool onto C or C++ projects. The very first study in your list is literally the result of onboarding projects onto Hardware-assisted AddressSanitizer, after acknowledging that they hadn't onboarded AddressSanitizer for performance reasons. The second study in your list reports the results of enabling LLVM's bounds sanitizer.

    Yet your personal claim about "the lack of memory safety" in languages like C or C++ is inexplicably based on skipping very basic and simple steps, like onboarding any of these analysis tools, which is trivial to do. Your assertion doesn't cover that simple step. Why is that?
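
    For reference, this is roughly what "onboarding" the tools those studies describe amounts to; flags shown for clang, and target support (e.g. HWASan being limited to AArch64) is the usual caveat:

        # AddressSanitizer: catches use-after-free, double-free, buffer overflows at runtime
        clang -g -O1 -fsanitize=address main.c -o app

        # Hardware-assisted AddressSanitizer: same class of checks, far lower overhead
        clang -g -O1 -fsanitize=hwaddress main.c -o app

        # just LLVM's bounds checks
        clang -g -fsanitize=bounds main.c -o app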

    Again, I think this comparison is disingenuous. You put zero effort into addressing a whole family of errors and then proceed to claim that this family of errors cannot be addressed, even though nowadays there's a myriad of ways to tackle them. That doesn't sound like an honest comparison to me.

  • C syntax is simple, yes, but C semantics are not; there have been numerous attempts to quantify what percentage of C and C++ software bugs and/or security vulnerabilities are due to the lack of memory safety in these languages, and (...)

    ...and the bulk of these attempts don't even consider onboarding basic static analysis tools to projects.

    I think this comparison is disingenuous. Rust has static code analysis checks built into the compiler, while C compilers don't. Yet you can still add analysis tools to C projects, and in my experience they do a pretty good job flagging everything from critical double-frees to newlines showing up where they shouldn't. How come these tools are kept out of the equation?
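
    For reference, wiring a stock analyzer into an existing C project is roughly this much work; the tools and flags below are just one example of what's available:

        # Clang static analyzer wrapped around the existing build
        scan-build make

        # or cppcheck pointed straight at the sources
        cppcheck --enable=warning,style src/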

  • C has always been (...)

    I think you tried too hard to see patterns where there are none.

    It's way simpler than you make it out to be: C was one of the very first programming languages put together. Its authors rushed to get a working compiler while using it to develop an operating system. In the 70s you did not have the benefit of leveraging half a century of UX, DX, or any X at all. The only X in the equation was the developers' own personal experience.

    Once C was made a reality, it stuck. Due to the importance of preserving backward compatibility, it stays mostly the same.

    Rust was different. Rust was created after the world had piled up science, technology, experience, and craftsmanship for half a century. Its authors had the benefit of a clean-slate approach and insight into what did and didn't work before. They had the advantage of a wealth of knowledge and insight that had already formed before they started.

    That's it.

  • It was supposed to be a better C without the bloat and madness of C++, right?

    D was sold as a better C++, back when C++ was stuck on C++98. To be more clear, D promised to be a C++ under active development. That was its selling point.

    In the meantime C++ received 2 or 3 major updates, and thus D lost any claim to relevance.