Posts
0
Comments
716
Joined
2 yr. ago

  • Honestly I cannot tell. Nutrition studies are very hard to do, and you often see contradictory studies released all the time. I don't think the media should be reporting on these individual studies. Better to report on meta-analyses that take a much more holistic view of the subject.

  • analysed data spanning 36 years from over 200,000 adults enrolled in the Nurses’ Health Studies I and II and the Health Professionals Follow-up Study.

    So they looked at some data and found that the more heme you ate, the higher your chance of developing type 2 diabetes. That sounds a lot like correlation, not causation. What was the rest of their diet like? Did the higher heme eaters also consume more in general? I could not find a non-paywalled copy of the paper (though I did not look that hard, tbh), so I cannot tell how good the study was, but from what I read I would not put any stock in these results.

    Like so many other dietary studies that make the headlines, they really don't paint the whole picture, or even a useful part of it.

  • You might want to check out the Sovol SV08 that came out not too long ago. It is based on the Voron 2.4 but comes mostly prebuilt with mass-manufactured parts to lower the cost, while still holding true to the open nature of the Voron. And it is quite cheap for what it is, at $579 for the base open-frame model.

    The only downside is that it does not have an automatic filament changer, though I don't really care for that feature.

  • No, and IMO you should not. It causes extra load on the mirrors: if everyone did it every day, that would be a significant load for very little gain on the end user's side. Mirror speeds don't change often enough to need to worry about always being on the absolute fastest one.

    Especially if you are updating in the background anyway, what does it matter if you end up on a slightly slower mirror for a bit?

  • This does not work for everyone. A lot of people will try to switch, then find one tool they are used to that they can no longer use, are not used to the alternatives, and feel frustrated when trying to use those for real work. Then they get pissed off at Linux and switch back to Windows.

    This advice is more for people who are thinking about Linux but have some professional, semi-professional, or hobby workflow on their computers that they need to be productive in. It can be very hard for them to switch both OS and tooling at once, with no way to fall back to what they know when they need to.

    You will find most people don't rely on such tools; they can do a quick check and decide to switch straight away. But for the rest, following this advice can make transitioning to Linux easier.

    We need to stop pretending that switching away from tools that you rely on, and have spent decades becoming proficient in, is a trivial task for everyone.

  • While being accurate about it is hard outside the lab, it is very easy to tell which side of the balance you are on and by how much. Just count the calories you consume and weigh yourself regularly. If you are gaining weight, you are eating too much, so lower the number of calories you consume; if you are losing weight, you are eating less than you are burning. If your weight remains stable, you are in balance. And the rate at which you gain or lose tells you how large a surplus or deficit you are in.

    Over time you can then change the amount you eat by a few hundred calories at a time, and you will see yourself move around that balance point. If anything else changes but your intake remains the same, then it is likely your calories out that has changed. But even if you are technically digesting less for some reason, it does not really matter: the bigger and easier lever you have to pull is the number of calories you are eating.

    Because you are measuring the final output - your weight - it is fairly accurate over time and helps you track actual progress. There is no need to get super accurate about how much your body absorbs, excretes, or burns at rest or through exercise. Those might be important in the lab, but in real life the far easier-to-measure pair - your weight and how much you are eating - is what matters.
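    The feedback loop described above can be sketched in a few lines. This is a minimal illustration, not from the comment itself; the ~7700 kcal per kg figure is a common rule of thumb, and the function name is invented:

    ```python
    # Estimate average daily calorie surplus/deficit from a weight trend.
    # Assumes roughly 7700 kcal stored per kg of body weight (rule of thumb).
    KCAL_PER_KG = 7700

    def daily_balance(start_kg: float, end_kg: float, days: int) -> float:
        """Average daily surplus (positive) or deficit (negative) in kcal."""
        return (end_kg - start_kg) * KCAL_PER_KG / days

    # Losing 0.5 kg over two weeks is roughly a 275 kcal/day deficit.
    print(daily_balance(80.0, 79.5, 14))  # -275.0
    ```

    The point of measuring this way is that the noisy inputs (absorption, exercise burn) cancel out: the scale already reflects their net effect.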

  • You cannot accurately measure just that. But measuring the calories you eat is a good enough approximation to help you control how much you eat. You can estimate your calories out from your weight: if you are gaining weight, you are eating (and absorbing) more than you are using; if you are losing weight, you are eating less. And that is the most important part.

    There is also water weight to account for, but realistically there is an upper and lower bound to that, and over several weeks you can get a pretty good idea of what level of calorie intake leads to weight gain or loss. And if that changes for any reason, you can adjust the amount you eat accordingly. We are just looking at averages over time and the overall balance here; there is no need to be super accurate about exactly what you absorb or exactly what you burned during exercise. I never even measure calories burnt, as it does not add much value over just weighing yourself over time.

  • While strictly speaking calories in < calories out is the most important factor in weight loss, what you eat can drastically affect your hunger and thus indirectly affect your calories in - or at least make sticking to lower calories far more miserable. Eating more protein can help, but I also find blander food helps as well, which typically means avoiding sugars and sweet foods. You are going to find it extremely hard to stick to a calorie limit eating nothing but Oreos and Hostess snack cakes.

  • It also tells you nothing about the data flow, or the data at all. What do these functions do? What data do they act on? It is all just pure side effects, and they could be doing anything at all. That is far from what I consider clean.

    "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man-Month (1975)
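    A contrived illustration of the complaint (invented names, not from the post under discussion): in the side-effect style, call sites reveal nothing about the data, while the explicit style makes the data flow visible in the signature:

    ```python
    # Side-effect style: the functions mutate hidden shared state, so a call
    # site like add_item_effectful("apple", 1.5) tells you nothing about
    # what data moved where.
    state = {"items": [], "total": 0.0}

    def add_item_effectful(name: str, price: float) -> None:
        state["items"].append(name)
        state["total"] += price

    # Explicit style: inputs and outputs appear in the signature, so the
    # data flow is visible at every call site.
    def add_item(items: list[str], total: float,
                 name: str, price: float) -> tuple[list[str], float]:
        return items + [name], total + price

    items, total = add_item([], 0.0, "apple", 1.5)
    print(items, total)  # ['apple'] 1.5
    ```

    The Brooks quote makes the same point: once you can see the data (the "tables"), the functions acting on it largely explain themselves.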

  • This is an absolutely terrible post :/ I cannot believe he thinks that is a good argument at all. It basically boils down to:

    Here is a new feature modern languages are starting to adopt.

    You might think that is a good thing. Lists various reasonable reasons it might be a good thing.

    The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job?

    And then he moves on to the next thing in the same pattern. He lists loads of reasonable reasons you might want the feature, gives no reasons you would not want it, but phrases everything in a way that leads you into thinking you are wrong to want these new features - while his only actual arguments are reasons you do want them...

    It makes no sense.

  • But no one actually pulls that rule through, do they?

    They do, though. Loads of people new to programming read that book and create unreadable messes of a code base that follow all of his advice. I have lost count of the number of times I have inlined functions, removed layers of abstraction, and generally duplicated code to get an actual understanding of what is going on, only to realize there is a vastly simpler way to structure the code that I could not see until all the layers and indirection were removed. Then I refactor again to remove the redundant code and apply more useful layers that actually make sense.

    And that is the problem we have with his book. People who need it pick up as many bad habits as good ones, leading to an overall decline in their code quality. It takes years of experience to recognize the bad bits and ignore them. So overall his book is a net negative on the programming world. Not all his advice is bad, but if you can tell which parts are, then you likely don't need his advice.

    But on layers of abstraction specifically, he takes this too far, largely because of the 4-line limit he imposes. There is a good level of abstraction, and I generally find that beyond 2 or 3 levels I start to lose any sense of what is going on. He always seems to jump on abstraction as soon as he can, but I find that waiting a while and abstracting only when you need to leads to fewer and vastly better layers of abstraction overall.

    And adding more abstraction does not help the problem of people doing too many things inside a function - they just move it to sub-functions rather than extracting the behavior for the caller to deal with. I have never seen him give advice on when that is appropriate; he only keeps the functionality of the original function the same and moves the logic into a nested function instead, and that just covers up the issue of the function doing too much.

  • But I feel like Uncle Bob is leaning more towards that if a task requires 100 different operations, then that should be split into 100 different functions. One operation is one thing. Maybe not exactly, but that's the kind of vibe I get from his examples.

    Oh yeah, he definitely does. He even says so in other advice, like that a function should be about 1-3 lines. Which IMO is just insane advice.

  • I kinda disagree with him on this point. I wouldn’t necessarily limit to one thing, but I think functions should preferably be minimal.

    I do actually agree with him on that point - functions should do one thing. Though I generally disagree on what "one thing" is. It is a uselessly vague term, and he tends to lean towards the smallest possible thing a thing can be. I tend to lean towards larger ideas: a function should do one thing, even if that one thing needs hundreds of lines to do. Where the line of what one thing is sits, though, is a very hard idea to define.

    IMO a better metric is that code that changes together should live together. No jumping around lots of functions or files when you need to change something. And split things out when the idea of what they do can be isolated and abstracted away without taking away from the meaning of the original function. Rather than trying to split everything up into 1-3 line functions - that is terrible advice.
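    A toy sketch of the "changes together, lives together" idea (the invoice example and names are invented): one cohesive function that does one thing, formatting an invoice, even though it is well past any 3-line limit. Splitting each step into its own tiny function would scatter logic that always changes together across the file:

    ```python
    # One cohesive function: every line here changes together whenever the
    # invoice format changes, so keeping them in one place beats splitting
    # each step into a 1-3 line sub-function.
    def format_invoice(items: list[tuple[str, int, float]]) -> str:
        """Format (name, quantity, unit_price) rows plus a total line."""
        lines = []
        total = 0.0
        for name, qty, price in items:
            subtotal = qty * price
            total += subtotal
            lines.append(f"{name:<10} x{qty:>2}  {subtotal:>8.2f}")
        lines.append(f"{'TOTAL':<15} {total:>8.2f}")
        return "\n".join(lines)

    print(format_invoice([("widget", 2, 9.99), ("gadget", 1, 24.50)]))
    ```

    If, say, tax handling later became a genuinely separable idea, that would be the moment to extract it - not before.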

  • One of the big problems with this is that there is no global shortcut for copying and pasting. At most there are the primary, secondary, and selection buffers, which applications can copy into and paste from. But each application handles copy/paste functionality in its own way - or rather, they typically let the toolkit they are using deal with it.

    Klipper, the KDE clipboard manager, comes close to what you want. You can Ctrl+C multiple times and it stores a history of everything; then you can assign shortcuts to cycle through the entries and paste them out again. All it is really doing is reading the clipboard when it changes, saving that value, then essentially copying from that saved list when you cycle through it. So it would be possible to write something similar that has specific numbered buffers - but you would still be saving/loading via the primary clipboard, which applications can then paste from, rather than creating a new set of shortcuts to paste from each buffer directly into an application.
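    The core of such a tool is tiny. A minimal sketch of the Klipper-style history with numbered slots (class and method names invented; a real tool would additionally hook the system clipboard, which is omitted here):

    ```python
    # Sketch of a clipboard history with numbered slots. No real clipboard
    # access: a real implementation would call on_copy() whenever the system
    # clipboard changes and write paste()'s result back into it.
    class ClipboardHistory:
        def __init__(self, slots: int = 10):
            self.slots = slots
            self.history: list[str] = []

        def on_copy(self, text: str) -> None:
            """Record a new clipboard value; newest entry is slot 0."""
            self.history.insert(0, text)
            del self.history[self.slots:]  # drop entries beyond the limit

        def paste(self, slot: int = 0) -> str:
            """Return the entry in the given numbered slot."""
            return self.history[slot]

    hist = ClipboardHistory()
    hist.on_copy("first")
    hist.on_copy("second")
    print(hist.paste(0))  # second
    print(hist.paste(1))  # first
    ```

    Binding a shortcut per slot would then just copy `paste(n)` back into the primary clipboard, exactly as described above.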

  • I don't think multiple streaming platforms are a problem. The problem is exclusivity. I don't want to pay for every subscription service to watch popular things. I want to watch any show I want on one platform that I choose, much like I do for music. But no, with TV shows everyone has their own walled garden of exclusives. Fuck that.

  • I did not find it very hard to relearn the difference in bindings. Quite a lot are actually the same, but one big difference is the selection-then-action model rather than vim's action-then-motion. And IMO I find the Helix way nicer after using it for a while. I never really lost the ability to use vim either, and I can switch between them with relative ease. Though I do miss the Helix way of working when I am forced to use vi input on things.

  • Not in any bothersome way. But if you really want to reinstall often, that is valid as well. You can very easily script the Arch install process to get back to the same state far more easily than on other distros. Or you can just mass-uninstall everything except base and some core packages and reinstall the things you care about, which almost gives you a fresh install minus any unmanaged files (which are mostly in home and you likely want to keep anyway).

  • Any major Linux distribution has a system for building packages

    I have built packages for all the major ones. Non-Arch packages are a pain to build, and I never want to do it again. In contrast, Arch PKGBUILDs are quite simple and straightforward.

    How can you trust code with root access to the system just because it’s in the aur repository?

    Because you can view the source that builds the package before building it. A quick check that there are no weird commands in the build script and that it pulls from the upstream repo is normally good enough. Though I bet most people work on the "if others trust it, then so do I" mentality. Overall, due to its relative popularity, it is not a big target for threats compared to things like NPM - which loads of people trust blindly as well, typically on vastly more important machines and servers.

  • I’ve tried NeoVim but I really don’t want to waste time doing text-based configuration and messing with extensions just to get some basic features working.

    This is the reason I switched to Helix. It comes out of the box with what you would expect, so you don't need tens of plugins and hundreds of lines of config to get a baseline experience.