If you want the most effective starting point, upgrade your router or primary switch to 2.5G or 10G. That way there's a low likelihood of a bottleneck when your devices are communicating internally with each other, and you'll have headroom downstream. Then, if you have multiple switches, prioritize the highest bandwidth between them over upgrading your devices beyond 1Gb NICs.
I use an OPNsense router with 2.5G NICs, and then I have a 2.5G switch and a 1Gb switch that are connected via a 10Gb fiber link. (This is all enterprise Ubiquiti-level stuff.) But all my downstream devices and switches are 1Gb, and I have no plans to upgrade them intentionally. Internally, I won't see bottlenecks often, since communication between the switches and the router is enough to support multiple devices spamming 1Gb/s file transfers simultaneously (not that it'll happen often lol).
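The back-of-the-envelope math behind that claim, as a quick sketch (the ~5% protocol-overhead factor is just an assumption for illustration):

```typescript
// Back-of-the-envelope check: how many devices can saturate their 1 Gb/s
// NICs across the inter-switch trunk before the trunk becomes the bottleneck.
const trunkGbps = 10;        // the 10Gb fiber link between the switches
const deviceNicGbps = 1;     // typical downstream device NIC
const overheadFactor = 0.95; // assumed ~5% lost to framing/protocol overhead

const usableGbps = trunkGbps * overheadFactor;
const concurrentTransfers = Math.floor(usableGbps / deviceNicGbps);

console.log(`~${concurrentTransfers} simultaneous 1 Gb/s transfers fit on the trunk`);
// prints: ~9 simultaneous 1 Gb/s transfers fit on the trunk
```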
So my WiFi access points, primary NAS, and my most-used PC are all on 2.5G connections, since they could benefit. But everything else is on 1Gb, since that switch has way more ports and was way cheaper.
I'm not against buying 10G switches for future-proofing, but they're still too costly for my needs, and it's unlikely I'll wish I had 10G any time soon, especially when it comes to internet. Even if I upgrade beyond 1Gb fiber service, it'd be so that multiple devices can fully saturate a 1Gb NIC at the same time, not so one computer can speed test at 3Gb/s+.
That said, what I have is overkill, but I enjoy some homelab tinkering.
Most likely fiber. Around here the ADSL provider (CenturyLink) was the first to start deploying fiber, to compete with cable that could do 1Gb (which is, of course, highly variable and full of asterisks because of coax quality, whether neighbors' modems support a stronger mesh, possible MoCA interference, etc.).
More recently they rebranded the fiber service as a different company... probably to get rid of the DSL name's stigma.
The EU is going to slap them silly for making it an easier process? Instead of needing to know a magic key combo to bypass the security check, it now acts just like any other security permission (for example, screen recording) and sends you to Settings. This is absolutely better than it was, and the article is clickbait.
This is a million times better than the current behavior. With Homebrew, you often have to re-approve apps that brew ended up reinstalling in a way that removed the previous exception.
Now, worst case, it's the same process as any other app permission, and best case, it can be adjusted via the terminal.
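For the terminal route: one common approach is clearing the quarantine extended attribute that Gatekeeper keys off of. A minimal Node/TypeScript sketch, assuming the quarantine flag is the relevant bit for the app in question (the app path is hypothetical):

```typescript
import { spawnSync } from "node:child_process";

// Drop the quarantine flag Gatekeeper checks on a downloaded/reinstalled app.
// The path below is illustrative; point it at whatever brew reinstalled.
const appPath = "/Applications/Example.app";

const result = spawnSync("xattr", ["-d", "com.apple.quarantine", appPath], {
  encoding: "utf8",
});

if (result.status === 0) {
  console.log(`Cleared quarantine on ${appPath}`);
} else {
  // xattr exits non-zero if the attribute was already absent (or on other errors).
  console.error(result.stderr.trim());
}
```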
The Thunderbird team has talked in the past about bringing Mozilla Send back, more as a feature for large files embedded in Thunderbird emails. Hence some of the work that's been happening off and on by Mozilla themselves in the original project, some of which has been merged into the project this post is about.
I moved one of my computers to EndeavourOS, but one is still on Manjaro, and the contrast is kinda hilarious. The Manjaro machine always gets funky after updates, it struggles to deal with sleep and hibernation, and it feels slow even when it's like 4x as powerful as my EndeavourOS machine.
It's been in testing for a bit. It got bumped from the last major desktop Thunderbird release, and since Thunderbird releases used to be synced to Firefox ESR releases, it's been a slow turnaround to get their schedule moving faster.
I haven't used it, but I've heard good things about it, and I enjoy many of their other apps. I too just use the Google Pixel blocker because it's way too good.
Anyone know if this works fine alongside work profiles (seems like it doesn't conflict) and if apps like Insular/Shelter are going to be able to configure the private space?
Edit: partially answered my own question. I set up a private space just fine (though I can't access it since I'm not using the OS launcher), and it doesn't conflict with the work profile.
You also don't need to create another Google account for it.
You could have argued about the rationality of the effort before any major technological or cultural breakthrough in all of human history, yet the reason those breakthroughs happened was that humans acted irrationally, by accident or on purpose.
There are many tipping points, and we don't always know if we've hit one yet or not. The drastic increase in sea temperature over the last two years is possibly a tipping point we've passed, especially since the warmer the water is, the less CO2 it's able to absorb. An AMOC shutdown (if it happens) is possibly another tipping point, which would only feed back into warming waters.
Honestly, the permafrost melt is more likely to be the KO punch after one or more other tipping points accelerate it.
Meh. I got one for free from a job's tech allowance, and it's never really a problem. It charges fast, and the OS warns you early enough to plug it in on a lunch break or at the end of the day, well before it runs out. Not ideal, but def not garbage. Honestly, I get more frustrated with noise-canceling headphones and keyboards dying at inconvenient times than I ever do with the mouse.
I don't use it daily, but it is a pretty good mouse for my laptop bag. The charge holds a long time for once-a-week use. If it's dead when I get to the coffee shop or wherever I'm working, it'll be usable in 15 minutes or less anyway. It also works nicely with Linux out of the box, which is a rarity among Bluetooth mice (in my experience).
The other elephant in the room: on macOS, not having multitasking gestures is a serious drawback of any other mouse out there, so there is a reason people are willing to put up with the annoyance (if they ever get annoyed in the first place).
Idk for sure, but if Excalidraw uses canvas, then there are a lot more possible machine/OS-specific problems that can come up. Web browser features that hand tasks off to the GPU have gotten a lot better over recent years, but there are still oddities, like the max shader limits for a specific browser/OS/GPU combo, that'll lead to some funny behavior.
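For a sense of where those limits live, here's a small browser-side TypeScript sketch that queries a few standard WebGL caps; which values actually trip up a given canvas app is the machine-specific part:

```typescript
// Query per-GPU/driver limits that canvas-heavy web apps can trip over.
// The same page can see very different numbers on two machines, which is
// where browser/OS/GPU-specific bugs tend to come from.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl");

if (!gl) {
  console.log("No WebGL: the app falls back to 2D canvas or breaks outright");
} else {
  console.log("MAX_TEXTURE_SIZE:", gl.getParameter(gl.MAX_TEXTURE_SIZE));
  console.log("MAX_RENDERBUFFER_SIZE:", gl.getParameter(gl.MAX_RENDERBUFFER_SIZE));
  console.log("MAX_FRAGMENT_UNIFORM_VECTORS:", gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));
  console.log("MAX_VARYING_VECTORS:", gl.getParameter(gl.MAX_VARYING_VECTORS));
}
```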
I mean, the issues were present and widely reported for several months before Intel even acknowledged the problems. And it wasn't just media reporting this; it was also game server hosts who were seeing massive deployments failing at unprecedented rates. Even those customers, who get way better support than the average home user, were largely dismissed by Intel for a long time. It then took several more months to ship a fix. The widespread nature of the issues points to a major failure on the company's part to properly QA and to ensure their partners were given accurate guidance for motherboard specs. Even so, the patches only prevent further harm to the processor; they don't fix any damage that has already been incurred, which could amount to years off of its lifespan. Sure, they are doing an extended warranty, but that's still a band-aid.
I agree it doesn't mean one should completely dismiss the possibility of buying an Intel chip, but it certainly doesn't inspire confidence.
Even if this was all an oversight or process failure, it still looks a lot like Intel as a whole decided to ship chips with a nice-looking set of numbers, despite those numbers being achieved at the cost of a degraded lifespan.
Yup, unfortunately there is still a premium on Linux-specific manufacturers. You get better driver support, but without scale, things will stay a bit pricey.
The other long-term solution is postmarketOS, but there aren't a ton of Android tablets out there right now that can really compete on the drawing front, so the supported devices aren't very compelling.
Maybe. But it's a bit pointless if only a subset of the user base goes through the effort.