Please share your power optimizations to maximize battery life

I have a laptop with integrated Intel graphics and a desktop with Nvidia graphics. I have been using Wayland on the former since KDE Plasma 6; I have noticed some odd behaviors, but overall it has been fine. The latter, however, just boots to a black screen under Wayland. I have neither the time nor the desire to debug that right now, so I will adopt Wayland on that machine once it works with Nvidia to a reasonable degree of stability.
Yep, I dual boot on my laptop so that I can run certain programs for my schoolwork as well. I use rEFInd as my boot manager so that I can easily select one or the other on startup.
Blah blah blah blah blah...
tl;dr the author never actually gets to the point stated in the title about what the "problem" is with the direction of Linux and/or how knowing the history of UNIX would allegedly solve this. The author mainly goes off on a tangent listing out every UNIX and POSIX system in their history of UNIX.
If I understand correctly, the author sort of backs into the argument that, because certain Chinese distros like Huawei EulerOS and Inspur K/UX were UNIX-certified by the Open Group, Linux therefore is a UNIX and not merely UNIX-like. The author seems to be indirectly implying that all of Linux therefore needs to be made fully UNIX-compatible at a native level and not just via translation layers.
Towards the end, the author points out that Wayland doesn't comply with UNIX principles because the graphics stack does not follow the "everything is a file" principle, despite previously admitting that basically no graphics stack, X11 and macOS's included, has ever done this.
Help me out if I am missing something, but all of this fails to articulate why any of this is a "problem" which will lead to some kind of dead-end for Linux or why making all parts of Linux UNIX-compatible would be helpful or preferable. The author seems to assume out of hand that making systems UNIX-compatible is an end unto itself.
My particular testing was with an SSK SD300, which is roughly 500MB/s up and down. I have benchmarked this and confirmed it meets its rating.
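If you want to benchmark a stick yourself, something like this is one quick way to check sequential read speed (/dev/sdX is a placeholder for your stick's device node; confirm it with lsblk before running anything against a raw device):

```
# Read-only sequential test straight from the device; bypasses the page cache.
sudo dd if=/dev/sdX of=/dev/null bs=1M count=2048 iflag=direct status=progress

# Or a quick timed buffered-read test with hdparm:
sudo hdparm -t /dev/sdX
```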
I have thought about buying something like a Team Group C212 or Team Group Spark LED, which are rated at 1000MB/s. The 256GB version of the C212 is available on places like Newegg and Amazon for around $27 USD at time of writing, but they make variants as high as 1TB.
I have done some basic testing, and the speed of the USB stick you use does make a noticeable difference on the boot time of whatever you install on it.
If I recall correctly, a low-speed USB 2.0 stick took around 30-60 seconds to load (either time to the login screen or time to reach a blinking cursor for something like an Arch install disk). If this is something for occasional use, even that works perfectly fine.
Slightly faster USB 3 sticks in the 100MB/s range can be had for only around $5-15 USD and work significantly better, maybe 7-15 seconds. These usually have asymmetric read/write speeds, such as 100MB/s read and 20MB/s write, but for a boot disk the read speed is the primary factor.
Some high end flash drives can reach 500-1000MB/s and would load in only a few seconds. A high speed 256GB stick might cost $25-50, and a 1TB stick maybe $75-150.
A decent-quality 1GB/s USB 3 NVMe enclosure might cost $20-30, or $80-100 for a Thunderbolt enclosure in the 3GB/s range so long as your hardware supports it, plus another $50-100 for the 1TB NVMe drive itself. This would of course be the fastest option, but it is also bulkier than a simple flash drive, and I think you are at the point of diminishing returns in terms of performance per cost.
I would say it is up to you based on what you are willing to spend, how often you realistically intend to use it, and how much you care about the extra couple of seconds. For me, I don't use boot disks all that often, so an ordinary 100MB/s USB 3 stick is fine for my needs even if I have faster options available.
As others have mentioned, secondhand laptops and surplus business laptops are very affordable and probably better value for the money than a chromebook. My understanding is that drivers for things like fingerprint sensors, SD card readers, or oddball Wi-Fi chipsets can be issues to watch out for. But personally I don't care about the fingerprint sensor and only the Wi-Fi would be a major issue to me.
A couple years ago now I picked up a used Acer Swift with an 8th-gen Intel CPU and a dent in the back lid for something like $200 to use as my "throw in a backpack for travel" laptop, and it has been working great. In retrospect, I would have looked for something with 16GB of RAM or upgradeable RAM (8GB soldered to the motherboard, ugh), but aside from that minor gripe it has been a good experience.
The Arch installation tutorial I originally followed advised using LVM with separate root and home logical volumes. However, after some time my root volume started getting full, so I figured I would take 10GB off of my home volume and add it to the root one. Simple, right?
It turns out that lvreduce --size 10G volgroup0/lv_home
doesn't reduce the size by 10GB; it sets the absolute size to 10GB, and since I had far more than 10GB of data in that volume, it corrupted my entire system.
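For anyone who lands here later, the difference comes down to the sign prefix, roughly like this (shrinking a filesystem along with the LV, e.g. via --resizefs, is still a risky operation, so back up first):

```
# What I ran: sets lv_home to an ABSOLUTE size of 10G
lvreduce --size 10G volgroup0/lv_home

# What I meant: shrink lv_home BY 10G (note the minus sign),
# resizing the filesystem along with it
lvreduce --size -10G --resizefs volgroup0/lv_home
```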
There was a warning message, but it seems my past years of Windows use still have me trained to reflexively ignore dire warnings, and so I did it anyway.
Since then I have learned that I don't really do anything that needs LVM, nor do I see much benefit to separate root/home partitions for desktop Linux use, so I reinstalled my system without LVM the next time around. This is, to date, the first and only time I have irreparably broken my Linux install.
Not my preference personally, but cool.
I have been daily driving Linux for a couple years now after finally jumping ship on Windows. Here are a few of my thoughts:
- It is important to make the distinction between the distro and the desktop environment (DE), which is a big part of how the UI will look and feel. Many DEs, such as KDE Plasma, XFCE, and GNOME, are common across many distros, so I would do some research on which DE you like the look of. I personally have used KDE the most and that is what I prefer, but all of them are valid options.
- Coming from Windows, I would go into this with the mindset that you are learning a new skill. Depending on how advanced you are with Windows, you will find that some things in Linux are simply done differently from how they are in Windows, and you will get used to them over time. Understanding how the file system works with mount points rather than drive letters was probably a big one for me (see the short example after this list), but now that I have a grasp of it, it makes total sense and I really like it.
- It will also mean learning how to occasionally debug problems. As much as I would like to report that I've never had a problem, I have occasionally run into things which required a bit of setup at first or didn't "just work" right out of the box. I know that probably sounds scary, but it really isn't with the right mindset, and there are tons of resources online and people willing to help.
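For example, a few read-only commands that show how devices map to mount points, if you want to poke around:

```
lsblk -f          # list block devices, their filesystems, and where they are mounted
findmnt /         # show which device and filesystem are mounted at the root
cat /etc/fstab    # the table defining which devices get mounted where at boot
```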
In everything I have seen, there has been no way to turn it off fully (laptop with a GTX 1060). The NVIDIA X Server Settings tool shows no option for a power-saver mode, and even setting optimus-manager to integrated graphics only does not seem to have changed it. It seems to continuously idle at the minimum clock speed at around 5W of draw, according to programs like nvtop.
Do you have any recollection of the name, or a link? I have the NVIDIA X Server Settings GUI program, but I do not see any option to put the GPU into a power-save mode.
It depends on a few factors. The stock laptop experience with no power management software will likely result in poor battery life; you will need something like TLP, auto-cpufreq, or powertop to handle your laptop's power settings.
Second is the whole issue of dedicated GPUs and hybrid graphics, which can be a real pain point on Linux laptops. On my own laptop with a dGPU, I am reasonably certain that the dGPU simply never turns off. I have yet to figure out a working solution for this, so my battery life seems to be consistently worse than the Windows install dual-booted on the same machine.
I am not sure if we are discussing hibernation for encrypted systems only, and I do not know what special provisions are needed for that, but for anyone curious, here is what I do on my own (unencrypted) machine, per my own notes for setting up Arch with a swap file rather than a swap partition and rEFInd as the boot manager (the same kernel parameters could probably be used in GRUB too):
- Create a file at /etc/tmpfiles.d/hibernation_image_size.conf, e.g. with
sudo nano /etc/tmpfiles.d/hibernation_image_size.conf
and copy-paste the template from https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate. If you made your swap file large enough (~1.2x RAM size or greater), set the argument value to your amount of RAM in bytes, e.g. 32GB = 34359738368.
- After a reboot, you can verify this with
cat /sys/power/image_size
- Run
findmnt -no UUID -T /swapfile
to get the swap file's UUID, and
filefrag -v /swapfile | awk '$1=="0:" {print substr($4, 1, length($4)-2)}'
to get the resume offset.
- Go into your kernel parameters and add resume=UUID=### resume_offset=###, e.g. in /boot/refind_linux.conf (with the EFI partition unmounted).
- Go into /etc/mkinitcpio.conf and add the "resume" hook after "filesystems" and before "fsck".
- Run
mkinitcpio -p linux-zen
(or the equivalent for your kernel).
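For reference, the two config entries end up looking roughly like this; the RAM size, UUIDs, and offset below are placeholders, so substitute your own values:

```
# /etc/tmpfiles.d/hibernation_image_size.conf
# (a tmpfiles.d "w" entry writes the value at boot; 34359738368 bytes = 32GB of RAM)
w /sys/power/image_size - - - - 34359738368

# /boot/refind_linux.conf (one line per menu entry; UUIDs and offset are placeholders)
"Boot with hibernation" "root=UUID=<root-uuid> rw resume=UUID=<swapfile-fs-uuid> resume_offset=<offset>"
```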
I do not know what sort of power management software exists by default on Ubuntu, but for laptop use I would strongly recommend getting a power management package like TLP to configure power profiles for when your laptop is on battery and on charge. It can greatly improve battery life. Some alternatives like auto-cpufreq and powertop exist, but I have tried all three and found that TLP worked the best for me.
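As a rough sketch of what getting started with TLP looks like on Ubuntu (the package names are the Debian/Ubuntu ones, and the config keys below are from memory, so double-check them against the TLP documentation):

```
sudo apt install tlp tlp-rdw             # tlp-rdw adds radio-device switching on battery/AC events
sudo systemctl enable --now tlp.service
sudo tlp-stat -s                         # confirm TLP is running

# Optional tweaks go in /etc/tlp.conf (or a drop-in under /etc/tlp.d/), for example:
#   CPU_SCALING_GOVERNOR_ON_BAT=powersave
#   CPU_ENERGY_PERF_POLICY_ON_BAT=balance_power
```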
I could see that being the case for some things, at least where it is an older design being updated or value-engineered. Perhaps some sourcing person changes the LED part number on the AVL and forgets to check with engineering whether the resistor value that goes with it still gives a sane level of brightness.
Electrical engineer here who also does hobby projects. I'm with you. I think some of the reason may be that modern GaN-type green or blue LEDs are absurdly efficient, so only a couple mA of drive current is enough to make them insanely bright.
When I build LEDs into my projects, for a simple indicator light, I might run them at maybe only a tenth of a milliamp and still get ample brightness to tell whether it is on or not in a lit room. Giving them the full rated 10 or 20mA would be blindingly bright. I also usually design most things with a hard on/off switch so they can be turned all the way off when not in use.
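To put rough numbers on it: from a 3.3 V rail through a green LED with about a 2 V forward drop, a 13 kΩ series resistor gives roughly (3.3 − 2) V / 13 kΩ ≈ 0.1 mA, which is still plainly visible indoors. Those particular values are just illustrative, since the forward voltage and supply rail will vary.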
Among things I own, I also have two power strips with absurdly bright LEDs to indicate the surge protection; they light up my whole living room with the lights off. If I had to have something like that in my bedroom, I would probably open it up and disconnect the LEDs in some way, or maybe modify the resistor values to run at the lowest current I could get away with.
I feel like designers have lost sight of the fact that these lights are meant to be indicators only, i.e. a subtle indication of the status of something, not a way to light a room, and yet they default to driving them at full blast as if they were the super-dim older-gen LEDs from 20+ years ago.
As everyone else has said, Brother laser printers are the way. I have owned multiple. One which my family uses is about 10 years old now, another is about 5-6, and one I got at the start of the pandemic, so around 3.5 years old. Zero problems with any of them.
All the ones I have tried work fine on both Linux and Windows, work over Wi-Fi for both scanning and printing, and the toner and drum units last ages without needing to be replaced, unlike inkjet cartridges, which constantly need replacing or dry out if you don't use them often enough.
I am not sure what graphics you have, but I have an older-ish laptop with hybrid 10-series Nvidia graphics which do not fully power down even with TLP installed. I was finding that it continued to draw a continuous 14W even at idle. I installed envycontrol so that I can manually toggle hybrid graphics or force the use of integrated graphics. My battery life jumped from 2-3 hours to 4-5 hours after I did this, and unless I am gaming (which I rarely do on this laptop) I hardly ever need the dGPU.
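If you want to try the same thing, envycontrol usage is roughly this (going from memory, so check envycontrol --help or its README):

```
sudo envycontrol -s integrated   # force integrated graphics only (dGPU powered off)
sudo envycontrol -s hybrid       # switch back to hybrid/Optimus mode
envycontrol -q                   # query the current mode
# a reboot is needed after switching modes
```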
I also use TLP. I have tried auto-cpufreq and powertop, and I found TLP offered the most granular control and worked the best for my system/needs.