My computer is slow at compiling, esp. LLVM. If I were to buy a new computer, what components would I focus on to improve this?
Compilation is CPU bound and, depending on the language, mostly single-core per compilation unit (i.e. in LLVM that's roughly per file). Incremental compilations will probably only touch a file or two at a time, so the biggest benefit comes from higher single-core clock speed, not higher core count. So you want to focus on higher clock speed CPUs.
Also, a high-speed disk (NVMe, or at least a regular SATA SSD) gives you performance gains on larger codebases.
On Linux you can also use vmtouch to force-cache the project files in RAM. This speeds up the first compilation of the day. On repeated compilations, files read from disk would naturally be in the page cache already, so it wouldn't matter what drive you have.
I used this in the past when I had slow drives. I forced all the necessary system libs (my IDE, the JDK, etc.) and my project files into RAM at the start of the day, then took a 2-minute coffee break while it read all that stuff from the HDD. It sped up the workflow in general, at least at the start of the day.
It is not the same as a ramdisk: the normal Linux file cache writes changes back to disk in the background.
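A sketch of what that warm-up looks like, guarded in case vmtouch isn't installed (the demo directory is a stand-in for your real project tree):

```shell
# Warm a directory into the page cache before the first build of the day.
# DIR is a stand-in for your project tree (e.g. ~/projects/myapp).
DIR=/tmp/vmtouch-demo
mkdir -p "$DIR" && echo data > "$DIR/file.txt"
if command -v vmtouch >/dev/null 2>&1; then
    vmtouch -t "$DIR"    # -t: touch (read) every page so it's cached in RAM
    vmtouch "$DIR"       # report how much of the tree is resident
else
    echo "vmtouch not installed; point it at your project dir once it is"
fi
```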
You can also pin a specific process to your fastest core, so that core gets no tasks except the one you want it to do. But that seems like more hassle than it's worth, so I never tried it.
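For reference, `taskset` from util-linux does the pinning half of this (core numbers below are arbitrary examples; actually reserving a core from the rest of the system needs something like the `isolcpus` boot parameter, which is the extra hassle):

```shell
# Run a command restricted to a chosen CPU core.
# Core 0 always exists; substitute your fastest core's number.
taskset -c 0 echo "pinned to core 0"

# For an already-running process (PID is a placeholder):
#   taskset -cp 7 <pid>
```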
Lots of RAM, 32GB+, and compile in tmpfs.
I'm thinking 2x32GB, tempted by more but seems excessive
Ram is cheap as fuck, time is not.
Very few things need 64GB memory to compile, but some do. If you think you'll be compiling web browsers or clang or something, then 64GB would be the right call.
Also, higher speeds of DDR5 can be unstable at higher capacities. If you're going with 64GB or more of DDR5, I'd stick to speeds around 6000 MT/s (or less) and not focus too much on overclocking it. If you get a kit of 2x32GB (which you should, rather than buying the sticks independently), you'll be fine. You won't benefit as much from RAM speed anyway, as opposed to capacity.
Monitor its current resource usage, but disk I/O is usually the thing to look at. Don't use spinning disks.
Don’t use spinning disks
Let's be honest here, that applies to any modern PC outside of mass media storage/backups. I've been saying for nearly 10 years now that HDDs don't belong in normal computers.
I'm going NVMe
If you're doing big compilations, get good cooling also.
would a decent air cooler suffice, or are you thinking AIO?
I was thinking a good air cooler, I don't like AIO!
Air cooling is sufficient to cool most consumer processors these days. Make sure to get a good cooler though. I remember Thermalright's Peerless Assassin being well reviewed, but there may be even better (reasonably priced) options these days.
If you don't care about price, Noctua's air coolers are overkill but expensive, or an AIO could be an option too.
AIOs have the benefit of moving heat directly to your fans via fluid instead of heating up the case interior, but that usually doesn't matter that much, especially outside of intense gaming.