Let’s Encrypt Begins Supporting IP Address Certificates
Thanks, didn't think about that. Two reasons I can think of:
- Vase mode should reduce stringing on TPU as it avoids retractions. Though I have found that just drying TPU + enabling "avoid crossing perimeters" usually hides most stringing.
- Additionally, it would let you have more precise control over how squishy/firm the TPU part is by adjusting the number of perimeters. Though you can use modifier volumes in the slicer to adjust infill and number of perimeters locally in a part.
Is there any other reason why this is good for TPU that I missed?
Unless they are in different cities they wouldn't be safe from a fire, lightning strike, earthquake/flood/tsunami/typhoon/hurricane/etc. (remove whichever ones are not relevant to where you live).
That seems like a really big downside to me. The whole point of locking down your dependencies and using something like Renovate is that you can know exactly what version of everything was used at any given point in time.
If you work in a team in software, being able to exactly reproduce any prior version is both very useful and considered basically required in modern development. NixOS can extend that to the entire system for a Linux distro (it is an interesting project, but there are parts of it I dislike; I hope someone takes those ideas and makes them better). Circling back to the original topic: I don't see why deploying images should be any different.
I do want to give Komodo a try though, hadn't heard about it. Need to check if it supports podman though.
I haven't used Komodo, but would it commit the updated docker files to git? Or just use the "latest" tag and follow that? In the latter case you can't easily roll back, nor do you have a reproducible setup.
Hm, that is a fair point. Perhaps it would make sense to produce a table of checks: indicate which checks each dependency fails/passes, and then colour code them by severity.
Some experimentation on real world code is probably needed. I plan to try this tool on my own projects soon (after I manually verify that your crate matches your git code (hah! bootstrap problem); I already reviewed your code on GitHub and it seemed to do what it claims).
Yes, obviously there are more ways to hide malicious code.
As for the git commit ID: I didn't see you using it even when it was available though? But perhaps that could be a weakness; if the commit ID used does not match the tag in the repo, that would be a red flag too. That could be worth checking.
Due to the recent xz trouble, I presume? Good idea. I was thinking about this on an ecosystem-wide scale (e.g. all of crates.io or all of a Linux distro), which is a much harder problem to solve.
Not sure if the tag logic is needed though. I thought cargo embedded the commit ID in the published package?
Also I'm amazed that the name cargo-goggles was available.
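On the commit ID point: as far as I know, publishing from a git checkout makes cargo include a `.cargo_vcs_info.json` file in the packaged `.crate`, with the commit sha1 in it. A rough sketch of reading it (the path below is made up for the example, and it uses the serde_json crate):

```rust
// Rough sketch: read the commit ID that cargo records when publishing
// from a git checkout. The extracted .crate contains .cargo_vcs_info.json
// with a {"git": {"sha1": "..."}} entry. The path here is just an
// example location; adjust it to wherever you unpacked the crate.
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = fs::read_to_string("some-crate-1.2.3/.cargo_vcs_info.json")?;
    let info: serde_json::Value = serde_json::from_str(&raw)?;
    match info["git"]["sha1"].as_str() {
        Some(sha1) => println!("published from commit {sha1}"),
        None => println!("no commit recorded in the package"),
    }
    Ok(())
}
```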
Sure, but my point was that such a C ABI is a pain. There are some crates that help:
- Rust-C++: cxx and autocxx
- Rust-Rust: stabby or abi_stable
But without those, and with just plain bindgen, it is a pain to transfer any types that can't easily just be repr(C), and there are quite a few such types: enums with data, for example, or anything using the built-in collections (HashMap, etc.) or any other complex type you don't have direct control over yourself.
So my point still stands. FFI with just bindgen/cbindgen is a pain, and lack of stable ABI means you need to use FFI between rust and rust (when loading dynamically).
In fact FFI is a pain in most languages (apart from C itself where it is business as usual... oh wait that is the same as pain, never mind) since you are limited to the lowest common denominator for types except in a few specific cases.
Yes, rust is that much of a pain in this case, since you can only safely pass plain C compatible types across the plugin boundary.
One reason is that rust doesn't have stable layouts of structs and enums; the compiler is free to optimise them to avoid padding by reordering fields, decide which parts to use as niches for Option, etc. And yes, that changes every now and then as the devs come up with new optimisations. I think it changed most recently last summer.
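A small illustration of the niche packing I mentioned (the sizes for references happen to be guaranteed, but the layout of repr(Rust) types in general is not something you can rely on across compiler versions):

```rust
use std::mem::size_of;

fn main() {
    // &u8 can never be null, so Option<&u8> reuses that "niche" and
    // stays pointer-sized.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    // u32 has no spare bit patterns, so Option<u32> needs extra space
    // for the discriminant.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
    // For repr(Rust) structs the compiler may reorder fields however it
    // likes; #[repr(C)] is what pins the layout down for FFI.
    println!("sizes: {} vs {}", size_of::<Option<&u8>>(), size_of::<Option<u32>>());
}
```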
So there are a couple of options for plugins in Rust (and I haven't tried any of them, yet):
- Wasm, supposedly https://extism.org/ makes this less painful.
- libloading + C ABI (see the sketch after this list)
- One of the two stable ABI crates (stabby or abi_stable) + libloading
- If you want to build them into your code base but not have to update a central list, there are linkme and inventory.
- An embedded scripting language might also be a (very different) option. Something like mlua, rhai or rune.
I don't know if any of these suit your needs, but at least you now have some things to investigate further.
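For the libloading + C ABI option, here is a minimal sketch of what the host side could look like. The plugin path and symbol name are made up for the example, and the plugin side would have to export the function with `#[no_mangle] pub extern "C"`:

```rust
use libloading::{Library, Symbol};

// Signature the plugin is assumed to export as
// `#[no_mangle] pub extern "C" fn plugin_entry(x: i32) -> i32`.
type PluginEntry = unsafe extern "C" fn(i32) -> i32;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // SAFETY: loading a library runs its initialisation code; we assume
    // the plugin is trusted.
    let lib = unsafe { Library::new("./plugin.so")? };
    // SAFETY: the symbol must really have the signature declared above.
    let entry: Symbol<PluginEntry> = unsafe { lib.get(b"plugin_entry")? };
    // SAFETY: calling across the C ABI with a plain i32 is fine; anything
    // fancier (String, HashMap, ...) would need conversion first.
    let result = unsafe { entry(21) };
    println!("plugin returned {result}");
    Ok(())
}
```

Note how everything that crosses the boundary has to be C compatible, which is exactly the pain point from before.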
I would go with the Arch specific https://aur.archlinux.org/packages/aconfmgr-git instead of ansible, since it can save current system state as well. I use it and love it. See another reply on this post for a slightly deeper discussion on it.
I can second this, I use aconfmgr and love it. Especially useful to manage multiple computers (desktop, laptop, old computer doing other things etc).
Though I'm currently planning to rewrite it since it doesn't seem maintained any more, and I want a multi-distro solution (because I also want to use it on my Pis where I run Raspbian). The rewrite will be in Rust, and I'm currently deciding on what configuration language to use. I'm leaning towards rhai (because it seems easy to integrate from the Rust side, and I don't get too angry at the language when reading its docs). Oh, and one component for it is already written and published: https://github.com/VorpalBlade/paketkoll is a fast Rust replacement for paccheck (which is used internally by aconfmgr to find files that differ).
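For reference, this is roughly what embedding rhai looks like from the Rust side (the script here is just a placeholder, not any actual configuration format I have settled on):

```rust
use rhai::Engine;

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    // The embedding API is pleasantly small: create an engine and
    // evaluate scripts into typed Rust values.
    let engine = Engine::new();
    let answer: i64 = engine.eval("40 + 2")?;
    println!("the config script says: {answer}");
    Ok(())
}
```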
I went ahead and implemented support for filtering packages (just made a new release: v0.1.3).
I am of course still faster. Here are two examples that show a small package (where it doesn't really matter that much) and a huge package (where it makes a massive difference). Excuse the strange paths, this is straight from the development tree.
Let's check on pacman itself, and let's include config files too (not sure if pacman even has that option?). Config files or not doesn't make a measurable difference though:
```console
$ hyperfine -i -N --warmup 1 "./target/release/paketkoll --config-files=include pacman" "pacman -Qkk pacman"
Benchmark 1: ./target/release/paketkoll --config-files=include pacman
  Time (mean ± σ):      14.0 ms ±   0.2 ms    [User: 21.1 ms, System: 19.0 ms]
  Range (min … max):    13.4 ms …  14.5 ms    216 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: pacman -Qkk pacman
  Time (mean ± σ):      20.2 ms ±   0.2 ms    [User: 11.2 ms, System: 8.8 ms]
  Range (min … max):    19.9 ms …  21.1 ms    147 runs

Summary
  ./target/release/paketkoll --config-files=include pacman ran
    1.44 ± 0.02 times faster than pacman -Qkk pacman
```
Let's check on davinci-resolve as well, which is massive (5.89 GB):
```console
$ hyperfine -i -N --warmup 1 "./target/release/paketkoll --config-files=include pacman davinci-resolve" "pacman -Qkk pacman davinci-resolve"
Benchmark 1: ./target/release/paketkoll --config-files=include pacman davinci-resolve
  Time (mean ± σ):     770.8 ms ±   4.3 ms    [User: 2891.2 ms, System: 641.5 ms]
  Range (min … max):   765.8 ms … 778.7 ms    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: pacman -Qkk pacman davinci-resolve
  Time (mean ± σ):     10.589 s ±  0.018 s    [User: 9.371 s, System: 1.207 s]
  Range (min … max):   10.550 s … 10.620 s    10 runs

  Warning: Ignoring non-zero exit code.

Summary
  ./target/release/paketkoll --config-files=include pacman davinci-resolve ran
   13.74 ± 0.08 times faster than pacman -Qkk pacman davinci-resolve
```
What about some midsized packages (vtk 359 MB, linux 131 MB)?
```console
$ hyperfine -i -N --warmup 1 "./target/release/paketkoll vtk" "pacman -Qkk vtk"
Benchmark 1: ./target/release/paketkoll vtk
  Time (mean ± σ):      46.4 ms ±   0.6 ms    [User: 204.9 ms, System: 93.4 ms]
  Range (min … max):    45.7 ms …  48.8 ms    65 runs

Benchmark 2: pacman -Qkk vtk
  Time (mean ± σ):     702.7 ms ±   4.4 ms    [User: 590.0 ms, System: 109.9 ms]
  Range (min … max):   698.6 ms … 710.6 ms    10 runs

Summary
  ./target/release/paketkoll vtk ran
   15.15 ± 0.23 times faster than pacman -Qkk vtk

$ hyperfine -i -N --warmup 1 "./target/release/paketkoll linux" "pacman -Qkk linux"
Benchmark 1: ./target/release/paketkoll linux
  Time (mean ± σ):      34.9 ms ±   0.3 ms    [User: 95.0 ms, System: 78.2 ms]
  Range (min … max):    34.2 ms …  36.4 ms    84 runs

Benchmark 2: pacman -Qkk linux
  Time (mean ± σ):     313.9 ms ±   0.4 ms    [User: 233.6 ms, System: 79.8 ms]
  Range (min … max):   313.4 ms … 314.5 ms    10 runs

Summary
  ./target/release/paketkoll linux ran
    9.00 ± 0.09 times faster than pacman -Qkk linux
```
For small sizes where neither tool performs much work, the majority of the time is spent on fixed overheads that both tools have (loading the binary, setting up glibc internals, parsing the command line arguments, etc). For medium sizes paketkoll pulls ahead quite rapidly. And for large sizes pacman is painfully slow.
Just for laughs I decided to check an empty meta-package (base, 0 bytes). Here pacman actually beats paketkoll, slightly. Not a useful scenario, but for full transparency I should include it:
```console
$ hyperfine -i -N --warmup 1 "./target/release/paketkoll base" "pacman -Qkk base"
Benchmark 1: ./target/release/paketkoll base
  Time (mean ± σ):      13.3 ms ±   0.2 ms    [User: 15.3 ms, System: 18.8 ms]
  Range (min … max):    12.8 ms …  14.1 ms    218 runs

Benchmark 2: pacman -Qkk base
  Time (mean ± σ):       8.8 ms ±   0.2 ms    [User: 2.8 ms, System: 5.8 ms]
  Range (min … max):     8.4 ms …  10.0 ms    327 runs

Summary
  pacman -Qkk base ran
    1.52 ± 0.05 times faster than ./target/release/paketkoll base
```
I always start a threadpool regardless of whether I have work to do (and changing that would slow down the case I actually care about). That is the most likely cause of this slightly larger fixed overhead.
It very much is (as I even acknowledge at the end of the github README). 😀
I have only implemented checking all packages at the current point in time (as that is what I need later on). It should be possible to add support for checking a single package.
Thank you for reminding me of `pacman -Qkk` though, I had forgotten it existed.
I just did a test of `pacman -Qk` and `pacman -Qkk` (with no package, so checking all of them) and `paketkoll` is much faster. Based on the man page:
- `pacman -Qk` only checks that files exist. I don't have that option; I always check file properties at least, but have the option to skip checking the file hash if the mtime and size match (`paketkoll --trust-mtime`; sketched below). Even though I check more in this scenario I'm still about 4x faster.
- `pacman -Qkk` checks checksums as well (similar to plain `paketkoll`). It is unclear to me if pacman will check the checksum if the mtime and size match.
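To make the `--trust-mtime` idea concrete, this is roughly the decision it implements. The names and types here are simplified for illustration, not paketkoll's actual internals:

```rust
use std::fs;
use std::path::Path;
use std::time::SystemTime;

/// What the package database says a file should look like
/// (simplified; real metadata also has mode, uid/gid, etc).
struct ExpectedFile {
    size: u64,
    mtime: SystemTime,
    sha256: [u8; 32], // only compared when the cheap checks fail
}

/// Decide whether we need to read and hash the file at all.
fn needs_hash_check(path: &Path, expected: &ExpectedFile, trust_mtime: bool) -> std::io::Result<bool> {
    let meta = fs::symlink_metadata(path)?;
    if meta.len() != expected.size {
        return Ok(true); // size differs: the file definitely changed
    }
    if trust_mtime && meta.modified()? == expected.mtime {
        return Ok(false); // size and mtime match: trust it, skip hashing
    }
    Ok(true) // otherwise fall back to comparing the checksum
}

fn main() -> std::io::Result<()> {
    // Dummy entry just to exercise the function.
    let expected = ExpectedFile {
        size: 0,
        mtime: SystemTime::UNIX_EPOCH,
        sha256: [0u8; 32],
    };
    let check = needs_hash_check(Path::new("/etc/hostname"), &expected, true)?;
    println!("needs full hash check: {check}");
    Ok(())
}
```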
I can report that `paketkoll` handily beats pacman in both scenarios (`pacman -Qk` is slower than `paketkoll --trust-mtime`, and `pacman -Qkk` is much slower than plain `paketkoll`). Below is the output of the hyperfine benchmarking tool:
```console
$ hyperfine -i -N --warmup=1 "paketkoll --trust-mtime" "paketkoll" "pacman -Qk" "pacman -Qkk"
Benchmark 1: paketkoll --trust-mtime
  Time (mean ± σ):     246.4 ms ±   7.5 ms    [User: 1223.3 ms, System: 1247.7 ms]
  Range (min … max):   238.2 ms … 261.7 ms    11 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: paketkoll
  Time (mean ± σ):      5.312 s ±  0.387 s    [User: 17.321 s, System: 13.461 s]
  Range (min … max):    4.907 s …  6.058 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 3: pacman -Qk
  Time (mean ± σ):     976.7 ms ±   5.0 ms    [User: 101.9 ms, System: 873.5 ms]
  Range (min … max):   970.3 ms … 984.6 ms    10 runs

Benchmark 4: pacman -Qkk
  Time (mean ± σ):     86.467 s ±  0.160 s    [User: 53.327 s, System: 16.404 s]
  Range (min … max):   86.315 s … 86.819 s    10 runs

  Warning: Ignoring non-zero exit code.
```
It appears that `pacman -Qkk` is much slower than `paccheck --file-properties --sha256sum` even. I don't know how that is possible!
The above benchmarks were executed on an AMD Ryzen 5600X with 32 GB RAM and a Gen3 NVMe SSD, with `pacman -Syu` most recently executed as of yesterday. The disk cache was hot in between runs for all the tools; a cold cache would make the first run a bit slower for all of them (but not to a large extent on an SSD; I imagine it would dominate on a mechanical HDD though).
In conclusion:
- When checking just file properties, `paketkoll` is 3.96 times faster than pacman checking just if the files exist.
- When checking checksums, `paketkoll` is 16.3 times faster than pacman checking file properties. This is impressive on a 6 core/12 thread CPU; pacman must be doing something exceedingly stupid here (might be worth looking into, perhaps it is checking both sha256sum and md5sum, which is totally unneeded). Compared to `paccheck` I see a 7x speedup in that scenario, which is more in line with what I would expect.
paketkoll - Check installed distro files for changes (much faster than paccheck)
Swedish layout. Not ideal for coding (too many things like curly and square brackets etc. are under AltGr, and tilde and backtick are on dead keys).
But switching back and forth as soon as you need to write Swedish (for the letters åäö) is just too much work. And yes, in the Swedish alphabet they are separate letters, not a, a, o with diacritics.
Interesting repo and seems useful as a teaching aid, the algorithms seem to be written with a focus on readability.
However, if you actually need to do any of these operations in production I would recommend finding an optimised and well tested implementation instead. This is especially important for the cryptographic algorithms! But even for something like counting set bits, modern x86-64 CPUs have a built-in instruction for that (POPCNT).
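For example, in Rust you would just reach for the standard library, which compiles down to a single POPCNT instruction when the target CPU supports it (e.g. built with `-C target-cpu=native`):

```rust
fn main() {
    let x: u64 = 0b1011_0110;

    // Idiomatic: count_ones() lowers to POPCNT where available.
    println!("set bits: {}", x.count_ones());

    // The "teaching" version for comparison (Kernighan's trick:
    // n & (n - 1) clears the lowest set bit).
    let mut n = x;
    let mut count = 0u32;
    while n != 0 {
        n &= n - 1;
        count += 1;
    }
    assert_eq!(count, x.count_ones());
}
```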
The LGPL specifically does, as far as I understand, have some issues when used with Rust. In particular, the copyleft boundary is dynamic linking, which doesn't work well with Rust. I would instead consider the MPL, where the copyleft boundary is at the source file level.
That said, I'm not a lawyer!
> Saying “it’s a graph of commits” makes no sense to a layperson.
Sure, but git is aimed at programmers, who should have learned graph theory in university. It was part of the very first course I had as an undergraduate many years ago.
Git is definitely hard though for almost all the reasons in the article, perhaps other reasons too. But not understanding what a DAG is shouldn't be one of them, for the intended target audience.
Let's Encrypt is meant to be used with automated certificate renewal using the ACME protocol. There are many clients for this, both standalone and built into e.g. Caddy, Traefik, and other software that does SSL termination.
So this specific concern doesn't really make sense. But that doesn't mean I really see a use case for it either, since it usually makes more sense to access resources via a host name.