Unable to read the paywalled article, but ROCm is AMD's equivalent of Nvidia's CUDA for their GPUs. Consumer GPUs have been supported on and off, with official support mostly limited to datacenter cards. It also used to be Linux-only, but they are bringing ROCm to Windows as well.
AMD is probably planning to improve support for consumer GPUs, since their cards are competitive with Nvidia's at a lower price point, and local LLMs, image generation, and other AI workloads are booming right now.
Currently, Nvidia's CUDA is more or less the industry standard, so AMD with ROCm and Intel with OpenVINO etc. are trying to chip away at that monopoly.
This could probably be done if you split-tunnel the Mullvad VPN connection to just your torrent application, rather than running the VPN over the laptop's entire network stack.
Alternatively, dockerize the entire VPN + torrent (+ Jellyfin) setup? That way the torrent container's traffic goes through the VPN, but Jellyfin is still reachable on your host IP.
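As a rough sketch of that idea (untested, and assuming gluetun as the VPN container, which supports Mullvad over WireGuard, and qBittorrent as the torrent client; image names and ports are just the common defaults):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<from your Mullvad account>
      - WIREGUARD_ADDRESSES=<from your Mullvad account>
    ports:
      - 8080:8080   # qBittorrent web UI, exposed through the VPN container

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all torrent traffic rides the VPN
    volumes:
      - ./downloads:/downloads

  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - 8096:8096   # stays on the host network, reachable at <host-ip>:8096
    volumes:
      - ./downloads:/media
```

Only the qbittorrent service shares gluetun's network namespace, so Jellyfin traffic never touches the VPN.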
Just don't get a Xiaomi if you're in the US. I had one when I moved from India last year, and the majority of the bands were unsupported, so I was stuck on 2G/3G speeds.
I'm just going to cheat here a bit and use ChatGPT to summarize this, since I don't want to get the calculation wrong. Hope it makes sense. I'm just excited to share this!
########## Integrated GPU #########
Total inference time = Load time + Sample time + Prompt eval time + Eval time
Total inference time = 26205.90 ms + (6.34 ms/sample * 103 samples) + 29234.08 ms + 118847.32 ms
Total inference time = 26205.90 ms + 653.02 ms + 29234.08 ms + 118847.32 ms
Total inference time = 174940.32 ms
So, the total inference time is approximately 174940.32 ms.
########## Discrete GPU 6800M #########
Total inference time = Load time + Sample time + Prompt eval time + Eval time
Total inference time = 60188.90 ms + (3.58 ms/sample * 103 samples) + 7133.18 ms + 13003.63 ms
Total inference time = 60188.90 ms + 368.74 ms + 7133.18 ms + 13003.63 ms
Total inference time = 80694.45 ms
So, the total inference time is approximately 80694.45 ms.
#####################################
Taking the difference, Integrated - Discrete: 94245.87 ms.
That means the discrete GPU takes about 54% less time (roughly a 2.2x speedup), or about 1.5 minutes less: the integrated GPU takes close to 175 seconds while the discrete one finishes in about 81 seconds.
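For anyone double-checking, here's a tiny Python snippet that recomputes those totals from the llama.cpp timing numbers above (nothing assumed beyond the formula already used: load + sample + prompt eval + eval):

```python
# Recompute total inference time from llama.cpp-style timings (all in ms).
def total_ms(load, ms_per_sample, n_samples, prompt_eval, eval_):
    return load + ms_per_sample * n_samples + prompt_eval + eval_

integrated = total_ms(26205.90, 6.34, 103, 29234.08, 118847.32)
discrete = total_ms(60188.90, 3.58, 103, 7133.18, 13003.63)

print(f"Integrated: {integrated:.2f} ms")                # 174940.32 ms
print(f"Discrete:   {discrete:.2f} ms")                  # 80694.45 ms
print(f"Difference: {integrated - discrete:.2f} ms")     # 94245.87 ms
print(f"Time saved: {(1 - discrete / integrated):.1%}")  # 53.9%
print(f"Speedup:    {integrated / discrete:.2f}x")       # 2.17x
```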
I do think that adding more RAM at some point could definitely help improve the load times, since the laptop currently has about 16 GB of RAM.
Not necessarily; the defaults for most of these tools tend to be sane, but when you have a Swiss Army knife with dozens of attachments, you still need a manual to figure out what is what.
Note that many tools use ffmpeg under the hood, so users are generally never exposed to the various options. But sometimes, when they do need them, cheat sheets like these are really useful.
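To illustrate the "under the hood" part: a tool that rips audio from a video might just shell out to ffmpeg with a couple of those options. A hypothetical sketch (the wrapper function and file names are made up; the flags are standard ffmpeg ones):

```python
# Hypothetical wrapper showing how a tool might call ffmpeg internally.
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    # -i: input file, -vn: drop the video stream,
    # -q:a 2: a common VBR quality level for the audio encoder
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vn", "-q:a", "2", audio_path],
        check=True,
    )

extract_audio("input.mp4", "output.mp3")
```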
I use gonic with Sonixd on my laptops, but I'll probably move from Sonixd to Supersonic.
On my phone, Tempo is really awesome!