I mainly use it instead of googling and skimming articles: it gets me information quickly and lets me ask follow-up questions.
I do use it for boring refactoring stuff though.
Those are also the main use cases I use it for.
Really good for getting a quick overview of a new topic, and also really good at proposing different solutions/algorithms when you describe an issue to it.
Doesn't always respond correctly but at least gives you the terminology you need to follow up with a web search.
Also very good for generating boilerplate code. Like here's a sample JSON, generate the corresponding C# classes for use with System.Text.Json.JsonSerializer.
Hopefully the hardware requirements will come down as the technology gets more mature or hardware gets faster so you can run your own "coding assistant" on your development machine.
That depends; we have quite a few images that are just a single shell script or a collection of shell scripts, which run as jobs or cronjobs. Most of them are used for management tasks like cleaning up, moving stuff around, or debugging.
Has the big advantage of being identical on each node, so you don't have to worry about keeping those shell scripts up to date via mounted volumes. Very easy to just deploy a job with a debug image on every node to quickly check something in a cluster.
Of course, if the shell script "belongs" to an application you might as well add the shell script in the application container and override the start arguments.
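As a sketch of the "debug image on every node" trick with Swarm: a global-mode service runs one container per node, so a throwaway check can be fanned out and collected in a few commands. The image, service name, and command below are placeholders, not from a real setup.

```shell
# Run a throwaway debug container on every Swarm node at once.
# --mode global      -> one task per node
# --restart-condition none -> run once, don't respawn
docker service create \
  --name debug-probe \
  --mode global \
  --restart-condition none \
  alpine:3.19 \
  sh -c 'echo "$(hostname): $(df -h / | tail -1)"'

# Collect the output from all nodes, then clean up:
docker service logs debug-probe
docker service rm debug-probe
```

The same pattern works for cleanup jobs: swap the `sh -c` command for whatever script the image carries.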
Anything interesting going on in the kernel log while the connection doesn't work?
If so, you could maybe write a bug report at the amdgpu repo.
One thing I could imagine that is happening is that Linux chooses a higher chroma subsampling than on Windows. Had that issue before with a monitor that had a wrong EDID. Unfortunately it's a real pain to set the chroma subsampling on Linux with AMD.
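A few things worth checking before filing that report. Connector names vary per machine (look in `/sys/class/drm/` for yours), so the `card0-DP-1` paths below are just examples.

```shell
# Driver errors around the time the connection fails:
dmesg | grep -iE 'amdgpu|drm'

# Does the kernel even see the monitor as connected?
cat /sys/class/drm/card0-DP-1/status

# Dump the monitor's EDID and inspect it for bogus data
# (edid-decode is packaged in most distros' repos):
edid-decode < /sys/class/drm/card0-DP-1/edid
```

If the EDID looks wrong, that's useful evidence to attach to an amdgpu bug report.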
That's strange, 6.6.14 is the same version that's on Fedora currently. My friend with a 7900 XTX is still on 6.5.0 so I can't get him to test that version right now.
Having a bleeding edge kernel can and will come back to bite you. There's a reason why many distros hold back kernel updates for so long: there are issues that can only be found through user feedback.
From experience, "stable" in the kernel world doesn't mean much, unfortunately. I've already run into dozens of issues across various versions and different hardware, and it's the main reason I don't run rolling release distros on my main rig.
There have also been enough times when the latest Nvidia driver borked my test system at work, so I'm fine with just not running the latest kernel.
Definitely go with K3s instead of K8s if you want to go the Kubernetes route. K8s is a massive pain in the ass to set up. Unless you want to learn about it for work, I would avoid it for homelab usage.
I currently run Docker Swarm nodes on top of LXCs in Proxmox.
Pretty happy with the setup, except that I can't get IPv6 to work in Docker overlay networks and the overlay network performance leaves something to be desired.
I previously used Rancher to run Kubernetes but I didn't like the complexity it adds for pretty much no benefit. I'm currently looking into switching to K3s to finally get my IPv6 stack working. I'm so used to docker-compose files that it's hard to get used to the way Kubernetes does things though.
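For anyone curious, the K3s install really is a one-liner (that's its documented install script). The dual-stack flags below are my assumption based on the upstream networking docs for getting IPv6 working; the CIDRs are placeholders you'd swap for your own ranges.

```shell
# Single-node K3s install via the official script:
curl -sfL https://get.k3s.io | sh -s - \
  --cluster-cidr=10.42.0.0/16,2001:db8:42::/56 \
  --service-cidr=10.43.0.0/16,2001:db8:43::/112

# Check that the node came up:
sudo k3s kubectl get nodes
```

Compared to a full K8s bootstrap, that's the whole setup for a single node; agents join with one more command and a token.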
I use a 6900 XT and run llama.cpp and ComfyUI inside of Docker containers. I don't think the RX 590 is officially supported by ROCm; there's an environment variable you can set to enable support for unsupported GPUs, but I'm not sure how well it works.
AMD provides the handy rocm/dev-ubuntu-22.04:5.7-complete image which is absolutely massive in size but comes with everything needed to run ROCm without dependency hell on the host. I just build a llama.cpp and ComfyUI container on top of that and run it.
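Roughly, the build-on-top approach looks like this. The build flag matches the llama.cpp docs of the ROCm 5.x era (newer versions changed it), the model path is a placeholder, and `HSA_OVERRIDE_GFX_VERSION` is the unsupported-GPU override mentioned above; its value depends on which GPU you're spoofing.

```shell
# Build llama.cpp on top of AMD's ROCm image:
cat > Dockerfile <<'EOF'
FROM rocm/dev-ubuntu-22.04:5.7-complete
RUN apt-get update && apt-get install -y git build-essential
RUN git clone https://github.com/ggerganov/llama.cpp /llama.cpp \
 && cd /llama.cpp \
 && make LLAMA_HIPBLAS=1
WORKDIR /llama.cpp
EOF
docker build -t llama-rocm .

# /dev/kfd and /dev/dri plus the video/render groups are what ROCm
# needs inside the container:
docker run --rm -it \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --group-add render \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  -v /models:/models \
  llama-rocm ./main -m /models/model.gguf -p "Hello"
```

The win is that all the ROCm libraries stay inside the image; the host only needs the amdgpu kernel driver.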
Same here, even asked the developer if the Steam Deck is supported but they couldn't tell me.
Refunded it for now; might check back next sale to see whether it works and the microtransactions are acceptable.