Messing around with the system Python/pip and newly installed versions until everything was broken, and only then looking at the documentation. This was way back in the '00s, and I'm still ashamed of how fast and how completely I messed it up.
I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
How many containers can share one physical card, assuming the total VRAM is not exceeded?
What does one virtual GPU look like from inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general?
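For reference, my current mental model of how a container gets GPU access is something like the following Docker invocations (this assumes the NVIDIA Container Toolkit is installed on the host; the image tag is just an example):

```shell
# Expose all host GPUs to the container and check visibility with nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Expose only a specific physical GPU (device index 0) to one container;
# several containers can point at the same device this way, since --gpus
# grants access but does not partition or reserve VRAM by itself
docker run --rm --gpus '"device=0"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Is this roughly right, and is the VRAM split then entirely up to the applications inside each container?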