Very large amounts of gaming gpus vs AI gpus
starshipwinepineapple @programming.dev · Posts: 1 · Comments: 97 · Joined: 1 yr. ago
TFLOPS is a generic measurement of theoretical peak throughput, not actual utilization, and it isn't specific to a given type of workload. Not all workloads saturate GPU utilization equally, and AI models depend on CUDA/Tensor cores: the generation and count of your cores determine how well optimized the card is for AI workloads and how much of those TFLOPS you can actually apply to your task. And yes, AMD uses ROCm, which I didn't feel I needed to specify since it's a given (and it's years behind CUDA's capabilities). The point is that these things are not equal, and there are major differences here alone.
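To make the peak-vs-achieved distinction concrete, here's a minimal sketch. All the numbers (FLOPs per step, step time, 100 TFLOPS peak) are made-up illustrative figures, not measurements of any specific card:

```python
# Peak TFLOPS is a ceiling; what a workload actually delivers is usually far lower.

def achieved_tflops(total_flops: float, seconds: float) -> float:
    """Effective throughput a run actually delivered, in TFLOPS."""
    return total_flops / seconds / 1e12

def utilization(achieved: float, peak: float) -> float:
    """Fraction of the card's theoretical peak the workload used."""
    return achieved / peak

# Hypothetical: a step doing 1.2e14 FLOPs takes 2.0 s on a card with a
# 100 TFLOPS theoretical peak.
ach = achieved_tflops(1.2e14, 2.0)      # 60.0 TFLOPS delivered
print(f"achieved: {ach:.1f} TFLOPS, utilization: {utilization(ach, 100.0):.0%}")
```

Two cards with the same paper TFLOPS can land at very different utilization on the same model, which is the whole point.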
I mentioned memory type since the cards you listed use different kinds (HBM vs GDDR), so you can't just compare capacity alone and expect equal performance.
And again, for your specific use case of this large MoE model you'd need to solve the GPU-to-GPU communication issue (ensuring both the physical connections and sufficient speed so you don't get bottlenecked).
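A back-of-envelope sketch of why that interconnect matters. The link bandwidths are rough published theoretical figures (PCIe 4.0 x16 and H100-class NVLink); the 256 MB activation payload is a made-up example:

```python
# How long it takes to ship one tensor between GPUs over different links.

def transfer_ms(bytes_moved: float, link_gb_s: float) -> float:
    """Milliseconds to move a payload over a link of the given bandwidth (GB/s)."""
    return bytes_moved / (link_gb_s * 1e9) * 1e3

payload = 256 * 1024 * 1024   # hypothetical 256 MB of routed expert activations
pcie4_x16 = 32.0              # ~32 GB/s theoretical, PCIe 4.0 x16
nvlink4 = 900.0               # ~900 GB/s aggregate, NVLink 4 (H100-class)

print(f"PCIe 4.0 x16: {transfer_ms(payload, pcie4_x16):.2f} ms per hop")
print(f"NVLink 4:     {transfer_ms(payload, nvlink4):.2f} ms per hop")
```

If expert routing has to cross a consumer-card PCIe link every step, that per-hop cost stacks up fast, which is why multi-GPU MoE setups lean on fast dedicated interconnects.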
I think you're going to need to do actual analysis of the specific setup you're proposing. Good luck.