DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
ParetoOptimalDev @lemmy.today
No, people who want something approaching ChatGPT, but local, want to run at least DeepSeek V3 32B.
Qwen fares much worse for my usage, as do DeepSeek V3 models under 32B.
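(For anyone curious what "running it locally" looks like in practice, here is a minimal sketch using llama-cpp-python with a quantized GGUF model. The model file, quantization, and settings are hypothetical examples, not the commenter's actual setup.)

```python
# Minimal sketch: running a local quantized ~32B GGUF model via llama-cpp-python.
# The model path and parameters below are illustrative assumptions only.
from llama_cpp import Llama

llm = Llama(
    model_path="models/local-32b-instruct-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=8192,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU/Metal where available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the tradeoffs of local vs. hosted LLMs."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```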