DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
IndeterminateName @beehaw.org
A token is a bit like a syllable when you're talking about text-based responses. Twenty tokens per second is faster than most people can read the output, so it's sufficient for a real-time-feeling "chat".
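For a rough sense of why 20 tokens/s feels real-time, here's a quick back-of-envelope sketch in Python. The words-per-token and reading-speed figures are rough assumptions, not from the article:

```python
# Back-of-envelope check: is 20 tokens/s faster than typical reading speed?
# Assumptions (approximate, for illustration only): ~0.75 English words per
# token, and an average silent reading speed of ~250 words per minute.

TOKENS_PER_SECOND = 20
WORDS_PER_TOKEN = 0.75      # rough average for English text
READING_SPEED_WPM = 250     # typical adult silent reading speed

generated_wpm = TOKENS_PER_SECOND * WORDS_PER_TOKEN * 60

print(f"Model output:  ~{generated_wpm:.0f} words/minute")
print(f"Typical reader: ~{READING_SPEED_WPM} words/minute")
print(f"Output is ~{generated_wpm / READING_SPEED_WPM:.1f}x faster than reading speed")
```

Under those assumptions the model produces roughly 900 words per minute, a few times faster than most people read, which is why the output keeps up with you rather than the other way around.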