I'm having a fantastic time with this model.
noneabove1182 @sh.itjust.works
Posts: 89 · Comments: 149 · Joined: 2 yr. ago
Inside The OnePlus Open – And The Machines That Torture It [Exclusive] - MrMobile
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
Effective Long-Context Scaling of Foundation Models | Research - AI at Meta
Amazon investing in Anthropic - Expanding access to safer AI with Amazon
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
GitHub - nicholasyager/llama-cpp-guidance: A guidance compatibility layer for llama-cpp-python
Hmm, I had interesting results from both of those base models but haven't tried the combo yet. I'll start some exllamav2 quants to test.
What's it doing well at?
quant link for anyone who may want it: https://huggingface.co/bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2
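If you grab that exl2 quant, here's a minimal sketch of loading and sampling it with exllamav2's Python API. The class and method names follow the repo's example scripts as I remember them, and the local folder name is hypothetical, so treat the details as assumptions rather than a definitive recipe:

```python
# Minimal sketch: load an exl2 quant with exllamav2 and generate a reply.
# Names follow exllamav2's example scripts as I recall them -- verify against
# the version you have installed.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Hypothetical path to the downloaded quant folder
model_dir = "OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs while loading

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Write a haiku about quantization.", settings, 128))
```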