LLM_FQL_LogScale (GGUF f16)

This repo contains a GGUF f16 export for use with llama.cpp / Ollama / LM Studio.
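
For example, a minimal run with llama.cpp (a sketch; assumes a recent build, where the CLI binary is named llama-cli — older builds ship the same tool as main):

    # generate 64 tokens from a short prompt
    ./llama-cli -m model-f16.gguf -p "Hello" -n 64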

Files

  • model-f16.gguf
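
To load this file in Ollama, one possible sketch (the model name llm-fql-logscale is just an example, not something the repo defines):

    # write a minimal Modelfile pointing at the GGUF
    echo "FROM ./model-f16.gguf" > Modelfile
    # register the model under a local name, then run it
    ollama create llm-fql-logscale -f Modelfile
    ollama run llm-fql-logscale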

Notes

  • This is an f16 GGUF. For smaller files, quantize it (e.g. to Q4_K_M) with llama.cpp's quantize tool; see the sketch below.
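
A sketch of that quantization step (recent llama.cpp builds name the tool llama-quantize; older ones call it quantize):

    # produce a smaller Q4_K_M file from the f16 export
    ./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M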

Model details

  • Format: GGUF, f16 (16-bit)
  • Parameters: 8B
  • Architecture: llama