Joe (Joe57005)

AI & ML interests: None yet

Recent activity:
- updated the "Models to try" collection 5 days ago
- updated the "Models to try" collection 6 days ago
- updated the "Good for home automation" collection 18 days ago

Organizations: None yet
For MOE 1.5B

Models to try
- bunnycore/Gemma2-2b-function-calling-lora (Updated • 1)
- NickyNicky/gemma-2b-it_oasst2_all_chatML_function_calling_Agent_v1 (Text Generation • 3B • Updated • 10 • 1)
- hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF (Text Generation • 1B • Updated • 382k • 38)
- gorilla-llm/gorilla-openfunctions-v2 (Text Generation • Updated • 951 • 244)
For finetune

Good for home automation
Large-context LLMs that work well with Home Assistant via a llama.cpp server running on CPU with 16 GB of RAM; a sample server invocation is sketched after this list.
- inclusionAI/Ling-mini-2.0 (Text Generation • 16B • Updated • 8.68k • 184)
- Orion-zhen/Qwen3-30B-A3B-Instruct-2507-IQK-GGUF (31B • Updated • 49 • 1)
- Intel/Qwen3-30B-A3B-Instruct-2507-gguf-q2ks-mixed-AutoRound (31B • Updated • 103 • 26)
- Tiiny/SmallThinker-21BA3B-Instruct (Text Generation • 22B • Updated • 127 • 111)
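A minimal sketch of how one of these quantized GGUF models could be served on a CPU-only box for Home Assistant. The model file name, thread count, and port are placeholder assumptions; the flags shown (-m, -c, -t, --host, --port) are standard llama.cpp llama-server options.

```sh
# Serve a quantized GGUF on CPU with 16 GB RAM (model file and port are placeholders).
# Home Assistant's OpenAI-compatible integration can then point at http://<host>:8080.
llama-server \
  -m ./Qwen3-30B-A3B-Instruct-2507-IQK.gguf \
  -c 16384 \
  -t 8 \
  --host 0.0.0.0 \
  --port 8080
```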
LLM Tools
- GGUF Editor 🏢 (Running • 84 likes): edit GGUF metadata for Hugging Face models
- mergekit-gui 🔀 (Runtime error • Featured • 290 likes): merge AI models using a YAML configuration file; a sample config is sketched after this list
- GGUF My Repo 🦙 (Running on A10G • 1.8k likes): create quantized models from Hugging Face repos
- SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs (Paper • 2512.04746 • Published • 13)
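For reference, a minimal sketch of the kind of YAML configuration mergekit (and the mergekit-gui Space) accepts. The merge method, model IDs, and weights below are placeholder assumptions, not recommendations from this collection.

```yaml
# Sketch of a mergekit config: a linear (weighted-average) merge of two 7B chat models.
# Model IDs and weights are placeholders; swap in the models you actually want to merge.
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.5
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
```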