SAGE-MM-Qwen3-VL-8B-SFT_RL-GGUF
SAGE-MM-Qwen3-VL-8B-SFT_RL from allenai is an 8B-parameter vision-language model post-trained with reinforcement learning (RL) on top of SAGE-MM-Qwen3-VL-8B-SFT (itself fine-tuned from Qwen/Qwen3-VL-8B-Instruct). It serves as the core decision-maker in the SAGE (Smart Any-Horizon Agent) system for long-video reasoning and shows clear performance gains over the SFT-only checkpoint.

The model operates in two stages. In Stage 1 it analyzes the initially sampled frames and metadata and classifies the query as single-turn (answered immediately) or multi-turn (requiring tools). In Stage 2 it iteratively emits JSON-formatted tool calls for web-search (external knowledge), transcribe-speech (timestamped ASR), ground-event (temporal localization), extract-video-parts (high-resolution frames or subclips), and analyze (visual breakdown), incorporating each observation into its context until the query is resolved.

Designed for question answering over videos of arbitrary length (sports, narratives, events, timelines) beyond fixed horizons, the model requires the SAGE GitHub runtime for tool parsing, execution, and observation feedback. It achieves state-of-the-art results on benchmarks such as MINERVA and is released under Apache 2.0 for research and educational use per Ai2 guidelines. The GGUF quantizations below (Q4_K_S or Q4_K_M recommended) support efficient local deployment.
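The exact tool-call schema and observation format are defined by the SAGE runtime on GitHub; the sketch below only illustrates the Stage-1/Stage-2 control flow described above, with stubbed tools and a hypothetical `model_generate` callable standing in for the GGUF-hosted model. Names and return values are illustrative, not the project's actual API.

```python
import json

# Hypothetical tool stubs mirroring the five tools described above.
# Real implementations (ASR, grounding, frame extraction, ...) live in the SAGE runtime.
TOOLS = {
    "web-search": lambda args: {"results": f"stub results for {args.get('query')}"},
    "transcribe-speech": lambda args: {"transcript": "[00:12] example utterance"},
    "ground-event": lambda args: {"span": [34.0, 52.5]},
    "extract-video-parts": lambda args: {"frames": ["frame_0340.jpg", "frame_0525.jpg"]},
    "analyze": lambda args: {"summary": "stub visual breakdown"},
}

def run_sage_loop(model_generate, query, context, max_turns=8):
    """Toy agent loop.

    `model_generate(messages) -> str` is assumed to return either a plain-text
    final answer (single-turn) or a JSON tool call such as
    {"tool": "ground-event", "arguments": {...}} (multi-turn).
    """
    messages = [{"role": "user", "content": f"{context}\n\nQuestion: {query}"}]
    for _ in range(max_turns):
        reply = model_generate(messages)
        try:
            call = json.loads(reply)   # Stage 2: the model emitted a tool call
        except json.JSONDecodeError:
            return reply               # Stage 1 / final turn: plain-text answer
        observation = TOOLS[call["tool"]](call.get("arguments", {}))
        # Feed the observation back so the next turn sees the updated context.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": json.dumps(observation)})
    return "No answer within the turn budget."
```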
SAGE-MM-Qwen3-VL-8B-SFT_RL [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| SAGE-MM-Qwen3-VL-8B-SFT_RL.IQ4_XS.gguf | IQ4_XS | 4.59 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q2_K.gguf | Q2_K | 3.28 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q3_K_L.gguf | Q3_K_L | 4.43 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q3_K_M.gguf | Q3_K_M | 4.12 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q3_K_S.gguf | Q3_K_S | 3.77 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q4_K_M.gguf | Q4_K_M | 5.03 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q4_K_S.gguf | Q4_K_S | 4.8 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q5_K_M.gguf | Q5_K_M | 5.85 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q5_K_S.gguf | Q5_K_S | 5.72 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q6_K.gguf | Q6_K | 6.73 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.f16.gguf | F16 | 16.4 GB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB | Download |
| SAGE-MM-Qwen3-VL-8B-SFT_RL.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
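For local use, the files above can be fetched with the `huggingface_hub` client. A minimal sketch, assuming the repo ID and file names from the table (pair whichever quant you choose with one of the mmproj files, which is required for vision input):

```python
from huggingface_hub import hf_hub_download

repo_id = "prithivMLmods/SAGE-MM-Qwen3-VL-8B-SFT_RL-GGUF"

# Language model weights (Q4_K_M is one of the recommended size/quality trade-offs).
model_path = hf_hub_download(repo_id, "SAGE-MM-Qwen3-VL-8B-SFT_RL.Q4_K_M.gguf")

# Multimodal projector, loaded alongside the LLM weights for image/video input.
mmproj_path = hf_hub_download(repo_id, "SAGE-MM-Qwen3-VL-8B-SFT_RL.mmproj-f16.gguf")

print(model_path, mmproj_path)
```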
Quants Usage
The quants are sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants. A handy graph by ikawrakow compares some of the lower-quality quant types (lower is better).
Model tree for prithivMLmods/SAGE-MM-Qwen3-VL-8B-SFT_RL-GGUF
Base model: Qwen/Qwen3-VL-8B-Instruct