SAGE-MM-Qwen2.5-VL-7B-SFT-GGUF

SAGE-MM-Qwen2.5-VL-7B-SFT from allenai is a 7B-parameter vision-language model fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct. It serves as the core decision-maker in the SAGE (Smart Any-Horizon Agent) system for long-video reasoning through a two-stage process: Stage-1 analyzes the initially sampled frames and video metadata to classify a query as single-turn (answered immediately) or multi-turn (requiring tools), while Stage-2 iteratively generates JSON-formatted tool calls (web search, speech transcription over timestamps, event grounding, video segment extraction, and detailed visual analysis) to build context progressively. Designed to handle videos of arbitrary length beyond fixed horizons, such as sports events, narratives, or complex timelines, the model requires the SAGE GitHub runtime for tool parsing, execution, and observation feedback, enabling robust Q&A through dynamic tool orchestration. It is released under Apache 2.0 for research and educational use per Ai2 guidelines, with GGUF quantizations provided here for efficient deployment. This SFT variant powers the SAGE framework's superior performance on benchmarks such as MINERVA, outperforming prior Qwen3-VL-4B baselines in extended video comprehension.
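The exact tool-call schema is defined by the SAGE runtime on GitHub; the sketch below is only illustrative, with hypothetical field names and a hypothetical tool name, to show the kind of JSON object Stage-2 is described as emitting before the runtime executes it and feeds the observation back.

```python
import json

# Hypothetical Stage-2 tool call. The keys ("tool", "arguments") and the
# "speech_transcription" tool name are illustrative, not the official SAGE schema.
tool_call = {
    "tool": "speech_transcription",
    "arguments": {
        "start_time": "00:12:30",  # timestamp range to transcribe (assumed format)
        "end_time": "00:14:05",
    },
}

# The SAGE runtime would parse a JSON string like this from the model's output,
# run the corresponding tool, and return the observation for the next turn.
print(json.dumps(tool_call, indent=2))
```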

SAGE-MM-Qwen2.5-VL-7B-SFT [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| SAGE-MM-Qwen2.5-VL-7B-SFT.IQ4_XS.gguf | IQ4_XS | 4.25 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q2_K.gguf | Q2_K | 3.02 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q3_K_L.gguf | Q3_K_L | 4.09 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q3_K_M.gguf | Q3_K_M | 3.81 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q3_K_S.gguf | Q3_K_S | 3.49 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q4_K_M.gguf | Q4_K_M | 4.68 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q4_K_S.gguf | Q4_K_S | 4.46 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q5_K_M.gguf | Q5_K_M | 5.44 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q5_K_S.gguf | Q5_K_S | 5.32 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q6_K.gguf | Q6_K | 6.25 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.Q8_0.gguf | Q8_0 | 8.1 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.f16.gguf | F16 | 15.2 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.mmproj-Q8_0.gguf | mmproj-Q8_0 | 856 MB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | Download |
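The quants can be pulled straight from this repository and loaded with llama.cpp bindings. The snippet below is a minimal sketch assuming the `huggingface_hub` and `llama-cpp-python` packages are installed and that the underlying llama.cpp build supports the qwen2vl architecture; swap the quant filename for whichever size fits your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "prithivMLmods/SAGE-MM-Qwen2.5-VL-7B-SFT-GGUF"

# Download a mid-sized quant plus the multimodal projector (mmproj) file.
model_path = hf_hub_download(REPO_ID, "SAGE-MM-Qwen2.5-VL-7B-SFT.Q4_K_M.gguf")
mmproj_path = hf_hub_download(REPO_ID, "SAGE-MM-Qwen2.5-VL-7B-SFT.mmproj-Q8_0.gguf")

# Text-only loading shown here; feeding video frames or images additionally
# requires a multimodal chat handler (or llama.cpp's multimodal CLI) that
# wires in the mmproj file.
llm = Llama(model_path=model_path, n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify this query as single-turn or multi-turn."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```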

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable over similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

