🚀 Qwen2.5-0.5B-Code-BCP-V2
📝 Overview
This model is a fine-tuned version of Qwen2.5-0.5B-Instruct, specialized for real-time code refactoring, logging injection, and algorithmic optimization. It is designed to power VSCode extensions where low latency and local execution are critical.
Compared to the base model, BCP-V2 demonstrates an emergent understanding of time complexity (O(n) awareness) and strictly follows developer-centric instructions without unnecessary conversational filler ("Zero-Yapping").
Key Capabilities:
- Optimization: Identifying and refactoring nested loops into hash map lookups (see the sketch after this list).
- Structured Logging: Injecting custom-formatted logs (e.g., `[MONITOR]` templates).
- Logic Transformation: Converting recursive functions to iterative patterns.
- IDE Ready: Optimized for GGUF format for seamless integration with Ollama or llama.cpp.
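For a concrete sense of the Optimization capability, the sketch below shows the kind of nested-loop-to-dictionary refactor the model is trained toward. The pair-finding task and function names are hypothetical, hand-written illustrations rather than actual model output:

```python
def find_pairs_quadratic(nums, target):
    """O(n^2): checks every pair of indices with nested loops."""
    pairs = []
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                pairs.append((nums[i], nums[j]))
    return pairs


def find_pairs_linear(nums, target):
    """O(n): single pass using a lookup dictionary of values seen so far."""
    seen = {}
    pairs = []
    for x in nums:
        if target - x in seen:
            pairs.append((target - x, x))
        seen[x] = True
    return pairs
```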
📊 Training Details
- Base Model: Qwen2.5-0.5B-Instruct (4-bit quantized)
- Framework: Unsloth
- Dataset: `iamtarun/python_code_instructions_18k_alpaca`
- Method: LoRA (Low-Rank Adaptation)
- Steps: 600 steps (~4,800 examples processed)
- Batch Size: 8 (2 per device × 4 accumulation steps)
- Scheduler: Cosine learning rate decay
- Optimizer: AdamW 8-bit
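For reference, the settings above correspond roughly to an Unsloth + TRL run along the following lines. This is a minimal sketch, not the actual training script: the 4-bit checkpoint name, LoRA rank/alpha, learning rate, and the dataset text column are assumptions, and argument names vary between trl versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model (checkpoint name assumed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and alpha are illustrative, not the card's actual values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="prompt",            # Alpaca-formatted text column (name assumed)
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,      # 2 per device
        gradient_accumulation_steps=4,      # x4 accumulation = effective batch size 8
        max_steps=600,                      # ~4,800 examples at batch size 8
        lr_scheduler_type="cosine",
        optim="adamw_8bit",
        learning_rate=2e-4,                 # assumed, not stated in the card
        output_dir="outputs",
    ),
)
trainer.train()
```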
📈 Evaluation: V1 vs. V2 Comparison
During development, we analyzed the impact of training duration on algorithmic reasoning.
| Feature | Base Model (0.5B) | BCP-V1 (150 steps) | BCP-V2 (600 steps) |
|---|---|---|---|
| Response Speed | Instant | Instant | Instant |
| Instruction Adherence | Medium | High | Strict |
| Algorithmic Reasoning | Low | Low | High (O(n) intent) |
| Explanations (Yapping) | High | Low | Minimal (Zero-Yapping) |
Notable Improvements in V2:
- Test Case (Hash Map): While V1 failed to optimize nested loops, V2 correctly identified the need for a lookup dictionary to improve performance from $O(n^2)$ to $O(n)$.
- Test Case (Logging): V2 handles complex string interpolation (e.g., using `**locals()`) while maintaining strict template formatting (a hand-written illustration follows this list).
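The logging behavior described above can be illustrated with a sketch of the target output style. The `process_order` function and the exact `[MONITOR]` template are hypothetical, hand-written examples rather than model output:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def process_order(order_id, amount, currency="USD"):
    total = round(amount * 1.2, 2)  # flat 20% fee
    # [MONITOR]-templated log line, interpolating local variables via **locals()
    logger.info(
        "[MONITOR] process_order | id={order_id} amount={amount} {currency} "
        "total={total}".format(**locals())
    )
    return total
```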
💻 Usage
Prompt Format:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{code_snippet}

### Response:
{code_snippet_refactored}
```
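For local testing outside the IDE, the template can be filled in and sent to the GGUF build via llama-cpp-python. This is a minimal sketch assuming the Q4_K_M file from this repository sits in the working directory; the example instruction and generation settings are illustrative.

```python
from llama_cpp import Llama

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{code_snippet}\n\n"
    "### Response:\n"
)

llm = Llama(model_path="./qwen2.5-0.5b-instruct.Q4_K_M.gguf", n_ctx=2048)

prompt = PROMPT_TEMPLATE.format(
    instruction="Refactor this function to run in O(n) using a dictionary.",
    code_snippet=(
        "def has_pair(nums, target):\n"
        "    for i in range(len(nums)):\n"
        "        for j in range(i + 1, len(nums)):\n"
        "            if nums[i] + nums[j] == target:\n"
        "                return True\n"
        "    return False"
    ),
)

output = llm(prompt, max_tokens=256, stop=["###"], temperature=0.1)
print(output["choices"][0]["text"])
```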
Running with Ollama:
1. Download the `.gguf` file from this repository.
2. Create a `Modelfile`:

   ```
   FROM ./qwen2.5-0.5b-instruct.Q4_K_M.gguf
   TEMPLATE "{{ .Prompt }}"
   ```

3. Run: `ollama create bcp-v2 -f Modelfile`
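Once the tag is created, the model can also be queried programmatically through Ollama's local REST API (default port 11434). The example below assumes the `bcp-v2` tag from the step above and fills in the prompt template manually, since the Modelfile passes the prompt through unchanged:

```python
import requests

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nAdd a [MONITOR] log line to this function.\n\n"
    "### Input:\ndef add(a, b):\n    return a + b\n\n"
    "### Response:\n"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "bcp-v2", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```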
⚠️ Limitations
As a 0.5B parameter model, BCP-V2 is highly efficient but may occasionally produce minor syntax errors in very complex logic. It is best used for refactoring snippets of up to 50 lines and as a high-speed coding assistant.
🤝 Collaboration
This model was developed as part of a project to create an intelligent local-first VSCode extension chatbot.
Lead Fine-tuning Engineer: Alex (alex2110)