# MoD-Embedding

MoD-Embedding is a text embedding model for semantic search and retrieval. It is fine-tuned from Qwen/Qwen3-Embedding-4B and produces multilingual embeddings for tasks such as retrieval, document similarity, and clustering.

## Model Details
- Base Model: Qwen/Qwen3-Embedding-4B
- Model Size: 4B parameters
- Max Sequence Length: 32,768 tokens
- Embedding Dimension: 2560
- Languages: English and Chinese, with broader multilingual support
- Training Method: LoRA fine-tuning on RTEB datasets
## Usage

### Using Sentence Transformers
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bflhc/MoD-Embedding", trust_remote_code=True)

# Encode sentences
sentences = [
    "This is an example sentence",
    "Each sentence is converted to a vector",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Output: (2, 2560)

# Compute similarity
from sentence_transformers.util import cos_sim

similarity = cos_sim(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")
```
### Using Transformers
```python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

tokenizer = AutoTokenizer.from_pretrained("bflhc/MoD-Embedding")
model = AutoModel.from_pretrained("bflhc/MoD-Embedding", trust_remote_code=True)
model.eval()

def encode(texts):
    inputs = tokenizer(texts, padding=True, truncation=True,
                       max_length=8192, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Use the last non-padding token of each sequence as its embedding
    mask = inputs["attention_mask"]
    if mask[:, -1].all():  # left padding (or no padding): the final position is valid
        embeddings = outputs.last_hidden_state[:, -1]
    else:  # right padding: gather each sequence's final real token
        seq_lengths = mask.sum(dim=1) - 1
        embeddings = outputs.last_hidden_state[torch.arange(mask.size(0)), seq_lengths]
    # Normalize embeddings so dot products equal cosine similarities
    embeddings = F.normalize(embeddings, p=2, dim=1)
    return embeddings

# Example usage
texts = ["Hello world", "你好世界"]
embeddings = encode(texts)
similarity = torch.matmul(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")
```
## Recommended Use Cases
- Semantic search and information retrieval
- Document similarity and clustering (a minimal clustering sketch follows this list)
- Question answering
- Cross-lingual retrieval
- Text classification with embeddings
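As a sketch of the clustering use case above (scikit-learn and the documents are illustrative choices, not part of this model's tooling), embeddings from the Sentence Transformers example can be grouped with KMeans:

```python
from sklearn.cluster import KMeans

# Illustrative documents covering two rough topics
docs = [
    "How to train a neural network",
    "Gradient descent and backpropagation explained",
    "Best hiking trails in the Alps",
    "Packing list for a mountain trek",
]

embeddings = model.encode(docs)  # `model` from the Sentence Transformers example
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)  # documents sharing a label fall in the same cluster
```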
## Limitations
- Performance may vary across different domains and languages
- Very long documents (>32K tokens) require truncation or chunking (see the sketch after this list)
- Optimized for retrieval tasks, not for text generation
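A generic workaround for over-length documents (a common pattern, not an official recipe for this model) is to split the text into chunks, embed each chunk, and mean-pool the normalized chunk embeddings; the character-based chunk size below is arbitrary:

```python
import numpy as np

def embed_long_text(text, chunk_chars=4000):
    # Naive character-based chunking; a token-aware splitter would be more precise
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    chunk_embs = model.encode(chunks, normalize_embeddings=True)
    # Mean-pool the chunk embeddings and re-normalize to unit length
    doc_emb = chunk_embs.mean(axis=0)
    return doc_emb / np.linalg.norm(doc_emb)
```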
## License

This model is licensed under the Apache License 2.0. It is derived from Qwen/Qwen3-Embedding-4B, which is also released under the Apache License 2.0.
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{mod-embedding-2025,
  title={MoD-Embedding: A Fine-tuned Multilingual Text Embedding Model},
  author={MoD Team},
  year={2025},
  url={https://huggingface.co/bflhc/MoD-Embedding}
}
```
Please also cite the base model:
```bibtex
@article{qwen3embedding,
  title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
  author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
  journal={arXiv preprint arXiv:2506.05176},
  year={2025}
}
```