NoManDeRY committed 938df25 (verified · parent: 6839fbc): Update README.md

# llama-3-8b-dpo-ultrafeedback-decrease_linear-1.0to0.95

This model was released with the preprint [DPO-Shift: Shifting the Distribution of Direct Preference Optimization](https://arxiv.org/abs/2502.07599). Please refer to our [repository](https://github.com/Meaquadddd/DPO-Shift) for more details.

This model is a fine-tuned version of [princeton-nlp/Llama-3-Base-8B-SFT](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5619
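
The suffix `decrease_linear-1.0to0.95` in the model name suggests a scalar that is annealed linearly from 1.0 to 0.95 over training. As a rough illustration only, here is a minimal Python sketch of a DPO-style pairwise loss with such a parameter scaling the rejected log-ratio, plus the linear schedule; the function names, the placement of `lam`, and the schedule granularity are assumptions for illustration, not the preprint's exact objective (see the DPO-Shift paper and repository for that).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_shift_loss(logp_chosen, logp_rejected,
                   ref_logp_chosen, ref_logp_rejected,
                   beta=0.1, lam=1.0):
    """Per-example pairwise preference loss (illustrative sketch).

    lam = 1.0 recovers standard DPO; placing lam on the rejected
    log-ratio is an assumption here, not the paper's exact form.
    """
    chosen = beta * (logp_chosen - ref_logp_chosen)
    rejected = beta * (logp_rejected - ref_logp_rejected)
    return -math.log(sigmoid(chosen - lam * rejected))

def lam_decrease_linear(step, total_steps, start=1.0, end=0.95):
    """Linear schedule matching the 'decrease_linear-1.0to0.95' naming."""
    frac = step / max(total_steps, 1)
    return start + (end - start) * frac
```

With identical policy and reference log-probabilities the margin is zero, so the loss reduces to `-log(0.5) = ln 2`, which is a quick sanity check for any pairwise preference loss.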