
Model Card for Chocolatine-3B-Instruct-DPO-v1.0

Chocolatine v1.0
3.82B parameters
Context window: 4k tokens

This is a French DPO fine-tune of Microsoft's Phi-3-mini-4k-instruct,
improving its overall understanding performance, even in English.
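Below is a minimal usage sketch (not an official example), assuming the model loads with the standard transformers `AutoModelForCausalLM` API and uses the chat template shipped with its tokenizer; adjust to your own setup.

```python
# Minimal inference sketch (assumption: standard transformers generation API
# and the chat template provided by the tokenizer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jpacifico/Chocolatine-3B-Instruct-DPO-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # weights are published in FP16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explique-moi la photosynthèse en quelques phrases."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```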


Model Description

Fine-tuned with DPO on the 12k-pair Intel/orca_dpo_pairs dataset translated into French: AIffl/french_orca_dpo_pairs.
Chocolatine is a general-purpose model and can itself be fine-tuned for specific use cases.
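As a rough illustration of such further specialization, here is a hedged DPO fine-tuning sketch using the TRL library. Everything below is an assumption for illustration: the dataset file, hyperparameters, and the exact DPOTrainer keyword arguments (which vary across trl versions) are not part of this model card.

```python
# Hypothetical DPO fine-tuning sketch with trl (illustrative hyperparameters).
# DPOTrainer expects "prompt"/"chosen"/"rejected" columns; remap your data if needed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "jpacifico/Chocolatine-3B-Instruct-DPO-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Your own preference pairs (hypothetical file name).
dataset = load_dataset("json", data_files="my_preference_pairs.json", split="train")

config = DPOConfig(
    output_dir="chocolatine-dpo-specialized",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    beta=0.1,  # preference-regularization strength
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl versions use tokenizer= instead
)
trainer.train()
```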
More info and benchmarks coming very soon ^^

Limitations

Chocolatine is a quick demonstration that a base 3B model can be easily fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.

  • Developed by: Jonathan Pacifico, 2024
  • Model type: LLM
  • Language(s) (NLP): French, English
  • License: MIT
Format: Safetensors · 3.82B params · FP16

Model tree for jpacifico/Chocolatine-3B-Instruct-DPO-v1.0

Quantizations: 2 models