Interested in why you're comparing it with DeepSeek Coder v1, which is a year old now?

#42
by smcleod - opened

Neat to see a new Yi Coder model released, well done!

I'm just curious why you're comparing it to the old DeepSeek Coder v1 model from around a year ago, rather than to current models?

FYI - DeepSeek Coder v2 Lite (a 16B MoE with 2.4B active parameters) replaced it three months ago; I can't see why anyone would be using the old v1 model these days.

01-ai org

Hi smcleod,
Thank you for pointing that out. I completely agree that DeepSeek Coder v2 Lite is an impressive model.
The benchmarks we present are not intended as a guide for model selection, but rather as a study of how to achieve better results under comparable, controlled conditions, specifically:

  1. Increasing the quality of the code data using Iterative Data Filtering, while maintaining a comparable total amount of unique code data (approximately 1T tokens); a rough sketch of such a filtering loop follows this list.
  2. Using fewer pre-training tokens (3.1T + 2.4T).
  3. Using one-third the number of parameters (for dense models).
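
For concreteness, here is a minimal sketch of what an iterative data filtering loop could look like. This is purely illustrative: the quality scorer, keep fraction, and number of rounds below are hypothetical placeholders, not our actual pipeline, and the real system would refit a learned quality classifier between rounds.

```python
# Hypothetical sketch of iterative data filtering for a code corpus.
# Each round: score documents with a quality model, drop the lowest
# fraction, and (in a real pipeline) retrain the scorer on the survivors.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CodeDoc:
    path: str
    text: str


def filter_round(docs: List[CodeDoc],
                 score: Callable[[CodeDoc], float],
                 keep_fraction: float) -> List[CodeDoc]:
    """Keep the top `keep_fraction` of documents by quality score."""
    ranked = sorted(docs, key=score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]


def iterative_filter(docs: List[CodeDoc],
                     make_scorer: Callable[[List[CodeDoc]], Callable[[CodeDoc], float]],
                     rounds: int = 3,
                     keep_fraction: float = 0.9) -> List[CodeDoc]:
    """Apply several filter rounds, refitting the scorer on survivors."""
    kept = docs
    for _ in range(rounds):
        score = make_scorer(kept)  # e.g. retrain a quality classifier here
        kept = filter_round(kept, score, keep_fraction)
    return kept


# Toy scorer (placeholder): penalize very short files and
# auto-generated boilerplate; a real scorer would be a trained model.
def toy_scorer(_corpus: List[CodeDoc]) -> Callable[[CodeDoc], float]:
    def score(doc: CodeDoc) -> float:
        s = min(len(doc.text), 2000) / 2000.0
        if "DO NOT EDIT" in doc.text:
            s -= 0.5
        return s
    return score


if __name__ == "__main__":
    corpus = [
        CodeDoc("a.py", "def add(a, b):\n    return a + b\n" * 20),
        CodeDoc("gen.py", "# DO NOT EDIT: auto-generated\nX = 1\n"),
        CodeDoc("b.py", "print('hi')\n"),
    ]
    survivors = iterative_filter(corpus, toy_scorer, rounds=2, keep_fraction=0.67)
    print([d.path for d in survivors])  # highest-quality docs remain
```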

I hope this explanation clarifies our approach. I will add the comparison results as soon as they become available from our model training team.
Best regards,
Nuo

Thanks for the response! I appreciate that.

smcleod changed discussion status to closed
