v3-2 vs v3-1

#1
by bartowski - opened

There's not much in the model card. Any notable differences? Increased training, or otherwise?

In my initial testing, v3.2 gets facts right more often than v3.1, but I have been using the 5-bit quantized models. I'm currently downloading the fp16 version for further testing.

In testing, this new version appears to work very well.

Was the model trained with Quantization-Aware Training to preserve more accuracy? Do we have any knowledge of that? And are the model checkpoints in the files full precision or half precision? Questions aside, the model gives great results for language understanding, though in some cases it gives better results with 8-bit inference than with the torch fp16 dtype. That probably needs more testing.
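For context on why quantization can cost accuracy (and why QAT can help recover it): a minimal sketch of symmetric round-to-nearest int8 quantization. This is a generic illustration, not this model's actual quantization scheme, and the weight values are made up:

```python
# Hypothetical sketch: symmetric per-tensor int8 quantization round-trip.
# Each weight is snapped to one of 256 levels, so the reconstruction error
# is bounded by half a quantization step (scale / 2).

def quantize_int8(values):
    # One scale for the whole tensor, chosen so the largest magnitude maps to 127.
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.3, 0.07, 0.9981, -0.5]  # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2)  # rounding error stays within half a step
```

QAT simulates exactly this round-trip during training so the model learns weights that survive the snapping; post-training quantization (what most GGUF-style quants do) applies it after the fact, which is where the accuracy gap between quant levels comes from.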
