Can you fine-tune Llama-2-7B-Chat-GGML and other quantized versions of Llama?

#28
by obscureagent - opened

As far as I know, you can't quantize an already quantized model; you quantize the original full-precision model using GGML or GPTQ techniques.
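
Since the GGML files are already quantized, any fine-tuning would happen on the original full-precision checkpoint, and the quantized artifact gets produced afterwards. Here is a minimal sketch of that quantization step using the GPTQ integration in transformers (it assumes auto-gptq and optimum are installed; the model id and calibration dataset are just examples, not anything prescribed in this thread):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Start from the full-precision chat model, not the GGML files.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ quantization; "c4" is used here as the calibration dataset.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Save the quantized weights for later inference.
model.save_pretrained("llama-2-7b-chat-gptq")
tokenizer.save_pretrained("llama-2-7b-chat-gptq")
```

For a GGML/GGUF file instead, the usual route is llama.cpp's conversion and quantization scripts run against the same full-precision checkpoint.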

I know this discussion is for the GGML version, but while I have you, can you point me to a resource describing a straightforward process for fine-tuning Llama-2-7b or 7b-chat?

You can go through this course to learn how to fine-tune LLMs in general:
https://learn.deeplearning.ai/finetuning-large-language-models/lesson/1/introduction
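
If you want a concrete starting point in code, a common approach is parameter-efficient fine-tuning (LoRA/QLoRA) on the full-precision Llama-2-7b-chat checkpoint with the peft and transformers libraries. The sketch below is only illustrative: the dataset, hyperparameters, and target modules are assumptions, not something from the course or this thread.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-chat-hf"  # full-precision base, not the GGML files

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit so only the small LoRA adapters are trained (QLoRA-style).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Any instruction-style dataset with a "text" column works; this one is just an example.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-7b-chat-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Saves only the adapter weights; merge them into the base model before quantizing.
model.save_pretrained("llama2-7b-chat-lora")
```

After training you would merge the adapters back into the base model and only then quantize, which ties back to the point above about not fine-tuning the already-quantized GGML files.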
