
Built with Axolotl

See axolotl config

axolotl version: 0.4.1

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - data_files: out/train.jsonl
    path: out/
    ds_type: json
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./mistral_fine_out

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
auto_resume_from_checkpoint: true
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
eval_steps: 0.05
eval_table_size:
eval_table_max_new_tokens: 128
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
model_config:
  sliding_window: 4096
```

mistral_fine_out

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on a synthetic appeals dataset. See the health insurance fine-tuning repo for details. An earlier version of this dataset is available.

It achieves the following results on the evaluation set:

  • Loss: 0.7984

Model description

Generates health insurance appeals. This is early work.
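
A minimal inference sketch using the standard transformers API is below. The prompt layout follows the Alpaca template implied by `type: alpaca` in the config above, and the denial text and generation settings are illustrative only, not part of this card.

```python
# Minimal sketch, assuming the standard transformers generation API.
# The prompt layout follows the Alpaca template implied by `type: alpaca`
# in the axolotl config; the denial text below is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TotallyLegitCo/fighthealthinsurance_model_v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a health insurance appeal for this denial: "
    "an MRI of the knee was denied as not medically necessary.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```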

Intended uses & limitations

It is intended to be used as part of the Fight Health Insurance web app, whose repo is at https://github.com/totallylegitco/fighthealthinsurance.

Training and evaluation data

More information needed
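
The config above reads `out/train.jsonl` with `type: alpaca`, so each training record is expected to follow the Alpaca schema (`instruction`, optional `input`, `output`). Below is a minimal sketch of writing one such record; the field contents are invented for illustration and are not taken from the actual dataset.

```python
import json

# Illustrative Alpaca-style record; the real dataset consists of synthetic
# appeals and is not reproduced here.
record = {
    "instruction": "Write an appeal for this health insurance denial.",
    "input": "Denial: prior authorization for physical therapy was refused.",
    "output": "Dear Appeals Department, I am writing to appeal the denial of ...",
}

with open("out/train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```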

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8 (see the sketch after this list)
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 2
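
The total train batch size follows from the micro batch size and the gradient accumulation steps; a quick check, assuming a single process (which matches the reported value of 8):

```python
# Effective batch size per optimizer step.
micro_batch_size = 2
gradient_accumulation_steps = 4
num_processes = 1  # assumption: the reported total of 8 implies one process

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_processes
print(total_train_batch_size)  # 8
```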

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0397        | 0.0004 | 1    | 1.1590          |
| 0.6084        | 0.1002 | 230  | 0.7272          |
| 0.5195        | 0.2003 | 460  | 0.7141          |
| 0.4713        | 0.3005 | 690  | 0.7090          |
| 0.3973        | 0.4007 | 920  | 0.7097          |
| 0.3306        | 0.5009 | 1150 | 0.7145          |
| 0.3507        | 0.6010 | 1380 | 0.7136          |
| 0.3125        | 0.7012 | 1610 | 0.7200          |
| 0.3055        | 0.8014 | 1840 | 0.7227          |
| 0.2027        | 0.9016 | 2070 | 0.7301          |
| 0.2632        | 1.0017 | 2300 | 0.7471          |
| 0.2077        | 1.0851 | 2530 | 0.7662          |
| 0.0992        | 1.1853 | 2760 | 0.7744          |
| 0.236         | 1.2855 | 2990 | 0.7844          |
| 0.1572        | 1.3857 | 3220 | 0.7915          |
| 0.192         | 1.4858 | 3450 | 0.7921          |
| 0.1812        | 1.5860 | 3680 | 0.7968          |
| 0.1973        | 1.6862 | 3910 | 0.7979          |
| 0.1422        | 1.7864 | 4140 | 0.7982          |
| 0.1315        | 1.8865 | 4370 | 0.7984          |

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model tree for TotallyLegitCo/fighthealthinsurance_model_v0.5

Finetuned from: mistralai/Mistral-7B-Instruct-v0.3