
MMAlaya2

MMAlaya2 consists of 20 LoRA modules fine-tuned on top of the InternVL-Chat-V1-5 model. The fine-tuned LoRA modules are then merged back into InternVL-Chat-V1-5 using TIES, a model merging method available in PEFT.

You can find the inference code here.
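
As a rough illustration of how the merged model can be used, the sketch below loads it and runs a single-image chat turn through the chat() interface shipped with the InternVL remote code. The preprocessing is simplified to a single 448x448 tile (the official inference code uses dynamic tiling of high-resolution images), so treat this as a sketch and prefer the linked inference code.

```python
# Rough inference sketch based on the InternVL-Chat-V1-5 quickstart; the
# repository's own inference code is the authoritative reference.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModel, AutoTokenizer

path = "DataCanvas/MMAlaya2"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# Simplified preprocessing: a single 448x448 tile with ImageNet normalization.
# The official code uses dynamic tiling instead.
transform = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
image = Image.open("example.jpg").convert("RGB")
pixel_values = transform(image).unsqueeze(0).to(torch.bfloat16).cuda()

generation_config = dict(max_new_tokens=512, do_sample=False)
question = "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```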

The MMBench benchmark contains 20 categories in the mmbench_dev_cn_20231003.tsv dataset. For each category, we first use CoT (Chain-of-Thought) consistency with the InternVL-Chat-V1-5 model to prepare the training dataset. For specific categories such as nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze the failure cases of the InternVL-Chat-V1-5 model and collect additional images and QA text from online sources to address them.
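
The exact filtering recipe is not spelled out here, but a common way to apply CoT consistency is to sample several chain-of-thought answers per question and keep only the samples whose majority-vote answer agrees with the ground truth. The sketch below illustrates that idea; the chat() call, prompt wording, answer parsing, and sample fields are hypothetical placeholders, not the actual MMAlaya2 pipeline.

```python
# Hedged sketch of CoT-consistency filtering for building the LoRA training set.
# model.chat here stands in for an InternVL-Chat-V1-5 inference call; the exact
# prompting, voting, and parsing used for MMAlaya2 may differ.
from collections import Counter

def generate_cot_answer(model, tokenizer, pixel_values, question, options):
    """Ask for step-by-step reasoning, then return the final option letter."""
    prompt = (
        f"{question}\nOptions:\n{options}\n"
        "Think step by step, then end your reply with a single option letter."
    )
    generation_config = dict(max_new_tokens=512, do_sample=True, temperature=0.7)
    response = model.chat(tokenizer, pixel_values, prompt, generation_config)
    return response.strip()[-1]  # naive parse: assume the reply ends with the letter

def cot_consistent_samples(model, tokenizer, samples, n_votes=5):
    """Keep samples whose majority-vote CoT answer matches the ground truth."""
    kept = []
    for s in samples:  # each sample: pixel_values, question, options, answer
        votes = [
            generate_cot_answer(model, tokenizer, s["pixel_values"], s["question"], s["options"])
            for _ in range(n_votes)
        ]
        majority, _ = Counter(votes).most_common(1)[0]
        if majority == s["answer"]:
            kept.append(s)
    return kept
```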

After fine-tuning, the 20 LoRA modules are merged into the InternVL-Chat-V1-5 base model using the TIES method, as sketched below.
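
The following is a minimal sketch of such a merge using PEFT's add_weighted_adapter API. Adapter names, paths, weights, and the density value are illustrative assumptions, not the exact MMAlaya2 configuration.

```python
# Minimal sketch of a TIES merge of multiple LoRA adapters with PEFT's
# add_weighted_adapter API. Adapter names, paths, weights, and density are
# illustrative assumptions.
import torch
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

adapter_names = [f"lora_{i}" for i in range(20)]  # hypothetical adapter names
model = PeftModel.from_pretrained(base, "path/to/lora_0", adapter_name=adapter_names[0])
for name in adapter_names[1:]:
    model.load_adapter(f"path/to/{name}", adapter_name=name)

# TIES-merge the 20 adapters into one, then fold the result into the base weights.
model.add_weighted_adapter(
    adapters=adapter_names,
    weights=[1.0] * len(adapter_names),
    adapter_name="ties_merged",
    combination_type="ties",
    density=0.2,  # fraction of each adapter's parameters kept; a tunable choice
)
model.set_adapter("ties_merged")
merged_model = model.merge_and_unload()
merged_model.save_pretrained("mmalaya2-merged")
```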

A huge thank you to the OpenCompass MMBench team for updating the leaderboard on August 27, 2024. We have collected the ranks and scores from the leaderboard for reference. For example, an entry of "7/82.1" indicates a 7th-place finish with a score of 82.1 on that benchmark. We chose GPT-4o (0513, detail-high) for comparison because it is the best-performing GPT-4o variant on the MMBench Test (CN).

| Model | MMBench Test (CN) | MMBench v1.1 Test (CN) | CCBench dev | MMBench Test | MMBench v1.1 Test |
| --- | --- | --- | --- | --- | --- |
| GPT-4o (0513, detail-high) | 4/82.1 | 5/81.5 | 7/71.2 | 4/83.4 | 5/83.0 |
| MMAlaya2 | 7/82.1 | 8/79.7 | 8/70.0 | 9/82.5 | 9/80.6 |
| InternVL-Chat-V1.5 | 14/80.7 | 15/79.1 | 9/69.8 | 11/82.3 | 10/80.3 |

The average score on the MMBench Test (CN) reached 82.1, surpassing InternVL-Chat-V1-5's score of 80.7 by 1.4 points. Although MMAlaya2 ranks 7th, its score matches that of GPT-4o (0513, detail-high), which ranks 4th, placing the model on par with GPT-4o on this benchmark. Scores on the other four benchmarks (MMBench v1.1 Test (CN), CCBench dev, MMBench Test, and MMBench v1.1 Test) also improved by 0.2 to 0.6 points, further closing the gap to GPT-4o.

We found these results noteworthy and are therefore sharing the model publicly.

License

This project is released under the MIT license, aligning with the InternVL-Chat-V1-5 model's license. However, InternLM2 is licensed under the Apache-2.0 license.

Citation

If you find this project useful in your research, please consider citing:

@misc{datacanvas2024mmalaya2,
    author = {DataCanvas Ltd.},
    title = {MMAlaya2},
    year = {2024},
    howpublished = {\url{https://huggingface.co/DataCanvas/MMAlaya2}},
}