Hello, I have fine-tuned MiniCPM-o for two specific downstream tasks using LoRA. But when I try to merge the two adapters into the same model, I get an error. The issue is that both adapters try to update the same
modules_to_save = ['embed_tokens', 'resampler']
which, I guess, is not allowed by Hugging Face PEFT. My question is: if I remove the modules_to_save option and set it to None, how will that affect the performance of the final fine-tuned adapter? Is doing so recommended when fine-tuning MiniCPM-o?
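For reference, here is a minimal sketch of how such a merge might be attempted with PEFT; the adapter paths and adapter names are placeholders, not my actual setup:

```python
from transformers import AutoModel
from peft import PeftModel

# Load the base model (path shown for illustration)
base = AutoModel.from_pretrained("openbmb/MiniCPM-o-2_6", trust_remote_code=True)

# Attach the first LoRA adapter under a named slot
model = PeftModel.from_pretrained(base, "lora_task_a", adapter_name="task_a")
# Attach the second LoRA adapter alongside it
model.load_adapter("lora_task_b", adapter_name="task_b")

# Combine the two adapters into one; this is where PEFT raises an error,
# presumably because both adapters carry full copies of 'embed_tokens'
# and 'resampler' via modules_to_save, which cannot be merged
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
```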