ChatGLM-6B fine-tuning and Alpaca fine-tuning
Updated Mar 9, 2025 · Python
Fine-tuned chemical language model for predicting molecular lipophilicity in drug design. Explores parameter-efficient fine-tuning strategies (LoRA, BitFit, IA3), layer freezing techniques, and influence-based data selection. Balances accuracy and computational efficiency for molecular property prediction tasks.
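One of the strategies the description names, LoRA, freezes the pretrained weights and learns only a low-rank additive update. A minimal NumPy sketch of that idea (illustrative only; the shapes, names, and scaling here are assumptions, not this repository's code):

```python
import numpy as np

# LoRA sketch: keep W frozen and learn a low-rank update B @ A,
# so the adapted layer computes W @ x + (alpha / r) * B @ A @ x.
rng = np.random.default_rng(0)
d, r, alpha = 16, 4, 8                 # hypothetical dims: width d, rank r

W = rng.normal(size=(d, d))            # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01     # trainable low-rank factor
B = np.zeros((d, r))                   # zero-init so the update starts at 0

x = rng.normal(size=d)
h = W @ x + (alpha / r) * (B @ (A @ x))

# Because B starts at zero, the adapted model initially matches the frozen one.
assert np.allclose(h, W @ x)

# Trainable parameters: 2*d*r for LoRA vs d*d for full fine-tuning.
print(2 * d * r, d * d)
```

The efficiency claim in the description comes from that parameter count: only `A` and `B` (2·d·r values) receive gradients, while `W` stays fixed.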
Elixir port of HuggingFace's PEFT (Parameter-Efficient Fine-Tuning) library. Implements LoRA, AdaLoRA, IA3, prefix tuning, prompt tuning, and 30+ state-of-the-art PEFT methods for efficient neural network adaptation. Built for the BEAM ecosystem with native Nx/Axon integration.
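IA3, the method this topic page is named for, is even lighter than LoRA: it learns per-feature scaling vectors applied to intermediate activations. A minimal NumPy sketch of the core idea (an assumption-laden illustration, not the Elixir library's or HuggingFace PEFT's API):

```python
import numpy as np

# IA3 sketch: instead of updating W, learn a per-feature scaling vector l
# (initialized to ones) and rescale the activations: h = (W @ x) * l.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4                     # hypothetical layer dimensions

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
x = rng.normal(size=d_in)

l = np.ones(d_out)                     # the only trainable parameters

h_frozen = W @ x                       # original activation
h_ia3 = h_frozen * l                   # IA3-rescaled activation

# With l initialized to ones, the adapted model starts identical
# to the pretrained one.
assert np.allclose(h_frozen, h_ia3)

# Trainable parameters per layer: d_out for IA3 vs d_out * d_in for full tuning.
print(d_out, d_out * d_in)
```

In a transformer, such vectors are typically applied to the key, value, and feed-forward activations, which is what keeps IA3's trainable-parameter footprint so small.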