chatglm 6b finetuning and alpaca finetuning
Updated Mar 9, 2025 - Python
Elixir port of HuggingFace's PEFT (Parameter-Efficient Fine-Tuning) library. Implements LoRA, AdaLoRA, IA3, prefix tuning, prompt tuning, and 30+ state-of-the-art PEFT methods for efficient neural network adaptation. Built for the BEAM ecosystem with native Nx/Axon integration.
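To illustrate the core idea behind LoRA, the method this library (and the others listed here) implements: the pretrained weight is frozen and only a low-rank update is trained. The sketch below is a minimal NumPy illustration under assumed shapes and hyperparameters, not the library's actual API.

```python
import numpy as np

# Minimal LoRA sketch (illustrative; dimensions and alpha are assumptions).
# LoRA freezes the pretrained weight W and learns a low-rank update B @ A,
# training r*(d_in + d_out) parameters instead of d_in*d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4            # rank r is much smaller than d_in, d_out
alpha = 8                             # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); with B = 0 at init,
    # the adapted model starts out identical to the base model.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # zero-init preserves base behavior

full_params = W.size          # 4096 parameters in the full weight
lora_params = A.size + B.size # 512 trainable parameters with r = 4
```

At these shapes the trainable parameter count drops from 4096 to 512, which is why LoRA-style methods are attractive for adapting large models.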
Optimizing the Differentiable Search Index (DSI) with data augmentation (Num2Word, Stopwords Removal, POS-MLM) and parameter-efficient fine-tuning (LoRA, QLoRA, AdaLoRA, ConvoLoRA), improving retrieval accuracy and efficiency while reducing memory and computational overhead. Evaluated on the MS MARCO dataset for scalable performance.
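AdaLoRA, one of the methods mentioned above, extends LoRA by writing the update in an SVD-like form P diag(lambda) Q and pruning low-importance singular values to reallocate the rank budget. The sketch below is a hypothetical NumPy illustration of that pruning step; the real method uses a sensitivity-based importance score rather than raw magnitude.

```python
import numpy as np

# AdaLoRA-style sketch (illustrative; shapes, budget, and the magnitude-based
# importance score are assumptions, not the repository's implementation).
rng = np.random.default_rng(1)
d_out, d_in, r = 32, 32, 8

P = rng.standard_normal((d_out, r))   # left factor
lam = rng.standard_normal(r)          # "singular values" of the update
Q = rng.standard_normal((r, d_in))    # right factor

# Importance proxy: magnitude of each singular value. Keep only the
# top-k triplets (P_i, lam_i, Q_i) allowed by the rank budget.
budget = 4
keep = np.argsort(-np.abs(lam))[:budget]
mask = np.zeros(r)
mask[keep] = 1.0
lam_pruned = lam * mask

# Effective weight update: P diag(lam_pruned) Q, with rank <= budget.
update = (P * lam_pruned) @ Q
assert np.linalg.matrix_rank(update) <= budget
```

Because pruning zeroes whole singular-value triplets, the update's rank never exceeds the budget, letting the method spend more rank on layers where adaptation matters most.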