Fine-tuning Gemma 2 model using LoRA and Keras (Learning Index Path)#112
Draft
Conversation
Owner
@Foluwa please add a good title and description to this PR and also link it with one or more issues you are working on
@@ -0,0 +1,3 @@
+ # Gemma Model Learning Index Path
+
+ ## Fine tune gemma model withcustom data
(No newline at end of file)
Owner
gemma ==> Gemma
withcustom ==> with custom
Collaborator
Author
Fine-tuning Gemma model on LPI Dataset

The initial approach was implemented in this notebook, but some errors were encountered (listed below), so I pivoted to this notebook. Most of the implementation follows "Fine-tuning Gemma 2 model with role-playing dataset" by Gabriel Preda.

Issues encountered:
1. Resource constraints (ensure you run on a GPU)
2. Permission issues
3. The evaluation metric does not support "multiclass-multioutput" format
4. Unable to build the model
5. Unable to save the model weights
6. Model performance issues
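Issue 3 above is a common scikit-learn pitfall: metrics such as `accuracy_score` reject 2-D label arrays whose target type is inferred as "multiclass-multioutput". A minimal sketch of the error and one common workaround (flattening to a single multiclass vector and scoring per position); the label arrays here are hypothetical, not from the LPI dataset:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Token-level labels for two examples, two positions each (hypothetical data).
y_true = np.array([[1, 0], [2, 1]])
y_pred = np.array([[1, 0], [2, 2]])

# Passing the 2-D arrays directly raises:
# ValueError: multiclass-multioutput is not supported
try:
    accuracy_score(y_true, y_pred)
except ValueError as e:
    print(e)

# One workaround: flatten both arrays to 1-D multiclass vectors and
# score agreement per position.
acc = accuracy_score(y_true.ravel(), y_pred.ravel())
print(acc)  # 3 of the 4 positions match
```

Whether flattening is the right fix depends on what the metric is supposed to measure; for sequence tasks, per-position accuracy is a reasonable but lossy proxy.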
This PR introduces fine-tuning of the Gemma 2 model using Low-Rank Adaptation (LoRA), which trains a small number of added low-rank parameters instead of updating the full model weights. Using the Learning Index Path (LPI) dataset and Keras, the fine-tuning aims to improve model performance with minimal resource demand. This setup supports more scalable deployment and faster inference for downstream applications.
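The LoRA-on-Gemma workflow described above follows the standard KerasNLP pattern: load a preset, call `enable_lora` on the backbone, and fit on templated text. A minimal sketch, assuming the `gemma2_2b_en` preset, rank 4, and a `.weights.h5` save path; the function and file names are illustrative, and downloading the preset requires Kaggle credentials:

```python
def format_example(instruction, response):
    """Wrap one record in Gemma's chat-turn template."""
    return (
        f"<start_of_turn>user\n{instruction}<end_of_turn>\n"
        f"<start_of_turn>model\n{response}<end_of_turn>"
    )


def finetune(dataset):
    """LoRA fine-tune Gemma 2 on an iterable of templated strings (sketch)."""
    import keras
    import keras_nlp

    # Downloading preset weights needs Kaggle auth (issue 2 in the comment above).
    gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_2b_en")

    # Freeze the backbone and add trainable low-rank adapters.
    gemma_lm.backbone.enable_lora(rank=4)

    # Cap sequence length to ease the GPU memory pressure noted in issue 1.
    gemma_lm.preprocessor.sequence_length = 256

    gemma_lm.compile(
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
        weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )
    gemma_lm.fit(dataset, epochs=1, batch_size=1)

    # Saving weights alone (Keras 3 requires the .weights.h5 suffix) avoids
    # serializing the full model, which relates to issue 5 above.
    gemma_lm.save_weights("gemma2_lpi.weights.h5")
    return gemma_lm
```

A record would be prepared as `format_example("What is LoRA?", "A parameter-efficient fine-tuning method.")` before batching. Rank, learning rate, and sequence length are assumptions to be tuned against the notebook's actual resource limits.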