# AI Model

A Python project for building AI models using the Groq API for fast inference.
## Project Structure

```
├── src/               # Source code
│   ├── main.py        # Main entry point
│   └── groq_client.py # Groq API client module
├── models/            # Trained models storage
├── data/              # Data files
├── notebooks/         # Jupyter notebooks for experimentation
├── .env.example       # Environment variables template
├── requirements.txt   # Python dependencies
└── README.md          # This file
```
## Setup

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd "AI Model"
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   .\venv\Scripts\Activate.ps1   # On Windows PowerShell
   # or
   venv\Scripts\activate.bat     # On Windows Command Prompt
   ```

3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```

## Configuration

1. Get your API key from the Groq Console
2. Copy `.env.example` to `.env`
3. Add your Groq API key to `.env`:

   ```
   GROQ_API_KEY=your_actual_api_key
   GROQ_MODEL=mixtral-8x7b-32768
   ```
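The client presumably reads these variables from the environment at startup. A minimal sketch of that pattern using the standard library's `os.getenv` (the function name `load_groq_config` and the default-model fallback are illustrative assumptions, not part of this project's code):

```python
import os

# Illustrative sketch: read the Groq settings named above from the
# environment, falling back to the default model for GROQ_MODEL.
def load_groq_config():
    api_key = os.getenv("GROQ_API_KEY")  # required; no sensible default
    model = os.getenv("GROQ_MODEL", "mixtral-8x7b-32768")  # assumed fallback
    if not api_key:
        raise RuntimeError("GROQ_API_KEY is not set; copy .env.example to .env")
    return {"api_key": api_key, "model": model}

# Example: simulate a configured environment
os.environ["GROQ_API_KEY"] = "dummy_key_for_demo"
config = load_groq_config()
print(config["model"])  # the default unless GROQ_MODEL overrides it
```

A package such as `python-dotenv` (if included in `requirements.txt`) would typically load the `.env` file into the environment before this lookup runs.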
## Usage

Run the main script:

```bash
python src/main.py
```

## Available Models

- `mixtral-8x7b-32768` - Default high-performance model
- `llama-2-70b-chat` - Meta's Llama 2 70B Chat
- `gemma-7b-it` - Google's Gemma 7B Instruct

Check the Groq Documentation for the latest models.
## Example Code

```python
from src.groq_client import GroqClient

# Initialize the client
client = GroqClient()

# Get a simple completion
response = client.get_completion("What is AI?")
print(response)

# Have a chat conversation
messages = [
    {"role": "user", "content": "Hello, how are you?"},
]
response = client.get_chat_response(messages)
print(response)
```

## Notebooks

Start Jupyter:

```bash
jupyter notebook
```

Then navigate to the `notebooks/` directory.
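The `messages` list in the example above uses the standard chat-completion format of role/content dictionaries. A small helper for assembling a multi-turn conversation might look like this (the helper name `build_messages` and the optional system prompt are illustrative, not part of `GroqClient`):

```python
# Illustrative helper (not part of GroqClient): build a chat history
# in the role/content dictionary format expected by chat endpoints.
def build_messages(user_turns, system_prompt=None):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

messages = build_messages(
    ["Hello, how are you?"],
    system_prompt="You are a concise assistant.",
)
print(messages[0]["role"])  # system
```

In a real conversation you would also append each `{"role": "assistant", ...}` reply before sending the next user turn, so the model sees the full history.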
## Contributing

1. Create a feature branch (`git checkout -b feature/your-feature`)
2. Commit your changes (`git commit -am 'Add your feature'`)
3. Push to the branch (`git push origin feature/your-feature`)
4. Open a Pull Request
## License

MIT License
## Resources

- Groq Documentation
- Groq Python SDK
- Hackathon Information - Add your hackathon link here
## Tips

- Remember to add your `.env` file to `.gitignore` (already done)
- Never commit your API keys
- Test your code regularly with Groq's fast inference
- Use the `notebooks/` directory for experimentation and documentation