I’m an aspiring AI engineer with a strong interest in large language models (LLMs), retrieval-augmented generation (RAG), and trustworthy AI systems. I enjoy building applied ML projects that bridge theory and real-world impact, especially around transparency, reliability, and human-centered AI.
I’m particularly interested in:
- Large language models and prompt engineering
- Retrieval-augmented generation (RAG) systems
- Reducing hallucinations and improving model reliability
- Model evaluation, interpretability, and responsible AI
- Applied ML systems and MLOps workflows
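As a toy illustration of the retrieval step in a RAG system, here is a minimal sketch using bag-of-words cosine similarity in place of a real embedding model. All names and example documents are illustrative, not taken from any particular project:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "RAG systems ground LLM answers in retrieved documents",
    "Prompt engineering shapes model behavior",
    "Monitoring deployed models is part of MLOps",
]
print(retrieve("how do RAG systems ground answers", docs, k=1))
```

In a real pipeline the retrieved passages would then be inserted into the LLM prompt, and the bag-of-words vectors would be replaced by dense embeddings and a vector index.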
This GitHub serves as a space to document my learning, experiments, and end-to-end projects as I work toward becoming an AI engineer. Project areas I’m currently exploring include:
- Hallucination detection and confidence scoring for LLM outputs
- Model interpretability dashboards for ML/LLM systems
- MLOps practices for deploying and monitoring AI applications
- AI applications in high-stakes domains (e.g., healthcare, education)
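One simple baseline for the confidence-scoring idea above is to average the per-token log-probabilities that many LLM APIs expose for generated text. This is a hedged toy sketch of that heuristic, with made-up logprob values, not a method from any specific project:

```python
import math

def sequence_confidence(token_logprobs):
    """Average token log-probability, mapped back to a 0-1 probability.

    A common first-pass signal for flagging low-confidence (possibly
    hallucinated) outputs: lower values mean a less confident generation.
    """
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def flag_low_confidence(token_logprobs, threshold=0.5):
    """Flag an output whose average per-token confidence falls below a threshold."""
    return sequence_confidence(token_logprobs) < threshold

# Hypothetical per-token logprobs, as an LLM API might return them.
confident = [-0.05, -0.10, -0.02]
uncertain = [-1.5, -2.2, -0.9]
print(flag_low_confidence(confident))  # False: high average probability
print(flag_low_confidence(uncertain))  # True: low average probability
```

Averaged logprobs are only a weak proxy for factuality, which is why this is usually combined with retrieval grounding or self-consistency checks in practice.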
Thanks for stopping by!