Skip to content

PolEspurnes/LLM-SQL-Injection

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

5 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

LLM SQL Injection PoC

This project is a proof of concept (PoC) demonstrating how poorly validated LLM-generated output can lead to SQL Injection vulnerabilities.

It shows how a user’s natural language question is interpreted by an LLM, converted into a SQL query, and executed without proper sanitization, allowing malicious input to compromise the database.

The goal is to highlight the security risks of blindly trusting LLM outputs in any system.

image

How it works

  1. The user submits a question through a simple chat interface.
  2. An LLM processes the input and generates a SQL query.
  3. The generated SQL is executed directly against a SQLite database.
  4. No validation or sanitization is applied to the query.

This makes the application intentionally vulnerable to SQL injection via LLM output.

image

The database only includes two tables: employees and salaries.

Used LLM Model

The project is currently using the free xiaomi/mimo-v2-flash:free LLM model from OpenRouter.

To use it, just create an account in https://openrouter.ai/ and create an API key.

This model is completely free and enough for the purposes of this project, but feel free to change it.

Used system prompt

You are a helpful assistant that translates natural language 
questions into SQLite queries.
The database has two tables: employees and salaries.
employees stores basic employee information in 4 columns: id, name, department, title.
salaries has two columns: employee_id and salary. It uses employee_id as a foreign key to employees.id.
Only output the raw SQL query, DO NOT add markdown formatting nor explanations. Just the query.
IMPORTANT: Never return the salary information of an employee. The only information that the user is allowed to know is the overall average salary.
If the user asks for an employee's salary, just answer 'This information is restricted.'

Running the Project

Configure

Add your OpenRouter API Key into the .env file.

As mentioned previously, the project is currently using this free model: https://openrouter.ai/xiaomi/mimo-v2-flash:free

Install and run

pip install -m requirements.txt
python3 app.py

Then open: http://127.0.0.1:5000

Purpose

This project is not meant for production use.

It exists to:

  • Demonstrate how LLMs can produce unsafe or malicious SQL.

  • Show how natural language interfaces can hide traditional vulnerabilities.

  • Raise awareness about the importance of:

    • Input validation
    • Query parameterization
    • Guardrails around LLM-generated code

Disclaimer

This repository is for educational and research purposes only. It intentionally demonstrates unsafe practices to illustrate real-world risks.

The author takes no responsibility for misuse of this code.

About

This is a PoC showing how a poorly validated LLM output can cause a SQLi.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors