Software for Mockserver and Automated Resource Testing
LLM-assisted frontend test generation and intelligent mock data provisioning — developed in a student team in collaboration with CHECK24.
This repository is a curated public showcase of the S.M.A.R.T. project.
It presents the architecture, selected artifacts, and my personal contribution.
The original implementation was developed in a private university GitLab environment and is therefore not published here in full.
S.M.A.R.T. was designed to reduce the manual cost of frontend test maintenance in complex product environments.
The core idea was to combine:
- natural-language-based Playwright test generation
- an intelligent proxy/mockserver for reusable test data
- execution feedback with logs and screenshots
- a workflow that remains useful even when UI and external data change
Instead of treating AI as a gimmick, the project embedded it into a larger architecture for validation, orchestration, execution, and deterministic reuse.
The public project poster gives a condensed high-level overview of the motivation, architecture, and intended system value.
This diagram shows the high-level interaction between frontend, autotester, MCP server, OpenAI integration, and the supplier proxy layer.
The Autotester UI allows users to describe frontend test scenarios in natural language and receive executable test code.
Generated tests can be executed and reviewed with runtime output and visual artifacts such as screenshots.
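To make the generation step concrete, here is a minimal, hypothetical TypeScript sketch of how a natural-language scenario might be turned into a Playwright test skeleton. The function name, URL handling, and skeleton shape are invented for illustration; the actual Autotester generation pipeline is LLM-driven and more involved.

```typescript
// Hypothetical sketch: wrap a natural-language scenario into a Playwright
// test skeleton. The real system generates the steps via an LLM; this only
// shows the shape of the emitted artifact.
function buildTestSkeleton(scenario: string, url: string): string {
  const title = scenario.trim().replace(/\s+/g, " ");
  return [
    `import { test, expect } from "@playwright/test";`,
    ``,
    `test(${JSON.stringify(title)}, async ({ page }) => {`,
    `  await page.goto(${JSON.stringify(url)});`,
    `  // generated steps and assertions would follow here`,
    `  await page.screenshot({ path: "artifacts/result.png" });`,
    `});`,
  ].join("\n");
}
```

The emitted string is a complete Playwright test file, which is what allows the system to hand generated tests directly to an executor and collect runtime output and screenshots.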
S.M.A.R.T. followed a modular architecture with clear separation of concerns:
- Frontend for prompt input and result presentation
- Autotester for validation, generation, orchestration, and execution
- Suproxy for request/response capture, reuse, tagging, and stable test data handling
- MCP Server for controlled tool access and IDE-adjacent workflows
- Storage / cache layers for persistence and reuse
A more detailed architecture write-up is available in docs/architecture.md.
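The Suproxy itself is implemented in Go, but the core capture/replay idea can be sketched in a few lines of TypeScript. All names here are invented for illustration: a deterministic cache key makes identical supplier requests replay the same recorded data, which is what keeps generated tests stable when external data changes.

```typescript
import { createHash } from "node:crypto";

interface CapturedExchange {
  method: string;
  path: string;
  tags: string[]; // e.g. ["happy-path"], used to group reusable test data
  body: string;   // serialized supplier response
}

// Deterministic key: sorting the query parameters makes the key independent
// of parameter order, so identical requests always hit the same cache entry.
function cacheKey(method: string, path: string, query: Record<string, string>): string {
  const sorted = Object.keys(query)
    .sort()
    .map((k) => `${k}=${query[k]}`)
    .join("&");
  return createHash("sha256").update(`${method} ${path}?${sorted}`).digest("hex");
}

const store = new Map<string, CapturedExchange>();

// On a cache miss, capture the live supplier response; on a hit, replay it.
function captureOrReplay(
  key: string,
  fetchLive: () => string,
  meta: Omit<CapturedExchange, "body">
): string {
  const hit = store.get(key);
  if (hit) return hit.body; // replay: deterministic test data
  const body = fetchLive(); // capture: record the live response once
  store.set(key, { ...meta, body });
  return body;
}
```

The design choice worth noting is the key normalization: without sorting the query parameters, semantically identical requests would produce different keys and defeat the reuse that makes test runs deterministic.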
Frontend
- Svelte
- TypeScript
Backend
- Go
- Gin
LLM / AI
- OpenAI GPT-5-mini
Testing
- Playwright
- Vitest
Infrastructure / Data
- AWS S3
- Parquet
- Redis / Valkey
- Docker
Quality / Delivery
- GitLab CI/CD
- SonarQube
- Implemented parts of the Autotester prompt validation workflow
- Contributed to frontend theming and UI polish across chat-related components
- Worked on shared TagList and validation-related logic in the broader system
- Helped stabilize pipeline-related testing issues and rebasing fallout
- Contributed heavily to project documentation, including system prompt research, quality metrics, MCP research, and frontend style guidance
My main contribution was turning parts of the system into something more usable and product-like.
I worked substantially on the frontend, including theming, UI refinement, and smaller interaction components.
I also contributed to prompt-validation-related backend logic, including S3-connected validation flows, and helped document key concepts such as system prompts, quality metrics, MCP-related ideas, and frontend style guidelines.
This placed me at the intersection of frontend engineering, workflow design, and practical system thinking.
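As a rough illustration of what a pre-generation prompt check can look like, here is a minimal TypeScript sketch. The rules and function name are hypothetical; the actual validation workflow runs in the Go backend and involves S3-connected flows.

```typescript
interface ValidationResult {
  ok: boolean;
  issues: string[];
}

// Hypothetical sketch: reject scenarios that are too short to generate a
// meaningful test from, and flag prompts that name no target URL, before
// any LLM call is made.
function validatePrompt(prompt: string): ValidationResult {
  const issues: string[] = [];
  const trimmed = prompt.trim();
  if (trimmed.length < 20) issues.push("scenario description is too short");
  if (!/https?:\/\//.test(trimmed)) issues.push("no target URL found in scenario");
  return { ok: issues.length === 0, issues };
}
```

Cheap checks like these keep obviously unusable prompts from reaching the generation step, which saves both latency and API cost.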
More about my work in this project can be found in docs/my-contribution.md.
The original project lives in a private university GitLab environment and was developed in a larger team setting.
This public repository exists to make the project understandable from the outside by showing:
- the problem and solution space
- the system architecture
- selected UI and execution artifacts
- my actual contribution areas
- engineering lessons that came out of the work
- README.md – public project overview
- assets/ – poster, screenshots, architecture visuals
- docs/architecture.md – detailed architectural write-up
- docs/my-contribution.md – concrete summary of my role
- docs/lessons-learned.md – technical and product lessons from the project