Deehlusa/ai-workflow-design
UX Career Workflow — AI Orchestration Case Study

Designing and orchestrating a structured AI-assisted learning workflow for a UX/UI career transition — applying QA thinking and .claude folder architecture to a real-world learning project.


Context

My partner Laura is transitioning into UX/UI Design while completing the Google UX Design Professional Certificate on Coursera. Rather than letting her navigate Claude as a generic chatbot, I designed a modular, mentor-grade workflow — the same way I would architect a QA test suite: separated concerns, explicit acceptance criteria per phase, and a reproducible structure.

This repo documents the architecture I built, the decisions behind it, and the QA principles I applied to a non-technical domain.

Transparency note: The text content was generated with AI assistance (Claude + Gemini). The architecture, methodology decisions, prompt engineering, and QA framing are my contribution.


What I Built

A fully modular AI-assisted learning workflow using Claude (Cowork mode) as a personal UX mentor engine.

The project applies QA orchestration principles to a learning context:

  • Modular, independently updatable instruction files (rules/, skills/, agents/)
  • Checkpoints functioning as acceptance criteria per learning phase
  • Separation of concerns: system instructions ≠ personal notes ≠ project files
  • Documented state progression via PROGRESS.md
  • Reusable slash commands for recurring workflows

Architecture — .claude/ Folder

UX-Career-Laura/
├── CLAUDE.md                   ← System prompt: mentor persona, methodology, constraints
├── CLAUDE.local.md             ← Personal overrides (private notes, diary — gitignored)
├── PROGRESS.md                 ← State tracker: checkpoints passed, skills logged
│
├── .claude/
│   ├── settings.json           ← Project config: language, tools, timeline, phase
│   │
│   ├── commands/               ← Slash commands (reusable triggers)
│   │   ├── novo-projeto.md     → /novo-projeto  — scaffolds new UX case study
│   │   ├── checkpoint.md       → /checkpoint    — runs Feynman Method validation
│   │   ├── daily-review.md     → /daily-review  — 15-min structured review routine
│   │   └── analise-app.md      → /analise-app   — critical analysis of real apps
│   │
│   ├── rules/                  ← Modular behaviour rules (single responsibility each)
│   │   ├── ensino-style.md     → Teaching tone, pace, and analogy preferences
│   │   ├── portfolio-first.md  → Every task maps to a portfolio deliverable
│   │   ├── figma-workflow.md   → Progressive Figma ramp-up (5 frustration-aware levels)
│   │   └── ux-vocabulary.md    → English UX terms introduced per phase (not all at once)
│   │
│   ├── skills/                 ← Auto-invoked workflow templates
│   │   ├── pesquisa-ux/SKILL.md    → UX research: interview scripts, affinity maps
│   │   ├── wireframing/SKILL.md    → iPad sketch → Figma digital process
│   │   ├── case-study/SKILL.md     → Case study writing: structure + storytelling
│   │   └── figma-rampup/SKILL.md   → Frustration-aware Figma onboarding guide
│   │
│   └── agents/                 ← Sub-agent personas (isolated context)
│       ├── mentor-ux.md        → Senior UX Designer persona for design critique
│       └── recruiter-sim.md    → Interview simulator for portfolio walkthroughs
│
├── coursera/                   ← Notes and exercises from Google UX Certificate
│   └── module-{1..7}/
│
├── petzen-pro/                 ← Case Study #1 (real veterinary management app)
│   ├── research/               → Competitive analysis, user interviews, surveys
│   ├── personas/               → 2+ personas with real behavioural grounding
│   ├── flows/                  → User flows, task flows, sitemap
│   ├── wireframes/             → Lo-fi → mid-fi wireframes (5+ screens)
│   ├── ui-design/              → Style guide, component library, hi-fi mockups
│   ├── prototype/              → Interactive Figma prototype + screen recordings
│   └── testing/                → Usability test scripts, results, iteration log
│
├── case-study-2/               ← Case Study #2 (different domain — TBD)
├── portfolio/                  ← Portfolio website assets and copy
└── resources/                  ← Curated links, books, design references

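As a sketch of the "config as code" idea, the settings.json in the tree above might hold project variables along these lines. The keys shown here are illustrative assumptions, not the repository's actual file:

```json
{
  "language": "pt-PT",
  "ux_terms_language": "en",
  "current_phase": 2,
  "timeline_weeks": 24,
  "tools": ["Figma", "iPad sketching"],
  "active_case_study": "petzen-pro"
}
```

Externalising these values means phase transitions or timeline changes are a one-line config edit, not a rewrite of the system prompt.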
QA Thinking Applied to Learning Design

| QA concept | Applied as |
| --- | --- |
| Test phases | 6 learning phases with explicit acceptance criteria |
| Acceptance criteria | Checkpoint deliverables per phase (personas, wireframes, etc.) |
| Modular test suites | Separate rules/, skills/, agents/ — each with a single responsibility |
| State management | PROGRESS.md tracks what passed, what is pending, what needs rework |
| Separation of concerns | System instructions ≠ personal notes ≠ project deliverables |
| Reusable components | Commands = reusable triggers; skills = reusable workflow templates |
| Isolated execution | Agents run in isolated context — no cross-contamination with the main workflow |
| Config as code | settings.json externalises project variables (phase, language, timeline) |
| Silent failures unacceptable | Feynman checks gate phase transitions — no passive consumption |

The 6-Phase Learning Pipeline

Phase 1 → Foundations             Weeks 1-4
          Acceptance criteria: 2 personas + competitive analysis map

Phase 2 → Figma & Wireframing     Weeks 5-8
          Acceptance criteria: 5+ lo-fi wireframes (PetZen Pro screens)

Phase 3 → UI Design               Weeks 9-12
          Acceptance criteria: complete style guide (colours, type, components)

Phase 4 → Prototyping             Weeks 13-16
          Acceptance criteria: interactive prototype, navigable end-to-end

Phase 5 → Usability Testing       Weeks 17-20
          Acceptance criteria: 5 moderated tests + iteration report

Phase 6 → Portfolio & Job Prep    Weeks 21-24
          Acceptance criteria: portfolio live + CV updated + first applications sent

Slash Commands — Reusable Triggers

| Command | What it does |
| --- | --- |
| /checkpoint | Runs a Feynman Method check — the student must explain the concept before advancing |
| /novo-projeto | Scaffolds the full folder structure for a new UX case study |
| /daily-review | Structured 15-min review: what I learned, what I'll practise, open questions |
| /analise-app | Critical UX analysis of a real app — heuristics, flows, friction points |
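For illustration, a slash command file such as checkpoint.md can be a plain markdown prompt template. The content below is a hedged sketch of what it might contain, not the repository's actual file:

```markdown
# /checkpoint - Feynman Method validation

Ask the student to explain the current phase's core concept in their own
words, as if teaching it to a colleague with no UX background.

1. Request the explanation (no notes, no re-reading).
2. Probe one gap: ask a follow-up question the explanation did not cover.
3. Verdict: PASS (advance the phase, log it in PROGRESS.md) or REWORK
   (list the specific concepts to revisit, never a vague "study more").
```

Because the command is a file, the validation routine is versioned and improvable like any other test artifact.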

Architecture Decisions

Why modular files instead of one big system prompt? A monolithic prompt degrades with context length and is hard to iterate. Splitting into rules/, skills/, and agents/ follows the same principle as separating test suites by concern — each file has one job and can be updated independently without touching the rest.
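To make the single-responsibility idea concrete, a rule file such as portfolio-first.md might look like this (an illustrative sketch, not the actual file):

```markdown
# Rule: portfolio-first

Every exercise must map to a concrete portfolio deliverable.

- Before starting a task, name the artefact it produces
  (persona, wireframe, test report, case-study section).
- If a task produces no artefact, reframe it until it does, or defer it.
- When reviewing work, always state which portfolio piece it advances.
```

Each rule file can be rewritten or disabled without touching teaching tone, Figma pacing, or vocabulary rules.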

Why sub-agents with isolated context? The mentor-ux and recruiter-sim agents run in isolated context windows, simulating real professional roles without contaminating the main learning session — the equivalent of running integration tests in a clean environment.

Why PROGRESS.md instead of a project management tool? Markdown is version-controllable, human-readable, and zero-dependency. State lives in the repo alongside the work — no external tool to maintain or pay for.
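Because state lives in plain markdown, a PROGRESS.md entry can stay this simple (the entries below are an invented example of the format, not real progress data):

```markdown
## Phase 2: Figma & Wireframing (Weeks 5-8)

- [x] Checkpoint: explained lo-fi vs mid-fi trade-offs (Feynman check passed)
- [x] 3/5 lo-fi wireframes done (PetZen Pro: login, dashboard, booking)
- [ ] Remaining: pet profile, appointment history
- Rework: spacing inconsistencies flagged by mentor-ux; revisit before mid-fi
```

Checkbox state diffs cleanly in git, so progress history is auditable commit by commit.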

Why Feynman Method as the validation layer? Feynman checks are the functional equivalent of assertions in a test suite — they verify that understanding was actually achieved before the next phase begins. Passing a module by watching videos is the equivalent of a test that always returns true.

Why introduce Figma progressively (5 levels)? Tool anxiety is the most common reason beginners abandon design. The figma-rampup skill introduces features incrementally — frames before components, components before auto-layout — reducing cognitive load at each step.


Stack

| Tool | Role |
| --- | --- |
| Claude (Cowork mode) | AI mentor engine — reads project files, maintains context |
| Figma | Primary design tool |
| Coursera | Structured learning source (Google UX Certificate) |
| iPad Air M3 | Paper-to-digital sketching (reduces Figma anxiety at the lo-fi stage) |
| MacBook Air M4 16GB | Design workstation |

Key Takeaway

Applying QA principles to a learning workflow makes it auditable, reproducible, and improvable — the same reasons we apply them to software.

A learning plan without checkpoints is equivalent to a test suite without assertions: it may complete, but you have no evidence of correctness.


About

Built by André Rodrigues — QA professional transitioning to QA Automation Engineer / AI Orchestrator

This project is part of a broader practice in AI workflow design and prompt engineering, alongside:

  • cs50-python — Python learning (CS50P certificate path)
  • NODE-01 — Local AI orchestration stack (Ollama + OpenClaw + Telegram)

March 2026 — Leiria, Portugal
