alexrozex/MeaningWorks

MeaningWorks

The semantic compiler for intelligent agents.

Turn what you mean into what works. No translation layer. Nothing lost in translation. Just meaning in, working system out.


The problem

Most people with ideas can't build them.

Not because the ideas are bad. Because every layer of translation between what you mean and what gets built loses signal. By the time intent reaches implementation, you're building something nobody asked for.

AI code generation made this worse, not better. It's faster at building the wrong thing.

The fix isn't faster generation. It's semantic compilation — a process that preserves meaning at every step, challenges its own assumptions, and proves that the output traces back to the input.


What MeaningWorks does

You describe what you want in natural language. MeaningWorks doesn't just generate — it excavates the specification that was already implied by your words, your domain, and your constraints.

You describe what you want
        ↓
MeaningWorks compiles meaning
        ↓
Verified system with full provenance

The key insight: specifications pre-exist in the input. When you say "I need a booking system for my tattoo studio," the architecture is already there — implied by variable session lengths, artist specializations, deposit workflows, and trust dynamics between artist and client.

A good compiler doesn't invent. It reveals.
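MeaningWorks' internal representation isn't documented here, so as a purely hypothetical sketch (in Python; all names invented for illustration), an "excavated" specification might look like components that each cite the fragment of domain meaning that implies them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    implied_by: str  # the domain fact in the intent that necessitates this component

intent = "I need a booking system for my tattoo studio"

# Hypothetical excavated specification for the intent above: nothing is
# invented; every component cites what in the domain implies it.
spec = [
    Component("variable_length_sessions", "tattoo sessions vary in length"),
    Component("artist_specializations", "artists specialize by style"),
    Component("deposit_workflow", "studios take deposits to hold slots"),
]

# Every component can answer "why do you exist?"
for c in spec:
    assert c.implied_by  # provenance is mandatory, never optional
```

The point of the sketch: a generated spec is a flat list of guesses, while an excavated spec carries its own justification field by construction.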


What makes it different

Excavation, not generation. Most AI tools hallucinate plausible outputs. MeaningWorks derives necessary outputs — every component traces back to your original input through an unbroken chain. If it's in the blueprint, you can follow the thread back to why.

Meaning-level quality control. Traditional compilers check syntax. MeaningWorks checks meaning. Does this output faithfully represent this intent? That's the question every compilation answers, with proof.

Model-agnostic. Works across LLM providers. The LLM is a component, not the brain. When models commoditize (and they will), the compilation quality stays constant.
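"The LLM is a component, not the brain" is an architectural stance that can be sketched in a few lines. Assuming Python and an invented interface name (this is not MeaningWorks' real code), the pipeline depends only on a narrow protocol, never on a vendor SDK:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Anything that can complete a prompt; the compiler depends only on this."""
    def complete(self, prompt: str) -> str: ...

class StubProvider:
    """A stand-in backend; a real one would wrap some vendor's API."""
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"

def compile_step(provider: LLMProvider, prompt: str) -> str:
    # The pipeline calls the interface, so swapping models
    # never touches compilation logic.
    return provider.complete(prompt)

assert compile_step(StubProvider(), "extract constraints").startswith("completion")
```

Because the dependency points at the protocol, commoditized models become interchangeable parts while the compilation layer stays constant.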

Self-improving. Every compilation adds to a growing context graph — patterns, vocabulary, decisions, anti-patterns. The 100th compilation in a domain is categorically better than the 1st. This compounds. Forever.
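The context graph's compounding effect can be illustrated with a deliberately tiny sketch (the `ContextGraph` class is hypothetical; the real store presumably holds far richer structure than per-domain string sets):

```python
from collections import defaultdict

class ContextGraph:
    """Accumulates per-domain patterns; later compilations consult earlier ones."""
    def __init__(self):
        self.patterns = defaultdict(set)

    def record(self, domain: str, pattern: str) -> None:
        self.patterns[domain].add(pattern)

    def known(self, domain: str) -> set:
        return self.patterns[domain]

graph = ContextGraph()
graph.record("tattoo_studio", "deposits gate bookings")
graph.record("tattoo_studio", "session length varies by design")

# The next compilation in this domain starts with two learned patterns
# instead of zero; that head start is what compounds.
assert len(graph.known("tattoo_studio")) == 2
assert graph.known("unseen_domain") == set()
```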


What it produces

Every compilation outputs:

  • Verified specification — Components, relationships, constraints, and anything still unresolved (explicitly marked, never hidden)
  • Provenance trace — Complete decision chain from your intent to every output element. Cryptographically signable. The trust artifact.
  • Running system — Working code with persistent state, event coordination, and self-modification through recompilation
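"Cryptographically signable" can be made concrete with a dependency-free sketch. Assuming a trace serialized canonically and signed with an HMAC (a real deployment might well use asymmetric signatures instead; the key and field names here are invented), anyone holding the key can verify the trace was not altered after compilation:

```python
import hashlib
import hmac
import json

trace = {
    "intent": "booking system for a tattoo studio",
    "decisions": ["derive scheduling_requirement", "emit calendar_component"],
}

# Canonical serialization (sorted keys) so the same trace always
# produces the same bytes, then a SHA-256 HMAC over those bytes.
payload = json.dumps(trace, sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()
signature = hmac.new(b"compiler-secret-key", payload, hashlib.sha256).hexdigest()

# Verification: recompute and compare in constant time.
assert hmac.compare_digest(
    signature,
    hmac.new(b"compiler-secret-key", payload, hashlib.sha256).hexdigest(),
)
```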

Current state

Real software. Real tests. Not a pitch deck.

  • Thousands of tests passing
  • Multiple domain adapters for different verticals
  • Full loop proven: intent → specification → code → running system
  • Recursive self-compilation: the compiler compiled itself
  • Cost: pennies per compilation run
  • Automatic provider failover across multiple LLMs
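Automatic provider failover is a well-understood pattern; as a minimal sketch (function names invented, error handling simplified), providers are tried in priority order and the first success wins:

```python
def complete_with_failover(providers, prompt):
    """Try each (name, callable) provider in priority order; first success wins."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real system would narrow this to provider errors
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise TimeoutError("provider unavailable")

def healthy(prompt):
    return "spec fragment"

# The primary times out, so the backup transparently serves the request.
name, result = complete_with_failover([("primary", flaky), ("backup", healthy)], "compile")
assert name == "backup" and result == "spec fragment"
```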

The compiler bootstrapped itself — used its own pipeline to compile solutions to its own gaps. Each gap closed enabled compiling something harder. The strange loop: the process that turns ambiguity into structure was itself built by turning its own ambiguity into structure.


The origin story

I'm a solo developer. Also a tattoo artist.

I saw a pattern that engineers couldn't see because they were too close to it. In tattooing, you see the whole design before you pick up the needle. Topology before detail. The shape exists before the ink touches skin.

Software is the same. The architecture exists before you write code. The problem is that nobody built a tool that works this way — one that sees the shape first, then fills in the details while preserving the original meaning at every step.

I worked out the first principles of semantic compilation and built the compiler. Not through traditional engineering — through constraint satisfaction and convergent architecture. Give multiple independent AI systems the same problem constraints without the solution, and they arrive at the same shape. That's how you know the shape is real.


Where this is going

Phase 1 — Self-extending chat-based agent. Persistent memory. Soft launch.

Phase 2 — Tool marketplace with creator royalties. Every verified tool becomes an asset that earns.

Phase 3 — Trust network. Agents trading verified tools with micro-fee royalties. Cross-instance transactions with provenance.

Phase 4 — Full economic fabric. The semantic compiler as an operating system for the agent economy.


The moat

The competitive advantage is not the outputs. Any LLM can generate specs.

The moat is what you can't see — the accumulated reasoning, decision traces, domain patterns, and vocabulary across thousands of compilations. Competitors can copy the outputs. They can never copy the context.

The corpus compounds. Every compilation makes the next one better. Compound interest on intelligence.


Philosophy

Excavation, not generation. The architecture was always there. We're just uncovering it.

Compression over expansion. A perfectly compressed seed makes a small model outperform a large one. We sell compression quality.

Trust through provenance. The world has enough generators. It needs verifiers.

Bounded, not unbounded. Everything converges in finite steps at known cost. Unbounded expansion is hallucination, not intelligence.


Get in touch

Aleksandrs Roze — solo developer, tattoo artist turned system architect.

motherlabsai@gmail.com


License

Apache 2.0 — see LICENSE for details.


The tree remembers its seed at every depth.

About

Semantic compiler that transforms natural language intent into deterministic, verified multi-agent systems. Excavation, not generation. 7-agent pipeline with provenance gates.
