View Demo | Report Bug | Request Feature
TEDI is an easy-to-use, cloud-native, high-performance, low-code Integration Server.
Now with built-in AI Agent Integration (OpenAI, Groq, and LangChain), TEDI can reason about your data, transform it intelligently, and even automate multi-step workflows, all inside your existing integration pipelines. TEDI remains a low-cost way to move your important business data between business applications and services (A2A & B2B), both internally within your organization and externally with your trading partners.
TEDI saves engineering and development time by employing a framework of common engineering patterns that can be stitched together via workflows driven by configuration files.
TEDI is powered by Go and can run as a stand-alone binary under systemd or, if you prefer, it can easily be containerized.
(top)
The tedi repository is an example installation. Follow the installation steps below to start and stop TEDI.
- fetch the tedi repository

      git clone https://github.com/tedi-software/tedi.git
      # git clone git@github.com:tedi-software/tedi.git

- copy the tedi folder to the directory you want to root TEDI

      cd /path/to/tedi
      cp -pR tedi /opt/

- start TEDI

      cd /opt/tedi/bin
      ./start.sh

- stop TEDI

      cd /opt/tedi/bin
      ./stop.sh

- viewing the logs

      cd /opt/tedi/logs
      tail -f tedi_*.log
Note
This sample TEDI installation includes binaries for Linux and macOS.
The macOS binaries are not signed and will be placed in quarantine on download. To list, and subsequently remove, the quarantine attribute:
    xattr <binary name>
    xattr -d com.apple.quarantine <binary name>

(top)
TEDI uses simple Java-style key-value property files to configure services. These configuration files support multiline values, comments, environment variable injection, and linking to other property files via the keyword .include. With .include, you can break your configurations into smaller, more manageable pieces, which is an excellent way to reuse common settings between processors.
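For illustration, a minimal sketch of what such a file could look like; the key names below are hypothetical, and the exact .include and multiline syntax are assumptions based on the Java-style format described above:

```properties
# comments start with a hash, as in Java property files

# pull shared settings in from another property file via the .include keyword
.include=../common/global.properties

# hypothetical keys, shown only to illustrate the key=value syntax
service.name=demo-service

# a multiline value continued across lines (assumed Java-style trailing backslash)
service.description=a demo service showing comments, \
    multiline values, and .include
```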
view /tedi/services/archetypes for more examples.
To build an integration, you define a workflow, a series of processors executing in sequence, in a file called service.properties, which is the entry point for all integrations.
A processor is an independent module that handles a single task, such as sending or receiving data over a protocol like HTTPS, reading or writing database records, or interfacing with a message bus (e.g., NATS).
At startup, TEDI will scan all the directories under tedi/services/ looking for service.properties files. When it finds one, it will load all the listed services, create a workflow, and begin executing it. This in effect means that in a single TEDI process, you can run a single service (integration), a set of related services, or as many as you like; there's no limit on the number of services you can run. This also means that if you want to prevent an integration from running, you can simply rename the service.properties to something like ignore_service.properties and TEDI will not load that service.
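For example, to disable and re-enable an integration from the shell (the service path below is hypothetical; any directory under tedi/services/ works the same way):

```sh
# hypothetical service directory, assuming TEDI is rooted under /opt/
cd /opt/tedi/services/examples/my_service

# rename the entry point so TEDI skips this service at the next startup
mv service.properties ignore_service.properties

# rename it back to re-enable the integration
mv ignore_service.properties service.properties
```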
For some working examples, view /tedi/services/examples.
(top)
Many example services can be found under /tedi/services/examples.
For this simple demo, we'll use the OpenAI example.
This example demonstrates stitching two processors together to form a service (integration).
This particular integration is driven by a shell script that generates the input, followed by an OpenAI chat agent that transforms it:
(these commands assume you rooted TEDI under /opt/)
First, follow the installation steps.
    cd /opt/tedi/services/test/ai_transform/openai
    mv ignore_service.properties service.properties
    cd /opt/tedi/bin
    ./start.sh

This service:
- Receives a JSON payload from the command processor
- Passes it to an AI processor (OpenAI/Groq/LangChain) to convert to XML
- Outputs the AI-generated structured result
Swap in your own prompt and models to adapt it for classification, enrichment, or automated decision-making.
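For illustration only, with hypothetical data, the transformation this service performs might turn a payload such as:

```json
{
  "orderId": 1001,
  "customer": "Acme Corp",
  "total": 250.75
}
```

into an AI-generated XML document along these lines (the actual output depends on the shell script's input and the prompt and model you configure):

```xml
<order>
  <orderId>1001</orderId>
  <customer>Acme Corp</customer>
  <total>250.75</total>
</order>
```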
Note: Model Access Options
TEDI lets you run AI processors locally or connect to remote APIs. For each provider, you can choose:
- Local – Run the model on your machine (e.g., via Ollama) to avoid network calls and API costs.
- Remote – Use a hosted model by providing a valid bearer token for API authentication.
Examples:
- OpenAI – Run locally with gpt-oss:20b in Ollama or connect to the OpenAI API using your API key.
- Groq – Run Groq-supported models locally (if available) or connect to Groq’s hosted API with a bearer token.
- LangChain – Chain together local or remote LLMs and tools. For remote use, provide the API key(s) required by your chosen model provider(s).
This flexibility means you can develop and test locally, then deploy with hosted models in production — without changing your workflow definitions.
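As a quick sanity check of both access modes outside TEDI, the shell sketch below queries a local Ollama server and the hosted OpenAI API directly; it assumes Ollama's default port (11434), the gpt-oss:20b model mentioned above, an OPENAI_API_KEY environment variable, and an illustrative hosted model name:

```sh
# local: call an Ollama model running on this machine (no API key, no external network calls)
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gpt-oss:20b", "prompt": "Convert {\"id\": 1} to XML", "stream": false}'

# remote: call the hosted OpenAI API, authenticating with a bearer token
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Convert {\"id\": 1} to XML"}]}'
```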
(top)
- AI Agent Integration - Combine AI processors with any of TEDI’s existing connectors (SFTP, HTTPS, Databases, etc.) to create pipelines that not only move data, but understand and act on it.
- OpenAI – Access GPT-class models for summarization, classification, and reasoning
- Groq – Ultra-fast LLM inference for real-time AI-powered workflows
- LangChain – Chain models, tools, and APIs for agent-driven automation
- Shell
- SFTP
- HTTPS
- NATS (Core/JetStream)
- XSLT (in-memory/on-disk)
- PGP
- Convert XML <-> JSON
- Convert JSON <-> CSV
- Convert JSONL <-> CSV
- Database
- Oracle
- MySQL
- PostgreSQL
- Microsoft SQL Server
See the open issues for a full list of proposed features (and known issues).
(top)
For questions, open an issue.
(top)