Goal: Run an agent, call it, stream results – all in less than ten minutes.
Start the built‑in echo agent:

```bash
go run ./examples/basic-agent
# ➜ Listening on :8080
```

Send a task (JSON‑RPC):
```bash
curl -s -X POST localhost:8080/rpc \
  -d '{"jsonrpc":"2.0","id":1,"method":"tasks/send","params":{"id":"t1","message":{"role":"user","parts":[{"type":"text","text":"Ping"}]}}}' | jq .artifacts[0].parts[0].text
# "Ping"
```

🎉 Congratulations – you just used the A2A protocol.
List the available prompts:

```bash
curl -s -X POST localhost:3210/rpc -d '{"jsonrpc":"2.0","id":2,"method":"prompts/list"}' | jq .prompts
```

Fetch a prompt's full content:
```bash
curl -s -X POST localhost:3210/rpc -d '{"jsonrpc":"2.0","id":3,"method":"prompts/get","params":{"name":"Greeting"}}' | jq .messages[0].content.text
```

The SSE endpoint lives at `/events`.
```bash
# in a second terminal
curl -sN localhost:3210/events | jq -c
```

Back in the first terminal, send a streaming request:
```bash
curl -s -X POST localhost:3210/rpc \
  -d '{"jsonrpc":"2.0","id":4,"method":"sampling/createMessageStream","params":{"systemPrompt":"You are a poet.","messages":[]}}'
```

Tokens will appear live in the SSE stream.
Use `tasks/sendSubscribe` to send a task and receive streaming updates via SSE:
```bash
# Make sure your SSE listener is running in another terminal:
# curl -sN localhost:8080/events | jq -c

# Send a streaming task
curl -s -X POST localhost:8080/rpc \
  -d '{
    "jsonrpc":"2.0",
    "id":5,
    "method":"tasks/sendSubscribe",
    "params":{
      "id":"stream-task-1",
      "message":{
        "role":"user",
        "parts":[{"type":"text","text":"Process this request with streaming updates"}]
      }
    }
  }' | jq
# You'll immediately receive a "working" status; subsequent updates will appear in the SSE stream.
```

Use `tasks/resubscribe` to reconnect to an existing task's stream:
```bash
# Reconnect to a previously created task
curl -s -X POST localhost:8080/rpc \
  -d '{
    "jsonrpc":"2.0",
    "id":6,
    "method":"tasks/resubscribe",
    "params":{
      "id":"stream-task-1",
      "historyLength":5
    }
  }' | jq
# You'll receive the task's current state and artifacts.
# If historyLength is specified, you'll also get recent message history.
```

Set up a callback URL to receive task updates:
```bash
# Configure push notifications for a task
curl -s -X POST localhost:8080/rpc \
  -d '{
    "jsonrpc":"2.0",
    "id":7,
    "method":"tasks/pushNotification/set",
    "params":{
      "id":"stream-task-1",
      "pushNotificationConfig":{
        "url":"https://your-callback-url.com/webhook"
      }
    }
  }' | jq
```
The server will send updates to the specified URL as the task progresses.

Check the current push notification configuration:
```bash
# Get push notification settings for a task
curl -s -X POST localhost:8080/rpc \
  -d '{
    "jsonrpc":"2.0",
    "id":8,
    "method":"tasks/pushNotification/get",
    "params":{
      "id":"stream-task-1"
    }
  }' | jq
```

Get a task with its message history:
```bash
# Get a task with its recent message history
curl -s -X POST localhost:8080/rpc \
  -d '{
    "jsonrpc":"2.0",
    "id":9,
    "method":"tasks/get",
    "params":{
      "id":"stream-task-1",
      "historyLength":10
    }
  }' | jq
# The response includes up to the 10 most recent messages in the history field.
```

A2A-Go provides a unified long-term memory system that combines vector and graph stores for AI agents.
In-memory implementation (no external databases needed):

```bash
go run ./examples/memory-store
```

External-database implementation (uses Qdrant and Neo4j):
```bash
# Start the databases with Docker Compose
docker-compose -f docker-compose.memory.yml up -d

# Set your OpenAI API key
export OPENAI_API_KEY=sk-...

# Run the example
go run ./examples/memory-external
```

For more details on the memory system architecture, see Memory Architecture.
For production use, you'll want to use real vector and graph databases instead of the in-memory implementations.
- Run Qdrant using Docker:

  ```bash
  docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
  ```

- Connect to Qdrant in your code:

  ```go
  embeddingService := memory.NewOpenAIEmbeddingService(os.Getenv("OPENAI_API_KEY"))
  vectorStore := memory.NewQdrantVectorStore("http://localhost:6333", "memories", embeddingService)
  ```

- Run Neo4j using Docker:

  ```bash
  docker run -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:latest
  ```

- Connect to Neo4j in your code:

  ```go
  graphStore := memory.NewNeo4jGraphStore("http://localhost:7474", "neo4j", "password")
  ```
```go
// Initialize the memory system components
embeddingService := memory.NewOpenAIEmbeddingService(openaiClient) // or memory.NewMockEmbeddingService() for testing
vectorStore := memory.NewQdrantVectorStore("http://localhost:6333", "memories", embeddingService)
graphStore := memory.NewNeo4jGraphStore("http://localhost:7474", "neo4j", "password")
unifiedStore := memory.NewUnifiedStore(embeddingService, vectorStore, graphStore)

// Store a memory
id, err := unifiedStore.StoreMemory(ctx, "Important information to remember",
	map[string]any{"topic": "knowledge", "importance": 8}, "knowledge")

// Create relationships between memories
err = unifiedStore.CreateRelation(ctx, sourceID, targetID, "related_to",
	map[string]any{"strength": 0.7})

// Retrieve a memory by ID
memory, err := unifiedStore.GetMemory(ctx, id)

// Search for semantically similar memories
searchParams := memory.SearchParams{
	Query:   "vector databases for AI memory",
	Limit:   10,
	Types:   []string{"knowledge", "concept"},
	Filters: []memory.Filter{{Field: "topic", Operator: "eq", Value: "memory"}},
}
results, err := unifiedStore.SearchSimilar(ctx, searchParams.Query, searchParams)

// Find related memories through graph relationships
related, err := unifiedStore.FindRelated(ctx, id, []string{"related_to"}, 10)
```

- Expose a file via the Resource manager (`resources/list`).
- Export `OPENAI_API_KEY` to enable real LLM completions.
- Continue with the deep‑dives:
Happy experiments! 🎈