
feat: rewrite CLI for ADR-002 direct pub/sub#100

Open
swarna1101 wants to merge 6 commits into main from
feat/adr-002-direct-pubsub

Conversation


@swarna1101 swarna1101 commented Mar 8, 2026

Summary by CodeRabbit

  • New Features

    • Session-based proxy publish/subscribe flow with node selection and an expose-amount control; added command-stream gRPC protocol support.
  • Documentation

    • Major README rewrite prioritizing authentication-first CLI usage, session/status examples, proxy guidance, and updated command outputs.
  • Chores

    • Default service proxy URL updated; legacy grpc flag removed; websocket dependency removed.
  • Tests / CI

    • End-to-end and fuzz test suites removed; CI fuzz-test job disabled/commented out.


coderabbitai bot commented Mar 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

The PR replaces direct HTTP/gRPC publish and subscribe with a session-based proxy model: adds session and node clients, protobufs for command streaming and session negotiation, refactors CLI publish/subscribe to use sessions and node gRPC streams, updates README and Makefile, removes the old gRPC proxy client and the entire end-to-end test suite, and disables the CI fuzz job.

Changes

Cohort / File(s) Summary
Workflow & Docs
/.github/workflows/ci.yml, Makefile, README.md
Disabled the CI fuzz job (commented/removed), changed default SERVICE_URL to a proxy endpoint, and performed a large README rewrite to an authentication-first CLI narrative.
CLI Commands
cmd/publish.go, cmd/subscribe.go
Refactored publish/subscribe to session-based flow: create/get session, select node(s), use node gRPC client streams; removed HTTP/gRPC bifurcation and related payload types; removed --grpc flag, added --expose-amount, and adjusted debug helper signatures.
Session & Node Clients
internal/session/client.go, internal/session/store.go, internal/node/client.go
Added HTTP session creation client and file-backed session cache with locking (GetOrCreateSession/InvalidateSession); added node gRPC client supporting bidi command streams, Publish, Subscribe, and Close.
Protobufs
proto/session.proto, proto/p2p_stream.proto
Added session negotiation and p2p command-stream protobufs (services/messages for CreateSession, ListenCommands, Health, ListTopics, request/response schemas and enums).
Removed gRPC Proxy Layer
internal/grpc/proxy_client.go
Deleted legacy ProxyClient and its Subscribe/Publish/SubscribeTopic/Close implementations.
E2E Test Harness Removed
e2e/...
e2e/cli_runner.go, e2e/commands_test.go, e2e/config.go, e2e/cross_node_test.go, e2e/failure_test.go, e2e/fuzz_test.go, e2e/integration_test.go, e2e/publish_test.go, e2e/ratelimit_scenarios_test.go, e2e/ratelimit_test.go, e2e/setup.go, e2e/smoke_cases.go, e2e/subscribe_test.go, e2e/suite_test.go, e2e/token.go, e2e/validators.go
Removed nearly the entire end-to-end test suite and helpers: CLI runners, smoke/integration/fuzz/ratelimit/publish/subscribe tests, token/setup utilities, validators, and TestMain/setup logic.
Dependencies
go.mod
Removed github.com/gorilla/websocket dependency (reflecting removal of WebSocket-based code).

Sequence Diagram(s)

sequenceDiagram
    participant User as User/CLI
    participant CLI as CLI (publish)
    participant SessionSvc as Session Client
    participant Proxy as Proxy Service
    participant Node as Node gRPC

    User->>CLI: publish --topic "test" --message "hi"
    activate CLI
    CLI->>SessionSvc: CreateSession(proxyURL, clientID, [topic], capabilities, exposeAmount)
    activate SessionSvc
    SessionSvc->>Proxy: POST /api/v1/session {client_id, topics, expose_amount...}
    activate Proxy
    Proxy-->>SessionSvc: 200 {session_id, nodes:[{address,ticket,...}]}
    deactivate Proxy
    SessionSvc-->>CLI: Session{nodes}
    deactivate SessionSvc
    CLI->>CLI: select best node
    CLI->>Node: gRPC ListenCommands() (bidi stream) -> send Publish Request{topic,data,ticket}
    activate Node
    Node-->>CLI: Response{status,metadata}
    deactivate Node
    CLI-->>User: Publish successful
    deactivate CLI
sequenceDiagram
    participant User as User/CLI
    participant CLI as CLI (subscribe)
    participant SessionSvc as Session Client
    participant Proxy as Proxy Service
    participant Node as Node gRPC

    User->>CLI: subscribe --topic "test"
    activate CLI
    CLI->>SessionSvc: CreateSession(proxyURL, clientID, [topic], ..., exposeAmount)
    activate SessionSvc
    SessionSvc->>Proxy: POST /api/v1/session
    activate Proxy
    Proxy-->>SessionSvc: {session_id, nodes}
    deactivate Proxy
    SessionSvc-->>CLI: Session{nodes}
    deactivate SessionSvc
    CLI->>CLI: select best node
    CLI->>Node: gRPC ListenCommands() (bidi stream) -> send Subscribe Request{topic,jwt_token}
    activate Node
    loop messages
      Node-->>CLI: Response{data}
      CLI->>CLI: decodeMessage & format/display or webhook/persistence
    end
    deactivate Node
    deactivate CLI

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • hpsing
  • Walentalien

Poem

🐰 Hop-hop, I hopped through code and night,

Sessions bloom where two paths lost their fight,
Nodes now chatter in a single stream,
Docs refreshed and tests swept like a dream,
A tiny rabbit cheers this new-day light!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 9.52%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): the title 'feat: rewrite CLI for ADR-002 direct pub/sub' accurately summarizes the primary change, a comprehensive rewrite of the CLI architecture to implement ADR-002 direct pub/sub flows, moving from legacy HTTP/gRPC/WebSocket patterns to session-based proxy orchestration with node clients.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Comment @coderabbitai help to get the list of available commands and usage tips.

@swarna1101 swarna1101 linked an issue Mar 8, 2026 that may be closed by this pull request

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@.github/workflows/ci.yml`:
- Around line 58-68: The inline "NOTE:" comment sits at the same indentation as
the 'fuzz' job and will become an invalid job property if the block is
uncommented; move the NOTE line out of the job block by placing it as a
top-level comment (no leading indentation) above the commented 'fuzz' job block
or keep it indented as a normal comment inside the block body (e.g., under the
steps) so that the 'fuzz' job name and keys remain valid when re-enabled—update
the commented lines around the 'fuzz' job and the "NOTE:" so the 'fuzz' job key
and its children (steps, runs-on) remain correctly indented.

In `@cmd/publish.go`:
- Around line 146-149: Replace the unbounded context used for the one-shot
Publish RPC with a context that has a short timeout: instead of ctx :=
context.Background() create a cancellable timed context via
context.WithTimeout(context.Background(), <duration>) (e.g., 5-15s), defer the
cancel, and pass that ctx into nodeClient.Publish(bestNode.Ticket, pubTopic,
publishData); also ensure the time package is imported. This bounds
connect/send/recv and causes the command to fail fast on dead or black-holed
nodes.

In `@cmd/subscribe.go`:
- Around line 164-190: The code only dials sess.Nodes[0] so failures never fall
back to other exposed nodes; modify the logic that currently picks bestNode :=
sess.Nodes[0] and immediately calls node.NewClient(...) and
nodeClient.Subscribe(...) to instead iterate over the candidate nodes (limit the
iteration by the configured expose amount flag/variable and the length of
sess.Nodes), attempting to create a client with node.NewClient(address) and then
Subscribe(ctx, node.Ticket, subTopic, 100) for each node, closing any client on
error and continuing to the next node, and only returning an error if all
attempts fail; update references to sess.Nodes, bestNode.Ticket, node.NewClient,
nodeClient.Subscribe and ensure ctx/cancel are created once and canceled on
final exit.
- Around line 199-233: The current webhook dispatcher launches a new goroutine
for every dequeued message (inside the anonymous func reading from wq), causing
unbounded concurrency; replace this with a fixed worker pool so the queue
provides backpressure: create a workerCount constant (or use
webhookQueueSize/workers flag), start that many goroutines that each range over
wq and process messages inline (call webhookFormatter.FormatMessage, create
request with http.NewRequestWithContext using webhookTimeoutSecs, do the HTTP
POST, handle resp.Body.Close and status codes) instead of spawning a new
goroutine per message; keep existing symbols webhookMsg, wq, webhookQueueSize,
webhookURL, webhookTimeoutSecs, webhookFormatter, subTopic, and clientIDToUse to
locate and modify the code.
- Around line 153-159: The session is requesting both "publish" and "subscribe"
rights but the command only needs subscribe; update the call to
session.CreateSession so the capabilities slice contains only "subscribe"
(replace []string{"publish","subscribe"} with a single-entry slice containing
"subscribe") while keeping proxyURL, clientIDToUse, []string{subTopic}, and
subExposeAmount unchanged so the session requests least-privilege.

In `@internal/node/client.go`:
- Around line 58-80: The goroutine that reads from stream.Recv currently
swallows non-EOF errors and only closes ch, so callers (e.g., cmd/subscribe.go)
cannot distinguish EOF from a stream failure; change the implementation to
propagate receive errors to the caller by adding an error propagation mechanism
(either an additional errCh chan error or change ch to carry a result struct
{resp *pb.Response; err error}), send non-nil errors (except io.EOF) on that
channel before returning, and ensure callers of the stream reader (subscribe
logic) check this error channel/result.err to detect broken streams and trigger
retries; keep EOF behavior as a clean close with no error sent.
- Around line 106-109: The current handler in internal/node/client.go treats an
io.EOF from stream.Recv() as success by returning (nil, nil), which lets
cmd/publish.go record a successful publish with no ack; change the EOF branch to
return a non-nil error (e.g., wrap or return fmt.Errorf/io.ErrUnexpectedEOF with
context) so callers like the publish flow see the failure; update the error
message to mention "EOF before response" and ensure the function that calls
stream.Recv() (the code handling resp, err := stream.Recv()) propagates that
error instead of treating nil response as success.
- Around line 26-31: The gRPC client setup in the grpc.NewClient call currently
uses insecure.NewCredentials() and unbounded message sizes
(grpc.MaxCallRecvMsgSize/grpc.MaxCallSendMsgSize set to math.MaxInt), which
exposes JwtToken in cleartext and risks unbounded memory; update the transport
to use TLS with proper certificate validation (e.g., use
grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(...)) or system
root TLS creds) and replace the math.MaxInt limits in
grpc.WithDefaultCallOptions (grpc.MaxCallRecvMsgSize/grpc.MaxCallSendMsgSize)
with a reasonable cap such as 10 * 1024 * 1024 (10 MiB); ensure these changes
are applied where nodeAddr is used to create the connection so JwtToken isn't
sent over plaintext.

In `@internal/session/client.go`:
- Around line 49-57: The code currently builds a POST to proxyURL (variable
proxyURL) and sends it over plain HTTP; to avoid leaking node tickets require
the proxyURL scheme to be https by default: validate proxyURL (parse via
net/url.Parse) and if url.Scheme != "https" return an error unless an explicit
insecure opt-in flag (e.g., allowInsecureProxy or an env/config boolean) is set;
update the function that constructs the request (where http.NewRequest and
client := &http.Client{Timeout: 10 * time.Second} are used) to perform this
check before creating the request and document/propagate the opt-in flag so
callers can deliberately enable non-HTTPS for dev only.
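A minimal sketch of the scheme check, assuming a hypothetical allowInsecure opt-in flag (the real flag/env name is up to the implementer):

```go
package main

import (
	"fmt"
	"net/url"
)

// validateProxyURL enforces https by default; allowInsecure is a
// hypothetical dev-only opt-in mirroring the suggestion above.
func validateProxyURL(raw string, allowInsecure bool) error {
	u, err := url.Parse(raw)
	if err != nil {
		return fmt.Errorf("invalid proxy URL: %w", err)
	}
	if u.Scheme != "https" && !allowInsecure {
		return fmt.Errorf("proxy URL must use https (got %q); enable the insecure opt-in for dev only", u.Scheme)
	}
	return nil
}

func main() {
	fmt.Println(validateProxyURL("https://proxy.example.com", false)) // <nil>
	fmt.Println(validateProxyURL("http://proxy.example.com", false))  // non-nil error
}
```

The check runs before http.NewRequest is built, so tickets never leave the process over plaintext unless explicitly allowed.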
- Around line 12-18: The session selection never declares the desired transport,
so the proxy can return a non-gRPC node; update the code that asks the proxy for
a node to explicitly request gRPC-capable nodes by setting Node.Transport =
"grpc" (or adding a "transport":"grpc" field to the proxy request payload)
before sending the request—look for the Node struct and the functions that call
the proxy (e.g., the session creation / node request routines) and ensure they
populate Node.Transport with "grpc" so callers that immediately instantiate a
gRPC client only receive gRPC-capable nodes.

In `@README.md`:
- Around line 33-38: Replace the hardcoded Auth0 subject
"google-oauth2|116937893938826513819" used in the Authentication Status examples
with a clearly fake placeholder (for example "AUTH0_SUBJECT_PLACEHOLDER" or
"auth0|<USER_ID>"); update every occurrence of that literal string (including
the example block under the "Authentication Status" header and the other
examples referenced) so public docs no longer expose a stable user identifier.
- Around line 13-24: Update the fenced code blocks that show CLI/plain output
(for example the block starting with "Initiating authentication..." and the
other blocks referenced in the comment) to include a language tag of "text"
(i.e., change ``` to ```text) so markdownlint MD040 is satisfied; apply the same
change to the other output blocks noted (lines 32-46, 54-56, 66-73, 83-88,
111-115, 129-137, 145-158, 166-172, 180-183) to consistently mark these CLI
output samples as text.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9a4f87b5-b447-4d3a-ae91-25e365440999

📥 Commits

Reviewing files that changed from the base of the PR and between 4f76630 and 85783ab.

⛔ Files ignored due to path filters (7)
  • go.sum is excluded by !**/*.sum
  • proto/p2p_stream.pb.go is excluded by !**/*.pb.go
  • proto/p2p_stream_grpc.pb.go is excluded by !**/*.pb.go
  • proto/proxy_stream.pb.go is excluded by !**/*.pb.go
  • proto/proxy_stream_grpc.pb.go is excluded by !**/*.pb.go
  • proto/session.pb.go is excluded by !**/*.pb.go
  • proto/session_grpc.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (27)
  • .github/workflows/ci.yml
  • Makefile
  • README.md
  • cmd/publish.go
  • cmd/subscribe.go
  • e2e/cli_runner.go
  • e2e/commands_test.go
  • e2e/config.go
  • e2e/cross_node_test.go
  • e2e/failure_test.go
  • e2e/fuzz_test.go
  • e2e/integration_test.go
  • e2e/publish_test.go
  • e2e/ratelimit_scenarios_test.go
  • e2e/ratelimit_test.go
  • e2e/setup.go
  • e2e/smoke_cases.go
  • e2e/subscribe_test.go
  • e2e/suite_test.go
  • e2e/token.go
  • e2e/validators.go
  • go.mod
  • internal/grpc/proxy_client.go
  • internal/node/client.go
  • internal/session/client.go
  • proto/p2p_stream.proto
  • proto/session.proto
💤 Files with no reviewable changes (18)
  • e2e/ratelimit_scenarios_test.go
  • go.mod
  • e2e/ratelimit_test.go
  • e2e/token.go
  • e2e/cli_runner.go
  • e2e/integration_test.go
  • e2e/smoke_cases.go
  • e2e/subscribe_test.go
  • e2e/suite_test.go
  • e2e/publish_test.go
  • e2e/setup.go
  • e2e/commands_test.go
  • e2e/validators.go
  • e2e/fuzz_test.go
  • e2e/failure_test.go
  • e2e/config.go
  • internal/grpc/proxy_client.go
  • e2e/cross_node_test.go


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (3)
cmd/subscribe.go (3)

153-159: ⚠️ Potential issue | 🟠 Major

Request only subscribe capability for this session.

This path never publishes, but it still asks the proxy for "publish". That widens the node ticket unnecessarily.

Possible fix
 		sess, err := session.CreateSession(
 			proxyURL,
 			clientIDToUse,
 			[]string{subTopic},
-			[]string{"publish", "subscribe"},
+			[]string{"subscribe"},
 			subExposeAmount,
 		)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 153 - 159, The session is requesting both
"publish" and "subscribe" scopes though this command only needs subscribe;
update the session.CreateSession call (the call site that passes the scopes
slice) to request only "subscribe" (i.e. replace []string{"publish",
"subscribe"} with a single-scope slice containing "subscribe") so the proxy
ticket is limited to subscribe capability only.

164-190: ⚠️ Potential issue | 🟠 Major

--expose-amount still doesn't provide failover.

Only sess.Nodes[0] is ever dialed/subscribed. If that first connect or stream setup fails, the command exits and ignores the remaining exposed nodes.

Implementation sketch
-		bestNode := sess.Nodes[0]
-		fmt.Printf("Session: %s | Node: %s (%s, score: %.2f)\n",
-			sess.SessionID, bestNode.Address, bestNode.Region, bestNode.Score)
+		var (
+			bestNode   session.Node
+			nodeClient *node.Client
+			msgChan    <-chan *node.SubscribeResponse
+			lastErr    error
+		)
+
+		for _, candidate := range sess.Nodes {
+			fmt.Printf("Trying node: %s (%s, score: %.2f)\n", candidate.Address, candidate.Region, candidate.Score)
+
+			c, err := node.NewClient(candidate.Address)
+			if err != nil {
+				lastErr = err
+				continue
+			}
+
+			ch, err := c.Subscribe(ctx, candidate.Ticket, subTopic, 100)
+			if err != nil {
+				c.Close()
+				lastErr = err
+				continue
+			}
+
+			bestNode = candidate
+			nodeClient = c
+			msgChan = ch
+			break
+		}
+
+		if nodeClient == nil {
+			return fmt.Errorf("subscribe failed on all exposed nodes: %v", lastErr)
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 164 - 190, The code only dials sess.Nodes[0]
and fails fast; change Subscribe command to iterate over sess.Nodes and attempt
to connect/subscribe to each until one succeeds: for each node in sess.Nodes
(use bestNode variable name when selecting), call node.NewClient(node.Address),
handle error by closing client if non-nil and continue to next node, then call
nodeClient.Subscribe(ctx, node.Ticket, subTopic, 100) and on subscribe error
close the client and continue; when subscribe returns a valid msgChan, break and
use that channel; ensure any created nodeClient is deferred/closed only for the
successful subscription lifecycle and that ctx/cancel are set appropriately.

203-233: ⚠️ Potential issue | 🟠 Major

The webhook queue still doesn't cap concurrency.

Each dequeued message spins up another goroutine, so a hot subscription can fan out to unbounded concurrent POSTs even when --webhook-queue-size is small.

Preferred direction

Process inline inside a fixed set of workers, or inline inside the queue reader itself, so the queue size actually provides backpressure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 203 - 233, The webhook handler currently
spawns a new goroutine per message (inside the loop that ranges over wq), which
allows unbounded concurrent POSTs and defeats --webhook-queue-size; change it to
use a fixed worker pool or process inline so the queue size exerts backpressure:
create a bounded set of workers (e.g., spawn N goroutines once, where N is the
configured webhook worker count or webhookQueueSize) that each range over wq and
perform webhookFormatter.FormatMessage, http.NewRequestWithContext, and
http.DefaultClient.Do with the existing timeout logic (use webhookTimeoutSecs,
webhookFormatter.FormatMessage, webhookURL, wq, etc.), and remove the
per-message inner goroutine so concurrency is limited.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/subscribe.go`:
- Around line 216-221: The code always sets Content-Type: application/json on
the webhook request even when forwarding raw payloads; modify the logic around
the http.NewRequestWithContext call so that req.Header.Set("Content-Type",
"application/json") is only performed when a schema/JSON payload is intended
(e.g., when webhookSchema (or the variable representing --webhook-schema) is
non-empty or you have explicitly formatted JSON), otherwise do not set the
Content-Type header so raw/text/binary payloads are forwarded unchanged; update
the code paths around http.NewRequestWithContext, req, formattedPayload,
webhookURL and wctx accordingly.
- Around line 202-203: The channel for webhook messages is created
unconditionally with make(chan webhookMsg, webhookQueueSize) which will panic if
webhookQueueSize is negative even when webhookURL is empty; modify the code so
validation of webhookQueueSize occurs before channel creation and only create wq
when webhookURL != "" and webhookQueueSize is a non-negative integer (validate
webhookQueueSize and return or set a sane default on invalid values). Locate
uses of webhookQueueSize, webhookMsg, and wq in the subscribe command to add the
check and conditional creation.

---

Duplicate comments:
In `@cmd/subscribe.go`:
- Around line 153-159: The session is requesting both "publish" and "subscribe"
scopes though this command only needs subscribe; update the
session.CreateSession call (the call site that passes the scopes slice) to
request only "subscribe" (i.e. replace []string{"publish", "subscribe"} with a
single-scope slice containing "subscribe") so the proxy ticket is limited to
subscribe capability only.
- Around line 164-190: The code only dials sess.Nodes[0] and fails fast; change
Subscribe command to iterate over sess.Nodes and attempt to connect/subscribe to
each until one succeeds: for each node in sess.Nodes (use bestNode variable name
when selecting), call node.NewClient(node.Address), handle error by closing
client if non-nil and continue to next node, then call nodeClient.Subscribe(ctx,
node.Ticket, subTopic, 100) and on subscribe error close the client and
continue; when subscribe returns a valid msgChan, break and use that channel;
ensure any created nodeClient is deferred/closed only for the successful
subscription lifecycle and that ctx/cancel are set appropriately.
- Around line 203-233: The webhook handler currently spawns a new goroutine per
message (inside the loop that ranges over wq), which allows unbounded concurrent
POSTs and defeats --webhook-queue-size; change it to use a fixed worker pool or
process inline so the queue size exerts backpressure: create a bounded set of
workers (e.g., spawn N goroutines once, where N is the configured webhook worker
count or webhookQueueSize) that each range over wq and perform
webhookFormatter.FormatMessage, http.NewRequestWithContext, and
http.DefaultClient.Do with the existing timeout logic (use webhookTimeoutSecs,
webhookFormatter.FormatMessage, webhookURL, wq, etc.), and remove the
per-message inner goroutine so concurrency is limited.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 41911699-f36a-43ba-81db-0a9eeb8d7650

📥 Commits

Reviewing files that changed from the base of the PR and between 85783ab and 50761a1.

📒 Files selected for processing (2)
  • cmd/publish.go
  • cmd/subscribe.go


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 191-199: The statement "All commands support `--disable-auth
--client-id <id>`" is too broad; change it to explicitly state which commands
support the flag (e.g., the data-plane CLI commands used in the examples) and
avoid implying auth lifecycle or admin commands are covered. Update the README
text to mention that `--disable-auth` applies to the data-plane commands such as
the mump2p CLI commands shown (mump2p subscribe, mump2p publish, mump2p
list-topics) and, if needed, add a short note clarifying that auth
lifecycle/admin commands do not support that flag.
- Around line 40-44: Update the README sample so the two identical "Publish
Rate" rows are disambiguated: change the first "Publish Rate:  50000 per hour"
to "Publish Rate (per hour): 50000 per hour" (or mirror the CLI label, e.g.,
"Publish Rate (hourly): 50000") and change the second to "Publish Rate (per
second): 600 per second" (or the CLI equivalent) so the two limits are clearly
distinct in the output example.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e5a6789e-e858-42ee-9b86-8d08a3f6c932

📥 Commits

Reviewing files that changed from the base of the PR and between 50761a1 and e9d52aa.

📒 Files selected for processing (1)
  • README.md

@swarna1101 swarna1101 requested a review from hpsing March 9, 2026 08:35

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (9)
cmd/subscribe.go (4)

202-202: ⚠️ Potential issue | 🟡 Minor

Validate --webhook-queue-size before channel creation.

make(chan webhookMsg, webhookQueueSize) panics for negative values. The channel is created unconditionally before checking webhookURL != "".

Suggested fix
+		if webhookQueueSize < 0 {
+			return fmt.Errorf("--webhook-queue-size must be >= 0")
+		}
+
 		type webhookMsg struct {
 			data []byte
 		}
-		wq := make(chan webhookMsg, webhookQueueSize)
+		var wq chan webhookMsg
+		if webhookURL != "" {
+			wq = make(chan webhookMsg, webhookQueueSize)
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` at line 202, Validate webhookQueueSize before creating the
channel: ensure the integer webhookQueueSize is non-negative and return an error
(or set a default) if it is negative, and only call make(chan webhookMsg,
webhookQueueSize) after confirming webhookURL != "" so the channel is not
created unconditionally; update the logic around webhookQueueSize and webhookURL
checks (referencing webhookQueueSize, webhookURL, webhookMsg and the make(chan
webhookMsg, webhookQueueSize) call) to perform validation first and then create
the buffered channel when needed.

164-176: ⚠️ Potential issue | 🟠 Major

--expose-amount does not provide actual failover.

The code only ever connects to sess.Nodes[0]. If the first node fails, the command exits without trying other exposed nodes. The help text at line 297 claims failover support, but the implementation doesn't deliver it.

Consider iterating over available nodes on connection/stream failure before giving up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 164 - 176, The code currently always uses
sess.Nodes[0] (bestNode) for connections and prints other nodes but never
attempts failover; change the connection/stream establishment to iterate over
sess.Nodes (using the existing bestNode/sess.Nodes identifiers) and attempt to
connect to each node in order, logging each attempt and moving to the next node
on connection or stream failure, and only exit with error after all nodes have
been tried; implement this by replacing the single-use connection logic with a
retry loop that calls the existing connection/stream functions (the code that
dials/opens the stream) per node, breaks on success, and ensures errors from
each failed node are logged so failover actually occurs.

221-221: ⚠️ Potential issue | 🟠 Major

Don't force application/json for raw webhook payloads.

When --webhook-schema is empty, the raw message body is forwarded but still labeled as JSON. Binary or plain-text subscriptions may be rejected or misparsed by receivers.

Suggested fix
-						req.Header.Set("Content-Type", "application/json")
+						contentType := "application/octet-stream"
+						if webhookSchema != "" {
+							contentType = "application/json"
+						}
+						req.Header.Set("Content-Type", contentType)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` at line 221, The code currently unconditionally sets
req.Header.Set("Content-Type", "application/json"), which forces JSON for raw
webhook payloads; change this so the Content-Type header is only set when a
webhook schema is provided (i.e., when the --webhook-schema value is non-empty).
Update the code that builds the outbound request (the spot using
req.Header.Set("Content-Type", "application/json")) to check the webhook schema
variable (e.g., webhookSchema or whatever flag/field holds --webhook-schema) and
set Content-Type to "application/json" (or to the explicit schema value, if you
accept a MIME type) only when that variable is not empty; otherwise do not set
the Content-Type header so binary/plain-text payloads are forwarded raw. Ensure
you modify the same request-building logic that references req.Header.Set to be
conditional.

199-233: ⚠️ Potential issue | 🟠 Major

Webhook dispatcher spawns unbounded concurrent goroutines.

Each dequeued message launches a new goroutine (line 206), so a high-throughput subscription can fan out to unbounded concurrent HTTP POSTs regardless of --webhook-queue-size. The queue only limits buffered messages, not in-flight requests.

Use a fixed worker pool or process messages inline within the single consumer goroutine to enforce real backpressure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 199 - 233, The current consumer goroutine for
wq (channel of webhookMsg) spawns a new goroutine per message (the inner go
func(payload []byte) { ... }(msg.data)), allowing unbounded in-flight HTTP
POSTs; replace that unbounded spawn with a bounded worker model or inline
processing to enforce backpressure. Concretely, remove the per-message go spawn
inside the for msg := range wq loop and either (a) create a fixed pool of N
worker goroutines that each read from wq and execute
webhookFormatter.FormatMessage(...), create the http.Request and call
http.DefaultClient.Do(...), or (b) perform the HTTP POST directly in the single
consumer goroutine so only one in-flight request exists; use a semaphore/channel
(size configurable, e.g., webhookWorkers) if you need limited concurrency.
Ensure error handling around webhookFormatter.FormatMessage and resp.Body.Close
remains, and use webhookQueueSize only for buffering while webhookWorkers
controls concurrent requests.
README.md (3)

40-45: ⚠️ Potential issue | 🟡 Minor

Disambiguate the two publish-rate rows.

Lines 42-43 use the same Publish Rate label for different limits (per hour vs per second), making the output confusing.

Suggested fix
 Rate Limits:
 ------------
-Publish Rate:  50000 per hour
-Publish Rate:  600 per second
+Publish Rate (hourly):   50000
+Publish Rate (per sec):  600
 Max Message Size:  10.00 MB
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 40 - 45, The two identical "Publish Rate" rows in the
README are ambiguous; update the labels to clearly distinguish the time windows
(e.g., "Publish Rate (per hour): 50000" and "Publish Rate (per second): 600") so
readers know which limit applies to hourly vs per-second quotas—modify the lines
that currently read "Publish Rate" to include the time unit (per hour / per
second) and keep the numeric values and other rows (Max Message Size, Daily
Quota) unchanged.

13-24: ⚠️ Potential issue | 🟡 Minor

Add language tags to plain-output code blocks.

markdownlint MD040 flags fenced code blocks without language specifiers. Use text for CLI output samples to satisfy the linter and improve rendering.

Example fix
-```
+```text
 Initiating authentication...

Also applies to: 32-46, 54-56, 66-73, 83-88, 111-115, 129-137, 145-158, 166-172, 180-183

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 13 - 24, The fenced code blocks showing CLI/plain
output (for example the block starting with "Initiating authentication..." and
other blocks at ranges you listed) lack a language tag; update each
triple-backtick fence from ``` to ```text so markdownlint MD040 is satisfied and
the output renders as plain text — apply this change to the example block and
all other occurrences noted (lines 32-46, 54-56, 66-73, 83-88, 111-115, 129-137,
145-158, 166-172, 180-183).

191-199: ⚠️ Potential issue | 🟡 Minor

Narrow the --disable-auth scope claim.

"All commands support --disable-auth" is too broad. Auth lifecycle commands (login, logout, whoami) don't need this flag. Clarify that it applies to data-plane commands.

Suggested fix
-All commands support `--disable-auth --client-id <id>` to skip Auth0.
+Data-plane commands (`subscribe`, `publish`, `list-topics`) support `--disable-auth --client-id <id>` to skip Auth0 for testing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 191 - 199, The README incorrectly states “All
commands support --disable-auth”; narrow that scope by clarifying the flag only
applies to data-plane commands (e.g., subscribe, publish, list-topics) and not
to auth lifecycle commands (login, logout, whoami). Update the prose around the
example block to explicitly state that --disable-auth --client-id <id> is
accepted for data-plane CLI commands (list the example commands
subscribe/publish/list-topics) and note that auth commands (login/logout/whoami)
do not accept or require this flag.
internal/node/client.go (2)

58-80: ⚠️ Potential issue | 🟠 Major

Stream failures are not propagated to the caller.

The goroutine prints receive errors (line 70) and closes the channel, but cmd/subscribe.go cannot distinguish a broken stream from a clean EOF. The command exits successfully without any signal to retry another exposed node.

Consider returning an error channel or a struct with both response and error to enable proper retry logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/node/client.go` around lines 58 - 80, The goroutine reading from
stream (using stream.Recv and sending into ch) swallows non-EOF errors by
printing and closing the response channel, so callers like cmd/subscribe.go
cannot tell EOF vs fatal stream error; change the pattern to propagate errors by
replacing ch (chan *pb.Response) with a result channel (e.g., chan result { resp
*pb.Response; err error }) or add a separate error channel, send any non-nil err
(including io.EOF if desired) into that channel before closing, and have the
receiver check the error field (or error channel) to distinguish clean EOF from
a broken stream; update usages around ch, the anonymous goroutine, and consumers
in cmd/subscribe.go to handle the new result/error signal and trigger retries on
real errors.

25-40: ⚠️ Potential issue | 🔴 Critical

Harden gRPC transport before connecting to internet-facing nodes.

The client uses insecure.NewCredentials() (cleartext) and math.MaxInt message size bounds. Combined with public proxy URLs in the README, this means:

  • Bearer tokens (JwtToken field) transmit in plaintext
  • A malicious node can force unbounded memory allocation

Enable TLS with proper certificate validation and set reasonable message size limits (e.g., 10 MiB).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/node/client.go` around lines 25 - 40, The NewClient function
currently uses insecure.NewCredentials() and unbounded message sizes; update
NewClient to use real TLS credentials (replace insecure.NewCredentials() with
grpc/credentials.NewClientTLSFromCert or credentials.NewClientTLSFromFile and
ensure proper server name verification) and constrain gRPC message sizes in the
grpc.WithDefaultCallOptions (replace math.MaxInt with a reasonable limit such as
10*1024*1024 for grpc.MaxCallRecvMsgSize and grpc.MaxCallSendMsgSize); ensure
any JwtToken usage continues to be sent over the secured channel and document or
load TLS certs rather than disabling validation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@cmd/subscribe.go`:
- Line 202: Validate webhookQueueSize before creating the channel: ensure the
integer webhookQueueSize is non-negative and return an error (or set a default)
if it is negative, and only call make(chan webhookMsg, webhookQueueSize) after
confirming webhookURL != "" so the channel is not created unconditionally;
update the logic around webhookQueueSize and webhookURL checks (referencing
webhookQueueSize, webhookURL, webhookMsg and the make(chan webhookMsg,
webhookQueueSize) call) to perform validation first and then create the buffered
channel when needed.
- Around line 164-176: The code currently always uses sess.Nodes[0] (bestNode)
for connections and prints other nodes but never attempts failover; change the
connection/stream establishment to iterate over sess.Nodes (using the existing
bestNode/sess.Nodes identifiers) and attempt to connect to each node in order,
logging each attempt and moving to the next node on connection or stream
failure, and only exit with error after all nodes have been tried; implement
this by replacing the single-use connection logic with a retry loop that calls
the existing connection/stream functions (the code that dials/opens the stream)
per node, breaks on success, and ensures errors from each failed node are logged
so failover actually occurs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7a5a34d8-d2da-4b28-839d-1f7b3c5e9f6f

📥 Commits

Reviewing files that changed from the base of the PR and between e9d52aa and c27546d.

📒 Files selected for processing (5)
  • Makefile
  • README.md
  • cmd/publish.go
  • cmd/subscribe.go
  • internal/node/client.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • Makefile

@coderabbitai coderabbitai bot left a comment
🧹 Nitpick comments (4)
internal/session/store.go (3)

159-164: Silently falling through on lock failure may cause duplicate sessions.

When acquireLock fails, the code proceeds to create a new session without caching. This could lead to multiple concurrent processes each creating their own sessions. Consider logging the lock error for observability.

Log lock acquisition failure
 	lf, lockErr := acquireLock()
 	if lockErr != nil {
-		// If locking fails, fall through to create without cache.
+		// If locking fails, fall through to create without cache.
+		fmt.Printf("Warning: could not acquire session lock: %v\n", lockErr)
 		sess, err := CreateSession(proxyURL, clientID, topics, capabilities, exposeAmount)
 		return sess, false, err
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/session/store.go` around lines 159 - 164, When acquireLock() returns
an error before creating the session, log the lockErr for observability before
falling back to creating a session without cache; update the branch in the code
that calls acquireLock() to emit a descriptive error (including the error value)
via the module's logger (or processLogger) so failures to acquire the lock are
visible, then continue to call CreateSession(proxyURL, clientID, topics,
capabilities, exposeAmount) and return as before.

107-124: Topic matching is asymmetric—cached sessions with extra topics will match.

The matches method checks that all requested topics exist in the cached session, but doesn't verify the reverse. A cached session with topics ["A", "B"] will match a request for ["A"], which may be intentional for topic superset reuse. However, this differs from capabilities matching (line 111) which requires exact equality.

If this asymmetry is intentional (reuse a session that covers more topics), consider adding a comment to clarify. Otherwise, align with capabilities:

Option: Enforce exact topic match like capabilities
 func (c *CachedSession) matches(proxyURL, clientID string, topics, capabilities []string) bool {
 	if c.ProxyURL != proxyURL || c.ClientID != clientID {
 		return false
 	}
 	if sortedKey(c.Capabilities) != sortedKey(capabilities) {
 		return false
 	}
-	cached := make(map[string]bool, len(c.Topics))
-	for _, t := range c.Topics {
-		cached[t] = true
-	}
-	for _, t := range topics {
-		if !cached[t] {
-			return false
-		}
-	}
-	return true
+	return sortedKey(c.Topics) == sortedKey(topics)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/session/store.go` around lines 107 - 124, The matches method on
CachedSession currently treats topics asymmetrically (checks request topics are
subset of c.Topics); change this to require exact topic equality to match
capabilities behavior: in CachedSession.matches replace the manual subset logic
over c.Topics and topics with a sortedKey equality check (use
sortedKey(c.Topics) == sortedKey(topics)), ensuring topic comparison is
symmetric and mirrors the capabilities check; if asymmetry was intended instead,
add a brief clarifying comment above CachedSession.matches stating that cached
sessions may be reused when they are a superset of requested topics.

191-198: InvalidateSession silently ignores errors.

The function swallows errors from both cachePath() and os.Remove(). While ignoring "file not found" is reasonable, other errors (permission denied, I/O errors) might warrant logging for debugging purposes.

Log non-trivial removal errors
 func InvalidateSession() {
 	p, err := cachePath()
 	if err != nil {
+		fmt.Printf("Warning: could not get cache path for invalidation: %v\n", err)
 		return
 	}
-	os.Remove(p)
+	if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
+		fmt.Printf("Warning: could not remove session cache: %v\n", err)
+	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/session/store.go` around lines 191 - 198, InvalidateSession
currently swallows errors from cachePath() and os.Remove(); change it to check
and handle errors: if cachePath() returns an error, log that error (don’t
silently return), and when calling os.Remove(p) detect and ignore
os.IsNotExist(err) but log any other errors (permission denied, I/O, etc.) with
sufficient context. Use the project’s logger or the standard log package to emit
a clear message referencing InvalidateSession, cachePath and os.Remove so
non-trivial failures are visible for debugging.
cmd/subscribe.go (1)

62-69: isReadable checks byte range but doesn't fully validate UTF-8 control characters.

The function allows any byte ≥0x20 plus newline/carriage-return/tab, but Unicode defines control characters in the U+007F-U+009F range (the C1 block) that are not "readable." For most use cases this is fine, but consider documenting the heuristic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/subscribe.go` around lines 62 - 69, The isReadable function currently
tests raw bytes and permits any byte >= 0x20 (plus \n, \r, \t) but misses UTF‑8
control codepoints (e.g. U+007F–U+009F); update isReadable to decode the byte
slice as runes and reject any rune where unicode.IsControl(r) is true except for
allowed whitespace runes ('\n','\r','\t'), and ensure utf8.Valid(b) is still
checked; also add a short comment above isReadable documenting this heuristic
and why control runes are rejected.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f6e0a15b-0ca5-47d9-b3e9-1cbef3a7700d

📥 Commits

Reviewing files that changed from the base of the PR and between c27546d and 89f5de9.

📒 Files selected for processing (4)
  • README.md
  • cmd/publish.go
  • cmd/subscribe.go
  • internal/session/store.go

Development

Successfully merging this pull request may close these issues.

Migrate the CLI based on ADR 0002