Force-pushed 93bc16e to 1cc00a5.
This PR can now be reviewed; I'm done with my changes.
  });
}

const existing = sessions.get(input.threadId);
🟢 Low Layers/CopilotAdapter.ts:1679
Concurrent startSession calls for the same threadId race at the sessions.get check (line 1679): both callers find no existing session, both create new clients and sessions, and the second sessions.set (line 1778) overwrites the first. The first session's client and session become unreachable and leak. Consider synchronizing in-flight session creation, e.g., by tracking pending promises in a separate map keyed by threadId.
Evidence trail:
apps/server/src/provider/Layers/CopilotAdapter.ts line 844: `const sessions = new Map<ThreadId, ActiveCopilotSession>();` - plain Map with no synchronization
apps/server/src/provider/Layers/CopilotAdapter.ts line 1679: `const existing = sessions.get(input.threadId);` - the check
apps/server/src/provider/Layers/CopilotAdapter.ts lines 1720-1726: `yield* validateSessionConfiguration(...)` - first async yield point between check and set
apps/server/src/provider/Layers/CopilotAdapter.ts lines 1728-1763: `yield* Effect.tryPromise(...)` - second async yield point between check and set
apps/server/src/provider/Layers/CopilotAdapter.ts line 1778: `sessions.set(input.threadId, record);` - the set that overwrites any racing calls
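The pending-promise synchronization suggested above can be sketched as follows. This is a minimal illustration, not the adapter's real Effect-based code: the `Session` shape and the `createSession`/`getOrStartSession` names are assumptions made for the example.

```typescript
type ThreadId = string;
interface Session { id: ThreadId }

const sessions = new Map<ThreadId, Session>();
// In-flight creations keyed by threadId, so racing callers share one promise.
const pending = new Map<ThreadId, Promise<Session>>();

// Placeholder for the real async client/session setup (an async yield point).
async function createSession(threadId: ThreadId): Promise<Session> {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return { id: threadId };
}

async function getOrStartSession(threadId: ThreadId): Promise<Session> {
  const existing = sessions.get(threadId);
  if (existing) return existing;

  // A concurrent caller lands here and reuses the in-flight promise
  // instead of creating a second client/session that would be overwritten.
  let inFlight = pending.get(threadId);
  if (!inFlight) {
    inFlight = createSession(threadId)
      .then((session) => {
        sessions.set(threadId, session);
        return session;
      })
      .finally(() => pending.delete(threadId));
    pending.set(threadId, inFlight);
  }
  return inFlight;
}
```

With this shape, two concurrent `getOrStartSession` calls for the same `threadId` resolve to the same session object, and the pending entry is cleaned up once creation settles.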
const inputTokens = usage.inputTokens ?? 0;
const cachedInputTokens = usage.cacheReadTokens ?? 0;
const outputTokens = usage.outputTokens ?? 0;
const usedTokens = inputTokens + cachedInputTokens + outputTokens;
Copilot token usage double-counts cached input tokens
Medium Severity
In normalizeCopilotAssistantTokenUsage, usedTokens is calculated as inputTokens + cachedInputTokens + outputTokens, which double-counts cached tokens since inputTokens from the SDK typically already includes cached tokens. This inflates the reported token usage shown to users and may cause incorrect cost or context-window calculations.
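A sketch of the arithmetic, using the field names from the snippet above with made-up numbers, assuming the SDK's `inputTokens` already includes cache reads:

```typescript
// Hypothetical usage payload; the assumption under review is that
// inputTokens already counts the cached portion of the prompt.
const usage = { inputTokens: 1200, cacheReadTokens: 1000, outputTokens: 300 };

const inputTokens = usage.inputTokens ?? 0;
const outputTokens = usage.outputTokens ?? 0;

// Current code: adds cacheReadTokens on top of inputTokens -> inflated total.
const doubleCounted = inputTokens + (usage.cacheReadTokens ?? 0) + outputTokens; // 2500

// Under the stated assumption, cached reads are already inside inputTokens.
const usedTokens = inputTokens + outputTokens; // 1500
```

If the SDK instead reports cached tokens separately from `inputTokens`, the original sum would be correct, so the fix hinges on confirming the SDK's semantics.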
export function normalizeCodexModelOptions(
  caps: ModelCapabilities,
  modelOptions: CodexModelOptions | null | undefined,
): CodexModelOptions | undefined {
  const defaultReasoningEffort = getDefaultEffort(caps);
  const reasoningEffort = trimOrNull(modelOptions?.reasoningEffort) ?? defaultReasoningEffort;
  const fastModeEnabled = modelOptions?.fastMode === true;
🟡 Medium src/model.ts:117
normalizeCodexModelOptions accepts any trimmed string for reasoningEffort without validating it against caps.reasoningEffortLevels. This allows invalid effort values like "invalid" to pass through as valid options, unlike normalizeClaudeModelOptions and normalizeCopilotModelOptions which reject unknown values. Consider adding a hasEffortLevel(caps, reasoningEffort) check before accepting the user-provided value.
const defaultReasoningEffort = getDefaultEffort(caps);
- const reasoningEffort = trimOrNull(modelOptions?.reasoningEffort) ?? defaultReasoningEffort;
+ const trimmedEffort = trimOrNull(modelOptions?.reasoningEffort);
+ const reasoningEffort =
+ trimmedEffort && hasEffortLevel(caps, trimmedEffort) ? trimmedEffort : defaultReasoningEffort;
  const fastModeEnabled = modelOptions?.fastMode === true;
Evidence trail:
packages/shared/src/model.ts lines 117-130: `normalizeCodexModelOptions` - no `hasEffortLevel` check for `reasoningEffort`
packages/shared/src/model.ts lines 143-144: `normalizeClaudeModelOptions` - uses `hasEffortLevel(caps, resolvedEffort)` to validate
packages/shared/src/model.ts lines 164-165: `normalizeCopilotModelOptions` - uses `hasEffortLevel(caps, resolvedReasoningEffort)` to validate
Commit: REVIEWED_COMMIT
function resolveTextGenerationProvider(
  settings: UnifiedSettings,
  providers: ReadonlyArray<ServerProvider>,
): (typeof TEXT_GENERATION_PROVIDERS)[number] {
  const requested = settings.textGenerationModelSelection.provider;
🟢 Low src/modelSelection.ts:187
When settings.textGenerationModelSelection is undefined, resolveTextGenerationProvider throws TypeError: Cannot read property 'provider' of undefined on line 191. The function should use the fallback selection object already computed in resolveAppModelSelectionState, or accept the fallback value directly.
-function resolveTextGenerationProvider(
+function resolveTextGenerationProvider(
settings: UnifiedSettings,
providers: ReadonlyArray<ServerProvider>,
-): (typeof TEXT_GENERATION_PROVIDERS)[number] {
- const requested = settings.textGenerationModelSelection.provider;
+ fallbackSelection: GitTextGenerationModelSelection,
+): (typeof TEXT_GENERATION_PROVIDERS)[number] {
+ const requested = (settings.textGenerationModelSelection ?? fallbackSelection).provider;
Evidence trail:
apps/web/src/modelSelection.ts lines 187-212 (REVIEWED_COMMIT):
- Line 191: `const requested = settings.textGenerationModelSelection.provider;` - accesses .provider without null check
- Lines 208-210: `const selection = settings.textGenerationModelSelection ?? {...}` - creates fallback but doesn't pass it
- Line 212: `const provider = resolveTextGenerationProvider(settings, providers);` - passes original settings, not the fallback selection
    reject(new Error("GitHub Copilot request timed out."));
  }, COPILOT_TIMEOUT_MS);
}),
]);
Timeout promise race causes unhandled rejection
Medium Severity
The Promise.race between turnEnded and the timeout promise never clears the setTimeout. When turnEnded resolves first, the timer still fires and calls reject() on the timeout promise, which is no longer being awaited. This produces an unhandled promise rejection, which in Node.js with default settings (--unhandled-rejections=throw) can terminate the process.
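One way to fix both the stray rejection and the lingering timer (also flagged below for the health-check probe) is to clear the timeout when the race settles. This is an illustrative helper, not the PR's actual code; the function name and signature are assumptions:

```typescript
// Races `work` against a timeout. Clearing the timer in `finally` means the
// losing timeout branch can never fire later, so there is no unhandled
// rejection and no timer closure kept alive after the request completes.
function withTimeout<T>(work: Promise<T>, timeoutMs: number, message: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), timeoutMs);
  });
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}
```

When `work` wins, `finally` cancels the timer before it can reject the abandoned timeout promise; when the timer wins, its rejection is the one actually awaited, and `clearTimeout` on the already-fired timer is a harmless no-op.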
Cursor Bugbot has reviewed your changes and found 2 potential issues.
There are 4 total unresolved issues (including 2 from previous reviews).
      code: COPILOT_PROVIDER_TIMEOUT_CODE,
    }),
  );
}, timeoutMs);
Uncleared setTimeout leaks resources in Promise.race
Low Severity
The setTimeout inside the Promise.race timeout branch is never cleared when the happy path resolves first. This keeps the timer (and its closure capturing session, client, and other local variables) alive for up to COPILOT_TIMEOUT_MS (180 seconds) after the request has already completed. In a server context with many concurrent text generation or health-check requests, these leaked timers accumulate and prevent garbage collection of potentially expensive objects. The same pattern appears in CopilotProvider.ts for the health-check probe.
error instanceof CopilotProviderTimeoutError ||
  (error instanceof CopilotProviderProbeError &&
    isCopilotProviderProbeTimeoutError(error.cause))
) {
Redundant dead-code timeout error branch in probe handler
Low Severity
The condition error instanceof CopilotProviderProbeError && isCopilotProviderProbeTimeoutError(error.cause) on the failure branch can never be true. The catch handler at line 128–133 already converts timeout causes into CopilotProviderTimeoutError (not CopilotProviderProbeError), so a timeout cause will never be wrapped inside a CopilotProviderProbeError. This dead branch adds confusion about when timeout errors are handled.
message:
  error instanceof CopilotProviderCommandMissingError
    ? "GitHub Copilot CLI is not installed or could not be resolved."
    : `Failed to start GitHub Copilot CLI health check: ${error instanceof Error ? error.message : String(error)}.`,
🟠 High Layers/CopilotProvider.ts:168
When CopilotProviderProbeError is caught at line 171, the code uses error.message which returns the default tagged error message (likely just "CopilotProviderProbeError"), not the actual underlying error. The meaningful error is stored in error.cause, so the user-facing message becomes unhelpful like "Failed to start GitHub Copilot CLI health check: CopilotProviderProbeError." instead of the actual failure details. Consider accessing error.cause.message to surface the real error.
message:
error instanceof CopilotProviderCommandMissingError
? "GitHub Copilot CLI is not installed or could not be resolved."
- : `Failed to start GitHub Copilot CLI health check: ${error instanceof Error ? error.message : String(error)}.`,
+ : `Failed to start GitHub Copilot CLI health check: ${error instanceof CopilotProviderProbeError && error.cause instanceof Error ? error.cause.message : error instanceof Error ? error.message : String(error)}.`,

Also found in 1 other location(s):
apps/server/src/git/Layers/CopilotTextGeneration.ts:237
The error message at lines 237-242 is misleading. This check fires when `selectedModel` is `undefined`, which happens when `input.modelSelection.model` was not specified. However, the error message says "GitHub Copilot reasoning effort requires an explicit supported model selection", implying the issue is about reasoning effort. This error triggers even when the user never requested reasoning effort (when `explicitReasoningEffort` is `undefined`), because `validateCopilotReasoningEffort` returns early without error when no reasoning effort is specified. The actual issue is simply that a model selection is required for this text generation feature, not anything related to reasoning effort.
Evidence trail:
1. apps/server/src/provider/Layers/CopilotProvider.ts lines 23-25 (CopilotProviderProbeError definition): `class CopilotProviderProbeError extends Data.TaggedError("CopilotProviderProbeError")<{ cause: unknown; }> {}`
2. apps/server/src/provider/Layers/CopilotProvider.ts line 171 (error message usage): `${error instanceof Error ? error.message : String(error)}`
3. Effect TaggedError behavior confirmed via web search: when no `message` property is defined in type parameters, the `message` defaults to the tag name
const resolveOrchestrationTurnId = (
  providerTurnId: TurnId | undefined,
): TurnId | undefined => {
  if (providerTurnId && currentProviderTurnId && providerTurnId === currentProviderTurnId) {
    return currentTurnId ?? providerTurnId;
  }
  return currentTurnId ?? providerTurnId;
};
🟡 Medium Layers/CopilotAdapter.ts:920
In resolveOrchestrationTurnId, both the if and else branches return currentTurnId ?? providerTurnId — the condition at line 923 has no effect. If the intent was to prefer providerTurnId when it differs from currentProviderTurnId, the else branch should return just providerTurnId.
const resolveOrchestrationTurnId = (
providerTurnId: TurnId | undefined,
): TurnId | undefined => {
if (providerTurnId && currentProviderTurnId && providerTurnId === currentProviderTurnId) {
return currentTurnId ?? providerTurnId;
}
- return currentTurnId ?? providerTurnId;
+ return providerTurnId;
};
Evidence trail:
apps/server/src/provider/Layers/CopilotAdapter.ts lines 920-927 at REVIEWED_COMMIT. The function `resolveOrchestrationTurnId` has identical return statements in both branches: line 924 returns `currentTurnId ?? providerTurnId` and line 926 also returns `currentTurnId ?? providerTurnId`.
Working on making the PR smaller.

I will work on this PR after the Cursor implementation is added, because it introduces a lot of breaking changes that are useful for Copilot.

Could you link to that Cursor PR?


What Changed
Why
UI Changes
Checklist
Note
Add GitHub Copilot as a text generation provider
- … (`CopilotModelSelection`, `CopilotSettings`), server-side `CopilotProvider`/`CopilotAdapter` layers, a `CopilotTextGeneration` layer for commit messages and PR content, and routing in `RoutingTextGeneration`.
- … (`COPILOT_BUILT_IN_MODELS`) with capability presets, model slug aliases, and default model mappings (gpt-5.4).
- … `reasoningEffort` support.
- … `npm-loader.js`, and adds legacy settings migration for `copilotCliPath`/`copilotConfigDir`.
- … `ModelSelection` construction via a new `buildModelSelection` helper and `normalizeModelOptionsForProvider` across all providers.
- … `@github/copilot-sdk` as a runtime dependency; Copilot availability depends on CLI install and authentication status detected at startup.

Macroscope summarized bd60263.
Note
Medium Risk
Medium risk because it introduces a new runtime dependency on the Copilot SDK/CLI and touches provider routing, session persistence, and model selection paths that affect all providers.
Overview
Adds GitHub Copilot as a new provider (`copilot`) alongside `codex`/`claudeAgent`, including provider health probing (`CopilotProvider`), a full session adapter with event mapping/approval + user-input handling (`CopilotAdapter`), and persistence support so threads can remain bound to `copilot` across restarts.

Introduces `CopilotTextGenerationLive` and routes git text-generation (commit/PR/branch) through Copilot with strict JSON decoding, timeouts, model + reasoning-effort validation, and attachment materialization; updates the routing layer and the `TextGenerationProvider` union accordingly.

Updates settings/model plumbing and UI surfaces: adds Copilot settings fields (binary path/config dir/custom models), extends registries to include Copilot, updates the model picker to show Copilot (GitHub icon + optional allowed-provider filtering), and tweaks the chat work timeline to choose better icons/previews for tool/file-change entries.
Written by Cursor Bugbot for commit bd60263.