Add sales assistant example — real-time AI meeting coach#351
macOS overlay app (Flutter) that captures screen + system audio via Stream Video and feeds it to a Python agent (Deepgram STT + Gemini LLM). Coaching suggestions are streamed back to the overlay via Stream Chat. Co-authored-by: Cursor <cursoragent@cursor.com>
…ates
- Use NSVisualEffectView + macos_window_utils for real window translucency
- Add dark tint so text stays readable over any background
- Handle message.updated events so streamed agent responses show full text
- Soften UI colors for a lighter HUD feel

Co-authored-by: Cursor <cursoragent@cursor.com>
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

A new Sales Assistant example has been added to the repository, featuring a Python HTTP server using Vision Agents that provides real-time coaching suggestions via Stream Video API integration. The example includes environment configuration, documentation, and agent implementation with Deepgram speech-to-text and Gemini LLM capabilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Client as Client App
    participant FastAPI as FastAPI Server
    participant Stream as Stream Video API
    participant Agent as Vision Agent
    participant Deepgram as Deepgram STT
    participant Gemini as Gemini LLM
    User->>Client: Initiates call
    Client->>FastAPI: GET /auth/token?user_id=X
    FastAPI->>Stream: Create/retrieve user
    FastAPI-->>Client: Return Stream token
    Client->>Stream: Join video call
    Client->>FastAPI: Trigger agent
    FastAPI->>Agent: create_agent()
    Agent->>Stream: Join call session
    Stream->>Agent: Audio stream
    Agent->>Deepgram: Transcribe speech
    Deepgram-->>Agent: Transcript
    Agent->>Gemini: Generate coaching suggestion
    Gemini-->>Agent: Coaching text
    Agent->>Stream: Send to Stream Chat
    Stream->>Client: Display coaching overlay
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
README.md (3)
92-96: ⚠️ Potential issue | 🟠 Major

Reconsider mentioning "job interview cheating" in the official README.
Line 95 explicitly lists "job interview cheating" as a use case. Even if technically accurate, this phrasing in an official open-source repo README could attract negative attention and raise ethical/reputational concerns for the project. Consider rephrasing to something neutral like "interview preparation" or removing it.
98-98: ⚠️ Potential issue | 🟡 Minor

"Demo video" is a placeholder with no actual content.

Line 98 reads `Demo video` but there's no embedded video or link. This looks like an unfinished TODO — either add the video or remove the placeholder before merging.
100-108: ⚠️ Potential issue | 🟠 Major

Update README.md example to match the actual sales assistant implementation.

The code snippet at lines 100-108 shows `StreamEdge()`, `gemini.Realtime()`, and claims "no need to set tts, or sst", but the actual sales assistant example in `examples/sales_assistant/agent/main.py` uses `getstream_edge.Edge()`, `gemini.LLM("gemini-2.5-flash")`, and `deepgram.STT()`. Align the README snippet with the working implementation to avoid confusing users.
🤖 Fix all issues with AI agents
In `@examples/sales_assistant/agent/main.py`:
- Around line 83-90: The /auth/token endpoint (create_token) currently mints
Stream tokens for any user_id without authentication and reads STREAM_API_KEY
with os.environ[...] which raises a KeyError; add a clear inline comment above
the endpoint warning that this is an insecure, local-dev-only endpoint and must
not be used in production, and change the STREAM_API_KEY loading to a friendlier
check (e.g., use os.getenv or try/except) that validates _api_key at startup and
raises a clear, descriptive RuntimeError or logs a helpful message if missing;
ensure references are to _stream_client, create_token, and _api_key so the
reader can locate and harden both the token endpoint and environment variable
handling.
In `@examples/sales_assistant/app/analysis_options.yaml`:
- Around line 7-9: The analyzer exclude glob currently uses "lib/**/*.*.dart"
which only matches filenames containing a dot before ".dart"; update the pattern
in analysis_options.yaml to "lib/**/*.dart" so that all Dart files under lib/
(e.g., agent_service.dart, main.dart, overlay_app.dart, overlay_screen.dart) are
correctly excluded by the analyzer.
In `@examples/sales_assistant/app/lib/agent_service.dart`:
- Around line 32-34: The code currently sets _sessionId from
jsonDecode(response.body) and returns an empty string if session_id is missing;
instead detect when data['session_id'] is null and throw a descriptive exception
so the caller doesn't continue with a phantom session. Update the block that
assigns _sessionId (using jsonDecode(response.body) and the _sessionId variable)
to throw an exception (including the response body or a clear message) when
session_id is null/absent rather than returning ''.
- Around line 19-23: The http.post call that creates the session (the expression
assigning response from http.post with Uri.parse('$baseUrl/sessions')) has no
timeout and can hang; wrap the HTTP request with a timeout (e.g., using the
Future.timeout API) and handle TimeoutException (and other network exceptions)
so the UI can surface an error instead of freezing—update the code around the
response assignment, add a .timeout(Duration(seconds: X)) to the http.post
future and add try/catch that catches TimeoutException (and
SocketException/HttpException) to return or throw a clear error that the
caller/UI can display.
In `@examples/sales_assistant/app/lib/overlay_app.dart`:
- Around line 37-43: The _fetchToken function currently calls http.get without a
timeout and decodes response.body without checking status, so network hangs or
non-200 responses cause opaque failures; update _fetchToken to call
http.get(Uri.parse('$_agentServerUrl/auth/token?user_id=$_userId')) with a
reasonable timeout (e.g. using .timeout), check response.statusCode (expect 200)
before decoding, and if not 200 throw a clear exception including the statusCode
and response.body; also catch TimeoutException and other exceptions around the
request and rethrow/apply a user-friendly error so callers can surface a
meaningful message instead of hanging.
- Around line 45-79: _init may leave a connected StreamVideo instance
(_videoClient) alive if _chatClient.connectUser throws; modify _init to ensure
partial cleanup: after creating/connecting _videoClient but before calling
_chatClient.connectUser, track that _videoClient is active and in the catch
block call the appropriate disconnect/dispose method on _videoClient and null it
(or await its shutdown) so the resource is released, then update state
(_initialized/_error) as before; alternatively wrap the _chatClient.connectUser
call in its own try/catch and on failure clean up _videoClient (use the actual
methods on StreamVideo for closing) and only set _initialized true after both
clients are successfully connected.
In `@examples/sales_assistant/app/lib/overlay_screen.dart`:
- Around line 40-44: dispose currently calls _stop which invokes setState after
the widget may be unmounted causing "setState() called after dispose"; update
_stop to avoid calling setState when unmounted (wrap setState calls in if
(mounted) { ... } or return early if !mounted), move any synchronous cleanup
(e.g., _scrollController.dispose()) into dispose itself, and keep async
fire-and-forget operations like call.leave() and _agentService.stopSession()
without relying on setState callbacks so they don’t mutate widget state after
dispose.
In `@examples/sales_assistant/app/macos/.gitignore`:
- Line 6: Remove the unused ignore pattern "**/dgph" from the .gitignore entry
shown; delete the standalone "**/dgph" line (it doesn’t match any repo files or
standard Xcode artifacts) so the ignore file only contains relevant patterns.
In `@examples/sales_assistant/README.md`:
- Around line 92-112: The fenced code block showing the project tree (starting
with "sales_assistant/") is missing a language identifier which triggers MD040;
update the README.md's code fence that contains the project structure to include
a language specifier such as "text" or "plaintext" (e.g., change ``` to ```text)
so the block is properly marked and the markdownlint warning is resolved.
🧹 Nitpick comments (8)
examples/sales_assistant/agent/.gitignore (1)
1-5: Essential Python patterns covered.

The .gitignore includes the fundamental patterns for a Python project: environment files, virtual environments, bytecode, and the uv lock file.
For enhanced coverage, you could optionally consider adding patterns for build artifacts, test outputs, and platform-specific files, though the current set handles the essentials well for this example project.
🎨 Optional additions for more comprehensive coverage
```diff
 .env
 .venv/
 __pycache__/
 *.pyc
 uv.lock
+*.pyo
+.pytest_cache/
+.coverage
+htmlcov/
+dist/
+build/
+*.egg-info/
+.DS_Store
```

examples/sales_assistant/app/.gitignore (1)
1-45: LGTM! Standard Flutter .gitignore is complete and appropriate.

This .gitignore covers all the essential patterns for a Flutter macOS application: build artifacts, IDE metadata, Dart/Flutter tooling outputs, and symbolication files. The commented VS Code section (lines 21-24) helpfully explains why it's excluded by default.
Optional: Consider adding macOS-specific Flutter patterns
If you'd like to be more explicit about macOS Flutter ephemeral files, you could add:
```diff
 # Flutter/Dart/Pub related
 **/doc/api/
 **/ios/Flutter/.last_build_id
+**/macos/Flutter/ephemeral/
+**/macos/Flutter/Flutter-Generated.xcconfig
+**/macos/Flutter/macos_plugins.json
 .dart_tool/
```

However, `/build/` already captures most of these, so this addition is purely optional.

examples/sales_assistant/app/macos/Runner/MainFlutterWindow.swift (1)

23-33: Consider removing the `.resizable` style mask for the fixed-size overlay.

The XIB defines the window with `resizable="YES"`, and this code adds `.fullSizeContentView` but never removes `.resizable`. If the user drags the window edge, the rounded corners / blur layer will still resize correctly (thanks to autoresizingMask), but a fixed overlay typically shouldn't be resizable. This is minor and purely a UX polish.

♻️ Optional: lock the overlay size

```diff
 // Translucent chrome
 self.styleMask.insert(.fullSizeContentView)
+self.styleMask.remove(.resizable)
 self.titlebarAppearsTransparent = true
```

examples/sales_assistant/app/README.md (1)
1-16: This README is Flutter boilerplate — consider replacing with project-specific content or a redirect.

Lines 7–16 are the default `flutter create` template text. Since the parent `examples/sales_assistant/README.md` already has thorough setup and usage docs, this file should either link there or provide app-specific details (build instructions, environment config, etc.) rather than generic "first Flutter project" links.

examples/sales_assistant/app/pubspec.yaml (1)
21-25: Git branch dependency override is fragile.

Pinning `stream_webrtc_flutter` to the `feature/macos-screen-audio-capture` branch means builds will break if the branch is rebased, renamed, or merged. Consider pinning to a specific commit SHA instead.

Suggested change

```diff
 dependency_overrides:
   stream_webrtc_flutter:
     git:
       url: https://github.com/GetStream/webrtc-flutter.git
-      ref: feature/macos-screen-audio-capture
+      ref: <specific-commit-sha> # from feature/macos-screen-audio-capture
```

examples/sales_assistant/README.md (1)
56-67: Hardcoding API keys in source files is discouraged — even for examples.

Lines 57–58 and 67 instruct users to embed their Stream API key and token directly in `overlay_app.dart`. Consider reading these from environment variables, a `.env` file, or at minimum a `--dart-define` flag, to avoid accidental commits of real credentials.

examples/sales_assistant/agent/main.py (1)
43-70: Docstring could use an `Args` section for the parameters.

The function accepts `agent`, `call_type`, `call_id`, and `**kwargs`, yet the docstring is a single line. The coding guidelines ask for Google-style docstrings, which prescribe an `Args:` block when a function takes parameters.

Suggested docstring

```diff
 async def join_call(
     agent: Agent,
     call_type: str,
     call_id: str,
     **kwargs,
 ) -> None:
-    """Join the Stream Video call and coach until it ends."""
+    """Join the Stream Video call and coach until it ends.
+
+    Args:
+        agent: The agent instance that will provide coaching.
+        call_type: Stream Video call type identifier.
+        call_id: Unique identifier for the call to join.
+        **kwargs: Additional keyword arguments forwarded by the launcher.
+    """
```

As per coding guidelines, "Docstrings should follow the Google style guide for docstrings".
examples/sales_assistant/app/lib/overlay_screen.dart (1)
86-86: Nit: step numbering jumps from 2 to 4.

The comments go "1. Create a unique call", "2. Join the call", then "4. Watch the Stream Chat channel" — step 3 slipped through a crack, like a missing stanza. A small confusion for future readers.
```python
_stream_client = Stream()
_api_key = os.environ["STREAM_API_KEY"]


@runner.fast_api.get("/auth/token")
async def create_token(user_id: str = Query(...)) -> JSONResponse:
    """Generate a Stream user token for the Flutter client."""
    token = _stream_client.create_token(user_id)
    return JSONResponse({"token": token, "apiKey": _api_key})
```
Unauthenticated token endpoint — fine for an example, but document the risk.
/auth/token mints a Stream token for any user_id with zero authentication. In production this would be an authorization bypass. Since this is a local-dev example, a short comment warning future readers not to deploy it as-is would go a long way — a dark mirror reflecting every stranger's face, you might say.
Also, os.environ["STREAM_API_KEY"] on line 84 will raise a bare KeyError if the variable is absent, which produces a confusing traceback. Consider a friendlier failure:
Suggested improvement

```diff
-_api_key = os.environ["STREAM_API_KEY"]
+_api_key = os.environ.get("STREAM_API_KEY")
+if not _api_key:
+    raise SystemExit(
+        "STREAM_API_KEY environment variable is not set. "
+        "Add it to your .env file or export it before running the agent."
+    )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 _stream_client = Stream()
-_api_key = os.environ["STREAM_API_KEY"]
+_api_key = os.environ.get("STREAM_API_KEY")
+if not _api_key:
+    raise SystemExit(
+        "STREAM_API_KEY environment variable is not set. "
+        "Add it to your .env file or export it before running the agent."
+    )


 @runner.fast_api.get("/auth/token")
 async def create_token(user_id: str = Query(...)) -> JSONResponse:
     """Generate a Stream user token for the Flutter client."""
     token = _stream_client.create_token(user_id)
     return JSONResponse({"token": token, "apiKey": _api_key})
```
🤖 Prompt for AI Agents
In `@examples/sales_assistant/agent/main.py` around lines 83 - 90, The /auth/token
endpoint (create_token) currently mints Stream tokens for any user_id without
authentication and reads STREAM_API_KEY with os.environ[...] which raises a
KeyError; add a clear inline comment above the endpoint warning that this is an
insecure, local-dev-only endpoint and must not be used in production, and change
the STREAM_API_KEY loading to a friendlier check (e.g., use os.getenv or
try/except) that validates _api_key at startup and raises a clear, descriptive
RuntimeError or logs a helpful message if missing; ensure references are to
_stream_client, create_token, and _api_key so the reader can locate and harden
both the token endpoint and environment variable handling.
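The fail-fast environment check suggested above generalizes to a small helper. Here is a minimal Python sketch; `require_env` is a hypothetical name, not part of the example's code:

```python
import os


def require_env(name: str) -> str:
    """Return an environment variable's value, failing loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file or export it first."
        )
    return value
```

Calling `require_env("STREAM_API_KEY")` at startup turns a bare `KeyError` traceback into a single actionable message.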
```yaml
analyzer:
  exclude:
    - lib/**/*.*.dart
```
🧩 Analysis chain
🏁 Script executed:

```shell
find examples/sales_assistant/app/lib -type f -name "*.dart" 2>/dev/null | head -20
```

Repository: GetStream/Vision-Agents
Length of output: 264

🏁 Script executed:

```shell
find examples/sales_assistant -name "analysis_options.yaml" -type f 2>/dev/null
```

Repository: GetStream/Vision-Agents
Length of output: 117

🏁 Script executed:

```shell
cat examples/sales_assistant/app/analysis_options.yaml
```

Repository: GetStream/Vision-Agents
Length of output: 204
Exclude glob doesn't match any Dart files in lib/.
The pattern lib/**/*.*.dart requires a literal dot in the filename before .dart (e.g., foo.bar.dart). All actual Dart files in the directory use underscores or simple names: agent_service.dart, main.dart, overlay_app.dart, overlay_screen.dart. None match this pattern, so the analyzer likely scans the entire lib/ directory despite this exclusion rule.
If the intent is to exclude all Dart files in lib/, use lib/**/*.dart instead.
🔧 Suggested fix

```diff
 analyzer:
   exclude:
-    - lib/**/*.*.dart
+    - lib/**/*.dart
```

📝 Committable suggestion
```diff
 analyzer:
   exclude:
-    - lib/**/*.*.dart
+    - lib/**/*.dart
```
🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/analysis_options.yaml` around lines 7 - 9, The
analyzer exclude glob currently uses "lib/**/*.*.dart" which only matches
filenames containing a dot before ".dart"; update the pattern in
analysis_options.yaml to "lib/**/*.dart" so that all Dart files under lib/
(e.g., agent_service.dart, main.dart, overlay_app.dart, overlay_screen.dart) are
correctly excluded by the analyzer.
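The filename requirement is easy to see with Python's `fnmatch`. Its glob semantics are not identical to the Dart analyzer's engine, but the `*.*.dart` tail behaves the same way: a rough illustration, not an exact reproduction.

```python
from fnmatch import fnmatch

# The "*.*.dart" tail of the broken glob requires a dot before ".dart".
tail = "*.*.dart"

# Real files in lib/ have no extra dot, so none of them match:
for name in ["agent_service.dart", "main.dart", "overlay_app.dart"]:
    assert not fnmatch(name, tail)

# Only "double-extension" names (e.g. generated code) would be excluded:
assert fnmatch("user.g.dart", tail)
assert fnmatch("overlay_screen.freezed.dart", tail)
print("broken glob only matches double-extension filenames")
```

This is why `lib/**/*.*.dart` excludes nothing in this app, while `lib/**/*.dart` would exclude everything.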
```dart
final response = await http.post(
  Uri.parse('$baseUrl/sessions'),
  headers: {'Content-Type': 'application/json'},
  body: jsonEncode({'call_id': callId, 'call_type': callType}),
);
```
No timeout on the HTTP request — the app will hang indefinitely if the agent server is unreachable.
The http.post call has no timeout. If the server at localhost:8000 isn't running, the UI will freeze on "Starting…" with no feedback. A timeout keeps the darkness from swallowing your users whole.
Suggested fix

```diff
 final response = await http.post(
   Uri.parse('$baseUrl/sessions'),
   headers: {'Content-Type': 'application/json'},
   body: jsonEncode({'call_id': callId, 'call_type': callType}),
-);
+).timeout(const Duration(seconds: 10));
```

🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/agent_service.dart` around lines 19 - 23,
The http.post call that creates the session (the expression assigning response
from http.post with Uri.parse('$baseUrl/sessions')) has no timeout and can hang;
wrap the HTTP request with a timeout (e.g., using the Future.timeout API) and
handle TimeoutException (and other network exceptions) so the UI can surface an
error instead of freezing—update the code around the response assignment, add a
.timeout(Duration(seconds: X)) to the http.post future and add try/catch that
catches TimeoutException (and SocketException/HttpException) to return or throw
a clear error that the caller/UI can display.
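The underlying failure mode, an unbounded await on an unreachable peer, is language-agnostic. A minimal Python sketch of the same fix using `asyncio.wait_for` (the Dart code would use `Future.timeout` instead; the function names here are illustrative):

```python
import asyncio


async def create_session() -> str:
    """Stand-in for a request to an unreachable agent server: it never returns."""
    await asyncio.sleep(3600)
    return "session_id"


async def start_with_timeout():
    try:
        # Bound the wait so the caller can report an error instead of freezing.
        return await asyncio.wait_for(create_session(), timeout=0.1)
    except asyncio.TimeoutError:
        return None


result = asyncio.run(start_with_timeout())
print(result)  # None — the call failed fast instead of hanging
```

The key point mirrors the review comment: without the timeout, the coroutine (like the Dart UI) would wait forever.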
```dart
final data = jsonDecode(response.body) as Map<String, dynamic>;
_sessionId = data['session_id'] as String?;
return _sessionId ?? '';
```
Returning an empty string when session_id is null silently hides a server-side problem.
If the server responds with 200 but no session_id, returning '' instead of throwing means the caller proceeds with a phantom session. Consider throwing an exception here so the issue surfaces immediately.
Suggested fix

```diff
 final data = jsonDecode(response.body) as Map<String, dynamic>;
 _sessionId = data['session_id'] as String?;
-return _sessionId ?? '';
+if (_sessionId == null || _sessionId!.isEmpty) {
+  throw Exception('Server returned no session_id');
+}
+return _sessionId!;
```

📝 Committable suggestion
```diff
 final data = jsonDecode(response.body) as Map<String, dynamic>;
 _sessionId = data['session_id'] as String?;
-return _sessionId ?? '';
+if (_sessionId == null || _sessionId!.isEmpty) {
+  throw Exception('Server returned no session_id');
+}
+return _sessionId!;
```
🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/agent_service.dart` around lines 32 - 34,
The code currently sets _sessionId from jsonDecode(response.body) and returns an
empty string if session_id is missing; instead detect when data['session_id'] is
null and throw a descriptive exception so the caller doesn't continue with a
phantom session. Update the block that assigns _sessionId (using
jsonDecode(response.body) and the _sessionId variable) to throw an exception
(including the response body or a clear message) when session_id is null/absent
rather than returning ''.
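The throw-on-missing-field guard translates directly to the agent's language. A hypothetical Python helper (`parse_session` is an illustrative name, not part of the PR) showing the same idea:

```python
import json


def parse_session(body: str) -> str:
    """Extract session_id from a JSON response, refusing phantom sessions."""
    data = json.loads(body)
    session_id = data.get("session_id")
    if not session_id:
        # Fail loudly instead of handing the caller an empty sentinel value.
        raise ValueError(f"Server returned no session_id: {body!r}")
    return session_id


print(parse_session('{"session_id": "abc123"}'))  # abc123
```

A 200 response with a missing field now surfaces immediately, instead of being masked by an empty string.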
```dart
Future<({String token, String apiKey})> _fetchToken() async {
  final uri = Uri.parse('$_agentServerUrl/auth/token?user_id=$_userId');

  final response = await http.get(uri);
  final body = json.decode(response.body) as Map<String, dynamic>;
  return (token: body['token'] as String, apiKey: body['apiKey'] as String);
}
```
_fetchToken has no timeout and no status-code check.
If the agent server is unreachable, http.get(uri) blocks indefinitely, leaving the user staring at "Connecting to Stream…" with no recourse. And a non-200 response is blindly decoded as JSON, which will throw an opaque error.
Suggested hardening

```diff
 Future<({String token, String apiKey})> _fetchToken() async {
   final uri = Uri.parse('$_agentServerUrl/auth/token?user_id=$_userId');
-  final response = await http.get(uri);
+  final response = await http.get(uri).timeout(const Duration(seconds: 5));
+  if (response.statusCode != 200) {
+    throw Exception('Token request failed (${response.statusCode})');
+  }
   final body = json.decode(response.body) as Map<String, dynamic>;
   return (token: body['token'] as String, apiKey: body['apiKey'] as String);
 }
```

🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_app.dart` around lines 37 - 43, The
_fetchToken function currently calls http.get without a timeout and decodes
response.body without checking status, so network hangs or non-200 responses
cause opaque failures; update _fetchToken to call
http.get(Uri.parse('$_agentServerUrl/auth/token?user_id=$_userId')) with a
reasonable timeout (e.g. using .timeout), check response.statusCode (expect 200)
before decoding, and if not 200 throw a clear exception including the statusCode
and response.body; also catch TimeoutException and other exceptions around the
request and rethrow/apply a user-friendly error so callers can surface a
meaningful message instead of hanging.
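Separating the status check from the decode step keeps errors legible in any language. A small Python sketch of the same idea (`decode_token_response` is an illustrative name, not from the example):

```python
import json


def decode_token_response(status: int, body: str) -> tuple[str, str]:
    """Validate the HTTP status before trusting the body as JSON."""
    if status != 200:
        # Include status and body so the failure is diagnosable, not opaque.
        raise RuntimeError(f"Token request failed ({status}): {body}")
    data = json.loads(body)
    return data["token"], data["apiKey"]


print(decode_token_response(200, '{"token": "t1", "apiKey": "k1"}'))
```

A 500 with an HTML error page now produces "Token request failed (500): ..." instead of a confusing JSON decode traceback.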
```dart
Future<void> _init() async {
  try {
    final (:token, :apiKey) = await _fetchToken();

    // --- Stream Video client ---
    final videoUser = User.regular(
      userId: _userId,
      name: 'Sales Assistant User',
      role: 'admin',
    );

    _videoClient = StreamVideo(
      apiKey,
      user: videoUser,
      userToken: token,
      tokenLoader: (_) async {
        final result = await _fetchToken();
        return result.token;
      },
      options: const StreamVideoOptions(logPriority: Priority.info),
    );
    await _videoClient.connect();

    // --- Stream Chat client (for receiving agent messages) ---
    _chatClient = chat.StreamChatClient(apiKey, logLevel: chat.Level.WARNING);
    await _chatClient.connectUser(
      chat.User(id: _userId),
      token,
    );

    setState(() => _initialized = true);
  } catch (e) {
    setState(() => _error = e.toString());
  }
}
```
Partial initialization leaks the StreamVideo client if the chat setup fails.
If _videoClient is successfully created and connected (lines 56–66) but then _chatClient.connectUser(...) throws (line 70–73), execution jumps to catch, _initialized stays false, and dispose() never cleans up the video client. The StreamVideo instance — created, connected, breathing — is simply abandoned, like a bell jar nobody lifts.
Consider either cleaning up in the catch block or restructuring so partial resources are always tracked:
Suggested approach — cleanup on partial failure
} catch (e) {
+ // Clean up any partially initialised clients
+ try {
+ _videoClient.disconnect();
+ StreamVideo.reset();
+ } catch (_) {}
setState(() => _error = e.toString());
}🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_app.dart` around lines 45 - 79,
_init may leave a connected StreamVideo instance (_videoClient) alive if
_chatClient.connectUser throws; modify _init to ensure partial cleanup: after
creating/connecting _videoClient but before calling _chatClient.connectUser,
track that _videoClient is active and in the catch block call the appropriate
disconnect/dispose method on _videoClient and null it (or await its shutdown) so
the resource is released, then update state (_initialized/_error) as before;
alternatively wrap the _chatClient.connectUser call in its own try/catch and on
failure clean up _videoClient (use the actual methods on StreamVideo for
closing) and only set _initialized true after both clients are successfully
connected.
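The cleanup-on-partial-failure pattern has a tidy stdlib analogue in Python's `contextlib.ExitStack`: rollbacks registered so far run if a later step throws, and `pop_all()` cancels them once every step succeeded. A sketch with stand-in clients (nothing here is the Stream SDK):

```python
from contextlib import ExitStack


class FakeClient:
    """Stand-in for a connected SDK client."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.connected = True

    def disconnect(self) -> None:
        self.connected = False


def connect_all(video: FakeClient, chat_fails: bool) -> None:
    with ExitStack() as stack:
        # Register the rollback as soon as the first client is live.
        stack.callback(video.disconnect)
        if chat_fails:
            raise RuntimeError("chat connect failed")
        # Success path: transfer callbacks off the stack so nothing is undone.
        stack.pop_all()


video = FakeClient("video")
try:
    connect_all(video, chat_fails=True)
except RuntimeError:
    pass
print(video.connected)  # False — the partially initialised client was released
```

The same shape works in Dart with an explicit try/catch, as the suggested diff above shows; the point is that every acquired resource has a registered undo until the whole sequence commits.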
```dart
@override
void dispose() {
  _scrollController.dispose();
  _stop();
  super.dispose();
}
```
_stop() calls setState but dispose() invokes it after the widget is unmounted — this will throw.
dispose() calls _stop(), which in turn calls setState(...) on lines 144 and 157. Flutter will throw a "setState() called after dispose()" error. The async futures from call.leave() and _agentService.stopSession() are also fire-and-forget since dispose is synchronous.
Guard the setState calls in _stop, or extract the synchronous cleanup into dispose directly:
Suggested fix — guard setState in _stop
Future<void> _stop() async {
final call = _call;
if (call == null) return;
- setState(() => _status = 'Stopping…');
+ if (mounted) setState(() => _status = 'Stopping…');
_chatSubscription?.cancel();
_chatSubscription = null;
try {
await call.leave();
} catch (_) {}
await _agentService.stopSession();
_call = null;
_chatChannel = null;
- setState(() {
- _isActive = false;
- _status = 'Ready';
- });
+ if (mounted) {
+ setState(() {
+ _isActive = false;
+ _status = 'Ready';
+ });
+ }
- _addSuggestion('Session ended.', isSystem: true);
+ if (mounted) _addSuggestion('Session ended.', isSystem: true);
}🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_screen.dart` around lines 40 - 44,
dispose currently calls _stop which invokes setState after the widget may be
unmounted causing "setState() called after dispose"; update _stop to avoid
calling setState when unmounted (wrap setState calls in if (mounted) { ... } or
return early if !mounted), move any synchronous cleanup (e.g.,
_scrollController.dispose()) into dispose itself, and keep async fire-and-forget
operations like call.leave() and _agentService.stopSession() without relying on
setState callbacks so they don’t mutate widget state after dispose.
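The "guard state updates after teardown" pattern is not Flutter-specific. A minimal Python analogue with a `mounted` flag (class and method names are illustrative only):

```python
class OverlayState:
    """Toy model of a widget whose state must not change after dispose."""

    def __init__(self) -> None:
        self.mounted = True
        self.status = "Ready"

    def set_state(self, status: str) -> None:
        if not self.mounted:
            return  # drop late updates instead of crashing
        self.status = status

    def dispose(self) -> None:
        self.mounted = False


widget = OverlayState()
widget.set_state("Stopping…")
widget.dispose()
widget.set_state("Ready")  # ignored: widget is no longer mounted
print(widget.status)  # Stopping…
```

Async callbacks that outlive the object (like the chat subscription here) hit the guard and become harmless no-ops.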
```
sales_assistant/
├── agent/
│   ├── main.py              # Agent definition + HTTP server
│   ├── instructions.md      # System prompt for the coaching agent
│   ├── pyproject.toml       # Python dependencies
│   └── .env.example         # API key template
├── app/
│   ├── lib/
│   │   ├── main.dart            # Entry point + translucent window setup
│   │   ├── overlay_app.dart     # Stream Video initialization
│   │   ├── overlay_screen.dart  # Main UI (Start/Stop + suggestion cards)
│   │   └── agent_service.dart   # HTTP client for agent server
│   ├── macos/
│   │   └── Runner/
│   │       ├── MainFlutterWindow.swift  # Translucent NSWindow config
│   │       ├── DebugProfile.entitlements
│   │       └── Release.entitlements
│   └── pubspec.yaml
└── README.md
```
Add a language identifier to the fenced code block.
The project structure code block on line 92 lacks a language specifier, which triggers a markdownlint warning (MD040). Use text or plaintext as the language.
Fix

````diff
-```
+```text
 sales_assistant/
````

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 92-92: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@examples/sales_assistant/README.md` around lines 92 - 112, The fenced code
block showing the project tree (starting with "sales_assistant/") is missing a
language identifier which triggers MD040; update the README.md's code fence that
contains the project structure to include a language specifier such as "text" or
"plaintext" (e.g., change ``` to ```text) so the block is properly marked and
the markdownlint warning is resolved.
…c context

Adds a PUT /context endpoint on the agent server and a tune-icon dialog in the Flutter overlay so users can describe the meeting before starting a coaching session. The context is injected into the LLM prompt.

Co-authored-by: Cursor <cursoragent@cursor.com>
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@examples/sales_assistant/app/lib/agent_service.dart`:
- Around line 15-21: The setContext method currently ignores the HTTP response
and swallows failures; modify setContext to capture the http.put response, check
response.statusCode (treat non-2xx as failure), and either throw an exception
with a helpful message (including statusCode and response body) or return a
failure result so callers/UI can react; reference the setContext function and
mirror the status-check/error-handling approach used in startSession to ensure
errors are propagated instead of silently ignored.
In `@examples/sales_assistant/app/lib/overlay_screen.dart`:
- Around line 52-141: The inline numbered comments in _start() skip "step 3"
(they go 1,2,4); update the comment numbers to be sequential so readers aren't
confused—e.g., change the comment that currently begins "// 4. Watch the Stream
Chat..." to "// 3. Watch the Stream Chat..." (and adjust any later numbered
steps if present) to keep the comments aligned with the actual steps in
_start().
- Around line 228-234: _updateSuggestion can be invoked from the chat
subscription after the widget is disposed, so add the usual mounted guard:
return early if mounted is false, or check mounted before calling setState.
Update the _updateSuggestion(String messageId, String text) function to verify
mounted and only call setState when mounted (e.g., if (!mounted) return; before
computing/setting _suggestions or wrap the setState block with a mounted check)
to avoid calling setState on a disposed widget.
- Around line 113-134: The catch for _agentService.startSession currently logs
the error and adds an error suggestion but then execution continues to set
_isActive = true and _status = 'Coaching active'; fix by preventing the UI from
marking the session active when startSession fails: inside the catch for
_agentService.startSession (or immediately after it) either return early to skip
the following _call assignment and setState block, or update state to reflect
failure (set _isActive = false, _isStarting = false and set _status to an error
message) and ensure no success suggestion is added; adjust references to _call,
_isActive, _isStarting, _status, and _addSuggestion accordingly so the UI
accurately represents the agent connection result.
🧹 Nitpick comments (2)
examples/sales_assistant/agent/main.py (2)
**93-99: `set_context` uses a raw `dict` — consider a Pydantic model for request validation.**

FastAPI will accept `dict` as a body type, but you lose automatic validation and OpenAPI schema generation. A small model makes the contract explicit and gives callers better error messages on malformed requests.

Suggested fix:

```diff
+from pydantic import BaseModel
+
+class ContextRequest(BaseModel):
+    context: str = ""
+
 @runner.fast_api.put("/context")
-async def set_context(request: dict) -> JSONResponse:
+async def set_context(request: ContextRequest) -> JSONResponse:
     """Set meeting context for the next coaching session."""
     global _meeting_context
-    _meeting_context = request.get("context", "")
+    _meeting_context = request.context
```

**24-26: Global `_meeting_context` is shared across all concurrent sessions — context can leak between users.**

The module-level `_meeting_context` is written by `set_context` and read by `join_call` without any synchronization or session scoping. If two users set context and start sessions close together, one gets the other's context. Fine for a single-user demo, but worth a brief comment in the code to flag the limitation.

Also applies to: 70-71, 96-97
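One way to act on the session-scoping nitpick is to key context by call ID instead of using one module-level string. The sketch below is illustrative only; `SessionContextStore` and the call-ID keying are invented here, not part of the example:

```python
# Illustrative sketch: per-session meeting context keyed by call ID.
# SessionContextStore is a made-up name, not part of the Vision Agents example.

class SessionContextStore:
    """Holds meeting context per call so concurrent sessions don't clobber each other."""

    def __init__(self) -> None:
        self._contexts: dict[str, str] = {}

    def set(self, call_id: str, context: str) -> None:
        self._contexts[call_id] = context

    def pop(self, call_id: str) -> str:
        # Remove on read so stale context never leaks into a later session.
        return self._contexts.pop(call_id, "")


store = SessionContextStore()
store.set("call-a", "Renewal call with Acme")
store.set("call-b", "Cold-outreach demo")
print(store.pop("call-a"))  # each session reads only its own context
```

Under this shape, `PUT /context` would carry the call ID alongside the text, and `join_call` would `pop` the entry when the session starts.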
```dart
Future<void> setContext(String context) async {
  await http.put(
    Uri.parse('$baseUrl/context'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'context': context}),
  );
}
```
setContext silently swallows HTTP failures — the user won't know their context wasn't saved.
Unlike startSession, this method has no status-code check and no error handling. If the PUT fails (server down, 4xx, 5xx), the caller proceeds blissfully unaware, and the agent runs without the intended context. At minimum, check the response status or let exceptions propagate so the UI can inform the user.
Suggested fix:

```diff
 Future<void> setContext(String context) async {
-  await http.put(
+  final response = await http.put(
     Uri.parse('$baseUrl/context'),
     headers: {'Content-Type': 'application/json'},
     body: jsonEncode({'context': context}),
   );
+  if (response.statusCode != 200) {
+    throw Exception(
+      'Failed to set meeting context: '
+      '${response.statusCode} ${response.body}',
+    );
+  }
 }
```

🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/agent_service.dart` around lines 15 - 21,
The setContext method currently ignores the HTTP response and swallows failures;
modify setContext to capture the http.put response, check response.statusCode
(treat non-2xx as failure), and either throw an exception with a helpful message
(including statusCode and response body) or return a failure result so
callers/UI can react; reference the setContext function and mirror the
status-check/error-handling approach used in startSession to ensure errors are
propagated instead of silently ignored.
```dart
Future<void> _start() async {
  if (_isStarting || _isActive) return;
  setState(() {
    _isStarting = true;
    _status = 'Creating call…';
  });

  try {
    // 1. Create a unique call
    final callId = 'sales-assistant-${const Uuid().v4().substring(0, 8)}';
    final call = widget.videoClient.makeCall(
      callType: StreamCallType.defaultType(),
      id: callId,
    );

    // 2. Join the call.
    // - Mic ON: the ScreenAudioMixer mixes system audio into the mic track,
    //   so the agent hears the meeting through a single audio stream.
    // - Camera OFF: not needed for coaching.
    // - Screen share ON with captureScreenAudio: starts the macOS system
    //   audio capture whose output is mixed into the mic pipeline.
    setState(() => _status = 'Joining call…');
    await call.getOrCreate();
    await call.join(
      connectOptions: CallConnectOptions(
        camera: TrackOption.disabled(),
        microphone: TrackOption.enabled(),
        screenShare: TrackOption.enabled(
          constraints: const ScreenShareConstraints(
            captureScreenAudio: true,
          ),
        ),
      ),
    );

    // 4. Watch the Stream Chat channel the agent will write to.
    // Vision Agents SDK uses channel type "messaging" with the call ID.
    // We must be a *member* (not just a watcher) to receive message.new events.
    setState(() => _status = 'Connecting to agent chat…');
    final currentUserId = widget.chatClient.state.currentUser?.id;
    _chatChannel = widget.chatClient.channel(
      'messaging',
      id: callId,
      extraData: const {'name': 'Sales Assistant Session'},
    );
    await _chatChannel!.watch();
    if (currentUserId != null) {
      try {
        await _chatChannel!.addMembers([currentUserId]);
      } catch (e) {
        debugPrint('[SalesAssistant] addMembers note: $e');
      }
    }
    debugPrint('[SalesAssistant] Chat channel ready: messaging:$callId');
    _listenForAgentMessages();

    // 5. Send meeting context to the agent server.
    await _agentService.setContext(_meetingContext.trim());

    // 6. Tell the agent server to join
    setState(() => _status = 'Starting AI agent…');
    try {
      await _agentService.startSession(callId: callId);
    } catch (e) {
      debugPrint('Agent server not reachable: $e');
      _addSuggestion(
        'Could not reach agent server at ${_agentService.baseUrl}. '
        'Make sure the Python agent is running.',
        isError: true,
      );
    }

    _call = call;
    setState(() {
      _isActive = true;
      _isStarting = false;
      _status = 'Coaching active';
    });

    _addSuggestion(
      'Session started. Listening to your meeting…',
      isSystem: true,
    );
  } catch (e) {
    setState(() {
      _isStarting = false;
      _status = 'Error: $e';
    });
    _addSuggestion('Failed to start: $e', isError: true);
  }
```
_start() skips step 3 — the numbered comments jump from 2 to 4.
Lines 74–85 are labelled "step 2" and lines 87–106 jump to "step 4". Minor numbering typo, but it reads like something was removed and the comments weren't updated.
🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_screen.dart` around lines 52 - 141,
The inline numbered comments in _start() skip "step 3" (they go 1,2,4); update
the comment numbers to be sequential so readers aren't confused—e.g., change the
comment that currently begins "// 4. Watch the Stream Chat..." to "// 3. Watch
the Stream Chat..." (and adjust any later numbered steps if present) to keep the
comments aligned with the actual steps in _start().
```dart
try {
  await _agentService.startSession(callId: callId);
} catch (e) {
  debugPrint('Agent server not reachable: $e');
  _addSuggestion(
    'Could not reach agent server at ${_agentService.baseUrl}. '
    'Make sure the Python agent is running.',
    isError: true,
  );
}

_call = call;
setState(() {
  _isActive = true;
  _isStarting = false;
  _status = 'Coaching active';
});

_addSuggestion(
  'Session started. Listening to your meeting…',
  isSystem: true,
);
```
Agent server failure doesn't prevent the session from being marked active — the UI lies to the user.
When startSession throws (lines 115–122), the error is caught and an error suggestion is added, but execution falls through to lines 124–129, which unconditionally set _isActive = true and _status = 'Coaching active'. The user sees a green dot and "Coaching active" while no agent is actually listening. The bell jar descends, but nobody's home.
Either return after the agent-failure catch block or track the failure so the final state update reflects reality.
Suggested fix:

```diff
   try {
     await _agentService.startSession(callId: callId);
   } catch (e) {
     debugPrint('Agent server not reachable: $e');
     _addSuggestion(
       'Could not reach agent server at ${_agentService.baseUrl}. '
       'Make sure the Python agent is running.',
       isError: true,
     );
+    // Call is live but agent isn't — still mark active so the user can
+    // stop the call, but surface the degraded state.
+    _call = call;
+    setState(() {
+      _isActive = true;
+      _isStarting = false;
+      _status = 'Active (agent offline)';
+    });
+    return;
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```dart
try {
  await _agentService.startSession(callId: callId);
} catch (e) {
  debugPrint('Agent server not reachable: $e');
  _addSuggestion(
    'Could not reach agent server at ${_agentService.baseUrl}. '
    'Make sure the Python agent is running.',
    isError: true,
  );
  // Call is live but agent isn't — still mark active so the user can
  // stop the call, but surface the degraded state.
  _call = call;
  setState(() {
    _isActive = true;
    _isStarting = false;
    _status = 'Active (agent offline)';
  });
  return;
}

_call = call;
setState(() {
  _isActive = true;
  _isStarting = false;
  _status = 'Coaching active';
});

_addSuggestion(
  'Session started. Listening to your meeting…',
  isSystem: true,
);
```
🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_screen.dart` around lines 113 - 134,
The catch for _agentService.startSession currently logs the error and adds an
error suggestion but then execution continues to set _isActive = true and
_status = 'Coaching active'; fix by preventing the UI from marking the session
active when startSession fails: inside the catch for _agentService.startSession
(or immediately after it) either return early to skip the following _call
assignment and setState block, or update state to reflect failure (set _isActive
= false, _isStarting = false and set _status to an error message) and ensure no
success suggestion is added; adjust references to _call, _isActive, _isStarting,
_status, and _addSuggestion accordingly so the UI accurately represents the
agent connection result.
```dart
void _updateSuggestion(String messageId, String text) {
  final idx = _suggestions.lastIndexWhere((s) => s.messageId == messageId);
  if (idx == -1) return;
  setState(() {
    _suggestions[idx].text = text;
  });
}
```
Same mounted guard needed in _updateSuggestion.
Called from the chat subscription listener, which can fire after dispose.
Suggested fix:

```diff
 void _updateSuggestion(String messageId, String text) {
   final idx = _suggestions.lastIndexWhere((s) => s.messageId == messageId);
   if (idx == -1) return;
+  if (!mounted) return;
   setState(() {
```

🤖 Prompt for AI Agents
In `@examples/sales_assistant/app/lib/overlay_screen.dart` around lines 228 - 234,
_updateSuggestion can be invoked from the chat subscription after the widget is
disposed, so add the usual mounted guard: return early if mounted is false, or
check mounted before calling setState. Update the _updateSuggestion(String
messageId, String text) function to verify mounted and only call setState when
mounted (e.g., if (!mounted) return; before computing/setting _suggestions or
wrap the setState block with a mounted check) to avoid calling setState on a
disposed widget.
Sets NSWindow.sharingType = .none so the overlay is invisible to Zoom, OBS, QuickTime, and all other screen-recording/sharing tools. Co-authored-by: Cursor <cursoragent@cursor.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@examples/sales_assistant/README.md`:
- Around line 57-58: The README currently instructs setting Stream API keys
directly in lib/overlay_app.dart (look for OverlayApp/TODO), which risks
committing secrets; instead create a .env example (e.g., STREAM_API_KEY and
STREAM_USER_TOKEN), add the real .env to .gitignore, and load values at runtime
with a package such as flutter_dotenv; update OverlayApp initialization to read
dotenv.env['STREAM_API_KEY'] and dotenv.env['STREAM_USER_TOKEN'] (or equivalent)
rather than hardcoding, and update the README to show using .env.example and
flutter_dotenv setup steps.
🧹 Nitpick comments (1)
examples/sales_assistant/README.md (1)
**138-139: Make the Realtime mode tip more actionable.**

The tip mentions swapping `gemini.LLM` for `gemini.Realtime(fps=3)` but doesn't specify where in the code to make this change. Consider adding the filename and approximate line number (e.g., "in `agent/main.py`, replace the LLM initialization...") to help users apply the suggestion.
```shell
# Set your Stream API key and user token in lib/overlay_app.dart
# Look for the TODO comment near the top of OverlayApp
```
Avoid hardcoding API keys in source code.
Instructing users to set API keys directly in lib/overlay_app.dart creates a risk of accidentally committing secrets to version control. Consider using environment variables or a separate configuration file (excluded from git) instead.
🔐 Suggested approach
Add a `.env` or config file approach for the Flutter app similar to the Python agent:

1. Create `app/.env.example`:

   ```
   STREAM_API_KEY=your_api_key_here
   STREAM_USER_TOKEN=your_token_here
   ```

2. Add `app/.env` to `.gitignore`
3. Load these values at runtime using a package like `flutter_dotenv`

This prevents credentials from being committed and follows security best practices.
🤖 Prompt for AI Agents
In `@examples/sales_assistant/README.md` around lines 57 - 58, The README
currently instructs setting Stream API keys directly in lib/overlay_app.dart
(look for OverlayApp/TODO), which risks committing secrets; instead create a
.env example (e.g., STREAM_API_KEY and STREAM_USER_TOKEN), add the real .env to
.gitignore, and load values at runtime with a package such as flutter_dotenv;
update OverlayApp initialization to read dotenv.env['STREAM_API_KEY'] and
dotenv.env['STREAM_USER_TOKEN'] (or equivalent) rather than hardcoding, and
update the README to show using .env.example and flutter_dotenv setup steps.
Nash0x7E2 left a comment:
Don't forget to update the main readme with a link and example gif, etc
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```python
load_dotenv()

# Meeting context set by the Flutter app before starting a session.
_meeting_context: str = ""
```
Global meeting context causes race condition between sessions
Medium Severity
_meeting_context is a module-level global that's set by PUT /context and read later inside join_call. Since join_call runs as an async task and reads the global after several await calls (create_user, create_call, join), there's a race window where another request can overwrite the context before the task reads it. Even in a single-user flow, calling PUT /context and then POST /sessions doesn't guarantee the task reads the intended context, because the event loop can interleave other requests during those intermediate awaits. The context needs to be captured at request time and passed into the session rather than relying on shared mutable state.
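The interleaving Bugbot describes is easy to reproduce. This is a standalone sketch under stated assumptions: the `join_call_buggy`/`join_call_fixed` names are invented here, and `asyncio.sleep(0)` stands in for the intermediate `create_user`/`create_call`/`join` awaits:

```python
# Sketch of the race: a task that reads the global *after* awaiting sees
# whatever a later request wrote, while capturing at request time is safe.
import asyncio

_meeting_context = ""

async def join_call_buggy() -> str:
    await asyncio.sleep(0)   # stand-in for create_user/create_call/join
    return _meeting_context  # reads the global late: race window

async def join_call_fixed(context: str) -> str:
    await asyncio.sleep(0)
    return context           # uses the value captured at request time

async def main() -> tuple[str, str]:
    global _meeting_context
    _meeting_context = "call A context"
    buggy = asyncio.create_task(join_call_buggy())
    fixed = asyncio.create_task(join_call_fixed(_meeting_context))  # capture now
    _meeting_context = "call B context"  # a second request interleaves
    return await buggy, await fixed

buggy_result, fixed_result = asyncio.run(main())
print(buggy_result)  # "call B context" -- the wrong session's context
print(fixed_result)  # "call A context" -- correct
```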
Additional Locations (1)
Actionable comments posted: 2
🧹 Nitpick comments (2)
examples/10_sales_assistant_example/main.py (2)
**93-102: Use a typed dict or Pydantic model for the request body.**

`request: dict` is untyped. Per coding guidelines, prefer modern generic syntax for type annotations (`dict[str, str]` or a Pydantic model). A model also gives you automatic validation and OpenAPI docs for free.

Suggested fix — minimal:

```diff
+from pydantic import BaseModel
+
+class ContextRequest(BaseModel):
+    context: str = ""
+
 @runner.fast_api.put("/context")
-async def set_context(request: dict) -> JSONResponse:
+async def set_context(request: ContextRequest) -> JSONResponse:
     """Set meeting context for the next coaching session."""
     global _meeting_context
-    _meeting_context = request.get("context", "")
+    _meeting_context = request.context
```

As per coding guidelines: "Use type annotations everywhere. Use modern syntax: `X | Y` unions, `dict[str, T]` generics."
Verify each finding against the current code and only fix it if needed. In `@examples/10_sales_assistant_example/main.py` around lines 93 - 102, The handler set_context currently uses an untyped request: dict; define a Pydantic model (e.g., MeetingContext with a context: str field) or at minimum use modern typed generics (dict[str, str]) and update the route signature to accept that model/typed dict, then read the value from request.context (or request["context"]) and assign to the global _meeting_context; reference the route decorator runner.fast_api.put and the function name set_context when making the change so OpenAPI/validation and type checking work correctly.
**24-25: Shared mutable state across sessions.**

`_meeting_context` is a module-level global shared across all concurrent sessions. If two users hit `PUT /context` with different values, one overwrites the other. For this single-user example it's fine, but a brief comment noting this limitation would help future readers.

```diff
-# Meeting context set by the Flutter app before starting a session.
+# Meeting context set by the Flutter app before starting a session.
+# NOTE: single-user only — concurrent sessions would share/overwrite this value.
 _meeting_context: str = ""
```
Verify each finding against the current code and only fix it if needed. In `@examples/10_sales_assistant_example/main.py` around lines 24 - 25, The module-level variable _meeting_context is shared across all sessions and will be overwritten when multiple users call PUT /context concurrently; update the declaration of _meeting_context to include a brief comment noting this limitation (e.g., "module-level global shared across all sessions — not safe for concurrent multi-user use; replace with per-session storage or external DB for production") and optionally mention how to fix it (use per-session keys, request-scoped storage, or a persistent store) so readers know this example is single-user only; reference the _meeting_context symbol and the PUT /context endpoint in the comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/10_sales_assistant_example/pyproject.toml`:
- Around line 1-20: The example's directory layout conflicts with the README:
move pyproject.toml, main.py, and instructions.md into a new subdirectory named
agent inside examples/10_sales_assistant_example so the documented cd agent step
works; then update the README to reference
examples/10_sales_assistant_example/agent for setup and any project diagram
entries. Ensure the pyproject.toml content (including the [tool.uv.sources]
entries) stays unchanged after relocating the files and that README now points
to the agent/ path for commands and diagrams.
In `@examples/10_sales_assistant_example/README.md`:
- Line 128: The README notes use of the stream_webrtc_flutter feature branch
'feature/macos-screen-audio-capture', which is fragile; update the project to
either pin that dependency to a specific commit SHA in pubspec.yaml for package
'stream_webrtc_flutter' or add a prominent known-limitation note in README
(adjacent to the existing sentence) that warns maintainers/users that the branch
may be rebased/removed and how to recover (e.g., switch to a commit or alternate
package).
---
Duplicate comments:
In `@examples/10_sales_assistant_example/main.py`:
- Around line 90-91: The code reads the API key with
os.environ["STREAM_API_KEY"] which raises a cryptic KeyError when missing;
update the startup path around _stream_client / _api_key to check for the env
var and raise a clear error: use os.getenv("STREAM_API_KEY") or try/except
KeyError to detect absence and exit with a descriptive message like
"STREAM_API_KEY is not set; please add it to .env" (or rethrow as SystemExit) so
users get a friendly startup error instead of a raw traceback.
- Around line 104-108: The create_token endpoint currently mints a Stream token
for any user_id without authentication; add a prominent comment above the
create_token function warning that this open token-minting endpoint is for local
development only and must not be used in production, and describe required
mitigations (e.g., require authentication/authorization, validate user identity,
or move token minting to a trusted backend) and note the sensitive use of
_stream_client.create_token and exposure of _api_key; optionally suggest
removing or disabling the endpoint in production builds.
In `@examples/10_sales_assistant_example/README.md`:
- Around line 57-58: The README instructs users to set API keys in
lib/overlay_app.dart (look for the OverlayApp TODO), which encourages hardcoding
secrets; change the app to read credentials from environment variables instead:
add and document use of flutter_dotenv (or similar) in the project, load the
.env file in main() before running the app, remove any hardcoded STREAM API
key/user token from OverlayApp and replace accesses with environment lookups
(e.g., dotenv.env['STREAM_API_KEY'] / dotenv.env['STREAM_USER_TOKEN']), and
update the README to instruct users to create a .env file and not commit it to
version control.
- Around line 92-112: The fenced code block in README.md containing the
directory tree (the triple-backtick block that starts with "sales_assistant/")
lacks a language specifier and triggers markdownlint MD040; update that fenced
block to use a language identifier (for example, add "text" after the opening
backticks: ```text) so the tree snippet is fenced as a text block.
---
Nitpick comments:
In `@examples/10_sales_assistant_example/main.py`:
- Around line 93-102: The handler set_context currently uses an untyped request:
dict; define a Pydantic model (e.g., MeetingContext with a context: str field)
or at minimum use modern typed generics (dict[str, str]) and update the route
signature to accept that model/typed dict, then read the value from
request.context (or request["context"]) and assign to the global
_meeting_context; reference the route decorator runner.fast_api.put and the
function name set_context when making the change so OpenAPI/validation and type
checking work correctly.
- Around line 24-25: The module-level variable _meeting_context is shared across
all sessions and will be overwritten when multiple users call PUT /context
concurrently; update the declaration of _meeting_context to include a brief
comment noting this limitation (e.g., "module-level global shared across all
sessions — not safe for concurrent multi-user use; replace with per-session
storage or external DB for production") and optionally mention how to fix it
(use per-session keys, request-scoped storage, or a persistent store) so readers
know this example is single-user only; reference the _meeting_context symbol and
the PUT /context endpoint in the comment.
```toml
[project]
name = "sales-assistant-agent"
version = "0.1.0"
description = "AI sales coaching agent powered by Vision Agents"
requires-python = ">=3.12"

dependencies = [
    "python-dotenv>=1.0",
    "vision-agents",
    "vision-agents-plugins-deepgram",
    "vision-agents-plugins-getstream",
    "vision-agents-plugins-gemini",
    "getstream",
]

[tool.uv.sources]
"vision-agents" = { path = "../../agents-core", editable = true }
"vision-agents-plugins-deepgram" = { path = "../../plugins/deepgram", editable = true }
"vision-agents-plugins-getstream" = { path = "../../plugins/getstream", editable = true }
"vision-agents-plugins-gemini" = { path = "../../plugins/gemini", editable = true }
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check directory structure of the sales assistant example
echo "=== Directory structure of examples/10_sales_assistant_example ==="
find examples/10_sales_assistant_example -type f -o -type d | head -30
echo ""
echo "=== Check if agent/ subdirectory exists ==="
ls -la examples/10_sales_assistant_example/ 2>/dev/null | grep agent
echo ""
echo "=== Check for README file ==="
find examples/10_sales_assistant_example -name "README*" -type f
```

Repository: GetStream/Vision-Agents
Length of output: 580
🏁 Script executed:

```shell
#!/bin/bash
# Read the README to check for "cd agent" instruction
readme_file=$(find examples/10_sales_assistant_example -name "README*" -type f | head -1)
if [ -f "$readme_file" ]; then
  echo "=== Content of $readme_file ==="
  cat "$readme_file"
else
  echo "No README found"
fi
```

Repository: GetStream/Vision-Agents
Length of output: 5280
Directory structure does not match README documentation — fix the mismatch.
The pyproject.toml, main.py, and instructions.md exist at examples/10_sales_assistant_example/ root, but the README documents an agent/ subdirectory and instructs users to cd agent before setup. Users following the README will fail immediately trying to enter a non-existent directory.
Either reorganize files into examples/10_sales_assistant_example/agent/ as documented, or update the README to remove the cd agent step and adjust the project structure diagram.
The pyproject.toml configuration itself is correct.
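The "reorganize into `agent/`" option is a few moves. The sketch below simulates it in a scratch directory so nothing in the real tree is touched; the file names follow the review, the scratch setup is invented:

```shell
# Illustrative only: simulate the documented layout fix in a scratch directory.
set -e
root=$(mktemp -d)
mkdir -p "$root/examples/10_sales_assistant_example"
cd "$root/examples/10_sales_assistant_example"
touch pyproject.toml main.py instructions.md   # stand-ins for the real files
mkdir -p agent
# In the real repo this would be `git mv` so history is preserved.
mv pyproject.toml main.py instructions.md agent/
ls agent
```

After the move, the README's `cd agent` step works and the `[tool.uv.sources]` relative paths (`../../agents-core` etc.) would need one extra `../` to account for the deeper directory.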
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/10_sales_assistant_example/pyproject.toml` around lines 1 - 20, The
example's directory layout conflicts with the README: move pyproject.toml,
main.py, and instructions.md into a new subdirectory named agent inside
examples/10_sales_assistant_example so the documented cd agent step works; then
update the README to reference examples/10_sales_assistant_example/agent for
setup and any project diagram entries. Ensure the pyproject.toml content
(including the [tool.uv.sources] entries) stays unchanged after relocating the
files and that README now points to the agent/ path for commands and diagrams.
```markdown
### Screen Audio Capture

The Flutter app uses the `feature/macos-screen-audio-capture` branch of `stream_webrtc_flutter` which adds ScreenCaptureKit-based system audio capture on macOS. This means the AI agent can hear both your microphone and any audio from other apps (like a Zoom call).
```
Dependency on an unpublished feature branch.
The Flutter app depends on the feature/macos-screen-audio-capture branch of stream_webrtc_flutter. This is fragile — if that branch is rebased, force-pushed, or deleted, the example breaks. Consider documenting this prominently as a known limitation, or pinning to a specific commit SHA in pubspec.yaml.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/10_sales_assistant_example/README.md` at line 128, The README notes
use of the stream_webrtc_flutter feature branch
'feature/macos-screen-audio-capture', which is fragile; update the project to
either pin that dependency to a specific commit SHA in pubspec.yaml for package
'stream_webrtc_flutter' or add a prominent known-limitation note in README
(adjacent to the existing sentence) that warns maintainers/users that the branch
may be rebased/removed and how to recover (e.g., switch to a commit or alternate
package).
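Pinning to a commit would look roughly like this in `pubspec.yaml`. The repository URL and the SHA below are placeholders, not taken from the project; Dart's git-dependency syntax (`url`/`ref`) accepts a branch, tag, or commit:

```yaml
# Hypothetical pin: url and ref are placeholders; substitute the real fork/commit.
dependency_overrides:
  stream_webrtc_flutter:
    git:
      url: https://github.com/GetStream/stream-webrtc-flutter.git
      ref: <commit-sha>  # a specific commit instead of the moving feature branch
```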


macOS overlay app (Flutter) that captures screen + system audio via Stream Video and feeds it to a Python agent (Deepgram STT + Gemini LLM). Coaching suggestions are streamed back to the overlay via Stream Chat.
Summary by CodeRabbit
Release Notes
New Features
Documentation
Note
Medium Risk
Adds a runnable FastAPI server example that issues Stream user tokens and accepts arbitrary meeting context; while example-scoped, it touches credential/token handling and could be misused if deployed without safeguards.
Overview
Adds a new `09_sales_assistant_example` showing a Stream Video–backed "invisible overlay" meeting coach: a Python agent server joins a call, transcribes with Deepgram, generates concise coaching tips with Gemini, and relies on Stream Chat to deliver suggestions to the overlay.

The example includes an opinionated agent implementation (`main.py`) plus new endpoints for setting per-session meeting context (`PUT /context`) and minting Stream client tokens for the Flutter overlay (`GET /auth/token`), along with setup docs, prompt instructions, and a `.env`/`pyproject.toml` for running via `uv`.

The root README is updated to rename/reframe the Cluely-style section as "Sales Assistant — Invisible Overlay Coach."
Written by Cursor Bugbot for commit 7d52f22. This will update automatically on new commits. Configure here.