
Conversation


@Ayush10 Ayush10 commented Jan 31, 2026

Summary

  • Replaces all 6 hardcoded "gemini-2.5-flash" model strings with os.environ.get("MODEL", "gemini-2.5-flash"), allowing users to configure the model without editing source code (pattern sketched below).
  • Generalizes "Gemini" references in error messages to "LLM" for model-agnostic consistency.
  • Default behavior is unchanged when MODEL is not set.

Closes #75
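
A minimal sketch of the pattern applied at each call site (the import path and extra arguments are assumptions for illustration, not exact repo code):

import os

from common.retrying_llm_agent import RetryingLlmAgent  # import path assumed

# Reads MODEL from the environment, falling back to the previous hardcoded default.
shopper = RetryingLlmAgent(
    model=os.environ.get("MODEL", "gemini-2.5-flash"),
    max_retries=5,  # illustrative; mirrors the root_agent diff reviewed below
)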

Usage

# Use default (gemini-2.5-flash)
samples/python/scenarios/a2a/human-present/cards/run.sh

# Use a different model
export MODEL="gemini-2.5-pro"
samples/python/scenarios/a2a/human-present/cards/run.sh
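
Assuming run.sh starts the agent processes directly (so they inherit the shell environment), the override can also be scoped to a single invocation:

# One-off override without exporting
MODEL="gemini-2.5-pro" samples/python/scenarios/a2a/human-present/cards/run.sh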

Affected files

  • samples/python/src/roles/shopping_agent/agent.py
  • samples/python/src/roles/shopping_agent/subagents/shopper/agent.py
  • samples/python/src/roles/shopping_agent/subagents/shipping_address_collector/agent.py
  • samples/python/src/roles/shopping_agent/subagents/payment_method_collector/agent.py
  • samples/python/src/roles/merchant_agent/sub_agents/catalog_agent.py
  • samples/python/src/common/function_call_resolver.py
  • samples/python/src/common/retrying_llm_agent.py

Test plan

  • Run sample with default (no MODEL set) — should behave identically to before
  • Run sample with MODEL=gemini-2.5-pro — should use the specified model
  • Verify no remaining hardcoded model strings in samples/python/src/ (one way to check is shown below)
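
One way to run the last check, assuming the hardcoded call sites were all written exactly as model="gemini-2.5-flash":

# Should print no matches; the os.environ.get(...) fallback strings don't match this pattern.
grep -rn 'model="gemini-2.5-flash"' samples/python/src/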

Replace all hardcoded "gemini-2.5-flash" model strings with
os.environ.get("MODEL", "gemini-2.5-flash") so users can switch
models without editing source code. Also generalize "Gemini" references
in error messages to "LLM" for consistency.

Closes google-agentic-commerce#75
@Ayush10 Ayush10 requested a review from a team as a code owner January 31, 2026 10:10
@gemini-code-assist
Contributor

Summary of Changes

Hello @Ayush10, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant improvement in the application's flexibility by making the underlying Large Language Model (LLM) configurable through an environment variable. This change eliminates the need to modify source code to switch between different LLM models, streamlining development and deployment workflows. It also refines error messages to be more generic, aligning with the goal of supporting various LLM providers.

Highlights

  • Model Configurability: Replaced all hardcoded LLM model strings (specifically "gemini-2.5-flash") with a dynamic approach using os.environ.get("MODEL", "gemini-2.5-flash"). This allows users to specify the LLM model via an environment variable, with gemini-2.5-flash remaining the default if no environment variable is set.
  • Generic Error Messages: Updated error messages in retrying_llm_agent.py to be model-agnostic, changing references from "Gemini server error" to "LLM server error" (sketched below).
  • OS Module Imports: Added import os statements to all Python files where environment variables are now accessed.
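
The error-message generalization in retrying_llm_agent.py is a plain string change along these lines (a sketch; the quoted strings come from the summary above, but the surrounding raise statement is assumed):

-    raise RuntimeError("Gemini server error")
+    raise RuntimeError("LLM server error")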



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a great improvement by making the LLM model configurable through an environment variable, which enhances flexibility. The changes are well-implemented by replacing hardcoded model strings and generalizing error messages. My main feedback is to address the code duplication introduced by calling os.environ.get() in multiple files. I've suggested centralizing this logic into a single constant to improve maintainability, following the Don't Repeat Yourself (DRY) principle. I've left comments in all the affected files with this suggestion.


In samples/python/src/common/function_call_resolver.py:

response = self._client.models.generate_content(
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

To avoid repeating os.environ.get("MODEL", ...) across multiple files, consider defining it as a constant in a shared module, for example in common/system_utils.py:

# In common/system_utils.py
import os

LLM_MODEL = os.environ.get("MODEL", "gemini-2.5-flash")

Then you can import and use this constant here and in the other 5 files where the model is specified. This improves maintainability by centralizing the configuration and allows removing the import os statement from files where it's no longer needed.
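
For illustration, a call site after adopting that suggestion could look like this (a sketch; everything besides model= is a placeholder):

from common.system_utils import LLM_MODEL

# The model is now configured in one place; per-file os.environ.get calls go away.
response = self._client.models.generate_content(
    model=LLM_MODEL,
    contents=prompt,  # placeholder argument
)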


llm_response = llm_client.models.generate_content(
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

This is another instance of duplicated model configuration. Please see my comment on samples/python/src/common/function_call_resolver.py for a suggestion on how to centralize this to avoid code repetition.

root_agent = RetryingLlmAgent(
    max_retries=5,
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

This is another instance of duplicated model configuration. Please see my comment on samples/python/src/common/function_call_resolver.py for a suggestion on how to centralize this to avoid code repetition.


payment_method_collector = RetryingLlmAgent(
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

This is another instance of duplicated model configuration. Please see my comment on samples/python/src/common/function_call_resolver.py for a suggestion on how to centralize this to avoid code repetition.


shipping_address_collector = RetryingLlmAgent(
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

This is another instance of duplicated model configuration. Please see my comment on samples/python/src/common/function_call_resolver.py for a suggestion on how to centralize this to avoid code repetition.


shopper = RetryingLlmAgent(
-    model="gemini-2.5-flash",
+    model=os.environ.get("MODEL", "gemini-2.5-flash"),

Severity: medium

This is another instance of duplicated model configuration. Please see my comment on samples/python/src/common/function_call_resolver.py for a suggestion on how to centralize this to avoid code repetition.

Move os.environ.get("MODEL", "gemini-2.5-flash") into a single
LLM_MODEL constant in common/system_utils.py and import it across
all 6 files that reference it. Remove unused import os where applicable.

Development

Successfully merging this pull request may close these issues.

[Feat]: Add support for other LLMs (#75)