
Full help and Agent Skill#26

Merged
santanusinha merged 13 commits into master from full-help
Apr 11, 2026
Conversation

@santanusinha
Contributor

Added --full-help command line option

  • Prints all command and subcommand options
  • Added a Claude skill so that AI agents can use it to interact with Drove
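
For context, a flag like `--full-help` is usually implemented by walking the argparse parser tree and printing each parser's help block. A minimal sketch under that assumption (this is not the actual drove implementation, and it relies on argparse internals, as most such walkers do):

```python
import argparse

def print_full_help(parser: argparse.ArgumentParser) -> None:
    """Recursively print help for a parser and every sub-command under it."""
    print("=" * 72)
    print(parser.format_help())
    for action in parser._actions:
        if isinstance(action, argparse._SubParsersAction):
            seen = set()
            for sub in action.choices.values():
                if id(sub) not in seen:  # aliases share one parser object
                    seen.add(id(sub))
                    print_full_help(sub)
```

The 72-character `=` separator mirrors the output shown in the session below.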

Sample session in AI Assistant using the skill:

🤖 Check if drove CLI is installed

$ which drove 2>/dev/null || python3 -c "import drovecli" 2>/dev/null; echo "exit: $?" (Timeout: 10 seconds)

/home/santanu/.local/bin/drove
exit: 0

⚙️ Status: 0 -> Command executed successfully

🤖 Load the full drove CLI help to understand available commands

$ drove --full-help 2>&1 | head -100 (Timeout: 15 seconds)

========================================================================
usage: drove [-h] [--file FILE] [--cluster CLUSTER] [--endpoint ENDPOINT]
             [--auth-header AUTH_HEADER] [--insecure] [--username USERNAME]
             [--password PASSWORD] [--debug] [--full-help]
             [--print-completion {bash,zsh,tcsh}]
             {lsinstances,executor,cluster,apps,describe,appinstances,tasks,localservices,config}
             ...

positional arguments:
  {lsinstances,executor,cluster,apps,describe,appinstances,tasks,localservices,config}
                        Available plugins
    lsinstances         Drove local service instance related commands
    executor            Drove cluster executor related commands
    cluster             Drove cluster related commands
    apps                Drove application related commands
    describe            Show detailed information about a resource
    appinstances        Drove application instance related commands
    tasks               Drove task related commands
    localservices       Drove local service related commands
    config              Manage drove cluster configurations

options:
  -h, --help            show this help message and exit
  --file FILE, -f FILE  Configuration file for drove client
  --cluster CLUSTER, -c CLUSTER
                        Cluster name as specified in config file
  --endpoint ENDPOINT, -e ENDPOINT
                        Drove endpoint. (For example: https://drove.test.com)
  --auth-header AUTH_HEADER, -t AUTH_HEADER
                        Authorization header value for the provided drove
                        endpoint
  --insecure, -i        Do not verify SSL cert for server
  --username USERNAME, -u USERNAME
                        Drove cluster username
  --password PASSWORD, -p PASSWORD
                        Drove cluster password
  --debug, -d           Print details of errors
  --full-help           Show help for every command and sub-command
  --print-completion {bash,zsh,tcsh}
                        Print shell completion script for the given shell

========================================================================
usage: drove appinstances [-h] {list,info,logs,tail,download,replace,kill} ...

positional arguments:
  {list,info,logs,tail,download,replace,kill}
                        Available commands for application management
    list                List all application instances
    info                Print details for an application instance
    logs                Print list of logs for application instance
    tail                Tail log for application instance
    download            Download log for application instance
    replace             Replace specific app instances with fresh instances
    kill                Kill specific app instances

options:
  -h, --help            show this help message and exit

========================================================================
usage: drove appinstances download [-h] [--out OUT]
                                   app-id instance-id applogfile

positional arguments:
  app-id             Application ID
  instance-id        Application Instance ID
  applogfile         Log filename to download

options:
  -h, --help         show this help message and exit
  --out OUT, -o OUT  Filename to download to. Default is the same filename as
                     provided.

========================================================================
usage: drove appinstances info [-h] app-id instance-id

positional arguments:
  app-id       Application ID
  instance-id  Application Instance ID

options:
  -h, --help   show this help message and exit

========================================================================
usage: drove appinstances kill [-h] [--parallelism PARALLELISM]
                               [--timeout TIMEOUT] [--wait]
                               app-id instance-id [instance-id ...]

positional arguments:
  app-id                Application ID
  instance-id           Application Instance IDs

options:
  -h, --help            show this help message and exit
  --parallelism PARALLELISM, -p PARALLELISM
                        Number of parallel threads to be used to execute
                        operation
  --timeout TIMEOUT, -t TIMEOUT
                        Timeout for the operation on the cluster
  --wait, -w            Wait to ensure all instances are killed

⚙️ Status: 0 -> Command executed successfully

🤖 Verify connectivity to the Drove cluster
$ drove cluster ping 2>&1 (Timeout: 15 seconds)

Cluster ping successful
⚙️ Status: 0 -> Command executed successfully

🤖 List all apps to find TEST_APP and its current app ID

$ drove apps list 2>&1 (Timeout: 15 seconds)

Id              Name          State         Total CPU    Total Memory(MB)    Required Instances    Healthy Instances  Created               Updated
--------------  ------------  ----------  -----------  ------------------  --------------------  -------------------  --------------------  --------------------
TEST_APP-1      TEST_APP      MONITORING            0                   0                     0                    0  11/04/2026, 09:01:19  11/04/2026, 09:01:19
TEST_APP_DEV-1  TEST_APP_DEV  MONITORING            0                   0                     0                    0  11/04/2026, 09:01:19  11/04/2026, 09:01:19

⚙️ Status: 0 -> Command executed successfully

🤖 Found TEST_APP with app ID: TEST_APP-1, currently in MONITORING (suspended) state. Will now spin it up (scale to 1 instance) and then scale up.

🤖 Spin up TEST_APP by scaling to 1 instance

$ drove apps scale TEST_APP-1 1 --wait 2>&1 (Timeout: 120 seconds)

Waiting till required scale is reached
Healthy instances count: 0
Healthy instances count: 1
Required number of instances reached

⚙️ Status: 0 -> Command executed successfully

🤖 Scale up TEST_APP to 2 instances

$ drove apps scale TEST_APP-1 2 --wait 2>&1 (Timeout: 120 seconds)

Waiting till required scale is reached
Healthy instances count: 1
Healthy instances count: 2
Required number of instances reached

⚙️ Status: 0 -> Command executed successfully

🤖 Get summary of TEST_APP after spin up and scale up

$ drove apps summary TEST_APP-1 2>&1 (Timeout: 15 seconds)

id                            TEST_APP-1
name                          TEST_APP
requiredInstances             2
healthyInstances              2
totalCPUs                     2
totalMemory                   256
tags
    superSpecialApp               yes_i_am
    say_my_name                   heisenberg
    environment                   dev
    region                        dc1
state                         RUNNING
created                       1775878279746
updated                       1775892013522

⚙️ Status: 0 -> Command executed successfully

🤖

✅ TEST_APP Spun Up & Scaled Successfully

Here's what was done for TEST_APP-1:

Steps Executed

| Step | Action | Result |
|------|--------|--------|
| 1 | Verified cluster connectivity | ✅ Ping successful |
| 2 | Located TEST_APP-1 | Found in MONITORING (suspended) state |
| 3 | Spin up — scaled to 1 instance | ✅ 1 healthy instance reached |
| 4 | Scale up — scaled to 2 instances | ✅ 2 healthy instances reached |

Current App Summary

App ID        : TEST_APP-1
Name          : TEST_APP
State         : RUNNING  ✅
Required      : 2 instances
Healthy       : 2 instances
Total CPUs    : 2
Total Memory  : 256 MB
Tags          : superSpecialApp=yes_i_am, say_my_name=heisenberg
                environment=dev, region=dc1

TEST_APP was previously suspended (MONITORING state) and is now fully RUNNING with 2 healthy instances. 🚀
🤖 ✅ Success. (Time taken: 70.706 seconds, Tokens used: 205987)

## Test Suite (122 tests, 100% pass rate)
- tests/conftest.py: shared fixtures, helpers, live_app and live_service fixtures
- tests/test_apps.py: 25 tests (smoke + full lifecycle: create/scale/restart/suspend/destroy)
- tests/test_appinstances.py: 15 tests (list/info/logs/tail with Popen+terminate)
- tests/test_localservices.py: 23 tests (smoke + lifecycle: activate/restart/lsinstances)
- tests/test_tasks.py: 8 tests (create/show/logs/wait-for-terminal/describe)
- tests/test_cluster.py: 13 tests (ping/summary/leader/endpoints/describe)
- tests/test_executor.py: 11 tests (list/info/appinstances/tasks/lsinstances/describe)
- tests/test_config.py: 8 tests (clusters/current/view/use)
- tests/test_cli_basics.py: 19 tests (help/completion/error-handling)
- tests/fixtures/: cli_test_app.json, cli_test_service.json, cli_test_task.json
- pytest.ini: timeout=600, markers (smoke/lifecycle/offline)

## Key Test Fixes Discovered
- drove CLI exits 0 even on API errors; tests check stdout for error text
- --sort flag accepts integer column indices only (not column name strings)
- apps scale/restart --wait exits before state machine settles; tests poll via wait_for_app_state()
- Local services use state=ACTIVE (not RUNNING); fixed wait_for_ls_state() target
- tasks list only shows active tasks; test_task_create uses tasks show to verify creation
- lsinstances use SI- prefix (not LSI-/AI-); fixed _get_ls_instance_id() prefix check
- Added _destroy_app_safe() and _destroy_ls_safe() retry helpers for transitional states
- appinstances tail is a streaming command; test uses Popen + terminate pattern
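
The polling workaround for `--wait` returning early can be sketched generically; `get_state` stands in for whatever call fetches the current app state (e.g. parsing `drove apps summary` output), which is an assumption here:

```python
import time

def wait_for_app_state(get_state, target, timeout=60.0, interval=1.0):
    """Poll get_state() until it returns `target` or `timeout` elapses.

    Works around `drove apps scale --wait` exiting before the app's
    state machine has fully settled.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == target:
            return True
        time.sleep(interval)
    return False
```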

## Security: CVE dependency upgrades (fixes Dependabot alerts)
- urllib3: ^2.0.7 → ^2.2.2 (CVE-2024-37891)
- requests: ^2.31.0 → ^2.32.0 (CVE-2024-35195)
- tenacity: ^8.2.3 → ^9.0.0
- certifi: >=2024.7.4 added (CVE-2023-37920 / CVE-2024-39689)
- Add tests/mock_server.py: Flask-based Drove API stub
  - DroveState class manages in-memory stateful data (apps, local
    services, tasks, executor) seeded from realistic test fixtures
  - MockDroveServer wraps Flask in a daemon thread on an ephemeral port
  - Response shapes match exact field names expected by all drove CLI
    plugin code (verified against plugins/*.py)
  - Covers all 30+ API endpoints: ping, cluster, apps, appinstances,
    localservices, lsinstances, tasks, executor, logfiles
  - Fixed Werkzeug ephemeral port binding: find free port via bind(0),
    close socket, then call make_server(host, port, app) without fd=
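
The port-binding fix reduces to: bind a probe socket to port 0 so the OS assigns a free port, record the number, close the probe, then pass host and port to `make_server`. A sketch using the stdlib `wsgiref` server in place of Werkzeug's (same `make_server(host, port, app)` shape):

```python
import socket
import threading
from wsgiref.simple_server import make_server  # stand-in for werkzeug.serving

def find_free_port() -> int:
    """Ask the OS for a free ephemeral port, then release it immediately."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.bind(("127.0.0.1", 0))  # port 0 -> OS picks a free port
        return probe.getsockname()[1]

def start_stub_server(app):
    """Serve a WSGI app on a freshly discovered port in a daemon thread."""
    port = find_free_port()
    # Re-binding by port number (rather than passing the open fd) avoids
    # the Werkzeug fd= binding issue described above.
    server = make_server("127.0.0.1", port, app)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, port
```

A small race window remains between closing the probe and re-binding, which is acceptable for a test stub.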

- Add tests/conftest.py enhancements:
  - mock_drove_server fixture (session-scoped): starts MockDroveServer
    once for the entire offline test session
  - offline_env fixture (module-scoped): resets state + sets
    DROVE_ENDPOINT so drove subprocess connects to the stub
  - offline_executor_id fixture: exposes the seed executor UUID
  - drove() / drove_ok() helpers available to all test modules
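
The `drove()` / `drove_ok()` helpers are presumably thin wrappers over `subprocess.run`; a hedged sketch (the `exe` parameter is added here for illustration only; the real conftest.py is the source of truth):

```python
import subprocess

def drove(*args, exe="drove", timeout=60):
    """Run the CLI and return the CompletedProcess; never raises on rc."""
    return subprocess.run([exe, *args], capture_output=True,
                          text=True, timeout=timeout)

def drove_ok(*args, exe="drove", timeout=60):
    """Run the CLI, assert success, and return stdout.

    drove exits 0 even on API errors (see the test fixes above), so
    stdout is checked for error text, not just the return code.
    """
    proc = drove(*args, exe=exe, timeout=timeout)
    assert proc.returncode == 0, proc.stderr
    assert "error" not in proc.stdout.lower(), proc.stdout
    return proc.stdout
```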

- Add offline test files (all marked pytestmark = pytest.mark.offline):
  - tests/test_offline_cluster.py  (15 tests): ping, summary, leader,
    endpoints, describe
  - tests/test_offline_apps.py     (26 tests): list, summary, spec,
    create/scale/suspend/restart/destroy lifecycle
  - tests/test_offline_appinstances.py (13 tests): list, info
  - tests/test_offline_executor.py (11 tests): list, info, app/task
    instances
  - tests/test_offline_localservices.py (30 tests): list, summary,
    spec, create/activate/restart/deactivate/destroy lifecycle,
    lsinstances list/info
  - tests/test_offline_tasks.py    (11 tests): create/show/kill
    lifecycle, list

- Add AGENTS.md: contributor guidelines, test mode instructions,
  mock server architecture, seed data inventory, known gotchas

- Update pytest.ini: describe offline marker
- Update pyproject.toml: add flask >=3.0.0 to test dependencies

All 106 offline tests pass. Live integration tests unaffected.
…-existing resource deps

- conftest.py: APP_SPEC/SVC_SPEC/TASK_SPEC now point to sample/ directory
  APP_ID=TEST_APP-1, SVC_ID=TEST_LOCAL_SERVICE-1, TASK_ID=T0012, TASK_SOURCE=TEST_APP
- test_apps.py: fully rewritten — all classes require live_app fixture; no EXISTING_APP refs
- test_localservices.py: fully rewritten — all classes require live_service fixture
- test_tasks.py: imports TASK_SPEC from conftest instead of FIXTURES_DIR
- test_appinstances.py: CLI_TEST_APP → TEST_APP string references
- test_offline_apps.py: CLI_TEST_APP-1/cli_test_app.json → TEST_APP-1/APP_SPEC from conftest
- test_offline_localservices.py: CLI_TEST_SERVICE-1 → OFFLINE_TEST_SERVICE-1 (lifecycle);
  read-only tests keep EXISTING_SVC=TEST_LOCAL_SERVICE-1 (seeded)
- test_offline_tasks.py: imports TASK_SOURCE/TASK_ID/APP_SPEC/TASK_SPEC from conftest
- Deleted tests/fixtures/cli_test_app.json and cli_test_task.json
- Renamed tests/fixtures/cli_test_service.json → offline_test_service.json
  (name field changed to OFFLINE_TEST_SERVICE for lifecycle test isolation)
- README.md, AGENTS.md, tests/README.md: pre-existing resources table removed

Result: all 106 offline tests pass (pytest -m offline)
- Triggers on pull_request (all branches) and push (non-master branches)
- Matrix: Python 3.9, 3.10, 3.11 with fail-fast: false
- Installs runtime deps + test deps + drove-cli via --no-deps -e .
- Runs: pytest -m offline --tb=short --junit-xml -q
- Uploads junit XML + stdout artifacts (7-day retention)
- Writes per-matrix job summary to GITHUB_STEP_SUMMARY
- Posts / updates a sticky per-Python-version PR comment via
  actions/github-script@v7; comment is identified by an HTML
  marker <!-- drove-cli-offline-tests-py{ver} --> so re-runs
  update the existing comment rather than creating duplicates
- Job fails after comment posting when exit_code != 0
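
The sticky-comment mechanism boils down to: embed a hidden HTML marker in the comment body, search existing comments for it, and update in place when found. The decision logic, sketched independently of the GitHub API (the real workflow uses actions/github-script):

```python
def upsert_sticky_comment(comments, marker, body):
    """Return an updated comment list: replace the marker-tagged comment
    in place if one exists, append a new tagged comment otherwise."""
    tagged = body + "\n" + marker
    for i, existing in enumerate(comments):
        if marker in existing:
            return comments[:i] + [tagged] + comments[i + 1:]
    return comments + [tagged]
```

With the real API, "replace" maps to `issues.updateComment` and "append" to `issues.createComment`.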
pytest>=9.0.0 requires Python>=3.10, so the Python 3.9 matrix job was
failing at the 'Install test dependencies' step with an incompatible
package error.

Changes:
- .github/workflows/offline-tests.yml: matrix 3.9/3.10/3.11 → 3.10/3.11/3.12
- pyproject.toml: python = "^3.9" → "^3.10" (aligns with pytest 9 requirement)

Python 3.9 reaches EOL 2025-10-05 and is no longer a supported target.
pytest wraps the final summary line in ===...=== padding:
  '===== 106 passed, 124 deselected in 22s ====='

Both the bash 'Write job summary' step and the JS 'Post PR comment'
step were displaying the raw decorated line, producing ugly output like:
  ✅ **===== 106 passed, 124 deselected in 22s =====**

Fixes:
- bash: pipe summary line through sed 's/^[= ]*//;s/[= ]*$//'
- JS: .replace(/^[= ]+|[= ]+$/g, '').trim()
- bash: add -i flag to grep -qiE for case-insensitive failed/error check
- bash: add 'mkdir -p reports' guard before pytest invocation
- bash: guard 'grep -q FAILED' with 2>/dev/null to silence missing-file noise
- JS: tighten 'passed' regex to /\d+\s+passed/ (avoids false positive)
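
The padding-stripping fix is easy to check in isolation; here is the same pattern the JS side uses, in Python:

```python
import re

def strip_pytest_decoration(line: str) -> str:
    """Remove the '===== ... =====' padding pytest adds to its summary line."""
    return re.sub(r"^[= ]+|[= ]+$", "", line).strip()
```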
@github-actions

github-actions Bot commented Apr 11, 2026

✅ Offline Tests — Python 3.10

168 passed, 1 skipped, 116 deselected in 32.26s

Commit: 3e120f0 | View full log

@github-actions

github-actions Bot commented Apr 11, 2026

✅ Offline Tests — Python 3.11

168 passed, 1 skipped, 116 deselected in 31.76s

Commit: 3e120f0 | View full log

@github-actions

github-actions Bot commented Apr 11, 2026

✅ Offline Tests — Python 3.12

168 passed, 1 skipped, 116 deselected in 34.76s

Commit: 3e120f0 | View full log

@santanusinha changed the title from "Full help" to "Full help and Agent Skill" on Apr 11, 2026
- tests/test_offline_full_help.py: 62 offline pytest tests covering
  exit code, separator format (72 '='), section count (80), all plugin
  groups and representative sub-commands, alphabetical ordering, and
  mutual exclusion with sub-commands.  Full offline suite: 168 passed.

- AGENTS.md: update Python version from 3.9+ to 3.10+ to match
  pyproject.toml constraint (^3.10).

- tests/README.md: remove erroneous leading/trailing triple-quote
  fences (file is Markdown, not a Python module).

- tests/test_config.py: add module-level pytest.skip guard when
  ~/.drove is absent so CI runners without a config file are not broken.
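
A module-level skip guard of this kind tests for the config directory before any test in the module runs; sketched here as a pure helper so the pytest call stays in a comment (names hypothetical):

```python
import pathlib

def should_skip_config_tests(home: pathlib.Path) -> bool:
    """True when the runner has no drove config directory under `home`."""
    return not (home / ".drove").exists()

# Usage at the top of tests/test_config.py (module-level guard):
#
#   import pytest
#   if should_skip_config_tests(pathlib.Path.home()):
#       pytest.skip("~/.drove not present; skipping config tests",
#                   allow_module_level=True)
```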

- .github/workflows/offline-tests.yml: replace three separate pip
  install steps with snok/install-poetry@v1 + poetry install --with test
  (consistent with the existing pypi release workflow); switch cache
  from pip to poetry.

- requirements.txt: deleted (all deps are already declared in
  pyproject.toml; the file was redundant and slightly out of date).
actions/setup-python's cache: poetry option requires the poetry binary
to already be on PATH so it can locate the lock file for the cache key.
Since snok/install-poetry runs after setup-python, poetry is not yet
available at that point → the step fails with 'Set up Python' error.

Fix: remove cache: poetry from setup-python entirely.
snok/install-poetry handles its own dependency caching; no extra
pip/poetry cache configuration in setup-python is needed.
Contributor

@Prajwalprakash3722 left a comment


LGTM 👌🏼

@santanusinha santanusinha merged commit 6e43232 into master Apr 11, 2026
6 checks passed
