Full help and Agent Skill #26
Merged
santanusinha merged 13 commits into master on Apr 11, 2026
Conversation
## Test Suite (122 tests, 100% pass rate)

- tests/conftest.py: shared fixtures, helpers, live_app and live_service fixtures
- tests/test_apps.py: 25 tests (smoke + full lifecycle: create/scale/restart/suspend/destroy)
- tests/test_appinstances.py: 15 tests (list/info/logs/tail with Popen+terminate)
- tests/test_localservices.py: 23 tests (smoke + lifecycle: activate/restart/lsinstances)
- tests/test_tasks.py: 8 tests (create/show/logs/wait-for-terminal/describe)
- tests/test_cluster.py: 13 tests (ping/summary/leader/endpoints/describe)
- tests/test_executor.py: 11 tests (list/info/appinstances/tasks/lsinstances/describe)
- tests/test_config.py: 8 tests (clusters/current/view/use)
- tests/test_cli_basics.py: 19 tests (help/completion/error-handling)
- tests/fixtures/: cli_test_app.json, cli_test_service.json, cli_test_task.json
- pytest.ini: timeout=600, markers (smoke/lifecycle/offline)

## Key Test Fixes Discovered

- The drove CLI exits 0 even on API errors; tests check stdout for error text
- The --sort flag accepts integer column indices only (not column name strings)
- apps scale/restart --wait exits before the state machine settles; tests poll via wait_for_app_state()
- Local services use state=ACTIVE (not RUNNING); fixed the wait_for_ls_state() target
- tasks list only shows active tasks; test_task_create uses tasks show to verify creation
- lsinstances use the SI- prefix (not LSI-/AI-); fixed the _get_ls_instance_id() prefix check
- Added _destroy_app_safe() and _destroy_ls_safe() retry helpers for transitional states
- appinstances tail is a streaming command; its test uses the Popen + terminate pattern

## Security: CVE dependency upgrades (fixes Dependabot alerts)

- urllib3: ^2.0.7 → ^2.2.2 (CVE-2024-37891)
- requests: ^2.31.0 → ^2.32.0 (CVE-2024-35195)
- tenacity: ^8.2.3 → ^9.0.0
- certifi: >=2024.7.4 added (CVE-2023-37920 / CVE-2024-39689)
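The Popen + terminate pattern mentioned for the streaming tail test can be sketched as follows. This is a minimal illustration, not the actual test code: the child process here is a stand-in for the real `drove appinstances tail` invocation, and the helper name is made up.

```python
# Sketch of the Popen + terminate pattern for streaming commands:
# start the process, let it stream briefly, terminate it, then
# collect whatever it wrote before the signal landed.
import subprocess
import sys
import time

def read_stream_briefly(cmd, run_for=1.0):
    """Run a never-ending streaming command for a short while, then stop it."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    try:
        time.sleep(run_for)      # let the stream emit some lines
    finally:
        proc.terminate()         # streaming commands never exit on their own
    out, _ = proc.communicate(timeout=10)
    return out

# Stand-in for the streaming CLI command: an unbuffered infinite logger
output = read_stream_briefly(
    [sys.executable, "-u", "-c",
     "import time\nwhile True: print('log line'); time.sleep(0.05)"]
)
```

The `terminate()` call is placed in a `finally` block so the child is reaped even if the sleep is interrupted, and `communicate()` drains the pipe afterwards to avoid a deadlocked, half-read buffer.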
- Add tests/mock_server.py: Flask-based Drove API stub
- DroveState class manages in-memory stateful data (apps, local
services, tasks, executor) seeded from realistic test fixtures
- MockDroveServer wraps Flask in a daemon thread on an ephemeral port
- Response shapes match exact field names expected by all drove CLI
plugin code (verified against plugins/*.py)
- Covers all 30+ API endpoints: ping, cluster, apps, appinstances,
localservices, lsinstances, tasks, executor, logfiles
- Fixed Werkzeug ephemeral port binding: find free port via bind(0),
close socket, then call make_server(host, port, app) without fd=
- Add tests/conftest.py enhancements:
- mock_drove_server fixture (session-scoped): starts MockDroveServer
once for the entire offline test session
- offline_env fixture (module-scoped): resets state + sets
DROVE_ENDPOINT so drove subprocess connects to the stub
- offline_executor_id fixture: exposes the seed executor UUID
- drove() / drove_ok() helpers available to all test modules
- Add offline test files (all marked pytestmark = pytest.mark.offline):
- tests/test_offline_cluster.py (15 tests): ping, summary, leader,
endpoints, describe
- tests/test_offline_apps.py (26 tests): list, summary, spec,
create/scale/suspend/restart/destroy lifecycle
- tests/test_offline_appinstances.py (13 tests): list, info
- tests/test_offline_executor.py (11 tests): list, info, app/task
instances
- tests/test_offline_localservices.py (30 tests): list, summary,
spec, create/activate/restart/deactivate/destroy lifecycle,
lsinstances list/info
- tests/test_offline_tasks.py (11 tests): create/show/kill
lifecycle, list
- Add AGENTS.md: contributor guidelines, test mode instructions,
mock server architecture, seed data inventory, known gotchas
- Update pytest.ini: describe offline marker
- Update pyproject.toml: add flask >=3.0.0 to test dependencies
All 106 offline tests pass. Live integration tests unaffected.
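The ephemeral-port fix above can be sketched like this: bind to port 0 so the kernel assigns a free port, close the probe socket, then pass `(host, port, app)` to `make_server`. The real suite uses `werkzeug.serving.make_server`; the stdlib `wsgiref` version used here for self-containment has the same signature. The WSGI app is a stand-in, not the actual MockDroveServer, and the endpoint path is illustrative.

```python
# Find a free port via bind(0), release it, then start a WSGI server
# on that port in a daemon thread (as MockDroveServer is described to do).
import json
import socket
import threading
import urllib.request
from wsgiref.simple_server import make_server  # real suite: werkzeug.serving.make_server

def find_free_port():
    """Let the OS pick a free port; a tiny race window remains until rebind."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))    # port 0 -> kernel chooses a free port
        return s.getsockname()[1]

def stub_app(environ, start_response):
    """Minimal JSON responder standing in for the Drove API stub."""
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps({"status": "SUCCESS"}).encode()]

port = find_free_port()
server = make_server("127.0.0.1", port, stub_app)   # note: no fd= argument
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/apis/v1/ping").read()
server.shutdown()
```

Closing the probe socket before `make_server` rebinds leaves a small race window where another process could grab the port, which is the accepted trade-off of this technique.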
…-existing resource deps

- conftest.py: APP_SPEC/SVC_SPEC/TASK_SPEC now point to the sample/ directory; APP_ID=TEST_APP-1, SVC_ID=TEST_LOCAL_SERVICE-1, TASK_ID=T0012, TASK_SOURCE=TEST_APP
- test_apps.py: fully rewritten; all classes require the live_app fixture; no EXISTING_APP refs
- test_localservices.py: fully rewritten; all classes require the live_service fixture
- test_tasks.py: imports TASK_SPEC from conftest instead of FIXTURES_DIR
- test_appinstances.py: CLI_TEST_APP → TEST_APP string references
- test_offline_apps.py: CLI_TEST_APP-1/cli_test_app.json → TEST_APP-1/APP_SPEC from conftest
- test_offline_localservices.py: CLI_TEST_SERVICE-1 → OFFLINE_TEST_SERVICE-1 (lifecycle); read-only tests keep EXISTING_SVC=TEST_LOCAL_SERVICE-1 (seeded)
- test_offline_tasks.py: imports TASK_SOURCE/TASK_ID/APP_SPEC/TASK_SPEC from conftest
- Deleted tests/fixtures/cli_test_app.json and cli_test_task.json
- Renamed tests/fixtures/cli_test_service.json → offline_test_service.json (name field changed to OFFLINE_TEST_SERVICE for lifecycle test isolation)
- README.md, AGENTS.md, tests/README.md: pre-existing resources table removed

Result: all 106 offline tests pass (pytest -m offline)
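The live fixtures these rewritten tests depend on have to wait for state transitions, since `--wait` can return before the state machine settles. A hedged sketch of such a polling helper; `wait_for_state`, `get_state`, and the state names below are illustrative stand-ins, not the suite's actual code:

```python
# Poll a state-reading callable until the target state appears or a
# timeout expires. In the real suite, get_state would parse CLI output.
import time

def wait_for_state(get_state, target, timeout=120.0, interval=2.0):
    """Poll get_state() until it returns `target` or the timeout expires."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = get_state()
        if last == target:
            return last
        time.sleep(interval)
    raise TimeoutError(f"state never reached {target!r}; last saw {last!r}")

# Simulated state machine that settles after a few polls
states = iter(["SCALING_REQUESTED", "DEPLOYING", "RUNNING"])
result = wait_for_state(lambda: next(states), "RUNNING", timeout=5, interval=0.01)
```

Using `time.monotonic()` for the deadline keeps the timeout immune to wall-clock adjustments during a long test run.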
- Triggers on pull_request (all branches) and push (non-master branches)
- Matrix: Python 3.9, 3.10, 3.11 with fail-fast: false
- Installs runtime deps + test deps + drove-cli via --no-deps -e .
- Runs: pytest -m offline --tb=short --junit-xml -q
- Uploads junit XML + stdout artifacts (7-day retention)
- Writes per-matrix job summary to GITHUB_STEP_SUMMARY
- Posts / updates a sticky per-Python-version PR comment via
actions/github-script@v7; comment is identified by an HTML
marker <!-- drove-cli-offline-tests-py{ver} --> so re-runs
update the existing comment rather than creating duplicates
- Job fails after comment posting when exit_code != 0
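The sticky-comment selection logic can be sketched in a few lines: locate an existing PR comment by its hidden HTML marker and update it in place, otherwise create a new one. The comment dicts below only mimic the GitHub API response shape, and the data is made up for illustration.

```python
# Find the per-Python-version sticky comment by its HTML marker so
# re-runs update the existing comment instead of creating duplicates.
def pick_sticky_comment(comments, py_version):
    marker = f"<!-- drove-cli-offline-tests-py{py_version} -->"
    for c in comments:
        if marker in c["body"]:
            return c["id"]      # re-run: update this comment in place
    return None                 # first run: create a new comment

comments = [
    {"id": 1, "body": "LGTM"},
    {"id": 2, "body": "<!-- drove-cli-offline-tests-py3.11 -->\n✅ 168 passed"},
]
update_id = pick_sticky_comment(comments, "3.11")   # existing -> update
create_new = pick_sticky_comment(comments, "3.12")  # missing -> create
```

An HTML comment survives GitHub's markdown rendering invisibly, which is what makes it a reliable marker for this pattern.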
pytest>=9.0.0 requires Python>=3.10, so the Python 3.9 matrix job was failing at the 'Install test dependencies' step with an incompatible-package error. Changes:
- .github/workflows/offline-tests.yml: matrix 3.9/3.10/3.11 → 3.10/3.11/3.12
- pyproject.toml: python = "^3.9" → "^3.10" (aligns with the pytest 9 requirement)
Python 3.9 reaches EOL 2025-10-05 and is no longer a supported target.
pytest wraps the final summary line in ===...=== padding:

    ===== 106 passed, 124 deselected in 22s =====

Both the bash 'Write job summary' step and the JS 'Post PR comment' step were displaying the raw decorated line, producing ugly output like:

    ✅ **===== 106 passed, 124 deselected in 22s =====**

Fixes:
- bash: pipe the summary line through sed 's/^[= ]*//;s/[= ]*$//'
- JS: .replace(/^[= ]+|[= ]+$/g, '').trim()
- bash: add the -i flag (grep -qiE) for a case-insensitive failed/error check
- bash: add a 'mkdir -p reports' guard before the pytest invocation
- bash: guard 'grep -q FAILED' with 2>/dev/null to silence missing-file noise
- JS: tighten the 'passed' regex to /\d+\s+passed/ (avoids a false positive on the bare word)
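A Python rendering of the same cleanup and check logic, for readers who want to trace the sed/JS one-liners. The function names are illustrative; only the regexes mirror the actual fixes.

```python
# Strip the '=' padding from pytest's decorated summary line and apply
# the tightened pass/fail classification described above.
import re

def clean_summary(line):
    """Remove leading/trailing '=' padding and surrounding whitespace."""
    return re.sub(r"^[= ]+|[= ]+$", "", line).strip()

def run_passed(line):
    """Require '<count> passed' and no failed/error (case-insensitive)."""
    return bool(re.search(r"\d+\s+passed", line)) and not re.search(
        r"failed|error", line, re.IGNORECASE
    )

raw = "===== 106 passed, 124 deselected in 22s ====="
cleaned = clean_summary(raw)   # '106 passed, 124 deselected in 22s'
ok = run_passed(raw)           # True
```

Requiring a digit before "passed" is what prevents a line such as "no tests passed the filter" from being misread as a green run.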
- Prints all command and subcommand options
- Added a Claude skill so that an AI agent can use it to interact with drove
✅ Offline Tests — Python 3.10: 168 passed, 1 skipped, 116 deselected in 32.26s
✅ Offline Tests — Python 3.11: 168 passed, 1 skipped, 116 deselected in 31.76s
✅ Offline Tests — Python 3.12: 168 passed, 1 skipped, 116 deselected in 34.76s
- tests/test_offline_full_help.py: 62 offline pytest tests covering exit code, separator format (72 '='), section count (80), all plugin groups and representative sub-commands, alphabetical ordering, and mutual exclusion with sub-commands. Full offline suite: 168 passed.
- AGENTS.md: update the Python version from 3.9+ to 3.10+ to match the pyproject.toml constraint (^3.10).
- tests/README.md: remove erroneous leading/trailing triple-quote fences (the file is Markdown, not a Python module).
- tests/test_config.py: add a module-level pytest.skip guard when ~/.drove is absent so CI runners without a config file are not broken.
- .github/workflows/offline-tests.yml: replace three separate pip install steps with snok/install-poetry@v1 + poetry install --with test (consistent with the existing PyPI release workflow); switch the cache from pip to poetry.
- requirements.txt: deleted (all deps are already declared in pyproject.toml; the file was redundant and slightly out of date).
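The separator and ordering checks described for test_offline_full_help.py can be sketched as below. The sample text is fabricated for illustration; the real tests run the CLI and assert on its actual output, and the 72-character width and section count come from the description above.

```python
# Check the two structural properties mentioned: separator lines that are
# exactly 72 '=' wide, and plugin groups listed in alphabetical order.
SEP = "=" * 72

def count_sections(text):
    """Count separator lines that are exactly 72 '=' characters wide."""
    return sum(1 for line in text.splitlines() if line == SEP)

def groups_alphabetical(groups):
    """True when the group names are already in sorted order."""
    return groups == sorted(groups)

sample = "\n".join([SEP, "appinstances", SEP, "apps", SEP, "cluster"])
n_sections = count_sections(sample)                                  # 3
ordered = groups_alphabetical(["appinstances", "apps", "cluster"])   # True
```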
actions/setup-python's cache: poetry option requires the poetry binary to already be on PATH so it can locate the lock file for the cache key. Since snok/install-poetry runs after setup-python, poetry is not yet available at that point, and the 'Set up Python' step fails.

Fix: remove cache: poetry from setup-python entirely. snok/install-poetry handles its own dependency caching; no extra pip/poetry cache configuration in setup-python is needed.
Added the --full-help command line option.

Sample session in an AI Assistant using the skill:
🤖 Check if drove CLI is installed
⚙️ Status: 0 -> Command executed successfully
🤖 Load the full drove CLI help to understand available commands
⚙️ Status: 0 -> Command executed successfully
🤖 Verify connectivity to the Drove cluster
$ drove cluster ping 2>&1 (Timeout: 15 seconds)
Cluster ping successful
⚙️ Status: 0 -> Command executed successfully
🤖 List all apps to find TEST_APP and its current app ID
⚙️ Status: 0 -> Command executed successfully
🤖 Found TEST_APP with app ID: TEST_APP-1, currently in MONITORING (suspended) state. Will now spin it up (scale to 1 instance) and then scale up.
🤖 Spin up TEST_APP by scaling to 1 instance
⚙️ Status: 0 -> Command executed successfully
🤖 Scale up TEST_APP to 2 instances
⚙️ Status: 0 -> Command executed successfully
🤖 Get summary of TEST_APP after spin up and scale up
⚙️ Status: 0 -> Command executed successfully
🤖 ✅ TEST_APP Spun Up & Scaled Successfully
Here's what was done for TEST_APP-1:
Steps Executed
- Found TEST_APP-1 in the MONITORING (suspended) state
Current App Summary
TEST_APP was previously suspended (MONITORING state) and is now fully RUNNING with 2 healthy instances. 🚀
🤖 ✅ Success. (Time taken: 70.706 seconds, Tokens used: 205987)