All notable changes to the OpenClaw Home Assistant Integration will be documented in this file.
- Added `continue_conversation` heuristics for Assist / Voice PE follow-up dialog so voice satellites can automatically re-listen when the assistant asks a follow-up question. (Merged PR #11)
- Added `x-openclaw-message-channel: voice` on voice pipeline requests so the OpenClaw runtime can identify Assist sessions as voice instead of generic webchat. (Merged PR #13)
- Fixed the Active Model select entity so the selected model is actually passed to both chat-card and Assist requests.
- Fixed manifest documentation and issue-tracker links so they point to the integration repository instead of the add-on repository.
- Fixed chat-card cache-busting/version drift by aligning backend resource registration and the root loader shim with the current integration version.
- Fixed event entity lifecycle handling by moving event-bus subscriptions into `async_added_to_hass`.
- Fixed potential cross-agent Assist context bleed by namespacing fallback conversation IDs with the selected agent ID.
- Removed the unsupported `attachments` field from the `openclaw.send_message` service schema/docs/translations because it was accepted by the integration but never sent to the gateway.
- Extracted shared recursive response-text parsing into `helpers.py` to remove duplicated logic.
- Added a warning when the API client has to create a fallback aiohttp session because the primary Home Assistant-managed session is no longer available.
- Updated README documentation to reflect current model-selection, event-entity, and service behavior.
- Added an optional dedicated `voice_agent_id` integration option so Assist and microphone-originated chat requests can be routed to a separate OpenClaw agent without changing the default text/chat agent.
- Added an optional `assist_session_id` integration option so the native Home Assistant conversation agent can reuse a fixed OpenClaw session when desired.
- Leaving either of the new options blank preserves the existing behavior: voice requests still use the configured default agent, and Assist still uses the Home Assistant conversation or fallback session ID.
- Added voice-origin request headers for Home Assistant Assist / voice pipeline traffic: `x-openclaw-source: voice` and `x-ha-voice: true`.
- Added matching voice-origin support for microphone-triggered messages from the OpenClaw chat card, so card voice input and Assist voice pipeline requests are both marked as spoken interactions.
- This allows OpenClaw agents and hooks to detect spoken interactions and return TTS-friendly responses without affecting regular typed chat requests.
- Fixed manual configuration incorrectly forcing `Verify SSL certificate` to remain enabled when the checkbox was unchecked.
- This restores manual setup for self-signed HTTPS endpoints such as standalone OpenClaw `lan_https` deployments.
- Added configurable OpenClaw `agent_id` support for manual setup, integration options, and per-call `openclaw.send_message` service overrides.
- Added `x-openclaw-agent-id` routing support in the API client so messages can target non-main OpenClaw agents.
- Saving OpenClaw integration options now reloads the integration automatically, so updated `agent_id` and other runtime options apply immediately.
- Added service descriptions and translations for the new `agent_id` field in the configuration UI.
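As a sketch of the per-call override, the field can be supplied directly in the service data (the agent name `kitchen` is a hypothetical example, not a shipped default):

```yaml
service: openclaw.send_message
data:
  message: "What's on my calendar today?"
  # Hypothetical agent name; overrides the configured default agent
  # for this one call via the x-openclaw-agent-id header.
  agent_id: kitchen
```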
- Moved integration brand assets to Home Assistant's new local brand path: `custom_components/openclaw/brand/icon.png` and `custom_components/openclaw/brand/logo.png`.
- This aligns with the 2026.3+ requirement for custom integration brand images served via the local Brands Proxy API.
- Event entities (`event.openclaw_message_received`, `event.openclaw_tool_invoked`) — native HA EventEntity entities that fire on each assistant reply and tool invocation result. Selectable in the automation UI without YAML.
- Button entities — dashboard-friendly buttons for common actions:
  - Clear History — clears in-memory conversation history
  - Sync History — triggers a backend coordinator refresh
  - Run Diagnostics — fires a connectivity check against the gateway
- Select entity (`select.openclaw_active_model`) — exposes the list of available models from the gateway's `/v1/models` endpoint, allowing model switching from the HA dashboard. Selection is persisted in config entry options.
- Coordinator now caches the full model list (not just the first model) and exposes it via `coordinator.available_models`.
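A sketch of how the new event entities can be wired into an automation (the notification action is illustrative and not part of the integration):

```yaml
# Illustrative automation: react whenever the assistant replies.
trigger:
  - platform: state
    entity_id: event.openclaw_message_received
action:
  - service: persistent_notification.create
    data:
      message: "OpenClaw just answered a message."
```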
- Added configurable response timeout (`thinking_timeout`) to integration options (Settings → Integrations → OpenClaw → Configure → Response timeout (seconds)). Applies to all chat cards automatically.
- Added `thinking_timeout` card config key (seconds) for per-card override. Card YAML takes precedence over the integration setting, which takes precedence over the built-in default of 120 s.
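A minimal card YAML sketch of the per-card override (the 300-second value is illustrative):

```yaml
type: custom:openclaw-chat-card
# Per-card override in seconds; wins over the integration option,
# which in turn wins over the built-in 120 s default.
thinking_timeout: 300
```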
- Fixed chat always scrolling to the TOP instead of bottom on every message (both user and bot).
- Restored `requestAnimationFrame` in `_scrollToBottom()` — synchronous scrollTop assignment after innerHTML replacement does not persist in shadow DOM before the browser finalizes layout.
- Changed `_render()` to only upgrade `_autoScrollPinned` to `true` (never downgrade), preventing a race condition where rapid re-renders set it to `false` permanently.
- Fixed "Gateway: Unknown" badge — entity ID lookup now matches `sensor.openclaw*status` and `binary_sensor.openclaw*connected` patterns, covering `has_entity_name` variants like `sensor.openclaw_assistant_status`.
- Fixed chat scrolling UP to an older message instead of staying at the bottom when a bot reply arrives.
- Eliminated requestAnimationFrame race condition in `_scrollToBottom()` that caused `_autoScrollPinned` to be incorrectly set to false when rapid re-renders overlapped.
- Added `overflow-anchor: none` to the messages container to prevent browser scroll anchoring artifacts after innerHTML replacement.
- Added HTTPS / SSL support for connecting to OpenClaw gateways running in `lan_https` mode or behind TLS reverse proxies.
- Auto-discovery now detects `access_mode: lan_https` and connects to the internal gateway port automatically (no certificate setup needed for local add-ons).
- Added `Verify SSL certificate` option in manual config for self-signed certificate environments.
- Added `ssl_error` config flow error with actionable guidance.
- Added comprehensive remote connection documentation to README with setup table for all access modes.
- Fixed "400 Bad Request — plain HTTP request was sent to HTTPS port" when the add-on uses `lan_https` access mode.
- Fixed chat input focus loss while typing by preventing unnecessary card re-renders on frequent Home Assistant state updates.
- Improved Android Home Assistant app voice behavior by preferring Home Assistant TTS first and handling browser-TTS-unavailable cases correctly.
- Reduced repeated voice-mode start/stop cycles in silence to avoid frequent mobile chime sounds.
- Prevented voice mode from hearing and re-triggering on its own spoken TTS responses.
- Improved chat auto-scroll behavior so manual scroll position is respected when reading older messages.
- Improved cross-device chat consistency by forcing quick backend history sync after message events.
- Improved chat-card gateway header status detection to handle suffixed entity IDs (for example `_2`) and common connected/offline state variants.
- Improved continuous voice-mode responsiveness by finalizing recognition on speech end, reducing delay before sending spoken messages.
- Improved TTS fallback order: when no matching browser language voice is available, the card now tries Home Assistant TTS first before browser default-language fallback.
- Improved chat-card TTS resilience when the preferred language voice (for example `bg-BG`) is temporarily unavailable in the browser.
- Added automatic fallback to an available voice/language instead of hard-failing speech output.
- TTS error status now includes browser-provided error reason when available.
- Hardened voice startup/toggle flow in the chat card to prevent voice mode from getting stuck when browser speech startup fails.
- Added guarded async click handlers for voice buttons to avoid uncaught promise failures during voice mode enable/disable.
- Browser speech startup failures now set a clear in-card voice status and automatically revert continuous voice mode.
- Fixed chat-card settings sync to always read latest integration options instead of a potentially stale cached config entry.
- Wake-word disable now applies reliably after unchecking in integration options.
- Browser voice listening status now only shows wake-word requirement when wake word is actually enabled.
- Improved Home Assistant Assist conversation continuity by using stable fallback session IDs when `conversation_id` is missing.
- Assist now falls back to per-user (`assist_user_*`) or per-device (`assist_device_*`) session keys instead of a single generic default.
- Added integration option `browser_voice_language` (shown when `voice_provider` is `browser`) to explicitly control browser STT/TTS language.
- Browser voice provider now applies integration-configured browser voice language for both listening and spoken replies.
- Voice replies are no longer spoken for one-shot/manual voice sends unless continuous voice mode is actively running.
- Fixed voice-mode option sync so wake-word enabled/disabled changes are reloaded before toggling voice mode.
- Removed legacy always-on voice mode behavior that could force sticky voice-mode state.
- Added live gateway connection badge to the chat card header using existing OpenClaw status entities.
- Removed "Always-on voice mode" option from integration options UI and translations.
- Added OpenClaw Gateway tools endpoint integration (`POST /tools/invoke`) in the API client.
- Added new Home Assistant service `openclaw.invoke_tool` with support for tool/action/args/session routing fields.
- Added new event `openclaw_tool_invoked` for automation hooks on tool execution results.
- Added tool telemetry sensors: last tool name, status, duration, and invocation timestamp.
- Coordinator now performs a best-effort `sessions_list` tool invocation to populate session count/list when available.
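A sketch of the new service call, using the routing fields named above (the field values here are illustrative, not guaranteed tool names):

```yaml
service: openclaw.invoke_tool
data:
  tool: sessions_list   # illustrative tool name forwarded to POST /tools/invoke
  args: {}              # arguments passed through to the gateway tool
  session: default      # optional session routing field
```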
- Aligned gateway chat requests with OpenClaw session behavior by sending the OpenAI `user` field with the stable session ID.
- Added `x-openclaw-session-key` request header for explicit session routing on OpenClaw gateways.
- Improves multi-turn continuity where `/v1/chat/completions` would otherwise default to stateless per-request sessions.
- Removed per-request chat-history replay to avoid unnecessary prompt growth when gateway session memory is active.
- Improved conversation continuity by sending `session_id` in chat completion JSON payloads (in addition to the `X-Session-Id` header), for both regular and streaming requests.
- Reduces cases where the gateway treats each message as a new conversation when custom headers are ignored upstream.
- Voice mode now auto-falls back to browser speech when `voice_provider` is `assist_stt`, instead of blocking continuous mode with an error message.
- Reduced duplicated assistant replies in the chat card by deduplicating repeated `openclaw_message_received` payloads.
- Improved chat-card reliability when using the voice send flow by re-subscribing to `openclaw_message_received` after card reconnects.
- Added backend history-sync fallback after message send so user/assistant messages still appear when an event is missed.
- Assist STT microphone capture now uses `AudioWorkletNode` when available, with automatic fallback to `ScriptProcessorNode` for older browsers.
- Reduced browser deprecation noise by avoiding `ScriptProcessorNode` on modern browser engines.
- Expanded README voice documentation with practical guidance for `voice_provider` (`browser` vs `assist_stt`) and provider-specific troubleshooting.
- Lowered declared HACS minimum Home Assistant Core version to `2025.1.0` after compatibility hardening updates.
- Reduced `415 Unsupported Media Type` failures for `assist_stt` by fetching STT provider capabilities and negotiating metadata before upload.
- Assist STT now auto-matches provider-supported language values (for example `bg` vs `bg-BG`) when submitting transcription audio.
- Assist STT now aligns upload metadata sample rate/channels with provider-supported values when available.
- Added configurable voice input provider option: `browser` or `assist_stt`.
- Chat card now supports Home Assistant STT transcription mode (`assist_stt`) for manual mic input.
- Voice provider is now exposed through integration settings websocket payload and card configuration handling.
- Continuous voice mode remains available only for browser voice provider.
- Fixed Assist pipeline crash during intent recognition on some Home Assistant versions:
  - Replaced `conversation.IntentResponse` / `conversation.IntentResponseErrorCode` usage with `homeassistant.helpers.intent` equivalents in the OpenClaw conversation agent.
  - Resolves `AttributeError: module 'homeassistant.components.conversation' has no attribute 'IntentResponse'`.
- Treated `SpeechRecognition` `no-speech` as a normal listening condition instead of an error.
- Reduced voice error noise by avoiding retry scheduling for `no-speech` events.
- Added clearer in-card status text for silence/no-speech scenarios.
- Voice language selection now prioritizes the preferred Assist pipeline language (`assist_pipeline/pipeline/list`) instead of only using the Home Assistant UI language.
- Added separate TTS language resolution so spoken replies follow the Assist pipeline TTS language when available.
- Retained safe fallbacks to integration/UI/browser language when Assist pipeline data is unavailable.
- Treat `SpeechRecognition` `aborted` events as expected stop behavior (no error status / no noisy console error) when voice is intentionally stopped.
- Added a stop-request guard to avoid restart/error churn during recognition shutdown.
- Synchronized release versioning so manifest, frontend loader URL, and backend card resource URL all use the same cache-busting version.
- Improved backward compatibility for older Home Assistant Core builds by removing Python 3.12-only type alias syntax in integration runtime code.
- Added fallback import handling for `ConfigFlowResult` in config flow type hints.
- Added integration option `allow_brave_webspeech` to Settings → Devices & Services → OpenClaw → Configure.
- Frontend card now reads this option via `openclaw/get_settings` and applies it automatically.
- Card version bumped to `0.2.6` and cache-busting URL updated to `v=0.1.26`.
- Voice input language now prioritizes integration/HA locale settings more reliably (including frontend locale fallback), reducing unwanted fallback to English.
- Voice-mode assistant replies now use improved speech synthesis voice selection for the active language and better voice-loading handling.
- Reworked chat pending-response tracking to support multiple in-flight messages without leaving stuck typing indicators.
- Added proactive Brave browser guard for card voice input to avoid recurring `SpeechRecognition` `network` failures.
- Voice is now disabled by default on Brave with a clear status message and opt-in override (`allow_brave_webspeech: true`).
- Reduced noisy console output for `network` speech errors.
- Improved handling for repeated `SpeechRecognition` `network` failures in Brave-like browsers.
- Added clear in-card status when the browser speech backend appears blocked, and stopped endless retry loops in that case.
- Kept automatic locale fallback retry for transient speech-service issues.
- Improved speech-recognition language handling by normalizing language tags (e.g. `bg` → `bg-BG`).
- Added automatic fallback retry with browser locale on `SpeechRecognition` `network` errors.
- Updated versioned card resource URL to force clients to load the latest voice handling logic.
- Added automatic cleanup of duplicate/legacy OpenClaw Lovelace resources (`/local/...`, unversioned `/openclaw/...`, `/hacsfiles/...`) so only the current versioned resource is kept.
- Prevents loading multiple OpenClaw card generations (`v0.2.0` + `v0.2.1`) at the same time.
- Added versioned card resource URL (`/openclaw/openclaw-chat-card.js?v=...`) to reduce stale frontend caching issues.
- Added runtime source diagnostics (`import.meta.url`) in card console output to verify which script file is actually loaded.
- Updated root loader shim to import the versioned card bundle URL.
- Made `custom_components/openclaw/www/openclaw-chat-card.js` the single source of truth for the card implementation.
- Replaced the root `www/openclaw-chat-card.js` with a tiny loader shim that imports `/openclaw/openclaw-chat-card.js`.
- Removed manual-maintenance duplication between two full card script files.
- Voice input now requires wake word only for continuous voice mode, not for manual mic usage.
- Added in-card voice status feedback (listening, wake-word wait, sending, error) to make microphone behavior visible.
- Improved handling for unsupported speech-recognition browsers with explicit UI status.
- Improved chat history recovery after leaving and returning to dashboard.
- Card now retries backend history sync when websocket is not yet ready.
- History merge now updates when message count is unchanged but latest message content/timestamp differs.
- Added configurable wake-word support in integration options.
- Added optional always-on voice mode in integration options.
- Added websocket settings endpoint (`openclaw/get_settings`) used by the chat card to apply integration-level voice settings.
- Chat card voice recognition now supports two modes:
  - Manual voice input without wake word
  - Continuous listening with required wake word
- Added integration options for prompt/context behavior:
  - Include exposed-entities context
  - Max context characters
  - Context overflow strategy (`truncate` or `clear`)
  - Enable tool calls (`execute_service` / `execute_services`)
- Service and conversation requests now apply configurable context policy before sending prompts.
- Optional tool-call execution now mirrors Extended OpenAI-style service tool usage and feeds execution results back into a follow-up model response.
- Fixed Assist exposure context lookup to use Home Assistant's conversation assistant identifier (`conversation`) instead of `assist`, which could result in empty exposed-entity context.
- Added backend in-memory chat history and websocket endpoint (`openclaw/get_history`) so card responses are recoverable after leaving and returning to the dashboard.
- Normalized default session handling to `default` for service calls/events/history, avoiding session mismatch drops.
- Chat card now syncs history from backend on mount/initialization to restore missed assistant messages.
- Added native Home Assistant Assist entity exposure support to OpenClaw requests.
- OpenClaw now includes context for entities exposed in Settings → Voice assistants → Expose.
- Service chat (`openclaw.send_message`) and conversation agent (streaming + fallback) now pass exposed-entities context as a system prompt.
- Improved response parsing for nested/modern OpenAI-compatible payloads (including `output` / nested `content` shapes), which could previously result in missing UI replies.
- Applied the same recursive extraction strategy to both service-based chat responses and Assist conversation fallback parsing.
- Chat card no longer waits forever when the gateway returns a non-`choices[0].message.content` response shape.
- `openclaw.send_message` now extracts assistant text from multiple OpenAI-compatible formats (`choices`, `output_text`, `response`, `message`, `content`, `answer`).
- Added fallback event emission on API errors so the frontend always receives a response and exits the typing state.
- Made custom card registration idempotent (`customElements.get(...)` guards) to prevent duplicate-load exceptions that can block card discovery.
- Prevented duplicate `window.customCards` entries for `openclaw-chat-card`.
- Synced registration hardening in both packaged and root `www/` card scripts.
- Updated Lovelace resource registration to use the Home Assistant 2026.2 storage API (`hass.data[LOVELACE_DATA].resources`) with legacy fallback.
- Prevented silent resource-registration failure caused by reading only the old `hass.data["lovelace"]` key.
- Updated `hacs.json` minimum Home Assistant version to `2026.2.0`.
- Removed false-positive config-flow warning for `enable_openai_api=false` when Supervisor options are missing or use a different schema.
- Frontend auto-registration no longer gets stuck after an early startup failure.
- Card resource registration now retries for longer and can recover on integration reload.
- Frontend registration task is now de-duplicated while running and marked complete only after successful Lovelace resource creation.
- Resolved chat card startup race where Lovelace resources were attempted before HTTP/Lovelace were ready, causing `Custom element not found: openclaw-chat-card`.
- Frontend registration now retries and waits for Home Assistant startup readiness before giving up.
- Static JS path registration is now idempotent and only marked successful after the path is actually registered.
- Added MIT license file at repository root (`LICENSE`).
- Integration "Not loaded" state caused by `hass.http.register_static_path()` being called in `async_setup` before the HTTP server is ready.
- Removed `async_setup` and the synchronous `_async_register_static_path` helper.
- `_async_register_frontend` is now a proper `async` function, safe to fire-and-forget from `async_setup_entry`.
- Supports both the HA 2024.11+ `async_register_static_paths` / `StaticPathConfig` API and the legacy `register_static_path` API, with automatic fallback to a `/local/` URL.
- Frontend registration errors are caught and logged as warnings — they can never crash the integration load.
- Automatic Lovelace resource registration — the chat card JS is now served directly from inside the integration package (`custom_components/openclaw/www/`) via a registered static HTTP path at `/openclaw/openclaw-chat-card.js`. The integration also adds it to Lovelace's resource store automatically on first setup. No manual "Add resource" step is required.
- `async_setup()` registered so the static path is available before any config entry setup runs.
- `lovelace` added to `after_dependencies` so Lovelace is ready when we attempt to register the resource.
- `openclaw-chat-card.js` is now shipped inside `custom_components/openclaw/www/` (in addition to the top-level `www/` for backward compatibility with HACS installs that copy files to `config/www/`).
- If programmatic Lovelace registration fails (e.g. Lovelace not loaded), a clear warning with manual fallback instructions is logged instead of silently doing nothing.
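When automatic registration does fail, the manual fallback amounts to adding the resource yourself, e.g. for YAML-mode dashboards in `configuration.yaml` (or via Settings → Dashboards → Resources for storage-mode dashboards):

```yaml
lovelace:
  resources:
    - url: /openclaw/openclaw-chat-card.js
      type: module
```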
- Connection probe no longer uses `/v1/models` — the OpenClaw gateway does not implement that endpoint (only `/v1/chat/completions` is registered when `enable_openai_api` is enabled). Unrecognised routes fall through to the SPA catch-all and return HTML, which caused every connection check to fail with `openai_api_disabled` even when the API was actually enabled.
- `async_check_connection()` now POSTs to `/v1/chat/completions` with an empty messages body. The gateway's auth middleware validates the token first, then the endpoint returns a JSON error for the invalid body — proving the server is reachable, the API is enabled, and the token is accepted. No LLM call is made.
- Coordinator polling now uses `async_check_alive()` (lightweight base-URL GET) for connectivity, with `async_get_models()` as a best-effort call that is silently ignored if the endpoint doesn't exist.
- New `async_check_alive()` method — simple HTTP GET to the gateway base URL to confirm the gateway process is running (does not verify auth or API status).
- `async_check_connection()` no longer silently swallows `OpenClawApiError`. Previously any API-level error (e.g. the gateway returning HTML) was caught and converted to a generic "Cannot connect" message with no indication of the real cause. The error is now propagated to the config flow.
- Config flow now catches `OpenClawApiError` separately and shows a clear, actionable error message: "openai_api_disabled" — pointing the user to enable `enable_openai_api` in the add-on settings and restart.
- Auto-discovery now logs a `WARNING` when `enable_openai_api` is `false` in the add-on options, making the issue visible in the HA log before setup fails.
- The `enable_openai_api` add-on option (default `false`) must be `true` for the integration to connect. The `/v1/models` probe endpoint requires the OpenAI-compatible API layer to be active.
- Removed non-existent `/api/status` and `/api/sessions` gateway endpoints that caused the integration to receive HTML pages instead of JSON, resulting in `ContentTypeError` on every poll and a failed config flow connection check.
- `async_check_connection()` now probes `/v1/models` (the only reliable GET endpoint on the gateway) instead of the missing `/api/status`.
- `DataUpdateCoordinator` now performs a single `/v1/models` request per poll cycle instead of three separate requests (`/api/status`, `/api/sessions`, `/v1/models`).
- `DATA_GATEWAY_VERSION`, `DATA_UPTIME`, `DATA_SESSION_COUNT`, and `DATA_SESSIONS` report unavailable values (`None` / `0` / `[]`) since the gateway does not expose dedicated status or session endpoints.
- Removed `_model_poll_counter` (no longer needed with a unified poll).
- Integration icon and logo (sourced from the OpenClaw Assistant add-on).
- Added content-type guard in `_request()`: when the gateway returns a non-JSON response (e.g. an HTML redirect/login page), a clear `OpenClawApiError` is now raised instead of an unhandled `aiohttp.ContentTypeError`.
- Initial release of the OpenClaw Home Assistant Integration.
- Config flow with automatic add-on discovery and manual host/port/token entry.
- `sensor` platform: Status, Last Activity, Session Count, Active Model.
- `binary_sensor` platform: Gateway Connected.
- `conversation` platform: native HA Assist / Voice PE agent backed by the OpenClaw gateway `/v1/chat/completions` endpoint (SSE streaming supported).
- `DataUpdateCoordinator` polling the gateway on a configurable interval (default 30 s).
- Service calls: `openclaw.send_message`, `openclaw.clear_history`.
- Lovelace `openclaw-chat-card` custom card (`www/openclaw-chat-card.js`).
- HACS-compatible repository structure (`hacs.json`).