Modern Streamlit app to generate, review, and publish Microsoft Intune Endpoint Analytics detection and remediation scripts.
Creating Intune remediation scripts manually is slow and repetitive. This app helps you:
- draft detection and remediation PowerShell scripts with LLM support
- validate scripts before upload
- search an existing community script catalog and reuse proven patterns
- publish directly to Intune via a Microsoft Graph payload
## Tabs in the app

### Find Scripts

- Searches `JayRHa/EndpointAnalyticsRemediationScripts` via the GitHub API
- Shows scored matches (folder + file relevance)
- `Select` saves a project for review

### Generate

- Collapsible Model & Generation Settings block:
  - Provider (`Azure OpenAI` or `OpenAI`)
  - Preset / custom model
  - Mode (`Detection only` or `Detection and Remediation`)
  - Temperature and max tokens
- Description and optional extra requirements
- Script generation

### Review

- Shows selected community project preview (if selected in `Find Scripts`)
- `Use selected scripts in editor` to copy community scripts into the review editors
- Built-in validation hints and export as `.ps1`

### Publish

- Intune payload preview
- Graph auth connect/disconnect
- Upload to the device health scripts endpoint
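The Publish tab builds a Microsoft Graph payload for the device health scripts endpoint. A minimal sketch of what such a payload can look like — field names follow the Graph `deviceHealthScript` resource, and the function and sample scripts here are illustrative, not the app's actual code:

```python
import base64
import json

def build_health_script_payload(name: str, description: str,
                                detection_ps1: str, remediation_ps1: str) -> dict:
    """Build a Graph deviceHealthScript payload; script bodies are base64-encoded."""
    def encode(text: str) -> str:
        return base64.b64encode(text.encode("utf-8")).decode("ascii")

    return {
        "displayName": name,
        "description": description,
        "detectionScriptContent": encode(detection_ps1),
        "remediationScriptContent": encode(remediation_ps1),
        "runAsAccount": "system",        # or "user"
        "enforceSignatureCheck": False,
        "runAs32Bit": False,
    }

payload = build_health_script_payload(
    "BitLocker check", "Detects whether BitLocker is enabled",
    "if (Get-BitLockerVolume -MountPoint 'C:') { exit 0 } else { exit 1 }",
    "Enable-BitLocker -MountPoint 'C:'",
)
print(json.dumps(payload, indent=2))
```

The app would then POST a payload like this to the Graph `deviceManagement/deviceHealthScripts` endpoint using the authenticated session from the connect step.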
## Features

- Azure OpenAI and OpenAI provider support
- GPT-5 model support with automatic Responses API routing
- Automatic fallback for models that only allow the default temperature
- Community script discovery with a selection flow into review
- Script validation (exit code checks, risky command hints)
- Download support for `detection.ps1`, `remediation.ps1`, and `payload.json`
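The validation feature looks for things like missing explicit exit codes and risky commands. A toy sketch of such checks — the pattern list and hint wording are illustrative, not the app's actual rule set:

```python
import re

# Hypothetical examples of commands worth flagging in a remediation script
RISKY_PATTERNS = [
    r"\bRemove-Item\b.*-Recurse",
    r"\bFormat-Volume\b",
    r"\bInvoke-Expression\b",
]

def validate_script(ps1_text: str) -> list[str]:
    """Return a list of hint strings for a detection/remediation script."""
    hints = []
    # Detection scripts signal compliance via exit codes (0 = compliant,
    # 1 = remediation needed), so an explicit exit is expected.
    if not re.search(r"\bexit\s+\d+", ps1_text, re.IGNORECASE):
        hints.append("No explicit exit code found; detection scripts should "
                     "exit 0 (compliant) or 1 (remediation needed).")
    for pat in RISKY_PATTERNS:
        if re.search(pat, ps1_text, re.IGNORECASE):
            hints.append(f"Risky command matched: {pat}")
    return hints

print(validate_script("Get-Service | Out-Null"))          # exit-code hint
print(validate_script("Invoke-Expression $cmd\nexit 0"))  # risky-command hint
```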
## Requirements

```
streamlit==1.54.0
azure-identity==1.25.2
openai==2.21.0
requests==2.32.5
```
## Installation

Because many macOS Python installs are "externally managed" (PEP 668), use a virtual environment:

```bash
cd /path/to/Remediation-Creator
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
```

Create the secrets file:

```bash
cp .streamlit/secrets.toml.example .streamlit/secrets.toml
```

Run the app:

```bash
python -m streamlit run app.py
```

Then open http://localhost:8501.
## Model configuration

The Model / deployment set in the UI is primary; `secrets.toml` values are fallbacks only.

Example `.streamlit/secrets.toml`:

```toml
AZURE_OPENAI_KEY = "..."
AZURE_OPENAI_ENDPOINT = "https://YOUR-ENDPOINT.cognitiveservices.azure.com"
AZURE_OPENAI_CHATGPT_DEPLOYMENT = "gpt-5.2-chat" # fallback
AZURE_OPENAI_API_VERSION = "2025-04-01-preview"
OPENAI_API_KEY = "" # optional
OPENAI_MODEL = "gpt-5.2-chat" # fallback
APP_REGISTRATION_ID = "..."
GRAPH_SCOPE = "https://graph.microsoft.com/.default"
```

Notes:

- For `gpt-5*` models, the app prefers the Responses API automatically.
- If a model rejects a non-default temperature, the app retries without the custom temperature.
- If the UI model field is set, it overrides fallback values from the TOML file.
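The retry-without-temperature behavior described above can be sketched as a small wrapper. Here `create_fn` stands in for the SDK's completion call, and the error-message matching is an assumption for illustration:

```python
def create_with_temperature_fallback(create_fn, **kwargs):
    """Call create_fn with the given kwargs; if the model rejects a custom
    temperature, retry once with the temperature argument dropped."""
    try:
        return create_fn(**kwargs)
    except Exception as exc:
        # Some models only accept the default temperature; the API error
        # message typically names the offending parameter.
        if "temperature" in str(exc) and "temperature" in kwargs:
            kwargs.pop("temperature")
            return create_fn(**kwargs)
        raise

# Stub demonstrating the fallback without a real API call:
def fake_create(**kwargs):
    if "temperature" in kwargs:
        raise ValueError("Unsupported value: 'temperature' does not support 0.2")
    return {"ok": True, "kwargs": kwargs}

print(create_with_temperature_fallback(fake_create, model="gpt-5.2-chat", temperature=0.2))
```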
## Example workflow

1. Open `Find Scripts`
2. Search for a topic (example: `bitlocker`)
3. Click `Select` on a result
4. Open `Review`
5. Click `Use selected scripts in editor` (optional)
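The search step scores matches on folder and file relevance. A minimal sketch of one way to score repository entries against a query — the weights and sample project names are hypothetical, not the app's actual algorithm:

```python
def score_match(query: str, folder: str, files: list[str]) -> float:
    """Score a community project: folder-name hits weigh more than file-name hits."""
    q = query.lower()
    score = 0.0
    if q in folder.lower():
        score += 2.0                                         # folder relevance
    score += sum(0.5 for f in files if q in f.lower())       # file relevance
    return score

projects = {
    "Check-BitLockerStatus": ["Detect-BitLocker.ps1", "Remediate-BitLocker.ps1"],
    "Restart-Spooler": ["Detect.ps1", "Remediate.ps1"],
}
ranked = sorted(projects, key=lambda p: score_match("bitlocker", p, projects[p]),
                reverse=True)
print(ranked)  # the BitLocker project ranks first
```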
## Security

- Never commit real secrets (`.streamlit/secrets.toml` is gitignored)
- Rotate keys if they are exposed
- Review all generated scripts before production use
## Troubleshooting

**`pip` refuses to install (externally managed environment):** use the venv and install requirements there:

```bash
source .venv/bin/activate
python -m pip install -r requirements.txt
```

Do not install globally with Homebrew Python; use `.venv`.

**Model rejects the temperature setting:** handled automatically in current app versions; update to the latest code if you still see it.

**Deployment not found:** your model/deployment name does not exist on that Azure resource. Check your deployment name in Azure and set it in the UI Model / deployment field.
## Project structure

```
app.py
modules/
  community_search.py
  prompts.py
  utility.py
.streamlit/
  config.toml
  secrets.toml.example
requirements.txt
run.sh
run.ps1
```
## License

Apache 2.0



