diff --git a/contents/docs/llm-analytics/_snippets/beta.mdx b/contents/docs/llm-analytics/_snippets/beta.mdx
index c4220557c4ee..4a80b22beeff 100644
--- a/contents/docs/llm-analytics/_snippets/beta.mdx
+++ b/contents/docs/llm-analytics/_snippets/beta.mdx
@@ -1,7 +1 @@
-import CalloutBox from 'components/Docs/CalloutBox'
-
-
-
-Evaluations is currently in beta. We'd love to [hear your feedback](https://app.posthog.com/llm-analytics/evaluations#panel=support%3Afeedback%3A%3Alow%3Atrue) as we develop this feature.
-
-
+<></>
diff --git a/contents/docs/llm-analytics/evaluations.mdx b/contents/docs/llm-analytics/evaluations.mdx
index 0598d72d91e4..5100c3d8c6fc 100644
--- a/contents/docs/llm-analytics/evaluations.mdx
+++ b/contents/docs/llm-analytics/evaluations.mdx
@@ -2,10 +2,6 @@
 title: Evaluations
 ---
-import BetaCallout from "./_snippets/beta.mdx"
-
-
-
 Evaluations automatically assess the quality of your LLM [generations](/docs/llm-analytics/generations) and return a pass/fail result with reasoning. PostHog supports two types of evaluations:
 - **LLM-as-a-judge** – Uses an LLM to score each generation against a prompt you define. Great for nuanced, subjective checks like tone, helpfulness, or hallucination detection.