[DOCS-1901] Fix easy accessibility issues found by mint a11y CLI #2125

mdlinville wants to merge 5 commits into main

Conversation
- Add missing alt text to image refs, using nearby context
- Where possible, convert HTML image refs to Markdown
- Add missing aria-label attributes near explicit `<a id='...'>` anchors, which keep prominent historical anchor links from breaking
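A before/after sketch of the image-ref cleanup described above (the path and alt text here are hypothetical, for illustration only):

```md
<!-- before: HTML image ref with no alt text -->
<img src="/images/example_run_page.png" />

<!-- after: converted to Markdown, with alt text drawn from nearby context -->
![Run page overview](/images/example_run_page.png)
```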
📚 Mintlify Preview Links: 28 pages changed
🤖 Generated automatically when Mintlify deployment succeeds

🔗 Link Checker Results: ✅ All links are valid! No broken links were detected. Checked against: https://wb-21fd5541-docs-1901.mintlify.app
- Add aria-label to `<a>` elements with no link text
- Remove trailing slash from internal links
- Update some external URLs to avoid redirects
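For the anchor cleanup, a minimal sketch of what adding an aria-label to an empty `<a>` element might look like (the id and label text are made up for illustration):

```md
<!-- before: historical anchor with no link text, so no accessible name -->
<a id="team-trials"></a>

<!-- after: aria-label gives assistive tech something to announce,
     while the unchanged id keeps old #team-trials links working -->
<a id="team-trials" aria-label="Team trials"></a>
```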
**anastasiaguspan** left a comment:
wow, this was a lot of updates! impressive volume of cleanups :)
The [BIG-bench (Beyond the Imitation Game Benchmark)](https://github.com/google/BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities consisting of more than 200 tasks. The [BIG-Bench Hard (BBH)](https://github.com/suzgunmirac/BIG-Bench-Hard) is a suite of 23 most challenging BIG-Bench tasks that can be quite difficult to be solved using the current generation of language models.

- This tutorial demonstrates how we can improve the performance of our LLM workflow implemented on the **causal judgement task** from the BIG-bench Hard benchmark and evaluate our prompting strategies. We will use [DSPy](https://dspy-docs.vercel.app/) for implementing our LLM workflow and optimizing our prompting strategy. We will also use [Weave](/weave) to track our LLM workflow and evaluate our prompting strategies.
+ This tutorial demonstrates how we can improve the performance of our LLM workflow implemented on the **causal judgement task** from the BIG-bench Hard benchmark and evaluate our prompting strategies. We will use [DSPy](https://dspy.ai/) for implementing our LLM workflow and optimizing our prompting strategy. We will also use [Weave](/weave) to track our LLM workflow and evaluate our prompting strategies.
Btw, https://dspy.ai/ redirects to dspy.ai (no trailing slash); thought I'd mention that since much of this effort seemed to be about cleaning up trailing slashes :). This link occurs in a few places.
## Optimizing our DSPy Program

- Now, that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using a [DSPy teleprompter](https://dspy-docs.vercel.app/docs/building-blocks/optimizers) that can tune the parameters of a DSPy program to maximize the specified metrics. In this tutorial, we use the [BootstrapFewShot](https://dspy-docs.vercel.app/api/category/optimizers) teleprompter.
+ Now, that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using a [DSPy teleprompter](https://dspy.ai/learn/optimization/optimizers/) that can tune the parameters of a DSPy program to maximize the specified metrics. In this tutorial, we use the [BootstrapFewShot](https://dspy.ai/learn/optimization/optimizers) teleprompter.
- 'optimizers' redirects to 'optimizers/', i.e. with a trailing slash (again, if you're trying to limit such things)
- But more importantly, this sentence has the same link for two different link texts, which could be confusing. Maybe just link the first one, since BootstrapFewShot seems to be mentioned on that same page? Unless of course it was a copy-paste error and you meant the second one to go to the dedicated page at https://dspy.ai/api/optimizers/BootstrapFewShot/
### Report: Compare LLMs on Bedrock for text summarization with Weave

- The [Compare LLMs on Bedrock for text summarization with Weave](https://wandb.ai/byyoung3/ML_NEWS3/reports/Compare-LLMs-on-Amazon-Bedrock-for-text-summarization-with-W-B-Weave--VmlldzoxMDI1MTIzNw) report explains how to use Bedrock in combination with Weave to evaluate and compare LLMs for summarization tasks, code samples included.
\ No newline at end of file
+ The [Compare LLMs on Bedrock for text summarization with Weave](https://wandb.ai/byyoung3/ML_NEWS3/reports/Compare-LLMs-on-Amazon-Bedrock-for-text-summarization-with-W-B-Weave--VmlldzoxMDI1MTIzNw) report explains how to use Bedrock in combination with Weave to evaluate and compare LLMs for summarization tasks, code samples included.
the report name is [Evaluating LLMs on Amazon Bedrock]
## Tutorial: `mcp_demo` example

- The [`mcp_example`](https://github.com/wandb/weave/tree/master/examples/mcp_demo) demonstrates an integration between the Model Context Protocol (MCP) and Weave for tracing. It showcases how to instrument both the client and server components to capture detailed traces of their interactions.
+ The `mcp_demo` example demonstrates an integration between the Model Context Protocol (MCP) and Weave for tracing. It showcases how to instrument both the client and server components to capture detailed traces of their interactions.
I see you removed this link, so I wanted to check the page and see if it still had the 'thing' being demonstrated. The page says: "Clone the weave repository and navigate to the mcp_demo example:"

git clone https://github.com/wandb/weave
cd weave/examples/mcp_demo

However, when I go there (via web) and choose 'weave', there is no 'examples' folder... so I'm not sure this example is runnable as written? The stuff may have been moved.
  - [W&B User Settings](https://wandb.ai/settings)
- - [Anthropic Console](https://console.anthropic.com/settings/keys)
+ - [Anthropic Console](https://platform.claude.com/settings/keys)
I was not going to mention this, but then I saw later that for Hugging Face you actually updated a link to be the login redirect link. This one too gave me a login redirect link, and I had to log in to get to the actual page (though I figured that was quite reasonable for getting an API key). But if you really wanted to collapse those redirects, FYI.
  <Frame>
- <img src="/images/integrations/dagster_wb_metadata.png" />
+ <img src="/images/integrations/dagster_wb_metadata.png" alt="Screenshot showing W&B metadata added to Dagster asset" />
This comment (unfortunately) applies to every one of your alt texts. All guidance I've seen for alt text says to avoid phrases like:
- "Screenshot showing…"
- "Image of…"

Screen readers already announce that it's an image, so this becomes redundant.

Additional note (though for the sheer magnitude of the images in this PR this is probably out of scope! holy cow you did a lot in this PR ;) ): alt text should describe the meaning and content. I've actually been deep-diving alt text a little recently with LLMs, because I realized that soon LLMs will be the ones interpreting the images too, so we have to help them 'see' them. All these alt texts seem vague and don't mention any relevant parts of the UI that the screenshot is conveying. I wasn't familiar with this topic of course, so I spent a few minutes on this one as a guinea pig. I figured it was notable that this was not W&B's UI, so maybe:

alt="Dagster's UI showing an asset details view with attached W&B metadata, including references to a W&B project and run."

Again, you did a billion of these, so I wouldn't expect you to upgrade them now, but I wanted to share what I've been starting to implement over in weave for alt text. For example, in dynamic_leaderboards.mdx I have an alt text of "Evaluations page showing the Edit Leaderboard panel open on the right, with tabs for Models, Datasets, Scorers, and Metrics used to configure the leaderboard."

Theoretically, if the screenshots are more descriptive, that would eventually help agents navigate the real UI, or at least give better instructions to users asking questions about navigating it. :)
@@ -372,19 +372,19 @@ The asset is materialized with useful metadata on both sides of the integration:

  The proceeding image demonstrates the metadata from W&B that was added to the Dagster asset. This information would not be available without the integration.
Couldn't help but notice this: I know it wasn't one of your edits, but isn't this word choice funky? I think we mean "following"... for consistency if nothing else ;)
Description

[DOCS-1901] Fix easy accessibility issues found by `mint a11y` CLI

Out of scope:

Testing

- `mint dev`
- `mint broken-links`