diff --git a/.github/workflows/spellcheck.yaml b/.github/workflows/spellcheck.yaml
index fcc66ad40b..da7be3decc 100644
--- a/.github/workflows/spellcheck.yaml
+++ b/.github/workflows/spellcheck.yaml
@@ -17,6 +17,6 @@ jobs:
         uses: actions/checkout@v6
       - name: Spell Check Repo
-        uses: crate-ci/typos@v1.43.5
+        uses: crate-ci/typos@v1.44.0
        with:
          files: docs/**/**/*.md docs/**/**/*.mdx
diff --git a/docs/genai/04_how_to_guides/02_embeddings.mdx b/docs/genai/04_how_to_guides/02_embeddings.mdx
index 972313dab2..a43dca4df0 100644
--- a/docs/genai/04_how_to_guides/02_embeddings.mdx
+++ b/docs/genai/04_how_to_guides/02_embeddings.mdx
@@ -4,7 +4,7 @@ While Decoder-only LLMs gained massive popularity via their usage in chatbots, E
 ```mermaid
 flowchart LR;
-    A["natual language text:<br/>*GenAI can be used for research*"]
+    A["natural language text:<br/>*GenAI can be used for research*"]
     B["encoder-only LLM"]
     C["vector embedding<br/>[0.052, 0.094, 0.244, ...]"]
     A-- "Input" -->B;
@@ -12,7 +12,7 @@ flowchart LR;
 ```

 :::tip
-Embeddings have the ability to encode the semantic meaning of the natual language text/images!
+Embeddings have the ability to encode the semantic meaning of the natural language text/images!
 :::

 The snippet below uses the `text-embedding-3-small` model to create 32-dimensional floating point vector embeddings for the input string:
diff --git a/docs/genai/04_how_to_guides/03_retrieval_augmented_generation.mdx b/docs/genai/04_how_to_guides/03_retrieval_augmented_generation.mdx
index 9c82e00e85..a92a6fc1e7 100644
--- a/docs/genai/04_how_to_guides/03_retrieval_augmented_generation.mdx
+++ b/docs/genai/04_how_to_guides/03_retrieval_augmented_generation.mdx
@@ -17,7 +17,7 @@ flowchart TB;
     C["encoder-only LLM"]
     D@{shape: procs, label: "text chunk embedding"}
     E[("vector database")]
-    F["natual language prompt"]
+    F["natural language prompt"]
     G["query embedding"]
     I["relevant chunks"]
     J["original prompt with added context"]