
Commit bfc93fa
docs: Fix broken links in documents (#428)
1 parent cd7412c

5 files changed
Lines changed: 364 additions & 360 deletions


README.md

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 <!--
-# Copyright 2020-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# Copyright 2020-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -382,7 +382,7 @@ Implementing this function is optional. No implementation of
 [`max_batch_size`](
 https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size),
 [dynamic_batching](
-https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher),
+https://github.com/triton-inference-server/server/blob/main/docs/user_guide/batcher.md#dynamic-batcher),
 [`input`](
 https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#inputs-and-outputs)
 and

examples/instance_kind/model.py

Lines changed: 6 additions & 2 deletions

@@ -25,9 +25,9 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

 import torch
-from torchvision import models
 import triton_python_backend_utils as pb_utils
 from torch.utils.dlpack import to_dlpack
+from torchvision import models


 class TritonPythonModel:
@@ -49,7 +49,11 @@ def initialize(self, args):
         device = "cuda" if args["model_instance_kind"] == "GPU" else "cpu"
         device_id = args["model_instance_device_id"]
         self.device = f"{device}:{device_id}"
-        self.model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).to(self.device).eval()
+        self.model = (
+            models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
+            .to(self.device)
+            .eval()
+        )

     def execute(self, requests):
         """
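The device-selection lines in this `initialize` hunk are independent of the model construction and can be sketched standalone. This is a minimal illustration of the same string logic, assuming the keys Triton passes in the `args` dict as shown in the diff; `select_device` is a hypothetical helper name, not part of the backend API:

```python
def select_device(args):
    # Same logic as the diff above: map Triton's instance-kind settings
    # to a torch-style device string such as "cuda:0" or "cpu:1".
    device = "cuda" if args["model_instance_kind"] == "GPU" else "cpu"
    device_id = args["model_instance_device_id"]
    return f"{device}:{device_id}"

# In the real model, Triton supplies these values (as strings) to initialize().
print(select_device({"model_instance_kind": "GPU", "model_instance_device_id": "0"}))  # cuda:0
```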

examples/jax/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 <!--
-# Copyright 2022-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# Copyright 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -81,7 +81,7 @@ docker run --gpus all -it --rm -p 8000:8000 -v `pwd`:/jax nvcr.io/nvidia/tritons
 Inside the container, we need to install JAX to run this example.

 We recommend using the `pip` method mentioned in the
-[JAX documentation](https://github.com/google/jax#pip-installation-gpu-cuda).
+[JAX documentation](https://github.com/jax-ml/jax?tab=readme-ov-file#instructions).
 Make sure that JAX is available in the same Python environment as other
 dependencies.