From 4162d11710baed8930fb5f6c133cfebbc5940074 Mon Sep 17 00:00:00 2001
From: Jordan Frazier <122494242+jordanrfrazier@users.noreply.github.com>
Date: Tue, 1 Apr 2025 11:37:58 -0700
Subject: [PATCH] docs: fix localhost address for NIM docs (#7391)

Fix localhost address for NIM docs
---
 docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md
index 0d0d1642b..702c7af23 100644
--- a/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md
+++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md
@@ -24,7 +24,7 @@ To connect the NIM you've deployed with Langflow, add the **NVIDIA** model compo
 
 1. Create a [basic prompting flow](/get-started-quickstart).
 2. Replace the **OpenAI** model component with the **NVIDIA** component.
-3. In the **NVIDIA** component's **Base URL** field, add the URL where your NIM is accessible. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://0.0.0.0:8000/v1`.
+3. In the **NVIDIA** component's **Base URL** field, add the URL where your NIM is accessible. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://localhost:8000/v1`.
 4. In the **NVIDIA** component's **NVIDIA API Key** field, add your NVIDIA API Key.
 5. Select your model from the **Model Name** dropdown.
 6. Open the **Playground** and chat with your **NIM** model.
\ No newline at end of file
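The patch changes the documented Base URL from `http://0.0.0.0:8000/v1` to `http://localhost:8000/v1`, since `0.0.0.0` is a bind address for the server, not an address a client should dial. A minimal sketch of how the corrected value is composed, assuming the default WSL2 deployment host and port from the docs (the helper name here is hypothetical, not Langflow code):

```python
# Hypothetical helper: build the OpenAI-compatible base URL for a locally
# deployed NIM. "localhost" and port 8000 are the defaults assumed by the
# WSL2 deployment instructions; adjust if your NIM listens elsewhere.
def nim_base_url(host: str = "localhost", port: int = 8000) -> str:
    return f"http://{host}:{port}/v1"

# The value to paste into the NVIDIA component's Base URL field.
print(nim_base_url())  # → http://localhost:8000/v1
```

The trailing `/v1` matters: NIM exposes an OpenAI-compatible API under that path, which is what the **NVIDIA** component expects.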