docs: remove dupe reference to wsl install (#7206)

* remove dupe reference to wsl install

* styleguide-review

---------

Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Jordan Frazier 2025-03-21 08:00:19 -07:00 committed by GitHub
commit 1153a301c6


@@ -5,18 +5,17 @@ slug: /integrations-nvidia-ingest-wsl2
Connect **Langflow** with **NVIDIA NIM** on an RTX Windows system with [Windows Subsystem for Linux 2 (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) installed.
[NVIDIA NIM](https://docs.nvidia.com/nim/index.html) provides containers to self-host GPU-accelerated inferencing microservices.
This example deploys the `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2** and uses it as a model component in **Langflow**.
[NVIDIA NIM (NVIDIA Inference Microservices)](https://docs.nvidia.com/nim/index.html) provides containers to self-host GPU-accelerated inferencing microservices.
In this example, you connect a model component in **Langflow** to a deployed `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2**.
For more information on NVIDIA NIM, see the [NVIDIA documentation](https://docs.nvidia.com/nim/index.html).
## Prerequisites
* [NVIDIA NIM WSL2 installed](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
* A NIM container deployed. The prerequisites vary between models.
* A NIM container deployed according to the model's instructions. Prerequisites vary between models.
For example, to deploy the `mistral-nemo-12b-instruct` NIM, follow the instructions for **Windows on RTX AI PCs (Beta)** on your [model's deployment overview](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md).
* [WSL2 installed](https://learn.microsoft.com/en-us/windows/wsl/install)
* Windows 11 build 23H2 (and later)
* Windows 11 build 23H2 or later
* At least 12 GB of RAM
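Before wiring the NIM into a flow, you can confirm the container is actually serving. This is a minimal sketch, not part of the documented steps: `nim_is_up` is a hypothetical helper, and it assumes the NIM exposes its OpenAI-compatible API at the default `http://0.0.0.0:8000/v1` from the deployment instructions above.

```python
import urllib.error
import urllib.request


def nim_is_up(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an OpenAI-compatible server answers at base_url/models.

    Hypothetical helper; assumes the default NIM deployment URL.
    """
    try:
        with urllib.request.urlopen(
            f"{base_url.rstrip('/')}/models", timeout=timeout
        ) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


# With the NIM deployed and running, this should print True.
print(nim_is_up("http://0.0.0.0:8000/v1"))
```

If this returns `False`, revisit the deployment instructions before continuing to the flow setup below.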
## Use the NVIDIA NIM in a flow
@@ -25,9 +24,7 @@ To connect the NIM you've deployed with Langflow, add the **NVIDIA** model compo
1. Create a [basic prompting flow](/get-started-quickstart).
2. Replace the **OpenAI** model component with the **NVIDIA** component.
3. In the **NVIDIA** component's **Base URL** field, add the URL your NIM is accessible at. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://0.0.0.0:8000/v1`.
3. In the **NVIDIA** component's **Base URL** field, add the URL where your NIM is accessible. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://0.0.0.0:8000/v1`.
4. In the **NVIDIA** component's **NVIDIA API Key** field, add your NVIDIA API Key.
5. Select your model from the **Model Name** dropdown.
6. Open the **Playground** and chat with your **NIM** model.
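The endpoint Langflow calls in the steps above can also be exercised directly, which helps isolate problems between the NIM and the flow. A sketch, assuming the NIM's OpenAI-compatible chat route; the model identifier passed here is an assumption, so use the exact name shown in the **Model Name** dropdown:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    # Target the OpenAI-compatible chat completions route exposed by the NIM.
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,  # assumed identifier; check the Model Name dropdown
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )


req = build_chat_request("http://0.0.0.0:8000/v1", "mistral-nemo-12b-instruct", "Hello!")
print(req.full_url)  # http://0.0.0.0:8000/v1/chat/completions
```

With the NIM running, `urllib.request.urlopen(req)` sends the request and returns the model's reply as JSON.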