From ddda5970aa29d83e738795195fbd425543964115 Mon Sep 17 00:00:00 2001 From: Mendon Kissling <59585235+mendonk@users.noreply.github.com> Date: Thu, 20 Mar 2025 21:18:55 -0400 Subject: [PATCH] docs: integrate nvidia NIM on WSL2 (#7192) * initial-content * more-on-wsl2 * update flow docs * cleanup * title-sidebar --------- Co-authored-by: Jordan Frazier --- .../Nvidia/integrations-nvidia-ingest.md | 2 +- .../Nvidia/integrations-nvidia-nim-wsl2.md | 33 +++++++++++++++++++ docs/sidebars.js | 1 + 3 files changed, 35 insertions(+), 1 deletion(-) create mode 100644 docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.md b/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.md index 17ff3b47a..fbfab46d6 100644 --- a/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.md +++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.md @@ -1,5 +1,5 @@ --- -title: Integrate Nvidia Ingest with Langflow +title: Integrate NVIDIA Ingest with Langflow slug: /integrations-nvidia-ingest --- diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md new file mode 100644 index 000000000..a2c8c9c54 --- /dev/null +++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.md @@ -0,0 +1,33 @@ +--- +title: Integrate NVIDIA NIMs with Langflow +slug: /integrations-nvidia-ingest-wsl2 +--- + +Connect **Langflow** with **NVIDIA NIM** on an RTX Windows system with [Windows Subsystem for Linux 2 (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) installed. + +[NVIDIA NIM](https://docs.nvidia.com/nim/index.html) provides containers to self-host GPU-accelerated inferencing microservices. +This example deploys the `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2** and uses it as a model component in **Langflow**.
+ +For more information on NVIDIA NIM, see the [NVIDIA documentation](https://docs.nvidia.com/nim/index.html). + +## Prerequisites + +* [NVIDIA NIM WSL2 installed](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html) +* A NIM container deployed. The prerequisites vary by model. For example, to deploy the `mistral-nemo-12b-instruct` NIM, follow the instructions for **Windows on RTX AI PCs (Beta)** in your [model's deployment overview](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md). +* [WSL2 installed](https://learn.microsoft.com/en-us/windows/wsl/install) +* Windows 11 build 23H2 or later +* At least 12 GB of RAM + +## Use the NVIDIA NIM in a flow + +To connect the NIM you've deployed with Langflow, add the **NVIDIA** model component to a flow. + +1. Create a [basic prompting flow](/get-started-quickstart). +2. Replace the **OpenAI** model component with the **NVIDIA** component. +3. In the **NVIDIA** component's **Base URL** field, add the URL where your NIM is accessible. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://0.0.0.0:8000/v1`. +4. In the **NVIDIA** component's **NVIDIA API Key** field, add your NVIDIA API key. +5. Select your model from the **Model Name** dropdown. +6. Open the **Playground** and chat with your **NIM** model. + + diff --git a/docs/sidebars.js b/docs/sidebars.js index e276eca87..2204eef5c 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -174,6 +174,7 @@ module.exports = { label: "NVIDIA", items: [ "Integrations/Nvidia/integrations-nvidia-ingest", + "Integrations/Nvidia/integrations-nvidia-nim-wsl2", ], }, ],
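Before wiring the NIM into the **NVIDIA** component, it can help to sanity-check that the endpoint responds. A minimal sketch, assuming the default `http://0.0.0.0:8000/v1` base URL from the deployment instructions above and the OpenAI-compatible chat completions route that NIM microservices expose; the helper names here are illustrative, not part of Langflow or NIM:

```python
import json
import urllib.request

# Base URL from step 3 of the flow instructions; adjust if your NIM
# listens on a different host or port.
BASE_URL = "http://0.0.0.0:8000/v1"
MODEL = "mistral-nemo-12b-instruct"


def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload for the NIM."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }


def query_nim(prompt: str) -> str:
    """POST the prompt to the NIM and return the first completion's text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the container running, calling `query_nim("Hello!")` should return a short reply from the model; inside Langflow, the **NVIDIA** component issues this same style of request for you.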