[Docs] - Intro, Install, Quickstart workflow (#1765)

* Modified the general Getting Started flow to be clearer and faster.
This commit is contained in:
Mendon Kissling 2024-04-26 10:49:39 -04:00 committed by GitHub
commit 6a06af1cb2
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
14 changed files with 376 additions and 587 deletions


@ -1,97 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Blog Writer
Build a blog writer with OpenAI that uses URLs for reference content.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the Blog Writer project
1. From the Langflow dashboard, click **New Project**.
2. Select **Blog Writer**.
3. The **Blog Writer** flow is created.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/blog-writer.png",
dark: "img/blog-writer.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
This flow creates a one-shot prompt flow with **Prompt**, **OpenAI**, and **Chat Output** components, and augments the flow with reference content and instructions from the **URL** and **Instructions** components.
The **Prompt** component's default **Template** field looks like this:
```
Reference 1:
{reference_1}
---
Reference 2:
{reference_2}
---
{instructions}
Blog:
```
The `{instructions}` value is received from the **Value** field of the **Instructions** component.
The `reference_1` and `reference_2` values are received from the **URL** fields of the **URL** components.
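The substitution described above can be sketched in plain Python, assuming the component fills the template with named placeholders in the style of `str.format` (an illustration of the idea, not Langflow's actual implementation):

```python
# Sketch of how the Prompt component's template might be filled.
# The placeholder names mirror the template above; the substitution
# mechanism shown here is an assumption for illustration only.
template = """Reference 1:
{reference_1}
---
Reference 2:
{reference_2}
---
{instructions}
Blog:"""

prompt = template.format(
    reference_1="Content fetched from the first URL component",
    reference_2="Content fetched from the second URL component",
    instructions="Write a short blog post summarizing the references.",
)
print(prompt.splitlines()[0])  # Reference 1:
```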
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the Blog Writer flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can run your one-shot flow.
2. Click the **Lightning Bolt** icon to run your flow.
3. The **OpenAI** component constructs a blog post with the **URL** items as context.
The default **URL** values are for web pages at `promptingguide.ai`, so your blog post will be about prompting LLMs.
To write about something different, change the values in the **URL** components, and see what the LLM constructs.


@ -1,88 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Document QA
Build a question-and-answer chatbot with a document loaded from local memory.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the Document QA project
1. From the Langflow dashboard, click **New Project**.
2. Select **Document QA**.
3. The **Document QA** flow is created.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/document-qa.png",
dark: "img/document-qa.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This flow creates a basic chatbot with the **Chat Input**, **Prompt**, **OpenAI**, and **Chat Output** components.
This chatbot is augmented with the **Files** component, which loads a file from your local machine into the **Prompt** component as `{Document}`.
The **Prompt** component is instructed to answer questions based on the contents of `{Document}`.
Including a file with the prompt gives the **OpenAI** component context it may not otherwise have access to.
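Conceptually, loading a file into the prompt amounts to reading its text and substituting it for `{Document}`. A minimal sketch of the idea, where the template and file are illustrative stand-ins, not Langflow's internals:

```python
from pathlib import Path
import tempfile

# Write a stand-in for the local file the Files component would load.
log = Path(tempfile.gettempdir()) / "error.log"
log.write_text("Alembic detected new upgrade operations.")

# Illustrative template; the real Prompt component template may differ.
template = (
    "Answer questions using only this document:\n"
    "{Document}\n\n"
    "Question: What went wrong?"
)
prompt = template.format(Document=log.read_text())
print("Alembic" in prompt)  # True
```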
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
5. To select a document to load, in the **Files** component, click within the **Path** field.
1. Select a local file, and then click **Open**.
2. The file name appears in the field.
<Admonition type="tip">
The file must be of an extension type listed [here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
</Admonition>
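The authoritative extension list lives in the linked `utils.py`; a sketch of the kind of check involved, where the extension set below is an assumed subset for illustration, not the real list:

```python
from pathlib import Path

# Assumed subset of supported extensions -- see the linked utils.py
# for the authoritative list.
SUPPORTED = {".txt", ".md", ".json", ".csv", ".pdf"}

def is_supported(path: str) -> bool:
    """Return True if the file's extension is in the supported set."""
    return Path(path).suffix.lower() in SUPPORTED

print(is_supported("error_log.txt"))  # True
print(is_supported("archive.zip"))    # False
```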
## Run the Document QA flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
The bot responded:
```
The issue occurred during the execution of migrations in the application. Specifically, an error was raised by the Alembic library, indicating that new upgrade operations were detected that had not been accounted for in the existing migration scripts. The operation in question involved modifying the nullable property of a column (apikey, created_at) in the database, with details about the existing type (DATETIME()), existing server default, and other properties.
```
This result indicates that the bot received the loaded document and understood the context surrounding the vague question. It also correctly identified the issue in the error log, and followed up with appropriate troubleshooting suggestions. Nice!


@ -0,0 +1,27 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# 🤗 HuggingFace Spaces
Hugging Face Spaces offers a convenient alternative for running Langflow, with no local installation required.
First, go to the [Langflow Space](https://huggingface.co/spaces/Langflow/Langflow?duplicate=true) or the [Langflow 1.0 Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true).
Use a Chromium-based browser for the best experience. You'll be presented with the following screen:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/duplicate-space.png",
dark: "img/duplicate-space.png",
}}
style={{ width: "100%", margin: "20px auto" }}
/>
From here, just name your Space, define the visibility (Public or Private), and click on `Duplicate Space` to start the installation process. When that is done, you'll be redirected to the Space's main page to start using Langflow right away!
Once you get Langflow running, click on New Project in the top right corner of the screen. Langflow provides a range of example flows to help you get started.
To quickly try one of them, open a starter example, set up your API keys, and click ⚡ Run in the bottom right corner of the canvas. This opens Langflow's Interaction Panel with the chat console, text inputs, and outputs.


@ -0,0 +1,77 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# 📦 Install Langflow
<Admonition type="info">
Langflow v1.0 is also available in a [HuggingFace Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) if you'd rather try it out before installing locally.
</Admonition>
## Prerequisites
Langflow requires the following to be installed on your system.
* [Python 3.10](https://www.python.org/downloads/release/python-3100/)
* [pip](https://pypi.org/project/pip/) or [pipx](https://pipx.pypa.io/stable/installation/)
## Install Langflow
To install Langflow:
pip:
```bash
python -m pip install langflow -U
```
pipx:
```bash
pipx install langflow --python python3.10 --fetch-missing-python
```
Pipx can fetch the missing Python version for you with `--fetch-missing-python`, but you can also install the Python version manually.
## Install Langflow pre-release
Use `--force-reinstall` to ensure you have the latest version of Langflow and its dependencies.
To install a pre-release version of Langflow:
pip:
```bash
python -m pip install langflow --pre --force-reinstall
```
pipx:
```bash
pipx install langflow --python python3.10 --fetch-missing-python --pip-args="--pre --force-reinstall"
```
## Having a problem?
If you encounter a problem, see [Possible Installation Issues](/migration/possible-installation-issues).
To get help in the Langflow CLI:
```bash
python -m langflow --help
```
## ⛓️ Run Langflow
1. To run Langflow, enter the following command.
```bash
python -m langflow run
```
2. Confirm that a local Langflow instance starts by visiting `http://127.0.0.1:7860` in your browser.
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
3. Continue on to the [Quickstart](./quickstart.mdx).


@ -1,99 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
# Memory chatbot
This flow extends the [basic prompting flow](./basic-prompting.mdx) to include chat memory for unique SessionIDs.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the memory chatbot project
1. From the Langflow dashboard, click **New Project**.
2. Select **Memory Chatbot**.
3. The **Memory Chatbot** flow is created.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/memory-chatbot.png",
dark: "img/memory-chatbot.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
This flow creates a basic chatbot with the **Chat Input**, **Prompt**, and **OpenAI** components.
This chatbot is augmented with the **Chat Memory** component, which stores messages submitted via **Chat Input** and prepends them to subsequent prompts to OpenAI via `{context}`.
The **Chat Memory** component gives the **OpenAI** component a memory of previous questions.
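The mechanics can be sketched as prepending stored messages to each new prompt; this is an illustration of the idea, not the Chat Memory component's code, and the template is assumed:

```python
history: list[str] = []  # messages the Chat Memory component would store

def build_prompt(template: str, user_message: str) -> str:
    """Prepend prior messages as {context}, then record the new message."""
    context = "\n".join(history)
    history.append(user_message)
    return template.format(context=context, user_message=user_message)

template = "Previous messages:\n{context}\n\nUser: {user_message}"
build_prompt(template, "What is an LLM?")
second = build_prompt(template, "What did I just ask?")
print("What is an LLM?" in second)  # True
```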
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the memory chatbot flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
The bot will respond according to the template in the **Prompt** component.
3. Type more questions. In the **Outputs** log, your queries are logged in order. Up to 5 queries are stored by default. Try asking `What is the first subject I asked you about?` to see the point at which the LLM's memory runs out.
## Modify the Session ID field to have multiple conversations
`SessionID` is a unique identifier in Langchain for a conversation session between a chatbot and a client.
A `SessionID` is created when a conversation is initiated, and then associated with all subsequent messages during that session.
In the **Memory Chatbot** flow you created, the **Chat Memory** component references past interactions with **Chat Input** by **Session ID**.
You can demonstrate this by modifying the **Session ID** value to switch between conversation histories.
1. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value from `MySessionID` to `AnotherSessionID`.
2. Click the **Run** button to run your flow.
In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
3. Type a few questions to your bot.
4. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value back to `MySessionID`.
5. Run your flow.
The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
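Keyed histories are what make the switch above work: each Session ID maps to its own message list, so changing the ID changes which history the flow reads and writes. A minimal sketch of the idea:

```python
from collections import defaultdict

# Each session ID owns an independent conversation history.
histories: defaultdict = defaultdict(list)

def record(session_id: str, message: str) -> list:
    """Append a message to one session's history and return that history."""
    histories[session_id].append(message)
    return histories[session_id]

record("MySessionID", "Tell me about Cassandra.")
record("AnotherSessionID", "Write a haiku.")
record("MySessionID", "Summarize that.")

print(len(histories["MySessionID"]))       # 2
print(len(histories["AnotherSessionID"]))  # 1
```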
## Store Session ID as a Langflow variable
To store **Session ID** as a Langflow variable, in the **Session ID** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter a name like `customer_chat_emea`.
2. In the **Value** field, enter a value like `1B5EBD79-6E9C-4533-B2C8-7E4FF29E983B`.
3. Click **Save Variable**.
4. Apply this variable to **Chat Input**.


@ -0,0 +1,10 @@
# 📚 New to LLMs?
Large Language Models, or LLMs, are part of an exciting new world in computing.
We made Langflow for anyone to create with LLMs, and hope you'll feel comfortable installing Langflow and [getting started](./quickstart.mdx).
If you want to learn more about LLMs, prompt engineering, and AI models, Langflow recommends [promptingguide.ai](https://promptingguide.ai), an open-source repository of prompt engineering content maintained by AI experts.
PromptingGuide offers content for [beginners](https://www.promptingguide.ai/introduction/basics) and [experts](https://www.promptingguide.ai/techniques/cot), as well as the latest [research papers](https://www.promptingguide.ai/papers) and [test results](https://www.promptingguide.ai/research) fueling AI's progress.
Wherever you are on your AI journey, it's helpful to keep Prompting Guide open in a tab.


@ -2,26 +2,44 @@ import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Basic prompting
# ⚡️ Quickstart
Prompts serve as the inputs to a large language model (LLM), acting as the interface between human instructions and computational tasks.
By submitting natural language requests in a prompt to an LLM, you can obtain answers, generate text, and solve problems.
This article demonstrates how to use Langflow's prompt tools to issue basic prompts to an LLM, and how various prompting strategies can affect your outcomes.
This quickstart demonstrates how to install Langflow, run it locally, build a basic prompt flow, and modify that prompt for different outcomes.
## Prerequisites
1. Install Langflow.
* [Python 3.10](https://www.python.org/downloads/release/python-3100/)
* [pip](https://pypi.org/project/pip/) or [pipx](https://pipx.pypa.io/stable/installation/)
* [OpenAI API key](https://platform.openai.com)
## Install Langflow
<Admonition type="info">
Langflow v1.0 is also available in a [HuggingFace Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) if you'd rather try it out before installing locally. This quickstart will run there, too.
</Admonition>
1. To install Langflow, enter the following command in pip or pipx:
pip:
```bash
python -m pip install langflow --pre
python -m pip install langflow -U
```
pipx:
```bash
pipx install langflow --python python3.10 --fetch-missing-python
```
Pipx can fetch the missing Python version for you with `--fetch-missing-python`, but you can also install the Python version manually.
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
@ -35,12 +53,22 @@ Result:
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Go to `http://127.0.0.1:7860` and confirm the Langflow UI is available.
3. Create an [OpenAI API key](https://platform.openai.com).
<Admonition type="info">
If you encounter a problem, see [Possible Installation Issues](/migration/possible-installation-issues).
</Admonition>
## Create the basic prompting project
Now that you have Langflow installed and running, let us formally welcome you to Langflow! 👋
You will use Langflow's prompt tools to issue prompts to the OpenAI LLM.
Prompts serve as the inputs to a large language model (LLM), acting as the interface between human instructions and computational tasks.
By submitting natural language requests in a prompt to an LLM, you can obtain answers, generate text, and solve problems.
1. From the Langflow dashboard, click **New Project**.
2. Select **Basic Prompting**.
3. The **Basic Prompting** flow is created.
@ -48,8 +76,8 @@ Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-star
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/basic-prompting.png",
dark: "img/basic-prompting.png",
light: "img/quickstart.png",
dark: "img/quickstart.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
@ -78,6 +106,14 @@ The **Edit Prompt** window opens.
3. Run the basic prompting flow again.
The response will be markedly different.
## Next steps
Well done! You've built your first prompt in Langflow. 🎉
By adding Langflow components to this prompt, you can build all sorts of interesting flows.
* [Memory chatbot](/guides/memory-chatbot.mdx)
* [Blog writer](/guides/blog-writer.mdx)
* [Document QA](/guides/document-qa.mdx)


@ -1,195 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# 🌟 RAG with Astra DB
This guide will walk you through how to build a RAG (Retrieval Augmented Generation) application using **Astra DB** and **Langflow**.
[Astra DB](https://www.datastax.com/products/datastax-astra?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=astradb) is a fully managed, cloud-native database-as-a-service built on Apache Cassandra. It simplifies operations, reduces costs, and runs on the same technology that powers the largest Cassandra deployments in the world.
In this guide, we will use Astra DB as a vector store to store and retrieve the documents that will be used by the RAG application to generate responses.
<Admonition type="tip">
This guide assumes that you have Langflow up and running. If you are new to
Langflow, you can check out the [Getting Started](/) guide.
</Admonition>
TL;DR:
- [Create a free Astra DB account](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account)
- Duplicate our [Langflow 1.0 Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
- Create a new database, get a **Token** and the **API Endpoint**
- Click on the **New Project** button and look for Vector Store RAG. This will create a new project with the necessary components
- Import the project into Langflow by dropping it on the Canvas or My Collection page
- Update the **Token** and **API Endpoint** in the **Astra DB** components
- Update the OpenAI API key in the **OpenAI** components
- Run the ingestion flow which is the one that uses the **Astra DB** component
- Click on the ⚡ _Run_ button and start interacting with your RAG application
# First things first
## Create an Astra DB Database
To get started, you will need to [create an Astra DB database](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-an-astradb-database).
Once you have created an account, you will be taken to the Astra DB dashboard. Click on the **Create Database** button.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-create-database.png",
dark: "img/astra-create-database.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Now you will need to configure your database. Choose the **Serverless (Vector)** deployment type, and pick a Database name, provider and region.
After you have configured your database, click on the **Create Database** button.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-configure-deployment.png",
dark: "img/astra-configure-deployment.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Once your database is initialized, on the right of the page you will see the _Database Details_ section, which contains a button to copy the **API Endpoint** and another to generate a **Token**.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-generate-token.png",
dark: "img/astra-generate-token.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
Now we are all set to start building our RAG application using Astra DB and Langflow.
## (Optional) Duplicate the Langflow 1.0 HuggingFace Space
If you haven't already, now is the time to launch Langflow. To make things easier, you can duplicate our [Langflow 1.0 Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) which sets up a Langflow instance just for you.
## Open the Vector Store RAG Project
To get started, click on the **New Project** button and look for the **Vector Store RAG** project. This will open a starter project with the necessary components to run a RAG application using Astra DB.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/drag-and-drop-flow.png",
dark: "img/drag-and-drop-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This project consists of two flows. The simpler one is the **Ingestion Flow** which is responsible for ingesting the documents into the Astra DB database.
Your first step should be to understand what each flow does and how they interact with each other.
The ingestion flow consists of:
- **Files** component that uploads a text file to Langflow
- **Recursive Character Text Splitter** component that splits the text into smaller chunks
- **OpenAIEmbeddings** component that generates embeddings for the text chunks
- **Astra DB** component that stores the text chunks in the Astra DB database
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-flow.png",
dark: "img/astra-ingestion-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
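The four ingestion steps above can be sketched end to end. The splitter below is a naive stand-in for the Recursive Character Text Splitter, and the embedding and storage calls are toy stand-ins for the OpenAIEmbeddings and Astra DB components:

```python
def split_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list:
    """Naive character splitter standing in for the real text splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(chunk: str) -> list:
    """Stand-in for OpenAIEmbeddings; returns a toy one-dimensional vector."""
    return [len(chunk) / 100.0]

# "Store" each chunk with its vector, as the Astra DB component would.
document = "Astra DB is a cloud-native database built on Apache Cassandra."
store = [{"text": c, "vector": embed(c)} for c in split_text(document)]
print(len(store) > 1)  # True
```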
Now, let's update the **Astra DB** and **Astra DB Search** components with the **Token** and **API Endpoint** that we generated earlier, and the OpenAI Embeddings components with your OpenAI API key.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-fields.png",
dark: "img/astra-ingestion-fields.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Now run it! This ingests the text data from your file into the Astra DB database.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-run.png",
dark: "img/astra-ingestion-run.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Now, on to the **RAG Flow**. This flow is responsible for generating responses to your queries. It will define all of the steps from getting the User's input to generating a response and displaying it in the Interaction Panel.
The RAG flow is a bit more complex. It consists of:
- **Chat Input** component that defines where to put the user input coming from the Interaction Panel
- **OpenAI Embeddings** component that generates embeddings from the user input
- **Astra DB Search** component that retrieves the most relevant Records from the Astra DB database
- **Text Output** component that turns the Records into Text by concatenating them and also displays it in the Interaction Panel
- One interesting point you'll see here is that this component is named `Extracted Chunks`, and that is how it will appear in the Interaction Panel
- **Prompt** component that takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model
- **OpenAI** component that generates a response to the prompt
- **Chat Output** component that displays the response in the Interaction Panel
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow.png",
dark: "img/astra-rag-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
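Retrieval can be sketched as nearest-neighbor search over the stored vectors followed by prompt assembly. Everything here is a toy stand-in for the Astra DB Search, Prompt, and OpenAI components, with hand-made vectors in place of real embeddings:

```python
import math

# Toy vector store: (text, vector) pairs the ingestion flow would have written.
store = [("Langflow builds flows visually.", [0.1, 0.9]),
         ("Astra DB stores vectors.", [0.8, 0.2])]

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vector: list, k: int = 1) -> list:
    """Return the k most similar chunks (stand-in for Astra DB Search)."""
    ranked = sorted(store, key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Pretend [0.9, 0.1] is the embedding of the user's question.
context = "\n".join(search([0.9, 0.1]))
prompt = (f"Answer using this context:\n{context}\n\n"
          "Question: Where are vectors stored?")
print("Astra DB" in prompt)  # True
```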
To run it, all we have to do is click the ⚡ _Run_ button and start interacting with your RAG application.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-run.png",
dark: "img/astra-rag-flow-run.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This opens the Interaction Panel, where you can chat with your data.
Because this flow has a **Chat Input** and a **Text Output** component, the Panel displays a chat input at the bottom and the Extracted Chunks section on the left.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-interaction-panel.png",
dark: "img/astra-rag-flow-interaction-panel.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Once we interact with it, we get a response, and the Extracted Chunks section is updated with the retrieved records.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-interaction-panel-interaction.png",
dark: "img/astra-rag-flow-interaction-panel-interaction.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
And that's it! You have successfully run a RAG application using Astra DB and Langflow.
# Conclusion
In this guide, we have learned how to run a RAG application using Astra DB and Langflow.
We have seen how to create an Astra DB database, import the Astra DB RAG Flows project into Langflow, and run the ingestion and RAG flows.


@ -4,7 +4,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Blog Writer
# Blog writer
Build a blog writer with OpenAI that uses URLs for reference content.
@ -12,7 +12,7 @@ Build a blog writer with OpenAI that uses URLs for reference content.
1. Install Langflow.
```bash
pip install langflow
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:


@ -0,0 +1,82 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Document QA
Build a question-and-answer chatbot with a document loaded from local memory.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the Document QA project
1. From the Langflow dashboard, click **New Project**.
2. Select **Document QA**.
3. The **Document QA** flow is created.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/document-qa.png",
dark: "img/document-qa.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This flow creates a basic chatbot with the **Chat Input**, **Prompt**, **OpenAI**, and **Chat Output** components.
This chatbot is augmented with the **Files** component, which loads a file from your local machine into the **Prompt** component as `{Document}`.
The **Prompt** component is instructed to answer questions based on the contents of `{Document}`.
Including a file with the prompt gives the **OpenAI** component context it may not otherwise have access to.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
5. To select a document to load, in the **Files** component, click within the **Path** field.
1. Select a local file, and then click **Open**.
2. The file name appears in the field.
<Admonition type="tip">
The file must be of an extension type listed [here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
</Admonition>
## Run the Document QA flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
The bot responded:
```
The issue occurred during the execution of migrations in the application. Specifically, an error was raised by the Alembic library, indicating that new upgrade operations were detected that had not been accounted for in the existing migration scripts. The operation in question involved modifying the nullable property of a column (apikey, created_at) in the database, with details about the existing type (DATETIME()), existing server default, and other properties.
```
This result indicates that the bot received the loaded document and understood the context surrounding the vague question. It also correctly identified the issue in the error log, and followed up with appropriate troubleshooting suggestions. Nice!


@ -0,0 +1,99 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
# Memory chatbot
This flow extends the [basic prompting flow](./basic-prompting.mdx) to include chat memory for unique SessionIDs.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```bash
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
Alternatively, go to [HuggingFace Spaces](https://docs.langflow.org/getting-started/hugging-face-spaces) or [Lightning.ai Studio](https://lightning.ai/ogabrielluiz-8j6t8/studios/langflow) for a pre-built Langflow test environment.
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the memory chatbot project
1. From the Langflow dashboard, click **New Project**.
2. Select **Memory Chatbot**.
3. The **Memory Chatbot** flow is created.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/memory-chatbot.png",
dark: "img/memory-chatbot.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
This flow creates a basic chatbot with the **Chat Input**, **Prompt**, and **OpenAI** components.
This chatbot is augmented with the **Chat Memory** component, which stores messages submitted via **Chat Input** and prepends them to subsequent prompts to OpenAI via `{context}`.
This gives the **OpenAI** component a memory of previous questions.
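The `{context}` mechanism can be modeled in a few lines of plain Python. This is an illustrative sketch of the pattern, not Langflow's implementation:

```python
# Illustrative model of the {context} mechanism: prior messages are joined
# and prepended to every new prompt. This is a sketch, not Langflow's code.

history: list[str] = []

PROMPT = "{context}\nUser: {user_message}\nAI:"

def build_prompt(user_message: str) -> str:
    context = "\n".join(history)           # prepend everything said so far
    history.append(f"User: {user_message}")
    return PROMPT.format(context=context, user_message=user_message)

first = build_prompt("What is an LLM?")
second = build_prompt("Summarize your last answer.")
# `second` now carries the first question as context, so the model can
# resolve references like "your last answer".
```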
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the memory chatbot flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
The bot will respond according to the template in the **Prompt** component.
3. Type more questions. In the **Outputs** log, your queries are logged in order. Up to five queries are stored by default. Try asking `What is the first subject I asked you about?` to see the point at which the LLM's memory runs out.
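The five-message default behaves like a fixed-size queue, which is why the earliest query eventually disappears. A quick sketch:

```python
from collections import deque

# A rolling memory window: with only the 5 most recent messages kept,
# the first query is evicted as soon as a sixth one arrives.
memory = deque(maxlen=5)

for i in range(1, 7):               # submit six queries
    memory.append(f"query {i}")

remembered = list(memory)           # 'query 1' is gone; queries 2-6 remain
```

Once `query 1` has been evicted, the bot can no longer answer `What is the first subject I asked you about?`.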
## Modify the Session ID field to have multiple conversations
`SessionID` is a unique identifier in LangChain for a conversation session between a chatbot and a client.
A `SessionID` is created when a conversation is initiated, and then associated with all subsequent messages during that session.
In the **Memory Chatbot** flow you created, the **Chat Memory** component references past interactions with **Chat Input** by **Session ID**.
You can demonstrate this by modifying the **Session ID** value to switch between conversation histories.
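The behavior described above can be modeled as a dictionary of histories keyed by session ID: switching the ID switches which conversation is "remembered". An illustrative sketch, not Langflow's implementation:

```python
from collections import defaultdict

# Session-scoped memory: one history per session ID. Changing the ID
# changes which conversation the bot recalls. Illustrative only.
histories: dict[str, list[str]] = defaultdict(list)

def chat(session_id: str, message: str) -> list[str]:
    histories[session_id].append(message)
    return histories[session_id]    # the history the bot sees for this session

chat("MySessionID", "Tell me about Langflow.")
chat("AnotherSessionID", "Start a fresh topic.")   # a separate conversation
seen = chat("MySessionID", "What did I ask first?")
```

Each session keeps its own list, so switching back to `MySessionID` restores the original conversation intact.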
1. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value from `MySessionID` to `AnotherSessionID`.
2. Click the **Run** button to run your flow.
In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
3. Type a few questions to your bot.
4. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value back to `MySessionID`.
5. Run your flow.
The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
## Store Session ID as a Langflow variable
To store **Session ID** as a Langflow variable, in the **Session ID** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter a name like `customer_chat_emea`.
2. In the **Value** field, enter a value like `1B5EBD79-6E9C-4533-B2C8-7E4FF29E983B`.
3. Click **Save Variable**.
4. Apply this variable to the **Session ID** field of the **Chat Input** component.

import Admonition from "@theme/Admonition";
# 👋 Welcome to Langflow

Langflow is a low-code platform that allows you to integrate AI into everything you do. Use Langflow's simple but powerful UI to build any AI application you can dream up, from simple to complex.

## 🚀 First steps

* [Install Langflow](/getting-started/install-langflow) - Install and start a local Langflow server.
* [Quickstart](/getting-started/quickstart) - Install Langflow, create a flow, and run it.
* [HuggingFace Spaces](/getting-started/huggingface-spaces) - Duplicate the Langflow preview space and try it out before you install.
* [New to LLMs?](/getting-started/new-to-llms) - Learn more about LLMs, prompting, and more at [promptingguide.ai](https://promptingguide.ai).

## Installation

Make sure you have **Python 3.10** installed on your system.

You can install **Langflow** with [pipx](https://pipx.pypa.io/stable/installation/) or with pip. Pipx can fetch a missing Python version for you, but you can also install it manually.

```bash
# Check that you have Python 3.10 installed
python -m pip install langflow -U
# or
pipx install langflow --python python3.10 --fetch-missing-python
```

Or install a pre-release version with:

```bash
python -m pip install langflow --pre --force-reinstall
# or
pipx install langflow --python python3.10 --fetch-missing-python --pip-args="--pre --force-reinstall"
```

We recommend using `--force-reinstall` to ensure you have the latest version of Langflow and its dependencies.

<Admonition type="tip">
  <p>
    Check out the [Possible Installation Issues
    section](/migration/possible-installation-issues) if you encounter any
    problems.
  </p>
</Admonition>

### ⛓️ Running Langflow

Langflow can be run in a variety of ways, including with the command-line interface (CLI) or in HuggingFace Spaces.

```bash
python -m langflow run # or langflow --help
```

#### 🤗 HuggingFace Spaces

Hugging Face Spaces provide a great alternative for running Langflow with no local installation required. Go to the [Langflow Space](https://huggingface.co/spaces/Langflow/Langflow?duplicate=true) or the [Langflow 1.0 Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Use a Chromium-based browser for the best experience. You'll be presented with the following screen:

<ZoomableImage
  alt="Docusaurus themed image"
  sources={{
    light: "img/duplicate-space.png",
    dark: "img/duplicate-space.png",
  }}
  style={{ width: "100%", margin: "20px auto" }}
/>

From here, name your Space, set its visibility (Public or Private), and click `Duplicate Space` to start the installation. When it finishes, you'll be redirected to the Space's main page, where you can start using Langflow right away.

Once Langflow is running, click **New Project** in the top-right corner of the screen. Langflow provides a range of example flows to help you get started. To quickly try one, open a starter example, set up your API keys, and click ⚡ **Run** in the bottom-right corner of the canvas. This opens Langflow's Interaction Panel with the chat console, text inputs, and outputs.

### 🖥️ Command Line Interface (CLI)

Langflow provides a command-line interface (CLI) for easy management and configuration. Run Langflow with:

```bash
langflow run [OPTIONS]
```

Find more information about the available options by running:

```bash
python -m langflow --help
```

## Learn more about Langflow 1.0

Learn more about the exciting changes in Langflow 1.0, and how to migrate your existing Langflow projects:

* [A new chapter for Langflow](/whats-new/a-new-chapter-langflow)
* [Migration guides](/whats-new/migrating-to-one-point-zero)

collapsed: false,
items: [
  "index",
"getting-started/install-langflow",
"getting-started/quickstart",
"getting-started/huggingface-spaces",
"getting-started/new-to-llms",
],
},
{
type: "category",
label: " Starter Projects",
collapsed: false,
items: [
"guides/basic-prompting",
"guides/blog-writer",
"guides/document-qa",
"guides/memory-chatbot",
"guides/rag-with-astradb",
],
},
{
type: "category",
label: " What's New",
"migration/global-variables",
// "migration/experimental-components",
// "migration/state-management",
// "guides/rag-with-astradb",
],
},
{
label: "Guidelines",
collapsed: false,
items: [
"getting-started/cli",
"guidelines/login",
"guidelines/api",
"guidelines/components",

New image: docs/static/img/quickstart.png (486 KiB)