docs: icon audit (#8763)

* replace-aria-label-with-aria-hidden

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update docs/docs/Concepts/concepts-components.md

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Mendon Kissling 2025-06-30 16:39:32 -04:00 committed by GitHub
commit 2de118910a
GPG key ID: B5690EEEBB952194
16 changed files with 56 additions and 58 deletions


@@ -15,7 +15,7 @@ They may perform some processing or type checking, like converting raw HTML data
The **URL** data component loads content from a list of URLs.
-In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click <Icon name="Plus" aria-label="Add"/>.
+In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click <Icon name="Plus" aria-hidden="true"/> **Add URL**.
Alternatively, connect a component that outputs the `Message` type, like the **Chat Input** component, to supply your URLs from a component.
@@ -197,7 +197,7 @@ This component executes SQL queries on a specified database.
This component fetches content from one or more URLs, processes the content, and returns it in various formats. It supports output in plain text or raw HTML.
-In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click <Icon name="Plus" aria-label="Add"/>.
+In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click <Icon name="Plus" aria-hidden="true"/> **Add URL**.
1. To use this component in a flow, connect the **DataFrame** output to a component that accepts the input.
For example, connect the **URL** component to a **Chat Output** component.


@@ -421,7 +421,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
1. In the Ollama component, in the **Ollama Base URL** field, enter the address for your locally running Ollama server.
This value is set as the `OLLAMA_HOST` environment variable in Ollama. The default base URL is `http://localhost:11434`.
-2. To refresh the server's list of models, click <Icon name="RefreshCw" aria-label="Refresh"/>.
+2. To refresh the server's list of models, click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh**.
3. In the **Ollama Model** field, select an embeddings model. This example uses `all-minilm:latest`.
4. Connect the **Ollama** embeddings component to a flow.
For example, this flow connects a local Ollama server running a `all-minilm:latest` embeddings model to a [Chroma DB](/components-vector-stores#chroma-db) vector store to generate embeddings for split text.
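The base URL behavior described above can be sketched in a few lines. The `/api/tags` endpoint is Ollama's model-listing route (what the **Refresh** button retrieves); everything else here is illustrative:

```python
import os

# Resolve the Ollama base URL the way the docs describe:
# use OLLAMA_HOST if set, otherwise fall back to the default.
base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

# Ollama's /api/tags endpoint lists the models available on the server.
tags_url = f"{base_url.rstrip('/')}/api/tags"
```

To query a running server you could pass `tags_url` to `urllib.request.urlopen`; that step is omitted so the sketch stays self-contained.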


@@ -48,8 +48,8 @@ To use all three columns from the **Batch Run** component, include them like thi
```text
record_number: {batch_index}, name: {text_input}, summary: {model_response}
```
-7. To run the flow, in the **Parser** component, click <Icon name="Play" aria-label="Play icon" />.
-8. To view your created DataFrame, in the **Parser** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+7. To run the flow, in the **Parser** component, click <Icon name="Play" aria-hidden="true"/> **Run component**.
+8. To view your created DataFrame, in the **Parser** component, click <Icon name="TextSearch" aria-hidden="true"/>.
9. Optionally, connect a **Chat Output** component, and open the **Playground** to see the output.
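The template above is plain curly-brace substitution; as a hedged sketch (the row values are made up), Python's `str.format_map` reproduces what the Parser does for each Batch Run row:

```python
# Template from the docs: one placeholder per Batch Run column.
template = "record_number: {batch_index}, name: {text_input}, summary: {model_response}"

# A hypothetical row of Batch Run output.
row = {"batch_index": 0, "text_input": "Alice", "model_response": "A short summary."}

line = template.format_map(row)
```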
<details>


@@ -198,7 +198,7 @@ Click **Outputs** to view the sent message:
```
:::tip
-Optionally, to view the outputs of each component in the flow, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+Optionally, to view the outputs of each component in the flow, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
:::
### Send chat messages with the API


@@ -233,7 +233,7 @@ This component generates text using Groq's language models.
2. In the **Groq API Key** field, paste your Groq API key.
The Groq model component automatically retrieves a list of the latest models.
-To refresh your list of models, click <Icon name="RefreshCw" aria-label="Refresh"/>.
+To refresh your list of models, click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh**.
3. In the **Model** field, select the model you want to use for your LLM.
This example uses [llama-3.1-8b-instant](https://console.groq.com/docs/model/llama-3.1-8b-instant), which Groq recommends for real-time conversational interfaces.
4. In the **Prompt** component, enter:
@@ -543,7 +543,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
1. In the Ollama component, in the **Base URL** field, enter the address for your locally running Ollama server.
This value is set as the `OLLAMA_HOST` environment variable in Ollama.
The default base URL is `http://localhost:11434`.
-2. To refresh the server's list of models, click <Icon name="RefreshCw" aria-label="Refresh"/>.
+2. To refresh the server's list of models, click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh**.
3. In the **Model Name** field, select a model. This example uses `llama3.2:latest`.
4. Connect the **Ollama** model component to a flow. For example, this flow connects a local Ollama server running a Llama 3.2 model as the custom model for an [Agent](/components-agents) component.


@@ -37,8 +37,8 @@ I want to explode the result column out into a Data object
:::tip
Avoid punctuation in the **Instructions** field, as it can cause errors.
:::
-5. To run the flow, in the **Smart function** component, click <Icon name="Play" aria-label="Play icon" />.
-6. To inspect the filtered data, in the **Smart function** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+5. To run the flow, in the **Smart function** component, click <Icon name="Play" aria-hidden="true"/> **Run component**.
+6. To inspect the filtered data, in the **Smart function** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
The result is a structured DataFrame.
```text
id | name | company | username | email | address | zip
@@ -140,7 +140,7 @@ curl -X POST "http://localhost:7860/api/v1/webhook/YOUR_FLOW_ID" \
```
3. In the **Data Operations** component, select the **Select Keys** operation to extract specific user information.
-To add additional keys, click <Icon name="Plus" aria-label="Add"/> **Add More**.
+To add additional keys, click <Icon name="Plus" aria-hidden="true"/> **Add More**.
![A webhook and data operations component](/img/component-data-operations-select-key.png)
4. Filter by `name`, `username`, and `email` to select the values from the request.
```json
@@ -340,8 +340,8 @@ For example, to present a table of employees in Markdown:
- **ID:** {id}
- **Email:** {email}
```
-7. To run the flow, in the **Parser** component, click <Icon name="Play" aria-label="Play icon" />.
-8. To view your parsed text, in the **Parser** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+7. To run the flow, in the **Parser** component, click <Icon name="Play" aria-hidden="true"/> **Run component**.
+8. To view your parsed text, in the **Parser** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
9. Optionally, connect a **Chat Output** component, and open the **Playground** to see the output.
For an additional example of using the **Parser** component to format a DataFrame from a **Structured Output** component, see the **Market Research** template flow.


@@ -77,7 +77,7 @@ The **Tool Parameters** configuration pane allows you to define parameters for [
These filters become available as parameters that the LLM can use when calling the tool, with a better understanding of each parameter provided by the **Description** field.
-1. To define a parameter for your query, in the **Tool Parameters** pane, click <Icon name="Plus" aria-label="Add"/>.
+1. To define a parameter for your query, in the **Tool Parameters** pane, click <Icon name="Plus" aria-hidden="true"/> **Add a new row**.
2. Complete the fields based on your data. For example, with this filter, the LLM can filter by unique `customer_id` values.
* Name: `customer_id`
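A tool parameter row like the one above can be pictured as a small schema record. This dict is a hypothetical illustration of the fields involved, not Langflow's internal representation:

```python
# Hypothetical representation of one Tool Parameters row.
tool_parameter = {
    "name": "customer_id",
    "description": "Unique customer ID the LLM can use to filter query results.",
    "type": "string",
    "required": True,
}
```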


@@ -120,12 +120,12 @@ You should convert the query into:
2. A question to use as the basis for a QA embedding engine.
Avoid common keywords associated with the user's subject matter.
```
-7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
```
1. Keywords: features, data, attributes, characteristics
2. Question: What characteristics can be identified in my data?
```
-8. To view the [DataFrame](/concepts-objects#dataframe-object) generated from the **OpenAI** component's response, in the **Structured Output** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+8. To view the [DataFrame](/concepts-objects#dataframe-object) generated from the **OpenAI** component's response, in the **Structured Output** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
The DataFrame is passed to a **Parser** component, which parses the contents of the **Keywords** column into a string.
This string of comma-separated words is passed to the **Lexical Terms** port of the **Astra DB** component.
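Collapsing the **Keywords** column into a comma-separated string, as the Parser step above describes, is a one-liner; the keyword list below is taken from the example output earlier in this section:

```python
# Keywords column values from the OpenAI component's example output.
keywords = ["features", "data", "attributes", "characteristics"]

# Comma-separated string passed to the Astra DB component's Lexical Terms port.
lexical_terms = ", ".join(keywords)
```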
@@ -264,11 +264,11 @@ This example splits text from a [URL](/components-data#url) component, and compu
2. In the **Chroma DB** component, in the **Collection** field, enter a name for your embeddings collection.
3. Optionally, to persist the Chroma database, in the **Persist** field, enter a directory to store the `chroma.sqlite3` file.
This example uses `./chroma-db` to create a directory relative to where Langflow is running.
-4. To load data and embeddings into your Chroma database, in the **Chroma DB** component, click <Icon name="Play" aria-label="Play icon" />.
+4. To load data and embeddings into your Chroma database, in the **Chroma DB** component, click <Icon name="Play" aria-hidden="true"/> **Run component**.
:::tip
When loading duplicate documents, enable the **Allow Duplicates** option in Chroma DB if you want to store multiple copies of the same content, or disable it to automatically deduplicate your data.
:::
-5. To view the split data, in the **Split Text** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+5. To view the split data, in the **Split Text** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
6. To query your loaded data, open the **Playground** and query your database.
Your input is converted to vector data and compared to the stored vectors in a vector similarity search.
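The vector similarity search in step 6 compares the query vector against the stored vectors, typically by cosine similarity. A self-contained sketch with toy two-dimensional vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy stored embeddings; a real vector store holds one vector per text chunk.
store = {
    "chunk_a": [1.0, 0.0],
    "chunk_b": [0.6, 0.8],
}

query = [0.9, 0.1]  # embedding of the user's Playground input
best = max(store, key=lambda k: cosine_similarity(query, store[k]))
```

Here `best` is the chunk whose stored vector points most nearly in the same direction as the query vector.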


@@ -40,9 +40,7 @@ This component has two modes, depending on the type of server you want to access
For more information, see [global variables](/configuration-global-variables).
:::
-4. Click <Icon name="RefreshCw" aria-label="Refresh"/> to test the command and retrieve the list of tools provided by the MCP server.
-5. In the **Tool** field, select a tool that you want this component to use, or leave the field blank to allow access to all tools provided by the MCP server.
+1. Click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh** to test the command and retrieve the list of tools provided by the MCP server.
If you select a specific tool, you might need to configure additional tool-specific fields. For information about tool-specific fields, see your MCP server's documentation.


@@ -29,19 +29,19 @@ You can use the controls in the **Component menu** to manage and configure the c
- **Tool Mode**: Enable tool mode when combining a component with an agent component.
- **Freeze**: After a component runs, lock its previous output state to prevent it from re-running.
-Click <Icon name="Ellipsis" aria-label="Horizontal ellipsis" /> **All** to see additional options for a component.
+Click <Icon name="Ellipsis" aria-hidden="true"/> **All** to see additional options for a component.
## Component logs
-To view a component's output and logs, click the <Icon name="TextSearch" aria-label="Inspect icon" /> icon.
+To view a component's output and logs, click the <Icon name="TextSearch" aria-hidden="true"/> **Inspect output** icon.
## Run one component
-To run a single component, click <Icon name="Play" aria-label="Play button" /> **Play**.
+To run a single component, click <Icon name="Play" aria-hidden="true"/> **Play**.
Running a single component with the **Play** button is different from running the entire flow. In a single component run, the `build_vertex` function is called, which builds and runs only the single component with direct inputs provided through the UI (the `inputs_dict` parameter). The `VertexBuildResult` data is passed to the `build_and_run` method, which calls the component's `build` method and runs it. Unlike running the full flow, running a single component does not automatically execute its upstream dependencies.
-A <Icon name="Check" aria-label="Checkmark" /> **Checkmark** indicates that the component ran successfully.
+A <Icon name="Check" aria-hidden="true"/> **Checkmark** indicates that the component ran successfully.
## Component ports
@@ -162,9 +162,9 @@ Enabling **Freeze** freezes all components upstream of the selected component.
## Additional component options
-Click <Icon name="Ellipsis" aria-label="Horizontal ellipsis" /> **All** to see additional options for a component.
+Click <Icon name="Ellipsis" aria-hidden="true"/> **All** to see additional options for a component.
-To modify a component's name or description, click the <Icon name="PencilLine" aria-label="Pencil line"/> icon. Component descriptions accept Markdown syntax.
+To modify a component's name or description, click <Icon name="PencilLine" aria-hidden="true"/> **Edit name/description**. Component descriptions accept Markdown syntax.
### Component shortcuts
@@ -249,7 +249,7 @@ Components are listed in the sidebar by component type.
**Legacy** components are available for use but are no longer supported. By default, legacy components are hidden in the sidebar.
The sidebar includes a component **Search** bar with options for showing or hiding **Beta** and **Legacy** components.
-To change the sidebar's behavior, click the <Icon name="SlidersHorizontal" aria-hidden="true" />, and then show or hide **Legacy** or **Beta** components.
+To change the sidebar's behavior, click <Icon name="SlidersHorizontal" aria-hidden="true" /> **Component settings**, and then show or hide **Legacy** or **Beta** components.


@@ -23,7 +23,7 @@ The `build` function allows components to execute logic at runtime. For example,
When you send a message from the **Playground** interface, the interactions are stored in the **Message Logs** by `session_id`.
A single flow can have multiple chats, and different flows can share the same chat. Each chat will have a different `session_id`.
-To view messages by `session_id` within the Playground, click the <Icon name="Ellipsis" aria-label="Horizontal ellipsis" /> menu of any chat session, and then select **Message Logs**.
+To view messages by `session_id` within the Playground, click the <Icon name="Ellipsis" aria-hidden="true"/> **Options** menu of any chat session, and then select **Message Logs**.
![](/img/messages-logs.png)
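Grouping stored messages by `session_id`, as the Message Logs view does, can be sketched like this; the session IDs and messages are made up:

```python
from collections import defaultdict

# Hypothetical stored messages from two chat sessions.
messages = [
    {"session_id": "session-1", "text": "Hi"},
    {"session_id": "session-2", "text": "Hello"},
    {"session_id": "session-1", "text": "What did I just say?"},
]

# One log per session_id, preserving message order within each session.
logs = defaultdict(list)
for message in messages:
    logs[message["session_id"]].append(message["text"])
```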


@@ -22,7 +22,7 @@ Chat with an agent in the **Playground**, and get more recent results by asking
1. Create a [Simple agent starter project](/simple-agent).
2. Add your **OpenAI API key** credentials to the **Agent** component.
3. To start a chat session, click **Playground**.
-4. To enable voice mode, click the <Icon name="Mic" aria-label="Microphone"/> icon.
+4. To enable voice mode, click the <Icon name="Mic" aria-hidden="true"/> **Microphone** icon.
The **Voice mode** pane opens.
5. In the **OpenAI API Key** field, add your **OpenAI API key** credentials.
This key is saved as a [global variable](/configuration-global-variables) in Langflow and is accessible from any component or flow.


@@ -46,7 +46,7 @@ Replace **FLOW_ID** with your flow's ID, which can be found on the [Publish pane
}
```
-1. To view the data received from your request, in the **Parser** component, click <Icon name="TextSearch" aria-label="Inspect icon" />.
+1. To view the data received from your request, in the **Parser** component, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
You should receive a string of parsed text, like `ID: 12345 - Name: alex - Email: alex@email.com`.
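The parsed string shown above follows a fixed template; as a sketch, the same output can be produced from the request data with `str.format` (field values are taken from the docs example):

```python
# Fields received from the webhook request (values from the docs example).
record = {"id": 12345, "name": "alex", "email": "alex@email.com"}

# Parser template that yields the string shown above.
parsed = "ID: {id} - Name: {name} - Email: {email}".format(**record)
```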


@@ -19,18 +19,18 @@ This article demonstrates how to use Langflow's prompt tools to issue basic prom
## Create the basic prompting flow
1. From the Langflow dashboard, click **New Flow**.
2. Select **Basic Prompting**.
3. The **Basic Prompting** flow is created.
![](/img/starter-flow-basic-prompting.png)
This flow allows you to chat with the **OpenAI model** component.
The model will respond according to the prompt constructed in the **Prompt** component.
4. To examine the **Template**, in the **Prompt** component, click the **Template** field.
@@ -38,19 +38,19 @@ The model will respond according to the prompt constructed in the **Prompt** c
Answer the user as if you were a GenAI expert, enthusiastic about helping them get started building something fresh.
```
-5. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-label="Globe icon" /> **Globe** button, and then click **Add New Variable**.
+5. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-hidden="true"/> **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the basic prompting flow
1. Click the **Playground** button.
2. Type a message and press Enter. The bot should respond in a markedly piratical manner!
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** component, click the **Template** field. The **Edit Prompt** window opens.
2. Change the existing prompt to a different character, perhaps `Answer the user as if you were Hermione Granger.`
3. Run the workflow again and notice how the prompt changes the model's response.


@@ -57,7 +57,7 @@ What is the second subject I asked you about?
The chatbot remembers your name and previous questions.
-3. To view the **Message Logs** pane, click <Icon name="Ellipsis" aria-label="Horizontal ellipsis" />, and then click **Message Logs**.
+3. To view the **Message Logs** pane, click <Icon name="Ellipsis" aria-hidden="true"/> **Options**, and then click **Message Logs**.
The **Message Logs** pane displays all previous messages, with each conversation sorted by `session_id`.
![](/img/messages-logs.png)


@@ -8,29 +8,29 @@ import Icon from "@site/src/components/icon";
Retrieval Augmented Generation, or RAG, is a pattern for training LLMs on your data and querying it.
RAG is backed by a **vector store**, a vector database which stores embeddings of the ingested data.
This enables **vector search**, a more powerful and context-aware search.
We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account) as the vector database for this starter flow, but you can follow along with any of Langflow's vector database options.
## Prerequisites
- [A running Langflow instance](/get-started-installation)
- [An OpenAI API key](https://platform.openai.com/)
- [An Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with the following:
- An Astra DB application token scoped to read and write to the database
- A collection created in [Astra](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection) or a new collection created in the **Astra DB** component
## Open Langflow and start a new project
1. From the Langflow dashboard, click **New Flow**.
2. Select **Vector Store RAG**.
3. The **Vector Store RAG** flow is created.
## Build the vector RAG flow
@@ -38,11 +38,11 @@ The vector store RAG flow is built of two separate flows for ingestion and query
![](/img/starter-flow-vector-rag.png)
The **Load Data Flow** (bottom of the screen) creates a searchable index to be queried for contextual similarity.
This flow populates the vector store with data from a local file.
It ingests data from a local file, splits it into chunks, indexes it in Astra DB, and computes embeddings for the chunks using the OpenAI embeddings model.
The **Retriever Flow** (top of the screen) embeds the user's queries into vectors, which are compared to the vector store data from the **Load Data Flow** for contextual similarity.
- **Chat Input** receives user input from the **Playground**.
- **OpenAI Embeddings** converts the user query into vector form.
@@ -53,10 +53,10 @@ The **Retriever Flow** (top of the screen) embeds the user's queries into vecto
- **Chat Output** returns the response to the **Playground**.
1. Configure the **OpenAI** model component.
-1. To create a global variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-label="Globe" /> **Globe** button, and then click **Add New Variable**.
+1. To create a global variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-hidden="true" /> **Globe** button, and then click **Add New Variable**.
2. In the **Variable Name** field, enter `openai_api_key`.
3. In the **Value** field, paste your OpenAI API Key (`sk-...`).
4. Click **Save Variable**.
2. Configure the **Astra DB** component.
1. In the **Astra DB Application Token** field, add your **Astra DB** application token.
The component connects to your database and populates the menus with existing databases and collections.
@@ -85,6 +85,6 @@ If you used Langflow's **Global Variables** feature, the RAG application flow co
## Run the Vector Store RAG flow
-1. Click the **Playground** button. Here you can chat with the AI that uses context from the database you created.
+1. Click **Playground**. Here you can chat with the AI that uses context from the database you created.
2. Type a message and press Enter. (Try something like "What topics do you know about?")
3. The bot will respond with a summary of the data you've embedded.