From 2de118910aa05eaa851cc84df828033eab796d17 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Mon, 30 Jun 2025 16:39:32 -0400
Subject: [PATCH] docs: icon audit (#8763)
* replace-aria-label-with-aria-hidden
* Apply suggestions from code review
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update docs/docs/Concepts/concepts-components.md
---------
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
---
docs/docs/Components/components-data.md | 4 +--
.../Components/components-embedding-models.md | 2 +-
docs/docs/Components/components-helpers.md | 4 +--
docs/docs/Components/components-io.md | 2 +-
docs/docs/Components/components-models.md | 4 +--
docs/docs/Components/components-processing.md | 10 +++----
docs/docs/Components/components-tools.md | 2 +-
.../Components/components-vector-stores.md | 8 +++---
docs/docs/Components/mcp-client.md | 4 +--
docs/docs/Concepts/concepts-components.md | 14 +++++-----
docs/docs/Concepts/concepts-playground.md | 2 +-
docs/docs/Concepts/concepts-voice-mode.md | 2 +-
docs/docs/Develop/webhook.md | 2 +-
docs/docs/Templates/basic-prompting.md | 24 ++++++++--------
docs/docs/Templates/memory-chatbot.md | 2 +-
docs/docs/Templates/vector-store-rag.md | 28 +++++++++----------
16 files changed, 56 insertions(+), 58 deletions(-)
diff --git a/docs/docs/Components/components-data.md b/docs/docs/Components/components-data.md
index 5b2e61261..c7a5c57bd 100644
--- a/docs/docs/Components/components-data.md
+++ b/docs/docs/Components/components-data.md
@@ -15,7 +15,7 @@ They may perform some processing or type checking, like converting raw HTML data
The **URL** data component loads content from a list of URLs.
-In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click .
+In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click **Add URL**.
Alternatively, connect a component that outputs the `Message` type, like the **Chat Input** component, to supply the URLs.
@@ -197,7 +197,7 @@ This component executes SQL queries on a specified database.
This component fetches content from one or more URLs, processes the content, and returns it in various formats. It supports output in plain text or raw HTML.
-In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click .
+In the component's **URLs** field, enter the URL you want to load. To add multiple URL fields, click **Add URL**.
1. To use this component in a flow, connect the **DataFrame** output to a component that accepts a DataFrame input.
For example, connect the **URL** component to a **Chat Output** component.
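For illustration only, you can preview the raw content the **URL** component ingests by fetching a page yourself. This sketch uses a placeholder URL and isn't part of the flow:
```bash
# Fetch a page's raw HTML, the same content the URL component
# retrieves before optionally converting it to plain text.
curl -s https://example.com | head -n 20
```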
diff --git a/docs/docs/Components/components-embedding-models.md b/docs/docs/Components/components-embedding-models.md
index ee4feeecd..d8df21ee5 100644
--- a/docs/docs/Components/components-embedding-models.md
+++ b/docs/docs/Components/components-embedding-models.md
@@ -421,7 +421,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
1. In the Ollama component, in the **Ollama Base URL** field, enter the address for your locally running Ollama server.
This value is set as the `OLLAMA_HOST` environment variable in Ollama. The default base URL is `http://localhost:11434`.
-2. To refresh the server's list of models, click .
+2. To refresh the server's list of models, click **Refresh**.
3. In the **Ollama Model** field, select an embeddings model. This example uses `all-minilm:latest`.
4. Connect the **Ollama** embeddings component to a flow.
For example, this flow connects a local Ollama server running an `all-minilm:latest` embeddings model to a [Chroma DB](/components-vector-stores#chroma-db) vector store to generate embeddings for split text.
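To confirm the server can produce embeddings outside of Langflow, you can call Ollama's REST API directly. A minimal check, assuming Ollama's standard `/api/embeddings` endpoint and the default base URL:
```bash
# Generate an embedding with the same model the component uses.
# The response is a JSON object with an "embedding" array of floats.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "all-minilm:latest", "prompt": "Hello, world"}'
```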
diff --git a/docs/docs/Components/components-helpers.md b/docs/docs/Components/components-helpers.md
index 8cd3d07f4..09e7dd77f 100644
--- a/docs/docs/Components/components-helpers.md
+++ b/docs/docs/Components/components-helpers.md
@@ -48,8 +48,8 @@ To use all three columns from the **Batch Run** component, include them like thi
```text
record_number: {batch_index}, name: {text_input}, summary: {model_response}
```
-7. To run the flow, in the **Parser** component, click .
-8. To view your created DataFrame, in the **Parser** component, click .
+7. To run the flow, in the **Parser** component, click **Run component**.
+8. To view your created DataFrame, in the **Parser** component, click **Inspect output**.
9. Optionally, connect a **Chat Output** component, and open the **Playground** to see the output.
diff --git a/docs/docs/Components/components-io.md b/docs/docs/Components/components-io.md
index 370669832..f3fed34d3 100644
--- a/docs/docs/Components/components-io.md
+++ b/docs/docs/Components/components-io.md
@@ -198,7 +198,7 @@ Click **Outputs** to view the sent message:
```
:::tip
-Optionally, to view the outputs of each component in the flow, click .
+Optionally, to view the outputs of each component in the flow, click **Inspect output**.
:::
### Send chat messages with the API
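As a minimal sketch of this pattern, the request below assumes a local Langflow instance and its standard `/api/v1/run` endpoint; `FLOW_ID` is a placeholder for your flow's ID:
```bash
# Send a chat message to a flow and receive the Chat Output response.
curl -s -X POST "http://localhost:7860/api/v1/run/FLOW_ID" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -d '{"input_value": "Hello!", "input_type": "chat", "output_type": "chat"}'
```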
diff --git a/docs/docs/Components/components-models.md b/docs/docs/Components/components-models.md
index 7f2b794c3..ae2640ca7 100644
--- a/docs/docs/Components/components-models.md
+++ b/docs/docs/Components/components-models.md
@@ -233,7 +233,7 @@ This component generates text using Groq's language models.
2. In the **Groq API Key** field, paste your Groq API key.
The Groq model component automatically retrieves a list of the latest models.
-To refresh your list of models, click .
+To refresh your list of models, click **Refresh**.
3. In the **Model** field, select the model you want to use for your LLM.
This example uses [llama-3.1-8b-instant](https://console.groq.com/docs/model/llama-3.1-8b-instant), which Groq recommends for real-time conversational interfaces.
4. In the **Prompt** component, enter:
@@ -543,7 +543,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
1. In the Ollama component, in the **Base URL** field, enter the address for your locally running Ollama server.
This value is set as the `OLLAMA_HOST` environment variable in Ollama.
The default base URL is `http://localhost:11434`.
-2. To refresh the server's list of models, click .
+2. To refresh the server's list of models, click **Refresh**.
3. In the **Model Name** field, select a model. This example uses `llama3.2:latest`.
4. Connect the **Ollama** model component to a flow. For example, this flow connects a local Ollama server running a Llama 3.2 model as the custom model for an [Agent](/components-agents) component.
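Conceptually, the **Refresh** control in these steps re-queries the server's model inventory. You can inspect the same list yourself, assuming Ollama's standard `/api/tags` endpoint:
```bash
# List the models installed on the local Ollama server; the
# component's model fields are populated from this inventory.
curl -s http://localhost:11434/api/tags
```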
diff --git a/docs/docs/Components/components-processing.md b/docs/docs/Components/components-processing.md
index a3ef0c8aa..5dfc82ade 100644
--- a/docs/docs/Components/components-processing.md
+++ b/docs/docs/Components/components-processing.md
@@ -37,8 +37,8 @@ I want to explode the result column out into a Data object
:::tip
Avoid punctuation in the **Instructions** field, as it can cause errors.
:::
-5. To run the flow, in the **Smart function** component, click .
-6. To inspect the filtered data, in the **Smart function** component, click .
+5. To run the flow, in the **Smart function** component, click **Run component**.
+6. To inspect the filtered data, in the **Smart function** component, click **Inspect output**.
The result is a structured DataFrame.
```text
id | name | company | username | email | address | zip
@@ -140,7 +140,7 @@ curl -X POST "http://localhost:7860/api/v1/webhook/YOUR_FLOW_ID" \
```
3. In the **Data Operations** component, select the **Select Keys** operation to extract specific user information.
-To add additional keys, click **Add More**.
+To add additional keys, click **Add More**.

4. Filter by `name`, `username`, and `email` to select the values from the request.
```json
@@ -340,8 +340,8 @@ For example, to present a table of employees in Markdown:
- **ID:** {id}
- **Email:** {email}
```
-7. To run the flow, in the **Parser** component, click .
-8. To view your parsed text, in the **Parser** component, click .
+7. To run the flow, in the **Parser** component, click **Run component**.
+8. To view your parsed text, in the **Parser** component, click **Inspect output**.
9. Optionally, connect a **Chat Output** component, and open the **Playground** to see the output.
For an additional example of using the **Parser** component to format a DataFrame from a **Structured Output** component, see the **Market Research** template flow.
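Returning to the webhook example earlier in this section: to exercise the **Select Keys** operation end to end, you can post a sample user object to the flow's webhook endpoint. This sketch assumes the endpoint shown above with a placeholder flow ID, and the payload fields mirror the DataFrame columns listed earlier:
```bash
# Post a sample user object; the Data Operations component then
# selects only the name, username, and email keys from this payload.
curl -X POST "http://localhost:7860/api/v1/webhook/YOUR_FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "name": "alex", "company": "Acme", "username": "alex123", "email": "alex@email.com", "address": "123 Main St", "zip": "10001"}'
```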
diff --git a/docs/docs/Components/components-tools.md b/docs/docs/Components/components-tools.md
index bdd4f31fe..c9acf43d4 100644
--- a/docs/docs/Components/components-tools.md
+++ b/docs/docs/Components/components-tools.md
@@ -77,7 +77,7 @@ The **Tool Parameters** configuration pane allows you to define parameters for [
These filters become available as parameters that the LLM can use when calling the tool. The **Description** field gives the LLM a clearer understanding of each parameter.
-1. To define a parameter for your query, in the **Tool Parameters** pane, click .
+1. To define a parameter for your query, in the **Tool Parameters** pane, click **Add a new row**.
2. Complete the fields based on your data. For example, with this filter, the LLM can filter by unique `customer_id` values.
* Name: `customer_id`
diff --git a/docs/docs/Components/components-vector-stores.md b/docs/docs/Components/components-vector-stores.md
index fc9d32e89..77d2c6e15 100644
--- a/docs/docs/Components/components-vector-stores.md
+++ b/docs/docs/Components/components-vector-stores.md
@@ -120,12 +120,12 @@ You should convert the query into:
2. A question to use as the basis for a QA embedding engine.
Avoid common keywords associated with the user's subject matter.
```
-7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click .
+7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click **Inspect output**.
```
1. Keywords: features, data, attributes, characteristics
2. Question: What characteristics can be identified in my data?
```
-8. To view the [DataFrame](/concepts-objects#dataframe-object) generated from the **OpenAI** component's response, in the **Structured Output** component, click .
+8. To view the [DataFrame](/concepts-objects#dataframe-object) generated from the **OpenAI** component's response, in the **Structured Output** component, click **Inspect output**.
The DataFrame is passed to a **Parser** component, which parses the contents of the **Keywords** column into a string.
This string of comma-separated words is passed to the **Lexical Terms** port of the **Astra DB** component.
@@ -264,11 +264,11 @@ This example splits text from a [URL](/components-data#url) component, and compu
2. In the **Chroma DB** component, in the **Collection** field, enter a name for your embeddings collection.
3. Optionally, to persist the Chroma database, in the **Persist** field, enter a directory to store the `chroma.sqlite3` file.
This example uses `./chroma-db` to create a directory relative to where Langflow is running.
-4. To load data and embeddings into your Chroma database, in the **Chroma DB** component, click .
+4. To load data and embeddings into your Chroma database, in the **Chroma DB** component, click **Run component**.
:::tip
When loading duplicate documents, enable the **Allow Duplicates** option in Chroma DB if you want to store multiple copies of the same content, or disable it to automatically deduplicate your data.
:::
-5. To view the split data, in the **Split Text** component, click .
+5. To view the split data, in the **Split Text** component, click **Inspect output**.
6. To query your loaded data, open the **Playground** and query your database.
Your input is converted to vector data and compared to the stored vectors in a vector similarity search.
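If you set a persist directory in step 3, you can verify that step 4 wrote the database to disk. Assuming the `./chroma-db` path from this example:
```bash
# The chroma.sqlite3 file appears after the component runs
# with a persist directory configured.
ls -l ./chroma-db/chroma.sqlite3
```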
diff --git a/docs/docs/Components/mcp-client.md b/docs/docs/Components/mcp-client.md
index 68bc04667..6acda5b3f 100644
--- a/docs/docs/Components/mcp-client.md
+++ b/docs/docs/Components/mcp-client.md
@@ -40,9 +40,7 @@ This component has two modes, depending on the type of server you want to access
For more information, see [global variables](/configuration-global-variables).
:::
-4. Click to test the command and retrieve the list of tools provided by the MCP server.
-
-5. In the **Tool** field, select a tool that you want this component to use, or leave the field blank to allow access to all tools provided by the MCP server.
+1. Click **Refresh** to test the command and retrieve the list of tools provided by the MCP server.
If you select a specific tool, you might need to configure additional tool-specific fields. For information about tool-specific fields, see your MCP server's documentation.
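For context, the value you test here is an ordinary MCP stdio server launch command. A typical example, assuming the public `mcp-server-fetch` reference server is what you want to run:
```bash
# Example command for the component's command field; Langflow
# spawns this process and communicates with it over stdio.
uvx mcp-server-fetch
```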
diff --git a/docs/docs/Concepts/concepts-components.md b/docs/docs/Concepts/concepts-components.md
index 66d0b44b4..3ce682f3f 100644
--- a/docs/docs/Concepts/concepts-components.md
+++ b/docs/docs/Concepts/concepts-components.md
@@ -29,19 +29,19 @@ You can use the controls in the **Component menu** to manage and configure the c
- **Tool Mode**: Enable tool mode when combining a component with an agent component.
- **Freeze**: After a component runs, lock its previous output state to prevent it from re-running.
-Click **All** to see additional options for a component.
+Click **All** to see additional options for a component.
## Component logs
-To view a component's output and logs, click the icon.
+To view a component's output and logs, click the **Inspect output** icon.
## Run one component
-To run a single component, click **Play**.
+To run a single component, click **Play**.
Running a single component with the **Play** button is different from running the entire flow. In a single component run, the `build_vertex` function is called, which builds and runs only the single component with direct inputs provided through the UI (the `inputs_dict` parameter). The `VertexBuildResult` data is passed to the `build_and_run` method, which calls the component's `build` method and runs it. Unlike running the full flow, running a single component does not automatically execute its upstream dependencies.
-A **Checkmark** indicates that the component ran successfully.
+A **Checkmark** indicates that the component ran successfully.
## Component ports
@@ -162,9 +162,9 @@ Enabling **Freeze** freezes all components upstream of the selected component.
## Additional component options
-Click **All** to see additional options for a component.
+Click **All** to see additional options for a component.
-To modify a component's name or description, click the icon. Component descriptions accept Markdown syntax.
+To modify a component's name or description, click **Edit name/description**. Component descriptions accept Markdown syntax.
### Component shortcuts
@@ -249,7 +249,7 @@ Components are listed in the sidebar by component type.
**Legacy** components are available for use but are no longer supported. By default, legacy components are hidden in the sidebar.
The sidebar includes a component **Search** bar with options for showing or hiding **Beta** and **Legacy** components.
-To change the sidebar's behavior, click the , and then show or hide **Legacy** or **Beta** components.
+To change the sidebar's behavior, click **Component settings**, and then show or hide **Legacy** or **Beta** components.
diff --git a/docs/docs/Concepts/concepts-playground.md b/docs/docs/Concepts/concepts-playground.md
index c06e075d8..3c60e86c5 100644
--- a/docs/docs/Concepts/concepts-playground.md
+++ b/docs/docs/Concepts/concepts-playground.md
@@ -23,7 +23,7 @@ The `build` function allows components to execute logic at runtime. For example,
When you send a message from the **Playground** interface, the interactions are stored in the **Message Logs** by `session_id`.
A single flow can have multiple chats, and different flows can share the same chat. Each chat will have a different `session_id`.
-To view messages by `session_id` within the Playground, click the menu of any chat session, and then select **Message Logs**.
+To view messages by `session_id` within the Playground, click the **Options** menu of any chat session, and then select **Message Logs**.

diff --git a/docs/docs/Concepts/concepts-voice-mode.md b/docs/docs/Concepts/concepts-voice-mode.md
index 387120e85..3538edca0 100644
--- a/docs/docs/Concepts/concepts-voice-mode.md
+++ b/docs/docs/Concepts/concepts-voice-mode.md
@@ -22,7 +22,7 @@ Chat with an agent in the **Playground**, and get more recent results by asking
1. Create a [Simple agent starter project](/simple-agent).
2. Add your **OpenAI API key** credentials to the **Agent** component.
3. To start a chat session, click **Playground**.
-4. To enable voice mode, click the icon.
+4. To enable voice mode, click the **Microphone** icon.
The **Voice mode** pane opens.
5. In the **OpenAI API Key** field, add your **OpenAI API key** credentials.
This key is saved as a [global variable](/configuration-global-variables) in Langflow and is accessible from any component or flow.
diff --git a/docs/docs/Develop/webhook.md b/docs/docs/Develop/webhook.md
index 7e0264ebd..82c55d40c 100644
--- a/docs/docs/Develop/webhook.md
+++ b/docs/docs/Develop/webhook.md
@@ -46,7 +46,7 @@ Replace **FLOW_ID** with your flow's ID, which can be found on the [Publish pane
}
```
-1. To view the data received from your request, in the **Parser** component, click .
+1. To view the data received from your request, in the **Parser** component, click **Inspect output**.
You should receive a string of parsed text, like `ID: 12345 - Name: alex - Email: alex@email.com`.
diff --git a/docs/docs/Templates/basic-prompting.md b/docs/docs/Templates/basic-prompting.md
index ff2d6cde2..c818045ab 100644
--- a/docs/docs/Templates/basic-prompting.md
+++ b/docs/docs/Templates/basic-prompting.md
@@ -19,18 +19,18 @@ This article demonstrates how to use Langflow's prompt tools to issue basic prom
## Create the basic prompting flow
-1. From the Langflow dashboard, click **New Flow**.
+1. From the Langflow dashboard, click **New Flow**.
-2. Select **Basic Prompting**.
+2. Select **Basic Prompting**.
-3. The **Basic Prompting** flow is created.
+3. The **Basic Prompting** flow is created.

-This flow allows you to chat with the **OpenAI model** component.
-The model will respond according to the prompt constructed in the **Prompt** component.
+This flow allows you to chat with the **OpenAI model** component.
+The model will respond according to the prompt constructed in the **Prompt** component.
4. To examine the **Template**, in the **Prompt** component, click the **Template** field.
@@ -38,19 +38,19 @@ The model will respond according to the prompt constructed in the **Prompt** c
Answer the user as if you were a GenAI expert, enthusiastic about helping them get started building something fresh.
```
-5. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
+5. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
## Run the basic prompting flow
-1. Click the **Playground** button.
+1. Click the **Playground** button.
2. Type a message and press Enter. The bot should respond as an enthusiastic GenAI expert, following the prompt in the **Template** field.
## Modify the prompt for a different result
-1. To modify your prompt results, in the **Prompt** component, click the **Template** field. The **Edit Prompt** window opens.
-2. Change the existing prompt to a different character, perhaps `Answer the user as if you were Hermione Granger.`
+1. To modify your prompt results, in the **Prompt** component, click the **Template** field. The **Edit Prompt** window opens.
+2. Change the existing prompt to a different character, perhaps `Answer the user as if you were Hermione Granger.`
3. Run the workflow again and notice how the prompt changes the model's response.
diff --git a/docs/docs/Templates/memory-chatbot.md b/docs/docs/Templates/memory-chatbot.md
index e6f0fc1a6..9ad51e808 100644
--- a/docs/docs/Templates/memory-chatbot.md
+++ b/docs/docs/Templates/memory-chatbot.md
@@ -57,7 +57,7 @@ What is the second subject I asked you about?
The chatbot remembers your name and previous questions.
-3. To view the **Message Logs** pane, click , and then click **Message Logs**.
+3. To view the **Message Logs** pane, click **Options**, and then click **Message Logs**.
The **Message Logs** pane displays all previous messages, with each conversation sorted by `session_id`.

diff --git a/docs/docs/Templates/vector-store-rag.md b/docs/docs/Templates/vector-store-rag.md
index 9f5b9692f..fde1ca607 100644
--- a/docs/docs/Templates/vector-store-rag.md
+++ b/docs/docs/Templates/vector-store-rag.md
@@ -8,29 +8,29 @@ import Icon from "@site/src/components/icon";
Retrieval Augmented Generation, or RAG, is a pattern for grounding an LLM's responses in your own data by retrieving relevant context at query time.
-RAG is backed by a **vector store**, a vector database which stores embeddings of the ingested data.
+RAG is backed by a **vector store**, a vector database which stores embeddings of the ingested data.
-This enables **vector search**, a more powerful and context-aware search.
+This enables **vector search**, a more powerful and context-aware search.
-We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account) as the vector database for this starter flow, but you can follow along with any of Langflow's vector database options.
+We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account) as the vector database for this starter flow, but you can follow along with any of Langflow's vector database options.
## Prerequisites
- [A running Langflow instance](/get-started-installation)
- [An OpenAI API key](https://platform.openai.com/)
-- [An Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with the following:
+- [An Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with the following:
- An Astra DB application token scoped to read and write to the database
- A collection created in [Astra](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection) or a new collection created in the **Astra DB** component
## Open Langflow and start a new project
-1. From the Langflow dashboard, click **New Flow**.
-2. Select **Vector Store RAG**.
-3. The **Vector Store RAG** flow is created.
+1. From the Langflow dashboard, click **New Flow**.
+2. Select **Vector Store RAG**.
+3. The **Vector Store RAG** flow is created.
## Build the vector RAG flow
@@ -38,11 +38,11 @@ The vector store RAG flow is built of two separate flows for ingestion and query

-The **Load Data Flow** (bottom of the screen) creates a searchable index to be queried for contextual similarity.
+The **Load Data Flow** (bottom of the screen) creates a searchable index to be queried for contextual similarity.
This flow populates the vector store with data from a local file.
It ingests data from a local file, splits it into chunks, indexes it in Astra DB, and computes embeddings for the chunks using the OpenAI embeddings model.
-The **Retriever Flow** (top of the screen) embeds the user's queries into vectors, which are compared to the vector store data from the **Load Data Flow** for contextual similarity.
+The **Retriever Flow** (top of the screen) embeds the user's queries into vectors, which are compared to the vector store data from the **Load Data Flow** for contextual similarity.
- **Chat Input** receives user input from the **Playground**.
- **OpenAI Embeddings** converts the user query into vector form.
@@ -53,10 +53,10 @@ The **Retriever Flow** (top of the screen) embeds the user's queries into vecto
- **Chat Output** returns the response to the **Playground**.
1. Configure the **OpenAI** model component.
- 1. To create a global variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 2. In the **Variable Name** field, enter `openai_api_key`.
- 3. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 4. Click **Save Variable**.
+ 1. To create a global variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
+ 2. In the **Variable Name** field, enter `openai_api_key`.
+ 3. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 4. Click **Save Variable**.
2. Configure the **Astra DB** component.
1. In the **Astra DB Application Token** field, add your **Astra DB** application token.
The component connects to your database and populates the menus with existing databases and collections.
@@ -85,6 +85,6 @@ If you used Langflow's **Global Variables** feature, the RAG application flow co
## Run the Vector Store RAG flow
-1. Click the **Playground** button. Here you can chat with the AI that uses context from the database you created.
+1. Click **Playground**. Here you can chat with the AI that uses context from the database you created.
2. Type a message and press Enter. (Try something like "What topics do you know about?")
3. The bot will respond with a summary of the data you've embedded.