[Docs] - Cleanup Components Folder (#1852)
* inputs
* agents
* chains
* custom-component
* align-admonitions-in-custom
* data-and-embeddings
* experimental
* helpers
* memories
* model_specs
* outputs
* prompts
* retrievers
* textsplitter
* tools
* utilities
* vector-stores
This commit is contained in:
parent
42714d35f1
commit
ba59f077a2
17 changed files with 862 additions and 1398 deletions
@@ -8,84 +8,83 @@ import Admonition from '@theme/Admonition';
</p>
</Admonition>

Agents are components that use reasoning to make decisions and take actions, designed to autonomously perform tasks or provide services with some degree of agency. LLM chains can only perform hardcoded sequences of actions, while agents use LLMs to reason through which actions to take, and in which order.

---

### AgentInitializer

The `AgentInitializer` constructs a zero-shot agent from a language model (LLM) and additional tools.

**Parameters**:

- **LLM:** The language model used by the `AgentInitializer`.
- **Memory:** Enables memory functionality, allowing the agent to recall and use information from previous interactions.
- **Tools:** The tools available to the agent.
- **Agent:** Specifies the type of agent to instantiate. Currently supported types include `zero-shot-react-description`, `react-docstore`, `self-ask-with-search`, `conversational-react-description`, and `openai-functions`.

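As a sketch of what such an initializer wires together, the toy code below pairs a stubbed LLM with a tool registry and validates the agent type against the supported list above. All names and the stub behavior are illustrative, not Langflow internals.

```python
def make_stub_llm(tool_names):
    def llm(prompt: str) -> str:
        return f"Action: {tool_names[0]}"  # a real LLM would reason over the prompt
    return llm

SUPPORTED_AGENT_TYPES = {
    "zero-shot-react-description",
    "react-docstore",
    "self-ask-with-search",
    "conversational-react-description",
    "openai-functions",
}

def initialize_agent(llm, tools, agent_type, memory=None):
    # Wire the pieces together, rejecting unsupported agent types.
    if agent_type not in SUPPORTED_AGENT_TYPES:
        raise ValueError(f"Unsupported agent type: {agent_type}")
    return {"llm": llm, "tools": dict(tools), "type": agent_type,
            "memory": memory or []}

tools = {"calculator": lambda q: "4"}  # toy tool
agent = initialize_agent(make_stub_llm(list(tools)), tools,
                         "zero-shot-react-description")
decision = agent["llm"]("What is 2 + 2?")
```
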
---

### CSVAgent

The `CSVAgent` interacts with CSV (Comma-Separated Values) files, commonly used to store tabular data. Each row in a CSV file represents a record, and each column represents a field. The CSV agent can read and write CSV files, process data, and perform tasks such as filtering, sorting, and aggregating.

**Parameters**:

- **LLM:** The language model used by the `CSVAgent`.
- **Path:** The file path to the CSV data.

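The kinds of operations the agent automates (filtering, sorting, aggregating) can be illustrated with the standard `csv` module; this is not the agent itself, just the work it performs on the file behind the scenes.

```python
import csv
import io

# An in-memory CSV stands in for the file at the configured path.
raw = "name,dept,salary\nAna,Eng,70\nBo,Sales,50\nCy,Eng,60\n"
rows = list(csv.DictReader(io.StringIO(raw)))

eng = [r for r in rows if r["dept"] == "Eng"]             # filter
eng_sorted = sorted(eng, key=lambda r: int(r["salary"]))  # sort
total = sum(int(r["salary"]) for r in eng)                # aggregate
```
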
---

### JSONAgent

The `JSONAgent` manages JSON (JavaScript Object Notation) data. This agent, like the CSVAgent, uses a language model (LLM) and a toolkit for JSON manipulation. It can explore a JSON blob to extract needed information, list keys, retrieve values, and navigate through the JSON structure.

**Parameters**:

- **LLM:** The language model used by the `JSONAgent`.
- **Toolkit:** The toolkit available to the agent.

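The "list keys" and "get value" exploration described above can be sketched with two small helpers (the helper names are hypothetical, not the toolkit's actual API):

```python
import json

blob = json.loads('{"user": {"name": "Ada", "roles": ["admin", "dev"]}}')

def list_keys(obj, path=()):
    # Walk down the path, then report the keys available at that level.
    node = obj
    for key in path:
        node = node[key]
    return sorted(node.keys())

def get_value(obj, path):
    # Follow a path of dict keys and list indices to a leaf value.
    node = obj
    for key in path:
        node = node[key] if isinstance(node, dict) else node[int(key)]
    return node

top = list_keys(blob)
name = get_value(blob, ["user", "name"])
```
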
---

### SQLAgent

The `SQLAgent` interacts with SQL databases, capable of querying, retrieving data, and executing SQL statements. It provides insights into the database structure, including tables and schemas, and can perform operations such as insertions, updates, and deletions.

**Parameters**:

- **LLM:** The language model used by the `SQLAgent`.
- **Database URI:** The connection URI for the SQL database.

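A feel for what the agent does under the hood: inspect the schema, then run SQL. Here `sqlite3` with an in-memory database stands in for whatever the connection URI points to.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada'), ('Grace')")

# Schema inspection, as the agent would report it:
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

# Querying, then a deletion, as the agent might execute them:
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
conn.execute("DELETE FROM users WHERE name = 'Ada'")
remaining = [r[0] for r in conn.execute("SELECT name FROM users")]
```
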
---

### VectorStoreAgent

The `VectorStoreAgent` operates with a vector store, which is a data structure for storing and querying vector-based data representations. This agent can query the vector store to find information relevant to user inputs.

**Parameters**:

- **LLM:** The language model used by the `VectorStoreAgent`.
- **Vector Store Info:** The `VectorStoreInfo` used by the agent.

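The query side of a vector store can be sketched with cosine similarity over hand-made vectors; a real store would embed the text with a model rather than use these toy 3-d vectors.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Document text mapped to its (toy) embedding vector.
store = {
    "cats are mammals": [1.0, 0.1, 0.0],
    "python is a language": [0.0, 1.0, 0.2],
}

def query(vec, k=1):
    # Rank stored documents by similarity to the query vector.
    ranked = sorted(store, key=lambda d: cosine(store[d], vec), reverse=True)
    return ranked[:k]

best = query([0.9, 0.2, 0.0])
```
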
---

### VectorStoreRouterAgent

The `VectorStoreRouterAgent` is a custom agent that uses a vector store router. It is typically used to retrieve information from multiple vector stores connected through a `VectorStoreRouterToolkit`. An agent configured with multiple vector stores can route queries to the appropriate store based on the context.

**Parameters**:

- **LLM:** The language model used by the `VectorStoreRouterAgent`.
- **Vector Store Router Toolkit:** The `VectorStoreRouterToolkit` used by the agent.

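The routing decision itself reduces to "which store matches this query best?". The sketch below scores each store by keyword overlap purely to illustrate the decision; real routing compares embedding similarities.

```python
# Each "store" is reduced to the set of topics it covers.
stores = {
    "animal-docs": {"cat", "dog", "mammal"},
    "code-docs": {"python", "function", "loop"},
}

def route(question: str) -> str:
    words = set(question.lower().split())
    # Score each store by overlap with the question and route to the best one.
    return max(stores, key=lambda name: len(stores[name] & words))

target = route("How do I write a python loop?")
```
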
---

### ZeroShotAgent

The `ZeroShotAgent` uses the ReAct framework to decide which tool to use based on the tool's description. It is the most general-purpose action agent, capable of determining the necessary actions and their sequence through an `LLMChain`.

**Parameters**:

- **Allowed Tools:** The tools accessible to the agent.
- **LLM Chain:** The LLM Chain used by the agent.

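The ReAct loop in miniature: choose a tool from its description alone, act, observe, answer. The scripted description-overlap "reasoning" below stands in for the LLM call; everything here is illustrative.

```python
# Tool name -> (description, callable). Only the description guides selection.
tools = {
    "Search": ("useful for questions about current events",
               lambda q: "Paris"),
    "Calculator": ("useful for math and arithmetic questions",
                   lambda q: "42"),
}

def choose_tool(question: str) -> str:
    words = set(question.lower().split())
    # Pick the tool whose description overlaps the question the most.
    return max(tools, key=lambda n: len(set(tools[n][0].split()) & words))

def run_agent(question: str) -> str:
    name = choose_tool(question)            # Thought + Action
    observation = tools[name][1](question)  # Observation
    return f"{name}: {observation}"         # Final answer

answer = run_agent("a math question: what is 6 * 7")
```
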
@@ -6,143 +6,65 @@ import Admonition from "@theme/Admonition";
# Chains

<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
Thank you for your patience while we enhance our documentation. It may
have some imperfections. Share your feedback or report issues to help us
improve! 🛠️📝
</p>
</Admonition>

Chains, in the context of language models, refer to a series of calls made to a language model. This approach allows for using the output of one call as the input for another. Different chain types facilitate varying complexity levels, making them useful for creating pipelines and executing specific scenarios.

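The core idea (each call's output becomes the next call's input) fits in a few lines. Plain functions stand in for LLM calls here:

```python
def summarize(text: str) -> str:
    return text.split(".")[0]  # pretend summary: keep the first sentence

def translate(text: str) -> str:
    return text.upper()        # pretend translation

def chain(*steps):
    def run(x):
        for step in steps:     # each output feeds the next step
            x = step(x)
        return x
    return run

pipeline = chain(summarize, translate)
result = pipeline("Chains link calls. Extra detail here.")
```
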
---
### CombineDocsChain

`CombineDocsChain` includes methods to combine or aggregate loaded documents for question-answering functionality.

<Admonition type="info">
Works as a proxy of LangChain’s [documents](https://python.langchain.com/docs/modules/chains/document/) chains generated by the `load_qa_chain` function.
</Admonition>

**Parameters**:

- **LLM:** Language Model to use in the chain.
- **chain_type:** Type of chain to be used, each applying a different combination strategy:
  - **stuff**: Most straightforward document chain. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM. Suitable for cases where documents are small and few.
  - **map_reduce**: Applies an LLM to each document individually (the `Map` step), treating the output as a new document. It then combines these documents to get a single output (the `Reduce` step). Compression may occur to ensure documents fit in the final chain.
  - **map_rerank**: Runs an initial prompt on each document to complete a task and score its certainty. Returns the highest-scoring response.
  - **refine**: Iteratively updates its answer by looping over the input documents. Each document, along with the latest intermediate answer, is passed to an LLM to generate a new response. This method suits tasks requiring analysis of more documents than the model's context can handle, though it can be less effective for tasks requiring detailed cross-referencing or comprehensive information.

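The four combination strategies can be contrasted in miniature. The "llm" below is a stub that just counts words, which is enough to show the data flow of each strategy; none of this is the real implementation.

```python
docs = ["alpha beta", "gamma delta epsilon", "zeta"]
llm = lambda text: f"{len(text.split())} words"  # stand-in for an LLM call

def stuff(docs):            # one prompt containing every document
    return llm(" ".join(docs))

def map_reduce(docs):       # map each doc through the LLM, reduce the outputs
    mapped = [llm(d) for d in docs]
    return llm(" ".join(mapped))

def map_rerank(docs):       # score each doc's answer, keep the best one
    scored = [(len(d.split()), llm(d)) for d in docs]
    return max(scored)[1]

def refine(docs):           # fold docs one at a time into an evolving answer
    answer = ""
    for d in docs:
        answer = llm(answer + " " + d)
    return answer

result = stuff(docs)
```
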
---

### ConversationChain

`ConversationChain` facilitates dynamic, interactive conversations with a language model, ideal for chatbots or virtual assistants.

**Parameters**:

- **LLM:** Language Model to use in the chain.
- **Memory:** Default memory store.
- **input_key:** Specifies the key under which user input is stored in the conversation memory, enabling the chain to process and generate responses.
- **output_key:** Specifies the key under which the generated response is stored, allowing retrieval of the response using this key.
- **verbose:** Controls the verbosity of the chain's output. Set to `True` to enable detailed internal state outputs, useful for debugging and understanding the chain's behavior. Defaults to `False`.

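How `input_key` and `output_key` address the memory can be sketched as follows: each turn is stored under those keys so later turns can see the history. The echo reply is a stand-in for the LLM.

```python
memory = []
INPUT_KEY, OUTPUT_KEY = "input", "response"

def converse(user_text: str) -> str:
    # A real chain would prompt the LLM with the stored history here.
    reply = f"echo[{len(memory)} prior]: {user_text}"
    memory.append({INPUT_KEY: user_text, OUTPUT_KEY: reply})
    return reply

first = converse("hello")
second = converse("how are you?")
```
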
---

### ConversationalRetrievalChain

`ConversationalRetrievalChain` combines document search with question-answering capabilities, extracting information and providing answers.

<Admonition type="info">
A retriever is a component that finds documents based on a query. It doesn't store the documents themselves, but it returns the ones that match the query.
</Admonition>

**Parameters**:

- **LLM:** Language Model to use in the chain.
- **Memory:** Default memory store.
- **Retriever:** The retriever used to fetch relevant documents.
- **chain_type:** Type of chain to be used, each applying a different combination strategy:
  - **stuff**: Inserts a list of documents into a prompt and passes it to an LLM. Suitable for cases where documents are small and few.
  - **map_reduce**: Processes each document with an LLM separately, then combines the outputs into a single result. Compression may occur to fit documents into the final chain.
  - **map_rerank**: Scores a response from each document based on certainty and returns the highest-scoring one.
  - **refine**: Updates answers iteratively by looping through documents, passing each with intermediate answers to an LLM for a new response. This method is beneficial for tasks that involve extensive document analysis.
- **return_source_documents:** Specifies whether to include the source documents used to answer the question in the output. Useful for providing context or references to the user. Defaults to `True`.
- **verbose:** Controls verbosity of output. Set to `True` for detailed logs, useful for debugging. Defaults to `False`.

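What makes this chain conversational is that a follow-up question is condensed with the chat history before retrieval, so a pronoun like "its" can resolve to an earlier topic. All components below are toy stand-ins for the real LLM and retriever.

```python
docs = {"france": "Paris is the capital of France.",
        "spain": "Madrid is the capital of Spain."}

def condense(history, question):
    # A real chain asks the LLM to rewrite the follow-up as standalone.
    if "its" in question and history:
        return history[-1]  # fall back to the previous topic
    return question

def retrieve(query):
    q = query.lower()
    return [text for key, text in docs.items() if key in q]

history = ["Tell me about France"]
standalone = condense(history, "What is its capital?")
sources = retrieve(standalone)
answer = sources[0].split(" is")[0] if sources else "unknown"
```
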
---

### LLMChain

The `LLMChain` is a straightforward chain that adds functionality around language models. It combines a prompt template with a language model. To use it, create input variables to format the prompt template. The formatted prompt is then sent to the language model, and the generated output is returned as the result of the `LLMChain`.

**Parameters**:

- **LLM:** Language Model to use in the chain.
- **Memory:** Default memory store.
- **Prompt**: Prompt template object to use in the chain.
- **output_key:** Specifies which key in the LLM output dictionary should be returned as the final output. By default, the `LLMChain` returns both the input and output key values. Defaults to `text`.
- **verbose:** Whether or not to run in verbose mode. In verbose mode, intermediate logs are printed to the console. Defaults to `False`.

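The template-fill-call-return flow described above can be sketched in a few lines, with a stub standing in for the model; note how the result dict carries both the inputs and the `output_key` value, matching the default behavior described above.

```python
template = "Suggest a name for a company that makes {product}."

def stub_llm(prompt: str) -> str:
    return "ColorfulSocks Co."  # a real LLM would generate this

def llm_chain(inputs: dict, output_key: str = "text") -> dict:
    prompt = template.format(**inputs)          # fill the prompt template
    return {**inputs, output_key: stub_llm(prompt)}

out = llm_chain({"product": "socks"})
```
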
---

### LLMMathChain

The `LLMMathChain` combines a language model (LLM) and a math calculation component. It allows the user to input math problems and get the corresponding solutions.

The `LLMMathChain` works by using the language model with an `LLMChain` to understand the input math problem and generate a math expression. It then passes this expression to the math component, which evaluates it and returns the result.

**Parameters**:

- **LLM:** Language Model to use in the chain.
- **LLMChain:** LLM Chain to use in the chain.
- **Memory:** Default memory store.
- **input_key:** Specifies the input value for the mathematical calculation. Defaults to `question`.
- **output_key:** Specifies the key under which the result of the calculation is stored. Defaults to `answer`.
- **verbose:** Whether or not to run in verbose mode. In verbose mode, intermediate logs are printed to the console. Defaults to `False`.

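The two-step flow above is easy to mimic: a (stubbed) LLM turns the question into an expression, and a separate evaluator computes it. The evaluator walks the AST so only arithmetic is accepted; the expression returned by the stub is hardcoded for illustration.

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str):
    # Safely evaluate a pure-arithmetic expression by walking its AST.
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("only arithmetic is allowed")
    return walk(ast.parse(expr, mode="eval").body)

def stub_llm(question: str) -> str:
    return "37593 * 67"  # a real LLM writes the expression for the question

def llm_math_chain(question: str, output_key: str = "answer") -> dict:
    return {output_key: evaluate(stub_llm(question))}

result = llm_math_chain("What is 37593 times 67?")
```
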
---

### RetrievalQA

`RetrievalQA` is a chain used to find relevant documents or information to answer a given query. The retriever returns the relevant documents based on the query, and the QA component then extracts the answer from those documents. Together, the retriever and the QA component provide accurate and relevant answers to user queries.

<Admonition type="info">
A retriever is a component that finds documents based on a query. It doesn't store the documents themselves, but it returns the ones that match the query.
</Admonition>

**Parameters**:

- **Combine Documents Chain:** Chain to use to combine the documents.
- **Memory:** Default memory store.
- **Retriever:** The retriever used to fetch relevant documents.
- **input_key:** Specifies the key in the input data that contains the question. Defaults to `query`.
- **output_key:** Specifies the key in the output data where the generated answer is stored. Defaults to `result`.
- **return_source_documents:** Specifies whether to include the source documents used to answer the question in the output. Useful for providing context or references to the user. Defaults to `True`.
- **verbose:** Whether or not to run in verbose mode. In verbose mode, intermediate logs are printed to the console. Defaults to `False`.

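A toy version of the retrieve-then-answer flow, showing the effect of `return_source_documents`. Retrieval here is plain keyword overlap and the "QA model" just returns the best-matching document; all names are illustrative.

```python
docs = ["The Eiffel Tower is in Paris.",
        "The Colosseum is in Rome."]

def retrieval_qa(query: str, return_source_documents: bool = True) -> dict:
    words = set(query.lower().split())
    # Rank documents by keyword overlap with the query.
    ranked = sorted(docs, key=lambda d: len(set(d.lower().split()) & words),
                    reverse=True)
    output = {"result": ranked[0]}      # output_key defaults to "result"
    if return_source_documents:
        output["source_documents"] = ranked[:1]
    return output

with_sources = retrieval_qa("Where is the Eiffel Tower?")
without = retrieval_qa("Where is the Eiffel Tower?", False)
```
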
---

### SQLDatabaseChain

The `SQLDatabaseChain` finds answers to questions using a SQL database. It uses the language model to understand the question and generate the corresponding SQL code. It then passes the SQL code to the SQL database component, which executes the query on the database and returns the result.

**Parameters**:

- **Db:** SQL Database to connect to.
- **LLM:** Language Model to use in the chain.
- **Prompt:** Prompt template to translate natural language to SQL.

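Question in, SQL out, rows back: in the sketch below a hardcoded "translator" stands in for the prompted LLM, and `sqlite3` plays the database component.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

def translate_to_sql(question: str) -> str:
    # A real chain prompts the LLM with the schema and the question.
    return "SELECT COUNT(*) FROM orders"

def sql_database_chain(question: str):
    sql = translate_to_sql(question)        # LLM step
    return db.execute(sql).fetchone()[0]    # database step

count = sql_database_chain("How many orders are there?")
```
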
@@ -2,114 +2,105 @@ import Admonition from "@theme/Admonition";

# Custom Components

<Admonition type="info" label="Tip">
Read the [Custom Component Guidelines](../administration/custom-component) for detailed information on custom components.
</Admonition>

Custom components let you extend Langflow by creating reusable and configurable components from a Python script.

## Usage

To create a custom component:

1. Define a class that inherits from `langflow.CustomComponent`.
2. Implement a `build` method in your class.
3. Use type annotations in the `build` method to define component fields.
4. Optionally, use the `build_config` method to customize field appearance and behavior.

**Parameters**

- **Code:** The Python code that defines the component.

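The mechanism behind step 3 is that the loader reads the `build` method's type annotations to create the component's fields. The sketch below shows this with `inspect`; the stub base class stands in for `langflow.CustomComponent` so it is self-contained, and the component itself is a made-up example.

```python
import inspect

class CustomComponent:  # stand-in for langflow.CustomComponent
    pass

class TextRepeater(CustomComponent):
    display_name = "Text Repeater"

    def build(self, text: str, times: int) -> str:
        return text * times

# What a component loader would see: one field per annotated parameter.
fields = {
    name: param.annotation
    for name, param in inspect.signature(TextRepeater.build).parameters.items()
    if name != "self"
}
result = TextRepeater().build("ab", 3)
```
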
## CustomComponent Class

This class is the foundation for creating custom components. It allows users to create new, configurable components tailored to their needs.

### Methods

**build:** This method is essential in a `CustomComponent` class. It defines the component's functionality and how it processes input data. The build method is invoked when you click the **Build** button on the canvas.

The following types are supported in the build method:

| Supported Types |
| --------------------------------------------------------- |
| _`str`_, _`int`_, _`float`_, _`bool`_, _`list`_, _`dict`_ |
| _`langflow.field_typing.NestedDict`_ |
| _`langflow.field_typing.Prompt`_ |
| _`langchain.chains.base.Chain`_ |
| _`langchain.PromptTemplate`_ |
| _`langchain.schema.language_model.BaseLanguageModel`_ |
| _`langchain.Tool`_ |
| _`langchain.document_loaders.base.BaseLoader`_ |
| _`langchain.schema.Document`_ |
| _`langchain.text_splitters.TextSplitter`_ |
| _`langchain.vectorstores.base.VectorStore`_ |
| _`langchain.embeddings.base.Embeddings`_ |
| _`langchain.schema.BaseRetriever`_ |

The difference between _`dict`_ and _`langflow.field_typing.NestedDict`_ is that the former adds a simple key-value pair field, while the latter opens a more robust dictionary editor.

<Admonition type="info">
Unlike Langchain types, base Python types do not add a
[handle](../administration/components) to the field by default. To add handles,
use the _`input_types`_ key in the _`build_config`_ method.
</Admonition>

- **build_config**: Used to define the configuration fields of the component (if applicable). It should always return a dictionary with specific keys representing the field names and corresponding configurations. This method is called when the code is processed (i.e., when you click _Check and Save_ in the code editor). It must follow the format described below:

  - Top-level keys are field names.
  - Their values can be of type _`langflow.field_typing.TemplateField`_ or _`dict`_. They specify the behavior of the generated fields.

Below are the available keys used to configure component fields:

| Key | Description |
| -------------------------- | ----------- |
| _`is_list: bool`_ | If the field can be a list of values, meaning that the user can manually add more inputs to the same field. |
| _`options: List[str]`_ | When defined, the field becomes a dropdown menu where a list of strings defines the options to be displayed. If the _`value`_ attribute is set to one of the options, that option becomes the default. For this parameter to work, _`field_type`_ should invariably be _`str`_. |
| _`multiline: bool`_ | Defines whether a string field opens a text editor. Useful for longer texts. |
| _`input_types: List[str]`_ | Used when you want a _`str`_ field to have connectable handles. |
| _`display_name: str`_ | Defines the name of the field. |
| _`advanced: bool`_ | Hides the field in the canvas view (displayed in the component settings only). Useful when a field is for advanced users. |
| _`password: bool`_ | Masks the input text. Useful for hiding sensitive text (e.g., API keys). |
| _`required: bool`_ | Determined automatically, but can be used to override the default behavior. |
| _`info: str`_ | Adds a tooltip to the field. |
| _`file_types: List[str]`_ | Required if the _`field_type`_ is _file_. Defines which file types will be accepted, for example _json_, _yaml_, or _yml_. |
| _`range_spec: langflow.field_typing.RangeSpec`_ | Required if the _`field_type`_ is _`float`_. Defines the range of accepted values and the step size. If none is defined, the default is _`[-1, 1, 0.1]`_. |
| _`title_case: bool`_ | Formats the name of the field when _`display_name`_ is not defined. Set it to `False` to keep the name as you set it in the _`build`_ method. |
| _`refresh_button: bool`_ | If set to `True`, a button appears to the right of the field; when clicked, it calls the _`update_build_config`_ method, which takes in the _`build_config`_, the name of the field (_`field_name`_), and the latest value of the field (_`field_value`_). Useful when you want to update the _`build_config`_ based on the value of the field. |
|
||||
| _`real_time_refresh: bool`_ | If set to True, the _`update_build_config`_ method will be called every time the field value changes. |
|
||||
| _`field_type: str`_ | You should never define this key. It is automatically set based on the type hint of the _`build`_ method. |
|
||||
|
||||
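As a sketch of the format above, a hypothetical `build_config` return value might look like this. The field names (`model`, `prompt`, `api_key`) and their values are invented for illustration, not part of any real component:

```python
# A hypothetical build_config return value illustrating the keys described
# above. In a real component, this dict is returned by the
# CustomComponent.build_config method.
def build_config():
    return {
        "model": {
            "display_name": "Model",
            "options": ["small", "large"],  # renders the field as a dropdown
            "value": "small",               # default selected option
        },
        "prompt": {
            "display_name": "Prompt",
            "multiline": True,              # opens a text editor for long text
        },
        "api_key": {
            "display_name": "API Key",
            "password": True,               # masks the input
            "advanced": True,               # hidden in the default canvas view
        },
    }
```

Each top-level key would then match a parameter of the component's `build` method.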
<Admonition type="info" label="Tip">
  By using the `update_build_config` method, you can update the `build_config` in whatever way you need, based on the field value or anything else.

  The difference between `dict` and `langflow.field_typing.NestedDict` is that one adds a simple key-value pair field, while the other opens a more robust dictionary editor.
</Admonition>

<Admonition type="info">
  Use the `Prompt` type by adding `**kwargs` to the `build` method.

  If you want to add the values of the variables to the template you defined, format the `PromptTemplate` inside the `CustomComponent` class.
</Admonition>

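To make the `update_build_config` signature concrete, here is a hypothetical sketch; the `provider` and `model` field names are invented for the example:

```python
# A hypothetical update_build_config, matching the signature described above:
# it receives the current build_config, the changed field's name, and that
# field's latest value, and returns the (possibly modified) build_config.
def update_build_config(build_config, field_name, field_value):
    # Example: when the user picks a provider, narrow the model options.
    # Assumes build_config already contains a "model" field.
    if field_name == "provider":
        models = {"openai": ["gpt-4"], "local": ["llama2"]}
        build_config["model"]["options"] = models.get(field_value, [])
    return build_config
```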
## Additional methods and attributes

The `CustomComponent` class also provides helpful methods for specific tasks (e.g., to load and use other flows from the Langflow platform):

### Methods

| Method | Description |
| ------------ | ------------------------------------------------------------------ |
| `list_flows` | Returns a list of Flow objects, each with an `id` and a `name`. |
| `get_flow` | Returns a Flow object. Parameters are `flow_name` or `flow_id`. |
| `load_flow` | Loads a flow from a given `id`. |

### Attributes

| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------- |
| `status` | Displays the value it receives in the `build` method. Useful for debugging. |
| `field_order` | Defines the order in which fields are displayed in the canvas. |
| `icon` | Defines the emoji (for example, `:rocket:`) displayed in the canvas. |

<Admonition type="info" label="Tip">
  Check out the [FlowRunner](../examples/flow-runner) example to understand how to call a flow from a custom component.
</Admonition>

import Admonition from '@theme/Admonition';

# Data

## API Request

This component sends HTTP requests to the specified URLs.

Use this component to interact with external APIs or services and retrieve data. Ensure that the URLs are valid and that you configure the method, headers, body, and timeout correctly.

**Parameters:**

- **URLs:** The URLs to target.
- **Method:** The HTTP method, such as GET or POST.
- **Headers:** The headers to include with the request.
- **Body:** The data to send with the request (for methods like POST, PATCH, PUT).
- **Timeout:** The maximum time to wait for a response.

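The parameters above map directly onto a plain HTTP call. A minimal sketch with the Python `requests` library (the URL and payload are placeholders); the request is only prepared here, not sent:

```python
import requests

# Placeholder values mirroring the component's parameters.
url = "https://example.com/api/items"            # URLs
method = "POST"                                  # Method
headers = {"Content-Type": "application/json"}   # Headers
body = {"query": "hello"}                        # Body (for POST, PATCH, PUT)
timeout = 10                                     # Timeout, in seconds

# Prepare the request without sending it, to show how the pieces combine.
prepared = requests.Request(method, url, headers=headers, json=body).prepare()
print(prepared.method, prepared.url)

# Actually sending it would be:
# response = requests.request(method, url, headers=headers, json=body, timeout=timeout)
```
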
---

## Directory

This component recursively retrieves files from a specified directory.

Use this component to retrieve various file types, such as text or JSON files, from a directory. Make sure to provide the correct path and configure the other parameters as needed.

**Parameters:**

- **Path:** The directory path.
- **Types:** The types of files to retrieve. Leave this blank to retrieve all file types.
- **Depth:** The level of directory depth to search.
- **Max Concurrency:** The maximum number of simultaneous file loading operations.
- **Load Hidden:** Set to true to include hidden files.
- **Recursive:** Set to true to enable recursive search.
- **Silent Errors:** Set to true to suppress exceptions on errors.
- **Use Multithreading:** Set to true to use multithreading in file loading.
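Conceptually, the component behaves like a recursive directory scan filtered by the parameters above. A rough standalone sketch (the helper name and its defaults are invented for illustration):

```python
from pathlib import Path

# Hypothetical helper mirroring the Directory component's parameters.
def load_files(path, types=None, depth=2, load_hidden=False):
    """Collect files under `path`, up to `depth` levels deep."""
    results = []
    for p in sorted(Path(path).rglob("*")):
        if not p.is_file():
            continue
        rel = p.relative_to(path)
        if len(rel.parts) > depth:                 # Depth
            continue
        if not load_hidden and any(part.startswith(".") for part in rel.parts):
            continue                               # Load Hidden
        if types and p.suffix.lstrip(".") not in types:
            continue                               # Types
        results.append(p)
    return results

print(load_files(".", types=["py"], depth=1))
```
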
---

## File

This component loads a file.

Use this component to load files, such as text or JSON files. Ensure you specify the correct path and configure other parameters as necessary.

**Parameters:**

- **Path:** The file path.
- **Silent Errors:** Set to true to prevent exceptions on errors.
---

## URL

This component retrieves content from specified URLs.

Ensure the URLs are valid and adjust other parameters as needed.

**Parameters:**

- **URLs:** The URLs to retrieve content from.

import Admonition from "@theme/Admonition";

# Embeddings
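Whatever the provider, each component below returns one vector of floats per input text. A provider-agnostic sketch of how two such vectors are typically compared:

```python
import math

# Embedding models map text to vectors of floats; similar texts get similar
# vectors. A common way to compare two embeddings is cosine similarity.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models return hundreds of dimensions).
print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))  # identical -> 1.0
```
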
## Amazon Bedrock Embeddings

Used to load embedding models from [Amazon Bedrock](https://aws.amazon.com/bedrock/).

| **Parameter** | **Type** | **Description** | **Default** |
|----------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| `credentials_profile_name` | `str` | Name of the AWS credentials profile in `~/.aws/credentials` or `~/.aws/config`, which has either access keys or role information. If not specified, the default credential profile or, on an EC2 instance, credentials from IMDS are used. | |
| `model_id` | `str` | ID of the model to call, e.g., `amazon.titan-embed-text-v1`. This is equivalent to the `modelId` property in the `list-foundation-models` API. | |
| `endpoint_url` | `str` | URL to set a specific service endpoint other than the default AWS endpoint. | |
| `region_name` | `str` | AWS region to use, e.g., `us-west-2`. Falls back to the `AWS_DEFAULT_REGION` environment variable or the region specified in `~/.aws/config` if not provided. | |

---

## Cohere Embeddings

Used to load embedding models from [Cohere](https://cohere.com/).

| **Parameter** | **Type** | **Description** | **Default** |
|------------------|----------|----------------------------------------------------------------------------|----------------------|
| `cohere_api_key` | `str` | API key required to authenticate with the Cohere service. | |
| `model` | `str` | Language model used for embedding text documents and performing queries. | `embed-english-v2.0` |
| `truncate` | `bool` | Whether to truncate the input text to fit within the model's constraints. | `False` |

---

## Azure OpenAI Embeddings

Generate embeddings using Azure OpenAI models.

| **Parameter** | **Type** | **Description** | **Default** |
|-------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| `Azure Endpoint` | `str` | Your Azure endpoint, including the resource. Example: `https://example-resource.azure.openai.com/` | |
| `Deployment Name` | `str` | The name of the deployment. | |
| `API Version` | `str` | The API version to use. Options: `2022-12-01`, `2023-03-15-preview`, `2023-05-15`, `2023-06-01-preview`, `2023-07-01-preview`, `2023-08-01-preview`. | |
| `API Key` | `str` | The API key to access the Azure OpenAI service. | |

---

## Hugging Face API Embeddings

Generate embeddings using Hugging Face Inference API models.

| **Parameter** | **Type** | **Description** | **Default** |
|-----------------|----------|--------------------------------------------------------|--------------------------|
| `API Key` | `str` | API key for accessing the Hugging Face Inference API. | |
| `API URL` | `str` | URL of the Hugging Face Inference API. | `http://localhost:8080` |
| `Model Name` | `str` | Name of the model to use for embeddings. | `BAAI/bge-large-en-v1.5` |
| `Cache Folder` | `str` | Folder path to cache Hugging Face models. | |
| `Encode Kwargs` | `dict` | Additional arguments for the encoding process. | |
| `Model Kwargs` | `dict` | Additional arguments for the model. | |
| `Multi Process` | `bool` | Whether to use multiple processes. | `False` |

---

## Hugging Face Embeddings

Used to load embedding models from [HuggingFace](https://huggingface.co).

| **Parameter** | **Type** | **Description** | **Default** |
|-----------------|----------|--------------------------------------------------|-------------------------------------------|
| `Cache Folder` | `str` | Folder path to cache HuggingFace models. | |
| `Encode Kwargs` | `dict` | Additional arguments for the encoding process. | |
| `Model Kwargs` | `dict` | Additional arguments for the model. | |
| `Model Name` | `str` | Name of the HuggingFace model to use. | `sentence-transformers/all-mpnet-base-v2` |
| `Multi Process` | `bool` | Whether to use multiple processes. | `False` |

---

## OpenAI Embeddings

Used to load embedding models from [OpenAI](https://openai.com/).

| **Parameter** | **Type** | **Description** | **Default** |
|----------------------------|------------------|---------------------------------------------------|--------------------------|
| `OpenAI API Key` | `str` | The API key to use for accessing the OpenAI API. | |
| `Default Headers` | `Dict[str, str]` | Default headers for the HTTP requests. | |
| `Default Query` | `NestedDict` | Default query parameters for the HTTP requests. | |
| `Allowed Special` | `List[str]` | Special tokens allowed for processing. | `[]` |
| `Disallowed Special` | `List[str]` | Special tokens disallowed for processing. | `["all"]` |
| `Chunk Size` | `int` | Chunk size for processing. | `1000` |
| `Client` | `Any` | HTTP client for making requests. | |
| `Deployment` | `str` | Deployment name for the model. | `text-embedding-3-small` |
| `Embedding Context Length` | `int` | Length of the embedding context. | `8191` |
| `Max Retries` | `int` | Maximum number of retries for failed requests. | `6` |
| `Model` | `str` | Name of the model to use. | `text-embedding-3-small` |
| `Model Kwargs` | `NestedDict` | Additional keyword arguments for the model. | |
| `OpenAI API Base` | `str` | Base URL of the OpenAI API. | |
| `OpenAI API Type` | `str` | Type of the OpenAI API. | |
| `OpenAI API Version` | `str` | Version of the OpenAI API. | |
| `OpenAI Organization` | `str` | Organization associated with the API key. | |
| `OpenAI Proxy` | `str` | Proxy server for the requests. | |
| `Request Timeout` | `float` | Timeout for the HTTP requests. | |
| `Show Progress Bar` | `bool` | Whether to show a progress bar for processing. | `False` |
| `Skip Empty` | `bool` | Whether to skip empty inputs. | `False` |
| `TikToken Enable` | `bool` | Whether to enable TikToken. | `True` |
| `TikToken Model Name` | `str` | Name of the TikToken model. | |

---

## Ollama Embeddings

Generate embeddings using Ollama models.

| **Parameter** | **Type** | **Description** | **Default** |
|---------------------|----------|--------------------------------------------------------------------------------------------|--------------------------|
| `Ollama Model` | `str` | Name of the Ollama model to use. | `llama2` |
| `Ollama Base URL` | `str` | Base URL of the Ollama API. | `http://localhost:11434` |
| `Model Temperature` | `float` | Temperature parameter for the model. Adjusts the randomness in the generated embeddings. | |

---

## VertexAI Embeddings

Wrapper around the [Google Vertex AI](https://cloud.google.com/vertex-ai) [Embeddings API](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings).

<Admonition type="info">
  Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP). It provides access, management, and development of applications and services through global data centers. To use Vertex AI PaLM, you need to have the [google-cloud-aiplatform](https://pypi.org/project/google-cloud-aiplatform/) Python package installed and credentials configured for your environment.
</Admonition>

| **Parameter** | **Type** | **Description** | **Default** |
|-----------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------|---------------|
| `credentials` | `Credentials` | The default custom credentials to use. | |
| `location` | `str` | The default location to use when making API calls. | `us-central1` |
| `max_output_tokens` | `int` | Token limit that determines the maximum amount of text output from one prompt. | `128` |
| `model_name` | `str` | The name of the Vertex AI large language model. | `text-bison` |
| `project` | `str` | The default GCP project to use when making Vertex API calls. | |
| `request_parallelism` | `int` | The amount of parallelism allowed for requests issued to VertexAI models. | `5` |
| `temperature` | `float` | Tunes the degree of randomness in text generations. Should be a non-negative value. | `0` |
| `top_k` | `int` | How the model selects tokens for output; the next token is selected from the top `k` tokens. | `40` |
| `top_p` | `float` | Tokens are selected from most probable to least until the sum of their probabilities exceeds the top `p` value. | `0.95` |
| `tuned_model_name` | `str` | The name of a tuned model. If provided, `model_name` is ignored. | |
| `verbose` | `bool` | Controls the level of detail in the output. When set to `True`, internal states of the chain are printed to help debugging. | `False` |

import Admonition from '@theme/Admonition';

# Experimental

Components in the experimental phase are currently in beta. They have been initially developed and tested but haven't yet achieved a stable or fully supported status. We encourage users to explore these components, provide feedback, and report any issues encountered.

### Clear Message History Component

This component clears the message history for a specified session ID.

**Beta:** This component is in beta.

**Parameters**

- **Session ID:**
  - **Display Name:** Session ID
  - **Info:** Clears the message history for this ID.

**Usage**

Provide the session ID to clear its message history.

---

|
@ -30,68 +30,68 @@ This component extracts specified keys from a record.
|
|||
|
||||
- **Record:**
|
||||
- **Display Name:** Record
|
||||
- **Info:** The record from which to extract the keys.
|
||||
- **Info:** The record from which to extract keys.
|
||||
|
||||
- **Keys:**
|
||||
- **Display Name:** Keys
|
||||
- **Info:** The keys to extract from the record.
|
||||
- **Info:** The keys to be extracted.
|
||||
|
||||
- **Silent Errors:**
|
||||
- **Display Name:** Silent Errors
|
||||
- **Info:** If True, errors will not be raised.
|
||||
- **Info:** Set to true to suppress errors.
|
||||
- **Advanced:** True
|
||||
|
||||
**Usage**
|
||||
|
||||
To use this component, provide the record from which you want to extract keys, specify the keys to extract, and optionally set whether to raise errors for missing keys.
|
||||
Provide the record and specify the keys you want to extract. Optionally, enable silent errors for missing keys.
|
||||
|
||||
---
|
||||
|
||||
 ### Flow as Tool
 
-This component constructs a Tool from a function that runs the loaded Flow.
+This component turns a function running a flow into a Tool.
 
 **Parameters**
 
 - **Flow Name:**
   - **Display Name:** Flow Name
-  - **Info:** The name of the flow to run.
-  - **Options:** List of available flow names.
+  - **Info:** Select the flow to run.
+  - **Options:** List of available flows.
   - **Real-time Refresh:** True
   - **Refresh Button:** True
 
 - **Name:**
   - **Display Name:** Name
-  - **Description:** The name of the tool.
+  - **Description:** The tool's name.
 
 - **Description:**
   - **Display Name:** Description
-  - **Description:** The description of the tool.
+  - **Description:** Describes the tool.
 
 - **Return Direct:**
   - **Display Name:** Return Direct
-  - **Description:** Return the result directly from the Tool.
+  - **Description:** Returns the result directly.
   - **Advanced:** True
 
 **Usage**
 
-To use this component, select the desired flow from the available options, provide a name and description for the tool, and specify whether to return the result directly from the tool.
+Select a flow, name and describe the tool, and decide if you want to return the result directly.
 
 ---
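The idea of wrapping a flow-running function as a named tool can be sketched as follows. All names here (`Tool`, `run_my_flow`, `doc_search`) are illustrative assumptions, not the component's real internals:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of "Flow as Tool": a callable that runs a flow is
# wrapped in a small Tool record carrying the name, description, and the
# return_direct flag described above.
@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]
    return_direct: bool = False

def run_my_flow(inputs: str) -> str:
    # Stand-in for invoking the selected flow.
    return f"flow output for: {inputs}"

doc_search = Tool(
    name="doc_search",
    description="Runs the documentation-search flow.",
    func=run_my_flow,
    return_direct=True,
)
```

With `return_direct` set, an agent using the tool would hand the tool's result straight back to the caller instead of reasoning over it further.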
 ### Listen
 
-This component listens for a notification.
+This component listens for a specified notification.
 
 **Parameters**
 
 - **Name:**
   - **Display Name:** Name
-  - **Info:** The name of the notification to listen for.
+  - **Info:** The notification to listen for.
 
 **Usage**
 
-To use this component, specify the name of the notification to listen for.
+Specify the notification to listen for.
 
 ---
@@ -101,12 +101,14 @@ This component lists all available flows.
 
 **Usage**
 
-To use this component, simply call it without any parameters.
+Call this component without parameters to list all flows.
 
 ---
 ### Merge Records
 
 This component merges a list of records.
 
 **Parameters**
 
 - **Records:**
@@ -114,37 +116,37 @@ To use this component, simply call it without any parameters.
 
 **Usage**
 
-To use this component, provide a list of records to merge.
+Provide the records you want to merge.
 
 ---
 ### Notify
 
-This component generates a notification to the Get Notified component.
+This component generates a notification.
 
 **Parameters**
 
 - **Name:**
   - **Display Name:** Name
-  - **Info:** The name of the notification.
+  - **Info:** The notification's name.
 
 - **Record:**
   - **Display Name:** Record
-  - **Info:** The record to store.
+  - **Info:** Optionally, a record to store in the notification.
 
 - **Append:**
   - **Display Name:** Append
-  - **Info:** If True, the record will be appended to the notification.
+  - **Info:** Set to true to append the record to the notification.
 
 **Usage**
 
-To use this component, specify the name of the notification, provide an optional record to store, and indicate whether to append the record to the notification.
+Specify the notification name, provide a record if necessary, and indicate whether to append it.
 
 ---
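How the Notify and Listen components might cooperate can be sketched with a shared in-memory store. This is a hypothetical model of the behavior described above, not the actual implementation:

```python
# Hypothetical sketch: Notify writes a named entry to a shared store
# (optionally appending to the existing records), and Listen reads the
# entries back by name.
_notifications: dict = {}

def notify(name: str, record=None, append: bool = False) -> None:
    if append:
        _notifications.setdefault(name, []).append(record)
    else:
        _notifications[name] = [record]

def listen(name: str) -> list:
    return _notifications.get(name, [])
```

With `append=True`, repeated notifications accumulate under the same name instead of replacing each other.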
 ### Run Flow
 
-This component runs a flow.
+This component runs a specified flow.
 
 **Parameters**
@@ -154,33 +156,33 @@ This component runs a flow.
 
 - **Flow Name:**
   - **Display Name:** Flow Name
-  - **Info:** The name of the flow to run.
-  - **Options:** List of available flow names.
+  - **Info:** Select the flow to run.
+  - **Options:** List of available flows.
   - **Refresh Button:** True
 
 - **Tweaks:**
   - **Display Name:** Tweaks
-  - **Info:** Tweaks to apply to the flow.
+  - **Info:** Modifications to apply to the flow.
 
 **Usage**
 
-To use this component, provide the input value, specify the flow name to run, and optionally provide tweaks to apply to the flow.
+Provide the input value, select the flow, and apply any tweaks.
 
 ---
 ### Runnable Executor
 
-This component executes a runnable.
+This component executes a specified runnable.
 
 **Parameters**
 
 - **Input Key:**
   - **Display Name:** Input Key
-  - **Info:** The key to use for the input.
+  - **Info:** The input key.
 
 - **Inputs:**
   - **Display Name:** Inputs
-  - **Info:** The inputs to pass to the runnable.
+  - **Info:** Inputs for the runnable.
 
 - **Runnable:**
   - **Display Name:** Runnable
@@ -188,45 +190,45 @@ This component executes a runnable.
 
 - **Output Key:**
   - **Display Name:** Output Key
-  - **Info:** The key to use for the output.
+  - **Info:** The output key.
 
 **Usage**
 
-To use this component, specify the input key, provide the inputs to pass to the runnable, select the runnable to execute, and optionally specify the output key.
+Specify the input key, provide inputs, select the runnable, and optionally define the output key.
 
 ---
 ### SQL Executor
 
 This component executes an SQL query.
 
 **Parameters**
 
 - **Database URL:**
   - **Display Name:** Database URL
-  - **Info:** The URL of the database.
+  - **Info:** The database's URL.
 
 - **Include Columns:**
   - **Display Name:** Include Columns
-  - **Info:** Include columns in the result.
+  - **Info:** Whether to include columns in the result.
 
 - **Passthrough:**
   - **Display Name:** Passthrough
-  - **Info:** If an error occurs, return the query instead of raising an exception.
+  - **Info:** Returns the query instead of raising an exception if an error occurs.
 
 - **Add Error:**
   - **Display Name:** Add Error
-  - **Info:** Add the error to the result.
+  - **Info:** Includes the error in the result.
 
 **Usage**
 
-To use this component, provide the SQL query, specify the database URL, and optionally configure include columns, passthrough, and add error settings.
+Provide the SQL query, specify the database URL, and configure settings for columns, error handling, and passthrough.
 
 ---
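The passthrough and add-error semantics described above can be sketched against an in-memory SQLite database. This is an illustrative assumption about the behavior, not the component's real code:

```python
import sqlite3

# Hypothetical sketch: with passthrough enabled, a failing query returns
# the query text (optionally with the error appended) instead of raising.
def execute_sql(database, query, passthrough=False, add_error=False):
    try:
        cursor = database.execute(query)
        return cursor.fetchall()
    except sqlite3.Error as exc:
        if not passthrough:
            raise
        return f"{query} (error: {exc})" if add_error else query

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('Ada')")
```

Here `execute_sql(db, "SELECT * FROM missing", passthrough=True)` returns the query string rather than raising an `OperationalError`.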
 ### SubFlow
 
-This component dynamically generates a component from a flow. The output is a list of records with keys 'result' and 'message'.
+This component dynamically generates a tool from a flow.
 
 **Parameters**
@@ -236,15 +238,15 @@ This component dynamically generates a component from a flow. The output is a li
 
 - **Flow Name:**
   - **Display Name:** Flow Name
-  - **Info:** The name of the flow to run.
-  - **Options:** List of available flow names.
+  - **Info:** Select the flow to run.
+  - **Options:** List of available flows.
   - **Real Time Refresh:** True
   - **Refresh Button:** True
 
 - **Tweaks:**
   - **Display Name:** Tweaks
-  - **Info:** Tweaks to apply to the flow.
+  - **Info:** Modifications to apply to the flow.
 
 **Usage**
 
-To use this component, specify the flow name and provide any necessary tweaks to apply to the flow.
+Select a flow, apply any necessary tweaks, and generate a tool.
@@ -2,49 +2,49 @@ import Admonition from '@theme/Admonition';
 
 # Helpers
 
-### Chat Memory
+### Chat memory
 
-This component retrieves stored chat messages given a specific Session ID.
+This component retrieves stored chat messages based on a specific session ID.
 
-**Params**
+#### Parameters
 
-- **Sender Type:** Choose the sender type from options like "Machine", "User", or "Machine and User".
-- **Sender Name:** (Optional) The name of the sender.
-- **Number of Messages:** Number of messages to retrieve.
-- **Session ID:** The Session ID of the chat history.
-- **Order:** Choose the order of the messages, either "Ascending" or "Descending".
-- **Record Template:** (Optional) Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.
+- **Sender type:** Choose the sender type from options like "Machine", "User", or "Both".
+- **Sender name:** (Optional) The name of the sender.
+- **Number of messages:** Number of messages to retrieve.
+- **Session ID:** The session ID of the chat history.
+- **Order:** Choose the message order, either "Ascending" or "Descending".
+- **Record template:** (Optional) Template to convert a record to text. If left empty, the system dynamically sets it to the record's text key.
 
 ---
-### Combine Text
+### Combine text
 
 This component concatenates two text sources into a single text chunk using a specified delimiter.
 
-**Params**
+#### Parameters
 
-- **First Text:** The first text input to concatenate.
-- **Second Text:** The second text input to concatenate.
-- **Delimiter:** A string used to separate the two text inputs. Defaults to a whitespace.
+- **First text:** The first text input to concatenate.
+- **Second text:** The second text input to concatenate.
+- **Delimiter:** A string used to separate the two text inputs. Defaults to a space.
 
 ---
-### Create Record
+### Create record
 
-This component dynamically creates a Record with a specified number of fields.
+This component dynamically creates a record with a specified number of fields.
 
-**Params**
+#### Parameters
 
-- **Number of Fields:** Number of fields to be added to the record.
-- **Text Key:** Key to be used as text.
+- **Number of fields:** Number of fields to be added to the record.
+- **Text key:** Key used as text.
 
 ---
-### Custom Component
+### Custom component
 
-Use this component as a template to create your own custom component.
+Use this component as a template to create your custom component.
 
-**Params**
+#### Parameters
 
 - **Parameter:** Describe the purpose of this parameter.
@@ -54,74 +54,74 @@ Use this component as a template to create your own custom component.
   </p>
 </Admonition>
 
-Learn more about [Custom Component](http://docs.langflow.org/components/custom).
+Learn more about creating custom components at [Custom Component](http://docs.langflow.org/components/custom).
 
 ---
-### Documents to Records
+### Documents to records
 
-Convert LangChain Documents into Records.
+Convert LangChain documents into records.
 
-**Parameters**
+#### Parameters
 
-- **Documents:** Documents to be converted into Records.
+- **Documents:** Documents to be converted into records.
 
 ---
-### ID Generator
+### ID generator
 
 Generates a unique ID.
 
-**Parameters**
+#### Parameters
 
 - **Value:** Unique ID generated.
 
 ---
-### Message History
+### Message history
 
-Retrieves stored chat messages given a specific Session ID.
+Retrieves stored chat messages based on a specific session ID.
 
-**Parameters**
+#### Parameters
 
-- **Sender Type:** Options for the sender type.
-- **Sender Name:** Sender name.
-- **Number of Messages:** Number of messages to retrieve.
+- **Sender type:** Options for the sender type.
+- **Sender name:** Sender name.
+- **Number of messages:** Number of messages to retrieve.
 - **Session ID:** Session ID of the chat history.
 - **Order:** Order of the messages.
 
 ---
-### Records to Text
+### Records to text
 
-Convert Records into plain text following a specified template.
+Convert records into plain text following a specified template.
 
-**Parameters**
+#### Parameters
 
 - **Records:** The records to convert to text.
-- **Template:** The template to use for formatting the records. It can contain the keys `{text}`, `{data}` or any other key in the Record.
+- **Template:** The template used for formatting the records. It can contain keys like `{text}`, `{data}`, or any other key in the record.
 
 ---
-### Split Text
+### Split text
 
 Split text into chunks of a specified length.
 
-**Parameters**
+#### Parameters
 
 - **Texts:** Texts to split.
-- **Separators:** The characters to split on. Defaults to [" "].
-- **Max Chunk Size:** The maximum length (in number of characters) of each chunk.
-- **Chunk Overlap:** The amount of character overlap between chunks.
+- **Separators:** Characters to split on. Defaults to a space.
+- **Max chunk size:** The maximum length (in characters) of each chunk.
+- **Chunk overlap:** The amount of character overlap between chunks.
 - **Recursive:** Whether to split recursively.
 
 ---
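The interaction of maximum chunk size and chunk overlap can be sketched with fixed-size windowing. The real component also splits on separators; this hypothetical sketch illustrates only the size and overlap parameters:

```python
# Hypothetical sketch of chunking with overlap: each chunk is at most
# max_chunk_size characters, and consecutive chunks share chunk_overlap
# characters.
def split_text(text: str, max_chunk_size: int, chunk_overlap: int = 0) -> list:
    step = max_chunk_size - chunk_overlap
    return [text[i:i + max_chunk_size] for i in range(0, len(text), step)]
```

For example, `split_text("abcdef", 4, 2)` yields `["abcd", "cdef", "ef"]`: each chunk starts two characters before the previous one ended.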
-### Update Record
+### Update record
 
-Update Record with text-based key/value pairs, similar to updating a Python dictionary.
+Update a record with text-based key/value pairs, similar to updating a Python dictionary.
 
-**Parameters**
+#### Parameters
 
 - **Record:** The record to update.
-- **New Data:** The new data to update the record with.
+- **New data:** The new data to update the record with.
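As the description says, the update behaves like updating a Python dictionary. The field names below are illustrative:

```python
# Updating a record with new key/value pairs: existing keys are
# overwritten, new keys are added, untouched keys are kept.
record = {"text": "hello", "source": "draft"}
new_data = {"source": "final", "author": "Ada"}
record.update(new_data)
# record is now {"text": "hello", "source": "final", "author": "Ada"}
```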
@@ -1,43 +1,27 @@
-import Admonition from "@theme/Admonition";
+import Admonition from '@theme/Admonition';
 import ZoomableImage from "/src/theme/ZoomableImage.js";
 
 # Inputs
 
-### Chat Input
+## Chat Input
 
-This component is designed to get user input from the chat.
+This component obtains user input from the chat.
 
-**Params**
+**Parameters**
 
-- **Sender Type:** specifies the sender type. Defaults to _`"User"`_. Options are _`"Machine"`_ and _`"User"`_.
-
-- **Sender Name:** specifies the name of the sender. Defaults to _`"User"`_.
-
-- **Message:** specifies the message text. It is a multiline text input.
-
-- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
+- **Sender Type:** Specifies the sender type. Defaults to `User`. Options are `Machine` and `User`.
+- **Sender Name:** Specifies the name of the sender. Defaults to `User`.
+- **Message:** Specifies the message text. It is a multiline text input.
+- **Session ID:** Specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
 
 <Admonition type="note" title="Note">
   <p>
-    If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data
-    of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and
-    _`Session ID`_.
+    If `As Record` is `true` and the `Message` is a `Record`, the data
+    of the `Record` will be updated with the `Sender`, `Sender Name`, and
+    `Session ID`.
   </p>
 </Admonition>
 
 When you get it from the sidebar, it will look like the image below but that is because some fields are in the advanced section.
 
 <ZoomableImage
   alt="Docusaurus themed image"
   sources={{
     light: "img/chat-input.png",
     dark: "img/chat-input.png",
   }}
   style={{ width: "50%", margin: "20px auto" }}
 />
 
 If you expose all its fields, it will look like the image below.
 
 <ZoomableImage
   alt="Docusaurus themed image"
   sources={{
@@ -47,7 +31,7 @@ If you expose all its fields, it will look like the image below.
   style={{ width: "40%", margin: "20px auto" }}
 />
 
-One key capability of the Chat Input component is how it transforms the Playground into a chat window. This feature is particularly useful for scenarios where user input is required to initiate or influence the flow.
+One significant capability of the Chat Input component is its ability to transform the Playground into a chat window. This feature is particularly valuable for scenarios requiring user input to initiate or influence the flow.
 
 <ZoomableImage
   alt="Docusaurus themed image"
@@ -60,33 +44,13 @@ One key capability of the Chat Input component is how it transforms the Playgrou
 
 ---
 
-### Prompt
+## Prompt
 
-Create a prompt template with dynamic variables. This is a very useful component for structuring prompts and passing dynamic data to a language model.
+This component creates a prompt template with dynamic variables. This is useful for structuring prompts and passing dynamic data to a language model.
 
 **Parameters**
 
-- **Template:** the template for the prompt. This field allows you to create other fields dynamically by using curly brackets `{}`. For example, if you have a template like this: _`"Hello {name}, how are you?"`_, a new field called _`name`_ will be created.
-
-<Admonition type="note" title="Note">
-  <p>
-    Prompt variables can be created with any chosen name inside curly brackets,
-    e.g. `{variable_name}`
-  </p>
-</Admonition>
-
-Here is how it looks when you get it from the sidebar.
-
-<ZoomableImage
-  alt="Docusaurus themed image"
-  sources={{
-    light: "img/prompt.png",
-    dark: "img/prompt.png",
-  }}
-  style={{ width: "50%", margin: "20px auto" }}
-/>
-
-And here when you add a Template with the value _`Hello {name}, how are you?`_.
+- **Template:** The template for the prompt. This field allows you to create other fields dynamically by using curly brackets `{}`. For example, if you have a template like `Hello {name}, how are you?`, a new field called `name` will be created. Prompt variables can be created with any name inside curly brackets, e.g. `{variable_name}`.
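Discovering the `{variable}` names in a template, as the Prompt component does to create its dynamic fields, can be sketched with the standard library's `string.Formatter`. This is an illustrative assumption about the mechanism, not the component's actual code:

```python
from string import Formatter

# Sketch of dynamic-field discovery: Formatter().parse yields each
# `{variable}` name found in the template string.
def prompt_variables(template: str) -> list:
    return [name for _, name, _, _ in Formatter().parse(template) if name]
```

For example, `prompt_variables("Hello {name}, how are you?")` returns `["name"]`, which would become the new input field.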
 <ZoomableImage
   alt="Docusaurus themed image"
@@ -99,39 +63,18 @@ And here when you add a Template with the value _`Hello {name}, how are you?`_.
 
 ---
 
-### Text Input
+## Text Input
 
-This component is designed for simple text input, allowing users to pass textual data to subsequent components in the workflow. It's particularly useful for scenarios where a brief user input is required to initiate or influence the flow.
+The **Text Input** component adds an **Input** field on the Playground. This enables you to define parameters while running and testing your flow.
 
-**Params**
+**Parameters**
 
-- **Value:** Specifies the text input value. This is where the user can input the text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
-- **Record Template:** Specifies how a Record should be converted into Text.
+- **Value:** Specifies the text input value. This is where the user inputs text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
+- **Record Template:** Specifies how a `Record` should be converted into `Text`.
 
-<Admonition type="note" title="Note">
-  <p>
-    The `TextInput` component serves as a straightforward means for setting Text
-    input values in the chat window. It ensures that textual data can be
-    seamlessly passed to subsequent components in the flow.
-  </p>
-</Admonition>
+The **Record Template** field is used to specify how a `Record` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Record` and pass it as text to the next component in the sequence.
 
 It should look like this when dropped directly from the sidebar.
 
 <ZoomableImage
   alt="Docusaurus themed image"
   sources={{
     light: "img/text-input.png",
     dark: "img/text-input.png",
   }}
   style={{ width: "50%", margin: "20px auto" }}
 />
 
-And when you expose all its fields, it will look like the image below.
-
-The **Record Template** field is used to specify how a Record should be converted into Text. This is particularly useful when you want to extract specific information from a Record and pass it as text to the next component in the sequence.
-
-For example, if you have a Record with the following structure:
+For example, if you have a `Record` with the following structure:
 
 ```json
 {
@@ -141,7 +84,9 @@ For example, if you have a Record with the following structure:
 }
 ```
 
-You can use a template like this: _`"Name: {name}, Age: {age}"`_ to convert the Record into a text string like this: _`"Name: John Doe, Age: 30"`_, and if you pass more than one Record, the text will be concatenated with a new line separator.
+A template with `Name: {name}, Age: {age}` will convert the `Record` into a text string of `Name: John Doe, Age: 30`.
+
+If you pass more than one `Record`, the text will be concatenated with a new line separator.
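The template behavior described above matches standard Python string formatting; the second record below is a hypothetical addition to show the newline joining:

```python
# Each `{placeholder}` in the template is filled from the record's keys.
record = {"name": "John Doe", "age": 30}
text = "Name: {name}, Age: {age}".format(**record)
# text == "Name: John Doe, Age: 30"

# Multiple records are joined with a newline separator
# (the second record here is illustrative only).
records = [{"name": "John Doe", "age": 30}, {"name": "Jane Doe", "age": 25}]
joined = "\n".join("Name: {name}, Age: {age}".format(**r) for r in records)
```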
 <ZoomableImage
   alt="Docusaurus themed image"
@@ -152,13 +97,3 @@ You can use a template like this: _`"Name: {name}, Age: {age}"`_ to convert the
   style={{ width: "50%", margin: "20px auto" }}
 />
-
-The Text Input component gives you the possibility to add an Input field on the Playground. This is useful because it allows you to define parameters while running and testing your flow.
-
-<ZoomableImage
-  alt="Docusaurus themed image"
-  sources={{
-    light: "img/interaction-panel-text-input.png",
-    dark: "img/interaction-panel-text-input.png",
-  }}
-  style={{ width: "50%", margin: "20px auto" }}
-/>
@@ -4,125 +4,124 @@ import Admonition from '@theme/Admonition';
 
 <Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
   <p>
-    We appreciate your understanding as we polish our documentation – it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
+    Thanks for your patience as we improve our documentation—it might have some rough edges. Share your feedback or report issues to help us enhance it! 🛠️📝
   </p>
 </Admonition>
 
-Memory is a concept in chat-based applications that allows the system to remember previous interactions. It helps in maintaining the context of the conversation and enables the system to understand new messages in relation to past messages.
+Memory is a concept in chat-based applications that allows the system to remember previous interactions. This capability helps maintain the context of the conversation and enables the system to understand new messages in light of past messages.
 
 ---
 ### MessageHistory
 
-This component is designed to retrieve stored messages based on various filters such as sender type, sender name, session ID, and a specific file path where messages are stored. It allows for a flexible retrieval of chat history, providing insights into past interactions.
+This component retrieves stored messages using various filters such as sender type, sender name, session ID, and the specific file path where messages are stored. It offers flexible retrieval of chat history, providing insights into past interactions.
 
-**Params**
+**Parameters**
 
-- **Sender Type:** (Optional) Specifies the type of the sender. Options are _`"Machine"`_, _`"User"`_, or _`"Machine and User"`_. Filters the messages by the type of the sender.
-
-- **Sender Name:** (Optional) Specifies the name of the sender. Filters the messages by the name of the sender.
-
-- **Session ID:** (Optional) Specifies the session ID of the chat history. Filters the messages belonging to a specific session.
-
-- **Number of Messages:** Specifies the number of messages to retrieve. Defaults to _`5`_. Determines how many recent messages from the chat history to fetch.
+- **sender_type** (optional): Specifies the sender's type. Options include `"Machine"`, `"User"`, or `"Machine and User"`. Filters messages by the sender type.
+- **sender_name** (optional): Specifies the sender's name. Filters messages by the sender's name.
+- **session_id** (optional): Specifies the session ID of the chat history. Filters messages by session.
+- **number_of_messages**: Specifies the number of messages to retrieve. Defaults to `5`. Determines the number of recent messages from the chat history to fetch.
 
 <Admonition type="note" title="Note">
   <p>
-    The component retrieves messages based on the provided criteria, including the specific file path for stored messages. If no specific criteria are provided, it will return the most recent messages up to the specified limit. This component can be used to review past interactions and analyze the flow of conversations.
+    The component retrieves messages based on the provided criteria, including the specific file path for stored messages. If no specific criteria are provided, it returns the most recent messages up to the specified limit. This component can be used to review past interactions and analyze conversation flows.
   </p>
 </Admonition>
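The filtering and limiting described above can be sketched over a list of message dicts. The dict shape here is a hypothetical stand-in for stored messages:

```python
# Hypothetical sketch of MessageHistory retrieval: apply the optional
# sender filters, then keep only the most recent N matches.
def get_messages(messages, sender_type=None, sender_name=None, number_of_messages=5):
    matching = [
        m for m in messages
        if (sender_type is None or m["sender_type"] == sender_type)
        and (sender_name is None or m["sender_name"] == sender_name)
    ]
    return matching[-number_of_messages:]
```

With no filters, this simply returns the last `number_of_messages` entries, mirroring the note above.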
 ### ConversationBufferMemory
 
-The `ConversationBufferMemory` component is a type of memory system that plainly stores the last few inputs and outputs of a conversation.
+The `ConversationBufferMemory` component stores the last few inputs and outputs of a conversation.
 
-**Params**
+**Parameters**
 
-- **input_key:** Used to specify the key under which the user input will be stored in the conversation memory. It allows you to provide the user's input to the chain for processing and generating a response.
-- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model – defaults to `chat_history`.
-- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
-- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
+- **input_key**: Specifies the key under which the user input will be stored in the conversation memory.
+- **memory_key**: Specifies the prompt variable name where the memory will store and retrieve chat messages. Defaults to `chat_history`.
+- **output_key**: Specifies the key under which the generated response will be stored.
+- **return_messages**: Determines whether the history should be returned as a string or as a list of messages. The default is `False`.
 
 ---
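The buffer-memory idea can be sketched in plain Python. This is a minimal hypothetical model of the behavior, not the LangChain class itself:

```python
# Minimal sketch of buffer memory: each exchange is stored as a pair of
# messages, and the history is loaded under memory_key either as one
# string or as the raw message list, depending on return_messages.
class BufferMemory:
    def __init__(self, memory_key="chat_history", return_messages=False):
        self.memory_key = memory_key
        self.return_messages = return_messages
        self.messages = []

    def save_context(self, user_input, output):
        self.messages.append(("Human", user_input))
        self.messages.append(("AI", output))

    def load(self):
        if self.return_messages:
            return {self.memory_key: self.messages}
        text = "\n".join(f"{role}: {msg}" for role, msg in self.messages)
        return {self.memory_key: text}
```

With the default `return_messages=False`, one saved exchange loads as the single string `"Human: hi\nAI: hello"`.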
 ### ConversationBufferWindowMemory
 
-`ConversationBufferWindowMemory` is a variation of the `ConversationBufferMemory` that maintains a list of the recent interactions in a conversation. It only keeps the last K interactions in memory, which can be useful for maintaining a sliding window of the most recent interactions without letting the buffer get too large.
+`ConversationBufferWindowMemory` is a variant of the `ConversationBufferMemory` that keeps only the last K interactions in memory. It's useful for maintaining a sliding window of recent interactions without letting the buffer get too large.
 
-**Params**
+**Parameters**
 
-- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
-- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
-- **k:** Used to specify the number of interactions or messages that should be stored in the conversation buffer. It determines the size of the sliding window that keeps track of the most recent interactions.
-- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
-- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
+- **input_key**: Specifies the keys in the memory object where input messages are stored.
+- **memory_key**: Specifies the prompt variable name for storing and retrieving chat messages. Defaults to `chat_history`.
+- **k**: Specifies the number of interactions or messages to be stored in the conversation buffer.
+- **output_key**: Specifies the key under which the generated response will be stored.
+- **return_messages**: Determines whether the history should be returned as a string or as a list of messages. The default is `False`.
 
 ---
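The sliding-window behavior of `k` can be sketched with a bounded deque. Again, this is a hypothetical model of the idea, not the real class:

```python
from collections import deque

# Sketch of window memory: a deque with maxlen=k keeps only the last k
# (input, output) interactions; older ones fall off automatically.
class WindowMemory:
    def __init__(self, k=3):
        self.buffer = deque(maxlen=k)

    def save_context(self, user_input, output):
        self.buffer.append((user_input, output))

    def load(self):
        return list(self.buffer)
```

After four exchanges with `k=2`, only the last two remain in the buffer.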
### ConversationEntityMemory
|
||||
|
||||
The `ConversationEntityMemory` component incorporates intricate memory structures, specifically a key-value store, for entities referenced in a conversation. This facilitates the storage and retrieval of information related to entities that have been mentioned throughout the conversation.
|
||||
The `ConversationEntityMemory` component uses a key-value store to manage entities mentioned in conversations. This structure enhances the storage and retrieval of information about specific entities.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Entity Store:** Structure that stores information about specific entities mentioned in a conversation.
|
||||
- **LLM:** Language Model to use in the `ConversationEntityMemory`.
|
||||
- **chat_history_key:** Specify a unique identifier for the chat history data associated with a particular entity. This allows for organizing and accessing the chat history data for each entity within the conversation entity memory. Defaults to `history`
|
||||
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
|
||||
- **k:** Refers to the number of entities that can be stored in the memory. It determines the maximum number of entities that can be stored and retrieved from the memory object. Defaults to `10`
|
||||
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
|
||||
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
|
||||
- **entity_store**: A structure that stores information about entities mentioned in a conversation.
|
||||
- **LLM**: Specifies the language model used in the `ConversationEntityMemory`.
|
||||
- **chat_history_key**: A unique identifier for the chat history data associated with a particular entity. This key helps organize and access chat history data for each entity within the memory. Defaults to `history`.
|
||||
- **input_key**: Identifies where input messages are stored in the memory object, allowing for their retrieval and manipulation.
|
||||
- **k**: Specifies the maximum number of entities that can be stored and retrieved from the memory. Defaults to `10`.
|
||||
- **output_key**: Identifies the key under which the generated response is stored, enabling retrieval using this key.
|
||||
- **return_messages**: Controls whether the history is returned as a string or as a list of messages. Defaults to `False`.
|
||||
|
||||
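The entity store idea, a key-value map capped at `k` entities, can be sketched as follows. The class is hypothetical and exists only to illustrate the concept of capped per-entity storage:

```python
class EntityStore:
    """Hypothetical key-value entity store capped at k entities."""

    def __init__(self, k: int = 10):
        self.k = k
        self.entities: dict = {}

    def set(self, entity: str, summary: str) -> None:
        if entity not in self.entities and len(self.entities) >= self.k:
            # evict the oldest entity to stay within the cap
            oldest = next(iter(self.entities))
            del self.entities[oldest]
        self.entities[entity] = summary

    def get(self, entity: str, default: str = "") -> str:
        return self.entities.get(entity, default)

store = EntityStore(k=2)
store.set("Alice", "Works on the data team.")
store.set("Bob", "Leads the API project.")
store.set("Carol", "Joined last week.")  # evicts "Alice"
```

In a real flow, the summary for each entity would be produced and refreshed by the configured LLM rather than written by hand.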
---

### ConversationKGMemory

The `ConversationKGMemory` component uses a knowledge graph to enhance memory capabilities. It extracts entities and knowledge triplets from new messages, using previous messages as context.

**Parameters**

- **LLM**: Specifies the language model used in the `ConversationKGMemory`.
- **input_key**: Identifies where input messages are stored in the memory object, facilitating their retrieval and manipulation.
- **k**: Indicates the number of previous conversation turns stored in memory, allowing the model to utilize information from these turns. Defaults to `10`.
- **memory_key**: Specifies the prompt variable name where the memory stores and retrieves chat messages. Defaults to `chat_history`.
- **output_key**: Identifies the key under which the generated response is stored, enabling retrieval using this key.
- **return_messages**: Controls whether the history is returned as a string or as a list of messages. Defaults to `False`.
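A knowledge triplet is a `(subject, predicate, object)` tuple. The sketch below shows the storage side of the idea; in practice the triplets are extracted from messages by the LLM, which this hypothetical class does not attempt:

```python
from collections import defaultdict

class KnowledgeGraphMemory:
    """Stores (subject, predicate, object) triplets; an illustrative sketch."""

    def __init__(self):
        # subject -> [(predicate, object), ...]
        self.triplets = defaultdict(list)

    def add_triplet(self, subject: str, predicate: str, obj: str) -> None:
        self.triplets[subject].append((predicate, obj))

    def about(self, subject: str) -> list:
        """Return everything known about a subject, e.g. for prompt injection."""
        return [f"{subject} {p} {o}" for p, o in self.triplets[subject]]

kg = KnowledgeGraphMemory()
kg.add_triplet("Sam", "works at", "Acme")
kg.add_triplet("Sam", "lives in", "Lisbon")
```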
---

### ConversationSummaryMemory

The `ConversationSummaryMemory` component creates a running summary of the conversation over time, condensing information as it goes. It is particularly useful for long conversations, where keeping the entire message history in the prompt would consume too many tokens.

**Parameters**

- **LLM**: Specifies the language model used in the `ConversationSummaryMemory`.
- **input_key**: Identifies where input messages are stored in the memory object, facilitating their retrieval and manipulation.
- **memory_key**: Specifies the prompt variable name where the memory stores and retrieves chat messages. Defaults to `chat_history`.
- **output_key**: Identifies the key under which the generated response is stored, enabling retrieval using this key.
- **return_messages**: Controls whether the history is returned as a string or as a list of messages. Defaults to `False`.
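The condense-as-you-go loop can be sketched like this. The `summarize` function below is a stand-in; in the real component, that step is a call to the configured LLM with the previous summary and the new lines:

```python
def summarize(previous_summary: str, new_lines: str) -> str:
    """Stand-in for the LLM call that folds new messages into the summary."""
    return f"{previous_summary} {new_lines}".strip()

class SummaryMemory:
    """Hypothetical sketch of progressive conversation summarization."""

    def __init__(self):
        self.summary = ""

    def save_context(self, user_input: str, ai_output: str) -> None:
        new_lines = f"Human said: {user_input}. AI said: {ai_output}."
        # the stored state stays a single short summary, not the full history
        self.summary = summarize(self.summary, new_lines)

mem = SummaryMemory()
mem.save_context("What is Langflow?", "A visual framework for building flows.")
```

Only the current summary is kept in memory, which is what bounds the token cost for long conversations.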
---

### PostgresChatMessageHistory

The `PostgresChatMessageHistory` component stores and retrieves chat message history in a PostgreSQL database. The connection is established with a connection string that includes the necessary authentication and database information.

**Parameters**

- **connection_string**: Specifies the details needed to connect to the PostgreSQL database, including username, password, host, port, and database name. Defaults to `postgresql://postgres:mypassword@localhost/chat_history`.
- **session_id**: A unique identifier used to link chat message history with a specific session or conversation.
- **table_name**: The name of the PostgreSQL database table where chat message history is stored. Defaults to `message_store`.
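A connection string of this shape can be assembled and inspected with the standard library. The helper below is illustrative, not part of the component; `quote_plus` guards against special characters in the password:

```python
from urllib.parse import quote_plus, urlsplit

def build_connection_string(user: str, password: str, host: str,
                            database: str, port: int = 5432) -> str:
    """Assemble a PostgreSQL connection string (illustrative helper)."""
    return f"postgresql://{user}:{quote_plus(password)}@{host}:{port}/{database}"

conn = build_connection_string("postgres", "mypassword", "localhost", "chat_history")
parts = urlsplit(conn)  # exposes .hostname, .port, .username, .path, ...
```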
---

### VectorRetrieverMemory

The `VectorRetrieverMemory` component retrieves vectors based on queries, enabling vector-based searches and retrievals.

**Parameters**

- **Retriever**: The retriever used to fetch documents.
- **input_key**: Identifies where input messages are stored in the memory object, facilitating their retrieval and manipulation.
- **memory_key**: Specifies the prompt variable name where the memory stores and retrieves chat messages. Defaults to `chat_history`.
- **return_messages**: Controls whether the history is returned as a string or as a list of messages. Defaults to `False`.
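At its core, vector-based retrieval ranks stored documents by similarity to a query embedding. A minimal sketch, assuming toy two-dimensional embeddings in place of a real embedding model:

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, documents, top_n: int = 1):
    """documents: list of (text, vector); returns the top_n most similar texts."""
    ranked = sorted(documents, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_n]]

docs = [("about cats", [1.0, 0.0]), ("about dogs", [0.0, 1.0])]
```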
import Admonition from '@theme/Admonition';

# Large Language Models (LLMs)

<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
Thank you for your patience as we refine our documentation. You might encounter some inconsistencies. Please help us improve by sharing your feedback or reporting any issues! 🛠️📝
</p>
</Admonition>

A Large Language Model (LLM) is a foundational component of Langflow. It provides a uniform interface for interacting with LLMs from various providers, including OpenAI, Cohere, and HuggingFace. Langflow uses LLMs extensively across its chains and agents to generate text from prompts or inputs.

---

### Anthropic

A wrapper for Anthropic's large language models. Learn more at [Anthropic](https://www.anthropic.com).

- **anthropic_api_key:** Authenticates and authorizes access to the Anthropic API.
- **anthropic_api_url:** The URL of the Anthropic API to connect to.
- **temperature:** Adjusts the randomness level in text generation. Set this to a non-negative number.
---

### ChatAnthropic

A wrapper for Anthropic's large language models designed for chat-based interactions. Learn more at [Anthropic](https://www.anthropic.com).

- **anthropic_api_key:** Authenticates and authorizes access to the Anthropic API.
- **anthropic_api_url:** The URL of the Anthropic API to connect to.
- **temperature:** Adjusts the randomness level in text generation. Set this to a non-negative number.
---

### CTransformers

The `CTransformers` component provides access to Transformer models implemented in C/C++ using the [GGML](https://github.com/ggerganov/ggml) library.

<Admonition type="info">
Ensure the `ctransformers` Python package is installed. Learn more about installation, supported models, and usage [here](https://github.com/marella/ctransformers).
</Admonition>

- **config:** Configuration for the Transformer models. See the possible settings at [config](https://github.com/marella/ctransformers#config). Defaults to:

```json
{
  "top_k": 40,
  "top_p": 0.95,
  "temperature": 0.8,
  "repetition_penalty": 1.1,
  "last_n_tokens": 64,
  "seed": -1,
  "max_new_tokens": 256,
  "stop": null,
  "stream": false,
  "reset": true,
  "batch_size": 8,
  "threads": -1,
  "context_length": -1,
  "gpu_layers": 0
}
```
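One common pattern with a defaults table like this is to override only the keys you care about and reject typos. The keys and default values below come from the table above; the merge helper itself is an illustrative sketch, not part of the component:

```python
DEFAULT_CONFIG = {
    "top_k": 40, "top_p": 0.95, "temperature": 0.8, "repetition_penalty": 1.1,
    "last_n_tokens": 64, "seed": -1, "max_new_tokens": 256, "stop": None,
    "stream": False, "reset": True, "batch_size": 8, "threads": -1,
    "context_length": -1, "gpu_layers": 0,
}

def make_config(**overrides) -> dict:
    """Merge user overrides over the defaults, rejecting unknown keys."""
    unknown = set(overrides) - set(DEFAULT_CONFIG)
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return {**DEFAULT_CONFIG, **overrides}

config = make_config(temperature=0.2, max_new_tokens=512)
```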
- **model**: The path to a model file or directory, or the name of a Hugging Face Hub model repository.
- **model_file**: The name of the model file within the repository or directory.
- **model_type**: The type of Transformer model used. For more information, visit [ctransformers](https://github.com/marella/ctransformers).

---

### ChatOpenAI Component

This component interfaces with [OpenAI's](https://openai.com) large language models, supporting a variety of tasks such as chatbots, generative question-answering, and summarization.

- **max_tokens**: The maximum number of tokens to generate for each completion. Set to `-1` to generate as many tokens as possible, given the prompt and the model's context size. Defaults to `256`.
- **model_kwargs**: A dictionary of any additional valid model parameters not covered by the other fields.
- **model_name**: Specifies the OpenAI chat model to use.
- **openai_api_base**: The base URL for the OpenAI API, typically the API endpoint provided by the OpenAI service.
- **openai_api_key**: The API key used to authenticate with the OpenAI API.
- **temperature**: Adjusts the randomness level of text generation. Should be a non-negative number. Defaults to `0.7`.
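To build intuition for the `temperature` parameter, here is the standard temperature-scaled softmax over a few toy logits. This illustrates the general sampling idea, not the provider's exact implementation:

```python
import math

def softmax_with_temperature(logits, temperature: float):
    """Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-greedy
warm = softmax_with_temperature(logits, 2.0)  # closer to uniform
```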

---

### Cohere Component

A wrapper for accessing [Cohere's](https://cohere.com) large language models.

- **cohere_api_key**: The API key needed for Cohere service authentication.
- **max_tokens**: The maximum number of tokens to generate per request. Defaults to `256`.
- **temperature**: Adjusts the randomness level in text generation. Should be a non-negative number. Defaults to `0.75`.

---
### HuggingFaceHub Component

A component facilitating access to models hosted on the [HuggingFace Hub](https://www.huggingface.co/models).

<Admonition type="info">
The HuggingFace Hub is an online platform that hosts over 120k models, 20k datasets, and 50k demo apps, all open-source and publicly available. Discover more at [HuggingFace](http://www.huggingface.co).
</Admonition>

- **huggingfacehub_api_token**: The token required for API authentication.
- **model_kwargs**: Keyword arguments passed to the model.
- **repo_id**: Specifies the model repository. Defaults to `gpt2`.
- **task**: The task to call the model with. Should be a task that returns `generated_text` or `summary_text`.

---
### LlamaCpp Component

This component provides access to `llama.cpp` models, combining high performance and flexibility.

<Admonition type="info">
Make sure the `llama.cpp` Python package is installed. Learn more about installation, supported models, and usage [here](https://github.com/ggerganov/llama.cpp).
</Admonition>

- **echo**: Whether to echo the input prompt. Defaults to `False`.
- **f16_kv**: Whether to use half-precision for the key/value cache. Defaults to `True`.
- **last_n_tokens_size**: The number of tokens to look back at when applying the repeat penalty. Defaults to `64`.
- **logits_all**: Whether to return logits for all tokens or just the last one. Defaults to `False`.
- **logprobs**: The number of log probabilities to return. If set to `None`, no probabilities are returned.
- **lora_base**: The path to the base Llama LoRA model.
- **lora_path**: The path to the Llama LoRA model. If set to `None`, no LoRA model is loaded.
- **max_tokens**: The maximum number of tokens to generate. Defaults to `256`.
- **model_path**: The file path to the Llama model.
- **n_batch**: The number of tokens processed in parallel. Should be between 1 and `n_ctx`. Defaults to `8`.
- **n_ctx**: The token context window size. Defaults to `512`.
- **n_gpu_layers**: The number of layers loaded into GPU memory. Defaults to `None`.
- **n_parts**: The number of parts to split the model into. If `-1`, the number of parts is determined automatically. Defaults to `-1`.
- **n_threads**: The number of threads to use. If `None`, the number of threads is determined automatically.
- **repeat_penalty**: The penalty applied to repeated tokens. Defaults to `1.1`.
- **seed**: The seed for random number generation. If set to `-1`, a random seed is used. Defaults to `-1`.
- **stop**: A list of stop strings that terminate generation when encountered.
- **streaming**: Whether to stream results token by token. Defaults to `True`.
- **suffix**: A suffix appended to the generated text. If `None`, no suffix is appended.
- **tags**: Tags added to the run trace for monitoring.
- **temperature**: The sampling temperature. Defaults to `0.8`.
- **top_k**: The top-k sampling value. Defaults to `40`.
- **top_p**: The cumulative probability threshold for top-p sampling. Defaults to `0.95`.
- **use_mlock**: Forces the system to keep the model in RAM. Defaults to `False`.
- **use_mmap**: Whether to keep the model loaded in RAM. Defaults to `True`.
- **verbose**: Controls output verbosity. When enabled, internal states are printed to aid debugging. Defaults to `False`.
- **vocab_only**: Loads only the vocabulary, without model weights. Defaults to `False`.
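The `top_p` parameter above controls nucleus sampling: keep the smallest set of tokens whose cumulative probability reaches `top_p`, then renormalize. A minimal sketch of that filtering step, with made-up token probabilities:

```python
def top_p_filter(probs: dict, top_p: float = 0.95) -> dict:
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())  # renormalize the surviving tokens
    return {token: p / total for token, p in kept.items()}

filtered = top_p_filter({"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}, top_p=0.9)
```

Low-probability tail tokens ("zebra" here) are cut off before sampling, which reduces the chance of incoherent continuations.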

---

### VertexAI Component

This component integrates with [Google Vertex AI](https://cloud.google.com/vertex-ai) large language models.

<Admonition type="info">
Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP). To use Vertex AI PaLM, you need the [google-cloud-aiplatform](https://pypi.org/project/google-cloud-aiplatform/) Python package installed and credentials configured for your environment.
</Admonition>

- **credentials**: The custom credentials (`google.auth.credentials.Credentials`) used for API interactions.
- **location**: The default location for API calls. Defaults to `us-central1`.
- **max_output_tokens**: Limits the number of output tokens per prompt. Defaults to `128`.
- **model_name**: The name of the Vertex AI model to use. Defaults to `text-bison`.
- **project**: The default Google Cloud Platform project for API calls.
- **request_parallelism**: The level of request parallelism for Vertex AI model interactions. Defaults to `5`.
- **temperature**: Adjusts the randomness level in text generation. Defaults to `0`.
- **top_k**: How many of the most probable tokens the model considers when selecting the next token. Defaults to `40`.
- **top_p**: The cumulative probability threshold for selecting tokens, from most to least probable. Defaults to `0.95`.
- **tuned_model_name**: The name of a tuned model. If provided, it overrides `model_name`.
- **verbose**: Controls output verbosity to assist in debugging. Defaults to `False`.

---
### OpenAI

A wrapper for [OpenAI's](https://openai.com) large language models.

- **max_tokens**: The maximum number of tokens to generate in the completion. Set to `-1` to generate as many tokens as possible, given the prompt and the model's context size. Defaults to `256`.
- **model_kwargs**: A dictionary of any additional valid model parameters not covered by the other fields.
- **model_name**: Specifies the OpenAI model to use.
- **openai_api_base**: The base URL for the OpenAI API, typically the API endpoint provided by the OpenAI service.
- **openai_api_key**: The API key used to authenticate with the OpenAI API.
- **temperature**: Adjusts the randomness level in text generation. Should be a non-negative number. Defaults to `0.7`.
---
### ChatVertexAI

A wrapper for [Google Vertex AI](https://cloud.google.com/vertex-ai) large language models, used for chat-based interactions.

<Admonition type="info">
Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP). To use Vertex AI PaLM, you need the [google-cloud-aiplatform](https://pypi.org/project/google-cloud-aiplatform/) Python package installed and credentials configured for your environment.
</Admonition>

- **credentials**: The custom credentials (`google.auth.credentials.Credentials`) used for API interactions.
- **location**: The default location for API calls. Defaults to `us-central1`.
- **max_output_tokens**: Limits the number of output tokens per prompt. Defaults to `128`.
- **model_name**: The name of the Vertex AI model to use. Defaults to `text-bison`.
- **project**: The default Google Cloud Platform project for API calls.
- **request_parallelism**: The level of request parallelism for Vertex AI model interactions. Defaults to `5`.
- **temperature**: Adjusts the randomness level in text generation. Defaults to `0`.
- **top_k**: How many of the most probable tokens the model considers when selecting the next token. Defaults to `40`.
- **top_p**: The cumulative probability threshold for selecting tokens, from most to least probable. Defaults to `0.95`.
- **tuned_model_name**: The name of a tuned model. If provided, it overrides `model_name`.
- **verbose**: Controls output verbosity to assist in debugging. Defaults to `False`.
---

import Admonition from '@theme/Admonition';

# Outputs

## Chat Output

This component sends a message to the chat.

**Parameters**

- **Sender Type:** Specifies the sender type. Defaults to `"Machine"`. Options are `"Machine"` and `"User"`.
- **Sender Name:** Specifies the sender's name. Defaults to `"AI"`.
- **Session ID:** Specifies the session ID of the chat history. If provided, messages are saved in the Message History.
- **Message:** Specifies the text of the message.

<Admonition type="note" title="Note">
<p>
If `As Record` is `true` and the `Message` is a `Record`, the data in the `Record` is updated with the `Sender`, `Sender Name`, and `Session ID`.
</p>
</Admonition>
|
||||
|
||||
### Text Output
|
||||
## Text Output
|
||||
|
||||
This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it.
|
||||
This component displays text data to the user. It is useful when you want to show text without sending it to the chat.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Value:** Specifies the text data to be displayed. This is where the text data to be displayed is provided. If no value is provided, it defaults to an empty string.
|
||||
- **Value:** Specifies the text data to be displayed. Defaults to an empty string.
|
||||
|
||||
<Admonition type="note" title="Note">
|
||||
<p>
|
||||
The `TextOutput` component serves as a straightforward means for displaying text data. It ensures that textual data can be seamlessly observed in the chat window throughout your flow.
|
||||
</p>
|
||||
</Admonition>
|
||||
|
||||
The `TextOutput` component provides a simple way to display text data. It allows textual data to be visible in the chat window during your interaction flow.
|
||||
|
|
|
|||
|
|
@@ -2,26 +2,24 @@ import Admonition from "@theme/Admonition";
|
|||
|
||||
# Prompts
|
||||
|
||||
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
|
||||
<Admonition type="caution" icon="🚧" title="Zone Under Construction">
|
||||
<p>
|
||||
We appreciate your understanding as we polish our documentation – it may
|
||||
contain some rough edges. Share your feedback or report issues to help us
|
||||
improve! 🛠️📝
|
||||
Thank you for your patience as we refine our documentation. It may
|
||||
still have some areas under development. Please share your feedback or report any issues to help us improve!
|
||||
</p>
|
||||
</Admonition>
|
||||
|
||||
A prompt refers to the input given to a language model. It is constructed from multiple components and can be parametrized using prompt templates. A prompt template is a reproducible way to generate prompts and allow for easy customization through input variables.
|
||||
A prompt is the input provided to a language model. It is constructed from multiple components and can be parameterized using prompt templates. A prompt template offers a reproducible method for generating prompts, enabling easy customization through input variables.
|
||||
|
||||
---
|
||||
|
||||
### PromptTemplate
|
||||
|
||||
The `PromptTemplate` component allows users to create prompts and define variables that provide control over instructing the model. The template can take in a set of variables from the end user and generates the prompt once the conversation is initiated.
|
||||
The `PromptTemplate` component enables users to create prompts and define variables that control how the model is instructed. Users can input a set of variables which the template uses to generate the prompt when a conversation starts.
|
||||
|
||||
<Admonition type="info">
|
||||
Once a variable is defined in the prompt template, it becomes a component
|
||||
input of its own. Check out [Prompt
|
||||
Customization](../administration/prompt-customization) to learn more.
|
||||
After defining a variable in the prompt template, it acts as its own component
|
||||
input. See [Prompt Customization](../administration/prompt-customization) for more details.
|
||||
</Admonition>
|
||||
|
||||
- **template:** Template used to format an individual request.
|
||||
- **template:** The template used to format an individual request.
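A minimal sketch of how a template with variables produces a final prompt, using plain Python string formatting for illustration. The template text and variable names (`topic`, `question`) are hypothetical, not part of any actual flow.

```python
# Hypothetical template with two input variables; formatting fills them in
# to produce the prompt that would be sent to the model.
template = (
    "You are a helpful assistant. Answer the question about {topic}:\n{question}"
)

prompt = template.format(
    topic="astronomy",
    question="What is a nebula?",
)
```

Each variable defined in the template becomes an input of its own, so end users can supply values without editing the template itself.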
|
||||
|
|
|
|||
|
|
@@ -4,21 +4,21 @@ import Admonition from '@theme/Admonition';
|
|||
|
||||
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
|
||||
<p>
|
||||
We appreciate your understanding as we polish our documentation – it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
|
||||
We appreciate your patience as we enhance our documentation. It may have some imperfections. Please share your feedback or report issues to help us improve. 🛠️📝
|
||||
</p>
|
||||
</Admonition>
|
||||
|
||||
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store and does not need to be able to store documents, only to return or retrieve them.
|
||||
A retriever is an interface that returns documents in response to an unstructured query. It's broader than a vector store because it doesn't need to store documents; it only needs to retrieve them.
|
||||
|
||||
---
|
||||
|
||||
### MultiQueryRetriever
|
||||
|
||||
The `MultiQueryRetriever` component automates the process of generating multiple queries, retrieves relevant documents for each query, and combines the results to provide a more extensive and diverse set of potentially relevant documents. This approach enhances the effectiveness of the retrieval process and helps overcome the limitations of traditional distance-based retrieval methods.
|
||||
The `MultiQueryRetriever` automates generating multiple queries, retrieves relevant documents for each query, and aggregates the results. This method improves retrieval effectiveness and addresses the limitations of traditional distance-based methods.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **LLM:** Language Model to use in the `MultiQueryRetriever`.
|
||||
- **Prompt:** Prompt to represent a schema for an LLM.
|
||||
- **Retriever:** The retriever used to fetch documents.
|
||||
- **parser_key:** This parameter is used to specify the key or attribute name of the parsed output that will be used for retrieval. It determines how the results from the language model are split into a list of queries. Defaults to `lines`, which means that the output from the language model will be split into a list of lines of text. This allows the retriever to retrieve relevant documents based on each line of text separately.
|
||||
- **LLM:** Specifies the language model used in the `MultiQueryRetriever`.
|
||||
- **Prompt:** Defines a schema for the LLM.
|
||||
- **Retriever:** Identifies the retriever that fetches documents.
|
||||
- **parser_key:** Specifies the key or attribute name of the parsed output for retrieval. By default, it's set to `lines`, meaning the output from the language model is split into separate lines of text. This allows the retriever to fetch documents relevant to each line of text.
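The `lines` behavior can be sketched in a few lines: the language model's raw output is split into one query per line, and each query is then sent to the underlying retriever. The LLM output below is a hypothetical example.

```python
# Hypothetical raw output from the query-generation LLM: one alternative
# query per line, as produced when parser_key="lines".
llm_output = """What is LangChain?
How does LangChain chain LLM calls together?
Which tools integrate with LangChain?"""

# Split into individual queries, dropping blank lines.
queries = [line.strip() for line in llm_output.splitlines() if line.strip()]
```

Documents retrieved for each query are then combined and de-duplicated into the final result set.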
|
||||
|
|
|
|||
|
|
@@ -4,60 +4,47 @@ import Admonition from "@theme/Admonition";
|
|||
|
||||
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
|
||||
<p>
|
||||
We appreciate your understanding as we polish our documentation – it may
|
||||
contain some rough edges. Share your feedback or report issues to help us
|
||||
improve! 🛠️📝
|
||||
Thank you for your patience as we enhance our documentation. It might
|
||||
currently have some rough edges. Please share your feedback or report any
|
||||
issues to assist us in improving! 🛠️📝
|
||||
</p>
|
||||
</Admonition>
|
||||
|
||||
A text splitter is a tool that divides a document or text into smaller chunks or segments. It is used to break down large texts into more manageable pieces for analysis or processing.
|
||||
A text splitter is a tool that divides a document or text into smaller chunks or segments. This helps make large texts more manageable for analysis or processing.
|
||||
|
||||
---
|
||||
|
||||
### CharacterTextSplitter
|
||||
|
||||
The `CharacterTextSplitter` is used to split a long text into smaller chunks based on a specified character. It splits the text by trying to keep paragraphs, sentences, and words together as long as possible, as these are semantically related pieces of text.
|
||||
The `CharacterTextSplitter` splits a long text into smaller chunks based on a specified character. It aims to keep paragraphs, sentences, and words intact as much as possible since these are semantically related elements of text.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Documents:** Input documents to split.
|
||||
|
||||
- **chunk_overlap:** Determines the number of characters that overlap between consecutive chunks when splitting text. It specifies how much of the previous chunk should be included in the next chunk.
|
||||
|
||||
For example, if the `chunk_overlap` is set to 20 and the `chunk_size` is set to 100, the splitter will create chunks of 100 characters each, but the last 20 characters of each chunk will overlap with the first 20 characters of the next chunk. This allows for a smoother transition between chunks and ensures that no information is lost – defaults to `200`.
|
||||
|
||||
- **chunk_size:** Determines the maximum number of characters in each chunk when splitting a text. It specifies the size or length of each chunk.
|
||||
|
||||
For example, if the chunk_size is set to 100, the splitter will create chunks of 100 characters each. If the text is longer than 100 characters, it will be divided into multiple chunks of equal size, except for the last chunk, which may be smaller if there are remaining characters –defaults to `1000`.
|
||||
|
||||
- **separator:** Specifies the character that will be used to split the text into chunks – defaults to `.`
|
||||
- **Documents:** The input documents to split.
|
||||
- **chunk_overlap:** The number of characters that overlap between consecutive chunks. This setting ensures a smoother transition between chunks and prevents information loss. For example, with a `chunk_overlap` of 20 and a `chunk_size` of 100, each chunk will have the last 20 characters overlap with the next chunk's first 20 characters. The default is `200`.
|
||||
- **chunk_size:** The maximum number of characters in each chunk. If the text exceeds the specified `chunk_size`, it will be divided into multiple chunks of equal size, with the possible exception of the last chunk, which may be smaller if fewer characters remain. The default is `1000`.
|
||||
- **separator:** The character used to split the text into chunks. The default is `.`.
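The `chunk_size`/`chunk_overlap` interaction described above can be sketched with overlapping windows. This is a simplified illustration; the real `CharacterTextSplitter` also splits on the configured separator and tries to keep semantically related text together.

```python
# Simplified sketch of fixed-size chunking with overlap. Each chunk starts
# chunk_size - chunk_overlap characters after the previous one, so the last
# chunk_overlap characters of a chunk reappear at the start of the next.
def chunk_text(text, chunk_size=100, chunk_overlap=20):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

With `chunk_size=4` and `chunk_overlap=2`, the string `"abcdefghij"` yields `["abcd", "cdef", "efgh", "ghij", "ij"]`: each chunk shares its first two characters with the end of the previous chunk.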
|
||||
|
||||
---
|
||||
|
||||
### RecursiveCharacterTextSplitter
|
||||
|
||||
The `RecursiveCharacterTextSplitter` splits the text by trying to keep paragraphs, sentences, and words together as long as possible, similar to the `CharacterTextSplitter`. However, it also recursively splits the text into smaller chunks if the chunk size exceeds a specified threshold.
|
||||
The `RecursiveCharacterTextSplitter` functions similarly to the `CharacterTextSplitter` by trying to keep paragraphs, sentences, and words together. It also recursively splits the text into smaller chunks if the initial chunk size exceeds a specified threshold.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Documents:** Input documents to split.
|
||||
|
||||
- **chunk_overlap:** Determines the number of characters that overlap between consecutive chunks when splitting text. It specifies how much of the previous chunk should be included in the next chunk.
|
||||
|
||||
- **chunk_size:** Determines the maximum number of characters in each chunk when splitting a text. It specifies the size or length of each chunk.
|
||||
|
||||
- **separators:** The `separators` in RecursiveCharacterTextSplitter are the characters used to split the text into chunks. The text splitter tries to create chunks based on splitting on the first character in the list of `separators`. If any chunks are too large, it moves on to the next character in the list and continues splitting. Defaults to ["\n\n", "\n", " ", ""].
|
||||
- **Documents:** The input documents to split.
|
||||
- **chunk_overlap:** The number of characters that overlap between consecutive chunks.
|
||||
- **chunk_size:** The maximum number of characters in each chunk.
|
||||
- **separators:** A list of characters used to split the text into chunks. The splitter first tries to split text using the first character in the `separators` list. If any chunk exceeds the maximum size, it proceeds to the next character in the list and continues splitting. The defaults are ["\n\n", "\n", " ", ""].
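The separator fallback described above can be sketched as a small recursive function: split on the first separator, and re-split any piece that is still too large using the next separator in the list. This is a simplified illustration, not the real `RecursiveCharacterTextSplitter`.

```python
# Simplified sketch of recursive splitting with a separator fallback list.
def recursive_split(text, separators, chunk_size):
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    # An empty-string separator means "split into individual characters".
    pieces = text.split(sep) if sep else list(text)
    chunks = []
    for piece in pieces:
        if len(piece) > chunk_size:
            # Still too large: retry with the next separator in the list.
            chunks.extend(recursive_split(piece, rest, chunk_size))
        else:
            chunks.append(piece)
    return chunks
```

Because paragraph breaks (`"\n\n"`) are tried before line breaks, spaces, and characters, the splitter keeps the largest semantically coherent units that fit within `chunk_size`.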
|
||||
|
||||
### LanguageRecursiveTextSplitter
|
||||
|
||||
The `LanguageRecursiveTextSplitter` is a text splitter that splits the text into smaller chunks based on the (programming) language of the text.
|
||||
The `LanguageRecursiveTextSplitter` divides text into smaller chunks based on the programming language of the text.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Documents:** Input documents to split.
|
||||
|
||||
- **chunk_overlap:** Determines the number of characters that overlap between consecutive chunks when splitting text. It specifies how much of the previous chunk should be included in the next chunk.
|
||||
|
||||
- **chunk_size:** Determines the maximum number of characters in each chunk when splitting a text. It specifies the size or length of each chunk.
|
||||
|
||||
- **separator_type:** The parameter allows the user to split the code with multiple language support. It supports various languages such as Ruby, Python, Solidity, Java, and more. Defaults to `Python`.
|
||||
- **Documents:** The input documents to split.
|
||||
- **chunk_overlap:** The number of characters that overlap between consecutive chunks.
|
||||
- **chunk_size:** The maximum number of characters in each chunk.
|
||||
- **separator_type:** This parameter allows splitting text across multiple programming languages such as Ruby, Python, Solidity, Java, and more. The default is `Python`.
|
||||
|
|
|
|||
|
|
@@ -4,75 +4,68 @@ import Admonition from '@theme/Admonition';
|
|||
|
||||
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
|
||||
<p>
|
||||
We appreciate your understanding as we polish our documentation – it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
|
||||
Thanks for your patience as we refine our documentation. It might have some rough edges currently. Please share your feedback or report issues to help us enhance it! 🛠️📝
|
||||
</p>
|
||||
</Admonition>
|
||||
|
||||
|
||||
### SearchApi
|
||||
|
||||
Real-time search engine results API. Returns structured JSON data that includes answer box, knowledge graph, organic results, and more.
|
||||
SearchApi offers a real-time search engine results API that returns structured JSON data, including answer boxes, knowledge graphs, organic results, and more.
|
||||
|
||||
**Parameters**
|
||||
#### Parameters
|
||||
|
||||
- **Api Key:** A unique identifier for the SearchApi, necessary for authenticating requests to real-time search engines. This key can be retrieved from the [SearchApi dashboard](https://www.searchapi.io/).
|
||||
- **Engine:** Specifies the search engine. For instance: google, google_scholar, bing, youtube, and youtube_transcripts. A full list of supported engines is available in the [documentation](https://www.searchapi.io/docs/google).
|
||||
- **Parameters:** Allows the selection of any parameters recognized by SearchApi, with some being required and others optional.
|
||||
- **Api Key:** A unique identifier required for authentication with real-time search engines, obtainable through the [SearchApi dashboard](https://www.searchapi.io/).
|
||||
- **Engine:** Specifies the search engine used, such as Google, Google Scholar, Bing, YouTube, and YouTube transcripts. Refer to the [documentation](https://www.searchapi.io/docs/google) for a complete list of supported engines.
|
||||
- **Parameters:** Allows the selection of various parameters recognized by SearchApi. Some parameters are mandatory while others are optional.
|
||||
|
||||
**Output**
|
||||
|
||||
- **Document:** The JSON response from the request as a Document.
|
||||
#### Output
|
||||
|
||||
- **Document:** The JSON response from the request.
|
||||
|
||||
### BingSearchRun
|
||||
|
||||
Bing Search is a web search engine owned and operated by Microsoft. It provides search results for various types of content, including web pages, images, videos, and news articles. It uses a combination of algorithms and human editors to deliver search results to users.
|
||||
Bing Search, a web search engine by Microsoft, provides search results for various content types like web pages, images, videos, and news articles. It combines algorithms and human editors to deliver these results.
|
||||
|
||||
**Params**
|
||||
|
||||
- **Api Wrapper:** A BingSearchAPIWrapper component that takes the search URL and a subscription key.
|
||||
#### Parameters
|
||||
|
||||
- **Api Wrapper:** A BingSearchAPIWrapper component that processes the search URL and subscription key.
|
||||
|
||||
### Calculator
|
||||
|
||||
The calculator tool provides mathematical calculation capabilities to an agent by leveraging an LLMMathChain. It allows the agent to perform math when needed to answer questions.
|
||||
The calculator tool leverages an LLMMathChain to provide mathematical calculation capabilities, enabling the agent to perform computations as needed.
|
||||
|
||||
**Params**
|
||||
|
||||
- **LLM:** Language Model to use in the calculation.
|
||||
#### Parameters
|
||||
|
||||
- **LLM:** The Language Model used for calculations.
|
||||
|
||||
### GoogleSearchResults
|
||||
|
||||
A wrapper around Google Search. Useful for when the user needs to answer questions about with more control over the JSON data returned from the API. It returns the full JSON response configured based on the parameters passed to the API wrapper.
|
||||
This is a wrapper around Google Search tailored for users who need precise control over the JSON data returned from the API.
|
||||
|
||||
**Params**
|
||||
|
||||
- **Api Wrapper:** A GoogleSearchAPIWrapper with Google API key and CSE ID
|
||||
#### Parameters
|
||||
|
||||
- **Api Wrapper:** A GoogleSearchAPIWrapper equipped with a Google API key and CSE ID.
|
||||
|
||||
### GoogleSearchRun
|
||||
|
||||
A quick wrapper around Google Search. It executes the search query and returns just the first result snippet from the highest-priority result type.
|
||||
This tool acts as a quick wrapper around Google Search, executing the search query and returning the snippet from the most relevant result.
|
||||
|
||||
**Params**
|
||||
|
||||
- **Api Wrapper:** A GoogleSearchAPIWrapper with Google API key and CSE ID
|
||||
#### Parameters
|
||||
|
||||
- **Api Wrapper:** A GoogleSearchAPIWrapper equipped with a Google API key and CSE ID.
|
||||
|
||||
### GoogleSerperRun
|
||||
|
||||
A low-cost Google Search API.
|
||||
A cost-effective Google Search API.
|
||||
|
||||
**Params**
|
||||
|
||||
- **Api Wrapper:** A GoogleSerperAPIWrapper component with API key and result keys
|
||||
#### Parameters
|
||||
|
||||
- **Api Wrapper:** A GoogleSerperAPIWrapper with the required API key and result keys.
|
||||
|
||||
### InfoSQLDatabaseTool
|
||||
|
||||
Tool for getting metadata about a SQL database. The input to this tool is a comma-separated list of tables, and the output is the schema and sample rows for those tables. Example Input: `“table1`, `table2`, `table3”`.
|
||||
This tool retrieves metadata about SQL databases. It takes a comma-separated list of table names as input and outputs the schema and sample rows for those tables.
|
||||
|
||||
**Params**
|
||||
#### Parameters
|
||||
|
||||
- **Db:** SQLDatabase to query.
|
||||
- **Db:** The SQL database to query.
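Conceptually, the tool parses the comma-separated table list and looks up each table's schema in the database. The sketch below illustrates this with Python's built-in `sqlite3` module and a throwaway in-memory table; the actual tool works through a SQLDatabase wrapper rather than raw SQL.

```python
import sqlite3

# Throwaway in-memory database with one illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def table_schemas(conn, table_list):
    """Return the CREATE TABLE statements for a comma-separated table list."""
    names = [t.strip() for t in table_list.split(",")]
    placeholders = ",".join("?" for _ in names)
    rows = conn.execute(
        f"SELECT sql FROM sqlite_master "
        f"WHERE type='table' AND name IN ({placeholders})",
        names,
    ).fetchall()
    return [row[0] for row in rows]
```

Given the input `"users"`, this returns the `CREATE TABLE` statement for that table, analogous to the schema-plus-sample-rows output the tool provides to an agent.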
|
||||
|
|
|
|||
|
|
@@ -2,95 +2,91 @@ import Admonition from "@theme/Admonition";
|
|||
|
||||
# Utilities
|
||||
|
||||
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
|
||||
<p>
|
||||
We appreciate your understanding as we polish our documentation – it may
|
||||
contain some rough edges. Share your feedback or report issues to help us
|
||||
improve! 🛠️📝
|
||||
</p>
|
||||
<Admonition type="caution" icon="🚧" title="Zone Under Construction">
|
||||
We appreciate your understanding as we polish our documentation—it may
|
||||
contain some rough edges. Share your feedback or report issues to help us
|
||||
improve! 🛠️📝
|
||||
</Admonition>
|
||||
|
||||
Utilities are a set of actions that can be used to perform common tasks in a flow. They are available in the **Utilities** section in the sidebar.
|
||||
|
||||
---
|
||||
|
||||
### GET Request
|
||||
### GET request
|
||||
|
||||
Make a GET request to the given URL.
|
||||
Make a GET request to the specified URL.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **URL:** The URL to make the request to. There can be more than one URL, in which case the request will be made to each URL in order.
|
||||
- **URL:** The URL to make the request to. If there are multiple URLs, the request will be made to each URL in order.
|
||||
- **Headers:** A dictionary of headers to send with the request.
|
||||
|
||||
**Output**
|
||||
|
||||
- **List of Documents:** A list of Documents containing the JSON response from each request.
|
||||
- **List of documents:** A list of documents containing the JSON response from each request.
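The iteration over multiple URLs with shared headers can be sketched with the standard library. The requests below are only constructed, not sent, so the example stays self-contained; the URLs and headers are hypothetical.

```python
from urllib.request import Request

# Hypothetical inputs: two URLs queried in order, with shared headers.
urls = ["https://example.com/a", "https://example.com/b"]
headers = {"Accept": "application/json"}

# One GET request is prepared per URL; each JSON response would then be
# wrapped in a document, yielding the list of documents described above.
requests = [Request(url, headers=headers, method="GET") for url in urls]
```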
|
||||
|
||||
---
|
||||
|
||||
### POST Request
|
||||
### POST request
|
||||
|
||||
Make a POST request to the given URL.
|
||||
Make a POST request to the specified URL.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **URL:** The URL to make the request to.
|
||||
- **Headers:** A dictionary of headers to send with the request.
|
||||
- **Document:** The Document containing a JSON object to send with the request.
|
||||
- **Document:** The document containing a JSON object to send with the request.
|
||||
|
||||
**Output**
|
||||
|
||||
- **Document:** The JSON response from the request as a Document.
|
||||
- **Document:** The JSON response from the request as a document.
|
||||
|
||||
---
|
||||
|
||||
### Update Request
|
||||
### Update request
|
||||
|
||||
Make a PATCH or PUT request to the given URL.
|
||||
Make a PATCH or PUT request to the specified URL.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **URL:** The URL to make the request to.
|
||||
- **Headers:** A dictionary of headers to send with the request.
|
||||
- **Document:** The Document containing a JSON object to send with the request.
|
||||
- **Method:** The HTTP method to use for the request. Can be either `PATCH` or `PUT`.
|
||||
- **Document:** The document containing a JSON object to send with the request.
|
||||
- **Method:** The HTTP method to use for the request, either `PATCH` or `PUT`.
|
||||
|
||||
**Output**
|
||||
|
||||
- **Document:** The JSON response from the request as a Document.
|
||||
- **Document:** The JSON response from the request as a document.
|
||||
|
||||
---
|
||||
|
||||
### JSON Document Builder
|
||||
### JSON document builder
|
||||
|
||||
Build a Document containing a JSON object using a key and another Document page content.
|
||||
Build a document containing a JSON object using a key and the page content of another document.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Key:** The key to use for the JSON object.
|
||||
- **Document:** The Document page to use for the JSON object.
|
||||
- **Document:** The document page to use for the JSON object.
|
||||
|
||||
**Output**
|
||||
|
||||
- **List of Documents:** A list containing the Document with the JSON object.
|
||||
- **List of documents:** A list containing the document with the JSON object.
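The builder's behavior reduces to pairing the key with the other document's page content. The `Document` class below is a minimal stand-in for illustration, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """Minimal stand-in for a document with page content."""
    page_content: str

def build_json_document(key, document):
    # Pair the key with the source document's page content.
    return {key: document.page_content}
```

For example, with key `"text"` and a document whose page content is `"hello"`, the resulting JSON object is `{"text": "hello"}`.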
|
||||
|
||||
## Unique ID Generator
|
||||
## Unique ID generator
|
||||
|
||||
Generates a unique identifier (UUID) each time it is invoked, providing a distinct and reliable identifier suitable for a variety of applications.
|
||||
|
||||
**Params**
|
||||
**Parameters**
|
||||
|
||||
- **Value:** This field displays the generated unique identifier (UUID). The UUID is generated dynamically for each instance of the component, ensuring uniqueness across different uses.
|
||||
- **Value:** This field displays the generated unique identifier (UUID). The UUID is dynamically generated for each instance of the component, ensuring uniqueness across different uses.
|
||||
|
||||
**Output**
|
||||
|
||||
- Returns a unique identifier (UUID) as a string. This UUID is generated using Python's `uuid` module, ensuring that each identifier is unique and can be used as a reliable reference in your application.
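The behavior described above maps directly onto Python's `uuid` module: each call yields a fresh random UUID, returned as a 32-character hexadecimal string.

```python
import uuid

def generate_unique_id():
    # uuid4() generates a random UUID; .hex renders it as a hexadecimal string.
    return uuid.uuid4().hex
```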
|
||||
|
||||
<Admonition type="note" title="Note">
|
||||
<p>
|
||||
The Unique ID Generator is crucial for scenarios requiring distinct identifiers, such as session management, transaction tracking, or any context where different instances or entities must be uniquely identified. The generated UUID is provided as a hexadecimal string, offering a high level of uniqueness and security for identification purposes.
|
||||
</p>
|
||||
The Unique ID Generator is crucial for scenarios requiring distinct identifiers, such as session management, transaction tracking, or any context where different instances or entities must be uniquely identified. The generated UUID is provided as a hexadecimal string, offering a high level of uniqueness and security for identification purposes.
|
||||
</Admonition>
|
||||
|
||||
For additional information and examples, please consult the [Langflow Components Custom Documentation](http://docs.langflow.org/components/custom).
|
||||
|
|
|
|||
|
|
@@ -1,119 +1,77 @@
|
|||
import Admonition from "@theme/Admonition";
|
||||
|
||||
# Vector Stores
|
||||
# Vector Stores
|
||||
|
||||
### Astra DB
|
||||
|
||||
The `Astra DB` is a component for initializing an Astra DB Vector Store from Records. It facilitates the creation of Astra DB-based vector indexes for efficient document storage and retrieval.
|
||||
The `Astra DB` component initializes a vector store from records. It creates Astra DB-based vector indexes to efficiently store and retrieve documents.
|
||||
|
||||
**Params**
|
||||
**Parameters:**
|
||||
|
||||
- **Input:** The input documents or records.
|
||||
|
||||
- **Embedding:** The embedding model used by Astra DB.
|
||||
|
||||
- **Collection Name:** The name of the collection in Astra DB.
|
||||
|
||||
- **Token:** The token for Astra DB.
|
||||
|
||||
- **API Endpoint:** The API endpoint for Astra DB.
|
||||
|
||||
- **Namespace:** The namespace in Astra DB.
|
||||
|
||||
- **Metric:** The metric to use in Astra DB.
|
||||
|
||||
- **Batch Size:** The batch size for Astra DB.
|
||||
|
||||
- **Bulk Insert Batch Concurrency:** The bulk insert batch concurrency for Astra DB.
|
||||
|
||||
- **Bulk Insert Overwrite Concurrency:** The bulk insert overwrite concurrency for Astra DB.
|
||||
|
||||
- **Bulk Delete Concurrency:** The bulk delete concurrency for Astra DB.
|
||||
|
||||
- **Setup Mode:** The setup mode for the vector store.
|
||||
|
||||
- **Pre Delete Collection:** Pre delete collection.
|
||||
|
||||
- **Metadata Indexing Include:** Metadata indexing include.
|
||||
|
||||
- **Metadata Indexing Exclude:** Metadata indexing exclude.
|
||||
|
||||
- **Collection Indexing Policy:** Collection indexing policy.
|
||||
- **Input:** Documents or records for input.
|
||||
- **Embedding:** Embedding model Astra DB uses.
|
||||
- **Collection Name:** Name of the Astra DB collection.
|
||||
- **Token:** Authentication token for Astra DB.
|
||||
- **API Endpoint:** API endpoint for Astra DB.
|
||||
- **Namespace:** Astra DB namespace.
|
||||
- **Metric:** Metric used by Astra DB.
|
||||
- **Batch Size:** Batch size for operations.
|
||||
- **Bulk Insert Batch Concurrency:** Concurrency level for bulk inserts.
|
||||
- **Bulk Insert Overwrite Concurrency:** Concurrency level for overwriting during bulk inserts.
|
||||
- **Bulk Delete Concurrency:** Concurrency level for bulk deletions.
|
||||
- **Setup Mode:** Setup mode for the vector store.
|
||||
- **Pre Delete Collection:** Option to delete the collection before setup.
|
||||
- **Metadata Indexing Include:** Fields to include in metadata indexing.
|
||||
- **Metadata Indexing Exclude:** Fields to exclude from metadata indexing.
|
||||
- **Collection Indexing Policy:** Indexing policy for the collection.
|
||||
|
||||
<Admonition type="note" title="Note">
|
||||
<p>
|
||||
Ensure that the required Astra DB token and API endpoint are properly configured.
|
||||
</p>
|
||||
|
||||
Ensure you configure the necessary Astra DB token and API endpoint before starting.
|
||||
</Admonition>
|
||||
|
||||
---
|
||||
|
||||
### Astra DB Search
|
||||
|
||||
The `Astra DBSearch` is a component for searching an existing Astra DB Vector Store for similar documents. It extends the functionality of the `Astra DB` component to provide efficient document retrieval based on similarity metrics.
|
||||
`Astra DBSearch` searches an existing Astra DB vector store for documents similar to the input. It uses the `Astra DB` component's functionality for efficient retrieval.
|
||||
|
||||
**Params**
|
||||
**Parameters:**
|
||||
|
||||
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
|
||||
|
||||
- **Input Value:** The input value to search for.
|
||||
|
||||
- **Embedding:** The embedding model used by Astra DB.
|
||||
|
||||
- **Collection Name:** The name of the collection in Astra DB.
|
||||
|
||||
- **Token:** The token for Astra DB.
|
||||
|
||||
- **API Endpoint:** The API endpoint for Astra DB.
|
||||
|
||||
- **Namespace:** The namespace in Astra DB.
|
||||
|
||||
- **Metric:** The metric to use in Astra DB.
|
||||
|
||||
- **Batch Size:** The batch size for Astra DB.
|
||||
|
||||
- **Bulk Insert Batch Concurrency:** The bulk insert batch concurrency for Astra DB.
|
||||
|
||||
- **Bulk Insert Overwrite Concurrency:** The bulk insert overwrite concurrency for Astra DB.
|
||||
|
||||
- **Bulk Delete Concurrency:** The bulk delete concurrency for Astra DB.
|
||||
|
||||
- **Setup Mode:** The setup mode for the vector store.
|
||||
|
||||
- **Pre Delete Collection:** Pre delete collection.
|
||||
|
||||
- **Metadata Indexing Include:** Metadata indexing include.
|
||||
|
||||
- **Metadata Indexing Exclude:** Metadata indexing exclude.
|
||||
|
||||
- **Collection Indexing Policy:** Collection indexing policy.
|
||||
- **Search Type:** Type of search, such as Similarity or MMR.
|
||||
- **Input Value:** Value to search for.
|
||||
- **Embedding:** Embedding model Astra DB uses.
|
||||
- **Collection Name:** Name of the Astra DB collection.
|
||||
- **Token:** Authentication token for Astra DB.
|
||||
- **API Endpoint:** API endpoint for Astra DB.
|
||||
- **Namespace:** Astra DB namespace.
|
||||
- **Metric:** Metric used by Astra DB.
|
||||
- **Batch Size:** Batch size for operations.
|
||||
- **Bulk Insert Batch Concurrency:** Concurrency level for bulk inserts.
|
||||
- **Bulk Insert Overwrite Concurrency:** Concurrency level for overwriting during bulk inserts.
|
||||
- **Bulk Delete Concurrency:** Concurrency level for bulk deletions.
|
||||
- **Setup Mode:** Setup mode for the vector store.
|
||||
- **Pre Delete Collection:** Option to delete the collection before setup.
|
||||
- **Metadata Indexing Include:** Fields to include in metadata indexing.
|
||||
- **Metadata Indexing Exclude:** Fields to exclude from metadata indexing.
|
||||
- **Collection Indexing Policy:** Indexing policy for the collection.
|
||||
|
||||
---
|
||||
|
||||
### Chroma

`Chroma` sets up a vector store using Chroma for efficient vector storage and retrieval within language processing workflows.

**Parameters:**

- **Collection Name:** Name of the collection.
- **Persist Directory:** Directory to persist the Vector Store.
- **Server CORS Allow Origins (Optional):** CORS allow origins for the Chroma server.
- **Server Host (Optional):** Host for the Chroma server.
- **Server Port (Optional):** Port for the Chroma server.
- **Server gRPC Port (Optional):** gRPC port for the Chroma server.
- **Server SSL Enabled (Optional):** SSL configuration for the Chroma server.
- **Input:** Input data for creating the Vector Store.
- **Embedding:** Embeddings used for the Vector Store.

For detailed documentation and integration guides, please refer to the [Chroma Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/chroma).
---
### Chroma Search

`ChromaSearch` searches a Chroma collection for documents similar to the input text. It leverages Chroma to ensure efficient document retrieval.

**Parameters:**

- **Input:** Input text for search.
- **Search Type:** Type of search, such as Similarity or MMR.
- **Collection Name:** Name of the Chroma collection.
- **Index Directory:** Directory where the Chroma index is stored.
- **Embedding:** Embedding model used for vectorization (use the same model as the index).
- **Server CORS Allow Origins (Optional):** CORS allow origins for the Chroma server.
- **Server Host (Optional):** Host for the Chroma server.
- **Server Port (Optional):** Port for the Chroma server.
- **Server gRPC Port (Optional):** gRPC port for the Chroma server.
- **Server SSL Enabled (Optional):** SSL configuration for the Chroma server.
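The Similarity search type simply returns the documents closest to the query, while MMR (Maximal Marginal Relevance) trades relevance against diversity among the results, so near-duplicate documents are penalized. A minimal, library-free sketch of the MMR selection loop (the vectors and the `lambda_mult` weighting are illustrative, not the exact algorithm any particular store implements):

```python
def mmr_select(query, candidates, k=2, lambda_mult=0.5):
    """Pick k candidate indices balancing query relevance and diversity."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        # Score = relevance to query minus similarity to already-picked docs.
        best = max(
            remaining,
            key=lambda i: lambda_mult * dot(query, candidates[i])
            - (1 - lambda_mult)
            * max((dot(candidates[i], candidates[j]) for j in selected),
                  default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lambda_mult=1.0` the loop degenerates to plain similarity ranking; lower values push later picks away from documents already selected.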
---
### FAISS

The `FAISS` component manages document ingestion into a FAISS Vector Store, optimizing document indexing and retrieval.

**Parameters:**

- **Embedding:** Model used for vectorizing inputs.
- **Input:** Documents to ingest.
- **Folder Path:** Save path for the FAISS index, relative to Langflow.
- **Index Name:** Index identifier.

For more details, see the [FAISS Component Documentation](https://faiss.ai/index.html).
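Conceptually, an index like this maps each ingested document to an embedding vector and answers queries by distance. A brute-force stand-in in plain Python shows the idea (FAISS itself uses optimized, often approximate, index structures, so this is a conceptual sketch only):

```python
import math

class TinyIndex:
    """Minimal in-memory vector index: add (doc, vector) pairs, query by L2 distance."""

    def __init__(self):
        self.entries = []  # list of (document, vector) pairs

    def add(self, doc, vector):
        self.entries.append((doc, vector))

    def search(self, query, k=1):
        def l2(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        # Rank every stored vector by distance to the query vector.
        ranked = sorted(self.entries, key=lambda e: l2(e[1], query))
        return [doc for doc, _ in ranked[:k]]
```

The Folder Path and Index Name parameters simply tell Langflow where to persist and reload a structure playing this role.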
---
### FAISS Search

`FAISSSearch` searches a FAISS Vector Store for documents similar to a given input, using similarity metrics for efficient retrieval.

**Parameters:**

- **Embedding:** Model used in the FAISS Vector Store.
- **Folder Path:** Path to load the FAISS index from, relative to Langflow.
- **Input:** Search query.
- **Index Name:** Index identifier.

---
### MongoDB Atlas

`MongoDBAtlas` builds a MongoDB Atlas-based vector store from records, streamlining the storage and retrieval of documents.

**Parameters:**

- **Embedding:** Model used by MongoDB Atlas.
- **Input:** Documents or records.
- **Collection Name:** Collection identifier in MongoDB Atlas.
- **Database Name:** Database identifier.
- **Index Name:** Index identifier.
- **MongoDB Atlas Cluster URI:** Cluster URI.
- **Search Kwargs:** Additional search parameters.

<Admonition type="note" title="Note">
  Ensure pymongo is installed to use the MongoDB Atlas Vector Store.
</Admonition>

---
### MongoDB Atlas Search

`MongoDBAtlasSearch` leverages the MongoDBAtlas component to search for documents based on similarity metrics.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input:** Search query.
- **Embedding:** Model used in the Vector Store.
- **Collection Name:** Collection identifier.
- **Database Name:** Database identifier.
- **Index Name:** Index identifier.
- **MongoDB Atlas Cluster URI:** Cluster URI.
- **Search Kwargs:** Additional search parameters.

---
### PGVector

`PGVector` integrates a Vector Store within a PostgreSQL database, allowing efficient storage and retrieval of vectors.

**Parameters:**

- **Input:** Value for the Vector Store.
- **Embedding:** Model used.
- **PostgreSQL Server Connection String:** Server URL.
- **Table:** Table name in the PostgreSQL database.

For more details, see the [PGVector Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/pgvector).

<Admonition type="note" title="Note">
  Ensure the PostgreSQL server is accessible and configured correctly.
</Admonition>

---
### PGVector Search

`PGVectorSearch` extends `PGVector` to search for documents based on similarity metrics.

**Parameters:**

- **Input:** Search query.
- **Embedding:** Model used.
- **PostgreSQL Server Connection String:** Server URL.
- **Table:** Table name.
- **Search Type:** Type of search, such as "Similarity" or "MMR".

---
### Pinecone

`Pinecone` constructs a Pinecone wrapper from records, setting up Pinecone-based vector indexes for document storage and retrieval.

**Parameters:**

- **Input:** Documents or records.
- **Embedding:** Model used.
- **Index Name:** Index identifier.
- **Namespace:** Namespace used.
- **Pinecone API Key:** API key.
- **Pinecone Environment:** Environment settings.
- **Search Kwargs:** Additional search parameters.
- **Pool Threads:** Number of threads.

<Admonition type="note" title="Note">
  Ensure the Pinecone API key and environment are correctly configured.
</Admonition>

---
### Pinecone Search

`PineconeSearch` searches a Pinecone Vector Store for documents similar to the input, using similarity metrics.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Embedding:** Model used.
- **Index Name:** Index identifier.
- **Namespace:** Namespace used.
- **Pinecone API Key:** API key.
- **Pinecone Environment:** Environment settings.
- **Search Kwargs:** Additional search parameters.
- **Pool Threads:** Number of threads.

---
### Qdrant

`Qdrant` allows efficient similarity searches and retrieval operations, using a list of texts to construct a Qdrant wrapper.

**Parameters:**

- **Input:** Documents or records.
- **Embedding:** Model used.
- **API Key:** Qdrant API key.
- **Collection Name:** Collection identifier.
- **Advanced Settings:** Includes content payload key, distance function, gRPC port, host, HTTPS, location, metadata payload key, path, port, prefer gRPC, prefix, search kwargs, timeout, and URL.
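The distance function in the advanced settings decides how "close" two embeddings are; cosine similarity, Euclidean distance, and dot product are the usual choices. A plain-Python sketch of the three (illustrative only, not Qdrant's implementation):

```python
import math

def dot(a, b):
    """Dot product: large when vectors point the same way and are long."""
    return sum(x * y for x, y in zip(a, b))

def euclidean(a, b):
    """Straight-line (L2) distance: 0 means identical vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Angle-based similarity in [-1, 1]; ignores vector magnitude."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
```

Which choice is right depends on the embedding model: cosine is the common default for normalized text embeddings, while dot product matters when magnitudes are meaningful.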
---
### Qdrant Search

`QdrantSearch` extends `Qdrant` to search for documents similar to the input, based on similarity metrics.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Embedding:** Model used.
- **API Key:** Qdrant API key.
- **Collection Name:** Collection identifier.
- **Advanced Settings:** Includes content payload key, distance function, gRPC port, host, HTTPS, location, metadata payload key, path, port, prefer gRPC, prefix, search kwargs, timeout, and URL.

---
### Redis

`Redis` manages a Vector Store in a Redis database, supporting efficient vector storage and retrieval.

**Parameters:**

- **Index Name:** Name of the index in Redis (default: your_index).
- **Input:** Data for building the Redis Vector Store.
- **Embedding:** Model used.
- **Schema:** Optional schema file (.yaml) defining the document structure.
- **Redis Server Connection String:** Server URL.
- **Redis Index:** Optional index name.

For detailed documentation, refer to the [Redis Documentation](https://python.langchain.com/docs/integrations/vectorstores/redis).

<Admonition type="note" title="Note">
  Ensure the Redis server connection URL and index name are configured correctly. If no documents are provided, a schema must be provided.
</Admonition>

---
### Redis Search

`RedisSearch` searches a Redis Vector Store for documents similar to the input.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Index Name:** Name of the index in Redis (default: your_index).
- **Embedding:** Model used.
- **Schema:** Optional schema file (.yaml) defining the document structure.
- **Redis Server Connection String:** Server URL.
- **Redis Index:** Optional index name.

---
### Supabase

`Supabase` initializes a Supabase Vector Store from texts and embeddings, setting up an environment for efficient document retrieval.

**Parameters:**

- **Input:** Documents or records.
- **Embedding:** Model used.
- **Query Name:** Optional query name.
- **Search Kwargs:** Advanced search parameters.
- **Supabase Service Key:** Service key.
- **Supabase URL:** Instance URL.
- **Table Name:** Optional table name.

<Admonition type="note" title="Note">
  Ensure the Supabase service key, URL, and table name are properly configured.
</Admonition>

---
### Supabase Search

`SupabaseSearch` searches a Supabase Vector Store for documents similar to the input.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Embedding:** Model used.
- **Query Name:** Optional query name.
- **Search Kwargs:** Advanced search parameters.
- **Supabase Service Key:** Service key.
- **Supabase URL:** Instance URL.
- **Table Name:** Optional table name.

---
### Vectara

`Vectara` sets up a Vectara Vector Store from files or upserted data, optimizing document retrieval.

**Parameters:**

- **Vectara Customer ID:** Customer ID.
- **Vectara Corpus ID:** Corpus ID.
- **Vectara API Key:** API key.
- **Files Url:** Optional URLs for file initialization.
- **Input:** Optional data for corpus upsert.

For more information, consult the [Vectara Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/vectara).

<Admonition type="note" title="Note">
  If inputs are provided, they are upserted to the corpus. If files_url is provided, Vectara processes the files from the URLs.
</Admonition>

---
### Vectara Search

`VectaraSearch` searches a Vectara Vector Store for documents based on the provided input.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Vectara Customer ID:** Customer ID.
- **Vectara Corpus ID:** Corpus ID.
- **Vectara API Key:** API key.
- **Files Url:** Optional URLs for file initialization.

---
### Weaviate

`Weaviate` facilitates a Weaviate Vector Store setup, optimizing text and document indexing and retrieval.

**Parameters:**

- **Weaviate URL:** URL of the Weaviate instance (default: http://localhost:8080).
- **Search By Text:** Whether to search by text (default: False).
- **API Key:** Optional API key for authentication.
- **Index Name:** Optional index name.
- **Text Key:** Key used to extract text from documents (default: "text").
- **Input:** Document or record.
- **Embedding:** Model used.
- **Attributes:** Optional additional attributes to consider during indexing.

For more details, see the [Weaviate Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/weaviate).

<Admonition type="note" title="Note">
  Ensure a Weaviate instance is running and accessible at the specified URL, and provide the correct API key if authentication is required. Adjust the index name, text key, and attributes to your dataset, and confirm the embeddings are compatible with Weaviate's requirements.
</Admonition>

---
### Weaviate Search

`WeaviateSearch` searches a Weaviate Vector Store for documents similar to the input.

**Parameters:**

- **Search Type:** Type of search, such as "Similarity" or "MMR".
- **Input Value:** Search query.
- **Weaviate URL:** URL of the Weaviate instance (default: http://localhost:8080).
- **Search By Text:** Whether to search by text (default: False).
- **API Key:** Optional API key for authentication.
- **Index Name:** Optional index name.
- **Text Key:** Key used to extract text from documents (default: "text").
- **Embedding:** Model used.
- **Attributes:** Optional additional attributes to consider during indexing.

---