feat(docs): add "ZONE UNDER CONSTRUCTION" message to components (#665)

Gabriel Luiz Freitas Almeida 2023-07-25 10:02:41 -03:00 committed by GitHub
commit f5fd9fe22a
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
15 changed files with 470 additions and 9 deletions

View file

@@ -1,5 +1,14 @@
import Admonition from '@theme/Admonition';
# Agents
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
Agents are components that use reasoning to make decisions and take actions, designed to autonomously perform tasks or provide services with some degree of “freedom” (or agency). They combine the power of LLM chaining processes with access to external tools such as APIs to interact with applications and accomplish tasks.
---

View file

@@ -1,9 +1,16 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from '@theme/Admonition';
# Chains
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
Chains, in the context of language models, refer to a series of calls made to a language model. They allow the output of one call to be used as the input to another. Different types of chains allow for different levels of complexity. Chains are useful for creating pipelines and executing specific scenarios.
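The pipeline idea, with the output of one call feeding the next, can be sketched in plain Python; the `llm` function below is a hypothetical stand-in for a real model call:

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real language model call.
    return f"summary of ({prompt})"

def chain(steps, text):
    # Each step builds a prompt from the previous output and calls the model.
    for template in steps:
        text = llm(template.format(input=text))
    return text

result = chain(["Summarize: {input}", "Translate to French: {input}"], "a long article")
```

Each step's prompt template is filled with the previous step's output, which is exactly the chaining behavior the components in this section provide.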
---

View file

@@ -1,5 +1,13 @@
import Admonition from '@theme/Admonition';
# Embeddings
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
Embeddings are vector representations of text that capture the semantic meaning of the text. They are created using text embedding models and allow us to think about the text in a vector space, enabling us to perform tasks like semantic search, where we look for pieces of text that are most similar in the vector space.
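A minimal sketch of vector-space similarity, using toy 3-dimensional vectors as stand-ins for real embeddings (which have hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity: the angle between two vectors, ignoring magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; a real embedding model would produce these vectors.
docs = {
    "cat": [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.15, 0.05],
    "car": [0.0, 0.2, 0.9],
}
query = [0.88, 0.12, 0.02]  # hypothetical embedding of "feline"
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

Semantic search then reduces to finding the stored vectors closest to the query vector, as `best` does here.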
---

View file

@@ -1,2 +1,198 @@
import Admonition from '@theme/Admonition';
# LLMs
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
LLM stands for Large Language Model. LLMs are a core component of Langflow, which provides a standard interface for interacting with different LLMs from various providers such as OpenAI, Cohere, and HuggingFace. LLMs are used widely throughout Langflow, including in chains and agents, and can be used to generate text based on a given prompt (or input).
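Most components below expose a `temperature` parameter, which rescales the model's token scores before they are turned into sampling probabilities. A minimal sketch of that effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (near-greedy sampling);
    # higher temperature flattens it (more random sampling).
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # toy token scores
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
```

With `temperature=0.2` nearly all probability mass lands on the top token; with `temperature=2.0` the distribution is much flatter, which is why higher values produce more varied text.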
---
### Anthropic
Wrapper around Anthropic's large language models. Find out more at [Anthropic](https://www.anthropic.com).
- **anthropic_api_key:** Used to authenticate and authorize access to the Anthropic API.
- **anthropic_api_url:** Specifies the URL of the Anthropic API to connect to.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value.
---
### ChatAnthropic
Wrapper around Anthropic's large language model used for chat-based interactions. Find out more at [Anthropic](https://www.anthropic.com).
- **anthropic_api_key:** Used to authenticate and authorize access to the Anthropic API.
- **anthropic_api_url:** Specifies the URL of the Anthropic API to connect to.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value.
---
### CTransformers
The `CTransformers` component provides access to the Transformer models implemented in C/C++ using the [GGML](https://github.com/ggerganov/ggml) library.
:::info
Make sure to have the `ctransformers` python package installed. Learn more about installation, supported models, and usage [here](https://github.com/marella/ctransformers).
:::
**config:** Configuration for the Transformer models. Check out [config](https://github.com/marella/ctransformers#config). Defaults to:
```json
{
"top_k": 40,
"top_p": 0.95,
"temperature": 0.8,
"repetition_penalty": 1.1,
"last_n_tokens": 64,
"seed": -1,
"max_new_tokens": 256,
"stop": null,
"stream": false,
"reset": true,
"batch_size": 8,
"threads": -1,
"context_length": -1,
"gpu_layers": 0
}
```
**model:** The path to a model file or directory or the name of a Hugging Face Hub model repo.
**model_file:** The name of the model file in the repo or directory.
**model_type:** Transformer model to be used. Learn more [here](https://github.com/marella/ctransformers).
---
### ChatOpenAI
Wrapper around [OpenAI's](https://openai.com) chat large language models. This component supports some of the LLMs (Large Language Models) offered by OpenAI and is used for tasks such as chatbots, Generative Question-Answering (GQA), and summarization.
- **max_tokens:** The maximum number of tokens to generate in the completion. `-1` returns as many tokens as possible, given the prompt and the model's maximal context size. Defaults to `256`.
- **model_kwargs:** Holds any model parameters valid for creating non-specified calls.
- **model_name:** Defines the OpenAI chat model to be used.
- **openai_api_base:** Used to specify the base URL for the OpenAI API. It is typically set to the API endpoint provided by the OpenAI service.
- **openai_api_key:** Key used to authenticate and access the OpenAI API.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value. Defaults to `0.7`.
---
### Cohere
Wrapper around [Cohere's](https://cohere.com) large language models.
- **cohere_api_key:** Holds the API key required to authenticate with the Cohere service.
- **max_tokens:** Maximum number of tokens to predict per generation. Defaults to `256`.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value. Defaults to `0.75`.
---
### HuggingFaceHub
Wrapper around [HuggingFace](https://www.huggingface.co/models) models.
:::info
The HuggingFace Hub is an online platform that hosts over 120k models, 20k datasets, and 50k demo apps, all of which are open-source and publicly available. Discover more at [HuggingFace](http://www.huggingface.co).
:::
- **huggingfacehub_api_token:** Token needed to authenticate the API.
- **model_kwargs:** Keyword arguments to pass to the model.
- **repo_id:** Model name to use. Defaults to `gpt2`.
- **task:** Task to call the model with. Should be a task that returns `generated_text` or `summary_text`.
---
### LlamaCpp
The `LlamaCpp` component provides access to the `llama.cpp` models.
:::info
Make sure to have the `llama.cpp` python package installed. Learn more about installation, supported models, and usage [here](https://github.com/ggerganov/llama.cpp).
:::
- **echo:** Whether to echo the prompt. Defaults to `False`.
- **f16_kv:** Use half-precision for the key/value cache. Defaults to `True`.
- **last_n_tokens_size:** The number of tokens to look back at when applying the repeat_penalty. Defaults to `64`.
- **logits_all:** Return logits for all tokens, not just the last token. Defaults to `False`.
- **logprobs:** The number of logprobs to return. If None, no logprobs are returned.
- **lora_base:** The path to the Llama LoRA base model.
- **lora_path:** The path to the Llama LoRA. If None, no LoRA is loaded.
- **max_tokens:** The maximum number of tokens to generate. Defaults to `256`.
- **model_path:** The path to the Llama model file.
- **n_batch:** Number of tokens to process in parallel. Should be a number between 1 and n_ctx. Defaults to `8`.
- **n_ctx:** Token context window. Defaults to `512`.
- **n_gpu_layers:** Number of layers to be loaded into GPU memory. Defaults to `None`.
- **n_parts:** Number of parts to split the model into. If -1, the number of parts is automatically determined. Defaults to `-1`.
- **n_threads:** Number of threads to use. If None, the number of threads is automatically determined.
- **repeat_penalty:** The penalty to apply to repeated tokens. Defaults to `1.1`.
- **seed:** Seed. If -1, a random seed is used. Defaults to `-1`.
- **stop:** A list of strings to stop generation when encountered.
- **streaming:** Whether to stream the results, token by token. Defaults to `True`.
- **suffix:** A suffix to append to the generated text. If None, no suffix is appended.
- **tags:** Tags to add to the run trace.
- **temperature:** The temperature to use for sampling. Defaults to `0.8`.
- **top_k:** The top-k value to use for sampling. Defaults to `40`.
- **top_p:** The top-p value to use for sampling. Defaults to `0.95`.
- **use_mlock:** Force the system to keep the model in RAM. Defaults to `False`.
- **use_mmap:** Whether to keep the model loaded in RAM. Defaults to `True`.
- **verbose:** This parameter is used to control the level of detail in the output of the chain. When set to True, it will print out some internal states of the chain while it is being run, which can help debug and understand the chain's behavior. If set to False, it will suppress the verbose output. Defaults to `False`.
- **vocab_only:** Only load the vocabulary, no weights. Defaults to `False`.
---
### OpenAI
Wrapper around [OpenAI's](https://openai.com) large language models.
- **max_tokens:** The maximum number of tokens to generate in the completion. `-1` returns as many tokens as possible, given the prompt and the model's maximal context size. Defaults to `256`.
- **model_kwargs:** Holds any model parameters valid for creating non-specified calls.
- **model_name:** Defines the OpenAI model to be used.
- **openai_api_base:** Used to specify the base URL for the OpenAI API. It is typically set to the API endpoint provided by the OpenAI service.
- **openai_api_key:** Key used to authenticate and access the OpenAI API.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value. Defaults to `0.7`.
---
### VertexAI
Wrapper around [Google Vertex AI](https://cloud.google.com/vertex-ai) large language models.
:::info
Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP). It provides access, management, and development of applications and services through global data centers. To use Vertex AI PaLM, you need to have the [google-cloud-aiplatform](https://pypi.org/project/google-cloud-aiplatform/) Python package installed and credentials configured for your environment.
:::
- **credentials:** The default custom credentials (google.auth.credentials.Credentials) to use.
- **location:** The default location to use when making API calls. Defaults to `us-central1`.
- **max_output_tokens:** Token limit that determines the maximum amount of text output from one prompt. Defaults to `128`.
- **model_name:** The name of the Vertex AI large language model. Defaults to `text-bison`.
- **project:** The default GCP project to use when making Vertex API calls.
- **request_parallelism:** The amount of parallelism allowed for requests issued to Vertex AI models. Defaults to `5`.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value. Defaults to `0`.
- **top_k:** Controls how the model selects tokens for output: the next token is selected from the `top_k` most probable tokens. Defaults to `40`.
- **top_p:** Tokens are selected from most probable to least until the sum of their probabilities reaches the `top_p` value. Defaults to `0.95`.
- **tuned_model_name:** The name of a tuned model. If provided, model_name is ignored.
- **verbose:** This parameter is used to control the level of detail in the output of the chain. When set to True, it will print out some internal states of the chain while it is being run, which can help debug and understand the chain's behavior. If set to False, it will suppress the verbose output. Defaults to `False`.

View file

@@ -1,2 +1,10 @@
import Admonition from '@theme/Admonition';
# Loaders
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>

View file

@@ -1,2 +1,108 @@
import Admonition from '@theme/Admonition';
# Memories
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
Memory is a concept in chat-based applications that allows the system to remember previous interactions. It helps in maintaining the context of the conversation and enables the system to understand new messages in relation to past messages.
---
### ConversationBufferMemory
The `ConversationBufferMemory` component is a type of memory system that plainly stores the last few inputs and outputs of a conversation.
**Params**
- **input_key:** Used to specify the key under which the user input will be stored in the conversation memory. It allows you to provide the user's input to the chain for processing and generating a response.
- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
---
### ConversationBufferWindowMemory
`ConversationBufferWindowMemory` is a variation of the `ConversationBufferMemory` that maintains a list of the recent interactions in a conversation. It only keeps the last K interactions in memory, which can be useful for maintaining a sliding window of the most recent interactions without letting the buffer get too large.
**Params**
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
- **k:** Used to specify the number of interactions or messages that should be stored in the conversation buffer. It determines the size of the sliding window that keeps track of the most recent interactions.
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
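The sliding-window behavior of keeping only the last K interactions can be sketched in a few lines of Python. This is a simplified illustration, not the actual implementation:

```python
from collections import deque

class BufferWindowMemory:
    """Keeps only the last k (input, output) interaction pairs."""

    def __init__(self, k: int):
        # deque with maxlen silently discards the oldest pair when full.
        self.buffer = deque(maxlen=k)

    def save_context(self, user_input: str, output: str):
        self.buffer.append((user_input, output))

    def load_memory(self) -> str:
        # String form of the history, as used when return_messages is False.
        return "\n".join(f"Human: {i}\nAI: {o}" for i, o in self.buffer)

memory = BufferWindowMemory(k=2)
for turn in ["hi", "how are you?", "tell me a joke"]:
    memory.save_context(turn, f"reply to {turn}")
```

After three turns with `k=2`, the first interaction has been dropped, so the prompt never grows beyond the window.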
---
### ConversationEntityMemory
The `ConversationEntityMemory` component incorporates intricate memory structures, specifically a key-value store, for entities referenced in a conversation. This facilitates the storage and retrieval of information related to entities that have been mentioned throughout the conversation.
**Params**
- **Entity Store:** Structure that stores information about specific entities mentioned in a conversation.
- **LLM:** Language Model to use in the `ConversationEntityMemory`.
- **chat_history_key:** Specifies a unique identifier for the chat history data associated with a particular entity. This allows for organizing and accessing the chat history data for each entity within the conversation entity memory. Defaults to `history`.
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
- **k:** Refers to the number of entities that can be stored in the memory. It determines the maximum number of entities that can be stored and retrieved from the memory object. Defaults to `10`.
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
---
### ConversationKGMemory
`ConversationKGMemory` is a type of memory that uses a knowledge graph to recreate memory. It allows the extraction of entities and knowledge triplets from a new message, using previous messages as context.
**Params**
- **LLM:** Language Model to use in the `ConversationKGMemory`.
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
- **k:** Represents the number of previous conversation turns that will be stored in the memory. Setting `k` to 2, for example, means the memory will retain the previous 2 conversation turns, allowing the model to access and utilize the information from those turns during the conversation. Defaults to `10`.
- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
---
### ConversationSummaryMemory
The `ConversationSummaryMemory` is a memory component that creates a summary of the conversation over time. It condenses information from the conversation and stores the current summary in memory. It is particularly useful for longer conversations where keeping the entire message history in the prompt would take up too many tokens.
**Params**
- **LLM:** Language Model to use in the `ConversationSummaryMemory`.
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
- **output_key:** Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. The default is `False`.
---
### PostgresChatMessageHistory
The `PostgresChatMessageHistory` is a memory component that allows for the storage and retrieval of chat message history using a PostgreSQL database. The connection to the PostgreSQL database is established using a connection string, which includes the necessary authentication and database information.
**Params**
- **connection_string:** Refers to a string that contains the necessary information to establish a connection to a PostgreSQL database. The `connection_string` typically includes details such as the username, password, host, port, and database name required to connect to the PostgreSQL database. Defaults to `postgresql://postgres:mypassword@localhost/chat_history`.
- **session_id:** It is a unique identifier that is used to associate chat message history with a specific session or conversation.
- **table_name:** Refers to the name of the table in the PostgreSQL database where the chat message history will be stored. Defaults to `message_store`.
---
### VectorRetrieverMemory
The `VectorRetrieverMemory` is a memory component that allows for the retrieval of vectors based on a given query. It is used to perform vector-based searches and retrievals.
**Params**
- **Retriever:** The retriever used to fetch documents.
- **input_key:** Used to specify the keys in the memory object where the input messages should be stored. It allows for the retrieval and manipulation of input messages.
- **memory_key:** Specifies the prompt variable name where the memory will store and retrieve the chat messages. It allows for the preservation of the conversation history throughout the interaction with the language model. Defaults to `chat_history`.
- **return_messages:** Determines whether the history should be returned as a string or as a list of messages. If `return_messages` is set to True, the history will be returned as a list of messages. If `return_messages` is set to False or not specified, the history will be returned as a string. Defaults to `False`.

View file

@@ -1,5 +1,13 @@
import Admonition from '@theme/Admonition';
# Prompts
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
A prompt refers to the input given to a language model. It is constructed from multiple components and can be parametrized using prompt templates. A prompt template is a reproducible way to generate prompts and allows for easy customization through input variables.
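A minimal sketch of a prompt template using plain Python string formatting; the template text and variable names here are only examples:

```python
# A template with two input variables, {context} and {question}.
prompt_template = (
    "Answer the question using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_prompt(template: str, **variables) -> str:
    # Fill the template's input variables to produce a concrete prompt.
    return template.format(**variables)

prompt = build_prompt(
    prompt_template,
    context="Paris is the capital of France.",
    question="What is the capital of France?",
)
```

The same template can be reused with different variable values, which is what makes templated prompts reproducible.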
---

View file

@@ -0,0 +1,24 @@
import Admonition from '@theme/Admonition';
# Retrievers
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store and does not need to be able to store documents, only to return or retrieve them.
---
### MultiQueryRetriever
The `MultiQueryRetriever` component automates the process of generating multiple queries, retrieves relevant documents for each query, and combines the results to provide a more extensive and diverse set of potentially relevant documents. This approach enhances the effectiveness of the retrieval process and helps overcome the limitations of traditional distance-based retrieval methods.
**Params**
- **LLM:** Language Model to use in the `MultiQueryRetriever`.
- **Prompt:** Prompt to represent a schema for an LLM.
- **Retriever:** The retriever used to fetch documents.
- **parser_key:** This parameter is used to specify the key or attribute name of the parsed output that will be used for retrieval. It determines how the results from the language model are split into a list of queries. Defaults to `lines`, which means that the output from the language model will be split into a list of lines of text. This allows the retriever to retrieve relevant documents based on each line of text separately.
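The generate-retrieve-combine loop can be sketched as follows; both the query generator and the keyword retriever below are naive stand-ins for the LLM and vector-store steps:

```python
def generate_queries(question):
    # Hypothetical stand-in for the LLM step that rewrites the question.
    return [question, f"other ways to ask: {question}", question.lower()]

def retrieve(query, corpus):
    # Naive keyword-overlap retriever, for illustration only.
    words = {w.strip("?.,:").lower() for w in query.split()}
    return [doc for doc in corpus
            if words & {w.strip("?.,:").lower() for w in doc.split()}]

def multi_query_retrieve(question, corpus):
    seen, results = set(), []
    for q in generate_queries(question):
        for doc in retrieve(q, corpus):
            if doc not in seen:  # deduplicate across queries
                seen.add(doc)
                results.append(doc)
    return results

corpus = ["Cats sleep a lot.", "Dogs bark.", "Sleep improves memory."]
docs = multi_query_retrieve("Why do cats sleep?", corpus)
```

Because each generated query can surface different documents, the deduplicated union is broader than what any single query would return.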

View file

@@ -1,2 +1,49 @@
import Admonition from '@theme/Admonition';
# Text Splitters
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
A text splitter is a tool that divides a document or text into smaller chunks or segments. It is used to break down large texts into more manageable pieces for analysis or processing.
---
### CharacterTextSplitter
The `CharacterTextSplitter` is used to split a long text into smaller chunks based on a specified character. It splits the text by trying to keep paragraphs, sentences, and words together as long as possible, as these are semantically related pieces of text.
**Params**
- **Documents:** Input documents to split.
- **chunk_overlap:** Determines the number of characters that overlap between consecutive chunks when splitting text. It specifies how much of the previous chunk should be included in the next chunk.
For example, if the `chunk_overlap` is set to 20 and the `chunk_size` is set to 100, the splitter will create chunks of 100 characters each, but the last 20 characters of each chunk will overlap with the first 20 characters of the next chunk. This allows for a smoother transition between chunks and ensures that no information is lost. Defaults to `200`.
- **chunk_size:** Determines the maximum number of characters in each chunk when splitting a text. It specifies the size or length of each chunk.
For example, if the `chunk_size` is set to 100, the splitter will create chunks of 100 characters each. If the text is longer than 100 characters, it will be divided into multiple chunks of equal size, except for the last chunk, which may be smaller if there are remaining characters. Defaults to `1000`.
- **separator:** Specifies the character that will be used to split the text into chunks. Defaults to `.`.
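The interaction of `chunk_size` and `chunk_overlap` can be sketched with a fixed-size splitter. The real component splits on a separator first; this simplified sketch ignores that:

```python
def split_text(text, chunk_size=100, chunk_overlap=20):
    # Each new chunk starts chunk_size - chunk_overlap characters
    # after the previous one, so consecutive chunks share an overlap.
    stride = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), stride)]

text = "abcdefghij" * 25  # 250 characters
chunks = split_text(text)
```

With 250 characters, a size of 100, and an overlap of 20, the stride is 80, so the last 20 characters of each chunk reappear at the start of the next one.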
---
### RecursiveCharacterTextSplitter
The `RecursiveCharacterTextSplitter` splits the text by trying to keep paragraphs, sentences, and words together as long as possible, similar to the `CharacterTextSplitter`. However, it also recursively splits the text into smaller chunks if the chunk size exceeds a specified threshold.
**Params**
- **Documents:** Input documents to split.
- **chunk_overlap:** Determines the number of characters that overlap between consecutive chunks when splitting text. It specifies how much of the previous chunk should be included in the next chunk.
- **chunk_size:** Determines the maximum number of characters in each chunk when splitting a text. It specifies the size or length of each chunk.
- **separator_type:** Allows the user to split code with support for multiple languages, such as Text, Ruby, Python, Solidity, Java, and more. Defaults to `Text`.
- **separators:** The characters used to split the text into chunks. The text splitter tries to create chunks by splitting on the first character in the list of `separators`. If any chunks are too large, it moves on to the next character in the list and continues splitting. Defaults to `.`.

View file

@@ -1,2 +1,9 @@
import Admonition from '@theme/Admonition';
# Toolkits
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>

View file

@@ -1,2 +1,9 @@
import Admonition from '@theme/Admonition';
# Tools
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>

View file

@@ -1,2 +1,10 @@
import Admonition from '@theme/Admonition';
# Utilities
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>

View file

@@ -1,2 +1,9 @@
import Admonition from '@theme/Admonition';
# Vector Stores
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>

View file

@@ -1,2 +1,20 @@
import Admonition from '@theme/Admonition';
# Wrappers
(coming soon)
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
### TextRequestsWrapper
This component is designed to work with the Python Requests module, a popular tool for making web requests, and is used to fetch data from a particular website.
**Params**
- **header:** Specifies the headers to be included in the HTTP request. Defaults to `{'Authorization': 'Bearer <token>'}`.
Headers are key-value pairs that provide additional information about the request or the client making the request. They can be used to send authentication credentials, specify the content type of the request, set cookies, and more. They allow the client and the server to communicate additional information beyond the basic request.
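To illustrate how such a header travels with a request, here is a dependency-free sketch using the standard library's `urllib` instead of Requests. The URL is hypothetical and nothing is actually sent:

```python
import urllib.request

# Build (but do not send) a request carrying an Authorization header.
req = urllib.request.Request(
    "https://api.example.com/data",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},
)
auth = req.get_header("Authorization")
```

Sending the request (for example via `urllib.request.urlopen(req)`) would deliver the key-value pair to the server alongside the request itself.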

View file

@@ -35,6 +35,7 @@ module.exports = {
"components/loaders",
"components/memories",
"components/prompts",
"components/retrievers",
"components/text-splitters",
"components/toolkits",
"components/tools",