📝 docs(guidelines): add async-api.mdx to provide documentation on asynchronous processing feature

📝 docs(guides): add async-tasks.mdx to provide a guide on using the Async API implementation

📝 docs(sidebars.js): update sidebar to include async-api.mdx and async-tasks.mdx in the appropriate sections
Commit 84bf3c83e3 by Gabriel Luiz Freitas Almeida, 2023-09-26 17:01:52 -03:00 (committed by anovazzi1)
3 changed files with 103 additions and 0 deletions


@@ -0,0 +1,57 @@
import Admonition from "@theme/Admonition";
# Asynchronous Processing
## Introduction
Starting with version 0.5, Langflow introduces a new feature to its API: the _`sync`_ flag. This flag allows users to opt for asynchronous processing of their flows, freeing up resources and enabling better control over long-running tasks.
For now, this feature supports running tasks either in a Celery worker queue or in AnyIO task groups.
<Admonition type="warning" caption="Experimental Feature">
This is an experimental feature. The default behavior of the API is still
synchronous processing. The API may change in the future.
</Admonition>
## The _`sync`_ Flag
The _`sync`_ flag can be included in the payload of your POST request to the _`/api/v1/process/<your_flow_id>`_ endpoint.
When set to _`false`_, the API will initiate an asynchronous task instead of processing the flow synchronously.
### API Request with _`sync`_ flag
```bash
curl -X POST \
http://localhost:3000/api/v1/process/<your_flow_id> \
-H 'Content-Type: application/json' \
-H 'x-api-key: <your_api_key>' \
-d '{"inputs": {"text": ""}, "tweaks": {}, "sync": false}'
```
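The same request can be composed programmatically. This Python sketch only builds the URL and JSON body, mirroring the curl call above; the helper name is illustrative, not part of Langflow:

```python
import json

def build_process_request(flow_id, text="", sync=False, tweaks=None):
    """Compose the URL and JSON body for the /process endpoint.

    Passing sync=False requests asynchronous processing, as in the
    curl example above.
    """
    url = f"http://localhost:3000/api/v1/process/{flow_id}"
    payload = {"inputs": {"text": text}, "tweaks": tweaks or {}, "sync": sync}
    return url, json.dumps(payload)
```

Send the resulting body with any HTTP client, remembering to include the `Content-Type: application/json` and `x-api-key` headers.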
## Checking Task Status
You can check the status of an asynchronous task by making a GET request to the _`/api/v1/task/<task_id>/status`_ endpoint.
```bash
curl -X GET \
http://localhost:3000/api/v1/task/<task_id>/status \
-H 'x-api-key: <your_api_key>'
```
### Response
The endpoint returns the current status of the task and, once it has completed, its result. Possible statuses include:
- _`PENDING`_: The task is waiting for execution.
- _`SUCCESS`_: The task has completed successfully.
- _`FAILURE`_: The task has failed.
Example response for a completed task:
```json
{
"status": "SUCCESS",
"result": {
"output": "..."
}
}
```
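A client typically polls this endpoint until the task reaches a terminal state. A minimal sketch, with the status-fetching callable injected so the example stays self-contained (the function names are illustrative, not part of Langflow):

```python
import time

TERMINAL_STATUSES = {"SUCCESS", "FAILURE"}

def wait_for_task(fetch_status, poll_interval=1.0, timeout=60.0):
    """Poll until the task finishes or the timeout elapses.

    fetch_status: a zero-argument callable returning the decoded JSON
    dict from /api/v1/task/<task_id>/status (e.g. wrapping the curl
    call above with an HTTP client).
    """
    deadline = time.monotonic() + timeout
    while True:
        response = fetch_status()
        if response["status"] in TERMINAL_STATUSES:
            return response
        if time.monotonic() >= deadline:
            raise TimeoutError("task did not reach a terminal state in time")
        time.sleep(poll_interval)
```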


@@ -0,0 +1,44 @@
import Admonition from "@theme/Admonition";
# Async API
## Introduction
<Admonition type="info" caption="In development">
This implementation is still in development. Contributions are welcome!
</Admonition>
The Async API is an implementation of the Langflow API that uses [Celery](https://docs.celeryproject.org/en/stable/)
to run tasks asynchronously. It relies on a message broker to send and receive task messages, a result backend to store results, and a cache to store task states and session data.
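As a conceptual sketch of how those pieces interact (plain-Python stand-ins, not Langflow's actual code): the broker queues task messages, a worker consumes them, and the result backend and state cache record the outcome.

```python
import queue

broker = queue.Queue()   # stands in for the RabbitMQ message broker
result_backend = {}      # stands in for the Redis result backend
task_states = {}         # stands in for the cache of task states

def submit(task_id, payload):
    """Enqueue a task message; the task starts out PENDING."""
    task_states[task_id] = "PENDING"
    broker.put((task_id, payload))

def worker_step():
    """Consume one message, 'run' it, and record the result."""
    task_id, payload = broker.get()
    result_backend[task_id] = {"output": payload}  # a real worker runs the flow
    task_states[task_id] = "SUCCESS"
```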
### Configuration
The folder _`./deploy`_ in the [GitHub repository](https://github.com/logspace-ai/langflow) contains a _`.env.example`_ file that can be used to configure a Langflow deployment.
The file contains the variables required to configure a Celery worker queue, a Redis cache and result backend, and a RabbitMQ message broker.
To set it up locally, copy the file to _`.env`_ and run the following command:
```bash
docker compose up -d
```
This will set up the following containers:
- Langflow API
- Celery worker
- RabbitMQ message broker
- Redis cache
- PostgreSQL database
- PGAdmin
- Flower
- Traefik
- Grafana
- Prometheus
### Testing
To run the tests for the Async API, use the following command:
```bash
docker compose -f docker-compose.test.yml up --exit-code-from tests tests result_backend broker celeryworker db --build
```


@@ -18,6 +18,7 @@ module.exports = {
items: [
"guidelines/login",
"guidelines/api",
"guidelines/async-api",
"guidelines/components",
"guidelines/features",
"guidelines/collection",
@@ -54,6 +55,7 @@ module.exports = {
label: "Step-by-Step Guides",
collapsed: false,
items: [
"guides/async-tasks",
"guides/loading_document",
"guides/chatprompttemplate_guide",
"guides/langfuse_integration",