Merge pull request #1329 from earayu/fix_readme_grammar_typo
fix grammar and typos in lightrag-api README.md

@@ -1,6 +1,6 @@

# LightRAG Server and WebUI

The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. The LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model. This allows AI chat bots, such as Open WebUI, to access LightRAG easily.

![image](https://github.com/user-attachments/assets/b2) (webui screenshot)

@@ -8,7 +8,7 @@ The LightRAG Server is designed to provide Web UI and API support. The Web UI fa

![image](https://github.com/user-attachments/assets/b2) (login screenshot)

## Getting Started

### Installation

@@ -27,7 +27,7 @@ git clone https://github.com/HKUDS/lightrag.git

```
# Change to the repository directory
cd lightrag

# create a Python virtual environment if necessary
# Install in editable mode with API support
pip install -e ".[api]"
```

@@ -43,16 +43,16 @@ LightRAG necessitates the integration of both an LLM (Large Language Model) and

It is recommended to use environment variables to configure the LightRAG Server. There is an example environment variable file named `env.example` in the root directory of the project. Please copy this file to the startup directory and rename it to `.env`. After that, you can modify the parameters related to the LLM and Embedding models in the `.env` file. It is important to note that the LightRAG Server will load the environment variables from `.env` into the system environment variables each time it starts. Since the LightRAG Server will prioritize the settings in the system environment variables, if you modify the `.env` file after starting the LightRAG Server via the command line, you need to execute `source .env` to make the new settings take effect.
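
For example, if you start the server from the repository root, the setup looks like this (a minimal sketch):

```bash
# Copy the sample environment file and edit the LLM/Embedding settings
cp env.example .env
# Re-export the settings in the current shell after editing, if needed
source .env
```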

Here are some examples of common settings for LLM and Embedding models:

* OpenAI LLM + Ollama Embedding:

```
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key
### Max tokens sent to LLM (less than model context size)
MAX_TOKENS=32768

EMBEDDING_BINDING=ollama
```

@@ -62,14 +62,14 @@ EMBEDDING_DIM=1024

```
# EMBEDDING_BINDING_API_KEY=your_api_key
```

* Ollama LLM + Ollama Embedding:

```
LLM_BINDING=ollama
LLM_MODEL=mistral-nemo:latest
LLM_BINDING_HOST=http://localhost:11434
# LLM_BINDING_API_KEY=your_api_key
### Max tokens sent to LLM (based on your Ollama Server capacity)
MAX_TOKENS=8192

EMBEDDING_BINDING=ollama
```

@@ -82,12 +82,12 @@ EMBEDDING_DIM=1024

### Starting LightRAG Server

The LightRAG Server supports two operational modes:

* The simple and efficient Uvicorn mode:

```
lightrag-server
```

* The multiprocess Gunicorn + Uvicorn mode (production mode, not supported on Windows environments):

```
lightrag-gunicorn --workers 4
```

@@ -96,44 +96,44 @@ The `.env` file **must be placed in the startup directory**.

Upon launching, the LightRAG Server will create a documents directory (default is `./inputs`) and a data directory (default is `./rag_storage`). This allows you to initiate multiple instances of LightRAG Server from different directories, with each instance configured to listen on a distinct network port.

Here are some commonly used startup parameters (an example invocation follows the list):

- `--host`: Server listening address (default: 0.0.0.0)
- `--port`: Server listening port (default: 9621)
- `--timeout`: LLM request timeout (default: 150 seconds)
- `--log-level`: Logging level (default: INFO)
- `--input-dir`: Directory to scan for documents (default: ./inputs)
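
For example, the following starts an instance on a non-default port with its own document directory (a sketch using only the parameters listed above):

```bash
lightrag-server --host 127.0.0.1 --port 9622 --input-dir ./docs --log-level DEBUG
```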

> The requirement for the .env file to be in the startup directory is intentionally designed this way. The purpose is to support users in launching multiple LightRAG instances simultaneously, allowing different .env files for different instances.

### Auto scan on startup

When starting any of the servers with the `--auto-scan-at-startup` parameter, the system will automatically:

1. Scan for new files in the input directory
2. Index new documents that aren't already in the database
3. Make all content immediately available for RAG queries

> The `--input-dir` parameter specifies the input directory to scan. You can trigger the input directory scan from the Web UI.
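
For example:

```bash
lightrag-server --input-dir ./inputs --auto-scan-at-startup
```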

### Multiple workers for Gunicorn + Uvicorn

The LightRAG Server can operate in the `Gunicorn + Uvicorn` preload mode. Gunicorn's multiple worker (multiprocess) capability prevents document indexing tasks from blocking RAG queries. Using CPU-intensive document extraction tools, such as docling, can lead to the entire system being blocked in pure Uvicorn mode.

Though LightRAG Server uses one worker to process the document indexing pipeline, with the async task support of Uvicorn, multiple files can be processed in parallel. The bottleneck of document indexing speed mainly lies with the LLM. If your LLM supports high concurrency, you can accelerate document indexing by increasing the concurrency level of the LLM. Below are several environment variables related to concurrent processing, along with their default values:

```
### Number of worker processes, not greater than (2 x number_of_cores) + 1
WORKERS=2
### Number of parallel files to process in one batch
MAX_PARALLEL_INSERT=2
### Max concurrent requests to the LLM
MAX_ASYNC=4
```

### Install LightRAG as a Linux Service

Create your service file `lightrag.service` from the sample file: `lightrag.service.example`. Modify the `WorkingDirectory` and `ExecStart` in the service file:

```text
Description=LightRAG Ollama Service
```

@@ -141,7 +141,7 @@ WorkingDirectory=<lightrag installed directory>

```text
ExecStart=<lightrag installed directory>/lightrag/api/lightrag-api
```

Modify your service startup script: `lightrag-api`. Change your Python virtual environment activation command as needed:

```shell
#!/bin/bash
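# A minimal sketch of the remainder of the script, assuming a Python virtual
# environment at ./venv (adjust the activation path to your own installation):
source ./venv/bin/activate
lightrag-server
```
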
@@ -164,21 +164,21 @@ sudo systemctl enable lightrag.service

## Ollama Emulation

We provide Ollama-compatible interfaces for LightRAG, aiming to emulate LightRAG as an Ollama chat model. This allows AI chat frontends supporting Ollama, such as Open WebUI, to access LightRAG easily.

### Connect Open WebUI to LightRAG

After starting the lightrag-server, you can add an Ollama-type connection in the Open WebUI admin panel. A model named `lightrag:latest` will then appear in Open WebUI's model management interface. Users can then send queries to LightRAG through the chat interface. You should install LightRAG as a service for this use case.
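
For example, you can first verify that the Ollama-compatible endpoint is reachable (assuming the default address and port):

```bash
curl http://localhost:9621/api/version
```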

Open WebUI uses an LLM to handle session title and session keyword generation. Therefore, the Ollama chat completion API detects OpenWebUI session-related requests and forwards them directly to the underlying LLM. Screenshot from Open WebUI:

![image](https://github.com/user-attachments/assets/12a2) (Open WebUI chat screenshot)

### Choose Query mode in chat

The default query mode is `hybrid` if you send a message (query) from the Ollama interface of LightRAG. You can select query mode by sending a message with a query prefix.

A query prefix in the query string can determine which LightRAG query mode is used to generate the response for the query. The supported prefixes include:

```
/local
```

@@ -196,30 +196,28 @@ A query prefix in the query string can determines which LightRAG query mode is u

```
/mixcontext
```

For example, the chat message `/mix What's LightRAG?` will trigger a mix mode query for LightRAG. A chat message without a query prefix will trigger a hybrid mode query by default.

`/bypass` is not a LightRAG query mode; it tells the API Server to pass the query, together with the chat history, directly to the underlying LLM, so the LLM can answer based on the conversation. If you are using Open WebUI as a front end, you can simply switch the model to a normal LLM instead of using the `/bypass` prefix.

`/context` is also not a LightRAG query mode; it tells LightRAG to return only the context information prepared for the LLM. You can check whether the context is what you want, or process the context by yourself.
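
A few more example chat messages with prefixes (the questions are only illustrative):

```
/local What are the key entities in my documents?
/context What's LightRAG?
/bypass Please summarize our conversation so far.
```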

## API Key and Authentication

By default, the LightRAG Server can be accessed without any authentication. We can configure the server with an API Key or account credentials to secure it.

* API Key:

```
LIGHTRAG_API_KEY=your-secure-api-key-here
WHITELIST_PATHS=/health,/api/*
```

> Health check and Ollama emulation endpoints are excluded from API Key check by default.
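
With an API Key configured, clients must supply it when calling protected endpoints. A hedged example (the `X-API-Key` header name is an assumption; confirm the expected header in the Swagger UI):

```bash
curl -X POST "http://localhost:9621/query" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secure-api-key-here" \
  -d '{"query": "Your question here", "mode": "hybrid"}'
```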

* Account credentials (the Web UI requires login before access can be granted):

LightRAG API Server implements JWT-based authentication using the HS256 algorithm. To enable secure access control, the following environment variables are required:

```bash
# For jwt auth
```

@@ -230,16 +228,14 @@ TOKEN_EXPIRE_HOURS=4

> Currently, only the configuration of an administrator account and password is supported. A comprehensive account system is yet to be developed and implemented.

If Account credentials are not configured, the Web UI will access the system as a Guest. Therefore, even if only an API Key is configured, all APIs can still be accessed through the Guest account, which remains insecure. Hence, to safeguard the API, it is necessary to configure both authentication methods simultaneously.

## For Azure OpenAI Backend

Azure OpenAI API can be created using the following commands in Azure CLI (you need to install Azure CLI first from [https://docs.microsoft.com/en-us/cli/azure/install-azure-cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)):

```bash
# Change the resource group name, location, and OpenAI resource name as needed
RESOURCE_GROUP_NAME=LightRAG
LOCATION=swedencentral
RESOURCE_NAME=LightRAG-OpenAI
```

@@ -257,7 +253,7 @@ az cognitiveservices account keys list --name $RESOURCE_NAME -g $RESOURCE_GROUP_

The output of the last command will give you the endpoint and the key for the OpenAI API. You can use these values to set the environment variables in the `.env` file.

```
# Azure OpenAI Configuration in .env:
LLM_BINDING=azure_openai
LLM_BINDING_HOST=your-azure-endpoint
LLM_MODEL=your-model-deployment-name
```

@@ -265,22 +261,20 @@ LLM_BINDING_API_KEY=your-azure-api-key

```
### API version is optional, defaults to latest version
AZURE_OPENAI_API_VERSION=2024-08-01-preview

### If using Azure OpenAI for embeddings
EMBEDDING_BINDING=azure_openai
EMBEDDING_MODEL=your-embedding-deployment-name
```

## LightRAG Server Configuration in Detail

The API Server can be configured in three ways (highest priority first):

* Command line arguments
* Environment variables or .env file
* Config.ini (only for storage configuration)

Most of the configurations come with default settings; check out the details in the sample file: `.env.example`. Data storage configuration can also be set by config.ini. A sample file `config.ini.example` is provided for your convenience.

### LLM and Embedding Backend Supported

@@ -291,65 +285,65 @@ LightRAG supports binding to various LLM/Embedding backends:

* openai & openai compatible
* azure_openai

Use environment variables `LLM_BINDING` or CLI argument `--llm-binding` to select the LLM backend type. Use environment variables `EMBEDDING_BINDING` or CLI argument `--embedding-binding` to select the Embedding backend type.
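
For example (a sketch combining the two CLI arguments):

```bash
lightrag-server --llm-binding openai --embedding-binding ollama
```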

### Entity Extraction Configuration

* ENABLE_LLM_CACHE_FOR_EXTRACT: Enable LLM cache for entity extraction (default: true)

It's very common to set `ENABLE_LLM_CACHE_FOR_EXTRACT` to true for a test environment to reduce the cost of LLM calls.
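
For example, in `.env`:

```
ENABLE_LLM_CACHE_FOR_EXTRACT=true
```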

### Storage Types Supported

LightRAG uses 4 types of storage for different purposes:

* KV_STORAGE: llm response cache, text chunks, document information
* VECTOR_STORAGE: entities vectors, relation vectors, chunks vectors
* GRAPH_STORAGE: entity relation graph
* DOC_STATUS_STORAGE: document indexing status

Each storage type has several implementations:

* KV_STORAGE supported implementations:

```
JsonKVStorage    JsonFile (default)
PGKVStorage      Postgres
RedisKVStorage   Redis
MongoKVStorage   MongoDB
```

* GRAPH_STORAGE supported implementations:

```
NetworkXStorage  NetworkX (default)
Neo4JStorage     Neo4J
PGGraphStorage   Postgres
AGEStorage       AGE
```

* VECTOR_STORAGE supported implementations:

```
NanoVectorDBStorage    NanoVector (default)
PGVectorStorage        Postgres
MilvusVectorDBStorage  Milvus
ChromaVectorDBStorage  Chroma
FaissVectorDBStorage   Faiss
QdrantVectorDBStorage  Qdrant
MongoVectorDBStorage   MongoDB
```

* DOC_STATUS_STORAGE supported implementations:

```
JsonDocStatusStorage   JsonFile (default)
PGDocStatusStorage     Postgres
MongoDocStatusStorage  MongoDB
```

### How to Select Storage Implementation

You can select storage implementation by environment variables. You can set the following environment variables to a specific storage implementation name before the first start of the API Server:

```
LIGHTRAG_KV_STORAGE=PGKVStorage
```

@@ -358,30 +352,30 @@ LIGHTRAG_GRAPH_STORAGE=PGGraphStorage

```
LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
```

You cannot change storage implementation selection after adding documents to LightRAG. Data migration from one storage implementation to another is not supported yet. For further information, please read the sample env file or config.ini file.

### LightRAG API Server Command Line Options

| Parameter | Default | Description |
| --------- | ------- | ----------- |
| --host | 0.0.0.0 | Server host |
| --port | 9621 | Server port |
| --working-dir | ./rag_storage | Working directory for RAG storage |
| --input-dir | ./inputs | Directory containing input documents |
| --max-async | 4 | Maximum number of async operations |
| --max-tokens | 32768 | Maximum token size |
| --timeout | 150 | Timeout in seconds. None for infinite timeout (not recommended) |
| --log-level | INFO | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| --verbose | - | Verbose debug output (True, False) |
| --key | None | API key for authentication. Protects the LightRAG server against unauthorized access |
| --ssl | False | Enable HTTPS |
| --ssl-certfile | None | Path to SSL certificate file (required if --ssl is enabled) |
| --ssl-keyfile | None | Path to SSL private key file (required if --ssl is enabled) |
| --top-k | 50 | Number of top-k items to retrieve; corresponds to entities in "local" mode and relationships in "global" mode. |
| --cosine-threshold | 0.4 | The cosine threshold for nodes and relation retrieval, works with top-k to control the retrieval of nodes and relations. |
| --llm-binding | ollama | LLM binding type (lollms, ollama, openai, openai-ollama, azure_openai) |
| --embedding-binding | ollama | Embedding binding type (lollms, ollama, openai, azure_openai) |
| --auto-scan-at-startup | - | Scan input directory for new files and start indexing |

### .env Examples

@@ -427,7 +421,7 @@ EMBEDDING_BINDING_HOST=http://localhost:11434

## API Endpoints

All servers (LoLLMs, Ollama, OpenAI and Azure OpenAI) provide the same REST API endpoints for RAG functionality. When the API Server is running, visit:

- Swagger UI: http://localhost:9621/docs
- ReDoc: http://localhost:9621/redoc

@@ -438,9 +432,9 @@ You can test the API endpoints using the provided curl commands or through the S

2. Start the RAG server
3. Upload some documents using the document management endpoints
4. Query the system using the query endpoints
5. Trigger document scan if new files are put into the inputs directory

### Query Endpoints

#### POST /query
Query the RAG system with options for different search modes.

@@ -448,7 +442,7 @@ Query the RAG system with options for different search modes.

```bash
curl -X POST "http://localhost:9621/query" \
-H "Content-Type: application/json" \
-d '{"query": "Your question here", "mode": "hybrid"}'
```

#### POST /query/stream

@@ -460,7 +454,7 @@ curl -X POST "http://localhost:9621/query/stream" \

```bash
-d '{"query": "Your question here", "mode": "hybrid"}'
```

### Document Management Endpoints

#### POST /documents/text
Insert text directly into the RAG system.
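
A hedged example request (the exact request schema is visible in the Swagger UI; the `text` field below is an assumption):

```bash
curl -X POST "http://localhost:9621/documents/text" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your document content here"}'
```
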
@@ -491,13 +485,13 @@ curl -X POST "http://localhost:9621/documents/batch" \

#### POST /documents/scan

Trigger document scan for new files in the input directory.

```bash
curl -X POST "http://localhost:9621/documents/scan" --max-time 1800
```

> Adjust max-time according to the estimated indexing time for all new files.

#### DELETE /documents

@@ -507,7 +501,7 @@ Clear all documents from the RAG system.

```bash
curl -X DELETE "http://localhost:9621/documents"
```

### Ollama Emulation Endpoints

#### GET /api/version

@@ -519,7 +513,7 @@ curl http://localhost:9621/api/version

#### GET /api/tags

Get available Ollama models.

```bash
curl http://localhost:9621/api/tags
```

@@ -527,20 +521,20 @@ curl http://localhost:9621/api/tags

#### POST /api/chat

Handle chat completion requests. Routes user queries through LightRAG by selecting query mode based on query prefix. Detects and forwards OpenWebUI session-related requests (for metadata generation tasks) directly to the underlying LLM.

```shell
curl -N -X POST http://localhost:9621/api/chat -H "Content-Type: application/json" -d \
'{"model":"lightrag:latest","messages":[{"role":"user","content":"猪八戒是谁"}],"stream":true}'
```

> For more information about the Ollama API, please visit: [Ollama API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)

#### POST /api/generate

Handle generate completion requests. For compatibility purposes, the request is not processed by LightRAG; it is handled directly by the underlying LLM.

### Utility Endpoints

#### GET /health
Check server health and configuration.