Added Docker container setup
**.env.example** (new file, +37 lines)
```env
# Server Configuration
HOST=0.0.0.0
PORT=9621

# Directory Configuration
WORKING_DIR=/app/data/rag_storage
INPUT_DIR=/app/data/inputs

# LLM Configuration
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral-nemo:latest

# Embedding Configuration
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3:latest

# RAG Configuration
MAX_ASYNC=4
MAX_TOKENS=32768
EMBEDDING_DIM=1024
MAX_EMBED_TOKENS=8192

# Security (leave empty to disable authentication)
LIGHTRAG_API_KEY=your-secure-api-key-here

# Logging
LOG_LEVEL=INFO

# Optional SSL Configuration
#SSL=true
#SSL_CERTFILE=/path/to/cert.pem
#SSL_KEYFILE=/path/to/key.pem

# Optional Timeout
#TIMEOUT=30
```
**Dockerfile** (new file, +38 lines)
```dockerfile
# Build stage
FROM python:3.11-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy only requirements files first to leverage Docker cache
COPY requirements.txt .
COPY lightrag/api/requirements.txt ./lightrag/api/

# Install dependencies
RUN pip install --user --no-cache-dir -r requirements.txt
RUN pip install --user --no-cache-dir -r lightrag/api/requirements.txt

# Final stage
FROM python:3.11-slim

WORKDIR /app

# Copy only necessary files from builder
COPY --from=builder /root/.local /root/.local
COPY . .

# Make sure scripts in .local are usable
ENV PATH=/root/.local/bin:$PATH

# Create necessary directories
RUN mkdir -p /app/data/rag_storage /app/data/inputs

# Expose the default port
EXPOSE 9621

# Set entrypoint
ENTRYPOINT ["python", "-m", "lightrag.api.lightrag_server"]
```
**docker-compose.yml** (new file, +21 lines)
```yaml
version: '3.8'

services:
  lightrag:
    build: .
    ports:
      - "${PORT:-9621}:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage
      - ./data/inputs:/app/data/inputs
    env_file:
      - .env
    environment:
      - TZ=UTC
    restart: unless-stopped
    networks:
      - lightrag_net

networks:
  lightrag_net:
    driver: bridge
```
**docs/DockerDeployment.md** (new file, +174 lines)
# LightRAG

A lightweight Knowledge Graph Retrieval-Augmented Generation system with multiple LLM backend support.

## 🚀 Installation

### Prerequisites

- Python 3.10+
- Git
- Docker (optional, for Docker deployment)
### Native Installation

1. Clone the repository:
```bash
# Linux/macOS
git clone https://github.com/ParisNeo/LightRAG.git
cd LightRAG
```
```powershell
# Windows PowerShell
git clone https://github.com/ParisNeo/LightRAG.git
cd LightRAG
```

2. Configure your environment:
```bash
# Linux/macOS
cp .env.example .env
# Edit .env with your preferred configuration
```
```powershell
# Windows PowerShell
Copy-Item .env.example .env
# Edit .env with your preferred configuration
```

3. Create and activate a virtual environment:
```bash
# Linux/macOS
python -m venv venv
source venv/bin/activate
```
```powershell
# Windows PowerShell
python -m venv venv
.\venv\Scripts\Activate
```

4. Install dependencies:
```bash
# Both platforms
pip install -r requirements.txt
```
## 🐳 Docker Deployment

Docker instructions work the same on all platforms with Docker Desktop installed.

1. Build and start the container:
```bash
docker-compose up -d
```
### Configuration Options

LightRAG can be configured using environment variables in the `.env` file:

#### Server Configuration
- `HOST`: Server host (default: 0.0.0.0)
- `PORT`: Server port (default: 9621)

#### LLM Configuration
- `LLM_BINDING`: LLM backend to use (lollms/ollama/openai)
- `LLM_BINDING_HOST`: LLM server host URL
- `LLM_MODEL`: Model name to use

#### Embedding Configuration
- `EMBEDDING_BINDING`: Embedding backend (lollms/ollama/openai)
- `EMBEDDING_BINDING_HOST`: Embedding server host URL
- `EMBEDDING_MODEL`: Embedding model name

#### RAG Configuration
- `MAX_ASYNC`: Maximum number of concurrent async operations
- `MAX_TOKENS`: Maximum token size
- `EMBEDDING_DIM`: Embedding dimensions
- `MAX_EMBED_TOKENS`: Maximum embedding token size

#### Security
- `LIGHTRAG_API_KEY`: API key for authentication
### Data Storage Paths

The system uses the following paths for data storage:
```
data/
├── rag_storage/   # RAG data persistence
└── inputs/        # Input documents
```
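On a fresh checkout these host-side directories may not exist yet; `docker-compose` creates them when mounting the volumes, but you can also create them up front (a minimal sketch, using the paths from `docker-compose.yml`):

```shell
# Create the host-side data directories used by the volume mounts
mkdir -p data/rag_storage data/inputs
ls data
```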
### Example Deployments

1. Using with Ollama:
```env
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3
```

Note that inside a container, `localhost` refers to the container itself; if Ollama runs on the Docker host, use `host.docker.internal` (Docker Desktop) or the host's address instead.

2. Using with OpenAI:
```env
LLM_BINDING=openai
LLM_MODEL=gpt-3.5-turbo
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-ada-002
OPENAI_API_KEY=your-api-key
```
### API Usage

Once deployed, you can interact with the API at `http://localhost:9621`.

Example query using PowerShell:
```powershell
$headers = @{
    "X-API-Key" = "your-api-key"
    "Content-Type" = "application/json"
}
$body = @{
    query = "your question here"
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:9621/query" -Method Post -Headers $headers -Body $body
```

Example query using curl:
```bash
curl -X POST "http://localhost:9621/query" \
     -H "X-API-Key: your-api-key" \
     -H "Content-Type: application/json" \
     -d '{"query": "your question here"}'
```
## 🔒 Security

Remember to:
1. Set a strong API key in production
2. Use SSL in production environments
3. Configure proper network security
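For the first point, a strong key can be generated with OpenSSL (an illustrative sketch; any cryptographically random string works for `LIGHTRAG_API_KEY`):

```shell
# Generate a 64-character hex API key for LIGHTRAG_API_KEY
openssl rand -hex 32
```

Paste the output into `.env` as the value of `LIGHTRAG_API_KEY`.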
## 📦 Updates

To update the Docker container:
```bash
docker-compose pull
docker-compose up -d --build
```

To update a native installation:
```bash
# Linux/macOS
git pull
source venv/bin/activate
pip install -r requirements.txt
```
```powershell
# Windows PowerShell
git pull
.\venv\Scripts\Activate
pip install -r requirements.txt
```