Merge pull request #1225 from danielaskdd/main
Remove the comments at the end of the environment variable lines in the .env file
env.example
@@ -55,10 +55,14 @@ SUMMARY_LANGUAGE=English
 # MAX_EMBED_TOKENS=8192
 
 ### LLM Configuration
-TIMEOUT=150 # Time out in seconds for LLM, None for infinite timeout
+### Time out in seconds for LLM, None for infinite timeout
+TIMEOUT=150
+### Some models like o1-mini require temperature to be set to 1
 TEMPERATURE=0.5
-MAX_ASYNC=4 # Max concurrency requests of LLM
-MAX_TOKENS=32768 # Max tokens send to LLM (less than context size of the model)
+### Max concurrency requests of LLM
+MAX_ASYNC=4
+### Max tokens send to LLM (less than context size of the model)
+MAX_TOKENS=32768
 
 ### Ollama example (For local services installed with docker, you can use host.docker.internal as host)
 LLM_BINDING=ollama
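The new layout moves each explanation onto its own `###` comment line, presumably because some consumers of `.env` files (Docker's `--env-file`, for example) take everything after `=` as the literal value, trailing comment included. Below is a minimal Python sketch of how these variables might then be read; the `env_timeout` helper and its parsing conventions are assumptions for illustration, not code from this repository:

```python
import os
from typing import Optional

def env_timeout(default: float = 150.0) -> Optional[float]:
    """Read TIMEOUT from the environment; the literal 'None' means no timeout."""
    raw = os.getenv("TIMEOUT")
    if raw is None:
        return default
    raw = raw.strip()
    if raw.lower() == "none":
        return None  # infinite timeout, per the comment in env.example
    return float(raw)  # would raise ValueError if a trailing comment leaked into the value

TIMEOUT = env_timeout()
TEMPERATURE = float(os.getenv("TEMPERATURE", "0.5"))
MAX_ASYNC = int(os.getenv("MAX_ASYNC", "4"))        # max concurrent LLM requests
MAX_TOKENS = int(os.getenv("MAX_TOKENS", "32768"))  # max tokens sent to the LLM
```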
@@ -422,7 +422,6 @@ EMBEDDING_BINDING_HOST=http://localhost:11434
 ```
 
-
 ## API Endpoints
 
 All servers (LoLLMs, Ollama, OpenAI and Azure OpenAI) provide the same REST API endpoints for RAG functionality. When API Server is running, visit:
 
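A usage sketch for the shared endpoints mentioned above; the `/query` route, the request payload shape, and port 9621 are assumptions for illustration and are not confirmed by this diff:

```python
import requests

# Hypothetical request: path, payload shape, and port are assumptions.
resp = requests.post(
    "http://localhost:9621/query",
    json={"query": "What does this document cover?", "mode": "hybrid"},
    timeout=150,  # matches the TIMEOUT default from env.example
)
resp.raise_for_status()
print(resp.json())
```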