Update README.md to move Neo4j Storage content
Move the `Using Neo4J for Storage` content out of the Ollama details group so this option is easier to find.
README.md | 55 +++++++++++++++++++++++++++----------------------------
1 file changed, 27 insertions(+), 28 deletions(-)
@@ -203,34 +203,6 @@ rag = LightRAG(
 )
 ```

-### Using Neo4J for Storage
-
-* For production level scenarios you will most likely want to leverage an enterprise solution
-* for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
-* See: https://hub.docker.com/_/neo4j
-
-
-```python
-export NEO4J_URI="neo4j://localhost:7687"
-export NEO4J_USERNAME="neo4j"
-export NEO4J_PASSWORD="password"
-
-When you launch the project be sure to override the default KG: NetworkS
-by specifying kg="Neo4JStorage".
-
-# Note: Default settings use NetworkX
-#Initialize LightRAG with Neo4J implementation.
-WORKING_DIR = "./local_neo4jWorkDir"
-
-rag = LightRAG(
-    working_dir=WORKING_DIR,
-    llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
-    kg="Neo4JStorage", #<-----------override KG default
-    log_level="DEBUG"  #<-----------override log_level default
-)
-```
-see test_neo4j.py for a working example.
-
 ### Increasing context size
 In order for LightRAG to work context should be at least 32k tokens. By default Ollama models have context size of 8k. You can achieve this using one of two ways:

@@ -328,6 +300,33 @@ with open("./newText.txt") as f:
     rag.insert(f.read())
 ```

+### Using Neo4J for Storage
+
+* For production level scenarios you will most likely want to leverage an enterprise solution
+* for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
+* See: https://hub.docker.com/_/neo4j
+
+```python
+export NEO4J_URI="neo4j://localhost:7687"
+export NEO4J_USERNAME="neo4j"
+export NEO4J_PASSWORD="password"
+
+# When you launch the project be sure to override the default KG: NetworkX
+# by specifying kg="Neo4JStorage".
+
+# Note: Default settings use NetworkX
+# Initialize LightRAG with Neo4J implementation.
+WORKING_DIR = "./local_neo4jWorkDir"
+
+rag = LightRAG(
+    working_dir=WORKING_DIR,
+    llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
+    kg="Neo4JStorage", #<-----------override KG default
+    log_level="DEBUG"  #<-----------override log_level default
+)
+```
+see test_neo4j.py for a working example.
+
 ### Insert Custom KG

 ```python
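The added block mixes shell `export` lines with Python inside a single ```python fence. For anyone applying this change by hand, a minimal runnable Python-only sketch of the same setup, assuming `LightRAG` is importable from `lightrag` and `gpt_4o_mini_complete` from `lightrag.llm` (neither import appears in this diff) and using placeholder Neo4j credentials, could look like:

```python
import os

# Assumed import paths; they are not shown in this diff, so adjust them
# if the package layout differs.
from lightrag import LightRAG
from lightrag.llm import gpt_4o_mini_complete

# Python equivalent of the shell `export` lines: set the Neo4j connection
# settings before constructing LightRAG, which is expected to read them
# from the environment. The values below are local-Docker placeholders.
os.environ["NEO4J_URI"] = "neo4j://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"

WORKING_DIR = "./local_neo4jWorkDir"
os.makedirs(WORKING_DIR, exist_ok=True)

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete,  # LLM completion function (same as the README snippet)
    kg="Neo4JStorage",   # override the NetworkX KG default
    log_level="DEBUG",   # override the default log level
)
```

As the README snippet notes, `test_neo4j.py` in the repository is the working example to check this sketch against.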