Update README

This commit is contained in:
yangdx
2025-04-21 00:25:26 +08:00
parent 0c6e80cce9
commit 908953924a
2 changed files with 11 additions and 3 deletions


@@ -125,6 +125,10 @@ curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_d
python examples/lightrag_openai_demo.py
```
For a streaming response implementation example, please see `examples/lightrag_openai_compatible_demo.py`. Prior to execution, ensure you modify the sample code's LLM and embedding configurations accordingly.
**Note**: When running the demo program, please be aware that different test scripts may use different embedding models. If you switch to a different embedding model, you must clear the data directory (`./dickens`); otherwise, the program may encounter errors. If you wish to retain the LLM cache, you can preserve the `kv_store_llm_response_cache.json` file while clearing the data directory.
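A small helper along these lines can reset the data directory while keeping the LLM cache; this is only an illustrative sketch, with the `./dickens` path and `kv_store_llm_response_cache.json` filename taken from the note above:
```python
import shutil
from pathlib import Path

def reset_working_dir(working_dir: str = "./dickens",
                      keep_llm_cache: bool = True) -> None:
    """Clear LightRAG's data directory, optionally preserving the LLM response cache."""
    data_dir = Path(working_dir)
    cache_file = data_dir / "kv_store_llm_response_cache.json"
    cache_backup = cache_file.read_bytes() if keep_llm_cache and cache_file.exists() else None

    # Remove everything produced by the previous embedding model.
    shutil.rmtree(data_dir, ignore_errors=True)
    data_dir.mkdir(parents=True, exist_ok=True)

    # Restore the cached LLM responses so they can be reused.
    if cache_backup is not None:
        cache_file.write_bytes(cache_backup)
```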
## Query
Use a Python snippet along the lines of the one below (in a script) to initialize LightRAG and perform queries:
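A minimal sketch of such a script, modeled on the OpenAI demo; the helper names (`gpt_4o_mini_complete`, `openai_embed`, `initialize_pipeline_status`) and the `./dickens` working directory are assumptions based on the example code and may need adjusting for your LightRAG version:
```python
import asyncio

from lightrag import LightRAG, QueryParam
from lightrag.llm.openai import gpt_4o_mini_complete, openai_embed  # assumed demo helpers
from lightrag.kg.shared_storage import initialize_pipeline_status   # assumed, as in the demos

async def initialize_rag() -> LightRAG:
    # The working directory must match the one used at insertion time (see the note above).
    rag = LightRAG(
        working_dir="./dickens",
        embedding_func=openai_embed,
        llm_model_func=gpt_4o_mini_complete,
    )
    await rag.initialize_storages()
    await initialize_pipeline_status()
    return rag

def main() -> None:
    rag = asyncio.run(initialize_rag())
    rag.insert("Your document text goes here.")
    # mode can be e.g. "naive", "local", "global", or "hybrid".
    print(rag.query("What are the top themes?", param=QueryParam(mode="hybrid")))

if __name__ == "__main__":
    main()
```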
@@ -838,7 +842,7 @@ For production level scenarios you will most likely want to leverage an enterpri
CREATE INDEX CONCURRENTLY entity_idx_node_id ON dickens."Entity" (ag_catalog.agtype_access_operator(properties, '"node_id"'::agtype));
CREATE INDEX CONCURRENTLY entity_node_id_gin_idx ON dickens."Entity" USING gin (properties);
ALTER TABLE dickens."DIRECTED" CLUSTER ON directed_sid_idx;

-- drop if necessary
DROP INDEX entity_p_idx;
DROP INDEX vertex_p_idx;