Update sample code and README

This commit is contained in:
yangdx
2025-04-20 14:33:16 +08:00
parent 4ae5246a7e
commit 5f2cd871a8
3 changed files with 63 additions and 47 deletions

View File

@@ -35,21 +35,6 @@
## Installation
### Install LightRAG Core
* Install from source (recommended)
```bash
cd LightRAG
pip install -e .
```
* Install from PyPI
```bash
pip install lightrag-hku
```
### Install the LightRAG Server
The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. The LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model so that AI chat bots, such as Open WebUI, can access LightRAG easily.
@@ -68,17 +53,40 @@ pip install "lightrag-hku[api]"
pip install -e ".[api]"
```
**For more information about the LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).**
### Install LightRAG Core
## Quick Start (LightRAG Core only)
* [Video demo](https://www.youtube.com/watch?v=g21royNJ4fw) of running LightRAG locally.
* All the code can be found in the `examples` directory.
* Set the OpenAI API key in the environment if using OpenAI models: `export OPENAI_API_KEY="sk-..."`
* Download the demo text "A Christmas Carol" by Charles Dickens:
* Install from source (recommended)
```bash
cd LightRAG
pip install -e .
```
* Install from PyPI
```bash
pip install lightrag-hku
```
## Quick Start
### Using the LightRAG Server
**For more information about the LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).**
## Using LightRAG Core
Sample code for LightRAG Core features can be found in the `examples` directory. You can also follow the [video demo](https://www.youtube.com/watch?v=g21royNJ4fw) to set up your environment. If you already have an OpenAI API key, you can run the demo with the following commands:
```bash
# run the demo code from the project root folder
cd LightRAG
# provide your OpenAI API key
export OPENAI_API_KEY="sk-...your_openai_key..."
# download the demo document, "A Christmas Carol" by Charles Dickens
curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
# run the demo code
python examples/lightrag_openai_demo.py
```
## 查询
@@ -815,7 +823,7 @@ rag = LightRAG(
CREATE INDEX CONCURRENTLY entity_idx_node_id ON dickens."Entity" (ag_catalog.agtype_access_operator(properties, '"node_id"'::agtype));
CREATE INDEX CONCURRENTLY entity_node_id_gin_idx ON dickens."Entity" USING gin(properties);
ALTER TABLE dickens."DIRECTED" CLUSTER ON directed_sid_idx;
-- drop if necessary
DROP INDEX entity_p_idx;
DROP INDEX vertex_p_idx;

View File

@@ -71,21 +71,6 @@
## Installation
### Install LightRAG Core
* Install from source (Recommended)
```bash
cd LightRAG
pip install -e .
```
* Install from PyPI
```bash
pip install lightrag-hku
```
### Install LightRAG Server
The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. The LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model. This allows AI chat bots, such as Open WebUI, to access LightRAG easily.
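As a rough illustration of what "Ollama-compatible" means for a client, the sketch below builds a chat request in the format Ollama's `/api/chat` endpoint expects. The model name `lightrag:latest` and the port `9621` are assumptions for illustration only; consult the LightRAG Server README for the actual values.

```python
import json

# Hypothetical sketch of an Ollama-style chat request aimed at the
# LightRAG Server's Ollama-compatible interface. The endpoint path is
# Ollama's standard chat path; the model name and port are assumptions.
OLLAMA_CHAT_PATH = "/api/chat"

def build_chat_request(question: str, model: str = "lightrag:latest") -> dict:
    """Build an Ollama-format chat request body for a single user question."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

payload = build_chat_request("Who is Scrooge?")
print(json.dumps(payload))
# To actually send it (assuming the server listens on localhost:9621):
# requests.post("http://localhost:9621" + OLLAMA_CHAT_PATH, json=payload)
```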
@@ -104,17 +89,40 @@ pip install "lightrag-hku[api]"
pip install -e ".[api]"
```
**For more information about LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).**
### Install LightRAG Core
## Quick Start for LightRAG core only
* [Video demo](https://www.youtube.com/watch?v=g21royNJ4fw) of running LightRAG locally.
* All the code can be found in the `examples` directory.
* Set the OpenAI API key in the environment if using OpenAI models: `export OPENAI_API_KEY="sk-..."`
* Download the demo text "A Christmas Carol by Charles Dickens":
* Install from source (Recommended)
```bash
cd LightRAG
pip install -e .
```
* Install from PyPI
```bash
pip install lightrag-hku
```
## Quick Start
### Quick Start for LightRAG Server
For more information about LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).
### Quick Start for LightRAG Core
To get started with LightRAG Core, refer to the sample code available in the `examples` folder. A [video demo](https://www.youtube.com/watch?v=g21royNJ4fw) is also provided to guide you through the local setup process. If you already possess an OpenAI API key, you can run the demo right away:
```bash
# run the demo code from the project root folder
cd LightRAG
# provide your OpenAI API key
export OPENAI_API_KEY="sk-...your_openai_key..."
# download the demo document, "A Christmas Carol" by Charles Dickens
curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
# run the demo code
python examples/lightrag_openai_demo.py
```
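The shell steps above boil down to two preconditions: `OPENAI_API_KEY` must be set, and `book.txt` must exist in the working directory. A small helper can check both before the demo is launched; the function below is purely illustrative and not part of LightRAG.

```python
import os
from pathlib import Path

def demo_preflight(book_path: str = "./book.txt", env=os.environ) -> list:
    """Return a list of problems that would make the demo fail up front.

    Illustrative helper only; it is not part of the LightRAG codebase.
    """
    problems = []
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        problems.append("OPENAI_API_KEY is not set")
    elif not key.startswith("sk-"):
        problems.append("OPENAI_API_KEY does not look like an OpenAI key")
    if not Path(book_path).is_file():
        problems.append(f"demo document {book_path} is missing (run the curl step)")
    return problems

if __name__ == "__main__":
    for problem in demo_preflight():
        print("warning:", problem)
```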
## Query
@@ -836,7 +844,7 @@ For production level scenarios you will most likely want to leverage an enterpri
CREATE INDEX CONCURRENTLY entity_idx_node_id ON dickens."Entity" (ag_catalog.agtype_access_operator(properties, '"node_id"'::agtype));
CREATE INDEX CONCURRENTLY entity_node_id_gin_idx ON dickens."Entity" USING gin(properties);
ALTER TABLE dickens."DIRECTED" CLUSTER ON directed_sid_idx;
-- drop if necessary
DROP INDEX entity_p_idx;
DROP INDEX vertex_p_idx;

View File

@@ -89,7 +89,7 @@ def create_openai_async_client(
if base_url is not None:
merged_configs["base_url"] = base_url
else:
merged_configs["base_url"] = os.environ["OPENAI_API_BASE"]
merged_configs["base_url"] = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
return AsyncOpenAI(**merged_configs)
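The change above replaces a hard environment lookup (`os.environ["OPENAI_API_BASE"]`, which raises `KeyError` when the variable is unset) with a fallback to OpenAI's public endpoint. The same resolution order, shown in isolation (the function name is ours, not LightRAG's):

```python
import os

def resolve_base_url(base_url=None, default="https://api.openai.com/v1"):
    """Resolve the API base URL: an explicit argument wins, then the
    OPENAI_API_BASE environment variable, then the public OpenAI endpoint.
    Illustrative sketch of the fallback pattern, not LightRAG code."""
    if base_url is not None:
        return base_url
    return os.environ.get("OPENAI_API_BASE", default)
```

With `os.environ.get` the function never raises, so the client can be constructed in environments where `OPENAI_API_BASE` was never exported.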