From e607ff833caf1813eb7fa4a7a2d0f9cb94b34390 Mon Sep 17 00:00:00 2001
From: Larfii <834462287@qq.com>
Date: Tue, 8 Oct 2024 16:08:45 +0800
Subject: [PATCH] update README.md

---
 README.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 0cff8218..1cea1d2b 100644
--- a/README.md
+++ b/README.md
@@ -65,7 +65,7 @@ The dataset used in LightRAG can be download from [TommyChien/UltraDomain](https
 ### Generate Query
 LightRAG uses the following prompt to generate high-level queries, with the corresponding code located in `example/generate_query.py`.
 
-```json
+```python
 Given the following description of a dataset:
 
 {description}
@@ -91,7 +91,7 @@ Output the results in the following structure:
 ### Batch Eval
 To evaluate the performance of two RAG systems on high-level queries, LightRAG uses the following prompt, with the specific code available in `example/batch_eval.py`.
 
-```json
+```python
 ---Role---
 You are an expert tasked with evaluating two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.
 ---Goal---
@@ -160,7 +160,7 @@ Output your evaluation in the following JSON format:
 
 ## Code Structure
 
-```json
+```python
 .
 ├── examples
 │   ├── batch_eval.py
@@ -183,7 +183,7 @@
 ```
 
 ## Citation
-```json
+```python
 @article{guo2024lightrag,
 title={LightRAG: Simple and Fast Retrieval-Augmented Generation},
 author={Zirui Guo and Lianghao Xia and Yanhua Yu and Tu Ao and Chao Huang},
@@ -193,3 +193,4 @@ archivePrefix={arXiv},
 primaryClass={cs.IR}
 }
 ```
+
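For anyone trying out the relabeled Generate Query block, here is a minimal sketch of how that prompt template might be driven from Python. It is illustrative only: the `generate_queries` helper, the model name, and the sample dataset description are assumptions, the elided portions of the template are exactly the README text quoted in the patch above, and the repository's authoritative implementation is `example/generate_query.py`.

```python
# Minimal sketch, not the repository's implementation -- see
# example/generate_query.py for the authoritative version. Assumes the
# `openai` package (v1+ client API) and an OPENAI_API_KEY in the
# environment; the model name is a placeholder.
from openai import OpenAI

# Only the opening and closing of the README's prompt are reproduced here;
# the elided remainder (marked ...) is the rest of the template shown in
# the hunks above.
PROMPT_TEMPLATE = """Given the following description of a dataset:

{description}

...

Output the results in the following structure:
..."""


def generate_queries(description: str, model: str = "gpt-4o-mini") -> str:
    """Fill the template with a dataset description and request high-level queries."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": PROMPT_TEMPLATE.format(description=description),
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical dataset description, purely for demonstration.
    print(generate_queries("A corpus of agricultural research articles."))
```

Any other chat-completion client or local model wrapper could be swapped in; only the prompt text comes from the README itself.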