From b63c6155ee0d2e5d8504c1c723b86cec342ae7a8 Mon Sep 17 00:00:00 2001
From: Magic_yuan <317617749@qq.com>
Date: Wed, 11 Dec 2024 14:10:27 +0800
Subject: [PATCH] style(lightrag): Adjust README, add custom entity type parameter configuration example
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index a1454792..a24c9b72 100644
--- a/README.md
+++ b/README.md
@@ -594,7 +594,7 @@ if __name__ == "__main__":
| **llm\_model\_kwargs** | `dict` | Additional parameters for LLM generation | |
| **vector\_db\_storage\_cls\_kwargs** | `dict` | Additional parameters for vector database (currently not used) | |
| **enable\_llm\_cache** | `bool` | If `True`, stores LLM results in cache; repeated prompts return cached responses | `True` |
-| **addon\_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese"}`: sets example limit and output language | `example_number: all examples, language: English` |
+| **addon\_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"]}`: sets the example limit, output language, and entity types used for extraction | `example_number: all examples, language: English` |
| **convert\_response\_to\_json\_func** | `callable` | Not used | `convert_response_to_json` |
| **embedding\_cache\_config** | `dict` | Configuration for question-answer caching. Contains three parameters:<br>- `enabled`: Boolean value to enable/disable cache lookup functionality. When enabled, the system will check cached responses before generating new answers.<br>- `similarity_threshold`: Float value (0-1), similarity threshold. When a new question's similarity with a cached question exceeds this threshold, the cached answer will be returned directly without calling the LLM.<br>- `use_llm_check`: Boolean value to enable/disable LLM similarity verification. When enabled, the LLM will be used as a secondary check to verify the similarity between questions before returning cached answers. | `{"enabled": False, "similarity_threshold": 0.95, "use_llm_check": False}` |
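
For reference, a minimal sketch of how the two dict-valued parameters documented in this table might be passed at construction time. This assumes the constructor keyword arguments match the parameter names in the table and that `gpt_4o_mini_complete` is importable from `lightrag.llm`; adjust both to match your installed version.

```python
from lightrag import LightRAG
from lightrag.llm import gpt_4o_mini_complete  # assumed bundled LLM helper

# addon_params mirrors the README example added by this patch;
# "entity_types" customizes which entity categories extraction targets.
rag = LightRAG(
    working_dir="./rag_storage",
    llm_model_func=gpt_4o_mini_complete,
    addon_params={
        "example_number": 1,  # cap the number of few-shot examples in prompts
        "language": "Simplified Chinese",  # output language
        "entity_types": ["organization", "person", "geo", "event"],
    },
    embedding_cache_config={
        "enabled": True,  # look up cached answers before calling the LLM
        "similarity_threshold": 0.95,  # reuse a cached answer above this similarity
        "use_llm_check": False,  # skip the secondary LLM verification step
    },
)
```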