From 41f76ec4592eca9873e9f94ea9a5545c51f266e8 Mon Sep 17 00:00:00 2001
From: Yannick Stephan
Date: Sun, 9 Feb 2025 01:05:27 +0100
Subject: [PATCH] updated readme

---
 README.md | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 850cacd3..480b8d00 100644
--- a/README.md
+++ b/README.md
@@ -355,16 +355,26 @@ In order to run this experiment on low RAM GPU you should select small model and
 ```python
 class QueryParam:
     mode: Literal["local", "global", "hybrid", "naive", "mix"] = "global"
+    """Specifies the retrieval mode:
+    - "local": Focuses on context-dependent information.
+    - "global": Utilizes global knowledge.
+    - "hybrid": Combines local and global retrieval methods.
+    - "naive": Performs a basic search without advanced techniques.
+    - "mix": Integrates knowledge graph and vector retrieval.
+    """
     only_need_context: bool = False
+    """If True, only returns the retrieved context without generating a response."""
     response_type: str = "Multiple Paragraphs"
-    # Number of top-k items to retrieve; corresponds to entities in "local" mode and relationships in "global" mode.
+    """Defines the response format. Examples: 'Multiple Paragraphs', 'Single Paragraph', 'Bullet Points'."""
     top_k: int = 60
-    # Number of tokens for the original chunks.
+    """Number of top items to retrieve. Represents entities in 'local' mode and relationships in 'global' mode."""
     max_token_for_text_unit: int = 4000
-    # Number of tokens for the relationship descriptions
+    """Maximum number of tokens allowed for each retrieved text chunk."""
     max_token_for_global_context: int = 4000
-    # Number of tokens for the entity descriptions
+    """Maximum number of tokens allocated for relationship descriptions in global retrieval."""
     max_token_for_local_context: int = 4000
+    """Maximum number of tokens allocated for entity descriptions in local retrieval."""
+    ...
 ```
 
 > default value of Top_k can be change by environment variables TOP_K.
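
The TOP_K environment-variable override noted at the end of the hunk can be sketched as follows. This is an illustrative, self-contained approximation only — `QueryParamSketch` and `_default_top_k` are made-up names for this sketch, not part of the library's actual implementation:

```python
import os
from dataclasses import dataclass, field
from typing import Literal

def _default_top_k() -> int:
    # Fall back to 60 (the documented default) when TOP_K is unset.
    return int(os.environ.get("TOP_K", "60"))

@dataclass
class QueryParamSketch:
    # Field names and defaults mirror the QueryParam block in the diff above.
    mode: Literal["local", "global", "hybrid", "naive", "mix"] = "global"
    only_need_context: bool = False
    response_type: str = "Multiple Paragraphs"
    # default_factory re-reads TOP_K at each instantiation, so the
    # environment variable takes effect without any explicit argument.
    top_k: int = field(default_factory=_default_top_k)
    max_token_for_text_unit: int = 4000
    max_token_for_global_context: int = 4000
    max_token_for_local_context: int = 4000

os.environ["TOP_K"] = "25"
param = QueryParamSketch(mode="hybrid")
print(param.top_k)  # 25
```

An explicit `QueryParamSketch(top_k=10)` would still win over the environment variable, since a passed argument bypasses the default factory.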