Some enhancements:
- Enable the llm_cache storage to support get_by_mode_and_id, which improves performance when a real KV server is used.
- Provide an option for developers to cache LLM responses while extracting entities from a document. This addresses the pain point that when the process fails partway, the LLM must be called again for chunks that were already processed, wasting time and money. With the new option enabled (it is disabled by default), those results are cached, which can significantly save time and money for beginners. A usage sketch is shown below.
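As a minimal usage sketch (the new flag comes from this commit; the working directory path, the model function import, and the sample text are assumptions for illustration, not part of the change):

from lightrag import LightRAG
from lightrag.llm import gpt_4o_mini_complete  # assumed example completion function

# Sketch: enable caching of LLM responses during entity extraction, so a failed
# indexing run can be resumed without re-paying for chunks that already finished.
rag = LightRAG(
    working_dir="./rag_storage",                # hypothetical working directory
    llm_model_func=gpt_4o_mini_complete,        # any LLM completion function works here
    enable_llm_cache=True,                      # existing option, on by default
    enable_llm_cache_for_entity_extract=True,   # new option, off by default
)

rag.insert("Some document text to index...")    # extraction responses are now cached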
@@ -176,6 +176,8 @@ class LightRAG:
     vector_db_storage_cls_kwargs: dict = field(default_factory=dict)
 
     enable_llm_cache: bool = True
+    # Sometimes the LLM fails while extracting entities; with this flag we can reuse cached responses and continue without paying the LLM cost again
+    enable_llm_cache_for_entity_extract: bool = False
 
     # extension
     addon_params: dict = field(default_factory=dict)
@@ -402,6 +404,7 @@ class LightRAG:
             knowledge_graph_inst=self.chunk_entity_relation_graph,
             entity_vdb=self.entities_vdb,
             relationships_vdb=self.relationships_vdb,
+            llm_response_cache=self.llm_response_cache,
             global_config=asdict(self),
         )
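The diff above does not show the storage-side change for the first enhancement. A minimal sketch of what a get_by_mode_and_id lookup could look like on a JSON-file-backed KV store follows; the class name, field names, and file layout are assumptions for illustration, not the project's actual implementation:

import json
import os
from typing import Union

class SimpleJsonKVStorage:
    """Hypothetical JSON-backed KV store keyed as data[mode][id] for LLM-cache entries."""

    def __init__(self, file_path: str):
        self._file_path = file_path
        self._data = {}
        if os.path.exists(file_path):
            with open(file_path, encoding="utf-8") as f:
                self._data = json.load(f)

    async def get_by_mode_and_id(self, mode: str, id: str) -> Union[dict, None]:
        # Fetch a single cached LLM response by cache mode (e.g. "default") and entry id,
        # instead of loading the whole mode bucket, which matters on a real KV server.
        bucket = self._data.get(mode)
        if bucket is None:
            return None
        return bucket.get(id)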