Some enhancements:

- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server (see the storage sketch after this list).
- Provide an option for developers to cache the LLM response when extracting entities from a document. This addresses the pain point that when the process fails partway, already-processed chunks require calling the LLM again, wasting money and time. With the new option enabled (it is disabled by default), those results are cached, which can significantly save time and money, especially for beginners (see the cache-check sketch after this list).
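
For the first item, a minimal sketch of what a mode-and-id lookup on a mode-bucketed KV store could look like; the class name, field layout, and method bodies here are illustrative assumptions, not the project's actual implementation:

```python
class JsonKVStorage:
    """Illustrative in-memory KV store laid out as {mode: {id: value}}."""

    def __init__(self):
        self._data = {}

    async def get_by_mode_and_id(self, mode, id):
        # Fetch a single cached entry directly by (mode, id) instead of
        # pulling every entry stored under the mode, which keeps round
        # trips cheap on a real KV server such as Redis or MongoDB.
        return self._data.get(mode, {}).get(id)

    async def upsert(self, data):
        # Merge new entries under their mode, creating the bucket on
        # first use.
        for mode, entries in data.items():
            self._data.setdefault(mode, {}).update(entries)
```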
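For the second item, a hedged sketch of the check-cache-before-calling-the-LLM pattern during entity extraction; the helper compute_cache_id, the "entity_extraction" mode name, and the cache entry layout are assumptions for illustration and may differ from the actual code:

```python
import hashlib


def compute_cache_id(*parts):
    # Hypothetical helper: derive a stable cache id from the prompt and
    # chunk text so the same chunk always maps to the same entry.
    return hashlib.md5("".join(parts).encode("utf-8")).hexdigest()


async def extract_entities_cached(chunk_text, prompt, llm_func, llm_cache):
    # Look in the cache first, so a re-run after a failure does not pay
    # for LLM calls on chunks that were already processed.
    cache_id = compute_cache_id(prompt, chunk_text)
    cached = await llm_cache.get_by_mode_and_id("entity_extraction", cache_id)
    if cached is not None:
        return cached["return"]

    result = await llm_func(prompt + chunk_text)
    await llm_cache.upsert(
        {"entity_extraction": {cache_id: {"return": result}}}
    )
    return result
```
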
Samuel Chan
2025-01-06 12:50:05 +08:00
parent 6c1b669f0f
commit 6ae27d8f06
7 changed files with 182 additions and 70 deletions

@@ -43,6 +43,7 @@ async def main():
         llm_model_name="glm-4-flashx",
         llm_model_max_async=4,
         llm_model_max_token_size=32768,
+        enable_llm_cache_for_entity_extract=True,
         embedding_func=EmbeddingFunc(
             embedding_dim=768,
             max_token_size=8192,