Some enhancements:

- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server.
- Provide an option for developers to cache the LLM response when extracting entities from a document. This solves the pain point that when the process fails partway, the already-processed chunks require calling the LLM again, wasting time and money. With the new option enabled (disabled by default), those results are cached, which can significantly save time and money for beginners.
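The first bullet can be illustrated with a minimal sketch of a keyed cache lookup. The class and method bodies below are hypothetical, assuming entries are keyed by a (mode, id) pair; only the method name `get_by_mode_and_id` comes from the commit, the rest is illustrative.

```python
# Hypothetical sketch of an LLM cache supporting get_by_mode_and_id.
# Assumes entries are keyed by (mode, id); names other than
# get_by_mode_and_id are illustrative, not the project's actual API.
class LLMCache:
    def __init__(self):
        self._store = {}  # maps (mode, id) -> cached LLM response

    def put(self, mode, id_, response):
        self._store[(mode, id_)] = response

    def get_by_mode_and_id(self, mode, id_):
        # A direct keyed lookup avoids scanning every entry,
        # which matters when the backend is a real KV server.
        return self._store.get((mode, id_))


cache = LLMCache()
cache.put("entity_extraction", "chunk-1", {"entities": ["Alice"]})
print(cache.get_by_mode_and_id("entity_extraction", "chunk-1"))
```

On a cache hit, the stored response is returned without calling the LLM again; a miss returns `None`, signalling that the chunk still needs processing.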
Samuel Chan
2025-01-06 12:50:05 +08:00
parent 6c1b669f0f
commit 6ae27d8f06
7 changed files with 182 additions and 70 deletions

contributor-readme.MD
@@ -0,0 +1,12 @@
# Handy Tips for Developers Who Want to Contribute to the Project
## Pre-commit Hooks
Please ensure you have run pre-commit hooks before committing your changes.
### Guides
1. **Installing Pre-commit Hooks**:
- Install pre-commit using pip: `pip install pre-commit`
- Initialize pre-commit in your repository: `pre-commit install`
- Run pre-commit hooks: `pre-commit run --all-files`
2. **Pre-commit Hooks Configuration**:
- Create a `.pre-commit-config.yaml` file in the root of your repository.
- Add your hooks to the `.pre-commit-config.yaml` file.
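A minimal `.pre-commit-config.yaml` might look like the following. The hooks shown are generic examples from the standard pre-commit-hooks repository, not necessarily the ones this project uses; check the repository's existing configuration for the actual hook list.

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0  # pin a released tag; update as appropriate
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```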