Saifeddine ALOUI
92d45ffedc
Update README.md
2025-01-24 21:14:31 +01:00
Saifeddine ALOUI
223439c648
Update README.md
2025-01-24 21:12:58 +01:00
Saifeddine ALOUI
c44ccde09a
Update README.md
2025-01-24 21:08:46 +01:00
Saifeddine ALOUI
cbbde7d1ce
Update README.md
2025-01-24 21:06:57 +01:00
Saifeddine ALOUI
d7cb57a50f
New Logo
2025-01-24 21:05:11 +01:00
Magic_yuan
5719aa8882
Support multi-turn conversations
2025-01-24 19:03:36 +08:00
yangdx
da9eaafcb7
Fix API docs link
2025-01-23 01:53:59 +08:00
Nick French
76352867d6
Fixed API docs link in readme.md
2025-01-21 10:35:16 -05:00
Nick French
6549b9c950
Fixing API Docs link in Readme.md
2025-01-21 10:32:25 -05:00
zrguo
28a84b2aa2
Merge pull request #592 from danielaskdd/yangdx
...
Add Ollama compatible API server
2025-01-17 14:29:31 +08:00
Saifeddine ALOUI
cbd02bbbab
Added link to documentation from main README.md
2025-01-16 22:16:34 +01:00
Saifeddine ALOUI
2c3ff234e9
Moving extended api documentation to new doc folder
2025-01-16 22:14:16 +01:00
yangdx
d15753d51a
Merge branch 'main' into yangdx
2025-01-16 20:20:09 +08:00
Gurjot Singh
2ea104d738
Fix linting errors
2025-01-16 11:31:22 +05:30
Gurjot Singh
dd105d47fa
Update README.md to include a detailed explanation of the new query_with_separate_keyword_extraction function.
2025-01-16 11:15:21 +05:30
Samuel Chan
efe0644212
Merge branch 'HKUDS:main' into main
2025-01-16 07:52:35 +08:00
yangdx
ae9e37a120
Merge remote-tracking branch 'origin/main' into yangdx
2025-01-16 01:50:46 +08:00
✨Data Intelligence Lab@HKU✨
8f0196f6b9
Update README.md
2025-01-15 13:08:07 +08:00
Samuel Chan
2b7d253117
Merge remote-tracking branch 'origin/main'
2025-01-15 12:09:05 +08:00
Samuel Chan
d91a330e9d
Enrich README.md for Postgres usage; make some changes to cater to Python versions < 12
2025-01-15 12:02:55 +08:00
yangdx
0bfeb237e3
Create yangdx branch and add test scripts
2025-01-14 23:04:41 +08:00
zrguo
867475fd1f
Update README.md
2025-01-13 10:28:19 +08:00
Samuel Chan
cfaffb17a3
Merge remote-tracking branch 'origin/main'
2025-01-12 17:08:18 +08:00
Samuel Chan
f3e0fb87f5
Add known issue of Apache AGE to the readme.
2025-01-12 17:01:31 +08:00
Samuel Chan
63a71c04fd
Add known issue of Apache AGE to the readme.
2025-01-12 16:56:30 +08:00
zrguo
a2e96b67e9
Merge pull request #570 from ShanGor/main
...
Revise the AGE usage for postgres_impl
2025-01-12 13:23:06 +08:00
Samuel Chan
1998a5b204
Merge remote-tracking branch 'origin/main'
...
# Conflicts:
# README.md
2025-01-11 10:40:09 +08:00
Samuel Chan
d03d6f5fc5
Revised the Postgres implementation to use attributes (node_id) rather than nodes to identify an entity, which significantly reduces the table count.
2025-01-11 09:30:19 +08:00
Saifeddine ALOUI
e21fbef60b
Updated documentation
2025-01-10 22:38:57 +01:00
Saifeddine ALOUI
2297007b7b
Simplified the API services (issue #565)
2025-01-10 20:30:58 +01:00
zrguo
9e7784ab8a
Update README.md
2025-01-08 18:17:32 +08:00
Samuel Chan
196350b75b
Revise the readme to fix the broken link.
2025-01-07 07:02:37 +08:00
✨Data Intelligence Lab@HKU✨
22e9f1cd89
Update README.md
2025-01-06 23:21:02 +08:00
✨Data Intelligence Lab@HKU✨
e415f88bd4
Update README.md
2025-01-06 23:20:26 +08:00
zrguo
916380e511
Update README.md
2025-01-06 15:39:44 +08:00
zrguo
e2a4819af9
Update README.md
2025-01-06 15:37:37 +08:00
Samuel Chan
6ae27d8f06
Some enhancements:
...
- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server
- Provide an option for developers to cache LLM responses when extracting entities from a document. This addresses the pain point that when the process fails partway, already-processed chunks must be sent to the LLM again, wasting time and money. With the new option enabled (disabled by default), those results are cached, which can significantly save time and money for beginners.
2025-01-06 12:50:05 +08:00
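The get_by_mode_and_id lookup described in the commit above amounts to a two-level key-value store. A minimal sketch, assuming a `{mode: {id: value}}` layout; `SimpleKVCache` is an illustrative name, not LightRAG's actual storage class:

```python
class SimpleKVCache:
    """Toy KV cache illustrating a mode-scoped lookup."""

    def __init__(self):
        self._data = {}  # {mode: {id: value}}

    def upsert(self, mode, key, value):
        # Create the per-mode bucket on first use, then store the value.
        self._data.setdefault(mode, {})[key] = value

    def get_by_mode_and_id(self, mode, key):
        # Return None on a cache miss instead of raising KeyError.
        return self._data.get(mode, {}).get(key)
```

A real KV server would back `_data` with its own namespace-plus-key API, which is where the performance gain comes from.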
Saifeddine ALOUI
b15c398889
Applied linting
2025-01-04 02:23:39 +01:00
Saifeddine ALOUI
518a8a726a
Added server protection using an API key to restrict access to authenticated clients only.
2025-01-04 02:21:37 +01:00
zrguo
71e9267f4b
Update README.md
2024-12-31 17:25:57 +08:00
Magic_yuan
aaaf617451
feat(lightrag): Implement mix search mode combining knowledge graph and vector retrieval
...
- Add 'mix' mode to QueryParam for hybrid search functionality
- Implement mix_kg_vector_query to combine knowledge graph and vector search results
- Update LightRAG class to handle 'mix' mode queries
- Enhance README with examples and explanations for the new mix search mode
- Introduce new prompt structure for generating responses based on combined search results
2024-12-28 11:56:28 +08:00
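Combining knowledge-graph and vector hits as described above can be pictured as interleaving two ranked lists and deduplicating. A minimal sketch, assuming both sources return ranked lists of hashable items; `mix_results` is an illustrative name, not the actual `mix_kg_vector_query` implementation:

```python
from itertools import chain, zip_longest

def mix_results(kg_hits, vector_hits, top_k=5):
    """Interleave KG and vector hits, drop duplicates, keep the top_k.

    Each source's internal ranking is preserved; alternating between
    sources gives both retrieval paths a voice in the final context.
    """
    merged, seen = [], set()
    for hit in chain.from_iterable(zip_longest(kg_hits, vector_hits)):
        if hit is not None and hit not in seen:  # None pads the shorter list
            seen.add(hit)
            merged.append(hit)
    return merged[:top_k]
```

The real mode additionally feeds the merged context through a dedicated prompt, per the last bullet above.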
Magic_yuan
e6b2f68e7c
docs(readme): Add batch size configuration documentation
...
- Add documentation for insert_batch_size parameter in addon_params
- Explain default batch size value and its usage
- Add example configuration for batch processing
2024-12-28 00:16:53 +08:00
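The insert_batch_size behavior documented above amounts to splitting a document list into fixed-size chunks before insertion. A minimal sketch, assuming a list input; `iter_batches` is an illustrative helper, not LightRAG's internal function:

```python
def iter_batches(docs, insert_batch_size=10):
    # Yield successive slices of at most insert_batch_size documents,
    # mirroring how addon_params={"insert_batch_size": N} would split
    # one large insert into several smaller calls.
    for start in range(0, len(docs), insert_batch_size):
        yield docs[start:start + insert_batch_size]
```

The final batch may be smaller than insert_batch_size; callers should not assume uniform batch lengths.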
Saifeddine ALOUI
f2b52a2a38
Added Azure OpenAI LightRAG server to the API install and merged the documentation.
2024-12-26 21:32:56 +01:00
Saifeddine ALOUI
69b3f0b37b
Fixed the default lollms server port number
2024-12-24 11:33:28 +01:00
Saifeddine ALOUI
9951f8584a
Added the API as an installation option, reorganized the API, and merged all documentation into README.md
2024-12-24 10:31:12 +01:00
Magic_yuan
b63c6155ee
style(lightrag): Adjust README to add a configuration example for custom entity type parameters
2024-12-11 14:10:27 +08:00
Magic_yuan
ccf44dc334
feat(cache): Add LLM similarity check and optimize the caching mechanism
...
- Add use_llm_check parameter to the embedding cache configuration
- Implement LLM similarity check logic as a secondary validation for cache hits
- Optimize the cache handling flow for naive mode
- Adjust the cache data structure, removing the unnecessary model field
2024-12-08 17:35:52 +08:00
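The two-stage hit validation described above — embedding similarity first, then an optional LLM check — can be sketched as follows. This is an illustrative outline, not LightRAG's implementation: `lookup`, the cache layout, and `llm_verify` are all assumed names, and the cosine threshold is a placeholder:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def lookup(cache, query_emb, threshold=0.95, use_llm_check=False, llm_verify=None):
    # Stage 1: find the cached entry most similar to the query embedding.
    best = max(cache, key=lambda e: cosine(e["embedding"], query_emb), default=None)
    if best is None or cosine(best["embedding"], query_emb) < threshold:
        return None  # no sufficiently similar entry: cache miss
    # Stage 2 (optional): ask an LLM to confirm the hit really answers
    # the query, rejecting near-miss embeddings that pass the threshold.
    if use_llm_check and llm_verify is not None and not llm_verify(best):
        return None
    return best["answer"]
```

The second stage trades an extra (cheap) LLM call for fewer false cache hits.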
magicyuan876
6540d11096
Fix bug where args_hash was only computed when using the regular cache, so it was never computed for the embedding cache
2024-12-06 10:21:53 +08:00
magicyuan876
d48c6e4588
feat(lightrag): Add embedding cache support for queries
...
- Add embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating the use of the embedding cache
2024-12-06 08:17:20 +08:00
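The quantization/dequantization pair mentioned above can be sketched as simple min-max quantization to 8-bit integer codes. The min-max scheme and the function names are assumptions for illustration, not necessarily the scheme the commit implements:

```python
def quantize(embedding, bits=8):
    # Map each float to an integer code in [0, 2^bits - 1], keeping the
    # (lo, hi) range so the vector can be approximately reconstructed.
    lo, hi = min(embedding), max(embedding)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in embedding]
    return codes, lo, hi

def dequantize(codes, lo, hi, bits=8):
    # Invert the mapping; reconstruction error is at most half a step.
    scale = (hi - lo) / ((1 << bits) - 1)
    return [c * scale + lo for c in codes]
```

At 8 bits this shrinks a float32 embedding roughly 4x, at the cost of a small, bounded reconstruction error.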
Larfii
e99832cc13
Fix: unexpected keyword argument error.
2024-12-05 14:11:43 +08:00