Commit Graph

1603 Commits

Author SHA1 Message Date
ArnoChen
3ae2719bfb improve graph viewer UI and rendering
clear data before loading new file in graph viewer

improve font load

fix

format
2025-02-04 14:59:57 +08:00
Saifeddine ALOUI
6a4e1b1401 fixed pipmaster install 2025-02-04 00:28:33 +01:00
Saifeddine ALOUI
14b5adc15c minor fix 2025-02-04 00:26:22 +01:00
Saifeddine ALOUI
61b06a3d1a linting 2025-02-03 23:17:43 +01:00
Saifeddine ALOUI
57af1d1815 Update README.md 2025-02-03 22:54:26 +01:00
Saifeddine ALOUI
a55cf5d1ee Update README.md 2025-02-03 22:52:34 +01:00
Saifeddine ALOUI
9a30dc7b04 Integrated the graphml visualizer as part of lightrag and made it a component that can be installed using the [tools] option 2025-02-03 22:51:46 +01:00
Saifeddine ALOUI
797b5fa463 Merge branch 'HKUDS:main' into main 2025-02-03 22:05:59 +01:00
zrguo
0c8a2bface Merge pull request #701 from RayWang1991/main
fix DocStatus issue
2025-02-04 00:16:34 +08:00
zrguo
d97ccbf298 Merge pull request #699 from ArnoChenFx/new-graph-visualizer
refactoring the graph visualizer tool
2025-02-04 00:14:28 +08:00
zrguo
4a7dc3af8f Merge pull request #698 from danielaskdd/add-bypass-support-ollama
Add query prefix "/bypass" for ollama api
2025-02-04 00:13:32 +08:00
zrguo
dca31df1e7 Merge pull request #697 from danielaskdd/improve-unit-test
Add http status check in unit tests.
2025-02-04 00:13:14 +08:00
zrguo
9659b98809 Merge pull request #696 from ultrageopro/main
Add the ability to specify a path for saving lightrag.log
2025-02-04 00:12:48 +08:00
ruirui
e825b079bc fix status error 2025-02-03 23:45:21 +08:00
ArnoChen
460a2cda02 refactoring the graph visualizer
bump version

format

format
2025-02-03 22:56:54 +08:00
Saifeddine ALOUI
da6864d9c6 Merge branch 'HKUDS:main' into main 2025-02-03 11:24:08 +01:00
ultrageopro
0284469fd4 doc: add information about log_dir parameter 2025-02-03 11:25:09 +03:00
yangdx
5cf875755a Update API endpoint documentation to clarify Ollama server compatibility
• Add Ollama server doc for /api/tags
• Update /api/generate endpoint docs
• Update /api/chat endpoint docs
2025-02-03 13:07:08 +08:00
yangdx
4ab02a878f Fix linting 2025-02-03 12:39:52 +08:00
yangdx
ede4122b63 docs: add documentation for /bypass prefix in LightRAG api 2025-02-03 12:25:59 +08:00
yangdx
a8f7b7e2b7 Add "/bypass" mode to skip context retrieval and directly use LLM
• Added SearchMode.bypass enum value
• Added /bypass prefix handler
• Skip RAG when in bypass mode
• Pass conversation history to LLM
• Apply bypass mode for both stream/non-stream
2025-02-03 11:49:17 +08:00
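The "/bypass" change above could be sketched roughly as follows. This is a hedged illustration, not the actual LightRAG code: `SearchMode`, `parse_query`, and `answer` are hypothetical names chosen to mirror the commit's bullet points (a `bypass` enum value, a prefix handler, and skipping RAG while still passing conversation history to the LLM).

```python
from enum import Enum

class SearchMode(Enum):
    hybrid = "hybrid"
    bypass = "bypass"  # skip context retrieval, talk to the LLM directly

def parse_query(raw: str) -> tuple[SearchMode, str]:
    """Strip a leading '/bypass' prefix and pick the search mode."""
    if raw.startswith("/bypass"):
        return SearchMode.bypass, raw[len("/bypass"):].lstrip()
    return SearchMode.hybrid, raw

def answer(raw: str, history: list, llm, rag) -> str:
    """Route a query: bypass mode skips RAG but keeps the history."""
    mode, query = parse_query(raw)
    if mode is SearchMode.bypass:
        # Skip retrieval entirely; only history + query reach the LLM.
        return llm(query, history=history)
    return rag(query, history=history)
```

The same routing would apply to both the streaming and non-streaming paths, per the commit message.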
yangdx
840639a873 Fix linting 2025-02-02 22:13:49 +08:00
yangdx
c316edb2d9 Add http status check for unit tests 2025-02-02 22:03:55 +08:00
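An HTTP status check of the kind this commit adds to the unit tests might look like the helper below. It is a sketch under assumptions, not the project's test code; the function name is hypothetical.

```python
def assert_http_ok(status_code: int) -> None:
    """Fail a test fast on any non-2xx HTTP status code."""
    if not (200 <= status_code < 300):
        raise AssertionError(f"unexpected HTTP status: {status_code}")
```

In a test, this would run immediately after each request, so failures surface as a clear status error instead of a confusing JSON-parsing error further down.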
yangdx
9f82ba4970 Remove error handling tests from "all" mode 2025-02-02 21:59:56 +08:00
ultrageopro
749003f380 fix: path for windows 2025-02-02 14:59:46 +03:00
ultrageopro
ba9c8cd734 fix: default log dir 2025-02-02 14:06:31 +03:00
ultrageopro
35c4115441 feat: custom log dir 2025-02-02 14:04:24 +03:00
zrguo
c07b5522fe Merge pull request #695 from ShanGor/main
Fix the bug from upstream main that uses doc['status'], and improve Apache AGE performance
2025-02-02 18:27:11 +08:00
zrguo
fade69a2ae Merge pull request #693 from danielaskdd/fix-concurrent-problem
Fixed concurrent problems for document indexing and user query
2025-02-02 18:26:42 +08:00
Samuel Chan
02ac96ff8e - Fix the bug from upstream main that uses doc['status']
- Improve the performance of Apache AGE.
- Revise the README.md for Apache AGE indexing.
2025-02-02 18:20:32 +08:00
Saifeddine ALOUI
c65dcff991 Fixed a typo 2025-02-02 09:47:05 +01:00
yangdx
7ea1856699 Add comment to clarify LLM cache setting for entity extraction 2025-02-02 07:29:01 +08:00
yangdx
6e1b5d6ce6 Merge branch 'main' into fix-concurrent-problem 2025-02-02 04:36:52 +08:00
yangdx
0a693dbfda Fix linting 2025-02-02 04:27:55 +08:00
yangdx
ecf48a5be5 Add embedding cache config and disable LLM cache for entity extraction for API Server 2025-02-02 04:27:21 +08:00
yangdx
6f5503ebd6 Update similarity_check prompt to avoid generating two scores sometimes 2025-02-02 04:22:43 +08:00
yangdx
8484564f50 Fix llm_model_func retrieval error. 2025-02-02 03:54:41 +08:00
yangdx
873b52d2e4 Add debug logging for cache response retrieval 2025-02-02 03:15:43 +08:00
yangdx
fdc9017ded Set embedding_func in all llm_response_cache 2025-02-02 03:14:07 +08:00
yangdx
bed5a97ae2 Fix prompt respond cache fail when is_embedding_cache_enabled is true 2025-02-02 03:09:06 +08:00
yangdx
5d14ab03eb Fix linting 2025-02-02 01:56:32 +08:00
yangdx
b45ae1567c Refactor LLM cache handling and entity extraction
- Removed custom LLM function in entity extraction
- Simplified cache handling logic
- Added `force_llm_cache` parameter
- Updated cache handling conditions
2025-02-02 01:28:46 +08:00
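The cache refactor above, together with the later "extract" cache-type commit, suggests a lookup shaped roughly like the sketch below. All of it is illustrative: the key scheme is an assumption, and the semantics of `force_llm_cache` (consulting the cache even when caching is otherwise disabled) are a guess from the commit message, not confirmed behavior.

```python
import hashlib

def cache_key(prompt: str, cache_type: str = "query") -> str:
    """Namespace cache entries, e.g. 'extract' for entity extraction."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return f"{cache_type}:{digest}"

def get_or_call(cache: dict, prompt: str, llm, *, cache_type: str = "query",
                enabled: bool = True, force_llm_cache: bool = False):
    """Return a cached LLM response, calling the LLM on a miss."""
    use_cache = enabled or force_llm_cache
    key = cache_key(prompt, cache_type)
    if use_cache and key in cache:
        return cache[key]
    result = llm(prompt)
    if use_cache:
        cache[key] = result
    return result
```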
yangdx
6c7d7c25d3 Refactor cache handling logic for better readability, keep function unchanged. 2025-02-02 00:10:21 +08:00
yangdx
c9481c81b9 Add cache type "extract" for entity extraction 2025-02-01 23:05:02 +08:00
yangdx
2d387fa6de Save keywords to cache only when it's not empty 2025-02-01 22:54:23 +08:00
yangdx
3c3cdba499 Fix typo error 2025-02-01 22:27:49 +08:00
yangdx
b87703aea6 Add embedding_func to llm_response_cache 2025-02-01 22:19:16 +08:00
yangdx
3bc7c4d8f1 Save cache_type to llm_response_cache 2025-02-01 22:18:59 +08:00
yangdx
c3942077a9 Use direct embedding_func from hashing_kv (do not bypass maximum async control) 2025-02-01 22:12:45 +08:00
yangdx
c98a675b6c remove unused param 2025-02-01 22:07:12 +08:00