278 Commits

Author SHA1 Message Date
Martin Perez-Guevara
3d418d95c5 feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
This pull request demonstrates how to create a new Opik project when using LiteLLM for LlamaIndex-based LLM calls. The primary goal is to enable detailed tracing, monitoring, and logging of LLM interactions under a designated Opik `project_name`, particularly when using LiteLLM as an API proxy. This enhancement allows for better debugging, performance analysis, and observability when using LightRAG with LiteLLM and Opik.

**Motivation:**

As our application's reliance on Large Language Models (LLMs) grows, robust observability becomes crucial for maintaining system health, optimizing performance, and understanding usage patterns. Integrating Opik provides the following key benefits:

1.  **Improved Debugging:** Enables end-to-end tracing of requests through the LlamaIndex and LiteLLM layers, making it easier to identify and resolve issues or performance bottlenecks.
2.  **Comprehensive Performance Monitoring:** Allows for the collection of vital metrics such as LLM call latency, token usage, and error rates. This data can be filtered and analyzed within Opik using project names and tags.
3.  **Effective Cost Management:** Facilitates tracking of token consumption associated with specific requests or projects, leading to better cost control and optimization.
4.  **Deeper Usage Insights:** Provides a clearer understanding of how different components of the application or various projects are utilizing LLM capabilities.

These changes empower developers to seamlessly add observability to their LlamaIndex-based LLM workflows, especially when leveraging LiteLLM, by passing necessary Opik metadata.

**Changes Made:**

1.  **`lightrag/llm/llama_index_impl.py`:**
    *   Modified the `llama_index_complete_if_cache` function:
    *   The `**kwargs` parameter, which previously handled additional arguments, has been refined. A dedicated `chat_kwargs` parameter (defaulting to `{}`) is now used to pass keyword arguments directly to the `model.achat()` method. This change ensures that vendor-specific parameters, such as LiteLLM's `litellm_params` for Opik metadata, are correctly propagated.
        *   The logic for retrieving `llm_instance` from `kwargs` was removed as `model` is now a direct parameter, simplifying the function.
    *   Updated the `llama_index_complete` function:
        *   Ensured that `**kwargs` (which may include `chat_kwargs` or other parameters intended for `llama_index_complete_if_cache`) are correctly passed down.

2.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py`:**
    *   This existing demo file was updated to align with the changes in `llama_index_impl.py`.
    *   The `llm_model_func` now passes an empty `chat_kwargs={}` by default to `llama_index_complete_if_cache` if no specific chat arguments are needed, maintaining compatibility with the updated function signature. This file serves as a baseline example without Opik integration.

3.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py` (New File):**
    *   A new example script has been added to specifically demonstrate the integration of LightRAG with LlamaIndex, LiteLLM, and Opik for observability.
    *   The `llm_model_func` in this demo showcases how to construct the `chat_kwargs` dictionary.
    *   It includes `litellm_params` with a `metadata` field for Opik, containing `project_name` and `tags`. This provides a clear example of how to send observability data to Opik.
    *   The call to `llama_index_complete_if_cache` within `llm_model_func` passes these `chat_kwargs`, ensuring Opik metadata is included in the LiteLLM request.

These modifications provide a more robust and extensible way to pass parameters to the underlying LLM calls, specifically enabling the integration of observability tools like Opik.
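The shape of the `chat_kwargs` payload described above can be sketched as follows. This is an illustrative helper, not code from the PR: the function name `build_opik_chat_kwargs` is hypothetical, and the exact nesting of the Opik `metadata` fields is an assumption based on the description of `litellm_params` carrying `project_name` and `tags`.

```python
def build_opik_chat_kwargs(project_name, tags):
    """Build a chat_kwargs dict of the kind the updated
    llama_index_complete_if_cache forwards to model.achat().
    The litellm_params/metadata nesting is assumed, not verbatim from the PR."""
    return {
        "litellm_params": {
            "metadata": {
                "opik": {
                    "project_name": project_name,
                    "tags": list(tags),
                }
            }
        }
    }

# Example payload, mirroring the Opik demo's llm_model_func:
chat_kwargs = build_opik_chat_kwargs(
    "lightrag-observability-demo", ["lightrag", "litellm"]
)
```

Passing this dictionary as `chat_kwargs` keeps vendor-specific parameters out of the generic `**kwargs` path while still reaching the underlying LiteLLM request.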

Co-authored-by: Martin Perez-Guevara <8766915+MartinPerez@users.noreply.github.com>
Co-authored-by: Young Jin Kim <157011356+jidodata-ykim@users.noreply.github.com>
2025-05-20 17:47:05 +02:00
yangdx
d97da6068a Fix linting 2025-05-20 17:57:42 +08:00
yangdx
e492394fb6 Fix linting 2025-05-20 17:56:52 +08:00
yangdx
7263a1ccf9 Fix linting 2025-05-18 07:17:21 +08:00
sa9arr
36b606d0db Fix: Correct GraphML to JSON mapping in xml_to_json function 2025-05-17 19:32:25 +05:45
yangdx
284e8aac79 Remove deprecated demo code 2025-05-14 01:57:20 +08:00
yangdx
ba26b82d40 Remove deprecated demo code 2025-05-14 01:56:26 +08:00
yangdx
0e26cbebd0 Fix linting 2025-05-14 01:14:45 +08:00
yangdx
5c9fd9c4d2 Update Ollama sample code 2025-05-14 01:14:15 +08:00
yangdx
aa36894d6e Remove deprecated demo code 2025-05-14 00:36:38 +08:00
yangdx
ab75027b22 Remove deprecated demo code 2025-05-13 23:59:00 +08:00
yangdx
43948d6f17 Update openai demo 2025-05-13 18:27:55 +08:00
yangdx
461c76ce28 Update openai compatible demo 2025-05-13 17:48:45 +08:00
yangdx
5c533f5e1a Fix linting 2025-05-13 00:08:21 +08:00
Ben Luo
b8d59a262f Add Tongyi OpenAI demo to use Qwen
qwen-turbo-latest (currently Qwen3) is now supported

Signed-off-by: Ben Luo <bn0418@gmail.com>
2025-05-05 12:46:37 +08:00
yangdx
3117bc2e4a Remove buggy example files 2025-04-30 18:48:41 +08:00
yangdx
3a6109d07c Fix linting in examples folder 2025-04-30 10:39:55 +08:00
yangdx
6716e19d5c Fix linting 2025-04-21 01:22:23 +08:00
yangdx
bd18c9c8ad Update sample code in README.md 2025-04-21 01:22:04 +08:00
yangdx
0c6e80cce9 Add finalize_storages to sample code 2025-04-21 00:25:13 +08:00
yangdx
e0f0d23e5a Update sample code for OpenAI and OpenAI compatible 2025-04-21 00:09:05 +08:00
yangdx
21f5a3923e Add log support for OpenAI demo 2025-04-20 22:03:30 +08:00
yangdx
697401fdc3 Change OpenAI demo to async 2025-04-20 21:39:51 +08:00
drahnreb
9c6b5aefcb fix linting 2025-04-18 16:24:43 +02:00
drahnreb
0e6771b503 add: GemmaTokenizer example 2025-04-18 16:24:43 +02:00
yangdx
247be483eb Merge branch 'main' into clear-doc 2025-04-04 05:45:06 +08:00
yangdx
df07c2a8b1 Remove Gremlin storage implementation 2025-04-02 14:43:53 +08:00
yangdx
013be621d5 Remove TiDB storage implementation 2025-04-02 14:40:27 +08:00
yangdx
ce74879258 Remove api demo (reference to LightRAG Server instead) 2025-04-01 18:17:17 +08:00
yangdx
1e31b26cbe Remove Oracle storage implementation 2025-04-01 18:15:29 +08:00
choizhang
164faf94e2 feat(TokenTracker): Add context manager support to simplify token tracking 2025-03-30 00:59:23 +08:00
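The context-manager pattern this commit describes can be illustrated with a minimal self-contained sketch. This is not LightRAG's actual `TokenTracker` implementation; the class body and method names here are assumptions made to show the pattern of accumulating usage inside a `with` block and reporting on exit.

```python
class TokenTracker:
    """Illustrative stand-in for LightRAG's TokenTracker: accumulates
    token usage across LLM calls and reports a total on context exit."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def add_usage(self, prompt, completion):
        # Called after each LLM response with its reported usage counts.
        self.prompt_tokens += prompt
        self.completion_tokens += completion

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Report accumulated usage when the tracked block ends.
        print(f"total tokens: {self.total_tokens}")
        return False  # do not suppress exceptions


with TokenTracker() as tracker:
    tracker.add_usage(120, 45)  # e.g. recorded after one LLM call
# prints "total tokens: 165"
```

The context manager removes the need to remember a manual "start/stop" pair around a batch of LLM calls, which is the simplification the commit message refers to.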
choizhang
8488229a29 feat: Add TokenTracker to track token usage for LLM calls 2025-03-28 01:25:15 +08:00
omdivyatej
f049f2f5c4 linting errors 2025-03-25 15:20:09 +05:30
omdivyatej
f87c235a4c less comments 2025-03-23 21:42:56 +05:30
omdivyatej
3522da1b21 specify LLM for query 2025-03-23 21:33:49 +05:30
zrguo
32a7d40650 Update lightrag_openai_neo4j_milvus_redis_demo.py 2025-03-09 02:11:31 +08:00
Samuel Chan
b7f67eda21 fix the postgres get all labels and get knowledge graph 2025-03-08 11:45:59 +08:00
zrguo
6c8fa95214 fix demo 2025-03-04 12:25:07 +08:00
zrguo
ef2a5ad191 fix linting 2025-03-03 18:40:03 +08:00
zrguo
1611400854 fix demo 2025-03-03 18:33:42 +08:00
Samuel Chan
25342250d6 Fix the demo issue of PG to cater with new LightRag changes 2025-02-21 20:53:00 +08:00
Yannick Stephan
0d4c580859 Merge pull request #900 from YanSte/cleanup-3
Database Cleanup
2025-02-20 14:22:31 +01:00
Yannick Stephan
214e3e8ad5 fixed last update 2025-02-20 14:12:19 +01:00
Yannick Stephan
1c3a4944d3 Merge pull request #898 from YanSte/update
Database Cleanup
2025-02-20 13:35:37 +01:00
Yannick Stephan
38dc2466da cleanup 2025-02-20 13:34:59 +01:00
Yannick Stephan
c7bc2c63cf cleanup storages 2025-02-20 13:21:41 +01:00
Yannick Stephan
32e489865c cleanup code 2025-02-20 13:18:17 +01:00
Pankaj Kaushal
173a806b9a Moved back to llm dir as per
https://github.com/HKUDS/LightRAG/pull/864#issuecomment-2669705946

- Created two new example scripts demonstrating LightRAG integration with LlamaIndex:
  - `lightrag_llamaindex_direct_demo.py`: Direct OpenAI integration
  - `lightrag_llamaindex_litellm_demo.py`: LiteLLM proxy integration
- Both examples showcase different search modes (naive, local, global, hybrid)
- Includes configuration for working directory, models, and API settings
- Demonstrates text insertion and querying using LightRAG with LlamaIndex
- removed wrapper directory and references to it
2025-02-20 10:23:01 +01:00
Pankaj Kaushal
277070e03b Linting and formatting 2025-02-20 10:23:01 +01:00
Pankaj Kaushal
8a06be9395 Add LlamaIndex Wrapper and Example Implementations
- Updated README.md with new Wrappers section detailing LlamaIndex integration
- Added LlamaIndex wrapper implementation in `lightrag/wrapper/llama_index_impl.py`
- Created two example scripts demonstrating LlamaIndex usage:
  - Direct OpenAI integration
  - LiteLLM proxy integration
- Added wrapper documentation in `lightrag/wrapper/Readme.md`
- Included comprehensive usage examples and configuration details
2025-02-20 10:23:01 +01:00