feat: complete RAG pipeline integration (ingest + query + Memory)

Parser Service:
- Add /ocr/ingest endpoint (PARSER → RAG in one call)
- Add RAG_BASE_URL and RAG_TIMEOUT to config
- Add OcrIngestResponse schema
- Create file_converter utility for PDF/image → PNG bytes
- Endpoint accepts file, dao_id, doc_id, user_id
- Automatically parses with dots.ocr and sends to RAG Service
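The file_converter bullet above can be sketched as follows. This is a hypothetical sketch, not the actual module: the PDF branch (which needs a rasterizer such as pdf2image or PyMuPDF) is omitted, and only the image → PNG path is shown, assuming Pillow for the conversion.

```python
from io import BytesIO

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def to_png_bytes(data: bytes) -> bytes:
    """Normalize image bytes to PNG; PNG input passes through unchanged.

    Hypothetical sketch of the file_converter utility; the real module
    also handles PDFs, which is omitted here.
    """
    if data.startswith(PNG_MAGIC):
        return data  # already PNG, no re-encode needed
    from PIL import Image  # lazy import: only needed when converting
    img = Image.open(BytesIO(data))
    buf = BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()
```

Returning PNG input untouched avoids a lossy decode/re-encode cycle on the common case.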

Router Integration:
- Add _handle_rag_query() method in RouterApp
- Combines Memory + RAG → LLM pipeline
- Get Memory context (facts, events, summaries)
- Query RAG Service for documents
- Build prompt with Memory + RAG documents
- Call LLM provider with combined context
- Return answer with citations
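The prompt-building step of the pipeline above could look like the sketch below. The section labels and citation format (`[n]`) are assumptions for illustration, not taken from the actual RouterApp code.

```python
from typing import List

def build_prompt(memory_context: str, documents: List[str], question: str) -> str:
    """Assemble an LLM prompt from Memory context plus RAG document chunks.

    Hypothetical helper: numbers each document so the model can cite
    sources as [n] in its answer.
    """
    doc_block = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        "Use the context below to answer. Cite sources as [n].\n\n"
        f"## Memory\n{memory_context}\n\n"
        f"## Documents\n{doc_block}\n\n"
        f"## Question\n{question}"
    )
```

Numbering the chunks up front is what lets the "return answer with citations" step map `[n]` markers in the LLM output back to source documents.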

Clients:
- Create rag_client.py for Router (query RAG Service)
- Create memory_client.py for Router (get Memory context)
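A minimal shape for `rag_client.py` might be the following. The endpoint path, payload keys, and response shape are assumptions; the HTTP call is injected as a callable so the client logic stays transport-agnostic and testable.

```python
from typing import Any, Callable, Dict, List

def query_rag(
    post: Callable[[str, Dict[str, Any]], Dict[str, Any]],
    dao_id: str,
    question: str,
    top_k: int = 5,
) -> List[Dict[str, Any]]:
    """Query the RAG Service and return matching document chunks.

    Hypothetical sketch: `post(path, payload)` performs the HTTP POST
    and returns the decoded JSON body; "/query", "top_k", and "chunks"
    are illustrative names, not confirmed service contracts.
    """
    payload = {"dao_id": dao_id, "query": question, "top_k": top_k}
    body = post("/query", payload)
    return body.get("chunks", [])
```

In the Router, `post` would wrap an HTTP client pointed at `RAG_SERVICE_URL`.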

E2E Tests:
- Create e2e_rag_pipeline.sh script for full pipeline test
- Test ingest → query → router query flow
- Add E2E_RAG_README.md with usage examples

Docker:
- Add RAG_SERVICE_URL and MEMORY_SERVICE_URL to router environment
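The router environment addition would look roughly like this compose fragment. Service names and ports are illustrative; only the two variable names come from the commit.

```yaml
# Hypothetical docker-compose fragment; only RAG_SERVICE_URL and
# MEMORY_SERVICE_URL are from the commit, hosts/ports are assumed.
services:
  router:
    environment:
      RAG_SERVICE_URL: http://rag:8001
      MEMORY_SERVICE_URL: http://memory:8002
```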
Author: Apple
Date: 2025-11-16 05:02:14 -08:00
Commit: 382e661f1f
Parent: 6d69f901f7
10 changed files with 719 additions and 1 deletions

```diff
@@ -141,3 +141,12 @@ class ChunksResponse(BaseModel):
     doc_id: str = Field(..., description="Document ID")
     dao_id: str = Field(..., description="DAO ID")
+
+
+class OcrIngestResponse(BaseModel):
+    """Response from /ocr/ingest endpoint"""
+    dao_id: str = Field(..., description="DAO identifier")
+    doc_id: str = Field(..., description="Document identifier")
+    pages_processed: int = Field(..., description="Number of pages processed")
+    rag_ingested: bool = Field(..., description="Whether document was ingested into RAG")
+    raw_json: Dict[str, Any] = Field(..., description="Parsed document JSON")
```