feat: implement RAG Service MVP with PARSER + Memory integration
RAG Service Implementation:
- Create rag-service/ with full structure (config, document_store, embedding, pipelines)
- Document Store: PostgreSQL + pgvector via Haystack
- Embedding: BAAI/bge-m3 (multilingual, 1024 dim)
- Ingest Pipeline: Convert ParsedDocument to Haystack Documents, embed, index
- Query Pipeline: Retrieve documents, generate answers via DAGI Router
- FastAPI endpoints: /ingest, /query, /health

Tests:
- Unit tests for ingest and query pipelines
- E2E test with example parsed JSON
- Test fixtures with real PARSER output example

Router Integration:
- Add mode='rag_query' routing rule in router-config.yml
- Priority 7, uses local_qwen3_8b for RAG queries

Docker:
- Add rag-service to docker-compose.yml
- Configure dependencies (router, city-db)
- Add model cache volume

Documentation:
- Complete README with API examples
- Integration guides for PARSER and Router
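Based on the Router Integration notes above, the new rule in router-config.yml might look roughly like this. Only the mode (`rag_query`), priority (7), and target model (`local_qwen3_8b`) come from this commit message; the surrounding key names are assumptions about the config schema, not its actual shape:

```yaml
# Hypothetical sketch of the routing rule added by this commit.
# Field names other than the values above are assumed, not confirmed.
rules:
  - mode: rag_query        # from the commit message
    priority: 7            # from the commit message
    model: local_qwen3_8b  # from the commit message
```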
services/rag-service/app/core/config.py (new file, 51 lines)
@@ -0,0 +1,51 @@
"""
Configuration for RAG Service
"""

import os
from typing import Literal

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    """Application settings"""

    # Service
    API_HOST: str = "0.0.0.0"
    API_PORT: int = 9500

    # PostgreSQL + pgvector
    PG_DSN: str = os.getenv(
        "PG_DSN",
        "postgresql+psycopg2://postgres:postgres@city-db:5432/daarion_city"
    )

    # Embedding model
    EMBED_MODEL_NAME: str = os.getenv("EMBED_MODEL_NAME", "BAAI/bge-m3")
    EMBED_DEVICE: Literal["cuda", "cpu", "mps"] = os.getenv("EMBED_DEVICE", "cpu")
    EMBED_DIM: int = int(os.getenv("EMBED_DIM", "1024"))  # BAAI/bge-m3 = 1024

    # Document Store
    RAG_TABLE_NAME: str = os.getenv("RAG_TABLE_NAME", "rag_documents")
    SEARCH_STRATEGY: Literal["approximate", "exact"] = os.getenv("SEARCH_STRATEGY", "approximate")

    # Chunking
    CHUNK_SIZE: int = int(os.getenv("CHUNK_SIZE", "500"))
    CHUNK_OVERLAP: int = int(os.getenv("CHUNK_OVERLAP", "50"))

    # Retrieval
    TOP_K: int = int(os.getenv("TOP_K", "5"))

    # LLM (for query pipeline)
    LLM_PROVIDER: str = os.getenv("LLM_PROVIDER", "router")  # router, openai, local
    ROUTER_BASE_URL: str = os.getenv("ROUTER_BASE_URL", "http://router:9102")
    OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")
    OPENAI_MODEL: str = os.getenv("OPENAI_MODEL", "gpt-4o-mini")

    class Config:
        env_file = ".env"
        case_sensitive = True


settings = Settings()
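The CHUNK_SIZE and CHUNK_OVERLAP settings imply a sliding-window splitter, where consecutive chunks share an overlap so sentences cut at a boundary survive in both pieces. The service presumably delegates splitting to Haystack, so the following is only an illustrative sketch of what those two parameters mean, not the actual implementation:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into windows of chunk_size characters, each sharing
    `overlap` trailing characters with the next window.

    Defaults mirror the CHUNK_SIZE / CHUNK_OVERLAP settings above.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Stop once the remaining tail is already covered by the previous chunk.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

With the defaults, a 1000-character document yields three chunks starting at offsets 0, 450, and 900, each repeating the last 50 characters of its predecessor.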