microdao-daarion/services/node-worker/providers/tts_memory_service.py
Apple 129e4ea1fc feat(platform): add new services, tools, tests and crews modules
New router intelligence modules (26 files): alert_ingest/store, audit_store,
architecture_pressure, backlog_generator/store, cost_analyzer, data_governance,
dependency_scanner, drift_analyzer, incident_* (5 files), llm_enrichment,
platform_priority_digest, provider_budget, release_check_runner, risk_* (6 files),
signature_state_store, sofiia_auto_router, tool_governance

New services:
- sofiia-console: Dockerfile, adapters/, monitor/nodes/ops/voice modules, launchd, react static
- memory-service: integration_endpoints, integrations, voice_endpoints, static UI
- aurora-service: full app suite (analysis, job_store, orchestrator, reporting, schemas, subagents)
- sofiia-supervisor: new supervisor service
- aistalk-bridge-lite: Telegram bridge lite
- calendar-service: CalDAV calendar service with reminders
- mlx-stt-service / mlx-tts-service: Apple Silicon speech services
- binance-bot-monitor: market monitor service
- node-worker: STT/TTS memory providers

New tools (9): agent_email, browser_tool, contract_tool, observability_tool,
oncall_tool, pr_reviewer_tool, repo_tool, safe_code_executor, secure_vault

New crews: agromatrix_crew (12 modules: depth_classifier, doc_facts, doc_focus,
farm_state, light_reply, llm_factory, memory_manager, proactivity, reflection_engine,
session_context, style_adapter, telemetry)

Tests: 85+ test files for all new modules
Made-with: Cursor
2026-03-03 07:14:14 -08:00


"""TTS provider: delegates to existing Memory Service /voice/tts.
Memory Service accepts: JSON {text, voice, speed}
Returns: StreamingResponse — audio/mpeg (MP3 bytes)
Fabric contract output: {audio_b64, format, meta}
"""
import base64
import logging
import os
from typing import Any, Dict
import httpx
logger = logging.getLogger("provider.tts_memory_service")
MEMORY_SERVICE_URL = os.getenv("MEMORY_SERVICE_URL", "http://memory-service:8000")
MAX_TEXT_CHARS = int(os.getenv("TTS_MAX_TEXT_CHARS", "500")) # Memory Service limits to 500
async def synthesize(payload: Dict[str, Any]) -> Dict[str, Any]:
"""Fabric TTS entry point — delegates to Memory Service.
Payload:
text: str (required)
voice: str (optional; Polina/Ostap/default/uk-UA-PolinaNeural/etc.)
speed: float (optional, default 1.0)
Returns Fabric contract: {audio_b64, format, meta, provider, model}
Note: Memory Service uses edge-tts and returns MP3.
No format conversion — caller receives base64-encoded MP3.
"""
text = payload.get("text", "").strip()
if not text:
raise ValueError("text is required")
orig_len = len(text)
truncated = orig_len > MAX_TEXT_CHARS
if truncated:
text = text[:MAX_TEXT_CHARS]
logger.warning(f"TTS text truncated {orig_len}{MAX_TEXT_CHARS} chars")
voice = payload.get("voice", "default")
speed = float(payload.get("speed", 1.0))
async with httpx.AsyncClient(timeout=30) as c:
resp = await c.post(
f"{MEMORY_SERVICE_URL}/voice/tts",
json={"text": text, "voice": voice, "speed": speed},
)
resp.raise_for_status()
audio_bytes = resp.content
engine = resp.headers.get("X-TTS-Engine", "edge-tts")
tts_voice = resp.headers.get("X-TTS-Voice", voice)
content_type = resp.headers.get("content-type", "audio/mpeg")
fmt = "mp3" if "mpeg" in content_type else "wav"
audio_b64 = base64.b64encode(audio_bytes).decode()
return {
"audio_b64": audio_b64,
"format": fmt,
"meta": {
"model": engine,
"voice": tts_voice,
"provider": "memory_service",
"engine": engine,
"audio_bytes": len(audio_bytes),
"service_url": MEMORY_SERVICE_URL,
"truncated": truncated,
"orig_len": orig_len,
"used_len": len(text),
},
"provider": "memory_service",
"model": engine,
}