# GeminiGuard Support Backend

*Powering AI customer support with Gemini, FastAPI & Redis*

## API Endpoints

### Real-time Chat

**WebSocket** `/chat/stream`

Stream AI responses from the Gemini API, augmented with context retrieved from FAISS vector indexes (RAG).

```json
{ "type": "user_message", "content": "How do I reset my password?" }
```
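As an illustration, a minimal client for this endpoint could look like the sketch below. The third-party `websockets` package and the assumption that the server streams plain text frames are not documented here; only the outgoing message shape comes from the example above.

```python
import asyncio
import json


def make_user_message(content: str) -> str:
    """Serialize a message in the shape the endpoint expects."""
    return json.dumps({"type": "user_message", "content": content})


async def stream_chat(url: str, question: str) -> None:
    # Hypothetical client; requires `pip install websockets`.
    import websockets

    async with websockets.connect(url) as ws:
        await ws.send(make_user_message(question))
        async for frame in ws:  # the server streams response chunks
            print(frame)


if __name__ == "__main__":
    asyncio.run(stream_chat("ws://localhost:8000/chat/stream",
                            "How do I reset my password?"))
```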

### FAQ Upload

**POST** `/faq/upload`

Upload CSV or Markdown files to generate embeddings for the RAG pipeline.

```bash
curl -X POST -F "file=@faqs.csv" http://localhost:8000/faq/upload
```
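Server-side, one plausible way to turn an uploaded CSV into embedding inputs is to join each row into a single text chunk. The `question`/`answer` column names here are assumptions for illustration, not documented behavior of the endpoint.

```python
import csv
import io


def faq_rows_to_chunks(csv_text: str) -> list[str]:
    """Join each FAQ row into one text chunk ready for embedding.

    Assumes hypothetical 'question' and 'answer' CSV columns.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [f"Q: {row['question']}\nA: {row['answer']}" for row in reader]
```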

### Escalation

**POST** `/escalate`

Create support tickets stored in PostgreSQL, with optional email notification via SendGrid.

```json
{
  "issue": "Payment failed",
  "description": "Card declined despite sufficient funds",
  "customer_email": "user@example.com"
}
```
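The request body above can be built and sent with nothing but the standard library. This client is a sketch: the assumption that the endpoint returns the created ticket as JSON is mine, not stated in this document.

```python
import json
import urllib.request


def build_escalation_payload(issue: str, description: str,
                             customer_email: str) -> dict:
    """Build the request body shown in the example above."""
    return {
        "issue": issue,
        "description": description,
        "customer_email": customer_email,
    }


def create_ticket(base_url: str, payload: dict) -> dict:
    """POST the payload to /escalate and parse the JSON response."""
    req = urllib.request.Request(
        f"{base_url}/escalate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Assumed: the server responds with the newly created ticket as JSON.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```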

### Escalation List

**GET** `/escalate/list`

Retrieve all support tickets with filtering options.

```json
[
  {
    "id": 1,
    "status": "open",
    "created_at": "2023-07-15T09:30:00Z",
    "customer_email": "user@example.com"
  }
]
```

## Setup Guide

### Environment Variables

```env
GEMINI_API_KEY=your_gemini_key
REDIS_URL=redis://default:password@localhost:6379
POSTGRES_URL=postgresql://user:password@localhost:5432/supportdb
SENDGRID_KEY=your_sendgrid_key
CORS_ORIGINS=https://your-frontend.vercel.app,http://localhost:3000
```
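A small loader can fail fast when a required variable is missing and split the comma-separated `CORS_ORIGINS` into a list. This is a sketch of one way to do it, not the project's actual config code.

```python
import os

# The four required variables from the list above; CORS_ORIGINS is optional.
REQUIRED = ["GEMINI_API_KEY", "REDIS_URL", "POSTGRES_URL", "SENDGRID_KEY"]


def load_settings(env=os.environ) -> dict:
    """Read settings from the environment, raising if anything is missing."""
    missing = [k for k in REQUIRED if k not in env]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    return {
        "gemini_api_key": env["GEMINI_API_KEY"],
        "redis_url": env["REDIS_URL"],
        "postgres_url": env["POSTGRES_URL"],
        "sendgrid_key": env["SENDGRID_KEY"],
        # CORS_ORIGINS is a comma-separated list; drop empty entries.
        "cors_origins": [o for o in env.get("CORS_ORIGINS", "").split(",") if o],
    }
```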

### Running with Docker

```bash
# Build the image
docker build -t geminiguard-backend .

# Run the container
docker run -p 8000:8000 --env-file .env geminiguard-backend
```
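For local development, a compose file (not part of the repository listing, so purely a sketch) could start the backend alongside the Redis and PostgreSQL instances the environment variables point at:

```yaml
# Hypothetical docker-compose.yml; service names and image tags are assumptions.
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on: [redis, db]
  redis:
    image: redis:7
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: supportdb
```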

### Directory Structure

```
backend/
├── main.py               # FastAPI app entrypoint
├── routes/               # API endpoint definitions
│   ├── chat.py           # WebSocket chat endpoint
│   ├── faq.py            # FAQ upload endpoint
│   └── escalate.py       # Ticket endpoints
├── services/             # Business logic
│   ├── gemini_service.py # Gemini API calls
│   └── faiss_service.py  # FAISS vector operations
├── models/               # Pydantic models
├── database/             # DB connections
├── requirements.txt      # Python dependencies
└── Dockerfile            # Container configuration
```

## Ready to Deploy?

This backend is production-ready for deployment on Railway, with all of the endpoints above implemented.
