# Powering AI customer support with Gemini, FastAPI & Redis
Stream AI responses over a WebSocket using the Gemini API, with retrieval-augmented generation (RAG) context pulled from FAISS vectors. The client sends messages of the form:

```json
{
  "type": "user_message",
  "content": "How do I reset my password?"
}
```
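For a quick local test, a minimal WebSocket client can drive the endpoint. The sketch below assumes the chat socket is served at `/chat` and that responses stream as plain-text chunks; both are assumptions, so adapt them to the actual route.

```python
# Minimal client sketch for exercising the chat WebSocket locally.
# The /chat path and plain-text streaming format are assumptions.
import asyncio
import json

import websockets  # pip install websockets


async def main() -> None:
    async with websockets.connect("ws://localhost:8000/chat") as ws:
        await ws.send(json.dumps({
            "type": "user_message",
            "content": "How do I reset my password?",
        }))
        # Tokens arrive as they are generated; read until the server closes.
        async for chunk in ws:
            print(chunk, end="", flush=True)


asyncio.run(main())
```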
Upload CSV or Markdown files to generate embeddings for the RAG pipeline:

```bash
curl -X POST -F "file=@faqs.csv" http://localhost:8000/faq/upload
```
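Server-side, the upload roughly amounts to embedding each row and adding the vectors to a FAISS index. A minimal sketch, assuming a `question,answer` CSV layout and the `text-embedding-004` model (both are assumptions):

```python
# Sketch of the upload path: embed FAQ rows with Gemini, index them in FAISS.
# CSV column names and the embedding model are assumptions.
import csv

import faiss
import numpy as np
import google.generativeai as genai

genai.configure(api_key="...")  # GEMINI_API_KEY from the environment


def build_index(csv_path: str) -> tuple[faiss.IndexFlatL2, list[str]]:
    texts = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            texts.append(f"Q: {row['question']}\nA: {row['answer']}")
    # One embedding per FAQ entry.
    vectors = [
        genai.embed_content(model="models/text-embedding-004", content=t)["embedding"]
        for t in texts
    ]
    matrix = np.asarray(vectors, dtype="float32")
    index = faiss.IndexFlatL2(matrix.shape[1])
    index.add(matrix)
    return index, texts
```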
Create support tickets stored in PostgreSQL, with an optional notification email sent via SendGrid. Example payload:

```json
{
  "issue": "Payment failed",
  "description": "Card declined despite sufficient funds",
  "customer_email": "user@example.com"
}
```
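A hedged sketch of the creation route follows, assuming an `asyncpg` connection and a `tickets` table; the table name, column names, and the `/escalate` path are all assumptions, not the project's confirmed schema.

```python
# Sketch of the ticket-creation route; schema and path are assumptions.
import os

import asyncpg
from fastapi import APIRouter
from pydantic import BaseModel, EmailStr  # EmailStr requires pydantic[email]

router = APIRouter()


class TicketIn(BaseModel):
    issue: str
    description: str
    customer_email: EmailStr


@router.post("/escalate")
async def create_ticket(ticket: TicketIn) -> dict:
    conn = await asyncpg.connect(os.environ["POSTGRES_URL"])
    try:
        row = await conn.fetchrow(
            "INSERT INTO tickets (issue, description, customer_email, status) "
            "VALUES ($1, $2, $3, 'open') RETURNING id, created_at",
            ticket.issue, ticket.description, ticket.customer_email,
        )
    finally:
        await conn.close()
    return {"id": row["id"], "status": "open", "created_at": row["created_at"]}
```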
Retrieve all support tickets, with filtering options. Example response:

```json
[
  {
    "id": 1,
    "status": "open",
    "created_at": "2023-07-15T09:30:00Z",
    "customer_email": "user@example.com"
  }
]
```
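The listing route can expose the filter as a query parameter. A sketch assuming a `status` filter over the same assumed `tickets` table:

```python
# Sketch of the ticket-listing route; the status query parameter is an assumption.
import os

import asyncpg
from fastapi import APIRouter

router = APIRouter()


@router.get("/escalate")
async def list_tickets(status: str | None = None) -> list[dict]:
    query = "SELECT id, status, created_at, customer_email FROM tickets"
    args: list[str] = []
    if status is not None:
        query += " WHERE status = $1"
        args.append(status)
    conn = await asyncpg.connect(os.environ["POSTGRES_URL"])
    try:
        rows = await conn.fetch(query, *args)
    finally:
        await conn.close()
    return [dict(r) for r in rows]
```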
Configure the backend through a `.env` file:

```env
GEMINI_API_KEY=your_gemini_key
REDIS_URL=redis://default:password@localhost:6379
POSTGRES_URL=postgresql://user:password@localhost:5432/supportdb
SENDGRID_KEY=your_sendgrid_key
CORS_ORIGINS=https://your-frontend.vercel.app,http://localhost:3000
```
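One way to load these at startup is `pydantic-settings`, which maps environment variables case-insensitively onto typed fields. The class below is an illustration, not the project's actual config module:

```python
# Illustrative settings loader; field names mirror the .env keys above.
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    gemini_api_key: str
    redis_url: str
    postgres_url: str
    sendgrid_key: str | None = None
    cors_origins: str = "http://localhost:3000"


settings = Settings(_env_file=".env")
```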
Build and run with Docker:

```bash
# Build the image
docker build -t geminiguard-backend .

# Run the container
docker run -p 8000:8000 --env-file .env geminiguard-backend
```
Project structure:

```
backend/
├── main.py                # FastAPI app entrypoint
├── routes/                # API endpoint definitions
│   ├── chat.py            # WebSocket chat endpoint
│   ├── faq.py             # FAQ upload endpoint
│   └── escalate.py        # Ticket endpoints
├── services/              # Business logic
│   ├── gemini_service.py  # Gemini API calls
│   └── faiss_service.py   # FAISS vector operations
├── models/                # Pydantic models
├── database/              # DB connections
├── requirements.txt       # Python dependencies
└── Dockerfile             # Container configuration
```
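For orientation, `main.py` plausibly just wires the routers from the tree above together and applies the configured CORS origins. A sketch, assuming each route module exposes a `router` attribute:

```python
# Hedged sketch of main.py; router module layout follows the tree above.
import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from routes import chat, escalate, faq

app = FastAPI(title="GeminiGuard")

app.add_middleware(
    CORSMiddleware,
    allow_origins=os.environ.get("CORS_ORIGINS", "http://localhost:3000").split(","),
    allow_methods=["*"],
    allow_headers=["*"],
)

app.include_router(chat.router)
app.include_router(faq.router)
app.include_router(escalate.router)
```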
With all of the endpoints above in place, the backend is ready to deploy to Railway, or any other container host, via the included Dockerfile.