Using LLM APIs but worried about sending client data? Built a proxy for that.
OpenAI-compatible proxy that masks personal data and secrets before sending to your provider.
Mask Mode (default):
You send: "Email sarah.chen@hospital.org about meeting Dr. Miller"
LLM receives: "Email <EMAIL_1> about meeting <PERSON_1>"
You get back: original names restored in the response

Route Mode (if you run a local LLM):
Requests with PII → local LLM
Everything else → cloud

What it catches:
PII: names, emails, phones, credit cards, IBANs, IPs, locations (24 languages)
Secrets: private keys, API keys (OpenAI, AWS, GitHub), JWT tokens

Uses Microsoft Presidio for PII detection. ~500MB RAM, 10-50ms per request.

Works with Cursor, Open WebUI, LangChain, or any OpenAI-compatible client.
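The mask/restore round trip above can be sketched in a few lines. This is purely illustrative, not PasteGuard's actual implementation (detection there is done by Presidio, not a regex), and it only handles emails:

```python
import re

# Toy sketch of the mask -> LLM -> restore round trip.
# Real detection is done by Microsoft Presidio; this regex is a stand-in.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str):
    """Replace each email with a numbered placeholder; return text + mapping."""
    mapping = {}
    def repl(m):
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = m.group(0)
        return placeholder
    return EMAIL.sub(repl, text), mapping

def restore(text: str, mapping: dict) -> str:
    """Swap placeholders in the LLM's response back to the original values."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = mask("Email sarah.chen@hospital.org about the meeting")
# masked == "Email <EMAIL_1> about the meeting"
print(restore("Sent to <EMAIL_1>.", mapping))
# prints: Sent to sarah.chen@hospital.org.
```

The mapping lives only on the proxy side, so the provider never sees the originals.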
Docs: https://pasteguard.com/docs
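Since the proxy speaks the standard OpenAI chat-completions API, existing clients only need a base-URL change. A minimal sketch of the request shape (the host/port here are assumptions; check the docs for the real defaults):

```python
import json

# Assumed proxy address -- not the documented default, just an example.
PROXY_BASE = "http://localhost:8000/v1"

# Standard OpenAI chat-completions body; the proxy masks PII in the
# messages before forwarding and restores it in the response.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user",
         "content": "Email sarah.chen@hospital.org about meeting Dr. Miller"},
    ],
}

# You would POST this to f"{PROXY_BASE}/chat/completions".
body = json.dumps(payload)
```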
Feedback on edge cases welcome.