Redact PII on the way into your LLM prompts and rehydrate it on the way out — in a single round-trip, with deterministic tokens and zero data retention.
Your app calls /v1/redact with the user's text. TLS 1.3, optional client-side pre-hashing.
A hybrid detector (regex + ML NER) identifies 62 entity types; each value is replaced with a stable placeholder like [PERSON_1].
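To make the deterministic-token idea concrete, here is a minimal sketch of stable-placeholder redaction. It is illustrative only — a single regex pass, not the actual hybrid regex + ML detector — and the `redact` helper, the detector table, and the `EMAIL` pattern are assumptions for the example:

```javascript
// Sketch: assign stable, per-type placeholders and remember the mapping.
// Illustrative only — the real service uses a hybrid regex + ML NER detector.
function redact(text) {
  const detectors = {
    EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
    // ...many more entity types in a real detector
  };
  const mapping = {};  // placeholder → original value
  const counters = {}; // per-type counter: [EMAIL_1], [EMAIL_2], ...
  let redacted = text;
  for (const [type, re] of Object.entries(detectors)) {
    redacted = redacted.replace(re, (value) => {
      // A repeated value gets the same placeholder — that is what makes
      // the tokens deterministic within a session.
      const existing = Object.keys(mapping).find((k) => mapping[k] === value);
      if (existing) return existing;
      counters[type] = (counters[type] || 0) + 1;
      const placeholder = `[${type}_${counters[type]}]`;
      mapping[placeholder] = value;
      return placeholder;
    });
  }
  return { redacted, mapping };
}
```

Rehydration is then just the reverse substitution using the saved mapping.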
Forward the redacted text to OpenAI, Anthropic, Gemini, your fine-tune — anywhere. The model never sees a real name, number, or address.
Pass the reply + session back through /v1/rehydrate. Tokens become real values again.
The SDK wraps both endpoints into a single .shield() call that works with your existing LLM client. Or call /redact and /rehydrate yourself.
```sh
# 1. Redact PII before sending to your LLM
curl -X POST https://api.peyeeye.ai/v1/redact \
  -H "Authorization: Bearer $PEYEEYE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hi, I'\''m Ada. Email: ada@example.com", "locale": "en-US"}'
# → { "redacted": "Hi, I'm [PERSON_1]. Email: [EMAIL_1]", "session": "ses_…" }

# 2. After the LLM replies, rehydrate
curl -X POST https://api.peyeeye.ai/v1/rehydrate \
  -H "Authorization: Bearer $PEYEEYE_KEY" \
  -d '{"text": "Hi [PERSON_1], we emailed [EMAIL_1].", "session": "ses_…"}'
```
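The `.shield()` wrapper boils down to the same round-trip: redact, call the model, rehydrate. The sketch below shows that shape with local stubs standing in for the `/v1/redact` and `/v1/rehydrate` HTTP calls — the helper names, stub logic, and hard-coded mapping are assumptions for illustration, not the SDK's internals:

```javascript
// Sketch of the redact → complete → rehydrate round-trip that a
// .shield()-style wrapper performs. Stubs replace the real API calls.
async function shield(userText, completeFn) {
  const { redacted, session } = await redactStub(userText);
  const reply = await completeFn(redacted); // model never sees raw PII
  return rehydrateStub(reply, session);
}

// --- local stubs standing in for /v1/redact and /v1/rehydrate ---
const sessions = new Map();

async function redactStub(text) {
  // Hypothetical fixed mapping, just for the demo.
  const session = "ses_demo";
  sessions.set(session, { "[EMAIL_1]": "ada@example.com" });
  return { redacted: text.replace("ada@example.com", "[EMAIL_1]"), session };
}

async function rehydrateStub(text, session) {
  let out = text;
  for (const [placeholder, value] of Object.entries(sessions.get(session))) {
    out = out.split(placeholder).join(value); // swap tokens back to real values
  }
  return out;
}
```

Calling `shield("Contact ada@example.com", async (t) => "Reply to " + t)` routes only `Contact [EMAIL_1]` through the (stubbed) model and resolves to `Reply to Contact ada@example.com`.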
```js
// Auto-generated from your examples
await peyeeye.entities.create({
  id: "ORDER_ID",
  kind: "compound",
  pattern: "",
  positives: ["#A-884217", "#A-007431", "#A-122900"],
  negatives: ["ADR-19"],
  confidence_floor: 0.9,
});
```
Free tier, no credit card. First redaction in under 90 seconds from a fresh terminal.