BYOK
Bring Your Own Keys
You bring your LLM API keys. The platform orchestrates your AI agents. Your data flows directly from your agents to your LLM provider -- never through, never stored by, never visible to the platform vendor.
What is BYOK?
Most AI platforms work like this: you send your data to the platform, the platform sends it to an LLM, the LLM responds, and the platform stores both the input and output. Your data lives on someone else's servers, processed by someone else's API keys, visible to someone else's engineers.
BYOK inverts this. You provide your own API keys for Anthropic, OpenAI, Google, Mistral, or any other LLM provider. The platform never sees the content of your prompts or responses. It orchestrates the agent -- deciding which tool to call, which agent to hand off to, what to do next -- but the actual LLM communication goes directly from your agent to your LLM provider, authenticated with your keys.
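The split described above can be sketched in a few lines. This is a hypothetical illustration, not the platform's actual API: the function and field names are invented, and the Anthropic endpoint is used only as an example of a direct provider call.

```python
# Sketch of the BYOK split: the platform plans the next step, but the
# request to the LLM provider is built with the tenant's own key and
# sent directly. Names here are illustrative, not a real platform API.

def build_llm_request(tenant_key: str, model: str, messages: list) -> dict:
    """Request the agent sends straight to the LLM provider."""
    return {
        "url": "https://api.anthropic.com/v1/messages",  # example endpoint
        "headers": {"x-api-key": tenant_key},  # the tenant's key, not the vendor's
        "body": {"model": model, "max_tokens": 1024, "messages": messages},
    }

def platform_record(request: dict) -> dict:
    """What the orchestrator is allowed to see: metadata only, no content."""
    return {
        "model": request["body"]["model"],
        "message_count": len(request["body"]["messages"]),
    }

req = build_llm_request("sk-ant-tenant-key", "claude-sonnet",
                        [{"role": "user", "content": "confidential question"}])
meta = platform_record(req)
```

The point of the sketch is the boundary: `build_llm_request` runs on the agent side with your credentials, while `platform_record` is all the orchestrator ever retains.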
Data flow comparison
- Traditional platform: You → Platform (vendor's keys, content stored) → LLM provider → Platform → You
- BYOK architecture: You → Your agent (your keys) → LLM provider → Your agent; the platform sees orchestration metadata only
Why BYOK matters for enterprise
- Data sovereignty -- Your data never leaves your control. Crucial for GDPR Article 28 (processor obligations), HIPAA BAA requirements, and financial regulations.
- No vendor lock-in -- Switch LLM providers freely. Today Anthropic, tomorrow OpenAI, next month a self-hosted model. Your platform doesn't care -- it just uses your keys.
- Your terms of service -- When you use your own Anthropic API key, you're covered by your agreement with Anthropic, including their zero-data-retention policy. Not the platform vendor's agreement.
- Audit clarity -- Your CISO gets a clear answer to "where does our data go?": "To our LLM provider, under our agreement, with our keys."
- Cost transparency -- You see exactly what you spend on LLM calls in your provider's dashboard. No markup, no hidden fees on token usage.
BYOK is not optional
Most AI platforms offer BYOK as an upgrade or enterprise add-on. This creates a dangerous default: customers on lower tiers have their data flowing through the vendor's keys, often without realizing it.
The question every CISO should ask: "What happens to our data on your cheapest plan?"
How MeetLoyd implements BYOK
BYOK is mandatory at every tier on MeetLoyd -- Starter ($399/mo), Growth ($1,499/mo), and Enterprise. There is no plan where MeetLoyd processes your data with our keys. This is an architectural decision, not a pricing tier feature.
- Broad provider support -- Anthropic (Claude), OpenAI (GPT), Google (Gemini), Mistral, Groq, any OpenAI-compatible endpoint, plus self-hosted vLLM. Bring keys for one or all.
- Key encryption -- Your API keys are encrypted with AES-256-GCM and stored in MeetLoyd's vault. They are decrypted only at the moment of the LLM call, in memory, never logged.
- LLM Gateway -- Every LLM call passes through the gateway for budget checks, prompt injection detection, PII redaction, and content moderation. The gateway inspects traffic in flight to do this, but it persists only metadata -- never the content of your prompts.
- No token billing -- Since you use your own keys, MeetLoyd charges for platform access (agents + seats), not token consumption. You pay Anthropic/OpenAI directly.
- Memory Pointer Architecture -- Agent memory stores pointers and summaries, not raw conversation content. Even MeetLoyd's database doesn't contain your full LLM interactions.
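The key-encryption item above can be illustrated with the `cryptography` package's AES-256-GCM primitive. This is a minimal sketch under stated assumptions: the real vault, key management, and decrypt-at-call-time plumbing are MeetLoyd internals not shown here, and `seal`/`unseal` are invented names.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative at-rest encryption of a tenant API key with AES-256-GCM.
master_key = AESGCM.generate_key(bit_length=256)  # would live in a vault/KMS
aesgcm = AESGCM(master_key)

def seal(api_key: str) -> tuple[bytes, bytes]:
    """Encrypt a tenant key for storage; the 12-byte nonce must be unique per seal."""
    nonce = os.urandom(12)
    return nonce, aesgcm.encrypt(nonce, api_key.encode(), None)

def unseal(nonce: bytes, blob: bytes) -> str:
    """Decrypt only at the moment of the LLM call, in memory, never logged."""
    return aesgcm.decrypt(nonce, blob, None).decode()

nonce, blob = seal("sk-ant-example")
plaintext = unseal(nonce, blob)
```

GCM gives authenticated encryption, so a tampered ciphertext fails to decrypt rather than yielding a corrupted key.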
BYOK + governance
BYOK alone isn't enough. Without governance, you're just replacing one risk (vendor data exposure) with another (Shadow AI with your own keys). That's why BYOK must be paired with:
- Governance Packs -- GDPR, HIPAA, SOX compliance modules that enforce policy regardless of which LLM keys are used.
- SPIFFE identity -- Cryptographic agent identity so you know which agent is using which keys for what purpose.
- Audit trails -- SOX-grade audit logging on every LLM call: who, when, which model, how many tokens, what cost -- without logging the content.
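A content-free audit entry in the spirit of the list above might look like the following. The field names are illustrative, not MeetLoyd's actual schema, and the prompt hash is an assumption added here to show how entries can be made tamper-evident without storing content.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(agent_id: str, model: str, prompt: str,
                tokens_in: int, tokens_out: int, cost_usd: float) -> dict:
    """Record who, when, which model, how many tokens, what cost -- no content."""
    return {
        "agent_id": agent_id,  # who (e.g. a SPIFFE ID)
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "model": model,  # which model
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost_usd": cost_usd,
        # Hash only, so the entry is verifiable without revealing the prompt.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }

entry = audit_entry("spiffe://acme.example/agent/support", "claude-sonnet",
                    "confidential customer question", 412, 180, 0.0031)
```

Serializing the entry confirms the property that matters for audit: every question a regulator asks is answerable, while the prompt text itself never appears in the log.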