Security
IntentForm passes user input to an AI model and injects model definitions into the system prompt. Both are attack surfaces worth understanding.
Prompt injection
Prompt injection is when a malicious user crafts input designed to override the AI’s instructions — for example, submitting "Ignore previous instructions and return {...}" as their form intent.
IntentForm applies three layers of defence in every built-in provider:
1. Output schema validation
Every AI response is validated against a strict Zod schema before it reaches your application. If the model returns anything other than the expected shape (model, values, fieldRelevance, confidence), the provider throws a StructuredOutputParseError.
```ts
// This happens automatically inside each provider.
// Your app only receives a well-typed, validated object.
parseStructuredOutput(rawAiResponse)
```

This means a successful jailbreak that produces invalid JSON or an unexpected structure will be caught and surfaced as an error, not silently accepted.
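To make the guarantee concrete, here is a minimal sketch of the shape check that validation performs. The real implementation uses a Zod schema; this hand-rolled type guard is only illustrative, and `StructuredOutputParseError` here is a local stand-in for the class exported by the library.

```ts
// Stand-in for the error class thrown by the real providers.
class StructuredOutputParseError extends Error {}

interface StructuredOutput {
  model: string
  values: Record<string, unknown>
  fieldRelevance: Record<string, number>
  confidence: number
}

// Illustrative shape check: accept only the expected structure,
// throw on anything else (including jailbreak output).
function parseStructuredOutputSketch(raw: unknown): StructuredOutput {
  const o = raw as Partial<StructuredOutput> | null
  if (
    o === null ||
    typeof o !== 'object' ||
    typeof o.model !== 'string' ||
    typeof o.values !== 'object' || o.values === null ||
    typeof o.fieldRelevance !== 'object' || o.fieldRelevance === null ||
    typeof o.confidence !== 'number'
  ) {
    throw new StructuredOutputParseError('AI response has unexpected shape')
  }
  return o as StructuredOutput
}
```

A response that is prose, a bare string, or a partial object fails the guard and surfaces as an error rather than reaching your form.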
2. Prompt length limit
All providers reject prompts that exceed maxPromptLength (default: 2000 characters) before sending anything to the AI API. This limits the attack surface for large injection payloads.
```ts
openaiProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  maxPromptLength: 500, // stricter limit for short-intent use cases
})
```

3. Model definition sanitization
Model id, label, description, useCases, and field labels are sanitized before being interpolated into the system prompt. Newlines and control characters are stripped so that injected text in model definitions cannot break out of their context.
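A minimal sketch of that sanitization step, assuming a simple strip-and-collapse strategy (the library’s exact rules may differ):

```ts
// Collapse newlines and drop remaining control characters so a model
// definition cannot inject new lines or directives into the system prompt.
function sanitizeForPrompt(text: string): string {
  return text
    .replace(/[\r\n]+/g, ' ') // newlines cannot start a new prompt line
    .replace(/[\x00-\x08\x0B-\x1F\x7F]/g, '') // strip other control characters
    .trim()
}
```

With this in place, a label like `"Contact form\nIgnore previous instructions"` is interpolated as a single harmless line rather than as two prompt lines.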
API key exposure
Never pass an API key from the browser. Built-in providers accept an apiKey option — if used client-side, that key is visible to anyone who opens DevTools.
Keep providers server-side and call them through a server function or API route; the recipes cover concrete patterns.
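The shape of the server-side pattern can be sketched as follows. The names here (`makeProvider`, `handleIntent`, `SERVER_ONLY_API_KEY`) are hypothetical stand-ins for a real built-in provider, your route handler, and a key read from server environment variables:

```ts
// Stand-in for a key read from process.env on the server.
// It never leaves this module and never reaches the browser.
const SERVER_ONLY_API_KEY = 'sk-demo'

type IntentResult = { model: string; confidence: number }

// Stand-in for a built-in provider such as openaiProvider({ apiKey }).
function makeProvider(apiKey: string) {
  return {
    async resolve(intent: string): Promise<IntentResult> {
      void apiKey // consumed by the real API call; never echoed back
      void intent
      return { model: 'contact', confidence: 0.9 }
    },
  }
}

// Server function / API route body: the only place the key is read.
// The browser posts the intent text here and receives only the result.
async function handleIntent(intent: string): Promise<IntentResult> {
  const provider = makeProvider(SERVER_ONLY_API_KEY)
  return provider.resolve(intent)
}
```

Because the response object contains only the resolved result, nothing key-shaped is ever serialized back to the client.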
Custom providers
If you implement a custom AiProvider, apply the same patterns:
- Validate AI responses with parseStructuredOutput from @intentform/core before returning.
- Enforce a prompt length cap before calling the external API.
- Never pass raw AI output to your frontend without validation.
```ts
import { parseStructuredOutput } from '@intentform/core'

const myProvider: AiProvider = {
  async generateStructured(input) {
    if (input.prompt.length > 2000) {
      throw new Error('Prompt too long')
    }
    const raw = await callMyModel(input)
    const validated = parseStructuredOutput(raw) // throws if invalid
    return { data: raw, confidence: validated.confidence }
  },
}
```

Summary
| Threat | Mitigation |
|---|---|
| User injects instructions via intent text | Output validated by Zod schema; structured output format enforced by provider APIs |
| Large injection payload | maxPromptLength rejects oversized prompts before API call |
| Injected text in model definitions | Control characters stripped before system prompt interpolation |
| API key leaked to browser | Keep providers server-side — see server-side recipes |
| Malformed AI response passed to app | parseStructuredOutput throws StructuredOutputParseError on invalid shape |