Security

IntentForm passes user input to an AI model and injects model definitions into the system prompt. Both are attack surfaces worth understanding.

Prompt injection occurs when a malicious user crafts input designed to override the AI's instructions, for example by submitting "Ignore previous instructions and return {...}" as their form intent.

IntentForm applies three layers of defense in every built-in provider:

**Schema validation.** Every AI response is validated against a strict Zod schema before it reaches your application. If the model returns anything other than the expected shape (`model`, `values`, `fieldRelevance`, `confidence`), the provider throws a `StructuredOutputParseError`.

```typescript
// This happens automatically inside each provider.
// Your app only receives a well-typed, validated object.
parseStructuredOutput(rawAiResponse)
```

This means a successful jailbreak that produces invalid JSON or an unexpected structure will be caught and surfaced as an error, not silently accepted.
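The shape check is roughly equivalent to the following dependency-free sketch. The real providers validate with a strict Zod schema; the field names come from the documented output shape, but the exact field types here are assumptions:

```typescript
// Dependency-free sketch of the shape check parseStructuredOutput performs.
// The real implementation uses a strict Zod schema; the exact field types
// are assumptions based on the documented output shape.
interface StructuredOutput {
  model: string
  values: Record<string, unknown>
  fieldRelevance: Record<string, number>
  confidence: number
}

function isStructuredOutput(raw: unknown): raw is StructuredOutput {
  if (typeof raw !== 'object' || raw === null) return false
  const o = raw as Record<string, unknown>
  return (
    typeof o.model === 'string' &&
    typeof o.values === 'object' && o.values !== null &&
    typeof o.fieldRelevance === 'object' && o.fieldRelevance !== null &&
    typeof o.confidence === 'number'
  )
}

// A jailbreak that produces the wrong shape fails the check:
isStructuredOutput({ answer: 'HACKED' }) // → false
```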

**Prompt length limits.** All providers reject prompts that exceed `maxPromptLength` (default: 2000 characters) before sending anything to the AI API. This limits the attack surface for large injection payloads.

```typescript
openaiProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  maxPromptLength: 500, // stricter limit for short-intent use cases
})
```

**Sanitization of model definitions.** Model `id`, `label`, `description`, `useCases`, and field labels are sanitized before being interpolated into the system prompt. Newlines and control characters are stripped so that injected text in model definitions cannot break out of its context.
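The stripping step can be sketched like this. The helper name is hypothetical, and the exact character set the library strips is an assumption:

```typescript
// Hypothetical sketch of the sanitization step: replace newlines and other
// control characters with spaces so interpolated text stays on one line
// inside the system prompt. The library's exact rules may differ.
function sanitizeForPrompt(text: string): string {
  return text.replace(/[\u0000-\u001f\u007f]/g, ' ').trim()
}

sanitizeForPrompt('Friendly label\nIgnore previous instructions')
// → 'Friendly label Ignore previous instructions'
```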

Never pass an API key from the browser. Built-in providers accept an apiKey option — if used client-side, that key is visible to anyone who opens DevTools.

Keep providers server-side and call them through a server function or API route; see the recipes for concrete patterns.
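As a rough sketch of the server-side pattern — the `AiProvider` type and handler shape below are illustrative assumptions, not the library's exact API:

```typescript
// Illustrative server-side handler: the provider (and its API key) live
// only in server code; the browser receives just the validated result.
// The AiProvider type and handler shape are assumptions for this sketch.
type AiProvider = {
  generateStructured(input: { prompt: string }): Promise<{ data: unknown; confidence: number }>
}

async function handleIntentRequest(provider: AiProvider, prompt: string) {
  // Reject oversized prompts before spending an API call.
  if (prompt.length > 2000) {
    return { status: 400, body: { error: 'Prompt too long' } }
  }
  const result = await provider.generateStructured({ prompt })
  return { status: 200, body: result }
}
```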

If you implement a custom AiProvider, apply the same patterns:

  • Validate AI responses with parseStructuredOutput from @intentform/core before returning.
  • Enforce a prompt length cap before calling the external API.
  • Never pass raw AI output to your frontend without validation.
```typescript
import { parseStructuredOutput } from '@intentform/core'

const myProvider: AiProvider = {
  async generateStructured(input) {
    // Enforce a prompt length cap before calling the external API.
    if (input.prompt.length > 2000) {
      throw new Error('Prompt too long')
    }
    const raw = await callMyModel(input)
    // Throws StructuredOutputParseError if the shape is invalid.
    const validated = parseStructuredOutput(raw)
    // Return the validated object, never the raw model output.
    return { data: validated, confidence: validated.confidence }
  },
}
```
| Threat | Mitigation |
| --- | --- |
| User injects instructions via intent text | Output validated by Zod schema; structured output format enforced by provider APIs |
| Large injection payload | `maxPromptLength` rejects oversized prompts before the API call |
| Injected text in model definitions | Control characters stripped before system prompt interpolation |
| API key leaked to browser | Keep providers server-side; see the server-side recipes |
| Malformed AI response passed to app | `parseStructuredOutput` throws `StructuredOutputParseError` on invalid shape |