Skip to main content
Back to Pulse
9to5Mac

Prompt injection attack on Apple Intelligence reveals a flaw, but is easy to fix

Read the full articlePrompt injection attack on Apple Intelligence reveals a flaw, but is easy to fix on 9to5Mac

What Happened

A prompt injection attack on Apple Intelligence reveals that it is fairly well protected from misuse, but the current beta version does have one security flaw which can be exploited. However, the issue would be very easy for the company to fix, so this will almost certainly be done before the pub

Fordel's Take

Apple Intelligence beta shipped with a prompt injection vulnerability — an attacker embeds instructions in external content that override system-level prompts. The current build has no sanitization layer between untrusted content and the model's action context.

Any agent pipeline passing user-controlled content to Apple's on-device models inherits this risk. Most developers sanitize inputs for web-facing LLMs but treat on-device as a trusted environment — that assumption is wrong. Running workflows through Apple Intelligence without sanitization makes the same mistake as trusting client-side validation.

What To Do

Add explicit prompt sanitization before content reaches Apple Intelligence action context instead of relying on system prompt isolation because on-device models have the same injection surface as any web-facing LLM.

Cited By

React

Newsletter

Get the weekly AI digest

The stories that matter, with a builder's perspective. Every Thursday.

Loading comments...