Apple has previewed a significant expansion of Apple Intelligence, its suite of artificial intelligence features embedded across iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro. The update introduces new capabilities such as Live Translation, Visual Intelligence for on-screen content, more expressive Genmoji, and custom image creation through Image Playground. Importantly, Apple is also giving third-party developers direct access to its on-device foundation model, promising fast, private, and offline AI interactions.
“Last year, we took the first steps on a journey to bring users intelligence that’s helpful, relevant, easy to use, and right where users need it, all while protecting their privacy,” said Craig Federighi, Apple’s senior vice president of Software Engineering. “Now, the models that power Apple Intelligence are becoming more capable and efficient, and we’re integrating features in even more places across each of our operating systems.”
He added, “We’re also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence… We can’t wait to see what developers create.”
Live Translation: Real-Time Multilingual Messaging and Calls
One of the most powerful additions is Live Translation, now integrated into Messages, FaceTime, and Phone. Users can type or speak in one language and have it translated automatically in real time: translations appear as text in Messages, as live captions in FaceTime, and are spoken aloud on Phone calls. Because the feature runs on on-device models, conversations stay private and translation works entirely offline.
Genmoji and Image Playground: Creative AI, Personalised
Genmoji can now be created by mixing emoji together and adding text descriptions to customise them further. Users can adjust attributes such as hairstyle and expression so that Genmoji based on friends and family look more like them. In Image Playground, Apple adds ChatGPT-powered generation, letting users choose artistic styles such as vector art or oil painting, or use “Any Style” to describe a custom look.
Notably, Image Playground only sends data to ChatGPT if users give explicit permission.
Visual Intelligence: Contextual Understanding Across Screens
Apple’s Visual Intelligence extends AI to whatever is on a user’s screen. By pressing the same buttons used to take a screenshot, users can ask ChatGPT or Apple Intelligence to identify products, analyse visuals, or extract information such as event details from messages and web pages. That information can then be turned into calendar entries, reminders, or online searches through compatible apps like Etsy and Google.
Apple Watch Gets Smarter With Workout Buddy
The Apple Watch sees the debut of Workout Buddy, a real-time motivational assistant that draws on a user’s workout data. It analyses fitness history, heart rate, distance, and other metrics to deliver personalised, dynamic feedback, spoken in a voice built from the voices of Fitness+ trainers. Workout Buddy works offline and respects user privacy, with data stored and processed on-device.
Foundation Models for Developers: Apple’s AI Goes Open
In a major move, Apple is opening up the on-device AI model behind Apple Intelligence to developers. Using the new Foundation Models framework, apps can integrate privacy-focused intelligence without needing internet access or incurring cloud costs. Features like guided generation, tool calling, and natural language understanding are accessible with as few as three lines of code.
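Apple’s “three lines of code” claim maps to creating a session and prompting it. A minimal sketch, assuming the LanguageModelSession type the Foundation Models framework exposes (names follow Apple’s developer documentation; the prompt itself is illustrative):

```swift
import FoundationModels

// Create a session backed by Apple's on-device foundation model.
let session = LanguageModelSession()

// Prompt the model; generation happens entirely on-device,
// so it works offline and nothing is sent to a server.
let response = try await session.respond(
    to: "Suggest three tags for a journal entry about a hike in the Alps."
)
print(response.content)
```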
Example use cases include personalised quizzes in educational apps or voice-controlled features in offline navigation apps.
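Apple hasn’t published the quiz example as code, but guided generation in the framework centres on the @Generable macro, which constrains the model’s output to a typed Swift value. A hedged sketch of the educational-quiz case (the QuizQuestion type is hypothetical; @Generable, @Guide, and respond(to:generating:) follow the framework’s documented shape):

```swift
import FoundationModels

// Hypothetical quiz type: @Generable lets the model produce this
// struct directly, and @Guide constrains individual fields.
@Generable
struct QuizQuestion {
    @Guide(description: "A short question about the student's current topic")
    var question: String

    @Guide(description: "Four answer choices, exactly one correct", .count(4))
    var choices: [String]

    @Guide(description: "Index of the correct choice", .range(0...3))
    var correctIndex: Int
}

let session = LanguageModelSession()
let result = try await session.respond(
    to: "Write a multiple-choice question about photosynthesis.",
    generating: QuizQuestion.self
)
print(result.content.question, result.content.choices)
```

Because the output is a typed value rather than free text, the app can render it directly without parsing model output.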
Shortcuts and Siri: Smarter, More Private
Apple’s Shortcuts app gets smarter with new actions driven by Apple Intelligence. Users can create custom workflows that tap into on-device AI or use Private Cloud Compute. Apple has also enhanced Siri, allowing it to handle complex queries, maintain conversation context, and interact with features like Writing Tools and ChatGPT.
For instance, a student could build a shortcut that compares a lecture transcription with their typed notes and generates a summary, without any of the data leaving the device.
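Apple’s materials don’t show this workflow as code, but one plausible way a developer would expose such a step to Shortcuts is an App Intent that calls the on-device model. A sketch combining the real AppIntents framework with the Foundation Models session above (the intent itself is hypothetical):

```swift
import AppIntents
import FoundationModels

// Hypothetical intent that Shortcuts can chain into a workflow:
// summarise text on-device and hand the result to the next action.
struct SummarizeTextIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Text On-Device"

    @Parameter(title: "Text")
    var text: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        let session = LanguageModelSession()
        let response = try await session.respond(
            to: "Summarize the following in three bullet points:\n\(text)"
        )
        // The summary never leaves the device; Shortcuts receives it
        // as the intent's return value.
        return .result(value: response.content)
    }
}
```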
Everyday Enhancements Across Apple Ecosystem
• Reminders now auto-categorise actions from emails and notes.
• Apple Wallet summarises order tracking from multiple merchants.
• Messages introduces auto-suggested polls and AI-generated backgrounds.
• Mail and Photos offer advanced summaries and memory creation using natural language.
• Visual Intelligence can identify objects via camera or screen and suggest calendar events.
• Clean Up in Photos removes distracting elements from pictures without altering key content.
• Image Wand turns rough sketches into polished visuals inside Notes.
• Smart Reply, Previews, and Reduce Interruptions Focus refine notifications and messaging.
Apple’s AI Privacy Strategy: Local First, Cloud Only When Needed
Apple continues to prioritise user privacy. Most AI processes happen entirely on-device, and when needed, Private Cloud Compute handles more intensive tasks without storing user data. The cloud servers run on Apple Silicon, and their code is open to independent auditing to ensure transparency.
Languages and Rollout Timeline
Apple Intelligence will support eight more languages by the end of 2025: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Traditional Chinese, and Vietnamese. It currently supports English (multiple regions), French, German, Italian, Spanish, Japanese, Korean, and Simplified Chinese.
The features are available now to developers through the Apple Developer Program and will enter public beta next month via the Apple Beta Software Program. The full rollout is expected this autumn on all iPhone 16 models, the iPhone 15 Pro series, and iPads and Macs with M1 or later chips, provided Siri and the system language are set to a supported language.