Apple Inc. initiated the phased rollout of "Apple Intelligence," its suite of generative artificial intelligence features, in October 2024, embedding these capabilities across its iOS, iPadOS, and macOS operating systems. This strategic integration positions Apple in direct competition with other major AI tool developers, including Google, OpenAI, and Anthropic.
The platform, which Cupertino marketing executives have branded as "AI for the rest of us," leverages large language and image generation models to enhance existing application functionalities. Its initial deployment commenced with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, supporting U.S. English at launch and subsequently expanding to other English localizations, with broader language support projected for 2025.
Key features of Apple Intelligence include "Writing Tools," integrated into applications such as Mail, Messages, and Pages. These tools summarize long texts, proofread content, and draft messages from user-provided prompts. In the realm of image generation, the system introduces "Genmoji" for creating custom emoji and the standalone "Image Playground" application, which generates visual content for use in messages or presentations.
Apple Intelligence also delivers a redesigned Siri, now more deeply integrated into the operating systems. This enhancement provides cross-app functionality and "onscreen awareness," enabling the assistant to draw on contextual information from active applications. Apple Senior Vice President of Software Engineering Craig Federighi stated at WWDC 2025 that a more personalized version of Siri, intended to comprehend "personal context," requires additional development time to meet internal quality standards. A Bloomberg report cited error-ridden performance as a factor contributing to the delay, and this advanced Siri is now anticipated in 2026.
Further advancements unveiled at WWDC 2025 include "Visual Intelligence" for image search and a "Live Translation" feature, designed for real-time conversation translation within the Messages, FaceTime, and Phone applications. These additional capabilities are scheduled for release later in 2025 alongside iOS 26.
Compatibility for Apple Intelligence features spans a range of devices, encompassing all iPhone 16 models, the iPhone 15 Pro and Pro Max, and iPad, MacBook Air, MacBook Pro, iMac, Mac mini, Mac Studio, and Mac Pro models equipped with M1 chips or later. iPad mini devices with the A17 Pro chip or newer are also supported. The restriction to the "Pro" versions of the iPhone 15 is attributed to the capabilities of their A17 Pro chipset, which the standard iPhone 15's A16 chip lacks.
Apple's approach to AI processing involves a "small-model" strategy for numerous tasks, facilitating on-device execution and reducing resource demands. For more intricate queries, the company utilizes its "Private Cloud Compute" offering, which relies on remote servers powered by Apple Silicon. The company asserts that this cloud solution maintains the same privacy standards as its consumer devices.
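To make that division of labor concrete, the toy sketch below illustrates the kind of on-device-first routing the strategy implies; the type names and the word-count heuristic are purely hypothetical and are not Apple APIs, since the company has not published its actual routing logic.

```swift
import Foundation

// Illustrative only: a toy router reflecting the small-model strategy
// described above. `InferenceRoute` and the word-count threshold are
// hypothetical; Apple's real decision criteria are not public.
enum InferenceRoute { case onDevice, privateCloud }

/// Keep short, simple requests on the local small model and send
/// heavier generative queries to Private Cloud Compute.
func route(for prompt: String) -> InferenceRoute {
    let wordCount = prompt.split(whereSeparator: \.isWhitespace).count
    return wordCount <= 200 ? .onDevice : .privateCloud
}

// Example: a brief proofreading request stays on device.
print(route(for: "Proofread this sentence."))  // onDevice
```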
The company has also integrated OpenAI's ChatGPT, beginning with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. The partnership enhances Siri's knowledge base and expands the Writing Tools' capabilities by enabling content generation from prompts. Users with paid ChatGPT accounts gain access to premium features. Apple has also confirmed plans for future collaborations with additional generative AI services, identifying Google Gemini as a prospective integration.
At WWDC 2025, Apple introduced the "Foundation Models framework," which allows developers to leverage Apple's offline AI models within third-party applications. According to Federighi, this framework is designed to empower developers to create new AI features that are intelligent, available offline, and privacy-preserving, potentially leading to a reduction in cloud API costs.
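As a rough illustration of what the framework enables, here is a minimal sketch of prompting the on-device model from a third-party app. It follows the API Apple demonstrated at WWDC 2025, though the exact types and method signatures should be treated as assumptions until checked against the shipping SDK.

```swift
import FoundationModels

// Minimal sketch based on the FoundationModels API shown at WWDC 2025;
// treat the exact types and method names as assumptions.
func summarize(_ text: String) async throws -> String {
    // A session wraps the on-device language model; instructions steer its behavior.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )
    // The request runs entirely on device, so it works offline.
    let response = try await session.respond(to: text)
    return response.content
}
```

Because the call runs locally, it works without a network connection and incurs no per-request charge, which is the reduction in cloud API costs the framework is meant to enable.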