• As part of their fundraising announcement, they did disclose that they’re partnering with several companies, including OpenAI. My guess is that they’ll use a combination of Whisper and GPT-4 for some kind of personal assistant as part of their hardware product. While Whisper has been shown to be quite capable of running on-device, it will probably be some time before a powerful language model can do so in production.

  • Alternatively, Apple could deploy a single LLM as part of an OS update. Apps could then interact with the LLM through system frameworks, similar to how the Vision SDK works.

  • It’s incredibly unlikely that Apple will ever license LLMs from outside parties like OpenAI or Anthropic. Their strategy is much more about either building everything in-house or acquiring small startups and integrating them into their products.