[[MOB]] @ 2024-06-12 Wed 20.46pm

> [!-cf-]+ [[Related notes]]
> - [[AI assistant]]

---

# [[Journal section]]

### [[MOB]] @ 2024-06-12 Wed 20.47pm

After reading some Ben Thompson and listening to today's Sharp Tech, where they talk about Apple Intelligence a lot, I watched the Platforms State of the Union: https://developer.apple.com/videos/play/wwdc2024/102/?time=95

![[Pasted image 20240612205539.png]]

![[Pasted image 20240612205513.png]]

We want to run as much as we can on-device because it delivers low latency and a better user experience.

![[Pasted image 20240612205618.png]]

And of course, it helps keep users' personal data and activity private.

So, to get more capability on device, we have "[[Adapter]]s". But some queries are too advanced to run on these limited on-device models. For those, we go to Private Cloud Compute to run the larger foundation models.

![[Pasted image 20240612210202.png]]

Okay, so then the architecture:

![[Pasted image 20240612212950.png]]

- An on-device "[[Semantic index]]" that can organize personal information from across apps

![[Pasted image 20240612213011.png]]

- And an [[App Intents Toolbox]] that can understand the capabilities of apps and tap into them on a user's behalf

![[Pasted image 20240612213156.png]]

Then, when a user makes a request:

- The [[Orchestrator]] decides how it's handled

![[Pasted image 20240612213301.png]]

Either:
- through its on-device intelligence stack (on-device models)
- or using Private Cloud Compute (server models)

Either way, it draws on the semantic index to ground each request in the relevant personal context, and uses the app intents toolbox to take actions for the user.

Note the block on the lower left, containing the semantic index, app intents toolbox, orchestration, and on-device models: they're calling that whole cluster of things the "Personal Intelligence System".
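The grounding-then-routing flow described above can be sketched as a toy router. This is purely my own illustration of the idea, not Apple's actual API: `SemanticIndex`, `route_request`, and the word-count "complexity" heuristic are all hypothetical stand-ins for whatever the real orchestrator does.

```python
from dataclasses import dataclass, field


@dataclass
class SemanticIndex:
    """Toy stand-in for the on-device semantic index: personal facts keyed by topic."""
    facts: dict = field(default_factory=dict)

    def ground(self, request: str) -> list:
        # Return any stored facts whose topic word appears in the request.
        return [fact for topic, fact in self.facts.items() if topic in request]


def route_request(request: str, context: list, on_device_budget: int = 20) -> str:
    """Hypothetical orchestration step: simple requests stay on device,
    harder ones (here, just longer ones) go to the server models."""
    if len(request.split()) <= on_device_budget:
        target = "on-device model"
    else:
        target = "Private Cloud Compute"
    # Either way, the request is grounded in the user's personal context.
    return f"{target} handles: {request!r} with context {context}"


index = SemanticIndex(facts={"flight": "Mom's flight lands at 14:30"})
request = "When does my mom's flight land?"
print(route_request(request, index.ground(request)))
```

The point of the sketch is only the shape of the flow: grounding happens locally against the semantic index regardless of where inference runs, and the routing decision is made per request, not per app.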