freetime2
5 hours ago
For a real-world example of the challenges of harnessing LLMs, look at Apple. Over a year ago they had a big product launch focused on "Apple Intelligence" that was supposed to make heavy use of LLMs for agentic workflows. But all we've really gotten since then are a handful of minor tools for making emojis, summarizing notifications, and proofreading. They even had to roll back the notification summaries for a while for being wildly "out of control". [1] And in this year's iPhone launch the AI marketing was toned down significantly.
I think Apple execs genuinely underestimated how difficult it would be to get LLMs to perform up to Apple's typical standards of polish and control.
teeray
31 minutes ago
> minor tools for making emojis, summarizing notifications, and proofreading.
The notification and email summaries are so unbelievably useless, too: it’s hardly any more work to just skim the notification or email, which I do anyway.
SchemaLoad
9 minutes ago
Like most AI products, it feels like they started with a solution and went searching for a problem. Text messages being too long wasn't a real problem to begin with.
There are some good parts to Apple Intelligence, though. I find the priority notifications feature works pretty well, and the photo cleanup tool handles small things like removing your finger from the corner of a photo, though it's not going to manage a huge task like removing a whole person from a photo.
zitterbewegung
4 hours ago
Now their strategy is to let Apple Events work with MCP.
https://9to5mac.com/2025/09/22/macos-tahoe-26-1-beta-1-mcp-i...
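Purely as an illustration of the general idea, not Apple's actual implementation, here is a minimal sketch of how an MCP server could wrap an Apple Event behind a tool, using the open-source MCP Python SDK and osascript; the server name, tool, and AppleScript here are hypothetical:

```python
# Minimal sketch (not Apple's implementation): exposing an Apple Event,
# sent via AppleScript/osascript, as an MCP tool that an agent can call.
import subprocess

from mcp.server.fastmcp import FastMCP  # open-source MCP Python SDK

mcp = FastMCP("apple-events-bridge")  # hypothetical server name


@mcp.tool()
def create_reminder(title: str) -> str:
    """Create a Reminders item by sending an Apple Event through osascript."""
    script = (
        'tell application "Reminders" to make new reminder '
        f'with properties {{name:"{title}"}}'
    )
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() or "ok"


if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client (e.g. an agent) can call the tool
```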
N_Lens
43 minutes ago
Apple’s typical standards of “polish and control” seem to be slipping drastically if macOS Tahoe is anything to go by.
alfalfasprout
2 hours ago
> I think Apple execs genuinely underestimated how difficult it would be to get LLMs to perform up to Apple's typical standards of polish and control
It's not only Apple; this is happening across the industry. Executives' expectations of what AI can deliver are massively inflated by Amodei et al. essentially promising human-level cognition with every release.
The reality is that, aside from coding assistants and chatbot interfaces (a la ChatGPT), we've yet to see AI truly transform polished ecosystems like smartphones and OSes, and for good reason.
api
2 hours ago
Standard hype cycle. We are probably cresting the peak of inflated expectations.
__loam
4 hours ago
I'm happy they ate shit here because I like my Mac not getting Copilot bullshit forced into it, but apparently Apple had two separate teams competing against each other on this. Supposedly a lot of politics got in the way of delivering a good product, combined with the general difficulty of building LLM products.
Gigachad
2 hours ago
I do prefer that Apple is opting to have everything run on device so you aren’t being exposed to privacy risks or subscriptions. Even if it means their models won’t be as good as ones running on $30,000 GPUs.
gerdesj
an hour ago
On device.
If you have, say, 16GB of GPU RAM, around 64GB of system RAM, and a reasonable CPU, you can make decent use of LLMs. I'm not an Apple jockey, but I think you normally have something like that available, so you will have a good time, provided you curb your expectations.
I'm not an expert, but it seems that the jump from 16 to 32GB of GPU RAM is large, both in terms of what you can run and the sheer cost of the GPU!
If you have 32GB of local GPU RAM and gobs of RAM, you can run some pretty large models locally, or lots of small ones for different tasks (rough sizing arithmetic below).
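As a rough sketch of why that 16GB vs. 32GB jump matters, here is some back-of-the-envelope arithmetic. It assumes 4-bit quantized weights (about 0.5 bytes per parameter) plus roughly 20% overhead for KV cache and runtime buffers; real numbers vary by runtime, quantization scheme, and context length:

```python
# Back-of-the-envelope VRAM estimate for running quantized LLMs locally.
# Assumptions (approximate): 4-bit weights ~= 0.5 bytes per parameter,
# plus ~20% overhead for KV cache, activations, and runtime buffers.

def approx_vram_gb(params_billion: float, bytes_per_param: float = 0.5,
                   overhead: float = 1.2) -> float:
    """Very rough VRAM needed to hold a quantized model of the given size."""
    return params_billion * bytes_per_param * overhead

for size in (7, 13, 32, 70):  # common open-weight model sizes, in billions of params
    print(f"{size:>3}B model: ~{approx_vram_gb(size):.0f} GB VRAM at 4-bit")

# Roughly: 7B ~ 4 GB, 13B ~ 8 GB, 32B ~ 19 GB, 70B ~ 42 GB.
# So 16 GB comfortably fits ~13B-class models, while 32 GB opens up the ~30B class.
```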
I'm not too sure about your privacy/risk model, but owning a modern phone is a really bad starter for ten! You have to decide what that means for you, and that's your thing and yours alone.
alfalfasprout
2 hours ago
It also means that when the VC money runs dry, it's sustainable to keep running those models on-device vs. losing money running them on those $$$$$ GPUs (or requiring consumers to opt for expensive subscriptions).
genghisjahn
2 hours ago
Apparently? From what? Where did this information come from that they had two competing teams?
alwa
an hour ago
I feel like I hear people referring to Wayne Ma’s reporting for The Information to that effect.
https://www.theinformation.com/articles/apple-fumbled-siris-...
> Distrust between the two groups got so bad that earlier this year one of Giannandrea’s deputies asked engineers to extensively document the development of a joint project so that if it failed, Federighi’s group couldn’t scapegoat the AI team.
> It didn’t help the relations between the groups when Federighi began amassing his own team of hundreds of machine-learning engineers that goes by the name Intelligent Systems and is run by one of Federighi’s top deputies, Sebastien Marineau-Mes.
protocolture
2 hours ago
Mac LLM vs Lisa LLM?
DonHopkins
2 hours ago
Apple ][ LLM Forever!
https://paleotronic.com/2025/08/03/connect-ai-to-microm8-app...