The rushed launch of Apple Intelligence was a debacle, reminding Apple it should focus on readiness rather than quickly appeasing shareholders.
It sucks for a lot of reasons but mostly because AI is always a “black box” (DeepSeek being the exception) with “magic proprietary code”. You think “Tim Apple” isn’t working with the Trump admin to ID people for El Salvador?
Being open source doesn’t magically make it good. There’s a ton of open source software that straight up sucks.
Yes, but in this case, you can see what the model is doing, and it runs on your actual computer. By contrast, most LLM providers today run their models on their own server farms, partly because it’s prohibitively expensive to run a big model on your machine (DeepSeek’s famous R1 model needs on the order of hundreds of GBs of VRAM, i.e. dozens of GPUs) and partly so that they have more control over the thing.
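To see why the VRAM figure is so large, a back-of-envelope sketch: holding the weights alone takes roughly (parameter count) × (bytes per parameter), before activations and KV cache. The parameter count below uses R1’s published ~671 billion figure; the precision choices are illustrative, not a claim about any particular deployment.

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights.

    Ignores activations and KV cache, which add more on top.
    """
    return num_params * bytes_per_param / 1e9

# DeepSeek R1 has roughly 671 billion parameters (public figure).
params = 671e9
for label, bytes_pp in [("FP16", 2.0), ("FP8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(params, bytes_pp):,.0f} GB")
```

Even aggressively quantized to 4 bits per weight, that’s hundreds of GBs, which is why consumer hardware can’t host the full model and providers keep it on their own clusters.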
AI isn’t a black box in the sense that it’s a mystery machine that could do anything. It’s a black box in the sense that we don’t know exactly how it works internally — which particular weight or activation is responsible for what — even though we have a fairly good general idea of what goes on.
It’s like a brain in that sense. We don’t know which exact nerve circuits do what, but we have a fairly good general idea of how brains work. We don’t assume that when we talk to someone, they’re transmitting everything we say to a hivemind, because brains can’t do that.