Ethan Mollick made a great point in a LinkedIn post about what Apple announced on the AI front in yesterday’s WWDC presentation.

“a 3B quantized model running on local silicon will be good for some stuff but not agentic reasoning. The cloud model doesn’t seem to maintain state or history and isn’t frontier. It looks like GPT-4 is just for specific questions.

It is amazing that you can beat Siri largely with local AI on the phone, but the gap between a 3B model (plus whatever low latency model they are running in the cloud) and a frontier model is large.”
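To put rough numbers on that gap, here is a back-of-envelope sketch. Only the 3B figure comes from the quote above; the bits-per-weight values and the frontier-scale parameter count are my own illustrative assumptions, not published specs.

```python
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# ~3B on-device model at an assumed ~4 bits per weight after quantization
on_device = model_size_gb(3, 4)

# Hypothetical frontier-scale model (size assumed; frontier sizes aren't public) at fp16
frontier = model_size_gb(400, 16)

print(f"~3B quantized model:   ~{on_device:.1f} GB of weights")  # ~1.5 GB
print(f"frontier-scale model: ~{frontier:.0f} GB of weights")    # ~800 GB
```

Under those assumptions, the on-device model has to fit in a slice of a phone’s RAM, while a frontier model needs hundreds of gigabytes spread across data-center hardware. That is the size of the gap Mollick is pointing at.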

The way I interpret what Apple is doing with AI, at least from a consumer standpoint right now, is that they are going for the fundamentals and the lower-end use cases rather than the higher end, or the edges of what might be possible. That feels safe, and it feels very Apple-ish to me. They care deeply about the experience, and for most people, the experience of using AI today is not great a lot of the time.
