Artificial Intelligence Performance … “Also” Security

Samuel Axon at Ars Technica published an interview with Apple’s artificial intelligence (AI) chief John Giannandrea concerning the state of AI at Apple.

If you go back enough years, you can find plenty of articles and podcasts where I bemoaned Apple’s ability to compete in AI, given that the company was unwilling, for instance, to pool all of our pictures into one massive server database for the AI gods. Instead, Apple always said it would protect user privacy, and while I agreed with the sentiment, I worried about the long-term effects.

However, Apple has been turning my opinion around on this over the past several years, and this latest interview settles the question for me entirely. The point that comes through several times is that Apple believes doing AI analysis on-device actually gives the best user experience, because the computing happens closer to the user.

Yes, I understand this perception of bigger models in data centers somehow are more accurate, but it’s actually wrong. It’s actually technically wrong. It’s better to run the model close to the data, rather than moving the data around. And whether that’s location data—like what are you doing— [or] exercise data—what’s the accelerometer doing in your phone—it’s just better to be close to the source of the data, and so it’s also privacy preserving.
— John Giannandrea

This is the first time I’ve heard someone from Apple argue that its AI is better because it is faster, with the privacy benefits as icing on the cake. Privacy has always been the headline of that argument; it has never been the “also,” until now.

I think the reason Apple is starting to talk more about this is the significant lead it has with Apple silicon. I’m just not convinced any other company can pull off on-device AI the way Apple is right now. If anyone at Apple were to ask me, I would say push the gas pedal down on on-device AI as hard as you possibly can.