This Is the Year Apple Has to Deliver on Siri

Mark Gurman delivered more Siri news this week, and I’m left with the same feeling I’ve had for over a year now: equal parts hope and frustration.

Here’s the picture as it currently stands. Apple is planning two separate Siri overhauls, arriving months apart.

The Spring Update: iOS 26.4

The first update arrives with iOS 26.4, expected around March or April. This is the non-chatbot version built on a custom Google Gemini model running on Apple’s Private Cloud Compute servers. The goal here seems straightforward: finally cash all those checks Apple wrote at WWDC 2024.

Remember those promises? Siri that understands personal context. Siri that can find the book recommendation your mom texted you. Siri that works across apps instead of being confined to one at a time. Features that were supposed to ship with iOS 18, then got pushed to “later,” then pushed again to 2026.

The Fall Overhaul: iOS 27

Then, just a few months later at WWDC 2026, Apple plans to announce an entirely different approach. This one is codenamed “Campos,” and it’s a full chatbot experience. Think Claude or ChatGPT, but baked directly into your iPhone, iPad, and Mac. Voice and text inputs. Persistent conversations you can return to. The works.

Craig Federighi has previously expressed skepticism about chatbot interfaces, preferring AI that’s “integrated into everything you do, not a bolt-on chatbot on the side.” But competitive pressure from OpenAI and others seems to have changed the calculus.

Why Two Versions?

This is where I get frustrated. Releasing two fundamentally different versions of Siri months apart doesn’t inspire confidence. The first version sounds like something they cobbled together just to say they kept their promises. I’d almost prefer they skipped it entirely and focused all their energy on the chatbot.

Why This Matters

I’ve been critical of Siri over the last decade. Every year Apple makes promises it can’t keep. Every WWDC brings demos of features that arrive late, broken, or not at all.

And yet I continue to believe that a smart model on our Apple devices, with access to our local data, where everything stays local and private or runs through Private Cloud Compute, could be one of the best implementations of AI we’ve seen.

Think about who this could help. I spent 30 years practicing law. I know firsthand how many professionals are locked out of these AI tools because the privacy story isn’t good enough. Apple could change that.

And for the rest of us? We’re not particularly excited about sharing our personal information with giant AI companies either. A truly private assistant that actually knows your life without selling it to advertisers? That’s the dream.

Apple is uniquely positioned to deliver this. They have the hardware. They have the ecosystem integration. They have the privacy infrastructure. They have over 2 billion devices that could benefit.

But they have yet to prove they can actually ship it.

Where I Am Right Now

I currently use Siri where I can, but that’s very limited. I get far more use out of Claude than I do Siri at this point. (Claude’s recent Cowork feature is shockingly impressive.) That’s not where I want to be. I want the assistant built into my devices to be the one I reach for first.

The Bottom Line

All of this feels like it’s coming to a boiling point. We’ve all been patient with Apple for years now. It’s time for them to show whether they can pull this off.

Let’s hope that in six months, Apple has finally answered the call.

Apple’s AI Team Continues to Jump Ship for Meta

Bloomberg reports that Apple has lost a fourth person from their foundation models group to Meta. Apple is giving raises to the existing team members, but they likely pale in comparison to Zuck’s poaching offers (some reportedly in the hundred-million-dollar range).

Apple is not in the frontier model race, but they still need to be developing their own models for a lot of reasons. To me, this feels like further evidence that Apple needs to make some sort of deal to partner with a frontier model lab (as Microsoft did with OpenAI) or start making acquisitions, if only for the talent.

I think Apple should be willing to license somebody else’s model if that’s what it takes to fix Siri, even temporarily. In light of all this poaching, I expect that’s only more true now.

Apple and AI: Time for Action, Not Excuses

Mark Gurman has published the latest tell-all about Apple’s failure to claim a seat at the artificial intelligence big boy table. The more I read about this, the more it becomes clear that failure here has many fathers. Some of Apple’s leadership apparently didn’t see the underlying technology as relevant. Some didn’t want to spend the money. Others just didn’t make it a priority. For all of these reasons, there’s a ton of innovation happening right now in artificial intelligence, and Apple is responsible for none of it.

At this point, I’m much less interested in how Apple got into this position and much more interested in how they intend to get out of it. Apple remains a massive company with tremendous resources, and in my opinion, it’s not too late to turn this battleship around. I still think Apple’s idea for artificial intelligence, as expressed last year at WWDC, makes sense: refine AI into genuinely useful tools that consumers want, and combine that with private, on-device data to give users something truly unique.

But the question that I first asked last June still remains unanswered: Does Apple have the AI chops to actually make this happen? So far, it appears they don’t. There’s been a recent management shuffle, with Mike Rockwell now in charge of Siri, but the jury’s still out on whether this will be enough.

I’m hoping that the combination of leadership changes and a very public black eye will finally give Apple the push it needs to deliver something remarkable in AI. At the end of the day, Apple’s users—myself included—are waiting to see if the company can make good on its promise to deliver thoughtful, private, and genuinely helpful artificial intelligence.

Don’t Underestimate Apple’s Shot at On-Device Medical AI

There’s a rumor that Apple is working on an on-device medical AI. The idea is that your iPhone or Apple Watch could use its onboard silicon to privately analyze your health data and offer recommendations, without sending that sensitive information to the cloud.

The general vibe I’m seeing in response to this rumor is justified skepticism. Plenty of folks out there think there’s no way Apple can pull this off, but I think this is exactly the kind of thing they should be doing. It’s a real opportunity for Apple.

Apple has been steadily building up its health tech for years. With features like AFib detection, ECG, and Fall Detection, they’ve proven they can deliver meaningful health tools. And they’ve done it with an eye toward user privacy and accessible design.

Now, imagine layering a personalized AI model on top of that foundation — something smart enough to notice patterns in your vitals, flag potential concerns, or even offer preventative guidance. And because Apple controls the hardware, they could run that AI model entirely on-device. That means your health data stays private, living only on your phone or watch, not bouncing around in the cloud.

Apple’s unique position here — owning both the hardware and the operating system — gives them access to a depth of personal health data that no off-the-shelf Large Language Model could ever touch. Combine that with their Neural Engine and you have a real opportunity to do something both powerful and private.

This also feels like a moment for Apple to make a statement with “Apple Intelligence.” So far, Apple’s AI initiative has been underwhelming and disappointing. This could be a way for them to reset expectations with something carefully designed, respectful of privacy, and genuinely useful.

Of course, this only works if they get it right. Rushing something half-baked out the door won’t cut it, especially when people’s health (and Apple’s AI reputation) is at stake. But if they take their time and nail the execution, this could be a defining moment for Apple’s AI efforts and one more key feature that saves lives.

I hope the rumor’s true and that Apple gives this the time and resources it deserves. It could be something special.

Apple Mail’s New Sorting Features

Apple’s latest operating system betas have finally brought the new Mail sorting and redesign features to iPad and Mac. While we’ve had time to experience these features on iPhone, their arrival on all platforms gives us a complete picture of Apple’s vision for email management.

Apple Mail window with no message selected, showing the new Apple Intelligence sorting feature with the Promotions label in red.

The response has been interesting. Power users generally aren’t impressed, arguing that web-based mail sorting tools and services like SaneBox offer far more sophisticated features. They’re right. However, I’ve noticed something different among casual users who have never experienced mail sorting before: they like Apple’s new email sorting.

I decided to experiment with this myself. I turned off all my fancy email sorting rules for my personal account and switched to Apple Mail’s new system. After some initial training, I’ve found it works surprisingly well. Sure, my MacSparky email still requires more advanced sorting that’s beyond what Apple offers, but for personal correspondence, this new system hits a sweet spot. Plus, there’s the added benefit of privacy.

This update represents a shift in Apple’s Mail development strategy. For years, they focused primarily on infrastructure improvements, making the app more stable and secure. It’s refreshing to see them adding new features again, even if they aren’t targeting power users. Not every feature needs to cater to the most demanding users, and sometimes simplicity, combined with privacy, is probably where Apple should be aiming.

Apple’s Too Conservative Approach to Text Intelligence

Apple is, understandably, taking a conservative approach to artificial intelligence. Nowhere is this more obvious and product-crippling than in its text intelligence features. I am a fan of using AI for an edit pass on my words. Specifically, I’ve come to rely on Grammarly and its ability to sniff out overused adverbs and other general sloppiness in my writing.

I’ve been around long enough to recall when grammar checkers first started appearing in applications like Microsoft Word. They were useless. It was comical how often their recommendations went against the grammar rules and made your writing worse. It wasn’t until the arrival of Grammarly that I got back on board with the idea of a grammar checker, and it’s been quite helpful. Note that I’m not using artificial intelligence to write for me; I’m using it to check my work and act as a first-pass editor. The problem I’ve always had with Grammarly is that it sends my words to the cloud whenever I want them checked.

Ideally, I’d like that done privately and locally. That’s why I was so excited about Apple Intelligence and text intelligence. It would presumably all happen on the device or Apple’s Private Cloud Compute servers. Unfortunately, at least in beta, Apple Intelligence isn’t up to the task. That conservative approach makes Apple’s Text Intelligence useless to me in this editor role. While Apple’s tools can identify obvious grammatical errors, they fall short in the more nuanced aspects of writing assistance.

A telling example: As a test, I recently gave Apple Intelligence a paragraph where the word “very” appeared in three consecutive sentences — a clear style issue that any modern writing tool would flag. However, Apple’s text intelligence didn’t notice this repetition. That’s very, very, very bad.

This limitation reflects a broader pattern in Apple’s approach to AI tools. While the foundation is solid, the current implementation may be too restrained to compete with existing writing assistance tools that offer more comprehensive feedback on style and readability. The challenge for Apple will be finding the sweet spot between maintaining their caution and delivering genuinely useful writing assistance. I get the big picture here. I know they’re not trying to make a Grammarly competitor, but they need to take several steps away from that conservative benchmark if this is going to be useful.

Another problem with the text tools is the implementation of recommended changes. You can have it either replace your text entirely (without any indicator of what exactly was changed) or give you a list of suggested edits, which you must implement manually. Other players in this space, like Grammarly, highlight recommended changes and make it easy to implement or ignore them with a button.

Apple is famous for its ability to create excellent user interfaces, and I suspect they could do something similar but probably better if they put their minds to it. Unfortunately, the current version of the text intelligence tools in Apple Intelligence isn’t even close.

Another Siri?

Mark Gurman’s got another AI/Siri report and it’s a doozy. According to the latest rumors, Apple is cooking up an LLM-powered Siri for iOS 19 and macOS 16.

The idea is that this would be yet another Siri reboot, but this time built on Apple’s own AI models. Think ChatGPT or Google’s Gemini but with that special Apple sauce (and privacy-focused access to your on-device data).

Here’s where I get a bit twitchy, though. Apple has been tight-lipped about the details of its AI strategy, and it’s starting to wear thin. If this massive LLM overhaul is really coming next year, what exactly are we getting with the current “Apple Intelligence” features that are supposed to land this year?

If, after all the WWDC and iPhone release hype, we get through all these betas only to find that Siri is still struggling with basic tasks, and Apple then says, “But wait until next year, we’ve got this whole new system that will finally fix everything!”, well, that will be just a little heartbreaking for me.

Apple’s Image Playground: Safety at the Cost of Utility?

As I’ve spent considerable time with Apple’s Image Playground in the recent iOS 18.2 beta, I’m left with more questions than answers about Apple’s approach to AI image generation. The most striking aspect is how deliberately unrealistic the output appears — every image unmistakably reads as AI-generated, which seems to be exactly what Apple intended.

The guardrails are everywhere. Apple has implemented strict boundaries around generating images of real people, and interestingly, even their own intellectual property is off-limits. When I attempted to generate an image of a Mac mini, the system politely declined.

Drawing a Mac mini is a no-go for Image Playground

This protective stance extends beyond the obvious restrictions: Try anything remotely offensive or controversial, and Image Playground simply won’t engage.

Apple’s cautious approach makes sense. Apple’s customers expect their products to be safe. Moreover, Apple is not aiming to revolutionize AI image generation; rather, they’re working to provide a safe, controlled creative tool for their users. These limitations, however, can significantly impact practical applications. My simple request to generate an image of a friend holding a Mac mini (a seemingly innocent use case) was rejected outright.

I hope Apple is aware of this tipping point and reconsidering as Image Playground heads toward public launch. At least let it draw your own products, Apple.

An Automation Golden Age

Did you know I have a newsletter? I post some, but not all, of my newsletter’s content to this blog. Here’s a recent one.

An Automation Golden Age

I’ve mentioned several times on my podcasts that we’re experiencing a renaissance in automation, particularly on the Mac. This shift isn’t driven by a single tool but rather by the interoperability of a collection of tools.

AppleScript has been available on the Mac for decades, offering significant automation opportunities if you want to learn it. AppleScript allows users to connect applications and work with the same data to accomplish unified tasks. However, for many, learning AppleScript was a challenge. Programmers found it too different from traditional programming languages, and non-programmers struggled with its syntax. As a result, AppleScript adoption remained relatively small.

Apple and Sal Soghoian introduced Automator in early 2005 to address this, bringing drag-and-drop automation to the Mac. Meanwhile, tools like Keyboard Maestro and Hazel, developed outside of Apple, have been actively filling the gaps in Apple’s automation solutions for years.

Then came Shortcuts (formerly Workflow). Initially developed for iOS, Shortcuts is now firmly embedded in the Mac ecosystem. It’s a spiritual (if not direct) descendant of Automator, and in recent years, these tools have learned to work together. You can run Keyboard Maestro macros from Shortcuts, and Shortcuts can be triggered from within Hazel. Users can now mix and match these tools to create robust automation chains, combining the strengths of each.
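To make that interoperability concrete, here’s a minimal sketch of chaining these tools from the macOS command line. The `shortcuts` CLI ships with macOS Monterey and later, and Keyboard Maestro exposes a documented AppleScript interface; the shortcut and macro names below (“File Inbox PDF” and “Clean Desktop”) are hypothetical stand-ins for your own.

```shell
#!/bin/sh
# See what's available to run (the `shortcuts` CLI is built into macOS 12+)
shortcuts list

# Run a shortcut by name — "File Inbox PDF" is a hypothetical example
shortcuts run "File Inbox PDF"

# Trigger a Keyboard Maestro macro via its AppleScript interface;
# "Clean Desktop" is a hypothetical macro name
osascript -e 'tell application "Keyboard Maestro Engine" to do script "Clean Desktop"'
```

A script like this is also how Hazel gets into the chain: a Hazel rule can run an embedded shell script when a file matches, which in turn can kick off a shortcut or a macro.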

For those willing to invest the time to master—or at least gain a working knowledge of—these tools, few tasks on the Mac can’t be automated today.

The next big shift in this process is the integration of artificial intelligence. AI is already proving useful in helping generate automation, but if Apple Intelligence can fully tap into user data while still protecting user privacy and integrate it with Shortcuts, we could see a new era of powerful, personalized automation. This leap could be as significant as the jump from AppleScript to Automator. Of course, this depends on Apple getting both Apple Intelligence and the integration right, but I suspect this is already on the big whiteboard in Cupertino.

Shortcuts and Apple Intelligence both use the Intents system to work their magic. Developers who build for Shortcuts benefit from Apple Intelligence and vice versa. With this common architecture, I believe Apple will eventually tighten the connections between Shortcuts and Apple Intelligence. It won’t happen overnight, but over the coming years, I expect this combination to become the next frontier of automation in the Apple ecosystem.

Apple Intelligence Summarization

Because I’m a little nuts, I’m running the Apple Intelligence betas on all my devices. In my opinion, Apple Intelligence still has a ways to go, and these are early days, but we’re starting to get a peek at what Apple’s been working on. The more powerful Apple Intelligence features haven’t entered beta yet, but there are already some features in this first tier of on-device Apple Intelligence features that I’m genuinely enjoying.

One of them is the message summaries. In earlier betas, they applied only to the Messages and Apple Mail apps, but now you can extend them to third-party applications, and you can control this on a per-app basis in the Notification settings. This means that notifications on your phone are now summarized by Apple Intelligence on-device. With less than a week of testing, I already dig it.

The summaries are more concise and include more details than the previous notifications generated by the apps. There are no teasers here. It just tells me what I need. It’s AI-based, so it occasionally gets it wrong, but we’re still in early beta, and I will give it some grace. Also, this is the worst it will ever be, and it’s already pretty good.