Mac Power Users 789: Back to the Mac, with Matt Gemmell

Author Matt Gemmell joins Stephen and me on this episode of Mac Power Users to talk about using an iPad as his only computer for eight-and-a-half years and why he recently switched back to the Mac.

This episode of Mac Power Users is sponsored by:

  • 1Password: Never forget a password again.
  • Ecamm: Powerful live streaming platform for Mac. Get one month free.
  • DEVONthink: Get Organized — Unleash Your Creativity. Use this link for 10% off.

Sparky’s Case for Focus Modes (And a Short Poll)

One of the best features Apple has added in recent years is Focus Modes. I use them every day across all my devices, and they have become an essential part of how I manage my attention, my work, and even my personal life.

At its core, a Focus Mode is a filter between you and the rest of the world. Instead of being bombarded with every notification, email, or message the moment it arrives, Focus Modes let you decide what gets through based on what you’re doing. They are powerful and easy to automate, so you don’t have to think about it.

My Favorite Focus Modes

I have the usual Work and Personal modes, but I also get more specific:

  • Podcasting Mode: Filters out everything but my co-hosts and essential podcasting tools. My Home Screen also changes to show time zone widgets for my co-hosts.
  • Production Mode: Prioritizes video editing, screen recording, and keeps distractions to a minimum.
  • Deep Work Mode: Only lets in the people who truly need me, no social media, and a Home Screen optimized for writing and thinking.
  • Disneyland Mode: This one is special. The moment I step near Disneyland, my devices enter a mode that aggressively limits work interruptions so I can enjoy the time with my family.

Automating Focus Modes

You can turn Focus Modes on manually (I usually do it from my Apple Watch), but you can also automate them. In addition to location (which is how my Disneyland Mode turns on), you can trigger Focus Modes by time of day, when you open certain apps, or even based on whether you’re at home or at work.

More Than Just Silence

What I love about Focus Modes is that they don’t just filter notifications. They also change my device’s environment:

  • Custom Home Screens: Each mode gets its own set of widgets and apps. For example, in work mode I see the Slack app we use at Relay, whereas in personal mode I have my personal appointments and the weather available.
  • Custom Watch Faces: A quick glance at my Apple Watch instantly tells me what mode I’m in. For example, any blue face represents work mode and green represents personal mode.
  • Shortcuts & Automations: Turning on a Focus Mode can launch apps, set timers, or even change settings like screen brightness or sound output.

The Biggest Mistake with Focus Modes

Most people don’t use them. Maybe they seem complicated, or maybe they feel like too much work to set up. But here’s the thing: you don’t have to get them perfect from the start. Start with a simple one (like a Work or Personal mode), see how it feels, and tweak it over time.

If you’re not using Focus Modes yet, give them a shot. I think you’ll be surprised at how much more in control you feel. Also, we recently did an episode of Mac Power Users on Focus Modes where I explain my Focus Mode strategies in more detail.

I’m trying to get my arms around how many folks are using Focus Modes. If you have time, could you please fill out this poll on the topic? If you’ve got more thoughts on Focus Modes, the poll also includes an optional open question. I’d love to hear what you think.

MacWhisper 12: Now with Automatic Speaker Recognition

Jordi Bruin’s MacWhisper continues to deliver the goods. Just released is version 12, which adds automatic speaker recognition, making an already great transcription tool even better.

I’ve been using MacWhisper on the back end for a lot of the content I publish here at MacSparky, and I’ve been very happy with the results.

In a lot of ways, Artificial Intelligence is a mixed bag, but when it comes to voice-to-text transcription, it’s a pure win. The ability to quickly and accurately convert spoken words into text saves me time and improves my workflow.

Rogue Amoeba: Fine Audio Software for Your Mac (Sponsor)

This week, my friends at Rogue Amoeba are back to sponsor MacSparky. They make an incredible lineup of audio tools for your Mac, useful for everyone from home hobbyists to professional podcasters and studio technicians.

Thanks to the work the team did in tandem with Apple (see the epic backstory!), you can now get started with Rogue Amoeba’s tools in mere seconds. That includes their flagship audio recording utility Audio Hijack, which was recently updated to version 4.5 with enhancements to the secure and local Transcribe block. (This is the app that I use to record everything from the Mac Power Users on down.)

They also make Loopback, which lets you do things like combine microphone audio with the audio playing from apps running right on your Mac, to create a virtual audio device. It’s a perfect way to spice up calls on Zoom or FaceTime, or make your screen recordings even more immersive. And their top-notch soundboard app Farrago can manage backing tracks and sound effects while you’re recording your podcast.

SoundSource is the premier audio control app for your Mac. With it, you can control the volume of each individual app, apply audio effects (including Audio Unit plugins), and redirect audio to the different output devices available on your system. All this power comes in an app that lives right in your menu bar for instant access. In my opinion, no Mac is fully set up until SoundSource is installed.

You can test out Rogue Amoeba’s software for free with their fully-functional trials, then make a one-time purchase online. Best of all, as a MacSparky reader, you can save 20% through the end of March. Just use coupon code SPARKYXX at their store.

My thanks to Rogue Amoeba for supporting MacSparky, and all my audio needs.

FOD Conversation – Vision Pro Check-In (Podcast)

On March 8, 2025, I was joined by a few Labs members to talk about their experiences after owning the Vision Pro for a year. It was an engaging discussion with a lot of great information. If you’re curious about the Vision Pro and what actual users think of it, this is a great bit of content.
This is a post for MacSparky Labs members only. Care to join? If you’re already a member, you can log in here.

MacSparky Book Report: Nexus by Yuval Noah Harari

Because I’m spending so much time with artificial intelligence lately, and because it seems to be such an interesting topic for podcast listeners and MacSparky Labs members alike, I decided to read Nexus: A Brief History of Information Networks by Yuval Noah Harari. It’s an intriguing book, not because Harari is a computer scientist or a technology enthusiast, but because he’s a historian. His focus is on how societal change is so often triggered by shifts in information networks.

Harari walks through key moments in history where the way we share and process information radically changed. He shares such examples as the printing press, the telegraph, and radio, and how these inventions reshaped societies in ways that were often unexpected. One example that stood out to me was how, contrary to what we might assume, the printing press initially fueled witch hunts more than it did the scientific revolution. The broader theme of the book is that when information systems change, societies change. Predicting the exact nature of that change, however, is nearly impossible.

We are now heading into another seismic shift, but this one feels different. For the first time, the technology itself is intelligent enough to operate independently. A printing press, for example, only printed the words that humans put into it. Artificial intelligence, on the other hand, can generate new ideas, new writings, and even new perspectives. This creates enormous opportunities, but also significant risks.

One of the key takeaways from Nexus is that every major transition in information networks has led to unintended consequences, some good, some bad. The book left me with mixed feelings about AI. My early experiments with it have shown me how much it can improve productivity and human connections when used correctly. But unlike nuclear research, AI isn’t confined to a few high-security labs; it can be developed anywhere. That makes it incredibly difficult to regulate on a global scale, and history suggests we need to be wary of unforeseen consequences.

The book doesn’t offer answers, but it does prompt big questions. If you’re interested in understanding how our current AI moment fits into the larger arc of history, Nexus is well worth your time.

Apple’s AI Woes

For years now I’ve been writing and talking about the trouble with Siri. This problem became even more acute for Apple with the rise of large language models (LLMs) and the world’s collective realization of just how useful a smart artificial intelligence can be.

Last June, it seemed as if Apple had finally found religion about making Siri better. At WWDC 2024, they outlined an “Apple Intelligence” strategy that made a lot of sense. While I never expected Apple to build something on par with one of the frontier models, like ChatGPT, I continue to think they don’t need to. If Apple’s AI could remain private and access all my data, that alone would make it more useful than most artificial intelligence. Moreover, as the platform owner, a smart Siri could act as an AI traffic cop, sending more complex requests to the appropriate outside models.

So I think Apple has the right vision, but I’m starting to question their ability to execute on it. Apple has yet to release even a beta of the iOS 18 version with, as one Apple employee explained to me, the “Siri Brain Transplant.” Indeed, Apple recently announced that the advanced Siri features won’t ship in iOS 18 after all. So the brain transplant has been postponed.

Late last year, there was a rumor that Apple is working separately on an LLM-Siri for iOS 19 that will really show how good Siri can be. The fact that there is already a rumor of a new thing when we don’t yet have the improved old thing doesn’t inspire confidence.

It gets worse, though. Mark Gurman, a reliable source, now reports the new LLM Siri is also behind and its conversational features may not ship to consumers until 2027. Ugh. If true, Apple’s failure to deliver on Siri is an epic one, on the level of the Apple Maps and MobileMe launches.

The current LLM leaders are evolving weekly. Can you imagine how good they are going to be by 2027? I honestly can’t.

If these rumors are true, Apple is in trouble. It’s not the 1995 will-they-go-out-of-business kind of trouble, but it is trouble nonetheless. M.G. Siegler suggests that if Apple truly is this far behind, they should just default to ChatGPT until they can get their act together. That would be incredibly embarrassing for Apple, but this whole situation is exactly that.

It looks like Apple’s AI initiative has a long way to go. Back in the day when the MobileMe launch failed so miserably, people joked that Steve Jobs was walking through the hallways at Cupertino with a flamethrower strapped to his back, asking everyone he met, “Do you work on MobileMe?” When it comes to AI, I think Apple is approaching a flamethrower moment. John Gruber agrees.