Cotypist: AI Autocompletion Everywhere on Your Mac

There are a lot of angles to AI and productivity emerging right now. One I’ve come to appreciate is AI-based smarter autocomplete. My tool of choice for this is Cotypist. It’s made by a trusted Mac developer, it’s fast, and it takes privacy seriously.

Unlike many AI writing tools that require you to work within their specific interface, Cotypist works in virtually any text field across your Mac. Whether you’re drafting an email, writing in your favorite text editor, or filling out a form, Cotypist is there to help speed up your writing.

The app’s latest version (0.7.2) brings notable improvements to both performance and completion quality. It even respects your Mac’s Smart Quotes preferences – a small but meaningful touch that shows attention to detail.

With Cotypist turned on, inline completions appear in real time as you type. Then you’ve got a few options:

  • You could just ignore the suggestion and keep typing like you’ve always done.
  • To accept the full multi-word suggestion, press a user-defined key. (I use the backtick, just above the Tab key on a US keyboard.)
  • To accept just the next suggested word, press another user-defined key. (I use Tab.)
  • To dismiss the suggestion entirely, press Escape. (This is handy when filling out online forms, for instance.)
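To make the interaction concrete, here’s a rough sketch of how a completion engine might dispatch those keys against a pending suggestion. This is my own illustration with hypothetical names, not Cotypist’s actual code:

```python
# Conceptual sketch (not Cotypist's actual implementation): map a
# keypress to an action on the text typed so far plus a pending
# multi-word suggestion. Returns (new_text, remaining_suggestion).
def handle_key(key, typed, suggestion):
    if key == "`":                      # accept the full suggestion
        return typed + suggestion, ""
    if key == "\t":                     # accept just the next word
        word, _, rest = suggestion.partition(" ")
        return typed + word + (" " if rest else ""), rest
    if key == "esc":                    # dismiss the suggestion
        return typed, ""
    # Any other key: just keep typing; a fresh suggestion would be
    # computed for the new text.
    return typed + key, ""
```

So with “Hello ” typed and “world again” pending, pressing Tab accepts only “world” and leaves “again” as the remaining suggestion, while the backtick accepts the whole thing.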

At first, the constant suggestions felt distracting, but now that I’ve adapted, I can’t imagine going back.

Cotypist generates all completions locally on your Mac. No cloud services, no data sharing – just your Mac’s processing power working to speed up your writing.

Like I said, Cotypist represents an interesting take on AI and is worth checking out.

The Gen3 AI Revolution

I’ve been spending a lot of time with Claude 3.7 Sonnet lately, and I wanted to share some thoughts on the new “Gen3” AI models. Claude 3.7 is trained with a massive leap in computing power compared to its predecessors.

What’s Different About These New Models?

These new AI models aren’t just incrementally better; they represent a significant jump in capabilities.

There are two reasons for this:

  1. Training Scale: These models use 10x more computing power in training than GPT-4 did.
  2. Reasoning Capabilities: These models can spend more time “thinking” through complex problems, similar to giving a smart person extra time to solve a puzzle.
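That second point can be illustrated with a toy best-of-N sketch. This is just my analogy, not how any particular model actually implements reasoning: spending more compute means making more attempts at a problem and keeping the best-scoring one.

```python
import random

def solve_once(target, rng):
    """Stand-in for a single model attempt: a guess plus a
    self-assessed score (closer to the target scores higher)."""
    guess = rng.uniform(0, 10)
    return guess, -abs(guess - target)

def solve_with_more_thinking(target, attempts, seed=0):
    """More 'thinking time' = more attempts; keep the best answer."""
    rng = random.Random(seed)
    candidates = [solve_once(target, rng) for _ in range(attempts)]
    return max(candidates, key=lambda c: c[1])[0]
```

With 200 attempts instead of 1, the returned answer can only get closer to the target, which is the intuition behind letting a model “think longer” on hard problems.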

My Experience with Claude 3.7 Sonnet

I’ve been using Claude 3.7 regularly. Most folks use programming tests to benchmark the AI models. I don’t. Instead, I’ve found it to be an exceptional thought partner. One of my favorite workflows is to give Claude something I’ve written and ask it to pose thoughtful questions about the content. Those questions often spark new ideas or help me identify gaps in my thinking.

For those of you who work alone without colleagues to bounce ideas off of, these more capable AI models can provide surprisingly useful feedback. It’s like having a smart colleague who’s always available to help you think through problems. As AI becomes capable of higher-order thinking tasks, there is a lot of room for us to be creative in how we put them to work.

The Human in the Room

You still need to be the human in the room. As smart as these models are getting, you’re making a mistake if you believe they’re actually thinking. They remain tools — increasingly powerful tools — but tools nonetheless. Your judgment, creativity, and ethical sensibilities remain irreplaceable. The most powerful approach is using these AI partners to amplify your thinking, not replace it.

If you’re curious about these Gen3 models, my recommendation is simple: experiment. Ask Claude to help you brainstorm solutions to a problem you’re facing. Have it review something you’ve written and suggest improvements. Use it as a sounding board when you’re trying to think through a complex issue.

You might be surprised at how helpful these conversations can be, even if you’re not using the flashy coding capabilities that get most of the attention.

I’m cautiously optimistic about where this is heading. These tools are becoming genuine intellectual partners that can help us think better, create more, and solve harder problems. Used wisely, they have the potential to dramatically enhance what we can accomplish.

Google and AI Search

The Information is reporting that Google plans to add “AI Mode” to its search. This is not a surprise if you’ve spent any time with Perplexity or ChatGPT search; they’re both leaps and bounds ahead of traditional Google Search. Moreover, Google’s Gemini is pretty good, and I expect that it could be a real contender if they put it to work with their search engine.

However, the point is that it would be a contender and not the clear market leader. I remember the old days when we had a lot of search engines, and then Google wiped them all out overnight. I don’t think that will be the case with this next arms race of search engines. Google will be one of several good engines from which we’ll be able to choose. Hopefully, this leads to lots of innovation and the end of the search monopoly.

About That Yule Playlist Artwork

Last Friday, I published my annual post referring to my Yule playlist. Attached to it was a cute picture of Santa Claus playing the saxophone. That image spurred a few questions about whether I used AI for it, and the answer is yes. For two years, I’ve been doing this post with an AI image of Santa playing the sax. Last year, the best I could do was a black-and-white illustration that was acceptable, but not cute.

This year, however, I upped my game. I have a one-month subscription to Magnific for a video I made for the MacSparky Labs. This is, by many accounts, the best AI image generator available. Although my testing and experience with it have been mixed, I must admit that it delivered (and then some) when it came to making a cute image of Santa playing the saxophone. I also note that it looks like Santa has a well-stocked bar in the background. It is remarkable how far this technology has come in just a year.

As an aside, I also gave it another prompt to make a cute image of Santa playing a Yanagisawa tenor sax (I play a Yani). It made a cute image, but it didn’t get the look of a Yanagisawa horn at all. I ended up using the above image instead because it’s so artistic (and shows Santa’s funny booze collection).

Apple’s Image Playground: Safety at the Cost of Utility?

As I’ve spent considerable time with Apple’s Image Playground in the recent iOS 18.2 beta, I’m left with more questions than answers about Apple’s approach to AI image generation. The most striking aspect is how deliberately unrealistic the output appears — every image unmistakably reads as AI-generated, which seems to be exactly what Apple intended.

The guardrails are everywhere. Apple has implemented strict boundaries around generating images of real people, and interestingly, even their own intellectual property is off-limits. When I attempted to generate an image of a Mac mini, the system politely declined.

Drawing a Mac mini is a no-go for Image Playground

This protective stance extends beyond the obvious restrictions: Try anything remotely offensive or controversial, and Image Playground simply won’t engage.

Apple’s cautious approach makes sense. Apple’s customers expect their products to be safe. Moreover, Apple is not aiming to revolutionize AI image generation; rather, they’re working to provide a safe, controlled creative tool for their users. These limitations, however, can significantly impact practical applications. My simple request to generate an image of a friend holding a Mac mini (a seemingly innocent use case) was rejected outright.

I hope Apple is aware of this tipping point and reconsidering as Image Playground heads toward public launch. At least let it draw your own products, Apple.

Gemini’s iPhone Launch Shows Google’s AI Ambitions

Gemini, Google’s flagship AI model, has landed on the iPhone, marking another significant move in the increasingly competitive AI assistant landscape. The app brings the full suite of Gemini’s capabilities to iOS users, including conversational AI similar to ChatGPT, image generation through Imagen 2, and deep integration with Google’s ecosystem of apps and services.

The mobile release is particularly noteworthy given the current tech landscape, where platform exclusivity has become more common. Google’s choice to develop for iOS highlights its determination to compete in the AI space. Google appears keen to establish Gemini as a serious contender against established players like OpenAI’s ChatGPT and Anthropic’s Claude.

The app is free to use and includes access to Gemini Pro and, for Google One AI Premium subscribers, Gemini Advanced.

This finally gives me the kick I need to spend more time evaluating Gemini.

Timing Gets AI Support

Image: timingapp.com

There are a lot of great time-tracking applications out there, but one of the absolute best for Mac users is Timing. That’s because it is a native app on your Mac with a bunch of built-in automation. You don’t have to worry about pushing buttons to reset timers. The app pays attention to what you’re doing and gives you a report later.

Along those lines, Timing received an update recently that includes AI-generated summaries of your day. It gives you a concise view of what you did throughout the day and is entirely automated. I just started using the feature, so I need to spend a bit more time before I can recommend it. However, I thought the mere inclusion of the feature was noteworthy. If you’re interested in time tracking and haven’t looked at Timing lately, you should.

Siri Concerns

Last week, Ryan Christoffel over at 9to5Mac quoted the latest Mark Gurman report about Apple developing an additional AI personality. Gurman reports that Apple is working on “[…] another human-like interface based on generative AI.” Like Ryan, I am confused by this.

The official Siri icon currently in use in 2024

For too long, Apple let Siri linger. It’s been the butt of jokes in tech circles for years. We’re told that this year will be different and Siri will truly get the brain transplant it deserves. But if so, why is Apple working on an entirely different human-like interface? Does this signal that the Siri update isn’t all it should be?

It’s too early for any of us on the outside to tell. There are some Siri updates in iOS 18.1, but they are largely cosmetic. We’re still waiting for the other shoe to drop on Siri updates in later betas.

However, the idea that Apple is already working on the next thing before they fix the current shipping thing does make me a little nervous. I realize that at this point, we’re all just reading tea leaves, and I could be completely off the mark here, but I sincerely hope that the updates to Siri this year get all of the effort that Apple can muster.

Perplexity Pages

My experiments with Perplexity continue. This alternate search app takes a different approach to getting answers from the Internet. Rather than giving you a list of links to read, it reads the Internet and tries to give you an answer with footnotes going back to the links it reads. I think it’s a good idea, and Perplexity was early to this game. Google is now following suit to less effect, but I’m sure they’ll continue to work on it.

I recently got an email from Perplexity about a new feature called Perplexity Pages, where you can give it a prompt, and it will build a web page about a subject of interest to you. Just as an experiment, I had it create a page on woodworking hand planes. I fed it a few headings, and then it generated this page. The page uses the Perplexity method of giving you information with footnotes to the websites it’s reading. I fed it a few additional topics, and it generated more content. Then, I pressed “publish” with no further edits. The whole experiment took me five minutes to create.

The speed at which these web pages can be created is both impressive and, in a way, unsettling. If we can generate web pages this quickly, it’s only a matter of time before we face significant challenges in distinguishing reliable information from the vast sea of content on the Internet. In any case, I invite you to explore my five-minute hand plane website.

Private Cloud Compute

I watched the Apple WWDC 2024 keynote again, and one of the sections that went by pretty quickly was the reference to Private Cloud Compute, or PCC. For some of Apple’s AI features, they will need to send your data to the cloud. The explanation wasn’t clear about what factors determine when that becomes necessary; hopefully, they’ll disclose more in the future. Regardless, Apple has built its own server farm using Apple silicon to do that processing. According to Craig Federighi, they will use the data, send back a response, and then cryptographically destroy the data after processing.

Theoretically, Apple will never be able to know what you did or asked for. This sounds like a tremendous amount of work, and I’m unaware of any other AI provider doing it. It’s also exactly the kind of thing I would like to see Apple do. The entire discussion of PCC was rather short in the keynote, but I expect Apple will disclose more as we get closer to seeing the Apple Intelligence betas.
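As a conceptual sketch only (Apple hasn’t published PCC’s internals here, and all of these names are hypothetical), the “process, respond, then destroy” lifecycle resembles a per-request key that is discarded as soon as the response is produced:

```python
import hashlib
import secrets

class EphemeralSession:
    """Conceptual sketch, NOT Apple's actual protocol: a one-time
    session key that exists only for the duration of one request."""

    def __init__(self):
        # Fresh random key created for this request alone.
        self._key = secrets.token_bytes(32)

    def process(self, request: bytes) -> bytes:
        # "Handle" the request while the key exists (stand-in work).
        digest = hashlib.sha256(self._key + request).hexdigest()
        response = ("handled:" + digest[:8]).encode()
        # Then discard the key. Once it is gone, nothing tied to this
        # session can be recovered or reprocessed.
        self._key = None
        return response
```

After `process` returns, the session is useless: calling it again fails because the key no longer exists, which is the spirit of the guarantee Federighi described.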