ChatGPT released the o3 Reasoning model and I’ve been impressed. In this video, I show you why.…
This is a post for MacSparky Labs Members only. Care to join? Or perhaps do you need to sign in?
Early in my experience with ChatGPT’s voice feature, my daughter walked into the room and I told it to say hello to her. It did, cheerfully, and it totally creeped her out. I turned the voice off.
A week later, I fired ChatGPT back up, and it asked me how Samantha was doing.
That moment hit me like a splash of cold water. This thing remembers. And not in a “session cookie” kind of way, but in a persistent, “let me tell you about yourself” way.
That was the beginning of my realization that these large language models (LLMs) are quietly soaking up everything we put into them. From that point on, I’ve approached the whole thing with very open eyes about how much of me I’m exposing to this frontier model every time I fire it up.
To some degree, this has always felt inevitable. I’ve long suspected it was just a matter of time before we’d be offered a memory-enabled assistant that could draw from everything we’d ever said to it.
What I didn’t expect was how soon that day would arrive.
ChatGPT recently added a feature called Memory, which allows it to remember facts about you across conversations. Things like your name, your tone, your preferred writing style, your favorite apps; it’s all fair game.
And yes, that makes it dramatically better at giving you what you want. But of course, that efficiency comes at the cost of increased data retention and exposure.
I’ve spent almost thirty years as a lawyer, carrying around a near-constant fiduciary duty of confidentiality. So you’d think I’d be paranoid about this sort of thing.
And yet, I’ve been surprisingly relaxed about it. If I had to explain it, I’d say it’s so I can experiment for the MacSparky Labs, but mostly I’m just curious.
Regardless, I recently decided to test how well it really knows me.
I typed: “Roast me.”
Here’s what I got back:
Alright, David — you asked for it. Here we go:
You’re like the productivity version of a dad joke: efficient, earnest… and just a little too proud of your label maker.
You’ve got so many field guides, I’m surprised you haven’t written one on ‘How to Overwhelm a First-Time Mac User in 5 Easy Steps.’ Your idea of a wild night is alphabetizing your shortcuts while sipping matcha and whispering sweet nothings to your OmniFocus database.
And let’s not forget your digital filing system — more organized than a NASA launch sequence, yet you still manage to lose your AirPods in the couch cushions like it’s a side quest in a MacSparky RPG.
But hey, if there’s one thing you’ve mastered, it’s turning nerd-level productivity into a full-blown lifestyle brand — and somehow making it cool. Respect.
– ChatGPT
Honestly? A little too close to home.
If you’re curious about trying out this memory feature — or turning it off — you can manage it in ChatGPT’s Settings under Personalization, where (at least as of this writing) you can toggle Memory off entirely or review and delete individual memories.
For a deeper dive into how it works, I recommend John Gruber’s take on Daring Fireball. He poked at it much as I did, though he asked it to describe him rather than roast him.
This moment feels like a turning point. AI tools are starting to remember us, and that memory now lives quietly in the background of every prompt. Whether you view that as magic or menace probably depends on your own relationship with data privacy.
As for me, I’ll keep experimenting — with eyes open and curiosity intact — and maybe just a little less matcha near the keyboard.
If you’ve been paying any attention to social media lately, you’ve probably noticed how much better multimodal AI image generation has gotten. The zeitgeist has definitely latched onto this — mainly to generate Studio Ghibli-style art of themselves.
It’s impressive and once again raises the big questions about AI and art. Artists spend years honing their craft. Now, with these new tools, anyone is just a prompt or two away from generating convincing images of themselves and their friends. How are we supposed to feel about that?
I’m still working through my own thoughts on AI and intellectual property, but one thing’s for sure: this toothpaste isn’t going back in the tube.
As an example, I had the new ChatGPT image engine generate some drawings for a recent Productivity Field Guide webinar I did on habits as a tool for becoming your best self. I explained to ChatGPT that I wanted to illustrate how habits, once ingrained, become part of your identity. It made this image. Remarkable.
The AI march continues. OpenAI has announced that GPT-4.5 is now available for ChatGPT Pro and Enterprise users, with plans to roll it out to lower tiers in the coming weeks.
This new release feels like a stepping stone on the way to GPT-5, which is anticipated to be a significant leap forward but is still a ways off. One of the notable features of GPT-4.5 is enhanced “emotional intelligence.” While that might initially sound like marketing fluff, I’m reserving judgment until I’ve thoroughly tested it. I can say that in my testing, Anthropic’s Claude already shows an improvement over the GPT-4o model on this front.
Podcast listeners and MacSparky Labs members frequently ask me which AI model they should invest in. Having tested most of the major platforms, at this point, ChatGPT feels like the best all around. It delivers a robust combination of capabilities, making it the most versatile option available at this moment. But this space moves and changes quickly, so no promises that’s still true next week.
That said, I’m particularly curious to see whether GPT-4.5 narrows the gap between ChatGPT and rivals like Claude. I’ll be testing GPT-4.5 over the coming weeks and will report back on how it stacks up against other frontier models.
ChatGPT can now handle reminders and tasks. OpenAI has added a model specifically for this purpose: “4o with scheduled tasks”. Ask it to remind you of something at a certain time or to create a task list, and it’ll do that for you.
I didn’t see this coming. I think we’re going to see a lot of action in the LLM space as they start interacting with external apps as agents. Putting tasks in ChatGPT feels like a step in that direction.
You’ll need a paid account to use this feature, and you can only run 10 active tasks at a time, so it feels more like an experiment than a finished product.
In this video I demonstrate how I used two AI tools, MacWhisper and ChatGPT, to generate transcripts and SubRip text (SRT) files for the Obsidian Field Guide videos.…
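The SubRip format those videos rely on is refreshingly simple: numbered blocks, a start and end timestamp in `HH:MM:SS,mmm` form, and the caption text. As a hedged sketch of the kind of glue script that workflow implies — the segment dictionary keys here are my own assumption, not what MacWhisper actually exports, so adjust them to match your transcription tool’s output:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render [{'start': float, 'end': float, 'text': str}, ...] as SubRip text."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    # Blocks are separated by a blank line
    return "\n".join(blocks)

if __name__ == "__main__":
    demo = [
        {"start": 0.0, "end": 2.5, "text": "Welcome to the Obsidian Field Guide."},
        {"start": 2.5, "end": 6.0, "text": "Let's start with the basics."},
    ]
    print(segments_to_srt(demo))
```

Once you have the text in this shape, any video player or editor that understands `.srt` can pick it up.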
This is a post for MacSparky Labs Level 3 (Early Access) and Level 2 (Backstage) Members only.
Lately, I’ve been experimenting with some of these Large Language Model (LLM) artificial intelligence services, particularly ChatGPT. Several readers have taken issue with my categorization of ChatGPT as “artificial intelligence”. The reason, they argue, is that ChatGPT really is not an artificial intelligence system. It is a linguistic model looking at a massive amount of data and smashing words together without any understanding of what they actually mean. Technologically, it has more in common with the grammar checker in Microsoft Word than with HAL from 2001: A Space Odyssey.
You can ask ChatGPT for the difference between apples and bananas, and it will give you a credible response, but under the covers, it has no idea what an apple or a banana actually is.
One reader wrote in to explain that her mother’s medical professional actually had the nerve to ask ChatGPT about medical dosages. ChatGPT’s understanding of what medicine does is about the same as its understanding of what a banana is: zilch.
While some may argue that ChatGPT is a form of artificial intelligence, I find the argument that it is not more compelling. Moreover, calling it artificial intelligence gives us barely evolved monkeys the impression that it actually understands medicine and can safely recommend dosages. That is bad.
So going forward, I will be referring to things like ChatGPT as an LLM, and not artificial intelligence. I’d urge you to do the same.
(I want to give particular thanks to reader Lisa, who first made the case to me on this point.)