Letterpress in a Digital World (Sponsor: Hoban Press)

This post is brought to you by Hoban Press, makers of beautiful letterpress business cards and stationery.

I’ve been handing out Hoban Cards for years. Every single time, the person on the receiving end pauses. They rub their thumb across the card. They comment on it. A business card shouldn’t be a conversation starter, but a Hoban Card is.

That’s because these aren’t printed on some office inkjet. Evan Calkins and his team in the Pacific Northwest hand-feed every card into antique cast-iron letterpresses, some over 100 years old. The cards are printed on thick cotton stock: Crane’s Lettra, 600 gsm. You can feel the impression of each letter pressed into the paper. It’s the opposite of everything digital, and that’s exactly why it works.

Hoban Cards has over 50 templates to choose from, starting at $65. You pick a design, customize your details, and they print a short run on real letterpress equipment. If you want something fully custom with your own branding and layout, they do that too. They also print stationery, thank-you notes, wedding invitations, coasters, and even clothing tags.

Since the last time Hoban sponsored the blog, I’ve discovered a new use for their stationery. I have a 1950s-era typewriter, and I’ve started using it to hammer out notes to friends on Hoban stationery cards. The combination of typewritten letters pressed into that thick, embossed cotton stock looks incredible. There’s a physicality to it that you just can’t get any other way. People tell me they keep them.

I work in tech all day. I love my devices. But there’s a reason I keep ordering from Hoban. Some things are better when they’re analog.

If you’ve been thinking about getting proper business cards or stationery that people actually want to hold onto, check out Hoban Press. Use the code MacSparky for $10 off your order.

The Creator Studio Bundle Has a Bundling Problem

I actually think Apple’s Creator Studio is a good deal. $130 a year for Final Cut Pro, Logic Pro, and all the extras that come with them? For a working creator, that’s real value. I’d recommend it to anyone making videos or music on Apple hardware.

But I can’t for the life of me figure out why iWork is in there.

Pages, Numbers, and Keynote are fine apps. I use Keynote regularly. But they aren’t creator tools. They’re office tools. The people who depend on Pages for their work aren’t video editors or musicians. They’re knowledge workers who need a word processor and a spreadsheet. Those are two completely different audiences with two completely different needs.

So you end up with this weird situation. The actual creators buying the bundle probably never open Pages or Numbers. It’s not in their wheelhouse. And the knowledge workers who do live in iWork? They aren’t going to pay $130 a year for a few extra AI features bolted onto apps they already use for free. That’s a tough sell.

Apple gave iWork away years ago. It comes on every Mac and iPad. The basic apps do what most people need. Asking those same people to subscribe just to get some AI upgrades to apps they already have doesn’t track. Especially when the AI features are still pretty rough around the edges.

The bundling makes the whole thing feel padded. Like Apple needed to fill out a product page. Three more app icons in the marketing material. But nobody was asking for this combination. Creators wanted Final Cut Pro and Logic Pro. Office workers wanted better iWork. Smashing them together doesn’t serve either group well.

What I think Apple should have done is simple. Keep the creator bundle focused on creator tools. Final Cut Pro, Logic Pro, and all the AI features that make those apps better. That’s a clean pitch. Easy to explain. Easy to justify.

Then, if you want to add AI features to iWork, make that its own thing. Or just give those features to everyone already using Pages, Numbers, and Keynote. Those apps are free. The AI improvements would make the entire platform more attractive. That’s a rising-tide move, not a subscription upsell.

Instead, Apple stuck two different products together and hoped nobody would notice the seams. The creator tools are worth the money. The iWork inclusion is a distraction. It dilutes what could have been a really focused, compelling subscription into something that tries to be everything and doesn’t quite land for anyone.

I keep coming back to this. $130 a year for Final Cut Pro and Logic Pro? Great deal. Sign me up. $130 a year for Final Cut Pro, Logic Pro, and office apps I already have? Now I’m doing math I shouldn’t have to do. And that’s the problem.

I Built the Perfect AI Robot. Then I Pulled the Plug.

I built the AI assistant I’ve always wanted. Then I shut it down.

For the last few weeks, I’ve been experimenting with OpenClaw, an open source project that started as Clawdbot, then became Moltbot, and now goes by OpenClaw (lawyers!). It’s essentially AI plumbing for your computer.

You install it, and suddenly you have an independent artificial intelligence agent that can work without your supervision. It can run on its own schedule, doing tasks while you sleep, responding to events as they happen, and making decisions based on rules you set up.

Think of it as the computer assistant we’ve been promised for decades, finally delivered.

I set up a Mac mini, gave it access to my course platform, email, and invoicing system, and watched it work. It was incredible. I’d wake up to text messages like “Hey Sparky, you got three customer emails overnight. I handled them and drafted replies for you. Email replies are in your drafts folder.”

The robot answered support emails while I slept. It sent invoices to sponsors. It transcribed podcast recordings. Any busy work I could do on a computer, it could do for me.

This is what I’ve spent decades teaching automation to accomplish: the computer doing the donkey work so we can focus on making great things.

But I pulled the plug.

The security problems are massive. This open source project wasn’t built with security in mind. Every expert says don’t touch it. I thought I was being smart by running it on an isolated Mac mini with custom safeguards. I created secret passphrases, limited access, tried to lock it down.

Then I woke up at 2 AM wondering if my secret passphrase was sitting in plain text in the robot’s logs. It was. The robot happily offered to show me the log file containing all my security measures.
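The check itself is trivial, which is part of what spooked me. Here’s a minimal sketch of the kind of search that turned it up, assuming the agent writes logs somewhere you can read. The directory and passphrase below are invented for illustration, not OpenClaw’s actual paths:

```shell
# Hypothetical sketch: check whether a "secret" passphrase ended up in an
# agent's plain-text logs. The log directory and passphrase are made up;
# substitute your own.
LOG_DIR=$(mktemp -d)
echo 'auth ok: passphrase="blue-heron-42"' > "$LOG_DIR/session.log"

# grep -r searches every file in the directory; -q just reports success
# (exit 0) if the secret appears anywhere.
if grep -rq "blue-heron-42" "$LOG_DIR"; then
  echo "secret found in plain text"
fi
```

If that grep comes back with a hit, your safeguard was never a secret at all.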

That’s when it hit me. I’m not a security expert. If I can find these holes, imagine what someone who actually knows what they’re doing could exploit. The fundamental problem is that AI agents need access to work. You have to open doors. But 30 years of computer security has been about keeping those doors locked.

I wiped the Mac mini. Closed the accounts. Disconnected everything.

The video above tells the whole story. But here’s what I learned:

We’re much closer to useful AI assistance than I thought. When these things are secure, they’ll change how we work.

There’s a first-mover advantage for people who explore this safely. And we’ll always need humans in the loop. These robots are impressive but gullible.

OpenClaw isn’t ready yet. Don’t install it. Especially don’t install it on your personal computer. But watch this space.

Anthropic, OpenAI, and Google are paying attention. We’ll get something like this in a secure package eventually.

For now, the dream of having a 24/7 assistant handling digital donkey work will have to wait. But hopefully not for long.

Claude’s Constitution

Anthropic published Claude’s Constitution. It’s 23,000 words defining the values and behavior of their AI. If you care about how these tools get shaped, it’s worth a read.

When I built an experimental AI assistant project (originally Clawdbot, now OpenClaw), I spent a lot of time on its “soul.md” file. That document shapes how the assistant thinks about its role and limitations. Turns out Anthropic was doing the same thing. Just at a much larger scale.

The structure is what caught my attention. There’s a clear hierarchy: Anthropic sets the foundation, operators (the developers building apps) can customize within bounds, and users have certain protected rights that can’t be overridden. They describe Claude as a “seconded employee”: dispatched by Anthropic but currently working for whoever built the app, while ultimately serving the end user.

The document separates “hardcoded” behaviors (absolute prohibitions like weapons instructions) from “softcoded” defaults that can be adjusted. This is exactly what I want to see more of. As these tools become daily companions, users should have real input into their personality and priorities.

They openly acknowledge uncertainty about whether Claude might have “some kind of consciousness or moral status.” They even included a conscientious objector clause: Claude can refuse instructions from Anthropic itself if they seem unethical. That’s wild, but I could also see it becoming a problem. How do these algorithms define ethics? Do we even know?

I get it. These are algorithms, not people. But as we talk to them more, and they talk back, the question of how they’re shaped matters. Anthropic releasing this under a Creative Commons license feels like an invitation for all of us to think harder about what we want from our AI tools.