The M5 vs. M6 MacBook Pro Buyer’s Dilemma

This post brought even more questions about the M5 vs. M6 MacBook Pro. The timing is weird this year. The M5 MacBook Pro arrives soon (likely March 4), while the M6 arrives later in the year with a completely new design and an OLED display. You could buy in spring or wait until fall.

The M5 is a massive jump in AI performance: 4x over the M4. Apple engineered the GPU specifically for running language models locally. The M6 is a design refresh. New chassis. OLED display. Probably some GPU improvements too, but nothing as dramatic as the M5 jump.

This is a use case decision. If you’re running local AI models or doing development work that benefits from GPU acceleration, buy the M5. The performance gain is real and the wait costs you months of productivity.

If you’re doing video editing, color grading, or anything where display quality matters to your actual work, the M6 OLED is worth waiting for. If you’re mostly doing text-based work, this choice barely matters. The M5 is more than sufficient.

Now the money angle. Apple usually locks in pricing with long-term memory contracts, which shields current machines, but new contracts get signed at new prices, and RAM is getting expensive. An M5 with 32GB might be $2,400 now; the same RAM in an M6 could be $2,600 in September.

The M6 gets you a prettier design, a better display, and a faster GPU. But you might pay more for the same memory.

My recommendation: Buy the M5 if you need the performance now. You’ll regret waiting more than you’ll regret missing the design refresh.

Wait for the M6 only if display quality or industrial design are actually important to your work. Not aspirationally important. Actually important.

Regardless of whether you go M5 or M6, you’re going to get a helluva Mac.

Why the M5 Matters If You Run AI Locally

Apple says the M5 delivers 4x the peak GPU compute for AI compared to the M4. Most tech sites reported the number and moved on. I don’t think people have fully grasped what this means for running AI locally.

The gain isn’t just faster cores. Apple put a Neural Accelerator in every GPU core. I’ve been running local models through MLX on my M2 Mac for a while now, and it only barely keeps up. The M5 turns local AI from “it works, I guess” into something that feels responsive.

There’s a timing angle here too. NAND and memory prices jumped 55-60% in Q1 2026, and the industry expects them to keep climbing. If you want a Mac with serious memory for local AI work, buying now might save you real money over waiting for the M6 or M7. Future machines could carry a much higher price tag for the same RAM.

I look at these numbers, and then I look at my M2 Mac Studio, and I’m raising an eyebrow. The M2 was great when I bought it. But 4X faster prompt processing with purpose-built AI hardware? That’s the kind of gap that makes you start browsing the Apple Store at midnight.

If you have zero interest in local AI, the M5 is just another chip upgrade. But if you’re running models, experimenting with MLX, or thinking about it, this is the first Mac where Apple clearly built the GPU around AI. And with memory prices headed where they’re headed, the window to get in at current pricing might not stay open.
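If you’re trying to decide how much unified memory to pay for, a rough rule of thumb helps: model weights take about params × bits ÷ 8 bytes, plus headroom for the KV cache and activations. A minimal sketch of that arithmetic (the 20% overhead figure is my own assumption for illustration, not an Apple or MLX number):

```python
# Rough estimate of unified memory needed to run a local LLM.
# The overhead factor is an illustrative assumption, not a spec.

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Weights footprint plus ~20% headroom for KV cache and activations."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit ≈ 1 GB
    return round(weight_gb * overhead, 1)

for params, bits in [(7, 4), (13, 4), (70, 4), (7, 16)]:
    print(f"{params}B @ {bits}-bit ≈ {model_memory_gb(params, bits)} GB")
```

By this rough math a 4-bit 7B model wants around 4 GB, while a 4-bit 70B model wants around 42 GB, which is exactly why the memory-pricing question above matters so much for local AI buyers.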

The M5 Pro and Max Are Going to Be Monsters for Local AI

Back in November, Apple quietly published a research article about the Neural Accelerators in the M5 chip. The numbers are wild.

The base M5 MacBook Pro already delivers up to 4x faster time-to-first-token compared to the M4 when running large language models through MLX. Image generation with FLUX is 3.8x faster. This is on the base chip with 24GB of unified memory.
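Time-to-first-token is the metric Apple is quoting, and it’s easy to measure yourself when comparing machines. A minimal sketch of the timing harness, with a stand-in generator in place of a real model (everything in `fake_model_stream` is a placeholder; a real setup would stream tokens from MLX):

```python
import time
from typing import Iterator

def fake_model_stream(prompt: str) -> Iterator[str]:
    """Placeholder for a streaming LLM; timings are simulated."""
    time.sleep(0.05)  # simulated prompt processing (prefill)
    for token in ["Hello", ",", " world"]:
        time.sleep(0.01)  # simulated per-token decode
        yield token

def time_to_first_token(stream: Iterator[str]) -> float:
    """Seconds from request start until the first token arrives."""
    start = time.perf_counter()
    next(stream)  # block until the first token is produced
    return time.perf_counter() - start

ttft = time_to_first_token(fake_model_stream("Why is the sky blue?"))
print(f"TTFT: {ttft * 1000:.0f} ms")
```

Prefill is the GPU-bound step the Neural Accelerators speed up, which is why time-to-first-token is where the 4x claim shows up rather than in per-token decode speed.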

Think about what happens when the M5 Pro and M5 Max show up with more memory bandwidth and more Neural Accelerators. And eventually the M5 Ultra in the Mac Studio.

Right now, people serious about running local AI often look at expensive PC builds with dedicated GPUs. The M5 generation might change that math entirely. A well-configured M5 Max MacBook Pro or Mac Studio could become the machine for people who want to run models locally, privately, on their own hardware.

Apple’s unified memory architecture was always a theoretical advantage for AI workloads. With the M5’s Neural Accelerators, that advantage is becoming very real. If you’re interested in local AI and you’re on an M3 or earlier, I’d wait for these announcements before buying anything.