A (Retired) Lawyer’s Take on Claude’s Legal Plugin

Anthropic recently released a legal plugin for Claude that handles contract review, NDA triage, and compliance workflows. The day it dropped, Thomson Reuters fell 16%, LegalZoom crashed 20%, and Wolters Kluwer lost 13%. Wall Street noticed. As a guy who spent 30 years practicing law, so did I.

I’ve thought for a long time that transactional law is the area most likely to get disrupted by AI. Contracts follow patterns. They use known language.

The donkey work of reviewing a standard NDA or employment agreement is exactly the kind of thing Claude and other LLMs are good at. Feed it a contract, ask it to flag the problems. It does a surprisingly decent job.

I know there are things a wily attorney picks up that AI just isn’t sophisticated enough to catch. The weird clause buried on page 12 that changes the entire deal. The missing indemnification language that only matters if things go sideways. The stuff you learn to spot after you’ve been burned by it once. AI doesn’t have scar tissue. Lawyers do.

But the routine stuff? Absolutely. Let AI handle first-pass review. Let it draft the boilerplate. Let it compare versions and catch what changed.

That’s real, useful work that used to cost clients hundreds of dollars an hour. AI doing it faster and cheaper is a good thing.

The danger is when people skip the lawyer entirely.

I can already see the lawsuits forming. Someone uses an AI tool to draft a partnership agreement. It looks professional. It reads like a real contract. They sign it.

Six months later they discover the AI missed something critical, or included language that means something different than they thought. Now they’re in trouble.

And here’s the part that keeps me up at night. If your attorney makes that mistake, you have recourse. Legal malpractice exists for a reason. There’s insurance. There’s accountability.

But if your AI-drafted contract has a fatal flaw, where do you go for relief? Who do you sue? The chatbot? Good luck with that.

We’re heading into a period where people are going to trust AI contracts the way they trust Google searches. Confidently and without much thought.

Some of those people are going to get hurt. Not because the technology is bad, but because they treated it like a lawyer when it’s really just a very fast research assistant.

Use AI for contract review. I do. But treat it like a first draft, not a final opinion. The donkey work is AI’s job now. The thinking is still yours (and your attorney’s).

Claude’s Constitution

Anthropic published Claude’s Constitution. It’s 23,000 words defining the values and behavior of their AI. If you care about how these tools get shaped, it’s worth a read.

When I built an experimental AI assistant project (originally Clawdbot, now OpenClaw), I spent a lot of time on its “soul.md” file. That document shapes how the assistant thinks about its role and limitations. Turns out Anthropic was doing the same thing. Just at a much larger scale.

The structure is what caught my attention. There’s a clear hierarchy: Anthropic sets the foundation, operators (the developers building apps) can customize within bounds, and users have certain protected rights that can’t be overridden. They describe Claude as a “seconded employee.” Dispatched by Anthropic but currently working for whoever built the app, while ultimately serving the end user.

The document separates “hardcoded” behaviors (absolute prohibitions like weapons instructions) from “softcoded” defaults that can be adjusted. This is exactly what I want to see more of. As these tools become daily companions, users should have real input into their personality and priorities.
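That hierarchy is easy to picture as a layered-settings scheme. Here is a hypothetical sketch of the precedence the document describes — softcoded defaults adjustable first by the operator, then by the user, with hardcoded rules that nothing can override. All of the names and keys below are my own illustration, not anything from Anthropic's actual implementation.

```python
# Hardcoded behaviors: absolute prohibitions that no layer may change.
# The key names here are invented for illustration.
HARDCODED = {"weapons_instructions": "refuse"}


def effective_behavior(anthropic_defaults, operator_overrides, user_prefs):
    """Merge the three layers, lowest priority first, then pin the hardcoded rules.

    Anthropic sets the foundation, operators customize within bounds,
    users adjust softcoded defaults -- and hardcoded rules always win.
    """
    merged = dict(anthropic_defaults)   # foundation set by Anthropic
    merged.update(operator_overrides)   # app developers customize
    merged.update(user_prefs)           # end users adjust softcoded defaults
    merged.update(HARDCODED)            # absolute prohibitions reassert themselves
    return merged
```

For example, an operator can change the default tone, but a layer that tries to flip `weapons_instructions` to anything else gets silently overruled by the hardcoded entry.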

They openly acknowledge uncertainty about whether Claude might have “some kind of consciousness or moral status.” They even included a conscientious objector clause: Claude can refuse instructions from Anthropic itself if they seem unethical. That’s wild, but I could also see it becoming a problem. How do these algorithms define ethics? Do we even know?

I get it. These are algorithms, not people. But as we talk to them more, and they talk back, the question of how they’re shaped matters. Anthropic releasing this under a Creative Commons license feels like an invitation for all of us to think harder about what we want from our AI tools.

Claude Is My (Current) AI of Choice

I’ve been experimenting with AI tools for a while now, and I’ve settled on Claude as my primary assistant. After spending time with ChatGPT, Gemini, and others, Claude just feels right to me.

What draws me to Claude is how the conversation flows. When I’m working through a problem or trying to get something done, the responses feel natural and aligned with what I’m actually trying to accomplish.

The company behind Claude, Anthropic, seems focused on building AI that handles the busy work so you can focus on the creative stuff. That’s exactly the use case I care about.

I’m not interested in AI that writes my blog posts or creates my presentations. I want AI that handles the tedious tasks that eat up my day. Anthropic gets that distinction, and it shows in how they’re building the product.

Claude Adds Web Search

Because I cover Artificial Intelligence so much in the MacSparky Labs, I currently have paid accounts for all three of the big services: Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude. Of the three, I’ve always had a soft spot for Claude. I like the way it thinks; its tone, reasoning, and writing style just seem to resonate with me.

That said, for a long time, Claude had a pretty significant Achilles heel: no web access. You’d ask it something timely or specific, and it would give you a polite shrug.

That changed last week when Anthropic added web search to Claude as a beta feature. I’ve had it turned on since the announcement, using Claude 3.7 Sonnet, and it’s made a significant difference.

Just yesterday, I was researching local contractors to help with some fire-hardening improvements on my home. I asked Claude to assist, and it actually delivered solid, relevant results from the web. This is the kind of query that would have stumped Claude a month ago.

The feature feels early — definitely “beta” — but it’s also entirely usable. It’s fast, the results are helpful, and most importantly, Claude now feels like it’s playing in the same league as its competitors when it comes to real-world usefulness.

One thing to note: web search isn’t turned on by default. You’ll need to dive into Claude’s settings to enable it. But if you’re a Claude user, it’s absolutely worth flipping that switch.

The Gen3 AI Revolution

I’ve been spending a lot of time with Claude 3.7 Sonnet lately, and I wanted to share some thoughts on the new “Gen3” AI models. Claude 3.7 was trained with a massive leap in computing power compared to its predecessors.

What’s Different About These New Models?

These new AI models aren’t just incrementally better; they represent a significant jump in capabilities.

There are two reasons for this:

  1. Training Scale: These models use 10x more computing power in training than GPT-4 did.
  2. Reasoning Capabilities: These models can spend more time “thinking” through complex problems, similar to giving a smart person extra time to solve a puzzle.

My Experience with Claude 3.7 Sonnet

I’ve been using Claude 3.7 regularly. Most folks use programming tests to baseline the AI models. I don’t. Instead, I’ve found it to be an exceptional thought partner. One of my favorite workflows is to give Claude something I’ve written and ask it to pose thoughtful questions about the content. Those questions often spark new ideas or help me identify gaps in my thinking.
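If you want to script that workflow rather than paste into the app, here is a minimal sketch using only Python’s standard library. The endpoint, header names, and payload shape follow Anthropic’s published Messages API; the prompt wording, function names, and the `claude-3-7-sonnet-latest` model alias are my own assumptions, so check them against the current API docs.

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"


def build_review_prompt(draft: str) -> str:
    """Wrap a piece of writing in a request for probing questions, not edits."""
    return (
        "Here is something I wrote. Don't rewrite it. Instead, ask me "
        "five thoughtful questions that expose gaps in my thinking.\n\n"
        f"---\n{draft}\n---"
    )


def build_request(draft: str, model: str = "claude-3-7-sonnet-latest") -> dict:
    """Assemble the JSON body for the Messages API."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": build_review_prompt(draft)}],
    }


def ask_claude(draft: str) -> str:
    """Send the draft to Claude and return its questions.

    Requires an ANTHROPIC_API_KEY environment variable and network access.
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(draft)).encode(),
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The response body contains a list of content blocks; the
        # first text block holds Claude's reply.
        return json.load(resp)["content"][0]["text"]
```

Calling `ask_claude("...your draft...")` with a valid API key set returns the list of questions, which you can then work through on your own.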

For those of you who work alone without colleagues to bounce ideas off of, these more capable AI models can provide surprisingly useful feedback. It’s like having a smart colleague who’s always available to help you think through problems. As AI becomes capable of higher-order thinking tasks, there is a lot of room for us to be creative in how we put them to work.

The Human in the Room

You still need to be the human in the room. As smart as these models are getting, you’re making a mistake if you believe they’re actually thinking. They remain tools — increasingly powerful tools — but tools nonetheless. Your judgment, creativity, and ethical sensibilities remain irreplaceable. The most powerful approach is using these AI partners to amplify your thinking, not replace it.

If you’re curious about these Gen3 models, my recommendation is simple: experiment. Ask Claude to help you brainstorm solutions to a problem you’re facing. Have it review something you’ve written and suggest improvements. Use it as a sounding board when you’re trying to think through a complex issue.

You might be surprised at how helpful these conversations can be, even if you’re not using the flashy coding capabilities that get most of the attention.

I’m cautiously optimistic about where this is heading. These tools are becoming genuine intellectual partners that can help us think better, create more, and solve harder problems. Used wisely, they have the potential to dramatically enhance what we can accomplish.