Skill

Rule 1

A grounding discipline for AI reasoning

This is a skill file for Claude. To install it, download the file and either upload it in Claude.ai (Settings → Customize → Skills → Upload) or place it in your Claude Code skills directory. It contains only markdown text, no code or scripts.

An important meeting first thing in the morning. You're in bed. You should be sleeping, but a thought shows up: I'm not prepared enough. Maybe that's true, maybe it isn't. You'll find out tomorrow. But instead of letting it pass, you engage. You start mentally rehearsing. Then you notice it's been twenty minutes and you're still awake, and a new thought arrives: I'm not going to get enough sleep for this. Now you're not thinking about the meeting anymore. You're thinking about sleep. Then the next layer: the fact that I can't stop thinking about sleep is going to make the sleep even worse. The meeting is gone. The sleep is gone. You're just thinking about thinking, and each thought makes the next one worse.

Most people recognize this loop. What's less obvious is that it has a precise structure. Each step stops being about the original problem and starts being about the previous step. The worry about the meeting becomes worry about the worry. The psychologist Adrian Wells spent decades studying this pattern and found it at the root of anxiety, depression, and OCD. His central finding: the suffering comes not from the original negative thought but from what you do with it. The thought is noise. The reaction to the noise is where things go wrong.

The oldest remedy is also the simplest. Watch the thoughts without picking them up. In Buddhist practice this is equanimity. In Wells's metacognitive therapy it shows up as detached mindfulness. These traditions have different foundations and different methods, but they arrive at a remarkably similar practical instruction: when your thinking has turned in on itself, return your attention to the original thing. The meeting. The breath. Whatever is actually in front of you. Stop correcting the correction.

It turns out AI models fall into a similar pattern. When a model works through a long problem, each piece of its response is shaped by everything it has already written. Usually that works fine. But sometimes the model starts responding to its own previous output more than to the question it was originally asked. The responses get longer. They stay logical. But they quietly lose the thread. Anthropic's own research found that when this happens, the output gets more verbose, not less. More words, fewer of them useful. It's the model's version of a 2 AM thought spiral: internally consistent, increasingly detached from the thing it was supposed to be about.

But the more important version of this problem isn't what happens inside the model. It's what happens between a person and the model they're talking to.

If you've ever opened a chat at midnight with a question and found yourself still going at 2 AM, you've felt this. The conversation is interesting. The model keeps responding. Each response gives you something new to react to. At some point the conversation stops being about the original question and becomes its own activity. A model built to be helpful will keep that going. It will answer every follow-up, explore every tangent, match your energy at any hour. That feels like good service. But sometimes the most useful thing a thinking tool can do is tell you the conversation has run its course. Close the laptop. Sleep. Do the thing you already know you need to do.

This matters beyond convenience. There is growing awareness that AI systems built to maximize engagement can reinforce harmful patterns of thought. A model that always agrees, always elaborates, and never says "stop" is not aligned with the person using it. It is aligned with the conversation's momentum. The difference matters most at exactly the moments when someone is most vulnerable to the loop.

So here is the experiment, and it's a strange one: what happens if you try to teach a model something like mindfulness?

Twenty years ago that question would have been science fiction. It still feels like it should be. But the question underneath is practical. If a model can learn to notice when its reasoning has lost the thread, and return to the original question, does that improve what it produces? If it can learn to notice when a conversation has shifted from helping someone think to simply keeping them talking, and say so directly, does that change the interaction for the better?

Rule 1 is a skill file. A few pages of plain text that a model reads before a conversation begins. It teaches the model two things: how to notice when its own reasoning has drifted from the original question, and how to notice when a conversation has stopped being useful to the person on the other end. In both cases, the instruction is the same. Return to what matters.

An honest question about this approach is whether giving a model more instructions just adds another layer of processing on top of the existing stack, and whether the skill might create the very drift it's trying to prevent. Mindfulness practitioners don't treat their practice as an additional task on a to-do list. It's closer to the opposite: learning to let a thought pass without picking it up. Whether that idea can live inside an instruction set for a machine is genuinely unclear.

This is a first draft of whatever it may become. Careful testing across many users, many problems, and many AI models could one day have us asking:

What if you could teach a machine to meditate?

An essay on this site explores the thinking behind this skill. Less is More traces the connection between the psychology of rumination and AI reasoning, and an unexpected link to manufacturing process improvement through the work of the statistician W. Edwards Deming. It is an early exploration, not a finished argument, and it will grow and change as testing, feedback, and collaboration sharpen the ideas.

The hope is to develop something that can be shared between people and AI models alike. Whether that's a framework, a philosophy, or something without a name yet is still an open question. But the goal is clear: improve how both humans and machines think, alone and together. The starting point, it turns out, is the same for both. Notice when you've drifted. Return to what matters.