Every AI Edit Now Gets Reviewed Automatically
A simple hook that forces AI to check its own work before stopping.
Every time AI finishes a task, it says "done" and moves on. But a lot of the time, the code it just wrote has leftover dead code or unnecessary complexity, or there was a simpler approach it didn't consider.
The thing is, if you ask AI to review what it just wrote, it usually finds things to improve. It just doesn't do it unprompted.
I kept manually typing stuff like "review your implementation" after every change, so I automated it.
How It Works
Cursor has a feature called hooks that lets you run scripts on specific events, like when the AI edits a file or finishes a response.
I built two hooks that work together:
Hook 1: Track edits. Every time Cursor's AI edits a file, this hook drops a marker file, a breadcrumb saying "edits were made this turn". It also checks that the edit is real (not empty) and inside the workspace, so it doesn't fire on irrelevant events.
Hook 2: Intercept the stop. When the AI finishes and is about to stop, this hook checks: did the AI make any edits this turn? If yes, it consumes the marker and injects a follow-up message telling the AI to review its implementation before stopping.
If the AI didn't edit anything, like if it just answered a question, nothing happens. No unnecessary review loops.

What It Actually Catches
It doesn't catch something every time, but when it does it's worth it. In my experience it finds something to improve more than half the time. Here are some real examples:
Finds dead code it left behind after a refactor
Realises there's an easier implementation and swaps to it
Spots a quick refactor to make the code cleaner
Notices a bug it introduced
All for zero effort after the initial setup.
Try It Yourself (Cursor)
The hooks are open source. Grab them here: github.com/anthnykr/agent-setups/tree/main/cursor/hooks
Drop the files into ~/.cursor/hooks/ and you're done. Every AI edit now gets a self-review automatically.
Try It Yourself (Claude Code)
A lot of you use Claude Code, which also has hooks. The concept is the same: run a script when the AI edits a file, track it, and force a review when it tries to stop.
Claude Code hooks are configured in ~/.claude/settings.json and use events like PostToolUse (runs after the AI uses a tool) and Stop (runs when the AI finishes responding). On the Stop event, returning a block decision prevents Claude from stopping and feeds your message back into the conversation.
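For reference, the hook wiring in ~/.claude/settings.json looks roughly like this (a sketch — the script names are placeholders, and you should check the hooks docs for the exact schema):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/track-edit.sh" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/review-gate.sh" }
        ]
      }
    ]
  }
}
```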
Here's a prompt you can paste directly into Claude Code to have it set up the hooks for you:
Set up Claude Code hooks that force you to review your implementation before stopping. Here's the behavior I want:
A PostToolUse hook with matcher "Edit|Write" that creates a marker file in ~/.claude/hooks/state/ when you make an edit. Use a SHA256 hash of the session_id as the filename so parallel sessions don't conflict.
A Stop hook that checks if a marker file exists for the current session. If it does, delete the marker and return a block decision with this reason: "Review your implementation before stopping. Check whether there is a better or simpler approach, whether any redundant code remains, whether duplicate logic was introduced, and whether any dead or unused code was left behind. If you find issues, fix them now; if not, briefly confirm the implementation is clean."
If no marker exists (meaning no edits were made), just exit 0 so I can stop normally.
Put the hook scripts in ~/.claude/hooks/ and update ~/.claude/settings.json with the hook configuration. Reference the Claude Code hooks docs at https://code.claude.com/docs/en/hooks for the correct JSON schema and event format.
Claude Code will set up its own review hooks. You can tweak the review prompt to whatever you want.
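If you want to see the shape of the Stop hook before running the prompt, here's a minimal sketch in Python. It illustrates the stdin/stdout protocol — hook input arrives as JSON on stdin, and a "block" decision on stdout stops Claude from finishing — but it's not the script Claude will actually generate.

```python
import hashlib
import json
from pathlib import Path

STATE_DIR = Path.home() / ".claude" / "hooks" / "state"

def run(stdin_text: str) -> str:
    # Claude Code passes hook input as JSON on stdin, including session_id.
    payload = json.loads(stdin_text)
    digest = hashlib.sha256(payload["session_id"].encode()).hexdigest()
    marker = STATE_DIR / digest
    if not marker.exists():
        return ""  # no edits this turn: print nothing, exit 0, Claude stops
    marker.unlink()  # consume the marker so the review doesn't loop
    # A "block" decision prevents stopping; "reason" is fed back to Claude.
    return json.dumps({
        "decision": "block",
        "reason": "Review your implementation before stopping. "
                  "Check for a simpler approach, redundant or duplicate "
                  "logic, and dead code. Fix what you find, or briefly "
                  "confirm the implementation is clean.",
    })

# The real hook script would end with:
#   print(run(sys.stdin.read()))
```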
The Takeaway
AI tools are getting better at writing code, but they still rush to "done." One of the best things you can do is build feedback loops that force AI to check its own work.
A few scripts and suddenly every AI edit gets reviewed automatically. No extra effort, and the times it catches something make it more than worth it.
Let's build.
Anthony
Enjoyed this post?
Subscribe to get new posts delivered to your inbox.