We Are All Reviewers Now
And we're not good at it.
As an Associate at McKinsey, I was always annoyed at how Partners reviewed presentations. They didn't touch PowerPoint. Instead, they'd have us print the whole presentation out after each iteration, then grab a red pen and mark up the pages by hand.
We burned through a lot of paper, printing deck after deck, just so a Partner didn't have to open a laptop.
Then they’d hand a stack of red-inked pages to the Engagement Manager, who would divide and conquer the fixes across the team. Each of us got a smaller stack to create slides from.
At the time, it seemed unnecessary, even frustrating, because deciphering the red ink often meant losing sleep.
Why wouldn't they edit the PowerPoint directly or just add callouts?
But they understood something I didn’t: review is a fundamentally different cognitive mode than editing, and the environment you review in determines the quality of what you catch. The point was to sit with the deliverable in its final form and see it exactly the way a client would.
Soon, I was doing the same thing even for my own rounds of edits.
I’ve been doing this ever since. And right now, the approach matters more than it ever has.
We may be in a PEAK review period
A disproportionate share of knowledge work has shifted to review.
Even if you were a manager before, you likely reviewed things on a slow cadence, maybe weekly, and you still had entire blocks of work you owned end to end. With AI, you’re potentially reviewing something every five minutes. If you’re coding, it's every minute or less.
Reviewing other people’s work has gotten heavier too. You’re now reviewing not just what your colleagues did but likely some of their AI outputs as well. And because everyone is “more productive,” you have even more review work piling up.
Unfortunately we are not yet at the point where we can define objectives and hand them to autonomous systems.
But we’re also not in the old world where a manager reviews a handful of deliverables per week and spends the rest of the time on their own work.
We’re in an awkward middle ground: too many things to review, with neither the tools nor systems to handle it well.
Reviews are problematic
Left unmanaged, this review volume creates problems of its own.
High-frequency AI use destroys flow. The constant prompt-and-review cycle may improve speed on any individual task, but flow is essential for quality and for the kind of deep thinking that catches subtle bugs or sees structural problems.
Less productive time. The more you review, the less time you have to do things. So you lean on AI more to fill the gap, which generates more output to review, which eats more of your day. It’s a slippery slope, and we’re left with fewer meaningful blocks of time to think.
The tools haven’t caught up. GitHub is designed for human-to-human code review, not for interacting with a coding agent. Most editing workflows assume a human author on the other side. When you have multiple humans and multiple agents working on the same thing, ownership gets murky. Who runs the final review? Who’s accountable?
A few things are working for me
I've been reviewing deliverables since 2014 and have been able to adapt some techniques to this context.
1/ Human review means human review
The most effective pattern for dealing with reviews goes back to the McKinsey story: deliberate disconnection. When I’m reviewing my research paper, I send the PDF to my iPad, open it, and edit with the Pencil (in red). When I go back to my desk, I look at the edits and decide which ones I’ll fix myself and which ones I’ll get AI to help with. But the review itself is entirely unplugged.
The same is true for our recent code audit. I spent significant time reading code line by line on the iPad with the GitHub app. This is still one of the most powerful review methods available, and I suspect it always will be.
To get the most out of your reviews, get as far away from the toolchain as practical.
2/ Use AI for context loading
Disconnecting for review doesn’t mean disconnecting from AI entirely. The trick is sequencing. Before I sit down for deep review, I use AI to prepare.
When it comes to doing work, AI requires context from us. But when it comes to human review, AI can actually provide us with more context.
For the code audit, I had AI generate a comprehensive threat model and alternative views of the codebase, diagrams and trust boundaries. I read through all of that first. Even when I didn’t find issues in the threat model itself, it prepared my brain. It loaded context that made the subsequent manual review sharper. Similarly, for the research paper, I used AI to surface relevant prior work that I could study.
Use models to give you more things to read and think about and alternate them with deep review periods where you step away from your primary device.
3/ Think about review cadence
One of the biggest time sinks is reviewing the same type of issue over and over across iterations. The fix is to figure out which things compound and which things don’t, then review accordingly.
If your AI agent starts writing code in a pattern you dislike, and you don’t catch it early, that pattern becomes context for every future version and causes drift. You’ll want to review architectural decisions and style frequently so the AI’s context stays clean.
Remember: the code is the context. Agents don’t have your context; they see the code as it is, not as you remember it.
In a research paper, formatting inconsistencies across sections are annoying but cheap to fix in a single pass at the end.
If frequent review of something significantly improves AI performance on subsequent tasks, review it early and often. If it’s cheap to fix at the end and doesn’t affect future output, save it for a single pass.
4/ Automate what you don't have to review
Developers already use continuous integration, automated tests, and linters to automate parts of the review process itself. All of us should do that now.
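The same idea carries over to prose deliverables. As a minimal sketch (the checks and the `machine_checks` helper here are illustrative, not a real tool), a few lines of Python can catch the mechanical slips that would otherwise eat human review attention:

```python
import re

def machine_checks(text: str) -> list[str]:
    """Flag draft problems a human reviewer should never have to catch by hand."""
    problems = []
    # Doubled words ("the the") are a classic copy-edit slip a regex catches reliably.
    for m in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE):
        problems.append(f"doubled word: {m.group(0)!r}")
    # Any section the text points at ('see "..."') must exist as a heading.
    headings = {h.strip().lower() for h in re.findall(r"^#+\s+(.*)$", text, re.MULTILINE)}
    for ref in re.findall(r'see "([^"]+)"', text, re.IGNORECASE):
        if ref.strip().lower() not in headings:
            problems.append(f"dangling reference: {ref!r}")
    return problems

draft = '# Review Cadence\nSee "Automation" for details. This is is a draft.\n'
for problem in machine_checks(draft):
    print(problem)
```

Run on every iteration (in a pre-commit hook or CI job), checks like these mean your unplugged review time goes to ideas and structure instead of typo hunting.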
For my research paper, the document is actually a literate Haskell program. The paper itself is executable. I have a separate test file that validates the paper’s formal claims, which means I can lean on automated checks for syntactic correctness and basic logical consistency, and focus my manual review on the writing, ideas, and quality.
Not everything can be made executable, obviously, but be on the lookout for review steps that AI or software tools can perform well.
5/ Review the prompt, not the output
This has been one of the biggest shifts in how we work. We’ve started treating design documents as prompts. The closer a design doc is to the actual prompt that will drive AI work, the more valuable it is to review.
I would much rather review a well-structured prompt than review the sprawling code it generates. The prompt is the most succinct representation of intent. The output may be ten or fifty times larger and much harder to audit comprehensively. Of course you still review the output. But reviewing the prompt will compound feedback early.
6/ Give the right feedback
When you’re reviewing AI-assisted work from a colleague, your feedback could be aimed at the human, at the AI, or vaguely at both. These are different feedback points with different leverage. Sometimes the most important feedback is the simplest: “I don’t think you wrote this.” That may not always be true, but it’s always the right feedback.
As models improve, more work will shift to genuine autonomy with less frequent human checkpoints. Until then, we’re stuck in the awkward middle ground, drowning in review loops.
Either way, the quality of everything you produce now depends on your ability to review things, so get good.


