It’s undeniable that desk research has been a key value driver for many industries: media, investing, consulting, data, and more.
Performing it has been a rite of passage for aspiring capitalists. Many hours of my McKinsey tenure were spent doing everything from poring over reports to organizing calls in China to get local pricing of services.
And it’s certainly not “empty work”. Doing desk research effectively requires and develops many useful skills: prioritization, interpretation, problem framing, quantitative analysis, and more.
Someone who is world class at desk research can probably become a very effective decision maker.
Yet the outcome feels inevitable. My claim is that Deep Research and other chain-of-thought agents are not a source of leverage for desk researchers but a driver of full replacement.
AI research output will soon outstrip human research output, and then it will eradicate it entirely.
Here’s why.
DESK RESEARCH UNBUNDLES
Job titles say it all. From R&D teams to analysts, larger companies have already shown that desk research can be easily abstracted in the org chart.
A common coupling is to have the decision maker working jointly with the desk researcher (e.g., Product Manager, Product Analyst).
The Product Manager is supposed to use judgment, trust from the organization, understanding of the overall strategy, face-to-face touch points with customers, an understanding of how to operate the development team, etc. to guide desk research.
In isolation, the obvious counterargument makes sense: a human desk researcher will simply work alongside an agent.
The problem is latency.
A human desk researcher is not available at 2am or every 10 minutes, and certainly cannot produce an answer in 5 minutes. Agents can do all of that, and they will become the preferred interface for most tasks.
RESEARCH OUTPUTS ARE HIGHLY VERIFIABLE
Unambiguous problems tend to be the most tractable for AI. Coding has been a successful starting point for agents because, given a set of test cases, it’s possible to deterministically verify whether a vibe-coded program does what it needs to.
Research outputs are similar – while they may not be verifiable by a simple computer program, a combination of clearly referenced sources, arguments, and conclusions can be checked by other LLMs for logical soundness.
In short, research is slightly less deterministic than programming but not sufficiently so to inhibit a fast improvement loop.
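To make the verification idea concrete, here is a minimal sketch of the structural half of such a loop. Everything in it is hypothetical: the `Claim`/`ResearchOutput` shapes and the `verify` function are illustrative names, and a real pipeline would layer an LLM judge on top rather than stop at string checks.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Claim:
    text: str
    source_ids: List[str]  # ids of the sources this claim cites

@dataclass
class ResearchOutput:
    sources: Dict[str, str]  # source_id -> referenced excerpt
    claims: List[Claim]

def verify(output: ResearchOutput) -> List[str]:
    """Return a list of problems found; an empty list passes this cheap check.

    This only enforces the mechanical part of verifiability: every claim
    must cite at least one source, and every citation must resolve. A real
    verifier would add an LLM pass judging whether each cited excerpt
    actually supports its claim.
    """
    problems = []
    for claim in output.claims:
        if not claim.source_ids:
            problems.append(f"unsupported claim: {claim.text!r}")
        for sid in claim.source_ids:
            if sid not in output.sources:
                problems.append(f"dangling citation {sid!r} in {claim.text!r}")
    return problems
```

The mechanical check is fully deterministic; it is the LLM-judge layer on top that makes research only slightly less verifiable than a program run against a test suite.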
INFORMATION OUTPUT IS GROWING
I remember researching the insurance market to help a large global asset manager. We had to carefully evaluate which reports to buy and what we would get out of them; there was little else to go on.
While crypto is far less developed than trad-fi, there’s already a lot of information to parse.
Ranging from social media to onchain data, the technical ability to query and extract information has become so important that without it, producing nuanced insights is genuinely challenging.
Research agents can be programmed to do everything from writing Dune queries to reading a thousand tweets at once. There’s a threshold at which the ability to ingest more information outweighs the ability to synthesize it effectively, and I think we’re approaching it soon.
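One way an agent crosses that ingestion threshold is a map-reduce style pipeline: summarize fixed-size batches of raw documents, then summarize the summaries. A minimal sketch, where `summarize` is a stand-in for an LLM call rather than any specific API:

```python
from typing import Callable, List

def ingest_and_synthesize(
    docs: List[str],
    summarize: Callable[[List[str]], str],
    batch_size: int = 100,
) -> str:
    """Map-reduce sketch for reading far more than a human can.

    Each batch (e.g. 100 tweets) is condensed independently, then the
    per-batch summaries are condensed into one synthesis. `summarize`
    stands in for an LLM call; any function from a list of texts to one
    text works here.
    """
    batch_summaries = [
        summarize(docs[i:i + batch_size])
        for i in range(0, len(docs), batch_size)
    ]
    return summarize(batch_summaries)
```

The pattern is simple, but it is exactly the kind of mechanical fan-out where an agent's throughput advantage over a human researcher compounds.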
To be honest, I'm torn on whether this transition even matters in the long run, because the value of information for decision-making purposes is declining at the same time.
This effect has been obvious in politics and markets, as narratives come to dominate fundamentals and volatility makes fundamentals less reliable.
In practice, companies are adopting a bias toward action and telling their own story rather than relying on strategic correctness. In fact, a fascinating follow-up question is whether the end of research also marks the end of strategy.
I believe strategy will always exist (in any game), but the idea of a 5-10 year plan may go out the window; modern strategies need to be dynamic and centered on a compounding process rather than a single big resource-allocation decision.