Agents are Strategic Software
How AI agents introduce unprecedented adaptability.
Despite record-high valuations across the AI value chain (Nvidia was the first company to reach a $5T market cap), impact remains low. McKinsey reports that only 10% of organizations are scaling AI agents, and even among organizations that do see EBIT impact from AI, less than 5% of EBIT comes from it.
Part of the problem is that much of the excitement about adopting AI came from cutting costs:
High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
Headcount is the primary expense line that companies sought to trim, and it didn't work out. Failed call-center automation experiments and a proliferation of low-quality SEO content are among the examples.
Counter-intuitively, while innovation is more complex than cost cutting, it's easier for AI to help there, because people don't expect it to replace other people. Using AI as a thought partner, research agent, slide formatter, or coding assistant all helps drive innovation without fully replacing FTEs.
Nevertheless, far fewer organizations extract value from AI than valuations would suggest. The long-term trajectory may be clear, but everyone is a little confused about where, and how much, to adopt right now.
At Auditless, we disagree.
The best time to adopt AI is now.
We use AI extensively in our engineering projects (while maintaining the highest standards of human review). We don’t vibe code:
I don’t believe people who can’t code can maintain productivity while vibe coding an application that meets a growing set of user needs.
We use AI for research too, but probably not in the way you think. My favorite way to augment a research process is to create a highly detailed report outline and send it to Perplexity and Deep Research. From there, I'll go through all the sources and read them manually on an iPad. Sometimes I won't even read the final report; all I'm interested in is the cognitively guided Google Search.
Most importantly, we are building agents.
STRATEGIC SOFTWARE
Traditional software was purpose-built to fulfill a specific business function: CRM software shipped on a CD, database software, accounting, HR, and so on. While companies can expand into adjacencies over time through acquisitions or dedicated project development (e.g., SAP, Adobe), there is very little shared infrastructure between the different products.
Modern software is 10,000 small functions, each doing one simple thing, combined in just the right way to support very specific workflows that have been refined over years of feedback. Each workflow is expressed in everything from the user interface to the different options and configurations to the documentation.
If you create a function that exports a PDF, you can certainly reuse it across various apps, but it's not really groundbreaking. Whether through open-source libraries or vibe coding (or both), it's easy to reproduce utilities like that.
Now consider an agentic workflow. For example, you're building healthcare software and you have created a utility that crawls arXiv for research papers and extracts information from them. The resulting workflow has the best characteristics of both an SOP and software (a minimal code sketch follows the two properties below):
Multi-purpose. It can crawl research papers in other domains. It can extract information from PDFs that are not research papers. It can also be easily embedded in higher-level agents to run more complex workflows, such as formulating research hypotheses or writing news about medical research. One component like this can support a wide variety of product ideas, which makes fast pivoting based on user feedback incredibly easy.
Self-improving. The underlying models in any agentic workflow can be swapped out to reduce costs or improve performance. Many frameworks are now model-agnostic, and the better models get, the less sensitive they become to differences in prompting. Even if you believe in just using OpenAI, they are converging towards a single entry point (the model router) that picks the best model for the use case at hand.
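To make those two properties concrete, here is a minimal sketch in Python of what such a component might look like. The names (crawl_arxiv, extract_findings, the llm callable) are hypothetical illustrations, not a real implementation; the only external dependency assumed is arXiv's public Atom API.

```python
from dataclasses import dataclass
from typing import Callable
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"


@dataclass
class Paper:
    title: str
    summary: str


def crawl_arxiv(query: str, max_results: int = 5) -> list[Paper]:
    """Fetch paper titles and abstracts from arXiv's public Atom API."""
    url = (
        f"{ARXIV_API}?search_query=all:{urllib.parse.quote(query)}"
        f"&max_results={max_results}"
    )
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [
        Paper(
            title=entry.findtext(f"{ATOM}title", "").strip(),
            summary=entry.findtext(f"{ATOM}summary", "").strip(),
        )
        for entry in root.findall(f"{ATOM}entry")
    ]


def extract_findings(papers: list[Paper], llm: Callable[[str], str]) -> list[str]:
    """Ask whichever model is plugged in to pull the key finding out of each paper.

    Because `llm` is just a string-in, string-out callable, the underlying model
    can be swapped (cheaper, newer, local) without touching the workflow itself.
    """
    template = (
        "Summarize the single most important finding of this paper in one sentence.\n\n"
        "Title: {title}\n\nAbstract: {summary}"
    )
    return [llm(template.format(title=p.title, summary=p.summary)) for p in papers]
```

Passing the model in as a plain callable is the point: any provider's completion function, or a local model, can stand in for llm, so the workflow improves as models improve without being rewritten.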
These properties are why I call agents "strategic software". They don't just fulfill a business function; they position the business to make strategic pivots, and they are a real asset in ways that traditional software isn't (outside of the product context it's being sold in).
They also have a much lower activation cost to value: they can be served to users with almost no user interface, or easily integrated as a sub-agent, a tool, or even just a function in an existing application or no-code interface builder.
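As a rough illustration of that point, reusing the hypothetical component sketched above, the whole workflow can be exposed as a single function and described in the JSON-schema style that most function-calling APIs accept:

```python
# Reuses the hypothetical crawl_arxiv / extract_findings sketch above.

def research_digest(topic: str, llm) -> str:
    """One-call wrapper: crawl arXiv for a topic and return extracted findings."""
    papers = crawl_arxiv(topic, max_results=3)
    findings = extract_findings(papers, llm)
    return "\n".join(f"- {f}" for f in findings)

# A generic JSON-schema-style description, so a higher-level agent (or a no-code
# builder that supports tool calling) can invoke the whole workflow as one tool.
RESEARCH_DIGEST_TOOL = {
    "name": "research_digest",
    "description": "Crawl arXiv for a topic and summarize the key findings.",
    "parameters": {
        "type": "object",
        "properties": {"topic": {"type": "string"}},
        "required": ["topic"],
    },
}
```

Nothing about this requires a dedicated interface: the same wrapper works as a sub-agent, a callable tool, or a plain function inside an existing application.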
MISTAKES IN AI ADOPTION
If the prize is so high, what is stopping us? I see three failure modes that explain the low ROI we are seeing right now:
Selling AI tools. In a world where companies and customers don't know how to extract value from AI, selling AI tools is a way to compress your margins to zero. It's natural to want to build products, as that has been the best and most scalable way to build software businesses for several decades now. Unfortunately, there aren't many areas with high-margin AI products.
Giving up/"skill issue". Because people perceive LLMs as either good or bad at something, and because an end-to-end prototype is very easy to build, they give up too easily. While some problems can be handled simply by chunking workflows, others require dedicated iteration on prompting, data augmentation, custom workflows, feedback loops, careful model choices, and so on. It's annoying that what you end up with may be only 1,000 lines of code, so its performance is harder to assess.
Failure to improve. AI agent development requires very careful product management and prioritization skills, which are rare. User stories are not binary; they are metrics. Agents naturally have a larger surface area and need to be constantly assessed for performance bottlenecks and improved. Improvements are hard to measure. Evals sound great in theory, but plenty of successful products don't even use them. We invested some time in building evals for an agentic workflow, and it was much harder than anticipated.
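For what it's worth, even the floor for an eval is more machinery than it sounds: labeled cases, a scoring rule you trust, and a harness you re-run after every change. A hedged sketch of that floor, with hypothetical names and a crude keyword-match scorer standing in for a real grader:

```python
# A minimal eval harness sketch: labeled cases, a crude scorer, one aggregate metric.
# The keyword-overlap scorer is a deliberate stand-in; real evals usually need a
# task-specific grader, which is where the hidden effort goes.

CASES = [
    {"input": "protein folding", "must_mention": ["structure", "prediction"]},
    {"input": "cancer immunotherapy", "must_mention": ["immune", "tumor"]},
]

def score(output: str, must_mention: list[str]) -> float:
    """Fraction of required keywords present in the agent's output."""
    text = output.lower()
    return sum(kw in text for kw in must_mention) / len(must_mention)

def run_eval(agent, cases=CASES) -> float:
    """Run the agent over every case and average the scores."""
    scores = [score(agent(case["input"]), case["must_mention"]) for case in cases]
    return sum(scores) / len(scores)
```

The crude scorer is where the hidden effort lives; replacing it with a grader you actually trust is most of the work.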
If you're not yet evaluating AI agent development as part of your strategy, hopefully this gives you additional machinery to weigh the pros and cons of investing in the AI route.



