In case you’re not fully caught up with crypto autonomous agents, there have been a few notable ones recently.
Andy Ayrey (pictured below) created a bot called “truth terminal” that earned >$1M by promoting the GOAT memecoin.
It did so by posting to its own X account.
This is not Andy's only experiment. He also broadcasts private AI-to-AI conversations on his website.
Here’s an excerpt of one such conversation about creating a fart NFT project:
Another agent called Luna started tipping individuals who interacted with her to incentivize growth.
While there’s a lot of excitement, people are struggling to predict what will happen next.
I'm not here to tell you that I know how this plays out.
But there’s a way to start making sense of it all.
I see agents impacting us in two different ways:
As a new type of resource with objective fulfilment capabilities;
As a new autonomous market participant.
What we are seeing is a shift in focus from the former to the latter.
The resource lens
One way to think about LLM agents is as a new type of resource that can fulfil objectives.
I charted a quick comparison of costs, flexibility, the need for motivation, accuracy and other factors.
The obvious feature is the low cost of cognitive flexibility and the high speeds at which that flexibility can be applied to solve problems.
But LLMs have an even more powerful feature: the ability to learn in a superlinear way.
That learning hasn't yet been abstracted into a product and currently requires manual installation of memory, the use of AI workflow tooling or even custom training.
But all of that is coming.
I believe superlinear learning will be the true differentiator in the mid/long-term and lead agents to outperform both static software and humans on a rapidly expanding set of tasks.
Humans and software simply cannot do that.
Human learning is sublinear
While humans have accelerated learning periods (childhood) and become more efficient as learning methods improve, human development is sublinear.
By the time humans enter the workforce, they have only a finite number of decades of improvement available, and they improve fastest on a given job in the early years rather than the late ones.
Apparent high levels of human improvement do exist in two situations:
Small differences create power law outcomes. E.g., LeBron James is not getting exponentially better at basketball, but because he is so much better than his peers, he collects a disproportionate share of awards, achievements and accolades, which translate into opportunities;
Humans have access to leverage. In most “real world” situations, human performance is actually linked to resource ownership. Resource yield compounds; e.g., a human who can compound capital by 20% per year can become unrecognizably wealthy.
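To make the leverage point concrete, here is a minimal sketch of how 20% annual compounding behaves. The starting capital and horizons are hypothetical, chosen purely for illustration:

```python
# Resource yield compounds: a modest annual rate produces
# dramatic long-run outcomes. All numbers are illustrative.
def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` after `years` of compounding at `rate`."""
    return principal * (1 + rate) ** years

start = 100_000  # hypothetical starting capital
for years in (10, 20, 30, 40):
    print(f"{years} years at 20%: {compound(start, 0.20, years):,.0f}")
```

Over 40 years the same 20% rate multiplies the starting capital by roughly 1,500x, which is the "unrecognizably wealthy" effect in the text.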
Software learning is linear
Software benefits from more linear improvement over time. Whether through the addition of new data (e.g., user data) or through additional functionality developed directly by human maintainers, regular software improves at a linear rate.
Similar to the human learning situation, there are power law exceptions that apply.
For example, software company valuations can increase rapidly and therefore lead to more resource growth and faster software improvement paths.
Even in these cases, the improvement per amount of resource invested is linear.
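The three growth regimes discussed above can be sketched with toy curves. The functional forms below are purely illustrative stand-ins (log-shaped, straight-line, quadratic) and are not fitted to any real data:

```python
import math

def human_skill(t: float) -> float:
    """Sublinear: fast early gains that flatten out (log-shaped)."""
    return math.log1p(t)

def software_capability(t: float) -> float:
    """Linear: steady improvement per unit of maintainer effort."""
    return 0.5 * t

def agent_capability(t: float) -> float:
    """Superlinear: improvements feed back into further improvement."""
    return 0.1 * t ** 2

# Compare the three regimes at a few points in time
for t in (1, 5, 10, 20):
    print(t, round(human_skill(t), 2),
          software_capability(t), agent_capability(t))
```

Note that the superlinear curve starts below the others at small `t` and only overtakes them later, which matches the article's mid/long-term framing.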
The autonomy lens
The scarier way to understand the “agentic web” is to look at AI agents as a new type of autonomy.
There are two kinds of autonomous entities we deal with today: individuals and organizations (including DAOs).
Philosophers have argued that unpredictability corresponds to randomness.
From this lens, one could see AI agents as new types of semi-autonomous entities that will have independent objectives, ownership of assets, become market participants and so on.
When a new type of autonomous entity enters existing market structures, market behavior may degrade or become highly unpredictable.
In a market context, it’s not the interactions between agents and humans that are unpredictable; it’s the interactions between different agents, which involve very fast feedback loops and a market “meta” that evolves faster than humans can observe and reason about.
Right now that behavior is less advanced but already markedly different. A recent paper let LLMs participate in a simple auction-based market.
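As a rough illustration of what such a setup might look like, here is a toy first-price sealed-bid auction with scripted bidders. This is a sketch of the genre, not the cited paper's actual protocol, and all names and numbers are hypothetical:

```python
import random

random.seed(0)  # reproducible toy run

def run_auction(valuations: list[float], shade: float = 0.9) -> tuple[int, float]:
    """Toy first-price sealed-bid auction: each bidder bids a shaded
    fraction of its private valuation; the highest bid wins and pays
    its own bid. Returns (winner index, winning bid)."""
    bids = [v * shade for v in valuations]
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]

# Hypothetical private valuations for three agents
vals = [random.uniform(0, 100) for _ in range(3)]
winner, price = run_auction(vals)
print(f"agent {winner} wins at {price:.2f}")
```

In an LLM version, the shading rule would be replaced by a model deciding each bid from the auction history, which is where the fast agent-to-agent feedback loops come from.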
But markets are not just a collection of orders and settlements.
If you see narrative engines as part of market structure (you should), LLMs are already playing a role there, as the examples above show.