Iterating a society of agents very quickly is one way in which AI seems to have fewer limitations than we do. I can't escape the feeling that we (including our brains) have evolved to be extremely efficient at what we do, while LLMs have only been developed to mimic what we do, without any emphasis on efficiency. Assuming we might be optimally efficient at something AI isn't, what would that be?
One thing humans are great at is being opinionated and holding contrarian ideas.
This is one area where I see LLMs failing even with some fairly deep prompting.
But that could simply be a context window limitation that will be overcome in time.