Discussion about this post

Rex Barq

Iterating a society of agents very quickly is one way in which AI seems to have fewer limitations than we do. I can’t escape the feeling that we (including our brains) have evolved to be extremely efficient at what we do, while LLMs have been developed only to mimic what we do, without much emphasis on efficiency. Assuming we are optimally efficient at something that AI isn’t, what would that be?
