One way AI models, as software, fall short of the human brain is that the brain's "wetware" comes with a biological advantage: plasticity.
AI models do not learn continuously. They have nothing resembling the mind's Hebbian capability, where "neurons that fire together, wire together."
We have a complicated biochemical machine built in that "fine-tunes" itself with every inference: information patterns are encoded and abstracted into efficient chunks automatically. LLMs, by contrast, start from scratch no matter how much time they spend on the same problem.
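The contrast can be sketched in code. Below is a toy Hebbian update rule, assuming nothing beyond the classic formulation: connection weights strengthen whenever two units are active together, on every single pass. An LLM's weights, frozen at inference time, never get such an update.

```python
import numpy as np

# Toy Hebbian rule: "neurons that fire together, wire together".
# The weight w[i, j] grows whenever units i and j are co-active --
# a deliberately simplified stand-in for continuous biological learning.
def hebbian_step(weights, activations, lr=0.1):
    # Outer product adds lr * a_i * a_j to each connection.
    return weights + lr * np.outer(activations, activations)

w = np.zeros((3, 3))
x = np.array([1.0, 1.0, 0.0])  # units 0 and 1 fire together; unit 2 is silent

for _ in range(5):  # every "inference" also updates the wiring
    w = hebbian_step(w, x)

print(w[0, 1])  # connection 0-1 has strengthened
print(w[0, 2])  # unit 2 never co-fired, so this stays at 0.0
```

Each pass both uses and rewires the network; there is no separate training phase, which is exactly the property the text argues LLMs lack.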
Agentic LLMs form contextual connections through the semantic similarity of wording. The human mind understands and forms connections through abstracted relations, effectively thinking at a higher level of abstraction.
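To make "semantic similarity of wording" concrete, here is a minimal sketch of the mechanism: items are compared by the cosine similarity of their vectors. The 3-dimensional vectors below are made up for illustration; real systems use learned embeddings with hundreds or thousands of dimensions.

```python
import numpy as np

# Cosine similarity: the standard measure of closeness between
# embedding vectors, ignoring their magnitudes.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings, invented purely to show the mechanism.
vectors = {
    "dog": np.array([0.9, 0.1, 0.0]),
    "puppy": np.array([0.8, 0.2, 0.1]),
    "carburetor": np.array([0.0, 0.1, 0.9]),
}

query = vectors["dog"]
ranked = sorted(vectors, key=lambda w: cosine(query, vectors[w]), reverse=True)
print(ranked)  # "puppy" ranks above "carburetor" for the query "dog"
```

The connection is made because the vectors happen to point in similar directions, not because the system holds an abstract relation like "a puppy is a young dog" -- which is the gap the paragraph above describes.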